--- /dev/null
+haproxy (1.4.23-1) unstable; urgency=low
+
+ As of 1.4.23-1, the Debian package ships an rsyslog snippet to allow logging
+ via /dev/log from chrooted HAProxy processes. If you are using rsyslog, you
+ should restart rsyslog after installing this package to enable HAProxy to log
+ via rsyslog. See /usr/share/doc/haproxy/README.Debian for more details.
+
+ Also note that as of 1.4.23-1, chrooting the HAProxy process is enabled in the
+ default Debian configuration.
+
+ -- Apollon Oikonomopoulos <apoikos@gmail.com> Thu, 25 Apr 2013 23:26:35 +0300
+
+haproxy (1.4.13-1) unstable; urgency=low
+
+ The maintainer of this package has changed.
+
+ -- Christo Buschek <crito@30loops.net> Mon, 10 Mar 2011 22:07:10 +0100
+
+haproxy (1.3.14.2-1) unstable; urgency=low
+
+ Configuration has moved to /etc/haproxy/haproxy.cfg. This allows adding the
+ configurable /etc/haproxy/errors directory.
+ The haproxy binary was also moved to /usr/sbin rather than /usr/bin; update
+ your init script or reinstall the one provided with the package.
+
+ -- Arnaud Cornet <acornet@debian.org> Mon, 21 Jan 2008 23:38:15 +0100
--- /dev/null
+Binding non-local IPv6 addresses
+================================
+
+There are cases where HAProxy needs to bind() a non-existing address, like
+for example in high-availability setups with floating IP addresses (e.g. using
+keepalived or ucarp). For IPv4, the net.ipv4.ip_nonlocal_bind sysctl can be
+used to permit binding non-existing addresses; no such control exists for
+IPv6, however.
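+
+For the IPv4 case, the sysctl can for example be enabled persistently via a
+sysctl.d snippet (the file name below is only an example):
+
+  # /etc/sysctl.d/99-haproxy-nonlocal.conf
+  net.ipv4.ip_nonlocal_bind = 1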
+
+The solution is to add the "transparent" parameter to the frontend's bind
+statement, for example:
+
+frontend fe1
+ bind 2001:db8:abcd:f00::1:8080 transparent
+
+This will require a recent Linux kernel (>= 2.6.28) with TPROXY support (Debian
+kernels will work correctly with this option).
+
+See /usr/share/doc/haproxy/configuration.txt.gz for more information on the
+"transparent" bind parameter.
+
+ -- Apollon Oikonomopoulos <apoikos@gmail.com> Wed, 16 Oct 2013 21:18:58 +0300
--- /dev/null
+haproxy (1.6.3-1~u16.04+mos1) mos10.0; urgency=medium
+
+ * Add MIRA0001-Adding-include-configuration-statement-to-haproxy.patch
+
+ -- Dmitry Teselkin <mos-linux@mirantis.com> Fri, 17 Jun 2016 15:28:32 +0000
+
+haproxy (1.6.3-1) unstable; urgency=medium
+
+ [ Apollon Oikonomopoulos ]
+ * haproxy.init: use s-s-d's --pidfile option.
+ Thanks to Louis Bouchard (Closes: #804530)
+
+ [ Vincent Bernat ]
+ * watch: fix d/watch to look for 1.6 version
+ * Imported Upstream version 1.6.3
+
+ -- Vincent Bernat <bernat@debian.org> Thu, 31 Dec 2015 08:10:10 +0100
+
+haproxy (1.6.2-2) unstable; urgency=medium
+
+ * Enable USE_REGPARM on amd64 as well.
+
+ -- Vincent Bernat <bernat@debian.org> Tue, 03 Nov 2015 21:21:30 +0100
+
+haproxy (1.6.2-1) unstable; urgency=medium
+
+ * New upstream release.
+ - BUG/MAJOR: dns: first DNS response packet not matching queried
+ hostname may lead to a loop
+ - BUG/MAJOR: http: don't requeue an idle connection that is already
+ queued
+ * Upload to unstable.
+
+ -- Vincent Bernat <bernat@debian.org> Tue, 03 Nov 2015 13:36:22 +0100
+
+haproxy (1.6.1-2) experimental; urgency=medium
+
+ * Build the Lua manpage in -arch, fixes FTBFS in binary-only builds.
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Thu, 22 Oct 2015 12:19:41 +0300
+
+haproxy (1.6.1-1) experimental; urgency=medium
+
+ [ Vincent Bernat ]
+ * New upstream release.
+ - BUG/MAJOR: ssl: free the generated SSL_CTX if the LRU cache is
+ disabled
+ * Drop 0001-BUILD-install-only-relevant-and-existing-documentati.patch.
+
+ [ Apollon Oikonomopoulos ]
+ * Ship and generate Lua API documentation.
+
+ -- Vincent Bernat <bernat@debian.org> Thu, 22 Oct 2015 10:45:55 +0200
+
+haproxy (1.6.0+ds1-1) experimental; urgency=medium
+
+ * New upstream release!
+ * Add a patch to fix documentation installation:
+ + 0001-BUILD-install-only-relevant-and-existing-documentati.patch
+ * Update HAProxy documentation converter to a more recent version.
+
+ -- Vincent Bernat <bernat@debian.org> Wed, 14 Oct 2015 17:29:19 +0200
+
+haproxy (1.6~dev7-1) experimental; urgency=medium
+
+ * New upstream release.
+
+ -- Vincent Bernat <bernat@debian.org> Tue, 06 Oct 2015 16:01:26 +0200
+
+haproxy (1.6~dev5-1) experimental; urgency=medium
+
+ * New upstream release.
+
+ -- Vincent Bernat <bernat@debian.org> Mon, 14 Sep 2015 15:50:28 +0200
+
+haproxy (1.6~dev4-1) experimental; urgency=medium
+
+ * New upstream release.
+ * Refresh debian/copyright.
+
+ -- Vincent Bernat <bernat@debian.org> Sun, 30 Aug 2015 23:54:10 +0200
+
+haproxy (1.6~dev3-1) experimental; urgency=medium
+
+ * New upstream release.
+ * Enable Lua support.
+
+ -- Vincent Bernat <bernat@debian.org> Sat, 15 Aug 2015 17:51:29 +0200
+
+haproxy (1.5.15-1) unstable; urgency=medium
+
+ * New upstream stable release including the following fix:
+ - BUG/MAJOR: http: don't call http_send_name_header() after an error
+
+ -- Vincent Bernat <bernat@debian.org> Mon, 02 Nov 2015 07:34:19 +0100
+
+haproxy (1.5.14-1) unstable; urgency=high
+
+ * New upstream version. Fix an information leak (CVE-2015-3281):
+ - BUG/MAJOR: buffers: make the buffer_slow_realign() function
+ respect output data.
+ * Add $named as a dependency for init script. Closes: #790638.
+
+ -- Vincent Bernat <bernat@debian.org> Fri, 03 Jul 2015 19:49:02 +0200
+
+haproxy (1.5.13-1) unstable; urgency=medium
+
+ * New upstream stable release including the following fixes:
+ - MAJOR: peers: allow peers section to be used with nbproc > 1
+ - BUG/MAJOR: checks: always check for end of list before proceeding
+ - MEDIUM: ssl: replace standards DH groups with custom ones
+ - BUG/MEDIUM: ssl: fix tune.ssl.default-dh-param value being overwritten
+ - BUG/MEDIUM: cfgparse: segfault when userlist is misused
+ - BUG/MEDIUM: stats: properly initialize the scope before dumping stats
+ - BUG/MEDIUM: http: don't forward client shutdown without NOLINGER
+ except for tunnels
+ - BUG/MEDIUM: checks: do not dereference head of a tcp-check at the end
+ - BUG/MEDIUM: checks: do not dereference a list as a tcpcheck struct
+ - BUG/MEDIUM: peers: apply a random reconnection timeout
+ - BUG/MEDIUM: config: properly compute the default number of processes
+ for a proxy
+
+ -- Vincent Bernat <bernat@debian.org> Sat, 27 Jun 2015 20:52:07 +0200
+
+haproxy (1.5.12-1) unstable; urgency=medium
+
+ * New upstream stable release including the following fixes:
+ - BUG/MAJOR: http: don't read past buffer's end in http_replace_value
+ - BUG/MAJOR: http: prevent risk of reading past end with balance
+ url_param
+ - BUG/MEDIUM: Do not consider an agent check as failed on L7 error
+ - BUG/MEDIUM: patern: some entries are not deleted with case
+ insensitive match
+ - BUG/MEDIUM: buffer: one byte miss in buffer free space check
+ - BUG/MEDIUM: http: the function "(req|res)-replace-value" doesn't
+ respect the HTTP syntax
+ - BUG/MEDIUM: peers: correctly configure the client timeout
+ - BUG/MEDIUM: http: hdr_cnt would not count any header when called
+ without name
+ - BUG/MEDIUM: listener: don't report an error when resuming unbound
+ listeners
+ - BUG/MEDIUM: init: don't limit cpu-map to the first 32 processes only
+ - BUG/MEDIUM: stream-int: always reset si->ops when si->end is
+ nullified
+ - BUG/MEDIUM: http: remove content-length from chunked messages
+ - BUG/MEDIUM: http: do not restrict parsing of transfer-encoding to
+ HTTP/1.1
+ - BUG/MEDIUM: http: incorrect transfer-coding in the request is a bad
+ request
+ - BUG/MEDIUM: http: remove content-length form responses with bad
+ transfer-encoding
+ - BUG/MEDIUM: http: wait for the exact amount of body bytes in
+ wait_for_request_body
+
+ -- Vincent Bernat <bernat@debian.org> Sat, 02 May 2015 16:38:28 +0200
+
+haproxy (1.5.11-2) unstable; urgency=medium
+
+ * Upload to unstable.
+
+ -- Vincent Bernat <bernat@debian.org> Sun, 26 Apr 2015 17:46:58 +0200
+
+haproxy (1.5.11-1) experimental; urgency=medium
+
+ * New upstream stable release including the following fixes:
+ - BUG/MAJOR: log: don't try to emit a log if no logger is set
+ - BUG/MEDIUM: backend: correctly detect the domain when
+ use_domain_only is used
+ - BUG/MEDIUM: Do not set agent health to zero if server is disabled
+ in config
+ - BUG/MEDIUM: Only explicitly report "DOWN (agent)" if the agent health
+ is zero
+ - BUG/MEDIUM: http: fix header removal when previous header ends with
+ pure LF
+ - BUG/MEDIUM: channel: fix possible integer overflow on reserved size
+ computation
+ - BUG/MEDIUM: channel: don't schedule data in transit for leaving until
+ connected
+ - BUG/MEDIUM: http: make http-request set-header compute the string
+ before removal
+ * Upload to experimental.
+
+ -- Vincent Bernat <bernat@debian.org> Sun, 01 Feb 2015 09:22:27 +0100
+
+haproxy (1.5.10-1) experimental; urgency=medium
+
+ * New upstream stable release including the following fixes:
+ - BUG/MAJOR: stream-int: properly check the memory allocation return
+ - BUG/MEDIUM: sample: fix random number upper-bound
+ - BUG/MEDIUM: patterns: previous fix was incomplete
+ - BUG/MEDIUM: payload: ensure that a request channel is available
+ - BUG/MEDIUM: tcp-check: don't rely on random memory contents
+ - BUG/MEDIUM: tcp-checks: disable quick-ack unless next rule is an expect
+ - BUG/MEDIUM: config: do not propagate processes between stopped
+ processes
+ - BUG/MEDIUM: memory: fix freeing logic in pool_gc2()
+ - BUG/MEDIUM: compression: correctly report zlib_mem
+ * Upload to experimental.
+
+ -- Vincent Bernat <bernat@debian.org> Sun, 04 Jan 2015 13:17:56 +0100
+
+haproxy (1.5.9-1) experimental; urgency=medium
+
+ * New upstream stable release including the following fixes:
+ - BUG/MAJOR: sessions: unlink session from list on out
+ of memory
+ - BUG/MEDIUM: pattern: don't load more than once a pattern
+ list.
+ - BUG/MEDIUM: connection: sanitize PPv2 header length before
+ parsing address information
+ - BUG/MAJOR: frontend: initialize capture pointers earlier
+ - BUG/MEDIUM: checks: fix conflicts between agent checks and
+ ssl healthchecks
+ - BUG/MEDIUM: ssl: force a full GC in case of memory shortage
+ - BUG/MEDIUM: ssl: fix bad ssl context init can cause
+ segfault in case of OOM.
+ * Upload to experimental.
+
+ -- Vincent Bernat <bernat@debian.org> Sun, 07 Dec 2014 16:37:36 +0100
+
+haproxy (1.5.8-3) unstable; urgency=medium
+
+ * Remove RC4 from the default cipher string shipped in configuration.
+
+ -- Vincent Bernat <bernat@debian.org> Fri, 27 Feb 2015 11:29:23 +0100
+
+haproxy (1.5.8-2) unstable; urgency=medium
+
+ * Cherry-pick the following patches from 1.5.9 release:
+ - 8a0b93bde77e BUG/MAJOR: sessions: unlink session from list on out
+ of memory
+ - bae03eaad40a BUG/MEDIUM: pattern: don't load more than once a pattern
+ list.
+ - 93637b6e8503 BUG/MEDIUM: connection: sanitize PPv2 header length before
+ parsing address information
+ - 8ba50128832b BUG/MAJOR: frontend: initialize capture pointers earlier
+ - 1f96a87c4e14 BUG/MEDIUM: checks: fix conflicts between agent checks and
+ ssl healthchecks
+ - 9bcc01ae2598 BUG/MEDIUM: ssl: force a full GC in case of memory shortage
+ - 909514970089 BUG/MEDIUM: ssl: fix bad ssl context init can cause
+ segfault in case of OOM.
+ * Cherry-pick the following patches from future 1.5.10 release:
+ - 1e89acb6be9b BUG/MEDIUM: payload: ensure that a request channel is
+ available
+ - bad3c6f1b6d7 BUG/MEDIUM: patterns: previous fix was incomplete
+
+ -- Vincent Bernat <bernat@debian.org> Sun, 07 Dec 2014 11:11:21 +0100
+
+haproxy (1.5.8-1) unstable; urgency=medium
+
+ * New upstream stable release including the following fixes:
+
+ + BUG/MAJOR: buffer: check the space left is enough or not when input
+ data in a buffer is wrapped
+ + BUG/MINOR: ssl: correctly initialize ssl ctx for invalid certificates
+ + BUG/MEDIUM: tcp: don't use SO_ORIGINAL_DST on non-AF_INET sockets
+ + BUG/MEDIUM: regex: fix pcre_study error handling
+ + BUG/MEDIUM: tcp: fix outgoing polling based on proxy protocol
+ + BUG/MINOR: log: fix request flags when keep-alive is enabled
+ + BUG/MAJOR: cli: explicitly call cli_release_handler() upon error
+ + BUG/MEDIUM: http: don't dump debug headers on MSG_ERROR
+ * Also includes the following new features:
+ + MINOR: ssl: add statement to force some ssl options in global.
+ + MINOR: ssl: add fetchs 'ssl_c_der' and 'ssl_f_der' to return DER
+ formatted certs
+ * Disable SSLv3 in the default configuration file.
+
+ -- Vincent Bernat <bernat@debian.org> Fri, 31 Oct 2014 13:48:19 +0100
+
+haproxy (1.5.6-1) unstable; urgency=medium
+
+ * New upstream stable release including the following fixes:
+ + BUG/MEDIUM: systemd: set KillMode to 'mixed'
+ + MINOR: systemd: Check configuration before start
+ + BUG/MEDIUM: config: avoid skipping disabled proxies
+ + BUG/MINOR: config: do not accept more track-sc than configured
+ + BUG/MEDIUM: backend: fix URI hash when a query string is present
+ * Drop systemd patches:
+ + haproxy.service-also-check-on-start.patch
+ + haproxy.service-set-killmode-to-mixed.patch
+ * Refresh other patches.
+
+ -- Vincent Bernat <bernat@debian.org> Mon, 20 Oct 2014 18:10:21 +0200
+
+haproxy (1.5.5-1) unstable; urgency=medium
+
+ [ Vincent Bernat ]
+ * initscript: use start-stop-daemon to reliably terminate all haproxy
+ processes. Also treat stopping a non-running haproxy as success.
+ (Closes: #762608, LP: #1038139)
+
+ [ Apollon Oikonomopoulos ]
+ * New upstream stable release including the following fixes:
+ + DOC: Address issue where documentation is excluded due to a gitignore
+ rule.
+ + MEDIUM: Improve signal handling in systemd wrapper.
+ + BUG/MINOR: config: don't propagate process binding for dynamic
+ use_backend
+ + MINOR: Also accept SIGHUP/SIGTERM in systemd-wrapper
+ + DOC: clearly state that the "show sess" output format is not fixed
+ + MINOR: stats: fix minor typo fix in stats_dump_errors_to_buffer()
+ + DOC: indicate in the doc that track-sc* can wait if data are missing
+ + MEDIUM: http: enable header manipulation for 101 responses
+ + BUG/MEDIUM: config: propagate frontend to backend process binding again.
+ + MEDIUM: config: properly propagate process binding between proxies
+ + MEDIUM: config: make the frontends automatically bind to the listeners'
+ processes
+ + MEDIUM: config: compute the exact bind-process before listener's
+ maxaccept
+ + MEDIUM: config: only warn if stats are attached to multi-process bind
+ directives
+ + MEDIUM: config: report it when tcp-request rules are misplaced
+ + MINOR: config: detect the case where a tcp-request content rule has no
+ inspect-delay
+ + MEDIUM: systemd-wrapper: support multiple executable versions and names
+ + BUG/MEDIUM: remove debugging code from systemd-wrapper
+ + BUG/MEDIUM: http: adjust close mode when switching to backend
+ + BUG/MINOR: config: don't propagate process binding on fatal errors.
+ + BUG/MEDIUM: check: rule-less tcp-check must detect connect failures
+ + BUG/MINOR: tcp-check: report the correct failed step in the status
+ + DOC: indicate that weight zero is reported as DRAIN
+ * Add a new patch (haproxy.service-set-killmode-to-mixed.patch) to fix the
+ systemctl stop action conflicting with the systemd wrapper now catching
+ SIGTERM.
+ * Bump standards to 3.9.6; no changes needed.
+ * haproxy-doc: link to tracker.debian.org instead of packages.qa.debian.org.
+ * d/copyright: move debian/dconv/* paragraph after debian/*, so that it
+ actually matches the files it is supposed to.
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Wed, 08 Oct 2014 12:34:53 +0300
+
+haproxy (1.5.4-1) unstable; urgency=high
+
+ * New upstream version.
+ + Fix a critical bug that, under certain unlikely conditions, allows a
+ client to crash haproxy.
+ * Prefix rsyslog configuration file to ensure to log only to
+ /var/log/haproxy. Thanks to Paul Bourke for the patch.
+
+ -- Vincent Bernat <bernat@debian.org> Tue, 02 Sep 2014 19:14:38 +0200
+
+haproxy (1.5.3-1) unstable; urgency=medium
+
+ * New upstream stable release, fixing the following issues:
+ + Memory corruption when building a proxy protocol v2 header
+ + Memory leak in SSL DHE key exchange
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Fri, 25 Jul 2014 10:41:36 +0300
+
+haproxy (1.5.2-1) unstable; urgency=medium
+
+ * New upstream stable release. Important fixes:
+ + A few sample fetch functions when combined in certain ways would return
+ malformed results, possibly crashing the HAProxy process.
+ + Hash-based load balancing and http-send-name-header would fail for
+ requests which contain a body which starts to be forwarded before the
+ data is used.
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Mon, 14 Jul 2014 00:42:32 +0300
+
+haproxy (1.5.1-1) unstable; urgency=medium
+
+ * New upstream stable release:
+ + Fix a file descriptor leak for clients that disappear before connecting.
+ + Do not staple expired OCSP responses.
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Tue, 24 Jun 2014 12:56:30 +0300
+
+haproxy (1.5.0-1) unstable; urgency=medium
+
+ * New upstream stable series. Notable changes since the 1.4 series:
+ + Native SSL support on both sides with SNI/NPN/ALPN and OCSP stapling.
+ + IPv6 and UNIX sockets are supported everywhere
+ + End-to-end HTTP keep-alive for better support of NTLM and improved
+ efficiency in static farms
+ + HTTP/1.1 response compression (deflate, gzip) to save bandwidth
+ + PROXY protocol versions 1 and 2 on both sides
+ + Data sampling on everything in request or response, including payload
+ + ACLs can use any matching method with any input sample
+ + Maps and dynamic ACLs updatable from the CLI
+ + Stick-tables support counters to track activity on any input sample
+ + Custom format for logs, unique-id, header rewriting, and redirects
+ + Improved health checks (SSL, scripted TCP, check agent, ...)
+ + Much more scalable configuration supports hundreds of thousands of
+ backends and certificates without sweating
+
+ * Upload to unstable, merge all 1.5 work from experimental. Most important
+ packaging changes since 1.4.25-1 include:
+ + systemd support.
+ + A more sane default config file.
+ + Zero-downtime upgrades between 1.5 releases by gracefully reloading
+ HAProxy during upgrades.
+ + HTML documentation shipped in the haproxy-doc package.
+ + kqueue support for kfreebsd.
+
+ * Packaging changes since 1.5~dev26-2:
+ + Drop patches merged upstream:
+ o Fix-reference-location-in-manpage.patch
+ o 0001-BUILD-stats-workaround-stupid-and-bogus-Werror-forma.patch
+ + d/watch: look for stable 1.5 releases
+ + systemd: respect CONFIG and EXTRAOPTS when specified in
+ /etc/default/haproxy.
+ + initscript: test the configuration before start or reload.
+ + initscript: remove the ENABLED flag and logic.
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Fri, 20 Jun 2014 11:05:17 +0300
+
+haproxy (1.5~dev26-2) experimental; urgency=medium
+
+ * initscript: start should not fail when haproxy is already running
+ + Fixes upgrades from post-1.5~dev24-1 installations
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Wed, 04 Jun 2014 13:20:39 +0300
+
+haproxy (1.5~dev26-1) experimental; urgency=medium
+
+ * New upstream development version.
+ + Add a patch to fix compilation with -Werror=format-security
+
+ -- Vincent Bernat <bernat@debian.org> Wed, 28 May 2014 20:32:10 +0200
+
+haproxy (1.5~dev25-1) experimental; urgency=medium
+
+ [ Vincent Bernat ]
+ * New upstream development version.
+ * Rename "contimeout", "clitimeout" and "srvtimeout" in the default
+ configuration file to "timeout connection", "timeout client" and
+ "timeout server".
+
+ [ Apollon Oikonomopoulos ]
+ * Build on kfreebsd using the "freebsd" target; enables kqueue support.
+
+ -- Vincent Bernat <bernat@debian.org> Thu, 15 May 2014 00:20:11 +0200
+
+haproxy (1.5~dev24-2) experimental; urgency=medium
+
+ * New binary package: haproxy-doc
+ + Contains the HTML documentation built using a version of Cyril Bonté's
+ haproxy-dconv (https://github.com/cbonte/haproxy-dconv).
+ + Add Build-Depends-Indep on python and python-mako
+ + haproxy Suggests: haproxy-doc
+ * systemd: check config file for validity on reload.
+ * haproxy.cfg:
+ + Enable the stats socket by default and bind it to
+ /run/haproxy/admin.sock, which is accessible by the haproxy group.
+ /run/haproxy creation is handled by the initscript for sysv-rc and a
+ tmpfiles.d config for systemd.
+ + Set the default locations for CA and server certificates to
+ /etc/ssl/certs and /etc/ssl/private respectively.
+ + Set the default cipher list to be used on listening SSL sockets to
+ enable PFS, preferring ECDHE ciphers by default.
+ * Gracefully reload HAProxy on upgrade instead of performing a full restart.
+ * debian/rules: split build into binary-arch and binary-indep.
+ * Build-depend on debhelper >= 9, set compat to 9.
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Sun, 27 Apr 2014 13:37:17 +0300
+
+haproxy (1.5~dev24-1) experimental; urgency=medium
+
+ * New upstream development version, fixes major regressions introduced in
+ 1.5~dev23:
+
+ + Forwarding of a message body (request or response) would automatically
+ stop after the transfer timeout strikes, and with no error.
+ + Redirects failed to update the msg->next offset after consuming the
+ request, so if they were made with keep-alive enabled and starting with
+ a slash (relative location), then the buffer was shifted by a negative
+ amount of data, causing a crash.
+ + The code to standardize DH parameters caused an important performance
+ regression, so it was temporarily reverted for the time needed to
+ understand the cause and to fix it.
+
+ For a complete release announcement, including other bugfixes and feature
+ enhancements, see http://deb.li/yBVA.
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Sun, 27 Apr 2014 11:09:37 +0300
+
+haproxy (1.5~dev23-1) experimental; urgency=medium
+
+ * New upstream development version; notable changes since 1.5~dev22:
+ + SSL record size optimizations to speed up both small and large
+ transfers.
+ + Dynamic backend name support in use_backend.
+ + Compressed chunked transfer encoding support.
+ + Dynamic ACL manipulation via the CLI.
+ + New "language" converter for extracting language preferences from
+ Accept-Language headers.
+ * Remove halog source and systemd unit files from
+ /usr/share/doc/haproxy/contrib, they are built and shipped in their
+ appropriate locations since 1.5~dev19-2.
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Wed, 23 Apr 2014 11:12:34 +0300
+
+haproxy (1.5~dev22-1) experimental; urgency=medium
+
+ * New upstream development version
+ * watch: use the source page and not the main one
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Mon, 03 Feb 2014 17:45:51 +0200
+
+haproxy (1.5~dev21+20140118-1) experimental; urgency=medium
+
+ * New upstream development snapshot, with the following fixes since
+ 1.5-dev21:
+ + 00b0fb9 BUG/MAJOR: ssl: fix breakage caused by recent fix abf08d9
+ + 410f810 BUG/MEDIUM: map: segmentation fault with the stats's socket
+ command "set map ..."
+ + abf08d9 BUG/MAJOR: connection: fix mismatch between rcv_buf's API and
+ usage
+ + 35249cb BUG/MINOR: pattern: pattern comparison executed twice
+ + c920096 BUG/MINOR: http: don't clear the SI_FL_DONT_WAKE flag between
+ requests
+ + b800623 BUG/MEDIUM: stats: fix HTTP/1.0 breakage introduced in previous
+ patch
+ + 61f7f0a BUG/MINOR: stream-int: do not clear the owner upon unregister
+ + 983eb31 BUG/MINOR: channel: CHN_INFINITE_FORWARD must be unsigned
+ + a3ae932 BUG/MEDIUM: stats: the web interface must check the tracked
+ servers before enabling
+ + e24d963 BUG/MEDIUM: checks: unchecked servers could not be enabled
+ anymore
+ + 7257550 BUG/MINOR: http: always disable compression on HTTP/1.0
+ + 9f708ab BUG/MINOR: checks: successful check completion must not
+ re-enable MAINT servers
+ + ff605db BUG/MEDIUM: backend: do not re-initialize the connection's
+ context upon reuse
+ + ea90063 BUG/MEDIUM: stream-int: fix the keep-alive idle connection
+ handler
+ * Update debian/copyright to reflect the license of ebtree/
+ (closes: #732614)
+ * Synchronize debian/copyright with source
+ * Add Documentation field to the systemd unit file
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Mon, 20 Jan 2014 10:07:34 +0200
+
+haproxy (1.5~dev21-1) experimental; urgency=low
+
+ [ Prach Pongpanich ]
+ * Bump Standards-Version to 3.9.5
+
+ [ Thomas Bechtold ]
+ * debian/control: Add haproxy-dbg binary package for debug symbols.
+
+ [ Apollon Oikonomopoulos ]
+ * New upstream development version.
+ * Require syslog to be operational before starting. Closes: #726323.
+
+ -- Vincent Bernat <bernat@debian.org> Tue, 17 Dec 2013 01:38:04 +0700
+
+haproxy (1.5~dev19-2) experimental; urgency=low
+
+ [ Vincent Bernat ]
+ * Really enable systemd support by using dh-systemd helper.
+ * Don't use -L/usr/lib and rely on default search path. Closes: #722777.
+
+ [ Apollon Oikonomopoulos ]
+ * Ship halog.
+
+ -- Vincent Bernat <bernat@debian.org> Thu, 12 Sep 2013 21:58:05 +0200
+
+haproxy (1.5~dev19-1) experimental; urgency=high
+
+ [ Vincent Bernat ]
+ * New upstream version.
+ + CVE-2013-2175: fix a possible crash when using negative header
+ occurrences.
+ + Drop 0002-Fix-typo-in-src-haproxy.patch: applied upstream.
+ * Enable gzip compression feature.
+
+ [ Prach Pongpanich ]
+ * Drop bashism patch. It seems useless to maintain a patch to convert
+ example scripts from /bin/bash to /bin/sh.
+ * Fix reload/restart action of init script (LP: #1187469)
+
+ -- Vincent Bernat <bernat@debian.org> Mon, 17 Jun 2013 22:03:58 +0200
+
+haproxy (1.5~dev18-1) experimental; urgency=low
+
+ [ Apollon Oikonomopoulos ]
+ * New upstream development version
+
+ [ Vincent Bernat ]
+ * Add support for systemd. Currently, /etc/default/haproxy is not used
+ when using systemd.
+
+ -- Vincent Bernat <bernat@debian.org> Sun, 26 May 2013 12:33:00 +0200
+
+haproxy (1.4.25-1) unstable; urgency=medium
+
+ [ Prach Pongpanich ]
+ * New upstream version.
+ * Update watch file to use the source page.
+ * Bump Standards-Version to 3.9.5.
+
+ [ Thomas Bechtold ]
+ * debian/control: Add haproxy-dbg binary package for debug symbols.
+
+ [ Apollon Oikonomopoulos ]
+ * Require syslog to be operational before starting. Closes: #726323.
+ * Document how to bind non-local IPv6 addresses.
+ * Add a reference to configuration.txt.gz to the manpage.
+ * debian/copyright: synchronize with source.
+
+ -- Prach Pongpanich <prachpub@gmail.com> Fri, 28 Mar 2014 09:35:09 +0700
+
+haproxy (1.4.24-2) unstable; urgency=low
+
+ [ Apollon Oikonomopoulos ]
+ * Ship contrib/halog as /usr/bin/halog.
+
+ [ Vincent Bernat ]
+ * Don't use -L/usr/lib and rely on default search path. Closes: #722777.
+
+ -- Vincent Bernat <bernat@debian.org> Sun, 15 Sep 2013 14:36:27 +0200
+
+haproxy (1.4.24-1) unstable; urgency=high
+
+ [ Vincent Bernat ]
+ * New upstream version.
+ + CVE-2013-2175: fix a possible crash when using negative header
+ occurrences.
+
+ [ Prach Pongpanich ]
+ * Drop bashism patch. It seems useless to maintain a patch to convert
+ example scripts from /bin/bash to /bin/sh.
+ * Fix reload/restart action of init script (LP: #1187469).
+
+ -- Vincent Bernat <bernat@debian.org> Mon, 17 Jun 2013 21:56:26 +0200
+
+haproxy (1.4.23-1) unstable; urgency=low
+
+ [ Apollon Oikonomopoulos ]
+ * New upstream version (Closes: #643650, #678953)
+ + This fixes CVE-2012-2942 (Closes: #674447)
+ + This fixes CVE-2013-1912 (Closes: #704611)
+ * Ship vim addon as vim-haproxy (Closes: #702893)
+ * Check for the configuration file after sourcing /etc/default/haproxy
+ (Closes: #641762)
+ * Use /dev/log for logging by default (Closes: #649085)
+
+ [ Vincent Bernat ]
+ * debian/control:
+ + add Vcs-* fields
+ + switch maintenance to Debian HAProxy team. (Closes: #706890)
+ + drop dependency to quilt: 3.0 (quilt) format is in use.
+ * debian/rules:
+ + don't explicitly call dh_installchangelog.
+ + use dh_installdirs to install directories.
+ + use dh_install to install error and configuration files.
+ + switch to `linux2628` Makefile target for Linux.
+ * debian/postrm:
+ + remove haproxy user and group on purge.
+ * Ship a more minimal haproxy.cfg file: no `listen` blocks but `global`
+ and `defaults` block with appropriate configuration to use chroot and
+ logging in the expected way.
+
+ [ Prach Pongpanich ]
+ * debian/copyright:
+ + add missing copyright holders
+ + update years of copyright
+ * debian/rules:
+ + build with -Wl,--as-needed to get rid of unnecessary depends
+ * Remove useless files in debian/haproxy.{docs,examples}
+ * Update debian/watch file, thanks to Bart Martens
+
+ -- Vincent Bernat <bernat@debian.org> Mon, 06 May 2013 20:02:14 +0200
+
+haproxy (1.4.15-1) unstable; urgency=low
+
+ * New upstream release with critical bug fix (Closes: #631351)
+
+ -- Christo Buschek <crito@30loops.net> Thu, 14 Jul 2011 18:17:05 +0200
+
+haproxy (1.4.13-1) unstable; urgency=low
+
+ * New maintainer upload (Closes: #615246)
+ * New upstream release
+ * Standards-version goes 3.9.1 (no change)
+ * Added patch bashism (Closes: #581109)
+ * Added a README.source file.
+
+ -- Christo Buschek <crito@30loops.net> Thu, 11 Mar 2011 12:41:59 +0000
+
+haproxy (1.4.8-1) unstable; urgency=low
+
+ * New upstream release.
+
+ -- Arnaud Cornet <acornet@debian.org> Fri, 18 Jun 2010 00:42:53 +0100
+
+haproxy (1.4.4-1) unstable; urgency=low
+
+ * New upstream release
+ * Add splice and tproxy support
+ * Add regparm optimization on i386
+ * Switch to dpkg-source 3.0 (quilt) format
+
+ -- Arnaud Cornet <acornet@debian.org> Thu, 15 Apr 2010 20:00:34 +0100
+
+haproxy (1.4.2-1) unstable; urgency=low
+
+ * New upstream release
+ * Remove debian/patches/haproxy.1-hyphen.patch gone upstream
+ * Tighten quilt build dep (Closes: #567087)
+ * standards-version goes 3.8.4 (no change)
+ * Add $remote_fs to init.d script required start and stop
+
+ -- Arnaud Cornet <acornet@debian.org> Sat, 27 Mar 2010 15:19:48 +0000
+
+haproxy (1.3.22-1) unstable; urgency=low
+
+ * New upstream bugfix release
+
+ -- Arnaud Cornet <acornet@debian.org> Mon, 19 Oct 2009 22:31:45 +0100
+
+haproxy (1.3.21-1) unstable; urgency=low
+
+ [ Michael Shuler ]
+ * New Upstream Version (Closes: #538992)
+ * Added override for example shell scripts in docs (Closes: #530096)
+ * Added upstream changelog to docs
+ * Added debian/watch
+ * Updated debian/copyright format
+ * Added haproxy.1-hyphen.patch, to fix hyphen in man page
+ * Upgrade Standards-Version to 3.8.3 (no change needed)
+ * Upgrade debian/compat to 7 (no change needed)
+
+ [ Arnaud Cornet ]
+ * New upstream version.
+ * Merge Michael's work, few changelog fixes
+ * Add debian/README.source to point to quilt doc
+ * Depend on debhelper >= 7.0.50~ and use overrides in debian/rules
+
+ -- Arnaud Cornet <acornet@debian.org> Sun, 18 Oct 2009 14:01:29 +0200
+
+haproxy (1.3.18-1) unstable; urgency=low
+
+ * New Upstream Version (Closes: #534583).
+ * Add contrib directory in docs
+
+ -- Arnaud Cornet <acornet@debian.org> Fri, 26 Jun 2009 00:11:01 +0200
+
+haproxy (1.3.15.7-2) unstable; urgency=low
+
+ * Fix build without debian/patches directory (Closes: #515682) using
+ /usr/share/quilt/quilt.make.
+
+ -- Arnaud Cornet <acornet@debian.org> Tue, 17 Feb 2009 08:55:12 +0100
+
+haproxy (1.3.15.7-1) unstable; urgency=low
+
+ * New Upstream Version.
+ * Remove upstream patches:
+ -use_backend-consider-unless.patch
+ -segfault-url_param+check_post.patch
+ -server-timeout.patch
+ -closed-fd-remove.patch
+ -connection-slot-during-retry.patch
+ -srv_dynamic_maxconn.patch
+ -do-not-pause-backends-on-reload.patch
+ -acl-in-default.patch
+ -cookie-capture-check.patch
+ -dead-servers-queue.patch
+
+ -- Arnaud Cornet <acornet@debian.org> Mon, 16 Feb 2009 11:20:21 +0100
+
+haproxy (1.3.15.2-2~lenny1) testing-proposed-updates; urgency=low
+
+ * Rebuild for lenny to circumvent pcre3 shlibs bump.
+
+ -- Arnaud Cornet <acornet@debian.org> Wed, 14 Jan 2009 11:28:36 +0100
+
+haproxy (1.3.15.2-2) unstable; urgency=low
+
+ * Add stable branch bug fixes from upstream (Closes: #510185).
+ - use_backend-consider-unless.patch: consider "unless" in use_backend
+ - segfault-url_param+check_post.patch: fix segfault with url_param +
+ check_post
+ - server-timeout.patch: consider server timeout in all circumstances
+ - closed-fd-remove.patch: drop info about closed file descriptors
+ - connection-slot-during-retry.patch: do not release the connection slot
+ during a retry
+ - srv_dynamic_maxconn.patch: dynamic connection throttling api fix
+ - do-not-pause-backends-on-reload.patch: make reload reliable
+ - acl-in-default.patch: allow acl-related keywords in defaults sections
+ - cookie-capture-check.patch: cookie capture is declared in the frontend
+ but checked on the backend
+ - dead-servers-queue.patch: make dead servers not suck pending connections
+ * Add quilt build-dependency. Use quilt in debian/rules to apply
+ patches.
+
+ -- Arnaud Cornet <acornet@debian.org> Wed, 31 Dec 2008 08:50:21 +0100
+
+haproxy (1.3.15.2-1) unstable; urgency=low
+
+ * New Upstream Version (Closes: #497186).
+
+ -- Arnaud Cornet <acornet@debian.org> Sat, 30 Aug 2008 18:06:31 +0200
+
+haproxy (1.3.15.1-1) unstable; urgency=low
+
+ * New Upstream Version
+ * Upgrade standards version to 3.8.0 (no change needed).
+ * Build with TARGET=linux26 on linux, TARGET=generic on other systems.
+
+ -- Arnaud Cornet <acornet@debian.org> Fri, 20 Jun 2008 00:38:50 +0200
+
+haproxy (1.3.14.5-1) unstable; urgency=low
+
+ * New Upstream Version (Closes: #484221)
+ * Use debhelper 7, drop CDBS.
+
+ -- Arnaud Cornet <acornet@debian.org> Wed, 04 Jun 2008 19:21:56 +0200
+
+haproxy (1.3.14.3-1) unstable; urgency=low
+
+ * New Upstream Version
+ * Add status argument support to init-script to conform to LSB.
+ * Cleanup pidfile after stop in init script. Init script return code fixups.
+
+ -- Arnaud Cornet <acornet@debian.org> Sun, 09 Mar 2008 21:30:29 +0100
+
+haproxy (1.3.14.2-3) unstable; urgency=low
+
+ * Add init script support for nbproc > 1 in configuration. That is,
+ multiple haproxy processes.
+ * Use 'option redispatch' instead of redispatch in debian default
+ config.
+
+ -- Arnaud Cornet <acornet@debian.org> Sun, 03 Feb 2008 18:22:28 +0100
+
+haproxy (1.3.14.2-2) unstable; urgency=low
+
+ * Fix init script's reload function to use -sf instead of -st (to wait for
+ active sessions to finish cleanly). Also support dash. Thanks to
+ Jean-Baptiste Quenot for noticing.
+
+ -- Arnaud Cornet <acornet@debian.org> Thu, 24 Jan 2008 23:47:26 +0100
+
+haproxy (1.3.14.2-1) unstable; urgency=low
+
+ * New Upstream Version
+ * Simplify DEB_MAKE_INVOKE, as upstream now supports us overriding
+ CFLAGS.
+ * Move haproxy to usr/sbin.
+
+ -- Arnaud Cornet <acornet@debian.org> Mon, 21 Jan 2008 22:42:51 +0100
+
+haproxy (1.3.14.1-1) unstable; urgency=low
+
+ * New upstream release.
+ * Drop dfsg list and hash code rewrite (merged upstream).
+ * Add a HAPROXY variable in init script.
+ * Drop makefile patch, fix debian/rules accordingly. Drop build-dependency
+ on quilt.
+ * Manpage now upstream. Ship upstream's and drop ours.
+
+ -- Arnaud Cornet <acornet@debian.org> Tue, 01 Jan 2008 22:50:09 +0100
+
+haproxy (1.3.12.dfsg2-1) unstable; urgency=low
+
+ * New upstream bugfix release.
+ * Use new Homepage tag.
+ * Bump standards-version (no change needed).
+ * Add build-depend on quilt and add patch to allow proper CFLAGS passing to
+ make.
+
+ -- Arnaud Cornet <acornet@debian.org> Tue, 25 Dec 2007 21:52:59 +0100
+
+haproxy (1.3.12.dfsg-1) unstable; urgency=low
+
+ * Initial release (Closes: #416397).
+ * The DFSG version removes files with a GPL-incompatible license and adds a
+ re-implementation by me.
+
+ -- Arnaud Cornet <acornet@debian.org> Fri, 17 Aug 2007 09:33:41 +0200
--- /dev/null
+doc/configuration.html
+doc/intro.html
--- /dev/null
+Source: haproxy
+Section: net
+Priority: optional
+Maintainer: Debian HAProxy Maintainers <pkg-haproxy-maintainers@lists.alioth.debian.org>
+Uploaders: Apollon Oikonomopoulos <apoikos@debian.org>,
+ Prach Pongpanich <prach@debian.org>,
+ Vincent Bernat <bernat@debian.org>
+Standards-Version: 3.9.6
+Build-Depends: debhelper (>= 9),
+ libpcre3-dev,
+ libssl-dev,
+ liblua5.3-dev,
+ dh-systemd (>= 1.5),
+ python-sphinx (>= 1.0.7+dfsg)
+Build-Depends-Indep: python, python-mako
+Homepage: http://haproxy.1wt.eu/
+Vcs-Git: git://anonscm.debian.org/pkg-haproxy/haproxy.git
+Vcs-Browser: http://anonscm.debian.org/gitweb/?p=pkg-haproxy/haproxy.git
+
+Package: haproxy
+Architecture: any
+Depends: ${shlibs:Depends}, ${misc:Depends}, adduser
+Suggests: vim-haproxy, haproxy-doc
+Description: fast and reliable load balancing reverse proxy
+ HAProxy is a TCP/HTTP reverse proxy which is particularly suited for high
+ availability environments. It features connection persistence through HTTP
+ cookies, load balancing, and header addition, modification and deletion in
+ both directions. It has request blocking capabilities and provides an
+ interface to display server status.
+
+Package: haproxy-dbg
+Section: debug
+Priority: extra
+Architecture: any
+Depends: ${misc:Depends}, haproxy (= ${binary:Version})
+Description: fast and reliable load balancing reverse proxy (debug symbols)
+ HAProxy is a TCP/HTTP reverse proxy which is particularly suited for high
+ availability environments. It features connection persistence through HTTP
+ cookies, load balancing, and header addition, modification and deletion in
+ both directions. It has request blocking capabilities and provides an
+ interface to display server status.
+ .
+ This package contains the debugging symbols for haproxy.
+
+Package: haproxy-doc
+Section: doc
+Priority: extra
+Architecture: all
+Depends: ${misc:Depends}, libjs-bootstrap (<< 4), libjs-jquery,
+ ${sphinxdoc:Depends}
+Description: fast and reliable load balancing reverse proxy (HTML documentation)
+ HAProxy is a TCP/HTTP reverse proxy which is particularly suited for high
+ availability environments. It features connection persistence through HTTP
+ cookies, load balancing, and header addition, modification and deletion in
+ both directions. It has request blocking capabilities and provides an
+ interface to display server status.
+ .
+ This package contains the HTML documentation for haproxy.
+
+Package: vim-haproxy
+Architecture: all
+Depends: ${misc:Depends}
+Recommends: vim-addon-manager
+Description: syntax highlighting for HAProxy configuration files
+ The vim-haproxy package provides filetype detection and syntax highlighting
+ for HAProxy configuration files.
+ .
+ As per the Debian vim policy, installed addons are not activated
+ automatically, but the "vim-addon-manager" tool can be used for this purpose.
--- /dev/null
+Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
+Upstream-Name: haproxy
+Upstream-Contact: Willy Tarreau <w@1wt.eu>
+Source: http://haproxy.1wt.eu/
+
+Files: *
+Copyright: Copyright 2000-2015 Willy Tarreau <w@1wt.eu>.
+License: GPL-2+
+
+Files: ebtree/*
+ include/*
+ contrib/halog/fgets2.c
+Copyright: Copyright 2000-2013 Willy Tarreau - w@1wt.eu
+License: LGPL-2.1
+
+Files: include/proto/auth.h
+ include/types/checks.h
+ include/types/auth.h
+ src/auth.c
+Copyright: Copyright 2008-2010 Krzysztof Piotr Oledzki <ole@ans.pl>
+License: GPL-2+
+
+Files: include/import/lru.h
+ src/lru.c
+Copyright: Copyright (C) 2015 Willy Tarreau <w@1wt.eu>
+License: Expat
+
+Files: include/import/xxhash.h
+ src/xxhash.c
+Copyright: Copyright (C) 2012-2014, Yann Collet.
+License: BSD-2-clause
+
+Files: include/proto/shctx.h
+ src/shctx.c
+Copyright: Copyright (C) 2011-2012 EXCELIANCE
+License: GPL-2+
+
+Files: include/proto/compression.h
+ include/types/compression.h
+Copyright: Copyright 2012 (C) Exceliance, David Du Colombier <dducolombier@exceliance.fr>
+ William Lallemand <wlallemand@exceliance.fr>
+License: LGPL-2.1
+
+Files: include/proto/peers.h
+ include/proto/ssl_sock.h
+ include/types/peers.h
+ include/types/ssl_sock.h
+Copyright: Copyright (C) 2009-2012 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+License: LGPL-2.1
+
+Files: include/types/dns.h
+Copyright: Copyright (C) 2014 Baptiste Assmann <bedis9@gmail.com>
+License: LGPL-2.1
+
+Files: src/dns.c
+Copyright: Copyright (C) 2014 Baptiste Assmann <bedis9@gmail.com>
+License: GPL-2+
+
+Files: include/types/mailers.h
+ src/mailers.c
+Copyright: Copyright 2015 Horms Solutions Ltd., Simon Horman <horms@verge.net.au>
+ Copyright 2010 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+License: LGPL-2.1
+
+Files: include/proto/sample.h
+ include/proto/stick_table.h
+ include/types/sample.h
+ include/types/stick_table.h
+Copyright: Copyright (C) 2009-2012 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+ Copyright (C) 2010-2013 Willy Tarreau <w@1wt.eu>
+License: LGPL-2.1
+
+Files: include/types/counters.h
+Copyright: Copyright 2008-2009 Krzysztof Piotr Oledzki <ole@ans.pl>
+ Copyright 2011 Willy Tarreau <w@1wt.eu>
+License: LGPL-2.1
+
+Files: include/common/base64.h
+ include/common/uri_auth.h
+ include/proto/signal.h
+ include/types/signal.h
+Copyright: Copyright 2000-2013 Willy Tarreau <w@1wt.eu>
+License: GPL-2+
+
+Files: include/common/rbtree.h
+Copyright: (C) 1999 Andrea Arcangeli <andrea@suse.de>
+License: GPL-2+
+
+Files: src/base64.c
+ src/checks.c
+ src/dumpstats.c
+ src/server.c
+Copyright: Copyright 2000-2012 Willy Tarreau <w@1wt.eu>
+ Copyright 2007-2010 Krzysztof Piotr Oledzki <ole@ans.pl>
+License: GPL-2+
+
+Files: src/compression.c
+Copyright: Copyright 2012 (C) Exceliance, David Du Colombier <dducolombier@exceliance.fr>
+ William Lallemand <wlallemand@exceliance.fr>
+License: GPL-2+
+
+Files: src/haproxy-systemd-wrapper.c
+Copyright: Copyright 2013 Marc-Antoine Perennou <Marc-Antoine@Perennou.com>
+License: GPL-2+
+
+Files: src/rbtree.c
+Copyright: (C) 1999 Andrea Arcangeli <andrea@suse.de>
+ (C) 2002 David Woodhouse <dwmw2@infradead.org>
+License: GPL-2+
+
+Files: src/sample.c
+ src/stick_table.c
+Copyright: Copyright 2009-2010 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+ Copyright (C) 2010-2012 Willy Tarreau <w@1wt.eu>
+License: GPL-2+
+
+Files: src/peers.c
+ src/ssl_sock.c
+Copyright: Copyright (C) 2010-2012 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+License: GPL-2+
+
+Files: contrib/netsnmp-perl/haproxy.pl
+ contrib/base64/base64rev-gen.c
+Copyright: Copyright 2007-2010 Krzysztof Piotr Oledzki <ole@ans.pl>
+License: GPL-2+
+
+Files: examples/stats_haproxy.sh
+Copyright: Copyright 2007 Julien Antony and Matthieu Huguet
+License: GPL-2+
+
+Files: examples/check
+Copyright: 2006-2007 (C) Fabrice Dulaunoy <fabrice@dulaunoy.com>
+License: GPL-2+
+
+Files: tests/test_pools.c
+Copyright: Copyright 2007 Aleksandar Lazic <al-haproxy@none.at>
+License: GPL-2+
+
+Files: debian/*
+Copyright: Copyright (C) 2007-2011, Arnaud Cornet <acornet@debian.org>
+ Copyright (C) 2011, Christo Buschek <crito@30loops.net>
+ Copyright (C) 2013, Prach Pongpanich <prachpub@gmail.com>
+ Copyright (C) 2013-2014, Apollon Oikonomopoulos <apoikos@debian.org>
+ Copyright (C) 2013, Vincent Bernat <bernat@debian.org>
+License: GPL-2
+
+Files: debian/dconv/*
+Copyright: Copyright (C) 2012 Cyril Bonté
+License: Apache-2.0
+
+Files: debian/dconv/js/typeahead.bundle.js
+Copyright: Copyright 2013-2015 Twitter, Inc. and other contributors
+License: Expat
+
+License: GPL-2+
+ This program is free software; you can redistribute it
+ and/or modify it under the terms of the GNU General Public
+ License as published by the Free Software Foundation; either
+ version 2 of the License, or (at your option) any later
+ version.
+ .
+ This program is distributed in the hope that it will be
+ useful, but WITHOUT ANY WARRANTY; without even the implied
+ warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
+ PURPOSE. See the GNU General Public License for more
+ details.
+ .
+ You should have received a copy of the GNU General Public
+ License along with this package; if not, write to the Free
+ Software Foundation, Inc., 51 Franklin St, Fifth Floor,
+ Boston, MA 02110-1301 USA
+ .
+ On Debian systems, the full text of the GNU General Public
+ License version 2 can be found in the file
+ `/usr/share/common-licenses/GPL-2'.
+
+License: LGPL-2.1
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation; either
+ version 2.1 of the License, or (at your option) any later version.
+ .
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+ .
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ .
+ On Debian systems, the complete text of the GNU Lesser General Public License,
+ version 2.1, can be found in /usr/share/common-licenses/LGPL-2.1.
+
+License: GPL-2
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License version 2 as
+ published by the Free Software Foundation.
+ .
+ On Debian systems, the complete text of the GNU General Public License, version
+ 2, can be found in /usr/share/common-licenses/GPL-2.
+
+License: Apache-2.0
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+ .
+ http://www.apache.org/licenses/LICENSE-2.0
+ .
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+ .
+ On Debian systems, the full text of the Apache License version 2.0 can be
+ found in the file `/usr/share/common-licenses/Apache-2.0'.
+
+License: Expat
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+ .
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+ .
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+License: BSD-2-clause
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are
+ met:
+ .
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+ copyright notice, this list of conditions and the following disclaimer
+ in the documentation and/or other materials provided with the
+ distribution.
+ .
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--- /dev/null
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
--- /dev/null
+Copyright 2012 Cyril Bonté
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
--- /dev/null
+# HAProxy Documentation Converter
+
+Made to convert the HAProxy documentation into HTML.
+
+Beyond producing HTML, the main goal is to provide easy navigation.
+
+## Documentation
+
+A bot periodically fetches the latest commits for HAProxy 1.4 and 1.5 to produce up-to-date documentation.
+
+Converted documentation is then stored online:
+- HAProxy 1.4 Configuration Manual: [stable](http://cbonte.github.com/haproxy-dconv/configuration-1.4.html) / [snapshot](http://cbonte.github.com/haproxy-dconv/snapshot/configuration-1.4.html)
+- HAProxy 1.5 Configuration Manual: [stable](http://cbonte.github.com/haproxy-dconv/configuration-1.5.html) / [snapshot](http://cbonte.github.com/haproxy-dconv/snapshot/configuration-1.5.html)
+- HAProxy 1.6 Configuration Manual: [stable](http://cbonte.github.com/haproxy-dconv/configuration-1.6.html) / [snapshot](http://cbonte.github.com/haproxy-dconv/snapshot/configuration-1.6.html)
+
+
+## Contribute
+
+The project now lives by itself, as it is sufficiently usable. But I'm sure we can do even better.
+Feel free to report feature requests or to provide patches!
+
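+For reference, a typical local invocation might look like the following. This is a sketch based on the converter's command-line options (`--infile`, `--outfile`); it assumes the script is named `haproxy-dconv.py` and that a copy of HAProxy's `configuration.txt` sits in the current directory:
+
+```shell
+# Convert HAProxy's configuration.txt into a browsable HTML page.
+# Adjust the script path and file names to match your checkout.
+python haproxy-dconv.py --infile configuration.txt --outfile configuration.html
+```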
--- /dev/null
+/* Global Styles */
+
+body {
+ margin-top: 50px;
+ background: #eee;
+}
+
+a.anchor {
+ display: block; position: relative; top: -50px; visibility: hidden;
+}
+
+/* ------------------------------- */
+
+/* Wrappers */
+
+/* ------------------------------- */
+
+#wrapper {
+ width: 100%;
+}
+
+#page-wrapper {
+ padding: 0 15px 50px;
+ width: 740px;
+ background-color: #fff;
+ margin-left: 250px;
+}
+
+#sidebar {
+ position: fixed;
+ width: 250px;
+ top: 50px;
+ bottom: 0;
+ padding: 15px;
+ background: #f5f5f5;
+ border-right: 1px solid #ccc;
+}
+
+
+/* ------------------------------- */
+
+/* Twitter typeahead.js */
+
+/* ------------------------------- */
+
+.twitter-typeahead {
+ width: 100%;
+}
+.typeahead,
+.tt-query,
+.tt-hint {
+ width: 100%;
+ padding: 8px 12px;
+ border: 2px solid #ccc;
+ -webkit-border-radius: 8px;
+ -moz-border-radius: 8px;
+ border-radius: 8px;
+ outline: none;
+}
+
+.typeahead {
+ background-color: #fff;
+}
+
+.typeahead:focus {
+ border: 2px solid #0097cf;
+}
+
+.tt-query {
+ -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075);
+ -moz-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075);
+ box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075);
+}
+
+.tt-hint {
+	color: #999;
+}
+
+.tt-menu {
+ width: 100%;
+ margin-top: 4px;
+ padding: 8px 0;
+ background-color: #fff;
+ border: 1px solid #ccc;
+ border: 1px solid rgba(0, 0, 0, 0.2);
+ -webkit-border-radius: 8px;
+ -moz-border-radius: 8px;
+ border-radius: 8px;
+ -webkit-box-shadow: 0 5px 10px rgba(0,0,0,.2);
+ -moz-box-shadow: 0 5px 10px rgba(0,0,0,.2);
+ box-shadow: 0 5px 10px rgba(0,0,0,.2);
+}
+
+.tt-suggestion {
+ padding: 3px 8px;
+ line-height: 24px;
+}
+
+.tt-suggestion:hover {
+ cursor: pointer;
+ color: #fff;
+ background-color: #0097cf;
+}
+
+.tt-suggestion.tt-cursor {
+ color: #fff;
+ background-color: #0097cf;
+
+}
+
+.tt-suggestion p {
+ margin: 0;
+}
+
+#searchKeyword {
+ width: 100%;
+ margin: 0;
+}
+
+#searchKeyword .tt-menu {
+ max-height: 300px;
+ overflow-y: auto;
+}
+
+/* ------------------------------- */
+
+/* Misc */
+
+/* ------------------------------- */
+
+.well-small ul {
+ padding: 0px;
+}
+.table th,
+.table td.pagination-centered {
+ text-align: center;
+}
+
+pre {
+ overflow: visible; /* Workaround for dropdown menus */
+}
+
+pre.text {
+ padding: 0;
+ font-size: 13px;
+ color: #000;
+ background: transparent;
+ border: none;
+ margin-bottom: 18px;
+}
+pre.arguments {
+ font-size: 13px;
+ color: #000;
+ background: transparent;
+}
+
+.comment {
+ color: #888;
+}
+small, .small {
+ color: #888;
+}
+.level1 {
+ font-size: 125%;
+}
+.sublevels {
+ border-left: 1px solid #ccc;
+ padding-left: 10px;
+}
+.tab {
+ padding-left: 20px;
+}
+.keyword {
+ font-family: Menlo, Monaco, "Courier New", monospace;
+ white-space: pre;
+ background: #eee;
+ border-top: 1px solid #fff;
+ border-bottom: 1px solid #ccc;
+}
+
+.label-see-also {
+ background-color: #999;
+}
+.label-disabled {
+ background-color: #ccc;
+}
+h5 {
+ text-decoration: underline;
+}
+
+.example-desc {
+ border-bottom: 1px solid #ccc;
+ margin-bottom: 18px;
+}
+.noheight {
+ min-height: 0 !important;
+}
+.separator {
+ margin-bottom: 18px;
+}
+
+div {
+ word-wrap: break-word;
+}
+
+html, body {
+ width: 100%;
+ min-height: 100%;
+}
+
+.dropdown-menu > li {
+ white-space: nowrap;
+}
+/* TEMPORARY HACKS WHILE PRE TAGS ARE USED
+-------------------------------------------------- */
+
+h5,
+.unpre,
+.example-desc,
+.dropdown-menu {
+ font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
+ white-space: normal;
+}
--- /dev/null
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+# Copyright 2012 Cyril Bonté
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+'''
+TODO : ability to split chapters into several files
+TODO : manage keyword locality (server/proxy/global ; ex : maxconn)
+TODO : Remove global variables where possible
+'''
+import os
+import subprocess
+import sys
+import cgi
+import re
+import time
+import datetime
+
+from optparse import OptionParser
+
+from mako.template import Template
+from mako.lookup import TemplateLookup
+from mako.exceptions import TopLevelLookupException
+
+from parser import PContext
+from parser import remove_indent
+from parser import *
+
+from urllib import quote
+
+VERSION = ""
+HAPROXY_GIT_VERSION = False
+
+def main():
+ global VERSION, HAPROXY_GIT_VERSION
+
+ usage = "%prog --infile <infile> --outfile <outfile>"
+
+ optparser = OptionParser(description='Generate HTML document from HAProxy configuration.txt',
+ version=VERSION,
+ usage=usage)
+ optparser.add_option('--infile', '-i', help='Input file mostly the configuration.txt')
+ optparser.add_option('--outfile','-o', help='Output file')
+ optparser.add_option('--base','-b', default = '', help='Base directory for relative links')
+ (option, args) = optparser.parse_args()
+
+ if not (option.infile and option.outfile) or len(args) > 0:
+ optparser.print_help()
+ exit(1)
+
+ option.infile = os.path.abspath(option.infile)
+ option.outfile = os.path.abspath(option.outfile)
+
+ os.chdir(os.path.dirname(__file__))
+
+ VERSION = get_git_version()
+ if not VERSION:
+ sys.exit(1)
+
+ HAPROXY_GIT_VERSION = get_haproxy_git_version(os.path.dirname(option.infile))
+
+ convert(option.infile, option.outfile, option.base)
+
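For reference, the option handling in main() can be exercised standalone with an explicit argument list (the file names here are hypothetical; optparse is in the standard library and this sketch also runs under Python 3):

```python
from optparse import OptionParser

# Same option set as main() above, reproduced standalone.
optparser = OptionParser(usage="%prog --infile <infile> --outfile <outfile>")
optparser.add_option('--infile', '-i', help='Input file, typically configuration.txt')
optparser.add_option('--outfile', '-o', help='Output file')
optparser.add_option('--base', '-b', default='', help='Base directory for relative links')

# Parse an explicit argument list instead of sys.argv.
(option, args) = optparser.parse_args(['-i', 'configuration.txt', '-o', 'configuration.html'])
print(option.infile, option.outfile)
```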
+
+# Temporarily determine the version from git to follow which commit generated
+# the documentation
+def get_git_version():
+ if not os.path.isdir(".git"):
+ print >> sys.stderr, "This does not appear to be a Git repository."
+ return
+ try:
+ p = subprocess.Popen(["git", "describe", "--tags", "--match", "v*"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ except EnvironmentError:
+ print >> sys.stderr, "Unable to run git"
+ return
+ version = p.communicate()[0]
+ if p.returncode != 0:
+ print >> sys.stderr, "Unable to run git"
+ return
+
+ if len(version) < 2:
+ return
+
+ version = version[1:].strip()
+ version = re.sub(r'-g.*', '', version)
+ return version
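The cleanup applied to the `git describe` output above (drop the leading "v", strip the "-g<commit>" suffix) can be sketched in isolation; the tag and commit hash below are made up:

```python
import re

def clean_git_describe(version):
    # Drop the leading "v" and surrounding whitespace, then strip any
    # "-g<commit>" suffix, mirroring get_git_version() above.
    version = version[1:].strip()
    return re.sub(r'-g.*', '', version)

print(clean_git_describe("v1.5-dev19-43-g05ab3a0\n"))
```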
+
+def get_haproxy_git_version(path):
+ try:
+ p = subprocess.Popen(["git", "describe", "--tags", "--match", "v*"], cwd=path, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ except EnvironmentError:
+ return False
+ version = p.communicate()[0]
+
+ if p.returncode != 0:
+ return False
+
+ if len(version) < 2:
+ return False
+
+ version = version[1:].strip()
+ version = re.sub(r'-g.*', '', version)
+ return version
+
+def getTitleDetails(string):
+ array = string.split(".")
+
+ title = array.pop().strip()
+ chapter = ".".join(array)
+ level = max(1, len(array))
+ if array:
+ toplevel = array[0]
+ else:
+ toplevel = False
+
+ return {
+ "title" : title,
+ "chapter" : chapter,
+ "level" : level,
+ "toplevel": toplevel
+ }
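To make the splitting rule concrete, here is the same logic run standalone on a typical numbered heading (a sketch; the section title is invented):

```python
def get_title_details(string):
    # Standalone copy of getTitleDetails() above: everything after the last
    # dot is the title, the rest is the chapter number.
    array = string.split(".")
    title = array.pop().strip()
    chapter = ".".join(array)
    level = max(1, len(array))
    toplevel = array[0] if array else False
    return {"title": title, "chapter": chapter, "level": level, "toplevel": toplevel}

details = get_title_details("4.2. Alphabetically sorted keywords reference")
print(details)
```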
+
+# Parse the whole document to insert links on keywords
+def createLinks():
+ global document, keywords, keywordsCount, keyword_conflicts, chapters
+
+ print >> sys.stderr, "Generating keywords links..."
+
+ delimiters = [
+ dict(start='"', end='"', multi=True),
+ dict(start='- ', end='\n', multi=False),
+ ]
+
+ for keyword in keywords:
+ keywordsCount[keyword] = 0
+ for delimiter in delimiters:
+ keywordsCount[keyword] += document.count(delimiter['start'] + keyword + delimiter['end'])
+ if (keyword in keyword_conflicts) and (not keywordsCount[keyword]):
+ # The keyword is never used, we can remove it from the conflicts list
+ del keyword_conflicts[keyword]
+
+ if keyword in keyword_conflicts:
+ chapter_list = ""
+ for chapter in keyword_conflicts[keyword]:
+ chapter_list += '<li><a href="#%s">%s</a></li>' % (quote("%s (%s)" % (keyword, chapters[chapter]['title'])), chapters[chapter]['title'])
+ for delimiter in delimiters:
+ if delimiter['multi']:
+ document = document.replace(delimiter['start'] + keyword + delimiter['end'],
+ delimiter['start'] + '<span class="dropdown">' +
+ '<a class="dropdown-toggle" data-toggle="dropdown" href="#">' +
+ keyword +
+ '<span class="caret"></span>' +
+ '</a>' +
+ '<ul class="dropdown-menu">' +
+ '<li class="dropdown-header">This keyword is available in sections :</li>' +
+ chapter_list +
+ '</ul>' +
+ '</span>' + delimiter['end'])
+ else:
+ document = document.replace(delimiter['start'] + keyword + delimiter['end'], delimiter['start'] + '<a href="#' + quote(keyword) + '">' + keyword + '</a>' + delimiter['end'])
+ else:
+ for delimiter in delimiters:
+ document = document.replace(delimiter['start'] + keyword + delimiter['end'], delimiter['start'] + '<a href="#' + quote(keyword) + '">' + keyword + '</a>' + delimiter['end'])
+ if keyword.startswith("option "):
+ shortKeyword = keyword[len("option "):]
+ keywordsCount[shortKeyword] = 0
+ for delimiter in delimiters:
+ keywordsCount[shortKeyword] += document.count(delimiter['start'] + shortKeyword + delimiter['end'])
+ if (shortKeyword in keyword_conflicts) and (not keywordsCount[shortKeyword]):
+ # The keyword is never used, we can remove it from the conflicts list
+ del keyword_conflicts[shortKeyword]
+ for delimiter in delimiters:
+ document = document.replace(delimiter['start'] + shortKeyword + delimiter['end'], delimiter['start'] + '<a href="#' + quote(keyword) + '">' + shortKeyword + '</a>' + delimiter['end'])
+
+def documentAppend(text, retline = True):
+ global document
+ document += text
+ if retline:
+ document += "\n"
+
+def init_parsers(pctxt):
+ return [
+ underline.Parser(pctxt),
+ arguments.Parser(pctxt),
+ seealso.Parser(pctxt),
+ example.Parser(pctxt),
+ table.Parser(pctxt),
+ keyword.Parser(pctxt),
+ ]
+
+# The parser itself
+def convert(infile, outfile, base=''):
+ global document, keywords, keywordsCount, chapters, keyword_conflicts
+
+ if len(base) > 0 and base[-1] != '/':
+ base += '/'
+
+ hasSummary = False
+
+ data = []
+ fd = open(infile, "r")
+ for line in fd:
+ line = line.replace("\t", " " * 8)
+ line = line.rstrip()
+ data.append(line)
+ fd.close()
+
+ pctxt = PContext(
+ TemplateLookup(
+ directories=[
+ 'templates'
+ ]
+ )
+ )
+
+ parsers = init_parsers(pctxt)
+
+ pctxt.context = {
+ 'headers': {},
+ 'document': "",
+ 'base': base,
+ }
+
+ sections = []
+ currentSection = {
+ "details": getTitleDetails(""),
+ "content": "",
+ }
+
+ chapters = {}
+
+ keywords = {}
+ keywordsCount = {}
+
+ specialSections = {
+ "default": {
+ "hasKeywords": True,
+ },
+ "4.1": {
+ "hasKeywords": True,
+ },
+ }
+
+ pctxt.keywords = keywords
+ pctxt.keywordsCount = keywordsCount
+ pctxt.chapters = chapters
+
+ print >> sys.stderr, "Importing %s..." % infile
+
+ nblines = len(data)
+ i = j = 0
+ while i < nblines:
+ line = data[i].rstrip()
+ if i < nblines - 1:
+ next = data[i + 1].rstrip()
+ else:
+ next = ""
+ if (line == "Summary" or re.match("^[0-9].*", line)) and (len(next) > 0) and (next[0] == '-') \
+ and ("-" * len(line)).startswith(next): # Fuzzy underline length detection
+ sections.append(currentSection)
+ currentSection = {
+ "details": getTitleDetails(line),
+ "content": "",
+ }
+ j = 0
+ i += 1 # Skip underline
+ while i + 1 < nblines and not data[i + 1].rstrip():
+ i += 1 # Skip empty lines
+
+ else:
+ if len(line) > 80:
+ print >> sys.stderr, "Line `%i' exceeds 80 columns" % (i + 1)
+
+ currentSection["content"] = currentSection["content"] + line + "\n"
+ j += 1
+ if currentSection["details"]["title"] == "Summary" and line != "":
+ hasSummary = True
+ # Learn chapters from the summary
+ details = getTitleDetails(line)
+ if details["chapter"]:
+ chapters[details["chapter"]] = details
+ i += 1
+ sections.append(currentSection)
+
+ chapterIndexes = sorted(chapters.keys())
+
+ document = ""
+
+ # Complete the summary
+ for section in sections:
+ details = section["details"]
+ title = details["title"]
+ if title:
+ fulltitle = title
+ if details["chapter"]:
+ #documentAppend("<a name=\"%s\"></a>" % details["chapter"])
+ fulltitle = details["chapter"] + ". " + title
+ if details["chapter"] not in chapters:
+ print >> sys.stderr, "Adding '%s' to the summary" % details["title"]
+ chapters[details["chapter"]] = details
+ chapterIndexes = sorted(chapters.keys())
+
+ for section in sections:
+ details = section["details"]
+ pctxt.details = details
+ level = details["level"]
+ title = details["title"]
+ content = section["content"].rstrip()
+
+ print >> sys.stderr, "Parsing chapter %s..." % title
+
+ if (title == "Summary") or (title and not hasSummary):
+ summaryTemplate = pctxt.templates.get_template('summary.html')
+ documentAppend(summaryTemplate.render(
+ pctxt = pctxt,
+ chapters = chapters,
+ chapterIndexes = chapterIndexes,
+ ))
+ if title and not hasSummary:
+ hasSummary = True
+ else:
+ continue
+
+ if title:
+ documentAppend('<a class="anchor" id="%s" name="%s"></a>' % (details["chapter"], details["chapter"]))
+ if level == 1:
+ documentAppend("<div class=\"page-header\">", False)
+ documentAppend('<h%d id="chapter-%s" data-target="%s"><small><a class="small" href="#%s">%s.</a></small> %s</h%d>' % (level, details["chapter"], details["chapter"], details["chapter"], details["chapter"], cgi.escape(title, True), level))
+ if level == 1:
+ documentAppend("</div>", False)
+
+ if content:
+ if False and title:
+ # Display a navigation bar
+ documentAppend('<ul class="well pager">')
+ documentAppend('<li><a href="#top">Top</a></li>', False)
+ index = chapterIndexes.index(details["chapter"])
+ if index > 0:
+ documentAppend('<li class="previous"><a href="#%s">Previous</a></li>' % chapterIndexes[index - 1], False)
+ if index < len(chapterIndexes) - 1:
+ documentAppend('<li class="next"><a href="#%s">Next</a></li>' % chapterIndexes[index + 1], False)
+ documentAppend('</ul>', False)
+ content = cgi.escape(content, True)
+ content = re.sub(r'section ([0-9]+(\.[0-9]+)*)', r'<a href="#\1">section \1</a>', content)
+
+ pctxt.set_content(content)
+
+ if not title:
+ lines = pctxt.get_lines()
+ pctxt.context['headers'] = {
+ 'title': '',
+ 'subtitle': '',
+ 'version': '',
+ 'author': '',
+ 'date': ''
+ }
+ if re.match("^-+$", pctxt.get_line().strip()):
+ # Try to analyze the header of the file, assuming it follows
+ # those rules :
+ # - it begins with a "separator line" (several '-' chars)
+ # - then the document title
+ # - an optional subtitle
+ # - a new separator line
+ # - the version
+ # - the author
+ # - the date
+ pctxt.next()
+ pctxt.context['headers']['title'] = pctxt.get_line().strip()
+ pctxt.next()
+ subtitle = ""
+ while not re.match("^-+$", pctxt.get_line().strip()):
+ subtitle += " " + pctxt.get_line().strip()
+ pctxt.next()
+ pctxt.context['headers']['subtitle'] += subtitle.strip()
+ if not pctxt.context['headers']['subtitle']:
+ # No subtitle, try to guess one from the title if it
+ # starts with the word "HAProxy"
+ if pctxt.context['headers']['title'].startswith('HAProxy '):
+ pctxt.context['headers']['subtitle'] = pctxt.context['headers']['title'][8:]
+ pctxt.context['headers']['title'] = 'HAProxy'
+ pctxt.next()
+ pctxt.context['headers']['version'] = pctxt.get_line().strip()
+ pctxt.next()
+ pctxt.context['headers']['author'] = pctxt.get_line().strip()
+ pctxt.next()
+ pctxt.context['headers']['date'] = pctxt.get_line().strip()
+ pctxt.next()
+ if HAPROXY_GIT_VERSION:
+ pctxt.context['headers']['version'] = 'version ' + HAPROXY_GIT_VERSION
+
+ # Skip header lines
+ pctxt.eat_lines()
+ pctxt.eat_empty_lines()
+
+ documentAppend('<div>', False)
+
+ delay = []
+ while pctxt.has_more_lines():
+ try:
+ specialSection = specialSections[details["chapter"]]
+ except KeyError:
+ specialSection = specialSections["default"]
+
+ line = pctxt.get_line()
+ if i < nblines - 1:
+ nextline = pctxt.get_line(1)
+ else:
+ nextline = ""
+
+ oldline = line
+ pctxt.stop = False
+ for parser in parsers:
+ line = parser.parse(line)
+ if pctxt.stop:
+ break
+ if oldline == line:
+ # nothing has changed,
+ # delays the rendering
+ if delay or line != "":
+ delay.append(line)
+ pctxt.next()
+ elif pctxt.stop:
+ while delay and delay[-1].strip() == "":
+ del delay[-1]
+ if delay:
+ remove_indent(delay)
+ documentAppend('<pre class="text">%s\n</pre>' % "\n".join(delay), False)
+ delay = []
+ documentAppend(line, False)
+ else:
+ while delay and delay[-1].strip() == "":
+ del delay[-1]
+ if delay:
+ remove_indent(delay)
+ documentAppend('<pre class="text">%s\n</pre>' % "\n".join(delay), False)
+ delay = []
+ documentAppend(line, True)
+ pctxt.next()
+
+ while delay and delay[-1].strip() == "":
+ del delay[-1]
+ if delay:
+ remove_indent(delay)
+ documentAppend('<pre class="text">%s\n</pre>' % "\n".join(delay), False)
+ delay = []
+ documentAppend('</div>')
+
+ if not hasSummary:
+ summaryTemplate = pctxt.templates.get_template('summary.html')
+ print >> sys.stderr, chapters
+ document = summaryTemplate.render(
+ pctxt = pctxt,
+ chapters = chapters,
+ chapterIndexes = chapterIndexes,
+ ) + document
+
+
+ # Log warnings for keywords defined in several chapters
+ keyword_conflicts = {}
+ for keyword in keywords:
+ keyword_chapters = list(keywords[keyword])
+ keyword_chapters.sort()
+ if len(keyword_chapters) > 1:
+ print >> sys.stderr, 'Multi section keyword : "%s" in chapters %s' % (keyword, list(keyword_chapters))
+ keyword_conflicts[keyword] = keyword_chapters
+
+ keywords = list(keywords)
+ keywords.sort()
+
+ createLinks()
+
+ # Add the keywords conflicts to the keywords list to make them available in the search form
+ # And remove the original keyword which is now useless
+ for keyword in keyword_conflicts:
+ sections = keyword_conflicts[keyword]
+ offset = keywords.index(keyword)
+ for section in sections:
+ keywords.insert(offset, "%s (%s)" % (keyword, chapters[section]['title']))
+ offset += 1
+ keywords.remove(keyword)
+
+ print >> sys.stderr, "Exporting to %s..." % outfile
+
+ template = pctxt.templates.get_template('template.html')
+ try:
+ footerTemplate = pctxt.templates.get_template('footer.html')
+ footer = footerTemplate.render(
+ pctxt = pctxt,
+ headers = pctxt.context['headers'],
+ document = document,
+ chapters = chapters,
+ chapterIndexes = chapterIndexes,
+ keywords = keywords,
+ keywordsCount = keywordsCount,
+ keyword_conflicts = keyword_conflicts,
+ version = VERSION,
+ date = datetime.datetime.now().strftime("%Y/%m/%d"),
+ )
+ except TopLevelLookupException:
+ footer = ""
+
+ fd = open(outfile,'w')
+
+ print >> fd, template.render(
+ pctxt = pctxt,
+ headers = pctxt.context['headers'],
+ base = base,
+ document = document,
+ chapters = chapters,
+ chapterIndexes = chapterIndexes,
+ keywords = keywords,
+ keywordsCount = keywordsCount,
+ keyword_conflicts = keyword_conflicts,
+ version = VERSION,
+ date = datetime.datetime.now().strftime("%Y/%m/%d"),
+ footer = footer
+ )
+ fd.close()
+
+if __name__ == '__main__':
+ main()
--- /dev/null
+/*!
+ * typeahead.js 0.11.1
+ * https://github.com/twitter/typeahead.js
+ * Copyright 2013-2015 Twitter, Inc. and other contributors; Licensed MIT
+ */
+
+(function(root, factory) {
+ if (typeof define === "function" && define.amd) {
+ define("bloodhound", [ "jquery" ], function(a0) {
+ return root["Bloodhound"] = factory(a0);
+ });
+ } else if (typeof exports === "object") {
+ module.exports = factory(require("jquery"));
+ } else {
+ root["Bloodhound"] = factory(jQuery);
+ }
+})(this, function($) {
+ var _ = function() {
+ "use strict";
+ return {
+ isMsie: function() {
+ return /(msie|trident)/i.test(navigator.userAgent) ? navigator.userAgent.match(/(msie |rv:)(\d+(.\d+)?)/i)[2] : false;
+ },
+ isBlankString: function(str) {
+ return !str || /^\s*$/.test(str);
+ },
+ escapeRegExChars: function(str) {
+ return str.replace(/[\-\[\]\/\{\}\(\)\*\+\?\.\\\^\$\|]/g, "\\$&");
+ },
+ isString: function(obj) {
+ return typeof obj === "string";
+ },
+ isNumber: function(obj) {
+ return typeof obj === "number";
+ },
+ isArray: $.isArray,
+ isFunction: $.isFunction,
+ isObject: $.isPlainObject,
+ isUndefined: function(obj) {
+ return typeof obj === "undefined";
+ },
+ isElement: function(obj) {
+ return !!(obj && obj.nodeType === 1);
+ },
+ isJQuery: function(obj) {
+ return obj instanceof $;
+ },
+ toStr: function toStr(s) {
+ return _.isUndefined(s) || s === null ? "" : s + "";
+ },
+ bind: $.proxy,
+ each: function(collection, cb) {
+ $.each(collection, reverseArgs);
+ function reverseArgs(index, value) {
+ return cb(value, index);
+ }
+ },
+ map: $.map,
+ filter: $.grep,
+ every: function(obj, test) {
+ var result = true;
+ if (!obj) {
+ return result;
+ }
+ $.each(obj, function(key, val) {
+ if (!(result = test.call(null, val, key, obj))) {
+ return false;
+ }
+ });
+ return !!result;
+ },
+ some: function(obj, test) {
+ var result = false;
+ if (!obj) {
+ return result;
+ }
+ $.each(obj, function(key, val) {
+ if (result = test.call(null, val, key, obj)) {
+ return false;
+ }
+ });
+ return !!result;
+ },
+ mixin: $.extend,
+ identity: function(x) {
+ return x;
+ },
+ clone: function(obj) {
+ return $.extend(true, {}, obj);
+ },
+ getIdGenerator: function() {
+ var counter = 0;
+ return function() {
+ return counter++;
+ };
+ },
+ templatify: function templatify(obj) {
+ return $.isFunction(obj) ? obj : template;
+ function template() {
+ return String(obj);
+ }
+ },
+ defer: function(fn) {
+ setTimeout(fn, 0);
+ },
+ debounce: function(func, wait, immediate) {
+ var timeout, result;
+ return function() {
+ var context = this, args = arguments, later, callNow;
+ later = function() {
+ timeout = null;
+ if (!immediate) {
+ result = func.apply(context, args);
+ }
+ };
+ callNow = immediate && !timeout;
+ clearTimeout(timeout);
+ timeout = setTimeout(later, wait);
+ if (callNow) {
+ result = func.apply(context, args);
+ }
+ return result;
+ };
+ },
+ throttle: function(func, wait) {
+ var context, args, timeout, result, previous, later;
+ previous = 0;
+ later = function() {
+ previous = new Date();
+ timeout = null;
+ result = func.apply(context, args);
+ };
+ return function() {
+ var now = new Date(), remaining = wait - (now - previous);
+ context = this;
+ args = arguments;
+ if (remaining <= 0) {
+ clearTimeout(timeout);
+ timeout = null;
+ previous = now;
+ result = func.apply(context, args);
+ } else if (!timeout) {
+ timeout = setTimeout(later, remaining);
+ }
+ return result;
+ };
+ },
+ stringify: function(val) {
+ return _.isString(val) ? val : JSON.stringify(val);
+ },
+ noop: function() {}
+ };
+ }();
+ var VERSION = "0.11.1";
+ var tokenizers = function() {
+ "use strict";
+ return {
+ nonword: nonword,
+ whitespace: whitespace,
+ obj: {
+ nonword: getObjTokenizer(nonword),
+ whitespace: getObjTokenizer(whitespace)
+ }
+ };
+ function whitespace(str) {
+ str = _.toStr(str);
+ return str ? str.split(/\s+/) : [];
+ }
+ function nonword(str) {
+ str = _.toStr(str);
+ return str ? str.split(/\W+/) : [];
+ }
+ function getObjTokenizer(tokenizer) {
+ return function setKey(keys) {
+ keys = _.isArray(keys) ? keys : [].slice.call(arguments, 0);
+ return function tokenize(o) {
+ var tokens = [];
+ _.each(keys, function(k) {
+ tokens = tokens.concat(tokenizer(_.toStr(o[k])));
+ });
+ return tokens;
+ };
+ };
+ }
+ }();
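The two tokenizers behave roughly like the following Python sketch (an approximation: the JS versions split on regular expressions, so the non-word tokenizer drops punctuation such as hyphens):

```python
import re

def whitespace_tokens(s):
    # Split on runs of whitespace, like tokenizers.whitespace.
    s = "" if s is None else str(s)
    return s.split() if s else []

def nonword_tokens(s):
    # Split on runs of non-word characters, like tokenizers.nonword.
    s = "" if s is None else str(s)
    return [t for t in re.split(r'\W+', s) if t] if s else []

print(whitespace_tokens("timeout http-request"))
print(nonword_tokens("timeout http-request"))
```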
+ var LruCache = function() {
+ "use strict";
+ function LruCache(maxSize) {
+ this.maxSize = _.isNumber(maxSize) ? maxSize : 100;
+ this.reset();
+ if (this.maxSize <= 0) {
+ this.set = this.get = $.noop;
+ }
+ }
+ _.mixin(LruCache.prototype, {
+ set: function set(key, val) {
+ var tailItem = this.list.tail, node;
+ if (this.size >= this.maxSize) {
+ this.list.remove(tailItem);
+ delete this.hash[tailItem.key];
+ this.size--;
+ }
+ if (node = this.hash[key]) {
+ node.val = val;
+ this.list.moveToFront(node);
+ } else {
+ node = new Node(key, val);
+ this.list.add(node);
+ this.hash[key] = node;
+ this.size++;
+ }
+ },
+ get: function get(key) {
+ var node = this.hash[key];
+ if (node) {
+ this.list.moveToFront(node);
+ return node.val;
+ }
+ },
+ reset: function reset() {
+ this.size = 0;
+ this.hash = {};
+ this.list = new List();
+ }
+ });
+ function List() {
+ this.head = this.tail = null;
+ }
+ _.mixin(List.prototype, {
+ add: function add(node) {
+ if (this.head) {
+ node.next = this.head;
+ this.head.prev = node;
+ }
+ this.head = node;
+ this.tail = this.tail || node;
+ },
+ remove: function remove(node) {
+ node.prev ? node.prev.next = node.next : this.head = node.next;
+ node.next ? node.next.prev = node.prev : this.tail = node.prev;
+ },
+ moveToFront: function(node) {
+ this.remove(node);
+ this.add(node);
+ }
+ });
+ function Node(key, val) {
+ this.key = key;
+ this.val = val;
+ this.prev = this.next = null;
+ }
+ return LruCache;
+ }();
+ var PersistentStorage = function() {
+ "use strict";
+ var LOCAL_STORAGE;
+ try {
+ LOCAL_STORAGE = window.localStorage;
+ LOCAL_STORAGE.setItem("~~~", "!");
+ LOCAL_STORAGE.removeItem("~~~");
+ } catch (err) {
+ LOCAL_STORAGE = null;
+ }
+ function PersistentStorage(namespace, override) {
+ this.prefix = [ "__", namespace, "__" ].join("");
+ this.ttlKey = "__ttl__";
+ this.keyMatcher = new RegExp("^" + _.escapeRegExChars(this.prefix));
+ this.ls = override || LOCAL_STORAGE;
+ !this.ls && this._noop();
+ }
+ _.mixin(PersistentStorage.prototype, {
+ _prefix: function(key) {
+ return this.prefix + key;
+ },
+ _ttlKey: function(key) {
+ return this._prefix(key) + this.ttlKey;
+ },
+ _noop: function() {
+ this.get = this.set = this.remove = this.clear = this.isExpired = _.noop;
+ },
+ _safeSet: function(key, val) {
+ try {
+ this.ls.setItem(key, val);
+ } catch (err) {
+ if (err.name === "QuotaExceededError") {
+ this.clear();
+ this._noop();
+ }
+ }
+ },
+ get: function(key) {
+ if (this.isExpired(key)) {
+ this.remove(key);
+ }
+ return decode(this.ls.getItem(this._prefix(key)));
+ },
+ set: function(key, val, ttl) {
+ if (_.isNumber(ttl)) {
+ this._safeSet(this._ttlKey(key), encode(now() + ttl));
+ } else {
+ this.ls.removeItem(this._ttlKey(key));
+ }
+ return this._safeSet(this._prefix(key), encode(val));
+ },
+ remove: function(key) {
+ this.ls.removeItem(this._ttlKey(key));
+ this.ls.removeItem(this._prefix(key));
+ return this;
+ },
+ clear: function() {
+ var i, keys = gatherMatchingKeys(this.keyMatcher);
+ for (i = keys.length; i--; ) {
+ this.remove(keys[i]);
+ }
+ return this;
+ },
+ isExpired: function(key) {
+ var ttl = decode(this.ls.getItem(this._ttlKey(key)));
+ return _.isNumber(ttl) && now() > ttl ? true : false;
+ }
+ });
+ return PersistentStorage;
+ function now() {
+ return new Date().getTime();
+ }
+ function encode(val) {
+ return JSON.stringify(_.isUndefined(val) ? null : val);
+ }
+ function decode(val) {
+ return $.parseJSON(val);
+ }
+ function gatherMatchingKeys(keyMatcher) {
+ var i, key, keys = [], len = LOCAL_STORAGE.length;
+ for (i = 0; i < len; i++) {
+ if ((key = LOCAL_STORAGE.key(i)).match(keyMatcher)) {
+ keys.push(key.replace(keyMatcher, ""));
+ }
+ }
+ return keys;
+ }
+ }();
+ var Transport = function() {
+ "use strict";
+ var pendingRequestsCount = 0, pendingRequests = {}, maxPendingRequests = 6, sharedCache = new LruCache(10);
+ function Transport(o) {
+ o = o || {};
+ this.cancelled = false;
+ this.lastReq = null;
+ this._send = o.transport;
+ this._get = o.limiter ? o.limiter(this._get) : this._get;
+ this._cache = o.cache === false ? new LruCache(0) : sharedCache;
+ }
+ Transport.setMaxPendingRequests = function setMaxPendingRequests(num) {
+ maxPendingRequests = num;
+ };
+ Transport.resetCache = function resetCache() {
+ sharedCache.reset();
+ };
+ _.mixin(Transport.prototype, {
+ _fingerprint: function fingerprint(o) {
+ o = o || {};
+ return o.url + o.type + $.param(o.data || {});
+ },
+ _get: function(o, cb) {
+ var that = this, fingerprint, jqXhr;
+ fingerprint = this._fingerprint(o);
+ if (this.cancelled || fingerprint !== this.lastReq) {
+ return;
+ }
+ if (jqXhr = pendingRequests[fingerprint]) {
+ jqXhr.done(done).fail(fail);
+ } else if (pendingRequestsCount < maxPendingRequests) {
+ pendingRequestsCount++;
+ pendingRequests[fingerprint] = this._send(o).done(done).fail(fail).always(always);
+ } else {
+ this.onDeckRequestArgs = [].slice.call(arguments, 0);
+ }
+ function done(resp) {
+ cb(null, resp);
+ that._cache.set(fingerprint, resp);
+ }
+ function fail() {
+ cb(true);
+ }
+ function always() {
+ pendingRequestsCount--;
+ delete pendingRequests[fingerprint];
+ if (that.onDeckRequestArgs) {
+ that._get.apply(that, that.onDeckRequestArgs);
+ that.onDeckRequestArgs = null;
+ }
+ }
+ },
+ get: function(o, cb) {
+ var resp, fingerprint;
+ cb = cb || $.noop;
+ o = _.isString(o) ? {
+ url: o
+ } : o || {};
+ fingerprint = this._fingerprint(o);
+ this.cancelled = false;
+ this.lastReq = fingerprint;
+ if (resp = this._cache.get(fingerprint)) {
+ cb(null, resp);
+ } else {
+ this._get(o, cb);
+ }
+ },
+ cancel: function() {
+ this.cancelled = true;
+ }
+ });
+ return Transport;
+ }();
+ var SearchIndex = window.SearchIndex = function() {
+ "use strict";
+ var CHILDREN = "c", IDS = "i";
+ function SearchIndex(o) {
+ o = o || {};
+ if (!o.datumTokenizer || !o.queryTokenizer) {
+ $.error("datumTokenizer and queryTokenizer are both required");
+ }
+ this.identify = o.identify || _.stringify;
+ this.datumTokenizer = o.datumTokenizer;
+ this.queryTokenizer = o.queryTokenizer;
+ this.reset();
+ }
+ _.mixin(SearchIndex.prototype, {
+ bootstrap: function bootstrap(o) {
+ this.datums = o.datums;
+ this.trie = o.trie;
+ },
+ add: function(data) {
+ var that = this;
+ data = _.isArray(data) ? data : [ data ];
+ _.each(data, function(datum) {
+ var id, tokens;
+ that.datums[id = that.identify(datum)] = datum;
+ tokens = normalizeTokens(that.datumTokenizer(datum));
+ _.each(tokens, function(token) {
+ var node, chars, ch;
+ node = that.trie;
+ chars = token.split("");
+ while (ch = chars.shift()) {
+ node = node[CHILDREN][ch] || (node[CHILDREN][ch] = newNode());
+ node[IDS].push(id);
+ }
+ });
+ });
+ },
+ get: function get(ids) {
+ var that = this;
+ return _.map(ids, function(id) {
+ return that.datums[id];
+ });
+ },
+ search: function search(query) {
+ var that = this, tokens, matches;
+ tokens = normalizeTokens(this.queryTokenizer(query));
+ _.each(tokens, function(token) {
+ var node, chars, ch, ids;
+ if (matches && matches.length === 0) {
+ return false;
+ }
+ node = that.trie;
+ chars = token.split("");
+ while (node && (ch = chars.shift())) {
+ node = node[CHILDREN][ch];
+ }
+ if (node && chars.length === 0) {
+ ids = node[IDS].slice(0);
+ matches = matches ? getIntersection(matches, ids) : ids;
+ } else {
+ matches = [];
+ return false;
+ }
+ });
+ return matches ? _.map(unique(matches), function(id) {
+ return that.datums[id];
+ }) : [];
+ },
+ all: function all() {
+ var values = [];
+ for (var key in this.datums) {
+ values.push(this.datums[key]);
+ }
+ return values;
+ },
+ reset: function reset() {
+ this.datums = {};
+ this.trie = newNode();
+ },
+ serialize: function serialize() {
+ return {
+ datums: this.datums,
+ trie: this.trie
+ };
+ }
+ });
+ return SearchIndex;
+ function normalizeTokens(tokens) {
+ tokens = _.filter(tokens, function(token) {
+ return !!token;
+ });
+ tokens = _.map(tokens, function(token) {
+ return token.toLowerCase();
+ });
+ return tokens;
+ }
+ function newNode() {
+ var node = {};
+ node[IDS] = [];
+ node[CHILDREN] = {};
+ return node;
+ }
+ function unique(array) {
+ var seen = {}, uniques = [];
+ for (var i = 0, len = array.length; i < len; i++) {
+ if (!seen[array[i]]) {
+ seen[array[i]] = true;
+ uniques.push(array[i]);
+ }
+ }
+ return uniques;
+ }
+ function getIntersection(arrayA, arrayB) {
+ var ai = 0, bi = 0, intersection = [];
+ arrayA = arrayA.sort();
+ arrayB = arrayB.sort();
+ var lenArrayA = arrayA.length, lenArrayB = arrayB.length;
+ while (ai < lenArrayA && bi < lenArrayB) {
+ if (arrayA[ai] < arrayB[bi]) {
+ ai++;
+ } else if (arrayA[ai] > arrayB[bi]) {
+ bi++;
+ } else {
+ intersection.push(arrayA[ai]);
+ ai++;
+ bi++;
+ }
+ }
+ return intersection;
+ }
+ }();
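getIntersection() above is a classic two-pointer merge over two sorted id lists; a standalone sketch (Python sorts numbers numerically here, whereas JavaScript's default `.sort()` compares elements as strings):

```python
def get_intersection(array_a, array_b):
    # Intersect two lists by sorting both and advancing two cursors,
    # as getIntersection() does above.
    array_a, array_b = sorted(array_a), sorted(array_b)
    ai = bi = 0
    intersection = []
    while ai < len(array_a) and bi < len(array_b):
        if array_a[ai] < array_b[bi]:
            ai += 1
        elif array_a[ai] > array_b[bi]:
            bi += 1
        else:
            intersection.append(array_a[ai])
            ai += 1
            bi += 1
    return intersection

print(get_intersection([3, 1, 4, 5], [5, 3, 9]))
```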
+ var Prefetch = function() {
+ "use strict";
+ var keys;
+ keys = {
+ data: "data",
+ protocol: "protocol",
+ thumbprint: "thumbprint"
+ };
+ function Prefetch(o) {
+ this.url = o.url;
+ this.ttl = o.ttl;
+ this.cache = o.cache;
+ this.prepare = o.prepare;
+ this.transform = o.transform;
+ this.transport = o.transport;
+ this.thumbprint = o.thumbprint;
+ this.storage = new PersistentStorage(o.cacheKey);
+ }
+ _.mixin(Prefetch.prototype, {
+ _settings: function settings() {
+ return {
+ url: this.url,
+ type: "GET",
+ dataType: "json"
+ };
+ },
+ store: function store(data) {
+ if (!this.cache) {
+ return;
+ }
+ this.storage.set(keys.data, data, this.ttl);
+ this.storage.set(keys.protocol, location.protocol, this.ttl);
+ this.storage.set(keys.thumbprint, this.thumbprint, this.ttl);
+ },
+ fromCache: function fromCache() {
+ var stored = {}, isExpired;
+ if (!this.cache) {
+ return null;
+ }
+ stored.data = this.storage.get(keys.data);
+ stored.protocol = this.storage.get(keys.protocol);
+ stored.thumbprint = this.storage.get(keys.thumbprint);
+ isExpired = stored.thumbprint !== this.thumbprint || stored.protocol !== location.protocol;
+ return stored.data && !isExpired ? stored.data : null;
+ },
+ fromNetwork: function(cb) {
+ var that = this, settings;
+ if (!cb) {
+ return;
+ }
+ settings = this.prepare(this._settings());
+ this.transport(settings).fail(onError).done(onResponse);
+ function onError() {
+ cb(true);
+ }
+ function onResponse(resp) {
+ cb(null, that.transform(resp));
+ }
+ },
+ clear: function clear() {
+ this.storage.clear();
+ return this;
+ }
+ });
+ return Prefetch;
+ }();
+ var Remote = function() {
+ "use strict";
+ function Remote(o) {
+ this.url = o.url;
+ this.prepare = o.prepare;
+ this.transform = o.transform;
+ this.transport = new Transport({
+ cache: o.cache,
+ limiter: o.limiter,
+ transport: o.transport
+ });
+ }
+ _.mixin(Remote.prototype, {
+ _settings: function settings() {
+ return {
+ url: this.url,
+ type: "GET",
+ dataType: "json"
+ };
+ },
+ get: function get(query, cb) {
+ var that = this, settings;
+ if (!cb) {
+ return;
+ }
+ query = query || "";
+ settings = this.prepare(query, this._settings());
+ return this.transport.get(settings, onResponse);
+ function onResponse(err, resp) {
+ err ? cb([]) : cb(that.transform(resp));
+ }
+ },
+ cancelLastRequest: function cancelLastRequest() {
+ this.transport.cancel();
+ }
+ });
+ return Remote;
+ }();
+ var oParser = function() {
+ "use strict";
+ return function parse(o) {
+ var defaults, sorter;
+ defaults = {
+ initialize: true,
+ identify: _.stringify,
+ datumTokenizer: null,
+ queryTokenizer: null,
+ sufficient: 5,
+ sorter: null,
+ local: [],
+ prefetch: null,
+ remote: null
+ };
+ o = _.mixin(defaults, o || {});
+ !o.datumTokenizer && $.error("datumTokenizer is required");
+ !o.queryTokenizer && $.error("queryTokenizer is required");
+ sorter = o.sorter;
+ o.sorter = sorter ? function(x) {
+ return x.sort(sorter);
+ } : _.identity;
+ o.local = _.isFunction(o.local) ? o.local() : o.local;
+ o.prefetch = parsePrefetch(o.prefetch);
+ o.remote = parseRemote(o.remote);
+ return o;
+ };
+ function parsePrefetch(o) {
+ var defaults;
+ if (!o) {
+ return null;
+ }
+ defaults = {
+ url: null,
+ ttl: 24 * 60 * 60 * 1e3,
+ cache: true,
+ cacheKey: null,
+ thumbprint: "",
+ prepare: _.identity,
+ transform: _.identity,
+ transport: null
+ };
+ o = _.isString(o) ? {
+ url: o
+ } : o;
+ o = _.mixin(defaults, o);
+ !o.url && $.error("prefetch requires url to be set");
+ o.transform = o.filter || o.transform;
+ o.cacheKey = o.cacheKey || o.url;
+ o.thumbprint = VERSION + o.thumbprint;
+ o.transport = o.transport ? callbackToDeferred(o.transport) : $.ajax;
+ return o;
+ }
+ function parseRemote(o) {
+ var defaults;
+ if (!o) {
+ return;
+ }
+ defaults = {
+ url: null,
+ cache: true,
+ prepare: null,
+ replace: null,
+ wildcard: null,
+ limiter: null,
+ rateLimitBy: "debounce",
+ rateLimitWait: 300,
+ transform: _.identity,
+ transport: null
+ };
+ o = _.isString(o) ? {
+ url: o
+ } : o;
+ o = _.mixin(defaults, o);
+ !o.url && $.error("remote requires url to be set");
+ o.transform = o.filter || o.transform;
+ o.prepare = toRemotePrepare(o);
+ o.limiter = toLimiter(o);
+ o.transport = o.transport ? callbackToDeferred(o.transport) : $.ajax;
+ delete o.replace;
+ delete o.wildcard;
+ delete o.rateLimitBy;
+ delete o.rateLimitWait;
+ return o;
+ }
+ function toRemotePrepare(o) {
+ var prepare, replace, wildcard;
+ prepare = o.prepare;
+ replace = o.replace;
+ wildcard = o.wildcard;
+ if (prepare) {
+ return prepare;
+ }
+ if (replace) {
+ prepare = prepareByReplace;
+ } else if (wildcard) {
+ prepare = prepareByWildcard;
+ } else {
+ prepare = identityPrepare;
+ }
+ return prepare;
+ function prepareByReplace(query, settings) {
+ settings.url = replace(settings.url, query);
+ return settings;
+ }
+ function prepareByWildcard(query, settings) {
+ settings.url = settings.url.replace(wildcard, encodeURIComponent(query));
+ return settings;
+ }
+ function identityPrepare(query, settings) {
+ return settings;
+ }
+ }
+ function toLimiter(o) {
+ var limiter, method, wait;
+ limiter = o.limiter;
+ method = o.rateLimitBy;
+ wait = o.rateLimitWait;
+ if (!limiter) {
+ limiter = /^throttle$/i.test(method) ? throttle(wait) : debounce(wait);
+ }
+ return limiter;
+ function debounce(wait) {
+ return function debounce(fn) {
+ return _.debounce(fn, wait);
+ };
+ }
+ function throttle(wait) {
+ return function throttle(fn) {
+ return _.throttle(fn, wait);
+ };
+ }
+ }
+ function callbackToDeferred(fn) {
+ return function wrapper(o) {
+ var deferred = $.Deferred();
+ fn(o, onSuccess, onError);
+ return deferred;
+ function onSuccess(resp) {
+ _.defer(function() {
+ deferred.resolve(resp);
+ });
+ }
+ function onError(err) {
+ _.defer(function() {
+ deferred.reject(err);
+ });
+ }
+ };
+ }
+ }();
+ var Bloodhound = function() {
+ "use strict";
+ var old;
+ old = window && window.Bloodhound;
+ function Bloodhound(o) {
+ o = oParser(o);
+ this.sorter = o.sorter;
+ this.identify = o.identify;
+ this.sufficient = o.sufficient;
+ this.local = o.local;
+ this.remote = o.remote ? new Remote(o.remote) : null;
+ this.prefetch = o.prefetch ? new Prefetch(o.prefetch) : null;
+ this.index = new SearchIndex({
+ identify: this.identify,
+ datumTokenizer: o.datumTokenizer,
+ queryTokenizer: o.queryTokenizer
+ });
+ o.initialize !== false && this.initialize();
+ }
+ Bloodhound.noConflict = function noConflict() {
+ window && (window.Bloodhound = old);
+ return Bloodhound;
+ };
+ Bloodhound.tokenizers = tokenizers;
+ _.mixin(Bloodhound.prototype, {
+ __ttAdapter: function ttAdapter() {
+ var that = this;
+ return this.remote ? withAsync : withoutAsync;
+ function withAsync(query, sync, async) {
+ return that.search(query, sync, async);
+ }
+ function withoutAsync(query, sync) {
+ return that.search(query, sync);
+ }
+ },
+ _loadPrefetch: function loadPrefetch() {
+ var that = this, deferred, serialized;
+ deferred = $.Deferred();
+ if (!this.prefetch) {
+ deferred.resolve();
+ } else if (serialized = this.prefetch.fromCache()) {
+ this.index.bootstrap(serialized);
+ deferred.resolve();
+ } else {
+ this.prefetch.fromNetwork(done);
+ }
+ return deferred.promise();
+ function done(err, data) {
+ if (err) {
+ return deferred.reject();
+ }
+ that.add(data);
+ that.prefetch.store(that.index.serialize());
+ deferred.resolve();
+ }
+ },
+ _initialize: function initialize() {
+ var that = this;
+ this.clear();
+ (this.initPromise = this._loadPrefetch()).done(addLocalToIndex);
+ return this.initPromise;
+ function addLocalToIndex() {
+ that.add(that.local);
+ }
+ },
+ initialize: function initialize(force) {
+ return !this.initPromise || force ? this._initialize() : this.initPromise;
+ },
+ add: function add(data) {
+ this.index.add(data);
+ return this;
+ },
+ get: function get(ids) {
+ ids = _.isArray(ids) ? ids : [].slice.call(arguments);
+ return this.index.get(ids);
+ },
+ search: function search(query, sync, async) {
+ var that = this, local;
+ local = this.sorter(this.index.search(query));
+ sync(this.remote ? local.slice() : local);
+ if (this.remote && local.length < this.sufficient) {
+ this.remote.get(query, processRemote);
+ } else if (this.remote) {
+ this.remote.cancelLastRequest();
+ }
+ return this;
+ function processRemote(remote) {
+ var nonDuplicates = [];
+ _.each(remote, function(r) {
+ !_.some(local, function(l) {
+ return that.identify(r) === that.identify(l);
+ }) && nonDuplicates.push(r);
+ });
+ async && async(nonDuplicates);
+ }
+ },
+ all: function all() {
+ return this.index.all();
+ },
+ clear: function clear() {
+ this.index.reset();
+ return this;
+ },
+ clearPrefetchCache: function clearPrefetchCache() {
+ this.prefetch && this.prefetch.clear();
+ return this;
+ },
+ clearRemoteCache: function clearRemoteCache() {
+ Transport.resetCache();
+ return this;
+ },
+ ttAdapter: function ttAdapter() {
+ return this.__ttAdapter();
+ }
+ });
+ return Bloodhound;
+ }();
+ return Bloodhound;
+});
+
+(function(root, factory) {
+ if (typeof define === "function" && define.amd) {
+ define("typeahead.js", [ "jquery" ], function(a0) {
+ return factory(a0);
+ });
+ } else if (typeof exports === "object") {
+ module.exports = factory(require("jquery"));
+ } else {
+ factory(jQuery);
+ }
+})(this, function($) {
+ var _ = function() {
+ "use strict";
+ return {
+ isMsie: function() {
+ return /(msie|trident)/i.test(navigator.userAgent) ? navigator.userAgent.match(/(msie |rv:)(\d+(.\d+)?)/i)[2] : false;
+ },
+ isBlankString: function(str) {
+ return !str || /^\s*$/.test(str);
+ },
+ escapeRegExChars: function(str) {
+ return str.replace(/[\-\[\]\/\{\}\(\)\*\+\?\.\\\^\$\|]/g, "\\$&");
+ },
+ isString: function(obj) {
+ return typeof obj === "string";
+ },
+ isNumber: function(obj) {
+ return typeof obj === "number";
+ },
+ isArray: $.isArray,
+ isFunction: $.isFunction,
+ isObject: $.isPlainObject,
+ isUndefined: function(obj) {
+ return typeof obj === "undefined";
+ },
+ isElement: function(obj) {
+ return !!(obj && obj.nodeType === 1);
+ },
+ isJQuery: function(obj) {
+ return obj instanceof $;
+ },
+ toStr: function toStr(s) {
+ return _.isUndefined(s) || s === null ? "" : s + "";
+ },
+ bind: $.proxy,
+ each: function(collection, cb) {
+ $.each(collection, reverseArgs);
+ function reverseArgs(index, value) {
+ return cb(value, index);
+ }
+ },
+ map: $.map,
+ filter: $.grep,
+ every: function(obj, test) {
+ var result = true;
+ if (!obj) {
+ return result;
+ }
+ $.each(obj, function(key, val) {
+ if (!(result = test.call(null, val, key, obj))) {
+ return false;
+ }
+ });
+ return !!result;
+ },
+ some: function(obj, test) {
+ var result = false;
+ if (!obj) {
+ return result;
+ }
+ $.each(obj, function(key, val) {
+ if (result = test.call(null, val, key, obj)) {
+ return false;
+ }
+ });
+ return !!result;
+ },
+ mixin: $.extend,
+ identity: function(x) {
+ return x;
+ },
+ clone: function(obj) {
+ return $.extend(true, {}, obj);
+ },
+ getIdGenerator: function() {
+ var counter = 0;
+ return function() {
+ return counter++;
+ };
+ },
+ templatify: function templatify(obj) {
+ return $.isFunction(obj) ? obj : template;
+ function template() {
+ return String(obj);
+ }
+ },
+ defer: function(fn) {
+ setTimeout(fn, 0);
+ },
+ debounce: function(func, wait, immediate) {
+ var timeout, result;
+ return function() {
+ var context = this, args = arguments, later, callNow;
+ later = function() {
+ timeout = null;
+ if (!immediate) {
+ result = func.apply(context, args);
+ }
+ };
+ callNow = immediate && !timeout;
+ clearTimeout(timeout);
+ timeout = setTimeout(later, wait);
+ if (callNow) {
+ result = func.apply(context, args);
+ }
+ return result;
+ };
+ },
+ throttle: function(func, wait) {
+ var context, args, timeout, result, previous, later;
+ previous = 0;
+ later = function() {
+ previous = new Date();
+ timeout = null;
+ result = func.apply(context, args);
+ };
+ return function() {
+ var now = new Date(), remaining = wait - (now - previous);
+ context = this;
+ args = arguments;
+ if (remaining <= 0) {
+ clearTimeout(timeout);
+ timeout = null;
+ previous = now;
+ result = func.apply(context, args);
+ } else if (!timeout) {
+ timeout = setTimeout(later, remaining);
+ }
+ return result;
+ };
+ },
+ stringify: function(val) {
+ return _.isString(val) ? val : JSON.stringify(val);
+ },
+ noop: function() {}
+ };
+ }();
+ var WWW = function() {
+ "use strict";
+ var defaultClassNames = {
+ wrapper: "twitter-typeahead",
+ input: "tt-input",
+ hint: "tt-hint",
+ menu: "tt-menu",
+ dataset: "tt-dataset",
+ suggestion: "tt-suggestion",
+ selectable: "tt-selectable",
+ empty: "tt-empty",
+ open: "tt-open",
+ cursor: "tt-cursor",
+ highlight: "tt-highlight"
+ };
+ return build;
+ function build(o) {
+ var www, classes;
+ classes = _.mixin({}, defaultClassNames, o);
+ www = {
+ css: buildCss(),
+ classes: classes,
+ html: buildHtml(classes),
+ selectors: buildSelectors(classes)
+ };
+ return {
+ css: www.css,
+ html: www.html,
+ classes: www.classes,
+ selectors: www.selectors,
+ mixin: function(o) {
+ _.mixin(o, www);
+ }
+ };
+ }
+ function buildHtml(c) {
+ return {
+ wrapper: '<span class="' + c.wrapper + '"></span>',
+ menu: '<div class="' + c.menu + '"></div>'
+ };
+ }
+ function buildSelectors(classes) {
+ var selectors = {};
+ _.each(classes, function(v, k) {
+ selectors[k] = "." + v;
+ });
+ return selectors;
+ }
+ function buildCss() {
+ var css = {
+ wrapper: {
+ position: "relative",
+ display: "inline-block"
+ },
+ hint: {
+ position: "absolute",
+ top: "0",
+ left: "0",
+ borderColor: "transparent",
+ boxShadow: "none",
+ opacity: "1"
+ },
+ input: {
+ position: "relative",
+ verticalAlign: "top",
+ backgroundColor: "transparent"
+ },
+ inputWithNoHint: {
+ position: "relative",
+ verticalAlign: "top"
+ },
+ menu: {
+ position: "absolute",
+ top: "100%",
+ left: "0",
+ zIndex: "100",
+ display: "none"
+ },
+ ltr: {
+ left: "0",
+ right: "auto"
+ },
+ rtl: {
+ left: "auto",
+ right: "0"
+ }
+ };
+ if (_.isMsie()) {
+ _.mixin(css.input, {
+ backgroundImage: "url(data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7)"
+ });
+ }
+ return css;
+ }
+ }();
+ var EventBus = function() {
+ "use strict";
+ var namespace, deprecationMap;
+ namespace = "typeahead:";
+ deprecationMap = {
+ render: "rendered",
+ cursorchange: "cursorchanged",
+ select: "selected",
+ autocomplete: "autocompleted"
+ };
+ function EventBus(o) {
+ if (!o || !o.el) {
+ $.error("EventBus initialized without el");
+ }
+ this.$el = $(o.el);
+ }
+ _.mixin(EventBus.prototype, {
+ _trigger: function(type, args) {
+ var $e;
+ $e = $.Event(namespace + type);
+ (args = args || []).unshift($e);
+ this.$el.trigger.apply(this.$el, args);
+ return $e;
+ },
+ before: function(type) {
+ var args, $e;
+ args = [].slice.call(arguments, 1);
+ $e = this._trigger("before" + type, args);
+ return $e.isDefaultPrevented();
+ },
+ trigger: function(type) {
+ var deprecatedType;
+ this._trigger(type, [].slice.call(arguments, 1));
+ if (deprecatedType = deprecationMap[type]) {
+ this._trigger(deprecatedType, [].slice.call(arguments, 1));
+ }
+ }
+ });
+ return EventBus;
+ }();
+ var EventEmitter = function() {
+ "use strict";
+ var splitter = /\s+/, nextTick = getNextTick();
+ return {
+ onSync: onSync,
+ onAsync: onAsync,
+ off: off,
+ trigger: trigger
+ };
+ function on(method, types, cb, context) {
+ var type;
+ if (!cb) {
+ return this;
+ }
+ types = types.split(splitter);
+ cb = context ? bindContext(cb, context) : cb;
+ this._callbacks = this._callbacks || {};
+ while (type = types.shift()) {
+ this._callbacks[type] = this._callbacks[type] || {
+ sync: [],
+ async: []
+ };
+ this._callbacks[type][method].push(cb);
+ }
+ return this;
+ }
+ function onAsync(types, cb, context) {
+ return on.call(this, "async", types, cb, context);
+ }
+ function onSync(types, cb, context) {
+ return on.call(this, "sync", types, cb, context);
+ }
+ function off(types) {
+ var type;
+ if (!this._callbacks) {
+ return this;
+ }
+ types = types.split(splitter);
+ while (type = types.shift()) {
+ delete this._callbacks[type];
+ }
+ return this;
+ }
+ function trigger(types) {
+ var type, callbacks, args, syncFlush, asyncFlush;
+ if (!this._callbacks) {
+ return this;
+ }
+ types = types.split(splitter);
+ args = [].slice.call(arguments, 1);
+ while ((type = types.shift()) && (callbacks = this._callbacks[type])) {
+ syncFlush = getFlush(callbacks.sync, this, [ type ].concat(args));
+ asyncFlush = getFlush(callbacks.async, this, [ type ].concat(args));
+ syncFlush() && nextTick(asyncFlush);
+ }
+ return this;
+ }
+ function getFlush(callbacks, context, args) {
+ return flush;
+ function flush() {
+ var cancelled;
+ for (var i = 0, len = callbacks.length; !cancelled && i < len; i += 1) {
+ cancelled = callbacks[i].apply(context, args) === false;
+ }
+ return !cancelled;
+ }
+ }
+ function getNextTick() {
+ var nextTickFn;
+ if (window.setImmediate) {
+ nextTickFn = function nextTickSetImmediate(fn) {
+ setImmediate(function() {
+ fn();
+ });
+ };
+ } else {
+ nextTickFn = function nextTickSetTimeout(fn) {
+ setTimeout(function() {
+ fn();
+ }, 0);
+ };
+ }
+ return nextTickFn;
+ }
+ function bindContext(fn, context) {
+ return fn.bind ? fn.bind(context) : function() {
+ fn.apply(context, [].slice.call(arguments, 0));
+ };
+ }
+ }();
+ var highlight = function(doc) {
+ "use strict";
+ var defaults = {
+ node: null,
+ pattern: null,
+ tagName: "strong",
+ className: null,
+ wordsOnly: false,
+ caseSensitive: false
+ };
+ return function highlight(o) {
+ var regex;
+ o = _.mixin({}, defaults, o);
+ if (!o.node || !o.pattern) {
+ return;
+ }
+ o.pattern = _.isArray(o.pattern) ? o.pattern : [ o.pattern ];
+ regex = getRegex(o.pattern, o.caseSensitive, o.wordsOnly);
+ traverse(o.node, highlightTextNode);
+ function highlightTextNode(textNode) {
+ var match, patternNode, wrapperNode;
+ if (match = regex.exec(textNode.data)) {
+ wrapperNode = doc.createElement(o.tagName);
+ o.className && (wrapperNode.className = o.className);
+ patternNode = textNode.splitText(match.index);
+ patternNode.splitText(match[0].length);
+ wrapperNode.appendChild(patternNode.cloneNode(true));
+ textNode.parentNode.replaceChild(wrapperNode, patternNode);
+ }
+ return !!match;
+ }
+ function traverse(el, highlightTextNode) {
+ var childNode, TEXT_NODE_TYPE = 3;
+ for (var i = 0; i < el.childNodes.length; i++) {
+ childNode = el.childNodes[i];
+ if (childNode.nodeType === TEXT_NODE_TYPE) {
+ i += highlightTextNode(childNode) ? 1 : 0;
+ } else {
+ traverse(childNode, highlightTextNode);
+ }
+ }
+ }
+ };
+ function getRegex(patterns, caseSensitive, wordsOnly) {
+ var escapedPatterns = [], regexStr;
+ for (var i = 0, len = patterns.length; i < len; i++) {
+ escapedPatterns.push(_.escapeRegExChars(patterns[i]));
+ }
+ regexStr = wordsOnly ? "\\b(" + escapedPatterns.join("|") + ")\\b" : "(" + escapedPatterns.join("|") + ")";
+ return caseSensitive ? new RegExp(regexStr) : new RegExp(regexStr, "i");
+ }
+ }(window.document);
+ var Input = function() {
+ "use strict";
+ var specialKeyCodeMap;
+ specialKeyCodeMap = {
+ 9: "tab",
+ 27: "esc",
+ 37: "left",
+ 39: "right",
+ 13: "enter",
+ 38: "up",
+ 40: "down"
+ };
+ function Input(o, www) {
+ o = o || {};
+ if (!o.input) {
+ $.error("input is missing");
+ }
+ www.mixin(this);
+ this.$hint = $(o.hint);
+ this.$input = $(o.input);
+ this.query = this.$input.val();
+ this.queryWhenFocused = this.hasFocus() ? this.query : null;
+ this.$overflowHelper = buildOverflowHelper(this.$input);
+ this._checkLanguageDirection();
+ if (this.$hint.length === 0) {
+ this.setHint = this.getHint = this.clearHint = this.clearHintIfInvalid = _.noop;
+ }
+ }
+ Input.normalizeQuery = function(str) {
+ return _.toStr(str).replace(/^\s*/g, "").replace(/\s{2,}/g, " ");
+ };
+ _.mixin(Input.prototype, EventEmitter, {
+ _onBlur: function onBlur() {
+ this.resetInputValue();
+ this.trigger("blurred");
+ },
+ _onFocus: function onFocus() {
+ this.queryWhenFocused = this.query;
+ this.trigger("focused");
+ },
+ _onKeydown: function onKeydown($e) {
+ var keyName = specialKeyCodeMap[$e.which || $e.keyCode];
+ this._managePreventDefault(keyName, $e);
+ if (keyName && this._shouldTrigger(keyName, $e)) {
+ this.trigger(keyName + "Keyed", $e);
+ }
+ },
+ _onInput: function onInput() {
+ this._setQuery(this.getInputValue());
+ this.clearHintIfInvalid();
+ this._checkLanguageDirection();
+ },
+ _managePreventDefault: function managePreventDefault(keyName, $e) {
+ var preventDefault;
+ switch (keyName) {
+ case "up":
+ case "down":
+ preventDefault = !withModifier($e);
+ break;
+
+ default:
+ preventDefault = false;
+ }
+ preventDefault && $e.preventDefault();
+ },
+ _shouldTrigger: function shouldTrigger(keyName, $e) {
+ var trigger;
+ switch (keyName) {
+ case "tab":
+ trigger = !withModifier($e);
+ break;
+
+ default:
+ trigger = true;
+ }
+ return trigger;
+ },
+ _checkLanguageDirection: function checkLanguageDirection() {
+ var dir = (this.$input.css("direction") || "ltr").toLowerCase();
+ if (this.dir !== dir) {
+ this.dir = dir;
+ this.$hint.attr("dir", dir);
+ this.trigger("langDirChanged", dir);
+ }
+ },
+ _setQuery: function setQuery(val, silent) {
+ var areEquivalent, hasDifferentWhitespace;
+ areEquivalent = areQueriesEquivalent(val, this.query);
+ hasDifferentWhitespace = areEquivalent ? this.query.length !== val.length : false;
+ this.query = val;
+ if (!silent && !areEquivalent) {
+ this.trigger("queryChanged", this.query);
+ } else if (!silent && hasDifferentWhitespace) {
+ this.trigger("whitespaceChanged", this.query);
+ }
+ },
+ bind: function() {
+ var that = this, onBlur, onFocus, onKeydown, onInput;
+ onBlur = _.bind(this._onBlur, this);
+ onFocus = _.bind(this._onFocus, this);
+ onKeydown = _.bind(this._onKeydown, this);
+ onInput = _.bind(this._onInput, this);
+ this.$input.on("blur.tt", onBlur).on("focus.tt", onFocus).on("keydown.tt", onKeydown);
+ if (!_.isMsie() || _.isMsie() > 9) {
+ this.$input.on("input.tt", onInput);
+ } else {
+ this.$input.on("keydown.tt keypress.tt cut.tt paste.tt", function($e) {
+ if (specialKeyCodeMap[$e.which || $e.keyCode]) {
+ return;
+ }
+ _.defer(_.bind(that._onInput, that, $e));
+ });
+ }
+ return this;
+ },
+ focus: function focus() {
+ this.$input.focus();
+ },
+ blur: function blur() {
+ this.$input.blur();
+ },
+ getLangDir: function getLangDir() {
+ return this.dir;
+ },
+ getQuery: function getQuery() {
+ return this.query || "";
+ },
+ setQuery: function setQuery(val, silent) {
+ this.setInputValue(val);
+ this._setQuery(val, silent);
+ },
+ hasQueryChangedSinceLastFocus: function hasQueryChangedSinceLastFocus() {
+ return this.query !== this.queryWhenFocused;
+ },
+ getInputValue: function getInputValue() {
+ return this.$input.val();
+ },
+ setInputValue: function setInputValue(value) {
+ this.$input.val(value);
+ this.clearHintIfInvalid();
+ this._checkLanguageDirection();
+ },
+ resetInputValue: function resetInputValue() {
+ this.setInputValue(this.query);
+ },
+ getHint: function getHint() {
+ return this.$hint.val();
+ },
+ setHint: function setHint(value) {
+ this.$hint.val(value);
+ },
+ clearHint: function clearHint() {
+ this.setHint("");
+ },
+ clearHintIfInvalid: function clearHintIfInvalid() {
+ var val, hint, valIsPrefixOfHint, isValid;
+ val = this.getInputValue();
+ hint = this.getHint();
+ valIsPrefixOfHint = val !== hint && hint.indexOf(val) === 0;
+ isValid = val !== "" && valIsPrefixOfHint && !this.hasOverflow();
+ !isValid && this.clearHint();
+ },
+ hasFocus: function hasFocus() {
+ return this.$input.is(":focus");
+ },
+ hasOverflow: function hasOverflow() {
+ var constraint = this.$input.width() - 2;
+ this.$overflowHelper.text(this.getInputValue());
+ return this.$overflowHelper.width() >= constraint;
+ },
+ isCursorAtEnd: function() {
+ var valueLength, selectionStart, range;
+ valueLength = this.$input.val().length;
+ selectionStart = this.$input[0].selectionStart;
+ if (_.isNumber(selectionStart)) {
+ return selectionStart === valueLength;
+ } else if (document.selection) {
+ range = document.selection.createRange();
+ range.moveStart("character", -valueLength);
+ return valueLength === range.text.length;
+ }
+ return true;
+ },
+ destroy: function destroy() {
+ this.$hint.off(".tt");
+ this.$input.off(".tt");
+ this.$overflowHelper.remove();
+ this.$hint = this.$input = this.$overflowHelper = $("<div>");
+ }
+ });
+ return Input;
+ function buildOverflowHelper($input) {
+ return $('<pre aria-hidden="true"></pre>').css({
+ position: "absolute",
+ visibility: "hidden",
+ whiteSpace: "pre",
+ fontFamily: $input.css("font-family"),
+ fontSize: $input.css("font-size"),
+ fontStyle: $input.css("font-style"),
+ fontVariant: $input.css("font-variant"),
+ fontWeight: $input.css("font-weight"),
+ wordSpacing: $input.css("word-spacing"),
+ letterSpacing: $input.css("letter-spacing"),
+ textIndent: $input.css("text-indent"),
+ textRendering: $input.css("text-rendering"),
+ textTransform: $input.css("text-transform")
+ }).insertAfter($input);
+ }
+ function areQueriesEquivalent(a, b) {
+ return Input.normalizeQuery(a) === Input.normalizeQuery(b);
+ }
+ function withModifier($e) {
+ return $e.altKey || $e.ctrlKey || $e.metaKey || $e.shiftKey;
+ }
+ }();
+ var Dataset = function() {
+ "use strict";
+ var keys, nameGenerator;
+ keys = {
+ val: "tt-selectable-display",
+ obj: "tt-selectable-object"
+ };
+ nameGenerator = _.getIdGenerator();
+ function Dataset(o, www) {
+ o = o || {};
+ o.templates = o.templates || {};
+ o.templates.notFound = o.templates.notFound || o.templates.empty;
+ if (!o.source) {
+ $.error("missing source");
+ }
+ if (!o.node) {
+ $.error("missing node");
+ }
+ if (o.name && !isValidName(o.name)) {
+ $.error("invalid dataset name: " + o.name);
+ }
+ www.mixin(this);
+ this.highlight = !!o.highlight;
+ this.name = o.name || nameGenerator();
+ this.limit = o.limit || 5;
+ this.displayFn = getDisplayFn(o.display || o.displayKey);
+ this.templates = getTemplates(o.templates, this.displayFn);
+ this.source = o.source.__ttAdapter ? o.source.__ttAdapter() : o.source;
+ this.async = _.isUndefined(o.async) ? this.source.length > 2 : !!o.async;
+ this._resetLastSuggestion();
+ this.$el = $(o.node).addClass(this.classes.dataset).addClass(this.classes.dataset + "-" + this.name);
+ }
+ Dataset.extractData = function extractData(el) {
+ var $el = $(el);
+ if ($el.data(keys.obj)) {
+ return {
+ val: $el.data(keys.val) || "",
+ obj: $el.data(keys.obj) || null
+ };
+ }
+ return null;
+ };
+ _.mixin(Dataset.prototype, EventEmitter, {
+ _overwrite: function overwrite(query, suggestions) {
+ suggestions = suggestions || [];
+ if (suggestions.length) {
+ this._renderSuggestions(query, suggestions);
+ } else if (this.async && this.templates.pending) {
+ this._renderPending(query);
+ } else if (!this.async && this.templates.notFound) {
+ this._renderNotFound(query);
+ } else {
+ this._empty();
+ }
+ this.trigger("rendered", this.name, suggestions, false);
+ },
+ _append: function append(query, suggestions) {
+ suggestions = suggestions || [];
+ if (suggestions.length && this.$lastSuggestion.length) {
+ this._appendSuggestions(query, suggestions);
+ } else if (suggestions.length) {
+ this._renderSuggestions(query, suggestions);
+ } else if (!this.$lastSuggestion.length && this.templates.notFound) {
+ this._renderNotFound(query);
+ }
+ this.trigger("rendered", this.name, suggestions, true);
+ },
+ _renderSuggestions: function renderSuggestions(query, suggestions) {
+ var $fragment;
+ $fragment = this._getSuggestionsFragment(query, suggestions);
+ this.$lastSuggestion = $fragment.children().last();
+ this.$el.html($fragment).prepend(this._getHeader(query, suggestions)).append(this._getFooter(query, suggestions));
+ },
+ _appendSuggestions: function appendSuggestions(query, suggestions) {
+ var $fragment, $lastSuggestion;
+ $fragment = this._getSuggestionsFragment(query, suggestions);
+ $lastSuggestion = $fragment.children().last();
+ this.$lastSuggestion.after($fragment);
+ this.$lastSuggestion = $lastSuggestion;
+ },
+ _renderPending: function renderPending(query) {
+ var template = this.templates.pending;
+ this._resetLastSuggestion();
+ template && this.$el.html(template({
+ query: query,
+ dataset: this.name
+ }));
+ },
+ _renderNotFound: function renderNotFound(query) {
+ var template = this.templates.notFound;
+ this._resetLastSuggestion();
+ template && this.$el.html(template({
+ query: query,
+ dataset: this.name
+ }));
+ },
+ _empty: function empty() {
+ this.$el.empty();
+ this._resetLastSuggestion();
+ },
+ _getSuggestionsFragment: function getSuggestionsFragment(query, suggestions) {
+ var that = this, fragment;
+ fragment = document.createDocumentFragment();
+ _.each(suggestions, function getSuggestionNode(suggestion) {
+ var $el, context;
+ context = that._injectQuery(query, suggestion);
+ $el = $(that.templates.suggestion(context)).data(keys.obj, suggestion).data(keys.val, that.displayFn(suggestion)).addClass(that.classes.suggestion + " " + that.classes.selectable);
+ fragment.appendChild($el[0]);
+ });
+ this.highlight && highlight({
+ className: this.classes.highlight,
+ node: fragment,
+ pattern: query
+ });
+ return $(fragment);
+ },
+ _getFooter: function getFooter(query, suggestions) {
+ return this.templates.footer ? this.templates.footer({
+ query: query,
+ suggestions: suggestions,
+ dataset: this.name
+ }) : null;
+ },
+ _getHeader: function getHeader(query, suggestions) {
+ return this.templates.header ? this.templates.header({
+ query: query,
+ suggestions: suggestions,
+ dataset: this.name
+ }) : null;
+ },
+ _resetLastSuggestion: function resetLastSuggestion() {
+ this.$lastSuggestion = $();
+ },
+ _injectQuery: function injectQuery(query, obj) {
+ return _.isObject(obj) ? _.mixin({
+ _query: query
+ }, obj) : obj;
+ },
+ update: function update(query) {
+ var that = this, canceled = false, syncCalled = false, rendered = 0;
+ this.cancel();
+ this.cancel = function cancel() {
+ canceled = true;
+ that.cancel = $.noop;
+ that.async && that.trigger("asyncCanceled", query);
+ };
+ this.source(query, sync, async);
+ !syncCalled && sync([]);
+ function sync(suggestions) {
+ if (syncCalled) {
+ return;
+ }
+ syncCalled = true;
+ suggestions = (suggestions || []).slice(0, that.limit);
+ rendered = suggestions.length;
+ that._overwrite(query, suggestions);
+ if (rendered < that.limit && that.async) {
+ that.trigger("asyncRequested", query);
+ }
+ }
+ function async(suggestions) {
+ suggestions = suggestions || [];
+ if (!canceled && rendered < that.limit) {
+ that.cancel = $.noop;
+ rendered += suggestions.length;
+ that._append(query, suggestions.slice(0, that.limit - rendered));
+ that.async && that.trigger("asyncReceived", query);
+ }
+ }
+ },
+ cancel: $.noop,
+ clear: function clear() {
+ this._empty();
+ this.cancel();
+ this.trigger("cleared");
+ },
+ isEmpty: function isEmpty() {
+ return this.$el.is(":empty");
+ },
+ destroy: function destroy() {
+ this.$el = $("<div>");
+ }
+ });
+ return Dataset;
+ function getDisplayFn(display) {
+ display = display || _.stringify;
+ return _.isFunction(display) ? display : displayFn;
+ function displayFn(obj) {
+ return obj[display];
+ }
+ }
+ function getTemplates(templates, displayFn) {
+ return {
+ notFound: templates.notFound && _.templatify(templates.notFound),
+ pending: templates.pending && _.templatify(templates.pending),
+ header: templates.header && _.templatify(templates.header),
+ footer: templates.footer && _.templatify(templates.footer),
+ suggestion: templates.suggestion || suggestionTemplate
+ };
+ function suggestionTemplate(context) {
+ return $("<div>").text(displayFn(context));
+ }
+ }
+ function isValidName(str) {
+ return /^[_a-zA-Z0-9-]+$/.test(str);
+ }
+ }();
+ var Menu = function() {
+ "use strict";
+ function Menu(o, www) {
+ var that = this;
+ o = o || {};
+ if (!o.node) {
+ $.error("node is required");
+ }
+ www.mixin(this);
+ this.$node = $(o.node);
+ this.query = null;
+ this.datasets = _.map(o.datasets, initializeDataset);
+ function initializeDataset(oDataset) {
+ var node = that.$node.find(oDataset.node).first();
+ oDataset.node = node.length ? node : $("<div>").appendTo(that.$node);
+ return new Dataset(oDataset, www);
+ }
+ }
+ _.mixin(Menu.prototype, EventEmitter, {
+ _onSelectableClick: function onSelectableClick($e) {
+ this.trigger("selectableClicked", $($e.currentTarget));
+ },
+ _onRendered: function onRendered(type, dataset, suggestions, async) {
+ this.$node.toggleClass(this.classes.empty, this._allDatasetsEmpty());
+ this.trigger("datasetRendered", dataset, suggestions, async);
+ },
+ _onCleared: function onCleared() {
+ this.$node.toggleClass(this.classes.empty, this._allDatasetsEmpty());
+ this.trigger("datasetCleared");
+ },
+ _propagate: function propagate() {
+ this.trigger.apply(this, arguments);
+ },
+ _allDatasetsEmpty: function allDatasetsEmpty() {
+ return _.every(this.datasets, isDatasetEmpty);
+ function isDatasetEmpty(dataset) {
+ return dataset.isEmpty();
+ }
+ },
+ _getSelectables: function getSelectables() {
+ return this.$node.find(this.selectors.selectable);
+ },
+ _removeCursor: function _removeCursor() {
+ var $selectable = this.getActiveSelectable();
+ $selectable && $selectable.removeClass(this.classes.cursor);
+ },
+ _ensureVisible: function ensureVisible($el) {
+ var elTop, elBottom, nodeScrollTop, nodeHeight;
+ elTop = $el.position().top;
+ elBottom = elTop + $el.outerHeight(true);
+ nodeScrollTop = this.$node.scrollTop();
+ nodeHeight = this.$node.height() + parseInt(this.$node.css("paddingTop"), 10) + parseInt(this.$node.css("paddingBottom"), 10);
+ if (elTop < 0) {
+ this.$node.scrollTop(nodeScrollTop + elTop);
+ } else if (nodeHeight < elBottom) {
+ this.$node.scrollTop(nodeScrollTop + (elBottom - nodeHeight));
+ }
+ },
+ bind: function() {
+ var that = this, onSelectableClick;
+ onSelectableClick = _.bind(this._onSelectableClick, this);
+ this.$node.on("click.tt", this.selectors.selectable, onSelectableClick);
+ _.each(this.datasets, function(dataset) {
+ dataset.onSync("asyncRequested", that._propagate, that).onSync("asyncCanceled", that._propagate, that).onSync("asyncReceived", that._propagate, that).onSync("rendered", that._onRendered, that).onSync("cleared", that._onCleared, that);
+ });
+ return this;
+ },
+ isOpen: function isOpen() {
+ return this.$node.hasClass(this.classes.open);
+ },
+ open: function open() {
+ this.$node.addClass(this.classes.open);
+ },
+ close: function close() {
+ this.$node.removeClass(this.classes.open);
+ this._removeCursor();
+ },
+ setLanguageDirection: function setLanguageDirection(dir) {
+ this.$node.attr("dir", dir);
+ },
+ selectableRelativeToCursor: function selectableRelativeToCursor(delta) {
+ var $selectables, $oldCursor, oldIndex, newIndex;
+ $oldCursor = this.getActiveSelectable();
+ $selectables = this._getSelectables();
+ oldIndex = $oldCursor ? $selectables.index($oldCursor) : -1;
+ newIndex = oldIndex + delta;
+ newIndex = (newIndex + 1) % ($selectables.length + 1) - 1;
+ newIndex = newIndex < -1 ? $selectables.length - 1 : newIndex;
+ return newIndex === -1 ? null : $selectables.eq(newIndex);
+ },
+ setCursor: function setCursor($selectable) {
+ this._removeCursor();
+ if ($selectable = $selectable && $selectable.first()) {
+ $selectable.addClass(this.classes.cursor);
+ this._ensureVisible($selectable);
+ }
+ },
+ getSelectableData: function getSelectableData($el) {
+ return $el && $el.length ? Dataset.extractData($el) : null;
+ },
+ getActiveSelectable: function getActiveSelectable() {
+ var $selectable = this._getSelectables().filter(this.selectors.cursor).first();
+ return $selectable.length ? $selectable : null;
+ },
+ getTopSelectable: function getTopSelectable() {
+ var $selectable = this._getSelectables().first();
+ return $selectable.length ? $selectable : null;
+ },
+ update: function update(query) {
+ var isValidUpdate = query !== this.query;
+ if (isValidUpdate) {
+ this.query = query;
+ _.each(this.datasets, updateDataset);
+ }
+ return isValidUpdate;
+ function updateDataset(dataset) {
+ dataset.update(query);
+ }
+ },
+ empty: function empty() {
+ _.each(this.datasets, clearDataset);
+ this.query = null;
+ this.$node.addClass(this.classes.empty);
+ function clearDataset(dataset) {
+ dataset.clear();
+ }
+ },
+ destroy: function destroy() {
+ this.$node.off(".tt");
+ this.$node = $("<div>");
+ _.each(this.datasets, destroyDataset);
+ function destroyDataset(dataset) {
+ dataset.destroy();
+ }
+ }
+ });
+ return Menu;
+ }();
+ var DefaultMenu = function() {
+ "use strict";
+ var s = Menu.prototype;
+ function DefaultMenu() {
+ Menu.apply(this, [].slice.call(arguments, 0));
+ }
+ _.mixin(DefaultMenu.prototype, Menu.prototype, {
+ open: function open() {
+ !this._allDatasetsEmpty() && this._show();
+ return s.open.apply(this, [].slice.call(arguments, 0));
+ },
+ close: function close() {
+ this._hide();
+ return s.close.apply(this, [].slice.call(arguments, 0));
+ },
+ _onRendered: function onRendered() {
+ if (this._allDatasetsEmpty()) {
+ this._hide();
+ } else {
+ this.isOpen() && this._show();
+ }
+ return s._onRendered.apply(this, [].slice.call(arguments, 0));
+ },
+ _onCleared: function onCleared() {
+ if (this._allDatasetsEmpty()) {
+ this._hide();
+ } else {
+ this.isOpen() && this._show();
+ }
+ return s._onCleared.apply(this, [].slice.call(arguments, 0));
+ },
+ setLanguageDirection: function setLanguageDirection(dir) {
+ this.$node.css(dir === "ltr" ? this.css.ltr : this.css.rtl);
+ return s.setLanguageDirection.apply(this, [].slice.call(arguments, 0));
+ },
+ _hide: function hide() {
+ this.$node.hide();
+ },
+ _show: function show() {
+ this.$node.css("display", "block");
+ }
+ });
+ return DefaultMenu;
+ }();
+ var Typeahead = function() {
+ "use strict";
+ function Typeahead(o, www) {
+ var onFocused, onBlurred, onEnterKeyed, onTabKeyed, onEscKeyed, onUpKeyed, onDownKeyed, onLeftKeyed, onRightKeyed, onQueryChanged, onWhitespaceChanged;
+ o = o || {};
+ if (!o.input) {
+ $.error("missing input");
+ }
+ if (!o.menu) {
+ $.error("missing menu");
+ }
+ if (!o.eventBus) {
+ $.error("missing event bus");
+ }
+ www.mixin(this);
+ this.eventBus = o.eventBus;
+ this.minLength = _.isNumber(o.minLength) ? o.minLength : 1;
+ this.input = o.input;
+ this.menu = o.menu;
+ this.enabled = true;
+ this.active = false;
+ this.input.hasFocus() && this.activate();
+ this.dir = this.input.getLangDir();
+ this._hacks();
+ this.menu.bind().onSync("selectableClicked", this._onSelectableClicked, this).onSync("asyncRequested", this._onAsyncRequested, this).onSync("asyncCanceled", this._onAsyncCanceled, this).onSync("asyncReceived", this._onAsyncReceived, this).onSync("datasetRendered", this._onDatasetRendered, this).onSync("datasetCleared", this._onDatasetCleared, this);
+ onFocused = c(this, "activate", "open", "_onFocused");
+ onBlurred = c(this, "deactivate", "_onBlurred");
+ onEnterKeyed = c(this, "isActive", "isOpen", "_onEnterKeyed");
+ onTabKeyed = c(this, "isActive", "isOpen", "_onTabKeyed");
+ onEscKeyed = c(this, "isActive", "_onEscKeyed");
+ onUpKeyed = c(this, "isActive", "open", "_onUpKeyed");
+ onDownKeyed = c(this, "isActive", "open", "_onDownKeyed");
+ onLeftKeyed = c(this, "isActive", "isOpen", "_onLeftKeyed");
+ onRightKeyed = c(this, "isActive", "isOpen", "_onRightKeyed");
+ onQueryChanged = c(this, "_openIfActive", "_onQueryChanged");
+ onWhitespaceChanged = c(this, "_openIfActive", "_onWhitespaceChanged");
+ this.input.bind().onSync("focused", onFocused, this).onSync("blurred", onBlurred, this).onSync("enterKeyed", onEnterKeyed, this).onSync("tabKeyed", onTabKeyed, this).onSync("escKeyed", onEscKeyed, this).onSync("upKeyed", onUpKeyed, this).onSync("downKeyed", onDownKeyed, this).onSync("leftKeyed", onLeftKeyed, this).onSync("rightKeyed", onRightKeyed, this).onSync("queryChanged", onQueryChanged, this).onSync("whitespaceChanged", onWhitespaceChanged, this).onSync("langDirChanged", this._onLangDirChanged, this);
+ }
+ _.mixin(Typeahead.prototype, {
+ _hacks: function hacks() {
+ var $input, $menu;
+ $input = this.input.$input || $("<div>");
+ $menu = this.menu.$node || $("<div>");
+ $input.on("blur.tt", function($e) {
+ var active, isActive, hasActive;
+ active = document.activeElement;
+ isActive = $menu.is(active);
+ hasActive = $menu.has(active).length > 0;
+ if (_.isMsie() && (isActive || hasActive)) {
+ $e.preventDefault();
+ $e.stopImmediatePropagation();
+ _.defer(function() {
+ $input.focus();
+ });
+ }
+ });
+ $menu.on("mousedown.tt", function($e) {
+ $e.preventDefault();
+ });
+ },
+ _onSelectableClicked: function onSelectableClicked(type, $el) {
+ this.select($el);
+ },
+ _onDatasetCleared: function onDatasetCleared() {
+ this._updateHint();
+ },
+ _onDatasetRendered: function onDatasetRendered(type, dataset, suggestions, async) {
+ this._updateHint();
+ this.eventBus.trigger("render", suggestions, async, dataset);
+ },
+ _onAsyncRequested: function onAsyncRequested(type, dataset, query) {
+ this.eventBus.trigger("asyncrequest", query, dataset);
+ },
+ _onAsyncCanceled: function onAsyncCanceled(type, dataset, query) {
+ this.eventBus.trigger("asynccancel", query, dataset);
+ },
+ _onAsyncReceived: function onAsyncReceived(type, dataset, query) {
+ this.eventBus.trigger("asyncreceive", query, dataset);
+ },
+ _onFocused: function onFocused() {
+ this._minLengthMet() && this.menu.update(this.input.getQuery());
+ },
+ _onBlurred: function onBlurred() {
+ if (this.input.hasQueryChangedSinceLastFocus()) {
+ this.eventBus.trigger("change", this.input.getQuery());
+ }
+ },
+ _onEnterKeyed: function onEnterKeyed(type, $e) {
+ var $selectable;
+ if ($selectable = this.menu.getActiveSelectable()) {
+ this.select($selectable) && $e.preventDefault();
+ }
+ },
+ _onTabKeyed: function onTabKeyed(type, $e) {
+ var $selectable;
+ if ($selectable = this.menu.getActiveSelectable()) {
+ this.select($selectable) && $e.preventDefault();
+ } else if ($selectable = this.menu.getTopSelectable()) {
+ this.autocomplete($selectable) && $e.preventDefault();
+ }
+ },
+ _onEscKeyed: function onEscKeyed() {
+ this.close();
+ },
+ _onUpKeyed: function onUpKeyed() {
+ this.moveCursor(-1);
+ },
+ _onDownKeyed: function onDownKeyed() {
+ this.moveCursor(+1);
+ },
+ _onLeftKeyed: function onLeftKeyed() {
+ if (this.dir === "rtl" && this.input.isCursorAtEnd()) {
+ this.autocomplete(this.menu.getTopSelectable());
+ }
+ },
+ _onRightKeyed: function onRightKeyed() {
+ if (this.dir === "ltr" && this.input.isCursorAtEnd()) {
+ this.autocomplete(this.menu.getTopSelectable());
+ }
+ },
+ _onQueryChanged: function onQueryChanged(e, query) {
+ this._minLengthMet(query) ? this.menu.update(query) : this.menu.empty();
+ },
+ _onWhitespaceChanged: function onWhitespaceChanged() {
+ this._updateHint();
+ },
+ _onLangDirChanged: function onLangDirChanged(e, dir) {
+ if (this.dir !== dir) {
+ this.dir = dir;
+ this.menu.setLanguageDirection(dir);
+ }
+ },
+ _openIfActive: function openIfActive() {
+ this.isActive() && this.open();
+ },
+ _minLengthMet: function minLengthMet(query) {
+ query = _.isString(query) ? query : this.input.getQuery() || "";
+ return query.length >= this.minLength;
+ },
+ _updateHint: function updateHint() {
+ var $selectable, data, val, query, escapedQuery, frontMatchRegEx, match;
+ $selectable = this.menu.getTopSelectable();
+ data = this.menu.getSelectableData($selectable);
+ val = this.input.getInputValue();
+ if (data && !_.isBlankString(val) && !this.input.hasOverflow()) {
+ query = Input.normalizeQuery(val);
+ escapedQuery = _.escapeRegExChars(query);
+ frontMatchRegEx = new RegExp("^(?:" + escapedQuery + ")(.+$)", "i");
+ match = frontMatchRegEx.exec(data.val);
+ match && this.input.setHint(val + match[1]);
+ } else {
+ this.input.clearHint();
+ }
+ },
+ isEnabled: function isEnabled() {
+ return this.enabled;
+ },
+ enable: function enable() {
+ this.enabled = true;
+ },
+ disable: function disable() {
+ this.enabled = false;
+ },
+ isActive: function isActive() {
+ return this.active;
+ },
+ activate: function activate() {
+ if (this.isActive()) {
+ return true;
+ } else if (!this.isEnabled() || this.eventBus.before("active")) {
+ return false;
+ } else {
+ this.active = true;
+ this.eventBus.trigger("active");
+ return true;
+ }
+ },
+ deactivate: function deactivate() {
+ if (!this.isActive()) {
+ return true;
+ } else if (this.eventBus.before("idle")) {
+ return false;
+ } else {
+ this.active = false;
+ this.close();
+ this.eventBus.trigger("idle");
+ return true;
+ }
+ },
+ isOpen: function isOpen() {
+ return this.menu.isOpen();
+ },
+ open: function open() {
+ if (!this.isOpen() && !this.eventBus.before("open")) {
+ this.menu.open();
+ this._updateHint();
+ this.eventBus.trigger("open");
+ }
+ return this.isOpen();
+ },
+ close: function close() {
+ if (this.isOpen() && !this.eventBus.before("close")) {
+ this.menu.close();
+ this.input.clearHint();
+ this.input.resetInputValue();
+ this.eventBus.trigger("close");
+ }
+ return !this.isOpen();
+ },
+ setVal: function setVal(val) {
+ this.input.setQuery(_.toStr(val));
+ },
+ getVal: function getVal() {
+ return this.input.getQuery();
+ },
+ select: function select($selectable) {
+ var data = this.menu.getSelectableData($selectable);
+ if (data && !this.eventBus.before("select", data.obj)) {
+ this.input.setQuery(data.val, true);
+ this.eventBus.trigger("select", data.obj);
+ this.close();
+ return true;
+ }
+ return false;
+ },
+ autocomplete: function autocomplete($selectable) {
+ var query, data, isValid;
+ query = this.input.getQuery();
+ data = this.menu.getSelectableData($selectable);
+ isValid = data && query !== data.val;
+ if (isValid && !this.eventBus.before("autocomplete", data.obj)) {
+ this.input.setQuery(data.val);
+ this.eventBus.trigger("autocomplete", data.obj);
+ return true;
+ }
+ return false;
+ },
+ moveCursor: function moveCursor(delta) {
+ var query, $candidate, data, payload, cancelMove;
+ query = this.input.getQuery();
+ $candidate = this.menu.selectableRelativeToCursor(delta);
+ data = this.menu.getSelectableData($candidate);
+ payload = data ? data.obj : null;
+ cancelMove = this._minLengthMet() && this.menu.update(query);
+ if (!cancelMove && !this.eventBus.before("cursorchange", payload)) {
+ this.menu.setCursor($candidate);
+ if (data) {
+ this.input.setInputValue(data.val);
+ } else {
+ this.input.resetInputValue();
+ this._updateHint();
+ }
+ this.eventBus.trigger("cursorchange", payload);
+ return true;
+ }
+ return false;
+ },
+ destroy: function destroy() {
+ this.input.destroy();
+ this.menu.destroy();
+ }
+ });
+ return Typeahead;
+ function c(ctx) {
+ var methods = [].slice.call(arguments, 1);
+ return function() {
+ var args = [].slice.call(arguments);
+ _.each(methods, function(method) {
+ return ctx[method].apply(ctx, args);
+ });
+ };
+ }
+ }();
+ (function() {
+ "use strict";
+ var old, keys, methods;
+ old = $.fn.typeahead;
+ keys = {
+ www: "tt-www",
+ attrs: "tt-attrs",
+ typeahead: "tt-typeahead"
+ };
+ methods = {
+ initialize: function initialize(o, datasets) {
+ var www;
+ datasets = _.isArray(datasets) ? datasets : [].slice.call(arguments, 1);
+ o = o || {};
+ www = WWW(o.classNames);
+ return this.each(attach);
+ function attach() {
+ var $input, $wrapper, $hint, $menu, defaultHint, defaultMenu, eventBus, input, menu, typeahead, MenuConstructor;
+ _.each(datasets, function(d) {
+ d.highlight = !!o.highlight;
+ });
+ $input = $(this);
+ $wrapper = $(www.html.wrapper);
+ $hint = $elOrNull(o.hint);
+ $menu = $elOrNull(o.menu);
+ defaultHint = o.hint !== false && !$hint;
+ defaultMenu = o.menu !== false && !$menu;
+ defaultHint && ($hint = buildHintFromInput($input, www));
+ defaultMenu && ($menu = $(www.html.menu).css(www.css.menu));
+ $hint && $hint.val("");
+ $input = prepInput($input, www);
+ if (defaultHint || defaultMenu) {
+ $wrapper.css(www.css.wrapper);
+ $input.css(defaultHint ? www.css.input : www.css.inputWithNoHint);
+ $input.wrap($wrapper).parent().prepend(defaultHint ? $hint : null).append(defaultMenu ? $menu : null);
+ }
+ MenuConstructor = defaultMenu ? DefaultMenu : Menu;
+ eventBus = new EventBus({
+ el: $input
+ });
+ input = new Input({
+ hint: $hint,
+ input: $input
+ }, www);
+ menu = new MenuConstructor({
+ node: $menu,
+ datasets: datasets
+ }, www);
+ typeahead = new Typeahead({
+ input: input,
+ menu: menu,
+ eventBus: eventBus,
+ minLength: o.minLength
+ }, www);
+ $input.data(keys.www, www);
+ $input.data(keys.typeahead, typeahead);
+ }
+ },
+ isEnabled: function isEnabled() {
+ var enabled;
+ ttEach(this.first(), function(t) {
+ enabled = t.isEnabled();
+ });
+ return enabled;
+ },
+ enable: function enable() {
+ ttEach(this, function(t) {
+ t.enable();
+ });
+ return this;
+ },
+ disable: function disable() {
+ ttEach(this, function(t) {
+ t.disable();
+ });
+ return this;
+ },
+ isActive: function isActive() {
+ var active;
+ ttEach(this.first(), function(t) {
+ active = t.isActive();
+ });
+ return active;
+ },
+ activate: function activate() {
+ ttEach(this, function(t) {
+ t.activate();
+ });
+ return this;
+ },
+ deactivate: function deactivate() {
+ ttEach(this, function(t) {
+ t.deactivate();
+ });
+ return this;
+ },
+ isOpen: function isOpen() {
+ var open;
+ ttEach(this.first(), function(t) {
+ open = t.isOpen();
+ });
+ return open;
+ },
+ open: function open() {
+ ttEach(this, function(t) {
+ t.open();
+ });
+ return this;
+ },
+ close: function close() {
+ ttEach(this, function(t) {
+ t.close();
+ });
+ return this;
+ },
+ select: function select(el) {
+ var success = false, $el = $(el);
+ ttEach(this.first(), function(t) {
+ success = t.select($el);
+ });
+ return success;
+ },
+ autocomplete: function autocomplete(el) {
+ var success = false, $el = $(el);
+ ttEach(this.first(), function(t) {
+ success = t.autocomplete($el);
+ });
+ return success;
+ },
+ moveCursor: function moveCursor(delta) {
+ var success = false;
+ ttEach(this.first(), function(t) {
+ success = t.moveCursor(delta);
+ });
+ return success;
+ },
+ val: function val(newVal) {
+ var query;
+ if (!arguments.length) {
+ ttEach(this.first(), function(t) {
+ query = t.getVal();
+ });
+ return query;
+ } else {
+ ttEach(this, function(t) {
+ t.setVal(newVal);
+ });
+ return this;
+ }
+ },
+ destroy: function destroy() {
+ ttEach(this, function(typeahead, $input) {
+ revert($input);
+ typeahead.destroy();
+ });
+ return this;
+ }
+ };
+ $.fn.typeahead = function(method) {
+ if (methods[method]) {
+ return methods[method].apply(this, [].slice.call(arguments, 1));
+ } else {
+ return methods.initialize.apply(this, arguments);
+ }
+ };
+ $.fn.typeahead.noConflict = function noConflict() {
+ $.fn.typeahead = old;
+ return this;
+ };
+ function ttEach($els, fn) {
+ $els.each(function() {
+ var $input = $(this), typeahead;
+ (typeahead = $input.data(keys.typeahead)) && fn(typeahead, $input);
+ });
+ }
+ function buildHintFromInput($input, www) {
+ return $input.clone().addClass(www.classes.hint).removeData().css(www.css.hint).css(getBackgroundStyles($input)).prop("readonly", true).removeAttr("id name placeholder required").attr({
+ autocomplete: "off",
+ spellcheck: "false",
+ tabindex: -1
+ });
+ }
+ function prepInput($input, www) {
+ $input.data(keys.attrs, {
+ dir: $input.attr("dir"),
+ autocomplete: $input.attr("autocomplete"),
+ spellcheck: $input.attr("spellcheck"),
+ style: $input.attr("style")
+ });
+ $input.addClass(www.classes.input).attr({
+ autocomplete: "off",
+ spellcheck: false
+ });
+ try {
+ !$input.attr("dir") && $input.attr("dir", "auto");
+ } catch (e) {}
+ return $input;
+ }
+ function getBackgroundStyles($el) {
+ return {
+ backgroundAttachment: $el.css("background-attachment"),
+ backgroundClip: $el.css("background-clip"),
+ backgroundColor: $el.css("background-color"),
+ backgroundImage: $el.css("background-image"),
+ backgroundOrigin: $el.css("background-origin"),
+ backgroundPosition: $el.css("background-position"),
+ backgroundRepeat: $el.css("background-repeat"),
+ backgroundSize: $el.css("background-size")
+ };
+ }
+ function revert($input) {
+ var www, $wrapper;
+ www = $input.data(keys.www);
+ $wrapper = $input.parent().filter(www.selectors.wrapper);
+ _.each($input.data(keys.attrs), function(val, key) {
+ _.isUndefined(val) ? $input.removeAttr(key) : $input.attr(key, val);
+ });
+ $input.removeData(keys.typeahead).removeData(keys.www).removeData(keys.attrs).removeClass(www.classes.input);
+ if ($wrapper.length) {
+ $input.detach().insertAfter($wrapper);
+ $wrapper.remove();
+ }
+ }
+ function $elOrNull(obj) {
+ var isValid, $el;
+ isValid = _.isJQuery(obj) || _.isElement(obj);
+ $el = isValid ? $(obj).first() : [];
+ return $el.length ? $el : null;
+ }
+ })();
+});
\ No newline at end of file
--- /dev/null
+__all__ = [
+ 'arguments',
+ 'example',
+ 'keyword',
+ 'seealso',
+ 'table',
+ 'underline'
+]
+
+
+class Parser:
+ def __init__(self, pctxt):
+ self.pctxt = pctxt
+
+ def parse(self, line):
+ return line
+
+class PContext:
+ def __init__(self, templates = None):
+ self.set_content_list([])
+ self.templates = templates
+
+ def set_content(self, content):
+ self.set_content_list(content.split("\n"))
+
+ def set_content_list(self, content):
+ self.lines = content
+ self.nblines = len(self.lines)
+ self.i = 0
+ self.stop = False
+
+ def get_lines(self):
+ return self.lines
+
+ def eat_lines(self):
+ count = 0
+ while self.has_more_lines() and self.lines[self.i].strip():
+ count += 1
+ self.next()
+ return count
+
+ def eat_empty_lines(self):
+ count = 0
+ while self.has_more_lines() and not self.lines[self.i].strip():
+ count += 1
+ self.next()
+ return count
+
+ def next(self, count=1):
+ self.i += count
+
+ def has_more_lines(self, offset=0):
+ return self.i + offset < self.nblines
+
+ def get_line(self, offset=0):
+ return self.lines[self.i + offset].rstrip()
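+
+ # Typical traversal (illustrative sketch, not taken from a real parser
+ # run): consume a block of text, then the blank lines separating it from
+ # the next block:
+ #
+ #   pctxt = PContext()
+ #   pctxt.set_content("foo\nbar\n\n\nbaz")
+ #   pctxt.eat_lines()        # returns 2 ("foo" and "bar")
+ #   pctxt.eat_empty_lines()  # returns 2
+ #   pctxt.get_line()         # returns "baz"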
+
+
+# Get the indentation of a line
+def get_indent(line):
+ indent = 0
+ length = len(line)
+ while indent < length and line[indent] == ' ':
+ indent += 1
+ return indent
+
+
+# Remove unneeded indentation
+def remove_indent(list):
+ # Detect the minimum indentation in the list
+ min_indent = -1
+ for line in list:
+ if not line.strip():
+ continue
+ indent = get_indent(line)
+ if min_indent < 0 or indent < min_indent:
+ min_indent = indent
+ # Realign the list content to remove the minimum indentation
+ if min_indent > 0:
+ for index, line in enumerate(list):
+ list[index] = line[min_indent:]
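+
+
+# A minimal sketch of how remove_indent behaves (the line values here are
+# illustrative, not taken from any real document):
+#
+#   lines = ["    foo", "      bar", ""]
+#   remove_indent(lines)
+#   # lines is now ["foo", "  bar", ""]: the smallest indentation found
+#   # on a non-blank line (4 here) is stripped from every line.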
--- /dev/null
+import sys
+import re
+import parser
+
+'''
+TODO: Allow inner data parsing (this will allow parsing the examples provided in an arguments block)
+'''
+class Parser(parser.Parser):
+ def __init__(self, pctxt):
+ parser.Parser.__init__(self, pctxt)
+ #template = pctxt.templates.get_template("parser/arguments.tpl")
+ #self.replace = template.render().strip()
+
+ def parse(self, line):
+ #return re.sub(r'(Arguments *:)', self.replace, line)
+ pctxt = self.pctxt
+
+ result = re.search(r'(Arguments? *:)', line)
+ if result:
+ label = result.group(0)
+ content = []
+
+ desc_indent = False
+ desc = re.sub(r'.*Arguments? *:', '', line).strip()
+
+ indent = parser.get_indent(line)
+
+ pctxt.next()
+ pctxt.eat_empty_lines()
+
+ arglines = []
+ if desc != "none":
+ add_empty_lines = 0
+ while pctxt.has_more_lines() and (parser.get_indent(pctxt.get_line()) > indent):
+ for j in xrange(0, add_empty_lines):
+ arglines.append("")
+ arglines.append(pctxt.get_line())
+ pctxt.next()
+ add_empty_lines = pctxt.eat_empty_lines()
+ '''
+ print line
+
+ if parser.get_indent(line) == arg_indent:
+ argument = re.sub(r' *([^ ]+).*', r'\1', line)
+ if argument:
+ #content.append("<b>%s</b>" % argument)
+ arg_desc = [line.replace(argument, " " * len(self.unescape(argument)), 1)]
+ #arg_desc = re.sub(r'( *)([^ ]+)(.*)', r'\1<b>\2</b>\3', line)
+ arg_desc_indent = parser.get_indent(arg_desc[0])
+ arg_desc[0] = arg_desc[0][arg_indent:]
+ pctxt.next()
+ add_empty_lines = 0
+ while pctxt.has_more_lines() and parser.get_indent(pctxt.get_line()) >= arg_indent:
+ for i in xrange(0, add_empty_lines):
+ arg_desc.append("")
+ arg_desc.append(pctxt.get_line()[arg_indent:])
+ pctxt.next()
+ add_empty_lines = pctxt.eat_empty_lines()
+ # TODO : reduce space at the beginning
+ content.append({
+ 'name': argument,
+ 'desc': arg_desc
+ })
+ '''
+
+ if arglines:
+ new_arglines = []
+ #content = self.parse_args(arglines)
+ parser.remove_indent(arglines)
+ '''
+ pctxt2 = parser.PContext(pctxt.templates)
+ pctxt2.set_content_list(arglines)
+ while pctxt2.has_more_lines():
+ new_arglines.append(parser.example.Parser(pctxt2).parse(pctxt2.get_line()))
+ pctxt2.next()
+ arglines = new_arglines
+ '''
+
+ pctxt.stop = True
+
+ template = pctxt.templates.get_template("parser/arguments.tpl")
+ return template.render(
+ pctxt=pctxt,
+ label=label,
+ desc=desc,
+ content=arglines
+ #content=content
+ )
+ return line
+
+ return line
+
+'''
+ def parse_args(self, data):
+ args = []
+
+ pctxt = parser.PContext()
+ pctxt.set_content_list(data)
+
+ while pctxt.has_more_lines():
+ line = pctxt.get_line()
+ arg_indent = parser.get_indent(line)
+ argument = re.sub(r' *([^ ]+).*', r'\1', line)
+ if True or argument:
+ arg_desc = []
+ trailing_desc = line.replace(argument, " " * len(self.unescape(argument)), 1)[arg_indent:]
+ if trailing_desc.strip():
+ arg_desc.append(trailing_desc)
+ pctxt.next()
+ add_empty_lines = 0
+ while pctxt.has_more_lines() and parser.get_indent(pctxt.get_line()) > arg_indent:
+ for i in xrange(0, add_empty_lines):
+ arg_desc.append("")
+ arg_desc.append(pctxt.get_line()[arg_indent:])
+ pctxt.next()
+ add_empty_lines = pctxt.eat_empty_lines()
+
+ parser.remove_indent(arg_desc)
+
+ args.append({
+ 'name': argument,
+ 'desc': arg_desc
+ })
+ return args
+
+ def unescape(self, s):
+ s = s.replace("&lt;", "<")
+ s = s.replace("&gt;", ">")
+ # this has to be last:
+ s = s.replace("&amp;", "&")
+ return s
+'''
--- /dev/null
+import re
+import parser
+
+# Detect examples blocks
+class Parser(parser.Parser):
+ def __init__(self, pctxt):
+ parser.Parser.__init__(self, pctxt)
+ template = pctxt.templates.get_template("parser/example/comment.tpl")
+ self.comment = template.render(pctxt=pctxt).strip()
+
+
+ def parse(self, line):
+ pctxt = self.pctxt
+
+ result = re.search(r'^ *(Examples? *:)(.*)', line)
+ if result:
+ label = result.group(1)
+
+ desc_indent = False
+ desc = result.group(2).strip()
+
+ # Some examples have a description
+ if desc:
+ desc_indent = len(line) - len(desc)
+
+ indent = parser.get_indent(line)
+
+ if desc:
+ # And some descriptions span multiple lines
+ while pctxt.get_line(1) and parser.get_indent(pctxt.get_line(1)) == desc_indent:
+ desc += " " + pctxt.get_line(1).strip()
+ pctxt.next()
+
+ pctxt.next()
+ add_empty_line = pctxt.eat_empty_lines()
+
+ content = []
+
+ if parser.get_indent(pctxt.get_line()) > indent:
+ if desc:
+ desc = desc[0].upper() + desc[1:]
+ add_empty_line = 0
+ while pctxt.has_more_lines() and ((not pctxt.get_line()) or (parser.get_indent(pctxt.get_line()) > indent)):
+ if pctxt.get_line():
+ for j in xrange(0, add_empty_line):
+ content.append("")
+
+ content.append(re.sub(r'(#.*)$', self.comment, pctxt.get_line()))
+ add_empty_line = 0
+ else:
+ add_empty_line += 1
+ pctxt.next()
+ elif parser.get_indent(pctxt.get_line()) == indent:
+ # Simple example that can't have empty lines
+ if add_empty_line and desc:
+ # This means that the example was on the same line as the 'Example' tag
+ # and was not a description
+ content.append(" " * indent + desc)
+ desc = False
+ else:
+ while pctxt.has_more_lines() and (parser.get_indent(pctxt.get_line()) >= indent):
+ content.append(pctxt.get_line())
+ pctxt.next()
+ pctxt.eat_empty_lines() # Skip remaining empty lines
+
+ pctxt.stop = True
+
+ parser.remove_indent(content)
+
+ template = pctxt.templates.get_template("parser/example.tpl")
+ return template.render(
+ pctxt=pctxt,
+ label=label,
+ desc=desc,
+ content=content
+ )
+ return line
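+
+# The blocks this parser targets look like (illustrative shape only):
+#
+#   Example :
+#       listen www
+#           bind :80
+#
+# i.e. an "Example :" (or "Examples :") label, optionally followed by a
+# description, then an indented block rendered through parser/example.tpl.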
--- /dev/null
+import re
+import parser
+from urllib import quote
+
+class Parser(parser.Parser):
+ def __init__(self, pctxt):
+ parser.Parser.__init__(self, pctxt)
+ self.keywordPattern = re.compile(r'^(%s%s)(%s)' % (
+ '([a-z][a-z0-9\-\+_\.]*[a-z0-9\-\+_)])', # keyword
+ '( [a-z0-9\-_]+)*', # subkeywords
+ '(\([^ ]*\))?', # arg (ex: (<backend>), (<frontend>/<backend>), (<offset1>,<length>[,<offset2>]) ...
+ ))
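+ # Illustrative matches (not exhaustive): on the line
+ # "timeout http-request <timeout>" this pattern captures
+ # "timeout http-request" as group(1) (keyword plus subkeyword), and
+ # group(4) is the empty string since no "(...)" argument is attached
+ # directly to the keyword.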
+
+ def parse(self, line):
+ pctxt = self.pctxt
+ keywords = pctxt.keywords
+ keywordsCount = pctxt.keywordsCount
+ chapters = pctxt.chapters
+
+ res = ""
+
+ if line != "" and not re.match(r'^ ', line):
+ parsed = self.keywordPattern.match(line)
+ if parsed != None:
+ keyword = parsed.group(1)
+ arg = parsed.group(4)
+ parameters = line[len(keyword) + len(arg):]
+ if (parameters != "" and not re.match("^ +((<|\[|\{|/).*|(: [a-z +]+))?(\(deprecated\))?$", parameters)):
+ # Dirty hack
+ # - parameters should only start with one of the characters "<", "[", "{", "/"
+ # - or a colon (":") followed by alphabetic keywords to identify sample fetches (optionally separated by the character "+")
+ # - or the string "(deprecated)" at the end
+ keyword = False
+ else:
+ splitKeyword = keyword.split(" ")
+
+ parameters = arg + parameters
+ else:
+ keyword = False
+
+ if keyword and (len(splitKeyword) <= 5):
+ toplevel = pctxt.details["toplevel"]
+ for j in xrange(0, len(splitKeyword)):
+ subKeyword = " ".join(splitKeyword[0:j + 1])
+ if subKeyword != "no":
+ if not subKeyword in keywords:
+ keywords[subKeyword] = set()
+ keywords[subKeyword].add(pctxt.details["chapter"])
+ res += '<a class="anchor" name="%s"></a>' % subKeyword
+ res += '<a class="anchor" name="%s-%s"></a>' % (toplevel, subKeyword)
+ res += '<a class="anchor" name="%s-%s"></a>' % (pctxt.details["chapter"], subKeyword)
+ res += '<a class="anchor" name="%s (%s)"></a>' % (subKeyword, chapters[toplevel]['title'])
+ res += '<a class="anchor" name="%s (%s)"></a>' % (subKeyword, chapters[pctxt.details["chapter"]]['title'])
+
+ deprecated = parameters.find("(deprecated)")
+ if deprecated != -1:
+ prefix = ""
+ suffix = ""
+ parameters = parameters.replace("(deprecated)", '<span class="label label-warning">(deprecated)</span>')
+ else:
+ prefix = ""
+ suffix = ""
+
+ nextline = pctxt.get_line(1)
+
+ while nextline.startswith(" "):
+ # Found parameters on the next line
+ parameters += "\n" + nextline
+ pctxt.next()
+ if pctxt.has_more_lines(1):
+ nextline = pctxt.get_line(1)
+ else:
+ nextline = ""
+
+
+ parameters = self.colorize(parameters)
+ res += '<div class="keyword">%s<b><a class="anchor" name="%s"></a><a href="#%s">%s</a></b>%s%s</div>' % (prefix, keyword, quote("%s-%s" % (pctxt.details["chapter"], keyword)), keyword, parameters, suffix)
+ pctxt.next()
+ pctxt.stop = True
+ elif line.startswith("/*"):
+ # Skip comments in the documentation
+ while not pctxt.get_line().endswith("*/"):
+ pctxt.next()
+ pctxt.next()
+ else:
+ # This is probably not a keyword but a text, ignore it
+ res += line
+ else:
+ res += line
+
+ return res
+
+ # Used to colorize keyword parameters
+ # TODO : use CSS styling
+ def colorize(self, text):
+ colorized = ""
+ tags = [
+ [ "[" , "]" , "#008" ],
+ [ "{" , "}" , "#800" ],
+ [ "<", ">", "#080" ],
+ ]
+ heap = []
+ pos = 0
+ while pos < len(text):
+ substring = text[pos:]
+ found = False
+ for tag in tags:
+ if substring.startswith(tag[0]):
+ # Opening tag
+ heap.append(tag)
+ colorized += '<span style="color: %s">%s' % (tag[2], substring[0:len(tag[0])])
+ pos += len(tag[0])
+ found = True
+ break
+ elif substring.startswith(tag[1]):
+ # Closing tag
+
+ # pop opening tags until the corresponding one is found
+ openingTag = False
+ while heap and openingTag != tag:
+ openingTag = heap.pop()
+ if openingTag != tag:
+ colorized += '</span>'
+ # all intermediate tags are now closed, we can display the tag
+ colorized += substring[0:len(tag[1])]
+ # and then close it if it was previously opened
+ if openingTag == tag:
+ colorized += '</span>'
+ pos += len(tag[1])
+ found = True
+ break
+ if not found:
+ colorized += substring[0]
+ pos += 1
+ # close all unterminated tags
+ while heap:
+ tag = heap.pop()
+ colorized += '</span>'
+
+ return colorized
+
+
--- /dev/null
+import re
+import parser
+
+class Parser(parser.Parser):
+ def parse(self, line):
+ pctxt = self.pctxt
+
+ result = re.search(r'(See also *:)', line)
+ if result:
+ label = result.group(0)
+
+ desc = re.sub(r'.*See also *:', '', line).strip()
+
+ indent = parser.get_indent(line)
+
+ # Some descriptions are on multiple lines
+ while pctxt.has_more_lines(1) and parser.get_indent(pctxt.get_line(1)) >= indent:
+ desc += " " + pctxt.get_line(1).strip()
+ pctxt.next()
+
+ pctxt.eat_empty_lines()
+ pctxt.next()
+ pctxt.stop = True
+
+ template = pctxt.templates.get_template("parser/seealso.tpl")
+ return template.render(
+ pctxt=pctxt,
+ label=label,
+ desc=desc,
+ )
+
+ return line
--- /dev/null
+import re
+import sys
+import parser
+
+class Parser(parser.Parser):
+ def __init__(self, pctxt):
+ parser.Parser.__init__(self, pctxt)
+ self.table1Pattern = re.compile(r'^ *(-+\+)+-+')
+ self.table2Pattern = re.compile(r'^ *\+(-+\+)+')
+
+ def parse(self, line):
+ global document, keywords, keywordsCount, chapters, keyword_conflicts
+
+ pctxt = self.pctxt
+
+ if pctxt.context['headers']['subtitle'] != 'Configuration Manual':
+ # Quick exit
+ return line
+ elif pctxt.details['chapter'] == "4":
+ # BUG: the matrix in chapter 4 (Proxies) is not displayed correctly, so we skip this chapter
+ return line
+
+ if pctxt.has_more_lines(1):
+ nextline = pctxt.get_line(1)
+ else:
+ nextline = ""
+
+ if self.table1Pattern.match(nextline):
+ # activate table rendering only for the Configuration Manual
+ lineSeparator = nextline
+ nbColumns = nextline.count("+") + 1
+ extraColumns = 0
+ print >> sys.stderr, "Entering table mode (%d columns)" % nbColumns
+ table = []
+ if line.find("|") != -1:
+ row = []
+ while pctxt.has_more_lines():
+ line = pctxt.get_line()
+ if pctxt.has_more_lines(1):
+ nextline = pctxt.get_line(1)
+ else:
+ nextline = ""
+ if line == lineSeparator:
+ # New row
+ table.append(row)
+ row = []
+ if nextline.find("|") == -1:
+ break # End of table
+ else:
+ # Data
+ columns = line.split("|")
+ for j in xrange(0, len(columns)):
+ try:
+ if row[j]:
+ row[j] += "<br />"
+ row[j] += columns[j].strip()
+ except:
+ row.append(columns[j].strip())
+ pctxt.next()
+ else:
+ row = []
+ headers = nextline
+ while pctxt.has_more_lines():
+ line = pctxt.get_line()
+ if pctxt.has_more_lines(1):
+ nextline = pctxt.get_line(1)
+ else:
+ nextline = ""
+
+ if nextline == "":
+ if row: table.append(row)
+ break # End of table
+
+ if (line != lineSeparator) and (line[0] != "-"):
+ start = 0
+
+ if row and not line.startswith(" "):
+ # Row is complete, parse a new one
+ table.append(row)
+ row = []
+
+ tmprow = []
+ while start != -1:
+ end = headers.find("+", start)
+ if end == -1:
+ end = len(headers)
+
+ realend = end
+ if realend == len(headers):
+ realend = len(line)
+ else:
+ while realend < len(line) and line[realend] != " ":
+ realend += 1
+ end += 1
+
+ tmprow.append(line[start:realend])
+
+ start = end + 1
+ if start >= len(headers):
+ start = -1
+ for j in xrange(0, nbColumns):
+ try:
+ row[j] += tmprow[j].strip()
+ except IndexError:
+ row.append(tmprow[j].strip())
+
+ deprecated = row[0].endswith("(deprecated)")
+ if deprecated:
+ row[0] = row[0][: -len("(deprecated)")].rstrip()
+
+ nooption = row[1].startswith("(*)")
+ if nooption:
+ row[1] = row[1][len("(*)"):].strip()
+
+ if deprecated or nooption:
+ extraColumns = 1
+ extra = ""
+ if deprecated:
+ extra += '<span class="label label-warning">(deprecated)</span>'
+ if nooption:
+ extra += '<span>(*)</span>'
+ row.append(extra)
+
+ pctxt.next()
+ print >> sys.stderr, "Leaving table mode"
+ pctxt.next() # skip the next line, which is no longer needed
+ pctxt.stop = True
+
+ return self.renderTable(table, nbColumns, pctxt.details["toplevel"])
+ # elif self.table2Pattern.match(line):
+ # return self.parse_table_format2()
+ elif line.find("May be used in sections") != -1:
+ nextline = pctxt.get_line(1)
+ rows = []
+ headers = line.split(":")
+ rows.append(headers[1].split("|"))
+ rows.append(nextline.split("|"))
+ table = {
+ "rows": rows,
+ "title": headers[0]
+ }
+ pctxt.next(2) # skip the two lines of the table we just parsed
+ pctxt.stop = True
+
+ return self.renderTable(table)
+
+ return line
+
+
+ def parse_table_format2(self):
+ pctxt = self.pctxt
+
+ linesep = pctxt.get_line()
+ rows = []
+
+ pctxt.next()
+ maxcols = 0
+ while pctxt.get_line().strip().startswith("|"):
+ row = pctxt.get_line().strip()[1:-1].split("|")
+ rows.append(row)
+ maxcols = max(maxcols, len(row))
+ pctxt.next()
+ if pctxt.get_line() == linesep:
+ # TODO: find a way to define a special style for the next row
+ pctxt.next()
+ pctxt.stop = True
+
+ return self.renderTable(rows, maxcols)
+
+ # Render tables detected by the conversion parser
+ def renderTable(self, table, maxColumns = 0, toplevel = None):
+ pctxt = self.pctxt
+ template = pctxt.templates.get_template("parser/table.tpl")
+
+ res = ""
+
+ title = None
+ if isinstance(table, dict):
+ title = table["title"]
+ table = table["rows"]
+
+ if not maxColumns:
+ maxColumns = len(table[0])
+
+ rows = []
+
+ mode = "th"
+ headerLine = ""
+ hasKeywords = False
+ i = 0
+ for row in table:
+ line = ""
+
+ if i == 0:
+ row_template = pctxt.templates.get_template("parser/table/header.tpl")
+ else:
+ row_template = pctxt.templates.get_template("parser/table/row.tpl")
+
+ if i > 1 and (i - 1) % 20 == 0 and len(table) > 50:
+ # Repeat headers periodically for long tables
+ rows.append(headerLine)
+
+ j = 0
+ cols = []
+ for column in row:
+ if j >= maxColumns:
+ break
+
+ tplcol = {}
+
+ data = column.strip()
+ keyword = column
+ if j == 0 and i == 0 and keyword == 'keyword':
+ hasKeywords = True
+ if j == 0 and i != 0 and hasKeywords:
+ if keyword.startswith("[no] "):
+ keyword = keyword[len("[no] "):]
+ tplcol['toplevel'] = toplevel
+ tplcol['keyword'] = keyword
+ tplcol['extra'] = []
+ if j == 0 and len(row) > maxColumns:
+ for k in xrange(maxColumns, len(row)):
+ tplcol['extra'].append(row[k])
+ tplcol['data'] = data
+ cols.append(tplcol)
+ j += 1
+ mode = "td"
+
+ line = row_template.render(
+ pctxt=pctxt,
+ columns=cols
+ ).strip()
+ if i == 0:
+ headerLine = line
+
+ rows.append(line)
+
+ i += 1
+
+ return template.render(
+ pctxt=pctxt,
+ title=title,
+ rows=rows,
+ )
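The "May be used in sections" branch in parse() above boils down to splitting the line on its first colon to get a title, then splitting the remainder and the following line on "|" to get two table rows. A standalone sketch of that logic (the function name is mine; the real code keeps the raw, unstripped cells):

```python
def parse_usage_table(line, nextline):
    # Split "May be used in sections : defaults | frontend | ..." on the
    # first colon: the left part becomes the table title, the right part
    # and the following line become the two rows.
    title, _, cells = line.partition(":")
    return {
        "title": title.strip(),
        "rows": [
            [c.strip() for c in cells.split("|")],
            [c.strip() for c in nextline.split("|")],
        ],
    }
```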
--- /dev/null
+import parser
+
+class Parser(parser.Parser):
+ # Detect underlines
+ def parse(self, line):
+ pctxt = self.pctxt
+ if pctxt.has_more_lines(1):
+ nextline = pctxt.get_line(1)
+ if (len(line) > 0) and (len(nextline) > 0) and (nextline[0] == '-') and ("-" * len(line) == nextline):
+ template = pctxt.templates.get_template("parser/underline.tpl")
+ line = template.render(pctxt=pctxt, data=line).strip()
+ pctxt.next(2)
+ pctxt.eat_empty_lines()
+ pctxt.stop = True
+
+ return line
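In isolation, the underline test used by this parser amounts to a small predicate: a heading is a non-empty line immediately followed by a line of dashes of exactly the same length. A sketch (the function name is mine):

```python
def is_underline(line, nextline):
    # A heading is a non-empty line followed by a line made of
    # dashes whose length matches the heading exactly.
    return (len(line) > 0 and len(nextline) > 0
            and nextline[0] == '-' and "-" * len(line) == nextline)
```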
--- /dev/null
+<div class="separator">
+<span class="label label-info">${label}</span>\
+% if desc:
+ ${desc}
+% endif
+% if content:
+<pre class="prettyprint arguments">${"\n".join(content)}</pre>
+% endif
+</div>
--- /dev/null
+<div class="separator">
+<span class="label label-success">${label}</span>
+<pre class="prettyprint">
+% if desc:
+<div class="example-desc">${desc}</div>\
+% endif
+<code>\
+% for line in content:
+${line}
+% endfor
+</code></pre>
+</div>
\ No newline at end of file
--- /dev/null
+<span class="comment">\1</span>
\ No newline at end of file
--- /dev/null
+<div class="page-header"><b>${label}</b> ${desc}</div>
--- /dev/null
+% if title:
+<div><p>${title} :</p>\
+% endif
+<table class="table table-bordered" border="0" cellspacing="0" cellpadding="0">
+% for row in rows:
+${row}
+% endfor
+</table>\
+% if title:
+</div>
+% endif
\ No newline at end of file
--- /dev/null
+<thead><tr>\
+% for col in columns:
+<% data = col['data'] %>\
+<th>${data}</th>\
+% endfor
+</tr></thead>
--- /dev/null
+<% from urllib import quote %>
+<% base = pctxt.context['base'] %>
+<tr>\
+% for col in columns:
+<% data = col['data'] %>\
+<%
+ if data in ['yes']:
+ style = "class=\"alert-success pagination-centered\""
+ data = 'yes<br /><img src="%scss/check.png" alt="yes" title="yes" />' % base
+ elif data in ['no']:
+ style = "class=\"alert-error pagination-centered\""
+ data = 'no<br /><img src="%scss/cross.png" alt="no" title="no" />' % base
+ elif data in ['X']:
+ style = "class=\"pagination-centered\""
+ data = '<img src="%scss/check.png" alt="X" title="yes" />' % base
+ elif data in ['-']:
+ style = "class=\"pagination-centered\""
+ data = ' '
+ elif data in ['*']:
+ style = "class=\"pagination-centered\""
+ else:
+ style = None
+%>\
+<td ${style}>\
+% if "keyword" in col:
+<a href="#${quote("%s-%s" % (col['toplevel'], col['keyword']))}">\
+% for extra in col['extra']:
+<span class="pull-right">${extra}</span>\
+% endfor
+${data}</a>\
+% else:
+${data}\
+% endif
+</td>\
+% endfor
+</tr>
--- /dev/null
+<h5>${data}</h5>
--- /dev/null
+<a class="anchor" id="summary" name="summary"></a>
+<div class="page-header">
+ <h1 id="chapter-summary" data-target="summary">Summary</h1>
+</div>
+<div class="row">
+ <div class="col-md-6">
+ <% previousLevel = None %>
+ % for k in chapterIndexes:
+ <% chapter = chapters[k] %>
+ % if chapter['title']:
+ <%
+ if chapter['level'] == 1:
+ otag = "<b>"
+ etag = "</b>"
+ else:
+ otag = etag = ""
+ %>
+ % if chapter['chapter'] == '7':
+ ## Quick and dirty hack to split the summary in 2 columns
+ ## TODO: implement a generic way to split the summary
+ </div><div class="col-md-6">
+ <% previousLevel = None %>
+ % endif
+ % if otag and previousLevel:
+ <br />
+ % endif
+ <div class="row">
+ <div class="col-md-2 pagination-right noheight">${otag}<small>${chapter['chapter']}.</small>${etag}</div>
+ <div class="col-md-10 noheight">
+ % for tab in range(1, chapter['level']):
+ <div class="tab">
+ % endfor
+ <a href="#${chapter['chapter']}">${otag}${chapter['title']}${etag}</a>
+ % for tab in range(1, chapter['level']):
+ </div>
+ % endfor
+ </div>
+ </div>
+ <% previousLevel = chapter['level'] %>
+ % endif
+ % endfor
+ </div>
+</div>
--- /dev/null
+<!DOCTYPE html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8" />
+ <title>${headers['title']} ${headers['version']} - ${headers['subtitle']}</title>
+ <link href="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.1.1/css/bootstrap.min.css" rel="stylesheet" />
+ <link href="${base}css/page.css?${version}" rel="stylesheet" />
+ </head>
+ <body>
+ <nav class="navbar navbar-default navbar-fixed-top" role="navigation">
+ <div class="navbar-header">
+ <button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#menu">
+ <span class="sr-only">Toggle navigation</span>
+ <span class="icon-bar"></span>
+ <span class="icon-bar"></span>
+ <span class="icon-bar"></span>
+ </button>
+ <a class="navbar-brand" href="${base}index.html">${headers['title']} <small>${headers['subtitle']}</small></a>
+ </div>
+ <!-- /.navbar-header -->
+
+ <!-- Collect the nav links, forms, and other content for toggling -->
+ <div class="collapse navbar-collapse" id="menu">
+ <ul class="nav navbar-nav">
+ <li><a href="http://www.haproxy.org/">HAProxy home page</a></li>
+ <li class="dropdown">
+ <a href="#" class="dropdown-toggle" data-toggle="dropdown">Versions <b class="caret"></b></a>
+ <ul class="dropdown-menu">
+ ## TODO : provide a structure to dynamically generate per version links
+ <li class="dropdown-header">HAProxy 1.4</li>
+ <li><a href="${base}configuration-1.4.html">Configuration Manual <small>(stable)</small></a></li>
+ <li><a href="${base}snapshot/configuration-1.4.html">Configuration Manual <small>(snapshot)</small></a></li>
+ <li><a href="http://git.1wt.eu/git/haproxy-1.4.git/">GIT Repository</a></li>
+ <li><a href="http://www.haproxy.org/git/?p=haproxy-1.4.git">Browse repository</a></li>
+ <li><a href="http://www.haproxy.org/download/1.4/">Browse directory</a></li>
+ <li class="divider"></li>
+ <li class="dropdown-header">HAProxy 1.5</li>
+ <li><a href="${base}configuration-1.5.html">Configuration Manual <small>(stable)</small></a></li>
+ <li><a href="${base}snapshot/configuration-1.5.html">Configuration Manual <small>(snapshot)</small></a></li>
+ <li><a href="http://git.1wt.eu/git/haproxy-1.5.git/">GIT Repository</a></li>
+ <li><a href="http://www.haproxy.org/git/?p=haproxy-1.5.git">Browse repository</a></li>
+ <li><a href="http://www.haproxy.org/download/1.5/">Browse directory</a></li>
+ <li class="divider"></li>
+ <li class="dropdown-header">HAProxy 1.6</li>
+ <li><a href="${base}configuration-1.6.html">Configuration Manual <small>(stable)</small></a></li>
+ <li><a href="${base}snapshot/configuration-1.6.html">Configuration Manual <small>(snapshot)</small></a></li>
+ <li><a href="${base}intro-1.6.html">Starter Guide <small>(stable)</small></a></li>
+ <li><a href="${base}snapshot/intro-1.6.html">Starter Guide <small>(snapshot)</small></a></li>
+ <li><a href="http://git.1wt.eu/git/haproxy.git/">GIT Repository</a></li>
+ <li><a href="http://www.haproxy.org/git/?p=haproxy.git">Browse repository</a></li>
+ <li><a href="http://www.haproxy.org/download/1.6/">Browse directory</a></li>
+ </ul>
+ </li>
+ </ul>
+ </div>
+ </nav>
+ <!-- /.navbar-static-side -->
+
+ <div id="wrapper">
+
+ <div id="sidebar">
+ <form onsubmit="search(this.keyword.value); return false" role="form">
+ <div id="searchKeyword" class="form-group">
+ <input type="text" class="form-control typeahead" id="keyword" name="keyword" placeholder="Search..." autocomplete="off">
+ </div>
+ </form>
+ <p>
+ Keyboard navigation: <span id="keyboardNavStatus"></span>
+ </p>
+ <p>
+ When enabled, you can use <strong>left</strong> and <strong>right</strong> arrow keys to navigate between chapters.<br>
+ The feature is automatically disabled when the search field is focused.
+ </p>
+ <p class="text-right">
+ <small>Converted with <a href="https://github.com/cbonte/haproxy-dconv">haproxy-dconv</a> v<b>${version}</b> on <b>${date}</b></small>
+ </p>
+ </div>
+ <!-- /.sidebar -->
+
+ <div id="page-wrapper">
+ <div class="row">
+ <div class="col-lg-12">
+ <div class="text-center">
+ <h1>${headers['title']}</h1>
+ <h2>${headers['subtitle']}</h2>
+ <p><strong>${headers['version']}</strong></p>
+ <p>
+ <a href="http://www.haproxy.org/" title="HAProxy Home Page"><img src="${base}img/logo-med.png" /></a><br>
+ ${headers['author']}<br>
+ ${headers['date']}
+ </p>
+ </div>
+
+ ${document}
+ <br>
+ <hr>
+ <div class="text-right">
+ ${headers['title']} ${headers['version'].replace("version ", "")} – ${headers['subtitle']}<br>
+ <small>${headers['date']}, ${headers['author']}</small>
+ </div>
+ </div>
+ <!-- /.col-lg-12 -->
+ </div>
+ <!-- /.row -->
+ <div style="position: fixed; z-index: 1000; bottom: 0; left: 0; right: 0; padding: 10px">
+ <ul class="pager" style="margin: 0">
+ <li class="previous"><a id="previous" href="#"></a></li>
+ <li class="next"><a id="next" href="#"></a></li>
+ </ul>
+ </div>
+ </div>
+ <!-- /#page-wrapper -->
+
+ </div>
+ <!-- /#wrapper -->
+
+ <script src="//cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.1.1/js/bootstrap.min.js"></script>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/typeahead.js/0.11.1/typeahead.bundle.min.js"></script>
+ <script>
+ /* Keyword search */
+ var searchFocus = false
+ var keywords = [
+ "${'",\n\t\t\t\t"'.join(keywords)}"
+ ]
+
+ function updateKeyboardNavStatus() {
+ var status = searchFocus ? '<span class="label label-disabled">Disabled</span>' : '<span class="label label-success">Enabled</span>'
+ $('#keyboardNavStatus').html(status)
+ }
+
+ function search(keyword) {
+ if (keyword && !!~$.inArray(keyword, keywords)) {
+ window.location.hash = keyword
+ }
+ }
+ // constructs the suggestion engine
+ var kwbh = new Bloodhound({
+ datumTokenizer: Bloodhound.tokenizers.obj.whitespace('value'),
+ queryTokenizer: Bloodhound.tokenizers.whitespace,
+ local: $.map(keywords, function(keyword) { return { value: keyword }; })
+ });
+ kwbh.initialize()
+
+ $('#searchKeyword .typeahead').typeahead({
+ hint: true,
+ highlight: true,
+ minLength: 1,
+ autoselect: true
+ },
+ {
+ name: 'keywords',
+ displayKey: 'value',
+ limit: keywords.length,
+ source: kwbh.ttAdapter()
+ }).focus(function() {
+ searchFocus = true
+ updateKeyboardNavStatus()
+ }).blur(function() {
+ searchFocus = false
+ updateKeyboardNavStatus()
+ }).bind('typeahead:selected', function ($e, datum) {
+ search(datum.value)
+ })
+
+ /* EXPERIMENTAL - Previous/Next navigation */
+ var headings = $(":header")
+ var previousTarget = false
+ var nextTarget = false
+ var $previous = $('#previous')
+ var $next = $('#next')
+ function refreshNavigation() {
+ var previous = false
+ var next = false
+ $.each(headings, function(item, value) {
+ var el = $(value)
+
+ // TODO : avoid target recalculation on each refresh
+ var target = el.attr('data-target')
+ if (! target) return true
+
+ var target_el = $('#' + target.replace(/\./, "\\."))
+ if (! target_el.attr('id')) return true
+
+ if (target_el.offset().top < $(window).scrollTop()) {
+ previous = el
+ }
+ if (target_el.offset().top - 1 > $(window).scrollTop()) {
+ next = el
+ }
+ if (next) return false
+ })
+
+ previousTarget = previous ? previous.attr('data-target') : 'top'
+ $previous.html(
+ previous && previousTarget ?
+ '<span class="glyphicon glyphicon-arrow-left"></span> ' + previous.text() :
+ '<span class="glyphicon glyphicon-arrow-up"></span> Top'
+ ).attr('href', '#' + previousTarget)
+
+ nextTarget = next ? next.attr('data-target') : 'bottom'
+ $next.html(
+ next && nextTarget ?
+ next.text() + ' <span class="glyphicon glyphicon-arrow-right"></span>' :
+ 'Bottom <span class="glyphicon glyphicon-arrow-down"></span>'
+ ).attr('href', '#' + nextTarget)
+ }
+
+ $(window).scroll(function () {
+ refreshNavigation()
+ });
+ $(document).ready(function() {
+ refreshNavigation()
+ updateKeyboardNavStatus()
+ });
+
+ /* EXPERIMENTAL - Enable keyboard navigation */
+ $(document).keydown(function(e){
+ if (searchFocus) return
+
+ switch(e.which) {
+ case 37: // left
+ window.location.hash = previousTarget ? previousTarget : 'top'
+ break
+
+ case 39: // right
+ window.location.hash = nextTarget ? nextTarget : 'bottom'
+ break
+
+ default: return // exit this handler for other keys
+ }
+ e.preventDefault()
+ })
+ </script>
+ ${footer}
+ <a class="anchor" name="bottom"></a>
+ </body>
+</html>
--- /dev/null
+#!/bin/bash
+
+PROJECT_HOME=$(dirname "$(readlink -f "$0")")
+cd $PROJECT_HOME || exit 1
+
+WORK_DIR=$PROJECT_HOME/work
+
+function on_exit()
+{
+ echo "-- END $(date)"
+}
+
+function init()
+{
+ trap on_exit EXIT
+
+ echo
+ echo "-- START $(date)"
+ echo "PROJECT_HOME = $PROJECT_HOME"
+
+ echo "Preparing work directories..."
+ mkdir -p $WORK_DIR || exit 1
+ mkdir -p $WORK_DIR/haproxy || exit 1
+ mkdir -p $WORK_DIR/haproxy-dconv || exit 1
+
+ UPDATED=0
+ PUSH=0
+
+}
+
+# Needed as "git -C" is only available since git 1.8.5
+function git-C()
+{
+ _gitpath=$1
+ shift
+ echo "git --git-dir=$_gitpath/.git --work-tree=$_gitpath $@" >&2
+ git --git-dir=$_gitpath/.git --work-tree=$_gitpath "$@"
+}
+
+function fetch_haproxy_dconv()
+{
+ echo "Fetching latest haproxy-dconv public version..."
+ if [ ! -e $WORK_DIR/haproxy-dconv/master ];
+ then
+ git clone -v git://github.com/cbonte/haproxy-dconv.git $WORK_DIR/haproxy-dconv/master || exit 1
+ fi
+ GIT="git-C $WORK_DIR/haproxy-dconv/master"
+
+ OLD_MD5="$($GIT log -1 | md5sum) $($GIT describe --tags)"
+ $GIT checkout master && $GIT pull -v
+ version=$($GIT describe --tags)
+ version=${version%-g*}
+ NEW_MD5="$($GIT log -1 | md5sum) $($GIT describe --tags)"
+ if [ "$OLD_MD5" != "$NEW_MD5" ];
+ then
+ UPDATED=1
+ fi
+
+ echo "Fetching latest haproxy-dconv public pages version..."
+ if [ ! -e $WORK_DIR/haproxy-dconv/gh-pages ];
+ then
+ cp -a $WORK_DIR/haproxy-dconv/master $WORK_DIR/haproxy-dconv/gh-pages || exit 1
+ fi
+ GIT="git-C $WORK_DIR/haproxy-dconv/gh-pages"
+
+ $GIT checkout gh-pages && $GIT pull -v
+}
+
+function fetch_haproxy()
+{
+ url=$1
+ path=$2
+
+ echo "Fetching HAProxy repository from $url..."
+ if [ ! -e $path ];
+ then
+ git clone -v $url $path || exit 1
+ fi
+ GIT="git-C $path"
+
+ $GIT checkout master && $GIT pull -v
+}
+
+function _generate_file()
+{
+ infile=$1
+ destfile=$2
+ git_version=$3
+ state=$4
+
+ $GIT checkout $git_version
+
+ if [ -e $gitpath/doc/$infile ];
+ then
+
+ git_version_simple=${git_version%-g*}
+ doc_version=$(tail -n1 $destfile 2>/dev/null | grep " git:" | sed 's/.* git:\([^ ]*\).*/\1/')
+ if [ $UPDATED -eq 1 -o "$git_version" != "$doc_version" ];
+ then
+ HTAG="VERSION-$(basename $gitpath | sed 's/[.]/\\&/g')"
+ if [ "$state" == "snapshot" ];
+ then
+ base=".."
+ HTAG="$HTAG-SNAPSHOT"
+ else
+ base="."
+ fi
+
+
+ $WORK_DIR/haproxy-dconv/master/haproxy-dconv.py -i $gitpath/doc/$infile -o $destfile --base=$base &&
+ echo "<!-- git:$git_version -->" >> $destfile &&
+ sed -i "s/\(<\!-- $HTAG -->\)\(.*\)\(<\!-- \/$HTAG -->\)/\1${git_version_simple}\3/" $docroot/index.html
+
+ else
+ echo "Already up to date."
+ fi
+
+ if [ "$doc_version" != "" -a "$git_version" != "$doc_version" ];
+ then
+ changelog=$($GIT log --oneline $doc_version..$git_version $gitpath/doc/$infile)
+ else
+ changelog=""
+ fi
+
+ GITDOC="git-C $docroot"
+ if [ "$($GITDOC status -s $destfile)" != "" ];
+ then
+ $GITDOC add $destfile &&
+ $GITDOC commit -m "Updating HAProxy $state $infile ${git_version_simple} generated by haproxy-dconv $version" -m "$changelog" $destfile $docroot/index.html &&
+ PUSH=1
+ fi
+ fi
+}
+
+function generate_docs()
+{
+ url=$1
+ gitpath=$2
+ docroot=$3
+ infile=$4
+ outfile=$5
+
+ fetch_haproxy $url $gitpath
+
+ GIT="git-C $gitpath"
+
+ $GIT checkout master
+ git_version=$($GIT describe --tags --match 'v*')
+ git_version_stable=${git_version%-*-g*}
+
+ echo "Generating snapshot version $git_version..."
+ _generate_file $infile $docroot/snapshot/$outfile $git_version snapshot
+
+ echo "Generating stable version $git_version..."
+ _generate_file $infile $docroot/$outfile $git_version_stable stable
+}
+
+function push()
+{
+ docroot=$1
+ GITDOC="git-C $docroot"
+
+ if [ $PUSH -eq 1 ];
+ then
+ $GITDOC push origin gh-pages
+ fi
+
+}
+
+
+init
+fetch_haproxy_dconv
+generate_docs http://git.1wt.eu/git/haproxy-1.4.git/ $WORK_DIR/haproxy/1.4 $WORK_DIR/haproxy-dconv/gh-pages configuration.txt configuration-1.4.html
+generate_docs http://git.1wt.eu/git/haproxy-1.5.git/ $WORK_DIR/haproxy/1.5 $WORK_DIR/haproxy-dconv/gh-pages configuration.txt configuration-1.5.html
+generate_docs http://git.1wt.eu/git/haproxy.git/ $WORK_DIR/haproxy/1.6 $WORK_DIR/haproxy-dconv/gh-pages configuration.txt configuration-1.6.html
+generate_docs http://git.1wt.eu/git/haproxy.git/ $WORK_DIR/haproxy/1.6 $WORK_DIR/haproxy-dconv/gh-pages intro.txt intro-1.6.html
+push $WORK_DIR/haproxy-dconv/gh-pages
--- /dev/null
+[DEFAULT]
+pristine-tar = True
+upstream-branch = upstream-1.6
+debian-branch = master
--- /dev/null
+.TH HALOG "1" "July 2013" "halog" "User Commands"
+.SH NAME
+halog \- HAProxy log statistics reporter
+.SH SYNOPSIS
+.B halog
+[\fI-h|--help\fR]
+.br
+.B halog
+[\fIoptions\fR] <LOGFILE
+.SH DESCRIPTION
+.B halog
+reads HAProxy log data from stdin and extracts and displays lines matching
+user-specified criteria.
+.SH OPTIONS
+.SS Input filters \fR(several filters may be combined)
+.TP
+\fB\-H\fR
+Only match lines containing HTTP logs (ignore TCP)
+.TP
+\fB\-E\fR
+Only match lines without any error (no 5xx status)
+.TP
+\fB\-e\fR
+Only match lines with errors (status 5xx or negative)
+.TP
+\fB\-rt\fR|\fB\-RT\fR <time>
+Only match response times larger|smaller than <time>
+.TP
+\fB\-Q\fR|\fB\-QS\fR
+Only match queued requests (any queue|server queue)
+.TP
+\fB\-tcn\fR|\fB\-TCN\fR <code>
+Only match requests with/without termination code <code>
+.TP
+\fB\-hs\fR|\fB\-HS\fR <[min][:][max]>
+Only match requests with HTTP status codes within/not within min..max. Either
+bound may be omitted; if no ':' is specified, the exact status code is matched.
+.SS
+Modifiers
+.TP
+\fB\-v\fR
+Invert the input filtering condition
+.TP
+\fB\-q\fR
+Don't report errors/warnings
+.TP
+\fB\-m\fR <lines>
+Limit output to the first <lines> lines
+.SS
+Output filters \fR\- only one may be used at a time
+.TP
+\fB\-c\fR
+Only report the number of lines that would have been printed
+.TP
+\fB\-pct\fR
+Output connect and response times percentiles
+.TP
+\fB\-st\fR
+Output number of requests per HTTP status code
+.TP
+\fB\-cc\fR
+Output number of requests per cookie code (2 chars)
+.TP
+\fB\-tc\fR
+Output number of requests per termination code (2 chars)
+.TP
+\fB\-srv\fR
+Output statistics per server (time, requests, errors)
+.TP
+\fB\-u\fR*
+Output statistics per URL (time, requests, errors)
+.br
+Additional characters indicate the output sorting key:
+.RS
+.TP
+\fB\-u\fR
+URL
+.TP
+\fB\-uc\fR
+Request count
+.TP
+\fB\-ue\fR
+Error count
+.TP
+\fB\-ua\fR
+Average response time
+.TP
+\fB\-ut\fR
+Average total time
+.TP
+\fB\-uao\fR, \fB\-uto\fR
+Average times computed on valid ('OK') requests
+.TP
+\fB\-uba\fR
+Average bytes returned
+.TP
+\fB\-ubt\fR
+Total bytes returned
+.RE
+.SH "SEE ALSO"
+.BR haproxy (1)
+.SH AUTHOR
+.PP
+\fBhalog\fR was written by Willy Tarreau <w@1wt.eu> and is part of \fBhaproxy\fR(1).
+.PP
+This manual page was written by Apollon Oikonomopoulos <apoikos@gmail.com> for the Debian project (but may
+be used by others).
+
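The -hs range syntax documented above can be modelled as follows (a sketch only; halog itself is written in C, and the helper name is mine):

```python
def status_matches(spec, status):
    # "-hs" accepts "min:max" with either bound optional; a bare
    # number (no colon) means an exact status match.
    if ":" not in spec:
        return status == int(spec)
    low, _, high = spec.partition(":")
    if low and status < int(low):
        return False
    if high and status > int(high):
        return False
    return True
```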
--- /dev/null
+Document: haproxy-doc
+Title: HAProxy Documentation
+Author: Willy Tarreau
+Abstract: This documentation covers the configuration of HAProxy.
+Section: System/Administration
+
+Format: HTML
+Index: /usr/share/doc/haproxy-doc/html/configuration.html
+Files: /usr/share/doc/haproxy-doc/html/*.html
--- /dev/null
+doc/configuration.html usr/share/doc/haproxy-doc/html/
+doc/intro.html usr/share/doc/haproxy-doc/html/
+doc/lua-api/_build/html/* usr/share/doc/haproxy-doc/lua/
+debian/dconv/css/* usr/share/doc/haproxy-doc/html/css/
+debian/dconv/js/* usr/share/doc/haproxy-doc/html/js/
+debian/dconv/img/* usr/share/doc/haproxy-doc/html/img/
--- /dev/null
+usr/share/javascript/bootstrap/css/bootstrap.min.css usr/share/doc/haproxy-doc/html/css/bootstrap.min.css
+usr/share/javascript/bootstrap/js/bootstrap.min.js usr/share/doc/haproxy-doc/html/js/bootstrap.min.js
+usr/share/javascript/bootstrap/fonts usr/share/doc/haproxy-doc/html/fonts
+usr/share/javascript/jquery/jquery.min.js usr/share/doc/haproxy-doc/html/js/jquery.min.js
--- /dev/null
+Syslog support
+--------------
+Upstream recommends using syslog over UDP to log from HAProxy processes, as
+this allows seamless logging from chroot'ed processes without access to
+/dev/log. However, many syslog implementations do not enable UDP syslog by
+default.
+
+The default HAProxy configuration in Debian uses /dev/log for logging and
+ships an rsyslog snippet that creates /dev/log in HAProxy's chroot and logs all
+HAProxy messages to /var/log/haproxy.log. To take advantage of this, you must
+restart rsyslog after installing this package. For other syslog daemons you
+will have to take manual measures to enable UDP logging or create /dev/log
+under HAProxy's chroot:
+a. For sysklogd, add SYSLOG="-a /var/lib/haproxy/dev/log" to
+ /etc/default/syslog.
+b. For inetutils-syslogd, add SYSLOGD_OPTS="-a /var/lib/haproxy/dev/log" to
+ /etc/default/inetutils-syslogd.
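For reference, the UDP syslog transport recommended by upstream amounts to a single datagram carrying an RFC 3164 priority prefix; a minimal sketch of what a chrooted sender effectively does (host, port, facility and severity values are illustrative):

```python
import socket

def send_udp_syslog(msg, host="127.0.0.1", port=514,
                    facility=16, severity=6):
    # PRI = facility * 8 + severity; local0.info gives <134>.
    pri = facility * 8 + severity
    datagram = ("<%d>%s" % (pri, msg)).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # A UDP send needs no connection, which is why it works
        # from inside a chroot without a /dev/log socket.
        sock.sendto(datagram, (host, port))
    finally:
        sock.close()
    return pri
```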
--- /dev/null
+global
+ log /dev/log local0
+ log /dev/log local1 notice
+ chroot /var/lib/haproxy
+ stats socket /run/haproxy/admin.sock mode 660 level admin
+ stats timeout 30s
+ user haproxy
+ group haproxy
+ daemon
+
+ # Default SSL material locations
+ ca-base /etc/ssl/certs
+ crt-base /etc/ssl/private
+
+ # Default ciphers to use on SSL-enabled listening sockets.
+ # For more information, see ciphers(1SSL). This list is from:
+ # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
+ ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
+ ssl-default-bind-options no-sslv3
+
+defaults
+ log global
+ mode http
+ option httplog
+ option dontlognull
+ timeout connect 5000
+ timeout client 50000
+ timeout server 50000
+ errorfile 400 /etc/haproxy/errors/400.http
+ errorfile 403 /etc/haproxy/errors/403.http
+ errorfile 408 /etc/haproxy/errors/408.http
+ errorfile 500 /etc/haproxy/errors/500.http
+ errorfile 502 /etc/haproxy/errors/502.http
+ errorfile 503 /etc/haproxy/errors/503.http
+ errorfile 504 /etc/haproxy/errors/504.http
--- /dev/null
+# Defaults file for HAProxy
+#
+# This is sourced by both the init script and the systemd unit file, so do not
+# treat it as a shell script fragment.
+
+# Change the config file location if needed
+#CONFIG="/etc/haproxy/haproxy.cfg"
+
+# Add extra flags here, see haproxy(1) for a few options
+#EXTRAOPTS="-de -m 16"
--- /dev/null
+etc/haproxy
+etc/haproxy/errors
+var/lib/haproxy
+var/lib/haproxy/dev
--- /dev/null
+doc/architecture.txt
+doc/configuration.txt
+contrib
+README
--- /dev/null
+examples/*.cfg
--- /dev/null
+#!/bin/sh
+### BEGIN INIT INFO
+# Provides: haproxy
+# Required-Start: $local_fs $network $remote_fs $syslog $named
+# Required-Stop: $local_fs $remote_fs $syslog $named
+# Default-Start: 2 3 4 5
+# Default-Stop: 0 1 6
+# Short-Description: fast and reliable load balancing reverse proxy
+# Description: This file should be used to start and stop haproxy.
+### END INIT INFO
+
+# Author: Arnaud Cornet <acornet@debian.org>
+
+PATH=/sbin:/usr/sbin:/bin:/usr/bin
+PIDFILE=/var/run/haproxy.pid
+CONFIG=/etc/haproxy/haproxy.cfg
+HAPROXY=/usr/sbin/haproxy
+RUNDIR=/run/haproxy
+EXTRAOPTS=
+
+test -x $HAPROXY || exit 0
+
+if [ -e /etc/default/haproxy ]; then
+ . /etc/default/haproxy
+fi
+
+test -f "$CONFIG" || exit 0
+
+[ -f /etc/default/rcS ] && . /etc/default/rcS
+. /lib/lsb/init-functions
+
+
+check_haproxy_config()
+{
+ $HAPROXY -c -f "$CONFIG" >/dev/null
+ if [ $? -eq 1 ]; then
+ log_end_msg 1
+ exit 1
+ fi
+}
+
+haproxy_start()
+{
+ [ -d "$RUNDIR" ] || mkdir "$RUNDIR"
+ chown haproxy:haproxy "$RUNDIR"
+ chmod 2775 "$RUNDIR"
+
+ check_haproxy_config
+
+ start-stop-daemon --quiet --oknodo --start --pidfile "$PIDFILE" \
+ --exec $HAPROXY -- -f "$CONFIG" -D -p "$PIDFILE" \
+ $EXTRAOPTS || return 2
+ return 0
+}
+
+haproxy_stop()
+{
+ if [ ! -f $PIDFILE ] ; then
+ # This is a success according to LSB
+ return 0
+ fi
+
+ ret=0
+ tmppid="$(mktemp)"
+
+ # HAProxy's pidfile may contain multiple PIDs, if nbproc > 1, so loop
+ # over each PID. Note that start-stop-daemon has a --pid option, but it
+ # was introduced in dpkg 1.17.6, post wheezy, so we use a temporary
+ # pidfile instead to ease backports.
+ for pid in $(cat $PIDFILE); do
+ echo "$pid" > "$tmppid"
+ start-stop-daemon --quiet --oknodo --stop \
+ --retry 5 --pidfile "$tmppid" --exec $HAPROXY || ret=$?
+ done
+
+ rm -f "$tmppid"
+ [ $ret -eq 0 ] && rm -f $PIDFILE
+
+ return $ret
+}
+
+haproxy_reload()
+{
+ check_haproxy_config
+
+ $HAPROXY -f "$CONFIG" -p $PIDFILE -D $EXTRAOPTS -sf $(cat $PIDFILE) \
+ || return 2
+ return 0
+}
+
+haproxy_status()
+{
+ if [ ! -f $PIDFILE ] ; then
+ # program not running
+ return 3
+ fi
+
+ for pid in $(cat $PIDFILE) ; do
+ if ! ps --no-headers p "$pid" | grep haproxy > /dev/null ; then
+ # program running, bogus pidfile
+ return 1
+ fi
+ done
+
+ return 0
+}
+
+
+case "$1" in
+start)
+ log_daemon_msg "Starting haproxy" "haproxy"
+ haproxy_start
+ ret=$?
+ case "$ret" in
+ 0)
+ log_end_msg 0
+ ;;
+ 1)
+ log_end_msg 1
+ echo "pid file '$PIDFILE' found, haproxy not started."
+ ;;
+ 2)
+ log_end_msg 1
+ ;;
+ esac
+ exit $ret
+ ;;
+stop)
+ log_daemon_msg "Stopping haproxy" "haproxy"
+ haproxy_stop
+ ret=$?
+ case "$ret" in
+ 0|1)
+ log_end_msg 0
+ ;;
+ 2)
+ log_end_msg 1
+ ;;
+ esac
+ exit $ret
+ ;;
+reload|force-reload)
+ log_daemon_msg "Reloading haproxy" "haproxy"
+ haproxy_reload
+ ret=$?
+ case "$ret" in
+ 0|1)
+ log_end_msg 0
+ ;;
+ 2)
+ log_end_msg 1
+ ;;
+ esac
+ exit $ret
+ ;;
+restart)
+ log_daemon_msg "Restarting haproxy" "haproxy"
+ haproxy_stop
+ haproxy_start
+ ret=$?
+ case "$ret" in
+ 0)
+ log_end_msg 0
+ ;;
+ 1)
+ log_end_msg 1
+ ;;
+ 2)
+ log_end_msg 1
+ ;;
+ esac
+ exit $ret
+ ;;
+status)
+ haproxy_status
+ ret=$?
+ case "$ret" in
+ 0)
+ echo "haproxy is running."
+ ;;
+ 1)
+ echo "haproxy dead, but $PIDFILE exists."
+ ;;
+ *)
+ echo "haproxy not running."
+ ;;
+ esac
+ exit $ret
+ ;;
+*)
+ echo "Usage: /etc/init.d/haproxy {start|stop|reload|force-reload|restart|status}"
+ exit 2
+ ;;
+esac
+
+:
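The multi-PID pidfile handling in haproxy_stop() and haproxy_status() above reduces to splitting the file contents on whitespace, one PID per nbproc process. As a sketch (the function name is mine):

```python
def read_pids(pidfile_text):
    # With nbproc > 1, HAProxy writes one PID per line; tolerate
    # blank lines and stray whitespace around each entry.
    return [int(token) for token in pidfile_text.split()]
```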
--- /dev/null
+debian/haproxy.cfg etc/haproxy
+examples/errorfiles/*.http etc/haproxy/errors
+contrib/systemd/haproxy.service lib/systemd/system
+contrib/halog/halog usr/bin
--- /dev/null
+haproxy binary: binary-without-manpage usr/sbin/haproxy-systemd-wrapper
--- /dev/null
+mv_conffile /etc/rsyslog.d/haproxy.conf /etc/rsyslog.d/49-haproxy.conf 1.5.3-2~
--- /dev/null
+doc/haproxy.1
+doc/lua-api/_build/man/haproxy-lua.1
+debian/halog.1
--- /dev/null
+#!/bin/sh
+
+set -e
+
+adduser --system --disabled-password --disabled-login --home /var/lib/haproxy \
+ --no-create-home --quiet --force-badname --group haproxy
+
+#DEBHELPER#
+
+if [ -n "$2" ] && dpkg --compare-versions "$2" gt "1.5~dev24-2~"; then
+ # Reload already running instances. Since 1.5~dev24-2 we do not stop
+ # haproxy in prerm during upgrades.
+ invoke-rc.d haproxy reload || true
+fi
+
+exit 0
--- /dev/null
+#!/bin/sh
+
+set -e
+
+#DEBHELPER#
+
+case "$1" in
+ purge)
+ deluser --system haproxy || true
+ delgroup --system haproxy || true
+ ;;
+ *)
+ ;;
+esac
+
+exit 0
--- /dev/null
+d /run/haproxy 2775 haproxy haproxy -
--- /dev/null
+" detect HAProxy configuration
+au BufRead,BufNewFile haproxy*.cfg set filetype=haproxy
--- /dev/null
+/var/log/haproxy.log {
+ daily
+ rotate 52
+ missingok
+ notifempty
+ compress
+ delaycompress
+ postrotate
+ invoke-rc.d rsyslog rotate >/dev/null 2>&1 || true
+ endscript
+}
--- /dev/null
+From: Apollon Oikonomopoulos <apoikos@gmail.com>
+Date: Tue, 2 Jul 2013 15:24:59 +0300
+Subject: Use dpkg-buildflags to build halog
+
+Forwarded: no
+Last-Update: 2013-07-02
+---
+ contrib/halog/Makefile | 16 +++++-----------
+ 1 file changed, 5 insertions(+), 11 deletions(-)
+
+diff --git a/contrib/halog/Makefile b/contrib/halog/Makefile
+index 5e687c0..ab34027 100644
+--- a/contrib/halog/Makefile
++++ b/contrib/halog/Makefile
+@@ -1,22 +1,16 @@
+ EBTREE_DIR = ../../ebtree
+ INCLUDE = -I../../include -I$(EBTREE_DIR)
+
+-CC = gcc
+-
+-# note: it is recommended to also add -fomit-frame-pointer on i386
+-OPTIMIZE = -O3
++CPPFLAGS:=$(shell dpkg-buildflags --get CPPFLAGS)
++CFLAGS:=$(shell dpkg-buildflags --get CFLAGS)
++LDFLAGS:=$(shell dpkg-buildflags --get LDFLAGS)
+
+-# most recent glibc provide platform-specific optimizations that make
+-# memchr faster than the generic C implementation (eg: SSE and prefetch
+-# on x86_64). Try with an without. In general, on x86_64 it's better to
+-# use memchr using the define below.
+-# DEFINE = -DUSE_MEMCHR
+-DEFINE =
++CC = gcc
+
+ OBJS = halog
+
+ halog: halog.c fgets2.c
+- $(CC) $(OPTIMIZE) $(DEFINE) -o $@ $(INCLUDE) $(EBTREE_DIR)/ebtree.c $(EBTREE_DIR)/eb32tree.c $(EBTREE_DIR)/eb64tree.c $(EBTREE_DIR)/ebmbtree.c $(EBTREE_DIR)/ebsttree.c $(EBTREE_DIR)/ebistree.c $(EBTREE_DIR)/ebimtree.c $^
++ $(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -o $@ $(INCLUDE) $(EBTREE_DIR)/ebtree.c $(EBTREE_DIR)/eb32tree.c $(EBTREE_DIR)/eb64tree.c $(EBTREE_DIR)/ebmbtree.c $(EBTREE_DIR)/ebsttree.c $(EBTREE_DIR)/ebistree.c $(EBTREE_DIR)/ebimtree.c $^
+
+ clean:
+ rm -f $(OBJS) *.[oas]
--- /dev/null
+From ca3fa95fbb1cc4060dcdd785cd76b1fa82c13b4a Mon Sep 17 00:00:00 2001
+From: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
+Date: Tue, 24 May 2016 13:54:12 +0000
+Subject: [PATCH] Adding "include" configuration statement to haproxy.
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+This patch is based on original work done by Brane F. Gračnar:
+http://marc.info/?l=haproxy&m=129235503410444
+
+The original patch was modified to follow upstream changes in 1.6.*
+---
+ include/common/cfgparse.h | 6 +-
+ src/cfgparse.c | 159 +++++++++++++++++++++++++++++++++++++++++++++-
+ src/haproxy.c | 2 +-
+ 3 files changed, 162 insertions(+), 5 deletions(-)
+
+diff --git a/include/common/cfgparse.h b/include/common/cfgparse.h
+index d785327..b521302 100644
+--- a/include/common/cfgparse.h
++++ b/include/common/cfgparse.h
+@@ -36,6 +36,10 @@
+ #define CFG_USERLIST 3
+ #define CFG_PEERS 4
+
++
++/* maximum include recursion level */
++#define INCLUDE_RECURSION_LEVEL_MAX 10
++
+ struct cfg_keyword {
+ int section; /* section type for this keyword */
+ const char *kw; /* the keyword itself */
+@@ -65,7 +69,7 @@ extern int cfg_maxconn;
+
+ int cfg_parse_global(const char *file, int linenum, char **args, int inv);
+ int cfg_parse_listen(const char *file, int linenum, char **args, int inv);
+-int readcfgfile(const char *file);
++int readcfgfile(const char *file, int recdepth);
+ void cfg_register_keywords(struct cfg_kw_list *kwl);
+ void cfg_unregister_keywords(struct cfg_kw_list *kwl);
+ void init_default_instance();
+diff --git a/src/cfgparse.c b/src/cfgparse.c
+index 97f4243..99a19e5 100644
+--- a/src/cfgparse.c
++++ b/src/cfgparse.c
+@@ -32,6 +32,8 @@
+ #include <sys/stat.h>
+ #include <fcntl.h>
+ #include <unistd.h>
++#include <glob.h>
++#include <libgen.h>
+
+ #include <common/cfgparse.h>
+ #include <common/chunk.h>
+@@ -6844,6 +6846,149 @@ out:
+ return err_code;
+ }
+
++/**
++ * This function takes a glob(3) pattern, resolves it to a list of
++ * files and tries to include each of them.
++ *
++ * See readcfgfile() for return values.
++ */
++int cfgfile_include (char *pattern, char *dir, int recdepth) {
++
++ int err_code = 0;
++
++ if (pattern == NULL) {
++ Alert("Config file include pattern == NULL; This should never happen.\n");
++ err_code |= ERR_ABORT;
++ goto out;
++ }
++ if (recdepth >= INCLUDE_RECURSION_LEVEL_MAX) {
++ Alert(
++ "Refusing to include filename pattern: '%s': too deep recursion level: %d.\n",
++ pattern,
++ recdepth
++ );
++ err_code|= ERR_ABORT;
++ goto out;
++ }
++
++ /** don't waste time with empty strings */
++ if (strlen(pattern) < 1) return 0;
++
++ /** we want to support relative to include file glob patterns */
++ int buf_len = 3;
++ if (dir != NULL)
++ buf_len += strlen(dir);
++ buf_len += strlen(pattern);
++ char *real_pattern = malloc(buf_len);
++ if (real_pattern == NULL) {
++ Alert("Error allocating memory for glob pattern: %s\n", strerror(errno));
++ err_code |= ERR_ABORT;
++ goto out;
++ }
++ memset(real_pattern, '\0', buf_len);
++ if (dir != NULL && pattern[0] != '/') {
++ strcat(real_pattern, dir);
++ strcat(real_pattern, "/");
++ }
++ strcat(real_pattern, pattern);
++
++ /* file inclusion result */
++ int result = 0;
++
++ /** glob the pattern */
++ glob_t res;
++ int rv = glob(
++ real_pattern,
++ (GLOB_NOESCAPE | GLOB_BRACE | GLOB_TILDE),
++ NULL,
++ &res
++ );
++ /* check for glob(3) injuries */
++ switch (rv) {
++ case GLOB_NOMATCH:
++ /* nothing was found */
++ break;
++
++ case GLOB_ABORTED:
++ Alert("Error globbing pattern '%s': read error.\n", real_pattern);
++ result = ERR_ABORT;
++ break;
++
++ case GLOB_NOSPACE:
++ Alert("Error globbing pattern '%s': out of memory.\n", real_pattern);
++ result = ERR_ABORT;
++ break;
++
++ default:
++ ;
++ int i = 0;
++ for (i = 0; i < res.gl_pathc; i++) {
++ char *file = res.gl_pathv[i];
++
++ /* parse configuration fragment */
++ int r = readcfgfile(file, recdepth);
++
++ /* check for injuries */
++ if (r != 0) {
++ result = r;
++ goto outta_cfgfile_include;
++ }
++ }
++ }
++
++outta_cfgfile_include:
++
++ /** free glob result. */
++ globfree(&res);
++ free(real_pattern);
++
++ return result;
++
++out:
++ return err_code;
++}
++
++int
++cfg_parse_include(const char *file, int linenum, char **args, int recdepth) {
++
++ int err_code = 0;
++
++ if (strcmp(args[0], "include") == 0) {
++ if (args[1] == NULL || strlen(args[1]) < 1) {
++			Alert("parsing [%s:%d]: include statement requires a file glob pattern.\n",
++ file, linenum);
++ err_code |= ERR_ABORT;
++ goto out;
++ }
++ /**
++ * compute file's dirname - this is necessary because
++ * dirname(3) returns shared buffer address
++ */
++ int buf_len = strlen(file) + 1;
++ char *file_dir = malloc(buf_len);
++ if (file_dir == NULL) {
++			Alert("Unable to allocate memory for config file dirname.\n");
++ err_code |= ERR_ABORT;
++ goto out;
++ }
++ memset(file_dir, '\0', buf_len);
++ strcpy(file_dir, file);
++ strcpy(file_dir, dirname(file_dir));
++
++ /* include pattern */
++ int r = cfgfile_include(args[1], file_dir, (recdepth + 1));
++ //int r = cfgfile_include(args[1], file_dir, 1);
++ free(file_dir);
++ /* check for injuries */
++ if (r != 0) {
++ err_code |= r;
++ goto out;
++ }
++ }
++out:
++ return err_code;
++}
++
+ /*
+ * This function reads and parses the configuration file given in the argument.
+ * Returns the error code, 0 if OK, or any combination of :
+@@ -6854,7 +6999,7 @@ out:
+ * Only the two first ones can stop processing, the two others are just
+ * indicators.
+ */
+-int readcfgfile(const char *file)
++int readcfgfile(const char *file, int recdepth)
+ {
+ char *thisline;
+ int linesize = LINESIZE;
+@@ -6878,13 +7023,16 @@ int readcfgfile(const char *file)
+ !cfg_register_section("global", cfg_parse_global) ||
+ !cfg_register_section("userlist", cfg_parse_users) ||
+ !cfg_register_section("peers", cfg_parse_peers) ||
++ !cfg_register_section("include", cfg_parse_include) ||
+ !cfg_register_section("mailers", cfg_parse_mailers) ||
+ !cfg_register_section("namespace_list", cfg_parse_netns) ||
+ !cfg_register_section("resolvers", cfg_parse_resolvers))
+ return -1;
+
+- if ((f=fopen(file,"r")) == NULL)
++ if ((f=fopen(file,"r")) == NULL) {
++ Alert("Error opening configuration file %s: %s\n", file, strerror(errno));
+ return -1;
++ }
+
+ next_line:
+ while (fgets(thisline + readbytes, linesize - readbytes, f) != NULL) {
+@@ -7168,7 +7316,12 @@ next_line:
+
+ /* else it's a section keyword */
+ if (cs)
+- err_code |= cs->section_parser(file, linenum, args, kwm);
++ if (strcmp("include", cs->section_name) == 0) {
++ err_code |= cs->section_parser(file, linenum, args, recdepth);
++ }
++ else {
++ err_code |= cs->section_parser(file, linenum, args, kwm);
++ }
+ else {
+ Alert("parsing [%s:%d]: unknown keyword '%s' out of section.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+diff --git a/src/haproxy.c b/src/haproxy.c
+index 4299328..63a9bfd 100644
+--- a/src/haproxy.c
++++ b/src/haproxy.c
+@@ -770,7 +770,7 @@ void init(int argc, char **argv)
+ list_for_each_entry(wl, &cfg_cfgfiles, list) {
+ int ret;
+
+- ret = readcfgfile(wl->s);
++ ret = readcfgfile(wl->s, 0);
+ if (ret == -1) {
+ Alert("Could not open configuration file %s : %s\n",
+ wl->s, strerror(errno));
+--
+2.7.4
+
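For reference, the include-resolution logic this patch adds can be sketched in Python — a simplified model of `cfgfile_include()`/`readcfgfile()`, not the C code itself. The two essential behaviours are: glob patterns are resolved relative to the including file's directory unless absolute, and recursion depth is capped at `INCLUDE_RECURSION_LEVEL_MAX`.

```python
import glob
import os

INCLUDE_RECURSION_LEVEL_MAX = 10  # mirrors the #define added by the patch


def expand_pattern(pattern, base_dir):
    """Resolve a glob pattern relative to the including file's
    directory, unless the pattern is absolute (as cfgfile_include does)."""
    if base_dir and not pattern.startswith("/"):
        pattern = os.path.join(base_dir, pattern)
    return sorted(glob.glob(pattern))


def read_config(path, recdepth=0):
    """Simplified readcfgfile(): return all configuration lines,
    following 'include <glob>' directives up to the recursion limit."""
    if recdepth >= INCLUDE_RECURSION_LEVEL_MAX:
        raise RuntimeError("too deep recursion level: %d" % recdepth)
    lines = []
    with open(path) as f:
        for line in f:
            args = line.split()
            if len(args) == 2 and args[0] == "include":
                for included in expand_pattern(args[1], os.path.dirname(path)):
                    lines.extend(read_config(included, recdepth + 1))
            else:
                lines.append(line)
    return lines
```

Note that, as in the C code, a pattern that matches nothing is silently ignored (glob returns an empty list), which makes `conf.d/*.cfg`-style drop-in directories optional.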
--- /dev/null
+From: Apollon Oikonomopoulos <apoikos@debian.org>
+Date: Wed, 29 Apr 2015 13:51:49 +0300
+Subject: [PATCH] dconv: debianize
+
+ - Use Debian bootstrap and jquery packages
+ - Add Debian-related resources to the template
+ - Use the package's version instead of HAProxy's git version
+ - Strip the conversion date from the output to ensure reproducible
+ builds.
+
+diff --git a/debian/dconv/haproxy-dconv.py b/debian/dconv/haproxy-dconv.py
+index fe2b96dce325..702eefac6a3b 100755
+--- a/debian/dconv/haproxy-dconv.py
++++ b/debian/dconv/haproxy-dconv.py
+@@ -44,12 +44,11 @@ VERSION = ""
+ HAPROXY_GIT_VERSION = False
+
+ def main():
+- global VERSION, HAPROXY_GIT_VERSION
++ global HAPROXY_GIT_VERSION
+
+ usage="Usage: %prog --infile <infile> --outfile <outfile>"
+
+ optparser = OptionParser(description='Generate HTML Document from HAProxy configuation.txt',
+- version=VERSION,
+ usage=usage)
+ optparser.add_option('--infile', '-i', help='Input file mostly the configuration.txt')
+ optparser.add_option('--outfile','-o', help='Output file')
+@@ -65,11 +64,7 @@ def main():
+
+ os.chdir(os.path.dirname(__file__))
+
+- VERSION = get_git_version()
+- if not VERSION:
+- sys.exit(1)
+-
+- HAPROXY_GIT_VERSION = get_haproxy_git_version(os.path.dirname(option.infile))
++ HAPROXY_GIT_VERSION = get_haproxy_debian_version(os.path.dirname(option.infile))
+
+ convert(option.infile, option.outfile, option.base)
+
+@@ -114,6 +109,15 @@ def get_haproxy_git_version(path):
+ version = re.sub(r'-g.*', '', version)
+ return version
+
++def get_haproxy_debian_version(path):
++ try:
++ version = subprocess.check_output(["dpkg-parsechangelog", "-Sversion"],
++ cwd=os.path.join(path, ".."))
++ except subprocess.CalledProcessError:
++ return False
++
++ return version.strip()
++
+ def getTitleDetails(string):
+ array = string.split(".")
+
+@@ -506,7 +510,6 @@ def convert(infile, outfile, base=''):
+ keywords = keywords,
+ keywordsCount = keywordsCount,
+ keyword_conflicts = keyword_conflicts,
+- version = VERSION,
+ date = datetime.datetime.now().strftime("%Y/%m/%d"),
+ )
+ except TopLevelLookupException:
+@@ -524,7 +527,6 @@ def convert(infile, outfile, base=''):
+ keywords = keywords,
+ keywordsCount = keywordsCount,
+ keyword_conflicts = keyword_conflicts,
+- version = VERSION,
+ date = datetime.datetime.now().strftime("%Y/%m/%d"),
+ footer = footer
+ )
+diff --git a/debian/dconv/templates/template.html b/debian/dconv/templates/template.html
+index c72b3558c2dd..9aefa16dd82d 100644
+--- a/debian/dconv/templates/template.html
++++ b/debian/dconv/templates/template.html
+@@ -3,8 +3,8 @@
+ <head>
+ <meta charset="utf-8" />
+ <title>${headers['title']} ${headers['version']} - ${headers['subtitle']}</title>
+- <link href="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.1.1/css/bootstrap.min.css" rel="stylesheet" />
+- <link href="${base}css/page.css?${version}" rel="stylesheet" />
++ <link href="${base}css/bootstrap.min.css" rel="stylesheet" />
++ <link href="${base}css/page.css" rel="stylesheet" />
+ </head>
+ <body>
+ <nav class="navbar navbar-default navbar-fixed-top" role="navigation">
+@@ -15,7 +15,7 @@
+ <span class="icon-bar"></span>
+ <span class="icon-bar"></span>
+ </button>
+- <a class="navbar-brand" href="${base}index.html">${headers['title']} <small>${headers['subtitle']}</small></a>
++ <a class="navbar-brand" href="${base}configuration.html">${headers['title']}</a>
+ </div>
+ <!-- /.navbar-header -->
+
+@@ -24,31 +24,16 @@
+ <ul class="nav navbar-nav">
+ <li><a href="http://www.haproxy.org/">HAProxy home page</a></li>
+ <li class="dropdown">
+- <a href="#" class="dropdown-toggle" data-toggle="dropdown">Versions <b class="caret"></b></a>
++ <a href="#" class="dropdown-toggle" data-toggle="dropdown">Debian resources <b class="caret"></b></a>
+ <ul class="dropdown-menu">
+ ## TODO : provide a structure to dynamically generate per version links
+- <li class="dropdown-header">HAProxy 1.4</li>
+- <li><a href="${base}configuration-1.4.html">Configuration Manual <small>(stable)</small></a></li>
+- <li><a href="${base}snapshot/configuration-1.4.html">Configuration Manual <small>(snapshot)</small></a></li>
+- <li><a href="http://git.1wt.eu/git/haproxy-1.4.git/">GIT Repository</a></li>
+- <li><a href="http://www.haproxy.org/git/?p=haproxy-1.4.git">Browse repository</a></li>
+- <li><a href="http://www.haproxy.org/download/1.4/">Browse directory</a></li>
+- <li class="divider"></li>
+- <li class="dropdown-header">HAProxy 1.5</li>
+- <li><a href="${base}configuration-1.5.html">Configuration Manual <small>(stable)</small></a></li>
+- <li><a href="${base}snapshot/configuration-1.5.html">Configuration Manual <small>(snapshot)</small></a></li>
+- <li><a href="http://git.1wt.eu/git/haproxy-1.5.git/">GIT Repository</a></li>
+- <li><a href="http://www.haproxy.org/git/?p=haproxy-1.5.git">Browse repository</a></li>
+- <li><a href="http://www.haproxy.org/download/1.5/">Browse directory</a></li>
+- <li class="divider"></li>
+- <li class="dropdown-header">HAProxy 1.6</li>
+- <li><a href="${base}configuration-1.6.html">Configuration Manual <small>(stable)</small></a></li>
+- <li><a href="${base}snapshot/configuration-1.6.html">Configuration Manual <small>(snapshot)</small></a></li>
+- <li><a href="${base}intro-1.6.html">Starter Guide <small>(stable)</small></a></li>
+- <li><a href="${base}snapshot/intro-1.6.html">Starter Guide <small>(snapshot)</small></a></li>
+- <li><a href="http://git.1wt.eu/git/haproxy.git/">GIT Repository</a></li>
+- <li><a href="http://www.haproxy.org/git/?p=haproxy.git">Browse repository</a></li>
+- <li><a href="http://www.haproxy.org/download/1.6/">Browse directory</a></li>
++ <li><a href="https://bugs.debian.org/src:haproxy">Bug Tracking System</a></li>
++ <li><a href="https://packages.debian.org/haproxy">Package page</a></li>
++ <li><a href="http://tracker.debian.org/pkg/haproxy">Package Tracking System</a></li>
++ <li class="divider"></li>
++ <li><a href="${base}intro.html">Starter Guide</a></li>
++ <li><a href="${base}configuration.html">Configuration Manual</a></li>
++ <li><a href="http://anonscm.debian.org/gitweb/?p=pkg-haproxy/haproxy.git">Package Git Repository</a></li>
+ </ul>
+ </li>
+ </ul>
+@@ -72,7 +57,7 @@
+ The feature is automatically disabled when the search field is focused.
+ </p>
+ <p class="text-right">
+- <small>Converted with <a href="https://github.com/cbonte/haproxy-dconv">haproxy-dconv</a> v<b>${version}</b> on <b>${date}</b></small>
++ <small>Converted with <a href="https://github.com/cbonte/haproxy-dconv">haproxy-dconv</a></small>
+ </p>
+ </div>
+ <!-- /.sidebar -->
+@@ -83,7 +68,7 @@
+ <div class="text-center">
+ <h1>${headers['title']}</h1>
+ <h2>${headers['subtitle']}</h2>
+- <p><strong>${headers['version']}</strong></p>
++ <p><strong>${headers['version']} (Debian)</strong></p>
+ <p>
+ <a href="http://www.haproxy.org/" title="HAProxy Home Page"><img src="${base}img/logo-med.png" /></a><br>
+ ${headers['author']}<br>
+@@ -114,9 +99,9 @@
+ </div>
+ <!-- /#wrapper -->
+
+- <script src="//cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
+- <script src="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.1.1/js/bootstrap.min.js"></script>
+- <script src="//cdnjs.cloudflare.com/ajax/libs/typeahead.js/0.11.1/typeahead.bundle.min.js"></script>
++ <script src="${base}js/jquery.min.js"></script>
++ <script src="${base}js/bootstrap.min.js"></script>
++ <script src="${base}js/typeahead.bundle.js"></script>
+ <script>
+ /* Keyword search */
+ var searchFocus = false
--- /dev/null
+Subject: Add documentation field to the systemd unit
+Author: Apollon Oikonomopoulos <apoikos@gmail.com>
+
+Forwarded: no
+Last-Update: 2014-01-03
+--- a/contrib/systemd/haproxy.service.in
++++ b/contrib/systemd/haproxy.service.in
+@@ -1,5 +1,7 @@
+ [Unit]
+ Description=HAProxy Load Balancer
++Documentation=man:haproxy(1)
++Documentation=file:/usr/share/doc/haproxy/configuration.txt.gz
+ After=network.target syslog.service
+ Wants=syslog.service
+
--- /dev/null
+Author: Apollon Oikonomopoulos
+Description: Check the configuration before reloading HAProxy
+ While HAProxy will survive a reload with an invalid configuration, explicitly
+ checking the config file for validity will make "systemctl reload" return an
+ error and let the user know something went wrong.
+
+Forwarded: no
+Last-Update: 2014-04-27
+Index: haproxy/contrib/systemd/haproxy.service.in
+===================================================================
+--- haproxy.orig/contrib/systemd/haproxy.service.in
++++ haproxy/contrib/systemd/haproxy.service.in
+@@ -8,6 +8,7 @@ Wants=syslog.service
+ [Service]
+ ExecStartPre=@SBINDIR@/haproxy -f /etc/haproxy/haproxy.cfg -c -q
+ ExecStart=@SBINDIR@/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
++ExecReload=@SBINDIR@/haproxy -c -f /etc/haproxy/haproxy.cfg
+ ExecReload=/bin/kill -USR2 $MAINPID
+ KillMode=mixed
+ Restart=always
--- /dev/null
+Subject: start after the syslog service using systemd
+Author: Apollon Oikonomopoulos <apoikos@gmail.com>
+
+Forwarded: no
+Last-Update: 2013-10-15
+Index: haproxy/contrib/systemd/haproxy.service.in
+===================================================================
+--- haproxy.orig/contrib/systemd/haproxy.service.in
++++ haproxy/contrib/systemd/haproxy.service.in
+@@ -1,6 +1,7 @@
+ [Unit]
+ Description=HAProxy Load Balancer
+-After=network.target
++After=network.target syslog.service
++Wants=syslog.service
+
+ [Service]
+ ExecStartPre=@SBINDIR@/haproxy -f /etc/haproxy/haproxy.cfg -c -q
--- /dev/null
+Author: Apollon Oikonomopoulos <apoikos@debian.org>
+Description: Use the variables from /etc/default/haproxy
+ This will allow seamless upgrades from the sysvinit system while respecting
+ any changes the users may have made. It will also make local configuration
+ easier than overriding the systemd unit file.
+
+Last-Update: 2014-06-20
+Forwarded: not-needed
+Index: haproxy/contrib/systemd/haproxy.service.in
+===================================================================
+--- haproxy.orig/contrib/systemd/haproxy.service.in
++++ haproxy/contrib/systemd/haproxy.service.in
+@@ -6,9 +6,11 @@ After=network.target syslog.service
+ Wants=syslog.service
+
+ [Service]
+-ExecStartPre=@SBINDIR@/haproxy -f /etc/haproxy/haproxy.cfg -c -q
+-ExecStart=@SBINDIR@/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
+-ExecReload=@SBINDIR@/haproxy -c -f /etc/haproxy/haproxy.cfg
++Environment=CONFIG=/etc/haproxy/haproxy.cfg
++EnvironmentFile=-/etc/default/haproxy
++ExecStartPre=@SBINDIR@/haproxy -f ${CONFIG} -c -q
++ExecStart=@SBINDIR@/haproxy-systemd-wrapper -f ${CONFIG} -p /run/haproxy.pid $EXTRAOPTS
++ExecReload=@SBINDIR@/haproxy -c -f ${CONFIG}
+ ExecReload=/bin/kill -USR2 $MAINPID
+ KillMode=mixed
+ Restart=always
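With this patch applied, local settings can live in /etc/default/haproxy instead of a unit override. A hypothetical example (the file is a plain shell-style environment file read by systemd's EnvironmentFile=; the specific values below are illustrative, not defaults):

```shell
# /etc/default/haproxy — read by the systemd unit via EnvironmentFile=
# Point HAProxy at an alternate configuration file (hypothetical path)
CONFIG="/etc/haproxy/haproxy-local.cfg"
# Extra options appended to the haproxy-systemd-wrapper invocation
EXTRAOPTS="-de -m 16"
```

Because EnvironmentFile= is prefixed with "-", the unit still starts normally when the file is absent, falling back to Environment=CONFIG=/etc/haproxy/haproxy.cfg.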
--- /dev/null
+0002-Use-dpkg-buildflags-to-build-halog.patch
+haproxy.service-start-after-syslog.patch
+haproxy.service-add-documentation.patch
+haproxy.service-check-config-before-reload.patch
+haproxy.service-use-environment-variables.patch
+MIRA0001-Adding-include-configuration-statement-to-haproxy.patch
--- /dev/null
+# Create an additional socket in haproxy's chroot in order to allow logging via
+# /dev/log to chroot'ed HAProxy processes
+$AddUnixListenSocket /var/lib/haproxy/dev/log
+
+# Send HAProxy messages to a dedicated logfile
+if $programname startswith 'haproxy' then /var/log/haproxy.log
+&~
--- /dev/null
+#!/usr/bin/make -f
+
+export DEB_LDFLAGS_MAINT_APPEND = -Wl,--as-needed
+
+MAKEARGS=DESTDIR=debian/haproxy \
+ PREFIX=/usr \
+ IGNOREGIT=true \
+ MANDIR=/usr/share/man \
+ DOCDIR=/usr/share/doc/haproxy \
+ USE_PCRE=1 PCREDIR= \
+ USE_OPENSSL=1 \
+ USE_ZLIB=1 \
+ USE_LUA=1 \
+ LUA_INC=/usr/include/lua5.3
+
+OS_TYPE = $(shell dpkg-architecture -qDEB_HOST_ARCH_OS)
+
+ifeq ($(OS_TYPE),linux)
+ MAKEARGS+= TARGET=linux2628
+else ifeq ($(OS_TYPE),kfreebsd)
+ MAKEARGS+= TARGET=freebsd
+else
+ MAKEARGS+= TARGET=generic
+endif
+
+ifneq ($(filter amd64 i386, $(shell dpkg-architecture -qDEB_HOST_ARCH_CPU)),)
+  MAKEARGS+= USE_REGPARM=1
+endif
+
+MAKEARGS += CFLAGS="$(shell dpkg-buildflags --get CFLAGS) $(shell dpkg-buildflags --get CPPFLAGS)"
+MAKEARGS += LDFLAGS="$(shell dpkg-buildflags --get LDFLAGS)"
+
+%:
+ dh $@ --with systemd,sphinxdoc
+
+override_dh_auto_configure:
+
+override_dh_auto_build-arch:
+ make $(MAKEARGS)
+ make -C contrib/systemd $(MAKEARGS)
+ dh_auto_build -Dcontrib/halog
+ $(MAKE) -C doc/lua-api man
+
+override_dh_auto_build-indep:
+ # Build the HTML documentation, after patching dconv
+ patch -p1 < $(CURDIR)/debian/patches/debianize-dconv.patch
+ python -B $(CURDIR)/debian/dconv/haproxy-dconv.py \
+ -i $(CURDIR)/doc/configuration.txt \
+ -o $(CURDIR)/doc/configuration.html
+ python -B $(CURDIR)/debian/dconv/haproxy-dconv.py \
+ -i $(CURDIR)/doc/intro.txt \
+ -o $(CURDIR)/doc/intro.html
+ patch -p1 -R < $(CURDIR)/debian/patches/debianize-dconv.patch
+ $(MAKE) -C doc/lua-api html
+
+override_dh_auto_clean:
+ make -C contrib/systemd clean
+ $(MAKE) -C doc/lua-api clean
+ dh_auto_clean
+ dh_auto_clean -Dcontrib/halog
+
+override_dh_auto_install-arch:
+ make $(MAKEARGS) install
+ install -m 0644 -D debian/rsyslog.conf debian/haproxy/etc/rsyslog.d/49-haproxy.conf
+ install -m 0644 -D debian/logrotate.conf debian/haproxy/etc/logrotate.d/haproxy
+
+override_dh_auto_install-indep:
+
+override_dh_installdocs:
+ dh_installdocs -Xsystemd/ -Xhalog/
+
+override_dh_installexamples:
+ dh_installexamples -X build.cfg
+
+override_dh_installinit:
+ dh_installinit --no-restart-on-upgrade
+
+override_dh_strip:
+ dh_strip --dbg-package=haproxy-dbg
--- /dev/null
+3.0 (quilt)
--- /dev/null
+debian/dconv/css/check.png
+debian/dconv/css/cross.png
+debian/dconv/img/logo-med.png
--- /dev/null
+debian/vim-haproxy.yaml /usr/share/vim/registry
+debian/haproxy.vim /usr/share/vim/addons/ftdetect
+examples/haproxy.vim /usr/share/vim/addons/syntax
--- /dev/null
+addon: haproxy
+description: "Syntax highlighting for HAProxy"
+files:
+ - syntax/haproxy.vim
+ - ftdetect/haproxy.vim
--- /dev/null
+version=3
+opts="uversionmangle=s/-(dev\d+)/~$1/" http://haproxy.1wt.eu/download/1.6/src/ haproxy-(1\.6\.\d+)\.(?:tgz|tbz2|tar\.(?:gz|bz2|xz))
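The uversionmangle option above rewrites upstream pre-release version strings so that dpkg's version comparison sorts them before the corresponding final release (a `~` component sorts lower than anything, including the empty string). The substitution can be checked with a small illustrative Python sketch:

```python
import re


def mangle(upstream_version):
    """Apply the watch file's uversionmangle: s/-(dev\\d+)/~$1/"""
    return re.sub(r'-(dev\d+)', r'~\1', upstream_version)


# "1.6-dev7" becomes "1.6~dev7", which dpkg orders before "1.6"
print(mangle("1.6-dev7"))
```

Stable release versions contain no `-devN` suffix and pass through unchanged.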
--- /dev/null
+*.o
+*/.svn
+*~
+.flxdisk*
+.flxpkg
+.flxstatus*
+.svn
+haproxy
+src/*.o
+*.rej
+*.orig
+*.log*
+*.trace*
+haproxy-*
+!doc/haproxy-*.txt
+!src/*.c
+make-*
+dlmalloc.c
+00*.patch
+*.service
+*.bak
+.nfs*
+contrib/base64/base64rev
+contrib/halog/halog
+contrib/ip6range/ip6range
+contrib/iprange/iprange
+tests/test_hashes
+/*.cfg
+/*.conf
+/*.diff
+/*.patch
+/*.c
+/*.o
+/*.so
+/*.txt
+/*.TXT
+/*.txt.*
+/*.prof
+/*.gprof
+/*.prof.*
+/*.gprof.*
+/*.tar
+/*.tar.gz
+/*.tgz
+/*.mbox
+/*.sh
+/bug*
+/TAGS
+# Below we forbid everything and only allow what we know; that's much easier
+# than blocking about 500 different test files and bug report outputs.
+/.*
+/*
+!/.gitignore
+!/CHANGELOG
+!/LICENSE
+!/Makefile
+!/README
+!/CONTRIBUTING
+!/MAINTAINERS
+!/ROADMAP
+!/SUBVERS
+!/VERDATE
+!/VERSION
+!/contrib
+!/doc
+!/ebtree
+!/examples
+!/include
+!/src
+!/tests
+!/debian
--- /dev/null
+ChangeLog :
+===========
+
+2015/12/27 : 1.6.3
+ - BUG/MINOR: http rule: http capture 'id' rule points to a non existing id
+ - BUG/MINOR: server: check return value of fgets() in apply_server_state()
+ - BUG/MINOR: acl: don't use record layer in req_ssl_ver
+ - BUILD: freebsd: double declaration
+ - BUG/MEDIUM: lua: clean output buffer
+ - BUILD: check for libressl to be able to build against it
+ - DOC: lua-api/index.rst small example fixes, spelling correction.
+ - DOC: lua: architecture and first steps
+ - DOC: relation between timeout http-request and option http-buffer-request
+ - BUILD: Make deviceatlas require PCRE
+ - BUG: http: do not abort keep-alive connections on server timeout
+ - BUG/MEDIUM: http: switch the request channel to no-delay once done.
+ - BUG/MINOR: lua: don't force-sslv3 LUA's SSL socket
+ - BUILD/MINOR: http: proto_http.h needs sample.h
+ - BUG/MEDIUM: http: don't enable auto-close on the response side
+ - BUG/MEDIUM: stream: fix half-closed timeout handling
+ - CLEANUP: compression: don't allocate DEFAULT_MAXZLIBMEM without USE_ZLIB
+ - BUG/MEDIUM: cli: changing compression rate-limiting must require admin level
+ - BUG/MEDIUM: sample: urlp can't match an empty value
+ - BUILD: dumpstats: silencing warning for printf format specifier / time_t
+ - CLEANUP: proxy: calloc call inverted arguments
+ - MINOR: da: silent logging by default and displaying DeviceAtlas support if built.
+ - BUG/MEDIUM: da: stop DeviceAtlas processing in the convertor if there is no input.
+ - DOC: Edited 51Degrees section of README/ (cherry picked from commit a7bbdd955984f0d69812ff055cc145a338e76daa)
+ - BUG/MEDIUM: checks: email-alert not working when declared in defaults
+ - BUG/MINOR: checks: email-alert causes a segfault when an unknown mailers section is configured
+ - BUG/MINOR: checks: typo in an email-alert error message
+ - BUG/MINOR: tcpcheck: conf parsing error when no port configured on server and last rule is a CONNECT with no port
+ - BUG/MINOR: tcpcheck: conf parsing error when no port configured on server and first rule(s) is (are) COMMENT
+ - BUG/MEDIUM: http: fix http-reuse when frontend and backend differ
+ - DOC: prefer using http-request/response over reqXXX/rspXXX directives
+ - BUG/MEDIUM: config: properly adjust maxconn with nbproc when memmax is forced
+ - BUG/MEDIUM: peers: table entries learned from a remote are pushed to others after a random delay.
+ - BUG/MEDIUM: peers: old stick table updates could be repushed.
+ - CLEANUP: haproxy: using _GNU_SOURCE instead of __USE_GNU macro.
+ - MINOR: lua: service/applet can have access to the HTTP headers when a POST is received
+ - REORG/MINOR: lua: convert boolean "int" to bitfield
+ - BUG/MEDIUM: lua: Lua applets must not fetch samples using http_txn
+ - BUG/MINOR: lua: Lua applets must not use http_txn
+ - BUG/MEDIUM: lua: Forbid HTTP applets from being called from tcp rulesets
+ - BUG/MAJOR: lua: Do not force the HTTP analysers in use-services
+ - CLEANUP: lua: bad error messages
+ - DOC: lua: fix lua API
+ - DOC: mailers: typo in 'hostname' description
+ - DOC: compression: missing mention of libslz for compression algorithm
+ - BUILD/MINOR: regex: missing header
+ - BUG/MINOR: stream: bad return code
+ - DOC: lua: fix some errors and add implicit types
+
+2015/11/03 : 1.6.2
+ - BUILD: ssl: fix build error introduced in commit 7969a3 with OpenSSL < 1.0.0
+ - DOC: fix a typo for a "deviceatlas" keyword
+ - FIX: small typo in an example using the "Referer" header
+ - BUG/MEDIUM: config: count memory limits on 64 bits, not 32
+ - BUG/MAJOR: dns: first DNS response packet not matching queried hostname may lead to a loop
+ - BUG/MINOR: dns: unable to parse CNAMEs response
+ - BUG/MINOR: examples/haproxy.init: missing brace in quiet_check()
+ - DOC: deviceatlas: more example use cases.
+ - BUG/BUILD: replace haproxy-systemd-wrapper with $(EXTRA) in install-bin.
+ - BUG/MAJOR: http: don't requeue an idle connection that is already queued
+ - DOC: typo on capture.res.hdr and capture.req.hdr
+ - BUG/MINOR: dns: check for duplicate nameserver id in a resolvers section was missing
+ - CLEANUP: use direction names in place of numeric values
+ - BUG/MEDIUM: lua: sample fetches based on response doesn't work
+
+2015/10/20 : 1.6.1
+ - DOC: specify that stats socket doc (section 9.2) is in management
+ - BUILD: install only relevant and existing documentation
+ - CLEANUP: don't ignore debian/ directory if present
+ - BUG/MINOR: dns: parsing error of some DNS response
+ - BUG/MEDIUM: namespaces: don't fail if no namespace is used
+ - BUG/MAJOR: ssl: free the generated SSL_CTX if the LRU cache is disabled
+ - MEDIUM: dns: Don't use the ANY query type
+
+2015/10/13 : 1.6.0
+ - BUG/MINOR: Handle interactive mode in cli handler
+ - DOC: global section missing parameters
+ - DOC: backend section missing parameters
+ - DOC: stats parameters available in frontend
+ - MINOR: lru: do not allocate useless memory in lru64_lookup
+ - BUG/MINOR: http: Add OPTIONS in supported http methods (found by find_http_meth)
+ - BUG/MINOR: ssl: fix management of the cache where forged certificates are stored
+ - MINOR: ssl: Release Servers SSL context when HAProxy is shut down
+ - MINOR: ssl: Read the file used to generate certificates in any order
+ - MINOR: ssl: Add support for EC for the CA used to sign generated certificates
+ - MINOR: ssl: Add callbacks to set DH/ECDH params for generated certificates
+ - BUG/MEDIUM: logs: fix time zone offset format in RFC5424
+ - BUILD: Fix the build on OSX (htonll/ntohll)
+ - BUILD: enable build on Linux/s390x
+ - BUG/MEDIUM: lua: direction test failed
+ - MINOR: lua: fix a spelling error in some error messages
+ - CLEANUP: cli: ensure we can never double-free error messages
+ - BUG/MEDIUM: lua: force server-close mode on Lua services
+ - MEDIUM: init: support more command line arguments after pid list
+ - MEDIUM: init: support a list of files on the command line
+ - MINOR: debug: enable memory poisonning to use byte 0
+ - BUILD: ssl: fix build error introduced by recent commit
+ - BUG/MINOR: config: make the stats socket pass the correct proxy to the parsers
+ - MEDIUM: server: implement TCP_USER_TIMEOUT on the server
+ - DOC: mention the "namespace" options for bind and server lines
+ - DOC: add the "management" documentation
+ - DOC: move the stats socket documentation from config to management
+ - MINOR: examples: update haproxy.spec to mention new docs
+ - DOC: mention management.txt in README
+ - DOC: remove haproxy-{en,fr}.txt
+ - BUILD: properly report when USE_ZLIB and USE_SLZ are used together
+ - MINOR: init: report use of libslz instead of "no compression"
+ - CLEANUP: examples: remove some obsolete and confusing files
+ - CLEANUP: examples: remove obsolete configuration file samples
+ - CLEANUP: examples: fix the example file content-sw-sample.cfg
+ - CLEANUP: examples: update sample file option-http_proxy.cfg
+ - CLEANUP: examples: update sample file ssl.cfg
+ - CLEANUP: tests: move a test file from examples/ to tests/
+ - CLEANUP: examples: shut up warnings in transparent proxy example
+ - CLEANUP: tests: removed completely obsolete test files
+ - DOC: update ROADMAP to remove what was done in 1.6
+ - BUG/MEDIUM: pattern: fixup use_after_free in the pat_ref_delete_by_id
+
+2015/10/06 : 1.6-dev7
+ - MINOR: cli: Dump all resolvers stats if no resolver section is given
+ - BUG: config: external-check command validation is checking for incorrect arguments.
+ - DOC: documentation format cleanups
+ - DOC: lua: few typos.
+ - BUG/MEDIUM: str2ip: make getaddrinfo() consider local address selection policy
+ - BUG/MEDIUM: logs: segfault writing to log from Lua
+ - DOC: fix lua use-service example
+ - MINOR: payload: add support for tls session ticket ext
+ - MINOR: lua: remove the run flag
+ - MEDIUM: lua: change the timeout execution
+ - MINOR: lua: rename the tune.lua.applet-timeout
+ - DOC: lua: update Lua doc
+ - DOC: lua: update doc according with the last Lua changes
+ - MINOR: http/tcp: fill the available actions
+ - DOC: reorder misplaced res.ssl_hello_type in the doc
+ - BUG/MINOR: tcp: make silent-drop always force a TCP reset
+ - CLEANUP: tcp: silent-drop: only drain the connection when quick-ack is disabled
+ - BUILD: tcp: use IPPROTO_IP when SOL_IP is not available
+ - BUILD: server: fix build warnings introduced by load-server-state
+ - BUG/MEDIUM: server: fix misuse of format string in load-server-state's warnings
+
+2015/09/28 : 1.6-dev6
+ - BUG/MAJOR: can't enable a server through the stat socket
+ - MINOR: server: Macro definition for server-state
+ - MINOR: cli: new stats socket command: show servers state
+ - DOC: stats socket command: show servers state
+ - MINOR: config: new global directive server-state-base
+ - DOC: global directive server-state-base
+ - MINOR: config: new global section directive: server-state-file
+ - DOC: new global directive: server-state-file
+ - MINOR: config: new backend directives: load-server-state-from-file and server-state-file-name
+ - DOC: load-server-state-from-file
+ - MINOR: init: server state loaded from file
+ - MINOR: server: startup slowstart task when using seamless reload of HAProxy
+ - MINOR: cli: new stats socket command: show backend
+ - DOC: servers state seamless reload example
+ - BUG: dns: can't connect UDP socket on FreeBSD
+ - MINOR: cfgparse: New function cfg_unregister_sections()
+ - MINOR: chunk: New function free_trash_buffers()
+ - BUG/MEDIUM: main: Freeing a bunch of static pointers
+ - MINOR: proto_http: Externalisation of previously internal functions
+ - MINOR: global: Few new struct fields for da module
+ - MAJOR: da: Update of the DeviceAtlas API module
+ - DOC: DeviceAtlas new keywords
+ - DOC: README: DeviceAtlas sample configuration updates
+ - MEDIUM: log: replace sendto() with sendmsg() in __send_log()
+ - MEDIUM: log: use a separate buffer for the header and for the message
+ - MEDIUM: logs: remove the hostname, tag and pid part from the logheader
+ - MEDIUM: logs: add support for RFC5424 header format per logger
+ - MEDIUM: logs: add a new RFC5424 log-format for the structured-data
+ - DOC: mention support for the RFC5424 syslog message format
+ - MEDIUM: logs: have global.log_send_hostname not contain the trailing space
+ - MEDIUM: logs: pass the trailing "\n" as an iovec
+ - BUG/MEDIUM: peers: some table updates are randomly not pushed.
+ - BUG/MEDIUM: peers: same table updates re-pushed after a re-connect
+ - BUG/MINOR: function peer_prepare_ackmsg should not use trash.
+ - MINOR: http: made CHECK_HTTP_MESSAGE_FIRST accessible to other functions
+ - MINOR: global: Added new fields for 51Degrees device detection
+ - DOC: Added more explanation for 51Degrees V3.2
+ - BUILD: Changed 51Degrees option to support V3.2
+ - MAJOR: 51d: Upgraded to support 51Degrees V3.2 and new features
+ - MINOR: 51d: Improved string handling for LRU cache
+ - DOC: add references to rise/fall for the fastinter explanation
+ - MINOR: support cpu-map feature through the compile option USE_CPU_AFFINITY on FreeBSD
+ - BUG/MAJOR: lua: potential unexpected aborts()
+ - BUG/MINOR: lua: breaks the log message if its size exceeds one buffer
+ - MINOR: action: add private configuration
+ - MINOR: action: add reference to the original keyword matched for the called parser.
+ - MINOR: lua: change actions registration
+ - MEDIUM: proto_http: smp_prefetch_http initialize txn
+ - MINOR: channel: rename function chn_sess to chn_strm
+ - CLEANUP: lua: align defines
+ - MINOR: http: export http_get_path() function
+ - MINOR: http: export the get_reason() function
+ - MINOR: http: export function http_msg_analyzer()
+ - MINOR: http: split initialization
+ - MINOR: lua: reset pointer after use
+ - MINOR: lua: identify userdata objects
+ - MEDIUM: lua: use the function lua_rawset in place of lua_settable
+ - BUG/MAJOR: lua: segfault after the channel data is modified by some Lua action.
+ - CLEANUP: lua: use calloc in place of malloc
+ - BUG/MEDIUM: lua: longjmp function must be unregistered
+ - BUG/MEDIUM: lua: forces a garbage collection
+ - BUG/MEDIUM: lua: wakeup task on bad conditions
+ - MINOR: standard: avoid DNS resolution from the function str2sa_range()
+ - MINOR: lua: extend socket address to support non-IP families
+ - MINOR: lua/applet: the cosocket applet should use appctx_wakeup in place of task_wakeup
+ - BUG/MEDIUM: lua: socket destroy before reading pending data
+ - MEDIUM: lua: change the GC policy
+ - OPTIM/MEDIUM: lua: executes the garbage collector only when using cosocket
+ - BUG/MEDIUM: lua: don't reset undesired flags in hlua_ctx_resume
+ - MINOR: applet: add init function
+ - MINOR: applet: add an execution timeout
+ - MINOR: stream/applet: add use-service action
+ - MINOR: lua: add AppletTCP class and service
+ - MINOR: lua: add AppletHTTP class and service
+ - DOC: lua: some documentation update
+ - DOC: add the documentation about internal circular lists
+ - DOC: add a CONTRIBUTING file
+ - DOC: add a MAINTAINERS file
+ - BUG/MAJOR: peers: fix a crash when stopping peers on unbound processes
+ - DOC: update coding-style to reference checkpatch.pl
+ - BUG/MEDIUM: stick-tables: fix double-decrement of tracked entries
+ - BUG/MINOR: args: add name for ARGT_VAR
+ - DOC: add more entries to MAINTAINERS
+ - DOC: add more entries to MAINTAINERS
+ - CLEANUP: stream-int: remove obsolete function si_applet_call()
+ - BUG/MAJOR: cli: do not dereference strm_li()->proto->name
+ - BUG/MEDIUM: http: do not dereference strm_li(stream)
+ - BUG/MEDIUM: proxy: do not dereference strm_li(stream)
+ - BUG/MEDIUM: stream: do not dereference strm_li(stream)
+ - MINOR: stream-int: use si_release_endpoint() to close idle conns
+ - BUG/MEDIUM: payload: make req.payload and payload_lv aware of dynamic buffers
+ - BUG/MEDIUM: acl: always accept match "found"
+ - MINOR: applet: rename applet_runq to applet_active_queue
+ - BUG/MAJOR: applet: use a separate run queue to maintain list integrity
+ - MEDIUM: stream-int: split stream_int_update_conn() into si- and conn-specific parts
+ - MINOR: stream-int: implement a new stream_int_update() function
+ - MEDIUM: stream-int: factor out the stream update functions
+ - MEDIUM: stream-int: call stream_int_update() from si_update()
+ - MINOR: stream-int: export stream_int_update_*
+ - MINOR: stream-int: move the applet_pause call out of the stream updates
+ - MEDIUM: stream-int: clean up the conditions to enable reading in si_conn_wake_cb
+ - MINOR: stream-int: implement the stream_int_notify() function
+ - MEDIUM: stream-int: use the same stream notification function for applets and conns
+ - MEDIUM: stream-int: completely remove stream_int_update_embedded()
+ - MINOR: stream-int: rename si_applet_done() to si_applet_wake_cb()
+ - BUG/MEDIUM: applet: fix reporting of broken write situation
+ - BUG/MINOR: stats: do not call cli_release_handler 3 times
+ - BUG/MEDIUM: cli: properly handle closed output
+ - MINOR: cli: do not call the release handler on internal error.
+ - BUG/MEDIUM: stream-int: avoid double-call to applet->release
+ - DEBUG: add p_malloc() to return a poisoned memory area
+ - CLEANUP: lua: remove unneeded memset(0) after calloc()
+ - MINOR: lua: use the proper applet wakeup mechanism
+ - BUG/MEDIUM: lua: better fix for the protocol check
+ - BUG/MEDIUM: lua: properly set the target on the connection
+ - MEDIUM: actions: pass a new "flags" argument to custom actions
+ - MEDIUM: actions: add new flag ACT_FLAG_FINAL to notify about last call
+ - MEDIUM: http: pass ACT_FLAG_FINAL to custom actions
+ - MEDIUM: lua: only allow actions to yield if not in a final call
+ - DOC: clarify how to make use of abstract sockets in socat
+ - CLEANUP: config: make the errorloc/errorfile messages less confusing
+ - MEDIUM: action: add a new flag ACT_FLAG_FIRST
+ - BUG/MINOR: config: check that tune.bufsize is always positive
+ - MEDIUM: config: set tune.maxrewrite to 1024 by default
+ - DOC: add David Carlier as maintainer of da.c
+ - DOC: fix some broken unexpected unicode chars in the Lua doc.
+ - BUG/MEDIUM: proxy: ignore stopped peers
+ - BUG/MEDIUM: proxy: do not wake stopped proxies' tasks during soft_stop()
+ - MEDIUM: init: completely deallocate unused peers
+ - BUG/MEDIUM: tcp: fix inverted condition to call custom actions
+ - DOC: remove outdated actions lists on tcp-request/response
+ - MEDIUM: tcp: add new tcp action "silent-drop"
+ - DOC: add URLs to optional libraries in the README
+
+2015/09/14 : 1.6-dev5
+ - MINOR: dns: dns_resolution structure update: time_t to unsigned int
+ - BUG/MEDIUM: dns: DNS resolution doesn't start
+ - BUG/MAJOR: dns: dns client resolution infinite loop
+ - MINOR: dns: coding style update
+ - MINOR: dns: new bitmasks to use against DNS flags
+ - MINOR: dns: dns_nameserver structure update: new counter for truncated response
+ - MINOR: dns: New DNS response analysis code: DNS_RESP_TRUNCATED
+ - MEDIUM: dns: handling of truncated response
+ - MINOR: DNS client query type failover management
+ - MINOR: dns: no expected DNS record type found
+ - MINOR: dns: new flag to report that no IP can be found in a DNS response packet
+ - BUG/MINOR: DNS request retry counter used for retry only
+ - DOC: DNS documentation updated
+ - MEDIUM: actions: remove ACTION_STOP
+ - BUG/MEDIUM: lua: outgoing connection was broken since 1.6-dev2 (bis)
+ - BUG/MINOR: lua: last log character truncated.
+ - CLEANUP: typo: bad indent
+ - CLEANUP: actions: misplaced includes
+ - MINOR: build: missing header
+ - CLEANUP: lua: Merge log functions
+ - BUG/MAJOR: http: don't manipulate the server connection if it's killed
+ - BUG/MINOR: http: remove stupid HTTP_METH_NONE entry
+ - BUG/MAJOR: http: don't call http_send_name_header() after an error
+ - MEDIUM: tools: make str2sa_range() optionally return the FQDN
+ - BUG/MINOR: tools: make str2sa_range() report unresolvable addresses
+ - BUG/MEDIUM: dns: use the correct server hostname when resolving
+
+2015/08/30 : 1.6-dev4
+ - MINOR: log: Add log-format variable %HQ, to log HTTP query strings
+ - DOC: typo in 'redirect', 302 code meaning
+ - DOC: typos in tcp-check expect examples
+ - DOC: resolve-prefer default value and default-server update
+ - MINOR: DNS counters: increment valid counter
+ - BUG/MEDIUM: DNS resolution response parsing broken
+ - MINOR: server: add new SRV_ADMF_CMAINT flag
+ - MINOR: server: SRV_ADMF_CMAINT flag doesn't imply SRV_ADMF_FMAINT
+ - BUG/MEDIUM: dns: wrong first time DNS resolution
+ - BUG/MEDIUM: lua: Lua tasks fail to start.
+ - BUILD: add USE_LUA to BUILD_OPTIONS when it's used
+ - DOC/MINOR: fix OpenBSD versions where haproxy works
+ - MINOR: 51d: unable to start haproxy without "51degrees-data-file"
+ - BUG/MEDIUM: peers: fix wrong message id on stick table updates acknowledgement.
+ - BUG/MAJOR: peers: fix current table pointer not re-initialized on session release.
+ - BUILD: ssl: Allow building against libssl without SSLv3.
+ - DOC: clarify some points about SSL and the proxy protocol
+ - DOC: mention support for RFC 5077 TLS Ticket extension in starter guide
+ - BUG/MEDIUM: mailer: DATA part must be terminated with <CRLF>.<CRLF>
+ - DOC: match several lua configuration option names to those implemented in code
+ - MINOR: cfgparse: Correct the mailer warning text to show the right names to the user
+ - BUG/MINOR: ssl: TLS Ticket Key rotation broken via socket command
+ - MINOR: stream: initialize the current_rule field to NULL on stream init
+ - BUG/MEDIUM: lua: timeout error with converters, wrapper and actions.
+ - CLEANUP: proto_http: remove useless initialisation
+ - CLEANUP: http/tcp actions: remove the scope member
+ - BUG/MINOR: proto_tcp: custom action continue is ignored
+ - MINOR: proto_tcp: add session in the action prototype
+ - MINOR: vars: reduce the code size of some wrappers
+ - MINOR: Move http method enum from proto_http to sample
+ - MINOR: sample: Add ipv6 to ipv4 and sint to ipv6 casts
+ - MINOR: sample/proto_tcp: export "smp_fetch_src"
+ - MEDIUM: cli: rely on the map's output type instead of the sample type
+ - BUG/MEDIUM: stream: The stream doesn't inherit SC from the session
+ - BUG/MEDIUM: vars: segfault during the configuration parsing
+ - BUG/MEDIUM: stick-tables: refcount error after copying SC for the session to the stream
+ - BUG/MEDIUM: lua: bad error processing
+ - MINOR: samples: rename a struct from sample_storage to sample_data
+ - MINOR: samples: rename some struct member from "smp" to "data"
+ - MEDIUM: samples: Use the "struct sample_data" in the "struct sample"
+ - MINOR: samples: extract the anonymous union and create the union sample_value
+ - MINOR: samples: rename union from "data" to "u"
+ - MEDIUM: 51degrees: Adapt the 51Degrees library
+ - MINOR: samples: data assignment simplification
+ - MEDIUM: pattern/map: Maps can return various types
+ - MINOR: map: The map can return IPv4 and IPv6
+ - MEDIUM: actions: Merge (http|tcp)-(request|response) action structs
+ - MINOR: actions: Remove the data opaque pointer
+ - MINOR: lua: use the hlua_rule type in place of opaque type
+ - MINOR: vars: use the vars types as argument in place of opaque type
+ - MINOR: proto_http: use an "expr" type in place of generic opaque type.
+ - MINOR: proto_http: replace generic opaque types by real used types for the actions on the request line
+ - MINOR: proto_http: replace generic opaque types by real used types in "http_capture"
+ - MINOR: proto_http: replace generic opaque types by real used types in "http_capture" by id
+ - MEDIUM: track-sc: Move the track-sc configuration storage in the union
+ - MEDIUM: capture: Move the capture configuration storage in the union
+ - MINOR: actions: add "from" information
+ - MINOR: actions: remove the mark indicating the last entry in enum
+ - MINOR: actions: Declare all the embedded actions in the same header file
+ - MINOR: actions: change actions names
+ - MEDIUM: actions: Add standard return code for the action API
+ - MEDIUM: actions: Merge (http|tcp)-(request|response) keywords structs
+ - MINOR: proto_tcp: proto_tcp.h is now useless
+ - MINOR: actions: mutualise the action keyword lookup
+ - MEDIUM: actions: Normalize the return code of the configuration parsers
+ - MINOR: actions: Remove wrappers
+ - MAJOR: stick-tables: use sample types in place of dedicated types
+ - MEDIUM: stick-tables: use the sample type names
+ - MAJOR: stick-tables: remove key storage from the key struct
+ - MEDIUM: stick-tables: Add GPT0 in the stick tables
+ - MINOR: stick-tables: Add GPT0 access
+ - MINOR: stick-tables: Add GPC0 actions
+ - BUG/MEDIUM: lua: the lua function Channel:close() causes a segfault
+ - DOC: ssl: missing LF
+ - MINOR: lua: add core.done() function
+ - DOC: fix function name
+ - BUG/MINOR: lua: in some case a sample may remain undefined
+ - DOC: fix "http_action_set_req_line()" comments
+ - MINOR: http: Action for manipulating the returned status code.
+ - MEDIUM: lua: turns txn:close into txn:done
+ - BUG/MEDIUM: lua: cannot process more Lua hooks after a "done()" function call
+ - BUILD: link with libdl if needed for Lua support
+ - CLEANUP: backend: factor out objt_server() in connect_server()
+ - MEDIUM: backend: don't call si_alloc_conn() when we reuse a valid connection
+ - MEDIUM: stream-int: simplify si_alloc_conn()
+ - MINOR: stream-int: add new function si_detach_endpoint()
+ - MINOR: server: add a list of private idle connections
+ - MINOR: connection: add a new list member in the connection struct
+ - MEDIUM: stream-int: queue idle connections at the server
+ - MINOR: stream-int: make si_idle_conn() only accept valid connections
+ - MINOR: server: add a list of already used idle connections
+ - MINOR: connection: add a new flag CO_FL_PRIVATE
+ - MINOR: config: add new setting "http-reuse"
+ - MAJOR: backend: initial work towards connection reuse
+ - MAJOR: backend: improve the connection reuse mechanism
+ - MEDIUM: backend: implement "http-reuse safe"
+ - MINOR: server: add a list of safe, already reused idle connections
+ - MEDIUM: backend: add the "http-reuse aggressive" strategy
+ - DOC: document the new http-reuse directive
+ - DOC: internals: document next steps for HTTP connection reuse
+ - DOC: mention that %ms is left-padded with zeroes.
+ - MINOR: init: indicate to check 'bind' lines when no listeners were found.
+ - MAJOR: http: remove references to appsession
+ - CLEANUP: config: remove appsession initialization
+ - CLEANUP: appsession: remove appsession.c and sessionhash.c
+ - CLEANUP: tests: remove sessionhash_test.c and test-cookie-appsess.cfg
+ - CLEANUP: proxy: remove last references to appsession
+ - CLEANUP: appsession: remove the last include files
+ - DOC: remove documentation about appsession
+ - CLEANUP: .gitignore: ignore more test files
+ - CLEANUP: .gitignore: finally ignore everything but what is known.
+ - MEDIUM: config: emit a warning on a frontend without listener
+ - DOC: add doc/internals/entities-v2.txt
+ - DOC: add doc/linux-syn-cookies.txt
+ - DOC: add design thoughts on HTTP/2
+ - DOC: add some thoughts on connection sharing for HTTP/2
+ - DOC: add design thoughts on dynamic buffer allocation
+ - BUG/MEDIUM: counters: ensure that src_{inc,clr}_gpc0 creates a missing entry
+ - DOC: add new file intro.txt
+ - MAJOR: tproxy: remove support for cttproxy
+ - BUG/MEDIUM: lua: outgoing connection was broken since 1.6-dev2
+ - DOC: lua: replace txn:close with txn:done in lua-api
+ - DOC: intro: minor updates and fixes
+ - DOC: intro: fix too long line.
+ - DOC: fix example of http-request using ssl_fc_session_id
+ - BUG/MEDIUM: lua: txn:done() still causes a segfault in TCP mode
+ - CLEANUP: lua: fix some indent issues
+ - BUG/MEDIUM: lua: fix a segfault in txn:done() if called twice
+ - DOC: lua: mention that txn:close was renamed txn:done.
+
+2015/07/22 : 1.6-dev3
+ - CLEANUP: sample: generalize sample_fetch_string() as sample_fetch_as_type()
+ - MEDIUM: http: Add new 'set-src' option to http-request
+ - DOC: usesrc root privileges requirements
+ - BUG/MINOR: dns: wrong time unit for some DNS default parameters
+ - MINOR: proxy: bit field for proxy_find_best_match diff status
+ - MINOR: server: new server flag: SRV_F_FORCED_ID
+ - MINOR: server: server_find functions: id, name, best_match
+ - DOC: dns: fix chapters syntax
+ - BUILD/MINOR: tools: rename popcount to my_popcountl
+ - BUILD: add netbsd TARGET
+ - MEDIUM: 51Degrees code refactoring and cleanup
+ - MEDIUM: 51d: add LRU-based cache on User-Agent string detection
+ - DOC: add notes about the "51degrees-cache-size" parameter
+ - BUG/MEDIUM: 51d: possible incorrect operations on smp->data.str.str
+ - BUG/MAJOR: connection: fix TLV offset calculation for proxy protocol v2 parsing
+ - MINOR: Add sample fetch to detect Supported Elliptic Curves Extension
+ - BUG/MINOR: payload: Add volatile flag to smp_fetch_req_ssl_ec_ext
+ - BUG/MINOR: lua: type error in the arguments wrapper
+ - CLEANUP: vars: remove unused struct
+ - BUG/MINOR: http/sample: gmtime/localtime can fail
+ - MINOR: standard: add 64 bits conversion functions
+ - MAJOR: sample: converts uint and sint into a 64-bit signed integer
+ - MAJOR: arg: converts uint and sint into sint
+ - MEDIUM: sample: switch to saturated arithmetic
+ - MINOR: vars: returns variable content
+ - MEDIUM: vars/sample: operators can use variables as parameter
+ - BUG/MINOR: ssl: fix smp_fetch_ssl_fc_session_id
+ - BUILD/MINOR: lua: fix a harmless build warning
+ - BUILD/MINOR: stats: fix build warning due to condition always true
+ - BUG/MAJOR: lru: fix unconditional call to free due to unexpected semi-colon
+ - BUG/MEDIUM: logs: fix improper systematic use of quotes with a few tags
+ - BUILD/MINOR: lua: ensure that hlua_ctx_destroy is properly defined
+ - BUG/MEDIUM: lru: fix possible memory leak when ->free() is used
+ - MINOR: vars: make the accounting not depend on the stream
+ - MEDIUM: vars: move the session variables to the session, not the stream
+ - BUG/MEDIUM: vars: do not freeze the connection when the expression cannot be fetched
+ - BUG/MAJOR: buffers: make the buffer_slow_realign() function respect output data
+ - BUG/MAJOR: tcp: tcp rulesets were still broken
+ - MINOR: stats: improve compression stats reporting
+ - MINOR: ssl: make self-generated certs also work with raw IPv6 addresses
+ - CLEANUP: ssl: make ssl_sock_generated_cert_serial() take a const
+ - CLEANUP: ssl: make ssl_sock_generate_certificate() use ssl_sock_generated_cert_serial()
+ - BUG/MINOR: log: missing some ARGC_* entries in fmt_directives()
+ - MINOR: args: add new context for servers
+ - MINOR: stream: maintain consistence between channel_forward and HTTP forward
+ - MINOR: ssl: provide a function to set the SNI extension on a connection
+ - MEDIUM: ssl: add sni support on the server lines
+ - CLEANUP: stream: remove a useless call to si_detach()
+ - CLEANUP: stream-int: fix a few outdated comments about stream_int_register_handler()
+ - CLEANUP: stream-int: remove stream_int_unregister_handler() and si_detach()
+ - MINOR: stream-int: only use si_release_endpoint() to release a connection
+ - MINOR: standard: provide htonll() and ntohll()
+ - CLEANUP/MINOR: dns: dns_str_to_dn_label() only needs a const char
+ - BUG/MAJOR: dns: fix the length of the string to be copied
+
+2015/06/17 : 1.6-dev2
+ - BUG/MINOR: ssl: Display correct filename in error message
+ - MEDIUM: logs: Add HTTP request-line log format directives
+ - BUG/MEDIUM: check: tcpcheck regression introduced by e16c1b3f
+ - BUG/MINOR: check: fix tcpcheck error message
+ - MINOR: use an int instead of calling tcpcheck_get_step_id
+ - MINOR: tcpcheck_rule structure update
+ - MINOR: include comment in tcpcheck error log
+ - DOC: tcpcheck comment documentation
+ - MEDIUM: server: add support for changing a server's address
+ - MEDIUM: server: change server ip address from stats socket
+ - MEDIUM: protocol: add minimalist UDP protocol client
+ - MEDIUM: dns: implement a DNS resolver
+ - MAJOR: server: add DNS-based server name resolution
+ - DOC: server name resolution + proto DNS
+ - MINOR: dns: add DNS statistics
+ - MEDIUM: http: configurable http result codes for http-request deny
+ - BUILD: Compile clean when debug options defined
+ - MINOR: lru: Add the possibility to free data when an item is removed
+ - MINOR: lru: Add lru64_lookup function
+ - MEDIUM: ssl: Add options to forge SSL certificates
+ - MINOR: ssl: Export functions to manipulate generated certificates
+ - MEDIUM: config: add DeviceAtlas global keywords
+ - MEDIUM: global: add the DeviceAtlas required elements to struct global
+ - MEDIUM: sample: add the da-csv converter
+ - MEDIUM: init: DeviceAtlas initialization
+ - BUILD: Makefile: add options to build with DeviceAtlas
+ - DOC: README: explain how to build with DeviceAtlas
+ - BUG/MEDIUM: http: fix the url_param fetch
+ - BUG/MEDIUM: init: segfault if global._51d_property_names is not initialized
+ - MAJOR: peers: peers protocol version 2.0
+ - MINOR: peers: avoid re-scheduling of pending stick-table's updates still not pushed.
+ - MEDIUM: peers: re-schedule stick-table's entry for sync when data is modified.
+ - MEDIUM: peers: support of any stick-table data-types for sync
+ - BUG/MAJOR: sample: regression on sample cast to stick table types.
+ - CLEANUP: deinit: remove codes for cleaning p->block_rules
+ - DOC: Fix L4TOUT typo in documentation
+ - DOC: set-log-level in Logging section preamble
+ - BUG/MEDIUM: compat: fix segfault on FreeBSD
+ - MEDIUM: check: include server address and port in the send-state header
+ - MEDIUM: backend: Allow redispatch on retry intervals
+ - MINOR: Add TLS ticket keys reference and use it in the listener struct
+ - MEDIUM: Add support for updating TLS ticket keys via socket
+ - DOC: Document new socket commands "show tls-keys" and "set ssl tls-key"
+ - MINOR: Add sample fetch which identifies if the SSL session has been resumed
+ - DOC: Update doc about weight, act and bck fields in the statistics
+ - BUG/MEDIUM: ssl: fix tune.ssl.default-dh-param value being overwritten
+ - MINOR: ssl: add a destructor to free allocated SSL resources
+ - MEDIUM: ssl: add the possibility to use a global DH parameters file
+ - MEDIUM: ssl: replace standards DH groups with custom ones
+ - MEDIUM: stats: Add enum srv_stats_state
+ - MEDIUM: stats: Separate server state and colour in stats
+ - MEDIUM: stats: Only report drain state in stats if server has SRV_ADMF_DRAIN set
+ - MEDIUM: stats: Differentiate between DRAIN and DRAIN (agent)
+ - MEDIUM: Lower priority of email alerts for log-health-checks messages
+ - MEDIUM: Send email alerts when servers are marked as UP or enter the drain state
+ - MEDIUM: Document when email-alerts are sent
+ - BUG/MEDIUM: lua: bad argument number in analyser and in error message
+ - MEDIUM: lua: automatically converts strings in proxy, tables, server and ip
+ - BUG/MINOR: utf8: remove compiler warning
+ - MEDIUM: map: uses HAProxy facilities to store default value
+ - BUG/MINOR: lua: error in detection of mandatory arguments
+ - BUG/MINOR: lua: set current proxy as default value if it is possible
+ - BUG/MEDIUM: http: the action set-{method|path|query|uri} doesn't run.
+ - BUG/MEDIUM: lua: undetected infinite loop
+ - BUG/MAJOR: http: don't read past buffer's end in http_replace_value
+ - BUG/MEDIUM: http: the function "(req|res)-replace-value" doesn't respect the HTTP syntax
+ - MEDIUM/CLEANUP: http: rewrite and lighten http_transform_header() prototype
+ - BUILD: lua: it misses the '-ldl' directive
+ - MEDIUM: http: allows 'R' and 'S' in the protocol alphabet
+ - MINOR: http: split the function http_action_set_req_line() in two parts
+ - MINOR: http: split http_transform_header() function in two parts.
+ - MINOR: http: export function inet_set_tos()
+ - MINOR: lua: txn: add function set_(loglevel|tos|mark)
+ - MINOR: lua: create and register HTTP class
+ - DOC: lua: fix some typos
+ - MINOR: lua: add log functions
+ - BUG/MINOR: lua: Fix SSL initialisation
+ - DOC: lua: some fixes
+ - MINOR: lua: (req|res)_get_headers return more than one header value
+ - MINOR: lua: map system integration in Lua
+ - BUG/MEDIUM: http: functions set-{path,query,method,uri} breaks the HTTP parser
+ - MINOR: sample: add url_dec converter
+ - MEDIUM: sample: fill the struct sample with the session, proxy and stream pointers
+ - MEDIUM: sample: change the prototype of sample-fetches and converters functions
+ - MINOR: sample: fill the struct sample with the options.
+ - MEDIUM: sample: change the prototype of sample-fetches functions
+ - MINOR: http: split the url_param in two parts
+ - CLEANUP: http: bad indentation
+ - MINOR: http: add body_param fetch
+ - MEDIUM: http: url-encoded parsing function can run through a wrapped buffer
+ - DOC: http: req.body_param documentation
+ - MINOR: proxy: custom capture declaration
+ - MINOR: capture: add two "capture" converters
+ - MEDIUM: capture: Allow capture with slot identifier
+ - MINOR: http: add array of generic pointers in http_res_rules
+ - MEDIUM: capture: adds http-response capture
+ - MINOR: common: escape CSV strings
+ - MEDIUM: stats: escape some strings in the CSV dump
+ - MINOR: tcp: add custom actions that can continue tcp-(request|response) processing
+ - MINOR: lua: Lua tcp action are not final action
+ - DOC: lua: schematics about lua socket organization
+ - BUG/MINOR: debug: display (null) in place of "meth"
+ - DOC: mention the "lua action" in documentation
+ - MINOR: standard: add function that converts signed int to a string
+ - BUG/MINOR: sample: wrong conversion of signed values
+ - MEDIUM: sample: Add type any
+ - MINOR: debug: add a special converter which display its input sample content.
+ - MINOR: tcp: increase the opaque data array
+ - MINOR: tcp/http/conf: extends the keyword registration options
+ - MINOR: build: fix build dependency
+ - MEDIUM: vars: adds support of variables
+ - MINOR: vars: adds get and set functions
+ - MINOR: lua: Variable access
+ - MINOR: samples: add samples which returns constants
+ - BUG/MINOR: vars/compil: fix some warnings
+ - BUILD: add 51degrees options to makefile.
+ - MINOR: global: add several 51Degrees members to global
+ - MINOR: config: add 51Degrees config parsing.
+ - MINOR: init: add 51Degrees initialisation code
+ - MEDIUM: sample: add fiftyone_degrees converter.
+ - MEDIUM: deinit: add cleanup for 51Degrees to deinit
+ - MEDIUM: sample: add trie support to 51Degrees
+ - DOC: add 51Degrees notes to configuration.txt.
+ - DOC: add build indications for 51Degrees to README.
+ - MEDIUM: cfgparse: introduce weak and strong quoting
+ - BUG/MEDIUM: cfgparse: incorrect memmove in quotes management
+ - MINOR: cfgparse: remove line size limitation
+ - MEDIUM: cfgparse: expand environment variables
+ - BUG/MINOR: cfgparse: fix typo in 'option httplog' error message
+ - BUG/MEDIUM: cfgparse: segfault when userlist is misused
+ - CLEANUP: cfgparse: remove reference to 'ruleset' section
+ - MEDIUM: cfgparse: check section maximum number of arguments
+ - MEDIUM: cfgparse: max arguments check in the global section
+ - MEDIUM: cfgparse: check max arguments in the proxies sections
+ - CLEANUP: stream-int: remove a redundant clearing of the linger_risk flag
+ - MINOR: connection: make conn_sock_shutw() actually perform the shutdown() call
+ - MINOR: stream-int: use conn_sock_shutw() to shutdown a connection
+ - MINOR: connection: perform the call to xprt->shutw() in conn_data_shutw()
+ - MEDIUM: stream-int: replace xprt->shutw calls with conn_data_shutw()
+ - MINOR: checks: use conn_data_shutw_hard() instead of call via xprt
+ - MINOR: connection: implement conn_sock_send()
+ - MEDIUM: stream-int: make conn_si_send_proxy() use conn_sock_send()
+ - MEDIUM: connection: make conn_drain() perform more controls
+ - REORG: connection: move conn_drain() to connection.c and rename it
+ - CLEANUP: stream-int: remove inclusion of fd.h that is not used anymore
+ - MEDIUM: channel: don't always set CF_WAKE_WRITE on bi_put*
+ - CLEANUP: lua: don't use si_ic/si_oc on known stream-ints
+ - BUG/MEDIUM: peers: correctly configure the client timeout
+ - MINOR: peers: centralize configuration of the peers frontend
+ - MINOR: proxy: store the default target into the frontend's configuration
+ - MEDIUM: stats: use frontend_accept() as the accept function
+ - MEDIUM: peers: use frontend_accept() instead of peer_accept()
+ - CLEANUP: listeners: remove unused timeout
+ - MEDIUM: listener: store the default target per listener
+ - BUILD: fix automatic inclusion of libdl.
+ - MEDIUM: lua: implement a simple memory allocator
+ - MEDIUM: compression: postpone buffer adjustments after compression
+ - MEDIUM: compression: don't send leading zeroes with chunk size
+ - BUG/MINOR: compression: consider the expansion factor in init
+ - MINOR: http: check the algo name "identity" instead of the function pointer
+ - CLEANUP: compression: statify all algo-specific functions
+ - MEDIUM: compression: add a distinction between UA- and config- algorithms
+ - MEDIUM: compression: add new "raw-deflate" compression algorithm
+ - MEDIUM: compression: split deflate_flush() into flush and finish
+ - CLEANUP: compression: remove unused reset functions
+ - MAJOR: compression: integrate support for libslz
+ - BUG/MEDIUM: http: hdr_cnt would not count any header when called without name
+ - BUG/MAJOR: http: null-terminate the http actions keywords list
+ - CLEANUP: lua: remove the unused hlua_sleep memory pool
+ - BUG/MAJOR: lua: use correct object size when initializing a new converter
+ - CLEANUP: lua: remove hard-coded sizeof() in object creations and mallocs
+ - CLEANUP: lua: fix confusing local variable naming in hlua_txn_new()
+ - CLEANUP: hlua: stop using variable name "s" alternately for hlua_txn and hlua_smp
+ - CLEANUP: lua: get rid of the last "*ht" for struct hlua_txn.
+ - CLEANUP: lua: rename last occurrences of "*s" to "*htxn" for hlua_txn
+ - CLEANUP: lua: rename variable "sc" for struct hlua_smp
+ - CLEANUP: lua: get rid of the last two "*hs" for hlua_smp
+ - REORG/MAJOR: session: rename the "session" entity to "stream"
+ - REORG/MEDIUM: stream: rename stream flags from SN_* to SF_*
+ - MINOR: session: start to reintroduce struct session
+ - MEDIUM: stream: allocate the session when a stream is created
+ - MEDIUM: stream: move the listener's pointer to the session
+ - MEDIUM: stream: move the frontend's pointer to the session
+ - MINOR: session: add a pointer to the session's origin
+ - MEDIUM: session: use the pointer to the origin instead of s->si[0].end
+ - CLEANUP: sample: remove useless tests in fetch functions for l4 != NULL
+ - MEDIUM: http: move header captures from http_txn to struct stream
+ - MINOR: http: create a dedicated pool for http_txn
+ - MAJOR: http: move http_txn out of struct stream
+ - MAJOR: sample: don't pass l7 anymore to sample fetch functions
+ - CLEANUP: lua: remove unused hlua_smp->l7 and hlua_txn->l7
+ - MEDIUM: http: remove the now useless http_txn from {req/res} rules
+ - CLEANUP: lua: don't pass http_txn anymore to hlua_request_act_wrapper()
+ - MAJOR: sample: pass a pointer to the session to each sample fetch function
+ - MINOR: stream: provide a few helpers to retrieve frontend, listener and origin
+ - CLEANUP: stream: don't set ->target to the incoming connection anymore
+ - MINOR: stream: move session initialization before the stream's
+ - MINOR: session: store the session's accept date
+ - MINOR: session: don't rely on s->logs.logwait in embryonic sessions
+ - MINOR: session: implement session_free() and use it everywhere
+ - MINOR: session: add stick counters to the struct session
+ - REORG: stktable: move the stkctr_* functions from stream to sticktable
+ - MEDIUM: streams: support looking up stkctr in the session
+ - MEDIUM: session: update the session's stick counters upon session_free()
+ - MEDIUM: proto_tcp: track the session's counters in the connection ruleset
+ - MAJOR: tcp: make tcp_exec_req_rules() only rely on the session
+ - MEDIUM: stream: don't call stream_store_counters() in kill_mini_session() nor session_accept()
+ - MEDIUM: stream: move all the session-specific stuff of stream_accept() earlier
+ - MAJOR: stream: don't initialize the stream anymore in stream_accept
+ - MEDIUM: session: remove the task pointer from the session
+ - REORG: session: move the session parts out of stream.c
+ - MINOR: stream-int: make appctx_new() take the applet in argument
+ - MEDIUM: peers: move the appctx initialization earlier
+ - MINOR: session: introduce session_new()
+ - MINOR: session: make use of session_new() when creating a new session
+ - MINOR: peers: make use of session_new() when creating a new session
+ - MEDIUM: peers: initialize the task before the stream
+ - MINOR: session: set the CO_FL_CONNECTED flag on the connection once ready
+ - CLEANUP: stream.c: do not re-attach the connection to the stream
+ - MEDIUM: stream: isolate connection-specific initialization code
+ - MEDIUM: stream: also accept appctx as origin in stream_accept_session()
+ - MEDIUM: peers: make use of stream_accept_session()
+ - MEDIUM: frontend: make ->accept only return +/-1
+ - MEDIUM: stream: return the stream upon accept()
+ - MEDIUM: frontend: move some stream initialisation to stream_new()
+ - MEDIUM: frontend: move the fd-specific settings to session_accept_fd()
+ - MEDIUM: frontend: don't restrict frontend_accept() to connections anymore
+ - MEDIUM: frontend: move some remaining stream settings to stream_new()
+ - CLEANUP: frontend: remove one useless local variable
+ - MEDIUM: stream: don't rely on the session's listener anymore in stream_new()
+ - MEDIUM: lua: make use of stream_new() to create an outgoing connection
+ - MINOR: lua: minor cleanup in hlua_socket_new()
+ - MINOR: lua: no need for setting timeouts / conn_retries in hlua_socket_new()
+ - MINOR: peers: no need for setting timeouts / conn_retries in peer_session_create()
+ - CLEANUP: stream-int: swap stream-int and appctx declarations
+ - CLEANUP: namespaces: fix protection against multiple inclusions
+ - MINOR: session: maintain the session count stats in the session, not the stream
+ - MEDIUM: session: adjust the connection flags before stream_new()
+ - MINOR: stream: pass the pointer to the origin explicitly to stream_new()
+ - CLEANUP: poll: move the conditions for waiting out of the poll functions
+ - BUG/MEDIUM: listener: don't report an error when resuming unbound listeners
+ - BUG/MEDIUM: init: don't limit cpu-map to the first 32 processes only
+ - BUG/MAJOR: tcp/http: fix current_rule assignment when restarting over a ruleset
+ - BUG/MEDIUM: stream-int: always reset si->ops when si->end is nullified
+ - DOC: update the entities diagrams
+ - BUG/MEDIUM: http: properly retrieve the front connection
+ - MINOR: applet: add a new "owner" pointer in the appctx
+ - MEDIUM: applet: make the applet not depend on a stream interface anymore
+ - REORG: applet: move the applet definitions out of stream_interface
+ - CLEANUP: applet: rename struct si_applet to applet
+ - REORG: stream-int: create si_applet_ops dedicated to applets
+ - MEDIUM: applet: add basic support for an applet run queue
+ - MEDIUM: applet: implement a run queue for active appctx
+ - MEDIUM: stream-int: add a new function si_applet_done()
+ - MAJOR: applet: now call si_applet_done() instead of si_update() in I/O handlers
+ - MAJOR: stream: use a regular ->update for all stream interfaces
+ - MEDIUM: dumpstats: don't unregister the applet anymore
+ - MEDIUM: applet: centralize the call to si_applet_done() in the I/O handler
+ - MAJOR: stream: do not allocate request buffers anymore when the left side is an applet
+ - MINOR: stream-int: add two flags to indicate an applet's wishes regarding I/O
+ - MEDIUM: applet: make the applets only use si_applet_{cant|want|stop}_{get|put}
+ - MEDIUM: stream-int: pause the appctx if the task is woken up
+ - BUG/MAJOR: tcp: only call registered actions when they're registered
+ - BUG/MEDIUM: peers: fix applet scheduling
+ - BUG/MEDIUM: peers: recent applet changes broke peers updates scheduling
+ - MINOR: tools: provide an rdtsc() function for time comparisons
+ - IMPORT: lru: import simple ebtree-based LRU functions
+ - IMPORT: hash: import xxhash-r39
+ - MEDIUM: pattern: add a revision to all pattern expressions
+ - MAJOR: pattern: add LRU-based cache on pattern matching
+ - BUG/MEDIUM: http: remove content-length from chunked messages
+ - DOC: http: update the comments about the rules for determining transfer-length
+ - BUG/MEDIUM: http: do not restrict parsing of transfer-encoding to HTTP/1.1
+ - BUG/MEDIUM: http: incorrect transfer-coding in the request is a bad request
+ - BUG/MEDIUM: http: remove content-length from responses with bad transfer-encoding
+ - MEDIUM: http: restrict the HTTP version token to 1 digit as per RFC7230
+ - MEDIUM: http: disable support for HTTP/0.9 by default
+ - MEDIUM: http: add option-ignore-probes to get rid of the floods of 408
+ - BUG/MINOR: config: clear proxy->table.peers.p for disabled proxies
+ - MEDIUM: init: don't stop proxies in parent process when exiting
+ - MINOR: stick-table: don't attach to peers in stopped state
+ - MEDIUM: config: initialize stick-tables after peers, not before
+ - MEDIUM: peers: add the ability to disable a peers section
+ - MINOR: peers: store the pointer to the signal handler
+ - MEDIUM: peers: unregister peers that were never started
+ - MEDIUM: config: propagate the table's process list to the peers sections
+ - MEDIUM: init: stop any peers section not bound to the correct process
+ - MEDIUM: config: validate that peers sections are bound to exactly one process
+ - MAJOR: peers: allow peers section to be used with nbproc > 1
+ - DOC: relax the peers restriction to single-process
+ - DOC: document option http-ignore-probes
+ - DOC: fix the comments about the meaning of msg->sol in HTTP
+ - BUG/MEDIUM: http: wait for the exact amount of body bytes in wait_for_request_body
+ - BUG/MAJOR: http: prevent risk of reading past end with balance url_param
+ - MEDIUM: stream: move HTTP request body analyser before process_common
+ - MEDIUM: http: add a new option http-buffer-request
+ - MEDIUM: http: provide 3 fetches for the body
+ - DOC: update the doc on the proxy protocol
+ - BUILD: pattern: fix build warnings introduced in the LRU cache
+ - BUG/MEDIUM: stats: properly initialize the scope before dumping stats
+ - CLEANUP: config: fix misleading information in error message.
+ - MINOR: config: report the number of processes using a peers section in the error case
+ - BUG/MEDIUM: config: properly compute the default number of processes for a proxy
+ - MEDIUM: http: add new "capture" action for http-request
+ - BUG/MEDIUM: http: fix the http-request capture parser
+ - BUG/MEDIUM: http: don't forward client shutdown without NOLINGER except for tunnels
+ - BUILD/MINOR: ssl: fix build failure introduced by recent patch
+ - BUG/MAJOR: check: fix breakage of inverted tcp-check rules
+ - CLEANUP: checks: fix double usage of cur / current_step in tcp-checks
+ - BUG/MEDIUM: checks: do not dereference head of a tcp-check at the end
+ - CLEANUP: checks: simplify the loop processing of tcp-checks
+ - BUG/MAJOR: checks: always check for end of list before proceeding
+ - BUG/MEDIUM: checks: do not dereference a list as a tcpcheck struct
+ - BUG/MAJOR: checks: break infinite loops when tcp-checks starts with comment
+ - MEDIUM: http: make url_param iterate over multiple occurrences
+ - BUG/MEDIUM: peers: apply a random reconnection timeout
+ - MEDIUM: config: reject invalid config with name duplicates
+ - MEDIUM: config: reject conflicts in table names
+ - CLEANUP: proxy: make the proxy lookup functions more user-friendly
+ - MINOR: proxy: simply ignore duplicates in proxy name lookups
+ - MINOR: config: don't open-code proxy name lookups
+ - MEDIUM: config: clarify the conflicting modes detection for backend rules
+ - CLEANUP: proxy: remove now unused function findproxy_mode()
+ - MEDIUM: stick-table: remove the now duplicate find_stktable() function
+ - MAJOR: config: remove the deprecated reqsetbe / reqisetbe actions
+ - MINOR: proxy: add a new function proxy_find_by_id()
+ - MINOR: proxy: add a flag to memorize that the proxy's ID was forced
+ - MEDIUM: proxy: add a new proxy_find_best_match() function
+ - CLEANUP: http: explicitly reference request in http_apply_redirect_rules()
+ - MINOR: http: prepare support for parsing redirect actions on responses
+ - MEDIUM: http: implement http-response redirect rules
+ - MEDIUM: http: no need to close the request on redirect if data was parsed
+ - BUG/MEDIUM: http: fix body processing for the stats applet
+ - BUG/MINOR: da: fix log-level comparison to remove annoying warning
+ - CLEANUP: global: remove one ifdef USE_DEVICEATLAS
+ - CLEANUP: da: move the converter registration to da.c
+ - CLEANUP: da: register the config keywords in da.c
+ - CLEANUP: adjust the envelope name in da.h to reflect the file name
+ - CLEANUP: da: remove ifdef USE_DEVICEATLAS from da.c
+ - BUILD: make 51D easier to build by defaulting to 51DEGREES_SRC
+ - BUILD: fix build warning when not using 51degrees
+ - BUILD: make DeviceAtlas easier to build by defaulting to DEVICEATLAS_SRC
+ - BUILD: ssl: fix recent build breakage on older SSL libs
+
+2015/03/11 : 1.6-dev1
+ - CLEANUP: extract temporary $CFG to eliminate duplication
+ - CLEANUP: extract temporary $BIN to eliminate duplication
+ - CLEANUP: extract temporary $PIDFILE to eliminate duplication
+ - CLEANUP: extract temporary $LOCKFILE to eliminate duplication
+ - CLEANUP: extract quiet_check() to avoid duplication
+ - BUG/MINOR: don't start haproxy on reload
+ - DOC: Address issue where documentation is excluded due to a gitignore rule.
+ - BUG/MEDIUM: systemd: set KillMode to 'mixed'
+ - BUILD: fix "make install" to support spaces in the install dirs
+ - BUG/MINOR: config: http-request replace-header arg typo
+ - BUG: config: error in http-response replace-header number of arguments
+ - DOC: missing track-sc* in http-request rules
+ - BUILD: lua: missing ifdef related to SSL when enabling LUA
+ - BUG/MEDIUM: regex: fix pcre_study error handling
+ - MEDIUM: regex: Use pcre_study always when PCRE is used, regardless of JIT
+ - BUG/MINOR: Fix search for -p argument in systemd wrapper.
+ - MEDIUM: Improve signal handling in systemd wrapper.
+ - DOC: fix typo in Unix Socket commands
+ - BUG/MEDIUM: checks: external checks can't change server status to UP
+ - BUG/MEDIUM: checks: segfault with external checks in a backend section
+ - BUG/MINOR: checks: external checks shouldn't wait for timeout to return the result
+ - BUG/MEDIUM: auth: fix segfault with http-auth and a configuration with an unknown encryption algorithm
+ - BUG/MEDIUM: config: userlists should ensure that encrypted passwords are supported
+ - BUG/MINOR: config: don't propagate process binding for dynamic use_backend
+ - BUG/MINOR: log: fix request flags when keep-alive is enabled
+ - BUG/MEDIUM: checks: fix conflicts between agent checks and ssl healthchecks
+ - MINOR: checks: allow external checks in backend sections
+ - MEDIUM: checks: provide environment variables to the external checks
+ - MINOR: checks: update dynamic environment variables in external checks
+ - DOC: checks: environment variables used by "external-check command"
+ - BUG/MEDIUM: backend: correctly detect the domain when use_domain_only is used
+ - MINOR: ssl: load certificates in alphabetical order
+ - BUG/MINOR: checks: prevent http keep-alive with http-check expect
+ - MINOR: lua: typo in an error message
+ - MINOR: report the Lua version in -vv
+ - MINOR: lua: add a compilation error message when compiled with an incompatible version
+ - BUG/MEDIUM: lua: segfault when calling haproxy sample fetches from lua
+ - BUILD: try to automatically detect the Lua library name
+ - BUILD/CLEANUP: systemd: avoid a warning due to mixed code and declaration
+ - BUG/MEDIUM: backend: Update hash to use unsigned int throughout
+ - BUG/MEDIUM: connection: fix memory corruption when building a proxy v2 header
+ - MEDIUM: connection: add new bit in Proxy Protocol V2
+ - BUG/MINOR: ssl: rejects OCSP response without nextupdate.
+ - BUG/MEDIUM: ssl: Fix to not serve expired OCSP responses.
+ - BUG/MINOR: ssl: Fix OCSP resp update fails with the same certificate configured twice.
+ - BUG/MINOR: ssl: Fix external function in order not to return a pointer on an internal trash buffer.
+ - MINOR: add fetches 'ssl_c_der' and 'ssl_f_der' to return DER formatted certs
+ - MINOR: ssl: add statement to force some ssl options in global.
+ - BUG/MINOR: ssl: correctly initialize ssl ctx for invalid certificates
+ - BUG/MEDIUM: ssl: fix bad ssl context init can cause segfault in case of OOM.
+ - BUG/MINOR: samples: fix unnecessary memcopy converting binary to string.
+ - MINOR: samples: adds the bytes converter.
+ - MINOR: samples: adds the field converter.
+ - MINOR: samples: add the word converter.
+ - BUG/MINOR: server: move the directive #endif to the end of file
+ - BUG/MAJOR: buffer: check the space left is enough or not when input data in a buffer is wrapped
+ - DOC: fix a few typos
+ - CLEANUP: epoll: epoll_events should be allocated according to global.tune.maxpollevents
+ - BUG/MINOR: http: fix typo: "401 Unauthorized" => "407 Unauthorized"
+ - BUG/MINOR: parse: refer curproxy instead of proxy
+ - BUG/MINOR: parse: check the validity of size string in a more strict way
+ - BUILD: add new target 'make uninstall' to support uninstalling haproxy from OS
+ - DOC: expand the docs for the provided stats.
+ - BUG/MEDIUM: unix: do not unlink() abstract namespace sockets upon failure.
+ - MEDIUM: ssl: Certificate Transparency support
+ - MEDIUM: stats: proxied stats admin forms fix
+ - MEDIUM: http: Compress HTTP responses with status codes 201,202,203 in addition to 200
+ - BUG/MEDIUM: connection: sanitize PPv2 header length before parsing address information
+ - MAJOR: namespace: add Linux network namespace support
+ - MINOR: systemd: Check configuration before start
+ - BUILD: ssl: handle boringssl in openssl version detection
+ - BUILD: ssl: disable OCSP when using boringssl
+ - BUILD: ssl: don't call get_rfc2409_prime when using boringssl
+ - MINOR: ssl: don't use boringssl's cipher_list
+ - BUILD: ssl: use OPENSSL_NO_OCSP to detect OCSP support
+ - MINOR: stats: fix minor typo in HTML page
+ - MINOR: Also accept SIGHUP/SIGTERM in systemd-wrapper
+ - MEDIUM: Add support for configurable TLS ticket keys
+ - DOC: Document the new tls-ticket-keys bind keyword
+ - DOC: clearly state that the "show sess" output format is not fixed
+ - MINOR: stats: fix minor typo in stats_dump_errors_to_buffer()
+ - DOC: httplog does not support 'no'
+ - BUG/MEDIUM: ssl: Fix a memory leak in DHE key exchange
+ - MINOR: ssl: use SSL_get_ciphers() instead of directly accessing the cipher list.
+ - BUG/MEDIUM: Consistently use 'check' in process_chk
+ - MEDIUM: Add external check
+ - BUG/MEDIUM: Do not set agent health to zero if server is disabled in config
+ - MEDIUM/BUG: Only explicitly report "DOWN (agent)" if the agent health is zero
+ - MEDIUM: Remove connect_chk
+ - MEDIUM: Refactor init_check and move to checks.c
+ - MEDIUM: Add free_check() helper
+ - MEDIUM: Move proto and addr fields struct check
+ - MEDIUM: Attach tcpcheck_rules to check
+ - MEDIUM: Add parsing of mailers section
+ - MEDIUM: Allow configuration of email alerts
+ - MEDIUM: Support sending email alerts
+ - DOC: Document email alerts
+ - MINOR: Remove trailing '.' from email alert messages
+ - MEDIUM: Allow suppression of email alerts by log level
+ - BUG/MEDIUM: Do not consider an agent check as failed on L7 error
+ - MINOR: deinit: fix memory leak
+ - MINOR: http: export the function 'smp_fetch_base32'
+ - BUG/MEDIUM: http: tarpit timeout is reset
+ - MINOR: sample: add "json" converter
+ - BUG/MEDIUM: pattern: don't load more than once a pattern list.
+ - MINOR: map/acl/dumpstats: remove the "Done." message
+ - BUG/MAJOR: ns: HAProxy segfault if the cli_conn is not from a network connection
+ - BUG/MINOR: pattern: error message missing
+ - BUG/MEDIUM: pattern: some entries are not deleted with case insensitive match
+ - BUG/MINOR: ARG6 and ARG7 don't fit in a 32 bits word
+ - MAJOR: poll: only rely on wake_expired_tasks() to compute the wait delay
+ - MEDIUM: task: call session analyzers if the task is woken by a message.
+ - MEDIUM: protocol: automatically pick the proto associated to the connection.
+ - MEDIUM: channel: wake up any request analyzer on response activity
+ - MINOR: converters: add a "void *private" argument to converters
+ - MINOR: converters: give the session pointer as converter argument
+ - MINOR: sample: add private argument to the struct sample_fetch
+ - MINOR: global: export function and permit not resolving DNS names
+ - MINOR: sample: add function for browsing samples.
+ - MINOR: global: export many symbols.
+ - MINOR: includes: fix a lot of missing or useless includes
+ - MEDIUM: tcp: add register keyword system.
+ - MEDIUM: buffer: make bo_putblk/bo_putstr/bo_putchk return the number of bytes copied.
+ - MEDIUM: http: change the code returned by the response processing rule functions
+ - MEDIUM: http/tcp: permit to resume http and tcp custom actions
+ - MINOR: channel: functions to get data from a buffer without copy
+ - MEDIUM: lua: lua integration in the build and init system.
+ - MINOR: lua: add ease functions
+ - MINOR: lua: add runtime execution context
+ - MEDIUM: lua: "com" signals
+ - MINOR: lua: add the configuration directive "lua-load"
+ - MINOR: lua: core: create "core" class and object
+ - MINOR: lua: post initialisation bindings
+ - MEDIUM: lua: add coroutine as tasks.
+ - MINOR: lua: add sample and args type converters
+ - MINOR: lua: txn: create class TXN associated with the transaction.
+ - MINOR: lua: add shared context in the lua stack
+ - MINOR: lua: txn: import existing sample-fetches in the class TXN
+ - MINOR: lua: txn: add lua function in TXN that returns an array of http headers
+ - MINOR: lua: register and execute sample-fetches in LUA
+ - MINOR: lua: register and execute converters in LUA
+ - MINOR: lua: add bindings for tcp and http actions
+ - MINOR: lua: core: add sleep functions
+ - MEDIUM: lua: socket: add "socket" class for TCP I/O
+ - MINOR: lua: core: pattern and acl manipulation
+ - MINOR: lua: channel: add "channel" class
+ - MINOR: lua: txn: object "txn" provides two objects "channel"
+ - MINOR: lua: core: can set the nice of the current task
+ - MINOR: lua: core: can yield an execution stack
+ - MINOR: lua: txn: add binding for closing the client connection.
+ - MEDIUM: lua: Lua initialisation "on demand"
+ - BUG/MAJOR: lua: send function fails and returns bad bytes
+ - MINOR: remove unused declaration.
+ - MINOR: lua: remove some #define
+ - MINOR: lua: use bitfield and macro in place of integer and enum
+ - MINOR: lua: set skeleton for Lua execution expiration
+ - MEDIUM: lua: each yielding function returns a wake up time.
+ - MINOR: lua: adds "forced yield" flag
+ - MEDIUM: lua: interrupt the Lua execution for running other processes
+ - MEDIUM: lua: change the sleep function core
+ - BUG/MEDIUM: lua: the execution timeout is ignored in yield case
+ - DOC: lua: Lua configuration documentation
+ - MINOR: lua: add the struct session in the lua channel struct
+ - BUG/MINOR: lua: set buffer if it is not available.
+ - BUG/MEDIUM: lua: reset flags before resuming execution
+ - BUG/MEDIUM: lua: fix infinite loop about channel
+ - BUG/MEDIUM: lua: the Lua process is not waked up after sending data on requests side
+ - BUG/MEDIUM: lua: many errors when we try to send data with the channel API
+ - MEDIUM: lua: use the Lua-5.3 version of the library
+ - BUG/MAJOR: lua: some functions are not yieldable, the forced yield causes errors
+ - BUG/MEDIUM: lua: can't handle the response bytes
+ - BUG/MEDIUM: lua: segfault with buffer_replace2
+ - BUG/MINOR: lua: check buffers before initializing socket
+ - BUG/MINOR: log: segfault if there is no proxy reference
+ - BUG/MEDIUM: lua: sockets don't have buffer to write data
+ - BUG/MEDIUM: lua: cannot connect socket
+ - BUG/MINOR: lua: sockets receive behavior doesn't follow the specs
+ - BUG/BUILD: lua: The strict Lua 5.3 version check is not done.
+ - BUG/MEDIUM: buffer: one byte miss in buffer free space check
+ - MEDIUM: lua: make the functions hlua_gethlua() and hlua_sethlua() faster
+ - MINOR: replace the Core object by a simple model.
+ - MEDIUM: lua: change the objects configuration
+ - MEDIUM: lua: create a namespace for the fetches
+ - MINOR: converters: add function to browse converters
+ - MINOR: lua: wrapper for converters
+ - MINOR: lua: replace function (req|get)_channel by a variable
+ - MINOR: lua: fetches and converters can return an empty string in place of nil
+ - DOC: lua api
+ - BUG/MEDIUM: sample: fix random number upper-bound
+ - BUG/MINOR: stats: fix incorrect printf type.
+ - BUG/MAJOR: session: revert all the crappy client-side timeout changes
+ - BUG/MINOR: logs: properly initialize and count log sockets
+ - BUG/MEDIUM: http: fetch "base" is not compatible with set-header
+ - BUG/MINOR: counters: do not untrack counters before logging
+ - BUG/MAJOR: sample: correctly reinitialize sample fetch context before calling sample_process()
+ - MINOR: stick-table: make stktable_fetch_key() indicate why it failed
+ - BUG/MEDIUM: counters: fix track-sc* to wait on unstable contents
+ - BUILD: remove TODO from the spec file and add README
+ - MINOR: log: make MAX_SYSLOG_LEN overridable at build time
+ - MEDIUM: log: support a user-configurable max log line length
+ - DOC: provide an example of how to use ssl_c_sha1
+ - BUILD: checks: external checker needs signal.h
+ - BUILD: checks: kill a minor warning on Solaris in external checks
+ - BUILD: http: fix isdigit & isspace warnings on Solaris
+ - BUG/MINOR: listener: set the listener's fd to -1 after deletion
+ - BUG/MEDIUM: unix: failed abstract socket binding is retryable
+ - MEDIUM: listener: implement a per-protocol pause() function
+ - MEDIUM: listener: support rebinding during resume()
+ - BUG/MEDIUM: unix: completely unbind abstract sockets during a pause()
+ - DOC: explicitly mention the limits of abstract namespace sockets
+ - DOC: minor fix on {sc,src}_kbytes_{in,out}
+ - DOC: fix alphabetical sort of converters
+ - MEDIUM: stick-table: implement lookup from a sample fetch
+ - MEDIUM: stick-table: add new converters to fetch table data
+ - MINOR: samples: add two converters for the date format
+ - BUG/MAJOR: http: correctly rewind the request body after start of forwarding
+ - DOC: remove references to CPU=native in the README
+ - DOC: mention that "compression offload" is ignored in defaults section
+ - DOC: mention that Squid correctly responds 400 to PPv2 header
+ - BUILD: fix dependencies between config and compat.h
+ - MINOR: session: export the function 'smp_fetch_sc_stkctr'
+ - MEDIUM: stick-table: make it easier to register extra data types
+ - BUG/MINOR: http: base32+src should use the big endian version of base32
+ - MINOR: sample: allow IP address to cast to binary
+ - MINOR: sample: add new converters to hash input
+ - MINOR: sample: allow integers to cast to binary
+ - BUILD: report commit ID in git versions as well
+ - CLEANUP: session: move the stick counters declarations to stick_table.h
+ - MEDIUM: http: add the track-sc* actions to http-request rules
+ - BUG/MEDIUM: connection: fix proxy v2 header again!
+ - BUG/MAJOR: tcp: fix a possible busy spinning loop in content track-sc*
+ - OPTIM/MINOR: proxy: reduce struct proxy by 48 bytes on 64-bit archs
+ - MINOR: log: add a new field "%lc" to implement a per-frontend log counter
+ - BUG/MEDIUM: http: fix inverted condition in pat_match_meth()
+ - BUG/MEDIUM: http: fix improper parsing of HTTP methods for use with ACLs
+ - BUG/MINOR: pattern: remove useless allocation of unused trash in pat_parse_reg()
+ - BUG/MEDIUM: acl: correctly compute the output type when a converter is used
+ - CLEANUP: acl: cleanup some of the redundancy and spaghetti after last fix
+ - BUG/CRITICAL: http: don't update msg->sov once data start to leave the buffer
+ - MEDIUM: http: enable header manipulation for 101 responses
+ - BUG/MEDIUM: config: propagate frontend to backend process binding again.
+ - MEDIUM: config: properly propagate process binding between proxies
+ - MEDIUM: config: make the frontends automatically bind to the listeners' processes
+ - MEDIUM: config: compute the exact bind-process before listener's maxaccept
+ - MEDIUM: config: only warn if stats are attached to multi-process bind directives
+ - MEDIUM: config: report it when tcp-request rules are misplaced
+ - DOC: indicate in the doc that track-sc* can wait if data are missing
+ - MINOR: config: detect the case where a tcp-request content rule has no inspect-delay
+ - MEDIUM: systemd-wrapper: support multiple executable versions and names
+ - BUG/MEDIUM: remove debugging code from systemd-wrapper
+ - BUG/MEDIUM: http: adjust close mode when switching to backend
+ - BUG/MINOR: config: don't propagate process binding on fatal errors.
+ - BUG/MEDIUM: check: rule-less tcp-check must detect connect failures
+ - BUG/MINOR: tcp-check: report the correct failed step in the status
+ - DOC: indicate that weight zero is reported as DRAIN
+ - BUG/MEDIUM: config: avoid skipping disabled proxies
+ - BUG/MINOR: config: do not accept more track-sc than configured
+ - BUG/MEDIUM: backend: fix URI hash when a query string is present
+ - BUG/MEDIUM: http: don't dump debug headers on MSG_ERROR
+ - BUG/MAJOR: cli: explicitly call cli_release_handler() upon error
+ - BUG/MEDIUM: tcp: fix outgoing polling based on proxy protocol
+ - BUILD/MINOR: ssl: de-constify "ciphers" to avoid a warning on openssl-0.9.8
+ - BUG/MEDIUM: tcp: don't use SO_ORIGINAL_DST on non-AF_INET sockets
+ - BUG/BUILD: revert accidental change in the makefile from latest SSL fix
+ - BUG/MEDIUM: ssl: force a full GC in case of memory shortage
+ - MEDIUM: ssl: add support for smaller SSL records
+ - MINOR: session: release a few other pools when stopping
+ - MINOR: task: release the task pool when stopping
+ - BUG/MINOR: config: don't inherit the default balance algorithm in frontends
+ - BUG/MAJOR: frontend: initialize capture pointers earlier
+ - BUG/MINOR: stats: correctly set the request/response analysers
+ - MAJOR: polling: centralize calls to I/O callbacks
+ - DOC: fix typo in the body parser documentation for msg.sov
+ - BUG/MINOR: peers: the buffer size is global.tune.bufsize, not trash.size
+ - MINOR: sample: add a few basic internal fetches (nbproc, proc, stopping)
+ - DEBUG: pools: apply poisoning on every allocated pool
+ - BUG/MAJOR: sessions: unlink session from list on out of memory
+ - BUG/MEDIUM: patterns: previous fix was incomplete
+ - BUG/MEDIUM: payload: ensure that a request channel is available
+ - BUG/MINOR: tcp-check: don't condition data polling on check type
+ - BUG/MEDIUM: tcp-check: don't rely on random memory contents
+ - BUG/MEDIUM: tcp-checks: disable quick-ack unless next rule is an expect
+ - BUG/MINOR: config: fix typo in condition when propagating process binding
+ - BUG/MEDIUM: config: do not propagate processes between stopped processes
+ - BUG/MAJOR: stream-int: properly check the memory allocation return
+ - BUG/MEDIUM: memory: fix freeing logic in pool_gc2()
+ - BUG/MAJOR: namespaces: conn->target is not necessarily a server
+ - BUG/MEDIUM: compression: correctly report zlib_mem
+ - CLEANUP: lists: remove dead code
+ - CLEANUP: memory: remove dead code
+ - CLEANUP: memory: replace macros pool_alloc2/pool_free2 with functions
+ - MINOR: memory: cut pool allocator in 3 layers
+ - MEDIUM: memory: improve pool_refill_alloc() to pass a refill count
+ - MINOR: stream-int: retrieve session pointer from stream-int
+ - MINOR: buffer: reset a buffer in b_reset() and not channel_init()
+ - MEDIUM: buffer: use b_alloc() to allocate and initialize a buffer
+ - MINOR: buffer: move buffer initialization after channel initialization
+ - MINOR: buffer: only use b_free to release buffers
+ - MEDIUM: buffer: always assign a dummy empty buffer to channels
+ - MEDIUM: buffer: add a new buf_wanted dummy buffer to report failed allocations
+ - MEDIUM: channel: do not report full when buf_empty is present on a channel
+ - MINOR: session: group buffer allocations together
+ - MINOR: buffer: implement b_alloc_fast()
+ - MEDIUM: buffer: implement b_alloc_margin()
+ - MEDIUM: session: implement a basic atomic buffer allocator
+ - MAJOR: session: implement a wait-queue for sessions who need a buffer
+ - MAJOR: session: only allocate buffers when needed
+ - MINOR: stats: report a "waiting" flags for sessions
+ - MAJOR: session: only wake up as many sessions as available buffers permit
+ - MINOR: config: implement global setting tune.buffers.reserve
+ - MINOR: config: implement global setting tune.buffers.limit
+ - MEDIUM: channel: implement a zero-copy buffer transfer
+ - MEDIUM: stream-int: support splicing from applets
+ - OPTIM: stream-int: try to send pending spliced data
+ - CLEANUP: session: remove session_from_task()
+ - DOC: add missing entry for log-format and clarify the text
+ - MINOR: logs: add a new per-proxy "log-tag" directive
+ - BUG/MEDIUM: http: fix header removal when previous header ends with pure LF
+ - MINOR: config: extend the default max hostname length to 64 and beyond
+ - BUG/MEDIUM: channel: fix possible integer overflow on reserved size computation
+ - BUG/MINOR: channel: compare to_forward with buf->i, not buf->size
+ - MINOR: channel: add channel_in_transit()
+ - MEDIUM: channel: make buffer_reserved() use channel_in_transit()
+ - MEDIUM: channel: make bi_avail() use channel_in_transit()
+ - BUG/MEDIUM: channel: don't schedule data in transit for leaving until connected
+ - CLEANUP: channel: rename channel_reserved -> channel_is_rewritable
+ - MINOR: channel: rename channel_full() to !channel_may_recv()
+ - MINOR: channel: rename buffer_reserved() to channel_reserved()
+ - MINOR: channel: rename buffer_max_len() to channel_recv_limit()
+ - MINOR: channel: rename bi_avail() to channel_recv_max()
+ - MINOR: channel: rename bi_erase() to channel_truncate()
+ - BUG/MAJOR: log: don't try to emit a log if no logger is set
+ - MINOR: tools: add new round_2dig() function to round integers
+ - MINOR: global: always export some SSL-specific metrics
+ - MINOR: global: report information about the cost of SSL connections
+ - MAJOR: init: automatically set maxconn and/or maxsslconn when possible
+ - MINOR: http: add a new fetch "query" to extract the request's query string
+ - MINOR: hash: add new function hash_crc32
+ - MINOR: samples: provide a "crc32" converter
+ - MEDIUM: backend: add the crc32 hash algorithm for load balancing
+ - BUG/MINOR: args: add missing entry for ARGT_MAP in arg_type_names
+ - BUG/MEDIUM: http: make http-request set-header compute the string before removal
+ - MEDIUM: args: use #define to specify the number of bits used by arg types and counts
+ - MEDIUM: args: increase arg type to 5 bits and limit arg count to 5
+ - MINOR: args: add type-specific flags for each arg in a list
+ - MINOR: args: implement a new arg type for regex : ARGT_REG
+ - MEDIUM: regex: add support for passing regex flags to regex_exec_match()
+ - MEDIUM: samples: add a regsub converter to perform regex-based transformations
+ - BUG/MINOR: sample: fix case sensitivity for the regsub converter
+ - MEDIUM: http: implement http-request set-{method,path,query,uri}
+ - DOC: fix missing closing bracket on regsub
+ - MEDIUM: samples: provide basic arithmetic and bitwise operators
+ - MEDIUM: init: continue to enforce SYSTEM_MAXCONN with auto settings if set
+ - BUG/MINOR: http: fix incorrect header value offset in replace-hdr/replace-value
+ - BUG/MINOR: http: abort request processing on filter failure
+ - MEDIUM: tcp: implement tcp-ut bind option to set TCP_USER_TIMEOUT
+ - MINOR: ssl/server: add the "no-ssl-reuse" server option
+ - BUG/MAJOR: peers: initialize s->buffer_wait when creating the session
+ - MINOR: http: add a new function to iterate over each header line
+ - MINOR: http: add the new sample fetches req.hdr_names and res.hdr_names
+ - MEDIUM: task: always ensure that the run queue is consistent
+ - BUILD: Makefile: add -Wdeclaration-after-statement
+ - BUILD/CLEANUP: ssl: avoid a warning due to mixed code and declaration
+ - BUILD/CLEANUP: config: silent 3 warnings about mixed declarations with code
+ - MEDIUM: protocol: use a family array to index the protocol handlers
+ - BUILD: lua: cleanup many mixed occurrences declarations & code
+ - BUG/MEDIUM: task: fix recently introduced scheduler skew
+ - BUG/MINOR: lua: report the correct function name in an error message
+ - BUG/MAJOR: http: fix stats regression consecutive to HTTP_RULE_RES_YIELD
+ - Revert "BUG/MEDIUM: lua: can't handle the response bytes"
+ - MINOR: lua: convert IP addresses to type string
+ - CLEANUP: lua: use the same function names in C and Lua
+ - REORG/MAJOR: move session's req and resp channels back into the session
+ - CLEANUP: remove now unused channel pool
+ - REORG/MEDIUM: stream-int: introduce si_ic/si_oc to access channels
+ - MEDIUM: stream-int: add a flag indicating which side the SI is on
+ - MAJOR: stream-int: only rely on SI_FL_ISBACK to find the requested channel
+ - MEDIUM: stream-interface: remove now unused pointers to channels
+ - MEDIUM: stream-int: make si_sess() use the stream int's side
+ - MEDIUM: stream-int: use si_task() to retrieve the task from the stream int
+ - MEDIUM: stream-int: remove any reference to the owner
+ - CLEANUP: stream-int: add si_ib/si_ob to dereference the buffers
+ - CLEANUP: stream-int: add si_opposite() to find the other stream interface
+ - REORG/MEDIUM: channel: only use chn_prod / chn_cons to find stream-interfaces
+ - MEDIUM: channel: add a new flag "CF_ISRESP" for the response channel
+ - MAJOR: channel: only rely on the new CF_ISRESP flag to find the SI
+ - MEDIUM: channel: remove now unused ->prod and ->cons pointers
+ - CLEANUP: session: simplify references to chn_{prod,cons}(&s->{req,res})
+ - CLEANUP: session: use local variables to access channels / stream ints
+ - CLEANUP: session: don't needlessly pass a pointer to the stream-int
+ - CLEANUP: session: don't use si_{ic,oc} when we know the session.
+ - CLEANUP: stream-int: limit usage of si_ic/si_oc
+ - CLEANUP: lua: limit usage of si_ic/si_oc
+ - MINOR: channel: add chn_sess() helper to retrieve session from channel
+ - MEDIUM: session: simplify receive buffer allocator to only use the channel
+ - MEDIUM: lua: use CF_ISRESP to detect the channel's side
+ - CLEANUP: lua: remove the session pointer from hlua_channel
+ - CLEANUP: lua: hlua_channel_new() doesn't need the pointer to the session anymore
+ - MEDIUM: lua: remove struct hlua_channel
+ - MEDIUM: lua: remove hlua_sample_fetch
+
+2014/06/19 : 1.6-dev0
+ - exact copy of 1.5.0
+
+2014/06/19 : 1.5.0
+ - MEDIUM: ssl: ignored file names ending as '.issuer' or '.ocsp'.
+ - MEDIUM: ssl: basic OCSP stapling support.
+ - MINOR: ssl/cli: Fix inappropriate comment in code on 'set ssl ocsp-response'
+ - MEDIUM: ssl: add 300s supported time skew on OCSP response update.
+ - MINOR: checks: mysql-check: Add support for v4.1+ authentication
+ - MEDIUM: ssl: Add the option to use standardized DH parameters >= 1024 bits
+ - MEDIUM: ssl: fix detection of ephemeral diffie-hellman key exchange by using the cipher description.
+ - MEDIUM: http: add actions "replace-header" and "replace-values" in http-req/resp
+ - MEDIUM: Break out check establishment into connect_chk()
+ - MEDIUM: Add port_to_str helper
+ - BUG/MEDIUM: fix ignored values for half-closed timeouts (client-fin and server-fin) in defaults section.
+ - BUG/MEDIUM: Fix unhandled connections problem with systemd daemon mode and SO_REUSEPORT.
+ - MINOR: regex: fix a little configuration memory leak.
+ - MINOR: regex: Create JIT-compatible function that returns match strings
+ - MEDIUM: regex: replace all standard regex functions with our own functions
+ - MEDIUM: regex: Remove null terminated strings.
+ - MINOR: regex: Use native PCRE API.
+ - MINOR: missing regex.h include
+ - DOC: Add Exim as Proxy Protocol implementer.
+ - BUILD: don't use type "uint" which is not portable
+ - BUILD: stats: workaround stupid and bogus -Werror=format-security behaviour
+ - BUG/MEDIUM: http: clear CF_READ_NOEXP when preparing a new transaction
+ - CLEANUP: http: don't clear CF_READ_NOEXP twice
+ - DOC: fix proxy protocol v2 decoder example
+ - DOC: fix remaining occurrences of "pattern extraction"
+ - MINOR: log: allow the HTTP status code to be logged even in TCP frontends
+ - MINOR: logs: don't limit HTTP header captures to HTTP frontends
+ - MINOR: sample: improve sample_fetch_string() to report partial contents
+ - MINOR: capture: extend the captures to support non-header keys
+ - MINOR: tcp: prepare support for the "capture" action
+ - MEDIUM: tcp: add a new tcp-request capture directive
+ - MEDIUM: session: allow shorter retry delay if timeout connect is small
+ - MEDIUM: session: don't apply the retry delay when redispatching
+ - MEDIUM: session: redispatch earlier when possible
+ - MINOR: config: warn when tcp-check rules are used without option tcp-check
+ - BUG/MINOR: connection: make proxy protocol v1 support the UNKNOWN protocol
+ - DOC: proxy protocol example parser was still wrong
+ - DOC: minor updates to the proxy protocol doc
+ - CLEANUP: connection: merge proxy proto v2 header and address block
+ - MEDIUM: connection: add support for proxy protocol v2 in accept-proxy
+ - MINOR: tools: add new functions to quote-encode strings
+ - DOC: clarify the CSV format
+ - MEDIUM: stats: report the last check and last agent's output on the CSV status
+ - MINOR: freq_ctr: introduce a new averaging method
+ - MEDIUM: session: maintain per-backend and per-server time statistics
+ - MEDIUM: stats: report per-backend and per-server time stats in HTML and CSV outputs
+ - BUG/MINOR: http: fix typos in previous patch
+ - DOC: remove the ultra-obsolete TODO file
+ - DOC: update roadmap
+ - DOC: minor updates to the README
+ - DOC: mention the maxconn limitations with the select poller
+ - DOC: commit a few old design thoughts files
+
+2014/05/28 : 1.5-dev26
+ - BUG/MEDIUM: polling: fix possible CPU hogging of worker processes after receiving SIGUSR1.
+ - BUG/MINOR: stats: fix a typo on a closing tag for a server tracking another one
+ - OPTIM: stats: avoid the calculation of a useless link on tracking servers in maintenance
+ - MINOR: fix a few memory usage errors
+ - CONTRIB: halog: Filter input lines by date and time through timestamp
+ - MINOR: ssl: SSL_CTX_set_options() and SSL_CTX_set_mode() take a long, not an int
+ - BUG/MEDIUM: regex: fix risk of buffer overrun in exp_replace()
+ - MINOR: acl: set "str" as default match for strings
+ - DOC: Add some precisions about acl default matching method
+ - MEDIUM: acl: strengthen the option parser to report invalid options
+ - BUG/MEDIUM: config: a stats-less config crashes in 1.5-dev25
+ - BUG/MINOR: checks: tcp-check must not stop on '\0' for binary checks
+ - MINOR: stats: improve alignment of color codes to save one line of header
+ - MINOR: checks: simplify and improve reporting of state changes when using log-health-checks
+ - MINOR: server: remove the SRV_DRAIN flag which can always be deduced
+ - MINOR: server: use functions to detect state changes and to update them
+ - MINOR: server: create srv_was_usable() from srv_is_usable() and use a pointer
+ - BUG/MINOR: stats: do not report "100%" in the throttle column when server is draining
+ - BUG/MAJOR: config: don't free valid regex memory
+ - BUG/MEDIUM: session: don't clear CF_READ_NOEXP if analysers are not called
+ - BUG/MINOR: stats: tracking servers may incorrectly report an inherited DRAIN status
+ - MEDIUM: proxy: make timeout parser a bit stricter
+ - REORG/MEDIUM: server: split server state and flags in two different variables
+ - REORG/MEDIUM: server: move the maintenance bits out of the server state
+ - MAJOR: server: use states instead of flags to store the server state
+ - REORG: checks: put the functions in the appropriate files !
+ - MEDIUM: server: properly support and propagate the maintenance status
+ - MEDIUM: server: allow multi-level server tracking
+ - CLEANUP: checks: rename the server_status_printf function
+ - MEDIUM: checks: simplify server up/down/nolb transitions
+ - MAJOR: checks: move health checks changes to set_server_check_status()
+ - MINOR: server: make the status reporting function support a reason
+ - MINOR: checks: simplify health check reporting functions
+ - MINOR: server: implement srv_set_stopped()
+ - MINOR: server: implement srv_set_running()
+ - MINOR: server: implement srv_set_stopping()
+ - MEDIUM: checks: simplify failure notification using srv_set_stopped()
+ - MEDIUM: checks: simplify success notification using srv_set_running()
+ - MEDIUM: checks: simplify stopping mode notification using srv_set_stopping()
+ - MEDIUM: stats: report a server's own state instead of the tracked one's
+ - MINOR: server: make use of srv_is_usable() instead of checking eweight
+ - MAJOR: checks: add support for a new "drain" administrative mode
+ - MINOR: stats: use the admin flags for soft enable/disable/stop/start on the web page
+ - MEDIUM: stats: introduce new actions to simplify admin status management
+ - MINOR: cli: introduce a new "set server" command
+ - MINOR: stats: report a distinct output for DOWN caused by agent
+ - MINOR: checks: support specific check reporting for the agent
+ - MINOR: checks: support a neutral check result
+ - BUG/MINOR: cli: "agent" was missing from the "enable"/"disable" help message
+ - MEDIUM: cli: add support for enabling/disabling health checks.
+ - MEDIUM: stats: report down caused by agent prior to reporting up
+ - MAJOR: agent: rework the response processing and support additional actions
+ - MINOR: stats: improve the stats web page to support more actions
+ - CONTRIB: halog: avoid calling time/localtime/mktime for each line
+ - DOC: document the workarounds for Google Chrome's bogus pre-connect
+ - MINOR: stats: report SSL key computations per second
+ - MINOR: stats: add counters for SSL cache lookups and misses
+
+2014/05/10 : 1.5-dev25
+ - MEDIUM: connection: Implement an extended PROXY Protocol V2
+ - MINOR: ssl: clean unused ACLs declarations
+ - MINOR: ssl: adds fetchs and ACLs for ssl back connection.
+ - MINOR: ssl: merge client's and frontend's certificate functions.
+ - MINOR: ssl: adds ssl_f_sha1 fetch to return frontend's certificate fingerprint
+ - MINOR: ssl: adds sample converter base64 for binary type.
+ - MINOR: ssl: convert to binary ssl_fc_unique_id and ssl_bc_unique_id.
+ - BUG/MAJOR: ssl: Fallback to private session cache if current lock mode is not supported.
+ - MAJOR: ssl: Change default locks on ssl session cache.
+ - BUG/MINOR: chunk: Fix functions chunk_strcmp and chunk_strcasecmp matching a substring.
+ - MINOR: ssl: add global statement tune.ssl.force-private-cache.
+ - MINOR: ssl: remove fallback to SSL session private cache if lock init fails.
+ - BUG/MEDIUM: patterns: last fix was still not enough
+ - MINOR: http: export the smp_fetch_cookie function
+ - MINOR: http: generic pointer to rule argument
+ - BUG/MEDIUM: pattern: a typo breaks automatic acl/map numbering
+ - BUG/MAJOR: patterns: -i and -n are ignored for inlined patterns
+ - BUG/MINOR: proxy: unsafe initialization of HTTP transaction when switching from TCP frontend
+ - BUG/MINOR: http: log 407 in case of proxy auth
+ - MINOR: http: rely on the message body parser to send 100-continue
+ - MEDIUM: http: move reqadd after execution of http_request redirect
+ - MEDIUM: http: jump to dedicated labels after http-request processing
+ - BUG/MINOR: http: block rules forgot to increment the denied_req counter
+ - BUG/MINOR: http: block rules forgot to increment the session's request counter
+ - MEDIUM: http: move Connection header processing earlier
+ - MEDIUM: http: remove even more of the spaghetti in the request path
+ - MINOR: http: silently support the "block" action for http-request
+ - CLEANUP: proxy: rename "block_cond" to "block_rules"
+ - MEDIUM: http: emulate "block" rules using "http-request" rules
+ - MINOR: http: remove the now unused loop over "block" rules
+ - MEDIUM: http: factorize the "auth" action of http-request and stats
+ - MEDIUM: http: make http-request rules processing return a verdict instead of a rule
+ - MINOR: config: add minimum support for emitting warnings only once
+ - MEDIUM: config: inform the user about the deprecatedness of "block" rules
+ - MEDIUM: config: inform the user that "reqsetbe" is deprecated
+ - MEDIUM: config: inform the user only once that "redispatch" is deprecated
+ - MEDIUM: config: warn that '{cli,con,srv}timeout' are deprecated
+ - BUG/MINOR: auth: fix wrong return type in pat_match_auth()
+ - BUILD: config: remove a warning with clang
+ - BUG/MAJOR: http: connection setup may stall on balance url_param
+ - BUG/MEDIUM: http/session: disable client-side expiration only after body
+ - BUG/MEDIUM: http: correctly report request body timeouts
+ - BUG/MEDIUM: http: disable server-side expiration until client has sent the body
+ - MEDIUM: listener: make the accept function more robust against pauses
+ - BUILD: syscalls: remove improper inline statement in front of syscalls
+ - BUILD: ssl: SSL_CTX_set_msg_callback() needs openssl >= 0.9.7
+ - BUG/MAJOR: session: recover the correct connection pointer in half-initialized sessions
+ - DOC: add some explanation on the shared cache build options in the readme.
+ - MEDIUM: proxy: only adjust the backend's bind-process when already set
+ - MEDIUM: config: limit nbproc to the machine's word size
+ - MEDIUM: config: check the bind-process settings according to nbproc
+ - MEDIUM: listener: parse the new "process" bind keyword
+ - MEDIUM: listener: inherit the process mask from the proxy
+ - MAJOR: listener: only start listeners bound to the same processes
+ - MINOR: config: only report a warning when stats sockets are bound to more than 1 process
+ - CLEANUP: config: set the maxaccept value for peers listeners earlier
+ - BUG/MINOR: backend: only match IPv4 addresses with RDP cookies
+ - BUG/MINOR: checks: correctly configure the address family and protocol
+ - MINOR: tools: split is_addr() and is_inet_addr()
+ - MINOR: protocols: use is_inet_addr() when only INET addresses are desired
+ - MEDIUM: unix: add preliminary support for connecting to servers over UNIX sockets
+ - MEDIUM: checks: only complain about the missing port when the check uses TCP
+ - MEDIUM: unix: implement support for Linux abstract namespace sockets
+ - DOC: map_beg was missing from the table of map_* converters
+ - DOC: ebtree: indicate that prefix insertion/lookup may be used with strings
+ - MEDIUM: pattern: use ebtree's longest match to index/lookup string beginning
+ - BUILD: remove the obsolete BSD and OSX makefiles
+ - MEDIUM: unix: avoid a double connect probe when no data are sent
+ - DOC: stop referencing the slow git repository in the README
+ - BUILD: only build the systemd wrapper on Linux 2.6 and above
+ - DOC: update roadmap with completed tasks
+ - MEDIUM: session: implement half-closed timeouts (client-fin and server-fin)
+
+2014/04/26 : 1.5-dev24
+ - MINOR: pattern: find element in a reference
+ - MEDIUM: http: ACL and MAP updates through http-(request|response) rules
+ - MEDIUM: ssl: explicitly log failed handshakes after a heartbeat
+ - DOC: Full section dedicated to the converters
+ - MEDIUM: http: register http-request and http-response keywords
+ - BUG/MINOR: compression: correctly report incoming byte count
+ - BUG/MINOR: http: don't report server aborts as client aborts
+ - BUG/MEDIUM: channel: bi_putblk() must not wrap before the end of buffer
+ - CLEANUP: buffers: remove unused function buffer_contig_space_with_res()
+ - MEDIUM: stats: reimplement HTTP keep-alive on the stats page
+ - BUG/MAJOR: http: fix timeouts during data forwarding
+ - BUG/MEDIUM: http: 100-continue responses must process the next part immediately
+ - MEDIUM: http: move skipping of 100-continue earlier
+ - BUILD: stats: let gcc know that last_fwd cannot be used uninitialized...
+ - CLEANUP: general: get rid of all old occurrences of "session *t"
+ - CLEANUP: http: remove the useless "if (1)" inherited from version 1.4
+ - BUG/MEDIUM: stats: mismatch between behaviour and doc about front/back
+ - MEDIUM: http: enable analysers to have keep-alive on stats
+ - REORG: http: move HTTP Connection response header parsing earlier
+ - MINOR: stats: always emit HTTP/1.1 in responses
+ - MINOR: http: add capture.req.ver and capture.res.ver
+ - MINOR: checks: add a new global max-spread-checks directive
+ - BUG/MAJOR: http: fix the 'next' pointer when performing a redirect
+ - MINOR: http: implement the max-keep-alive-queue setting
+ - DOC: fix alphabetic order of tcp-check
+ - MINOR: connection: add a new error code for SSL with heartbeat
+ - MEDIUM: ssl: implement a workaround for the OpenSSL heartbleed attack
+ - BUG/MEDIUM: Revert "MEDIUM: ssl: Add standardized DH parameters >= 1024 bits"
+ - BUILD: http: remove a warning on strndup
+ - BUILD: ssl: avoid a warning about conn not used with OpenSSL < 1.0.1
+ - BUG/MINOR: ssl: really block OpenSSL's response to heartbleed attack
+ - MINOR: ssl: finally catch the heartbeats missing the padding
+
+2014/04/23 : 1.5-dev23
+ - BUG/MINOR: reject malformed HTTP/0.9 requests
+ - MINOR: systemd wrapper: re-execute on SIGUSR2
+ - MINOR: systemd wrapper: improve logging
+ - MINOR: systemd wrapper: propagate exit status
+ - BUG/MINOR: tcpcheck connect wrong behavior
+ - MEDIUM: proxy: support use_backend with dynamic names
+ - MINOR: stats: Enhancement to stats page to provide information of last session time.
+ - BUG/MEDIUM: peers: fix key consistency for integer stick tables
+ - DOC: fix a typo on http-server-close and encapsulate options with double-quotes
+ - DOC: fix fetching samples syntax
+ - MINOR: ssl: add ssl_fc_unique_id to fetch TLS Unique ID
+ - MEDIUM: ssl: Use ALPN support as it will be available in OpenSSL 1.0.2
+ - DOC: fix typo
+ - CLEANUP: code style: use tabs to indent code instead of spaces
+ - DOC: fix a few config typos.
+ - BUG/MINOR: raw_sock: also consider ENOTCONN in addition to EAGAIN for recv()
+ - DOC: lowercase format string in unique-id
+ - MINOR: set IP_FREEBIND on IPv6 sockets in transparent mode
+ - BUG/MINOR: acl: req_ssl_sni fails with SSLv3 record version
+ - BUG/MINOR: build: add missing objects in osx and bsd Makefiles
+ - BUG/MINOR: build: handle whitespaces in wc -l output
+ - BUG/MINOR: Fix name lookup ordering when compiled with USE_GETADDRINFO
+ - MEDIUM: ssl: Add standardized DH parameters >= 1024 bits
+ - BUG/MEDIUM: map: The map parser includes blank lines.
+ - BUG/MINOR: log: The log of a quoted capture header was terminated by 2 quotes.
+ - MINOR: standard: add function "encode_chunk"
+ - BUG/MINOR: http: fix encoding of samples used in http headers
+ - MINOR: sample: add hex converter
+ - MEDIUM: sample: change the behavior of the bin2str cast
+ - MAJOR: auth: Change the internal authentication system.
+ - MEDIUM: acl/pattern: standardisation "of pat_parse_int()" and "pat_parse_dotted_ver()"
+ - MEDIUM: pattern: The pattern parser no longer uses <opaque> and just takes one string.
+ - MEDIUM: pattern: Change the prototype of the function pattern_register().
+ - CONTRIB: ip6range: add a network IPv6 range to mask converter
+ - MINOR: pattern: separate the list element from the data part.
+ - MEDIUM: pattern: add indexation function.
+ - MEDIUM: pattern: The parse functions just return "struct pattern" without memory allocation
+ - MINOR: pattern: Rename "pat_idx_elt" to "pattern_tree"
+ - MINOR: sample: don't call the sample cast function "c_none"
+ - MINOR: standard: Add function for converting cidr to network mask.
+ - MEDIUM: sample: Remove types SMP_T_CSTR and SMP_T_CBIN, replace it by SMP_F_CONST flags
+ - MEDIUM: sample/http_proto: Add new type called method
+ - MINOR: dumpstats: Group map inline help
+ - MEDIUM: pattern: The function pattern_exec_match() returns "struct pattern" if the pattern matches.
+ - MINOR: dumpstats: change map inline sentences
+ - MINOR: dumpstats: change the "get map" display management
+ - MINOR: map/dumpstats: The cli cmd "get map ..." display the "int" format.
+ - MEDIUM: pattern: The match function browses the list or the tree itself.
+ - MEDIUM: pattern: Index IPv6 addresses in a tree.
+ - MEDIUM: pattern: add delete functions
+ - MEDIUM: pattern: add prune function
+ - MEDIUM: pattern: add sample lookup function.
+ - MEDIUM: pattern/dumpstats: The function pattern_lookup() is no longer used
+ - MINOR: map/pattern: The sample parser is stored in the pattern
+ - MAJOR: pattern/map: Extends the map edition system in the patterns
+ - MEDIUM: pattern: merge same pattern
+ - MEDIUM: pattern: The expected type is stored in the pattern head, and conversion is executed once.
+ - MINOR: pattern: Each pattern is identified by unique id.
+ - MINOR: pattern/acl: Each pattern of each acl can be load with specified id
+ - MINOR: pattern: The function "pattern_register()" is no longer used.
+ - MINOR: pattern: Merge function pattern_add() with pat_ref_push().
+ - MINOR: pattern: store configuration reference for each acl or map pattern.
+ - MINOR: pattern: Each pattern expression element store the reference struct.
+ - MINOR: dumpstats: display the reference for the key/pattern and value.
+ - MEDIUM: pattern: delete() function uses the pat_ref_elt to find the element to be removed
+ - MEDIUM: pattern_find_smp: the find_smp functions use the pat_ref_elt to find the element to be removed
+ - MEDIUM: dumpstats/pattern: display and use each pointer of each pattern dumped
+ - MINOR: pattern/map/acl: Centralization of the file parsers
+ - MINOR: pattern: Check if the file reference is not used with acl and map
+ - MINOR: acl/pattern: Acl "-M" option forces loading the file as a map file with two columns
+ - MEDIUM: dumpstats: Display error message during add of values.
+ - MINOR: pattern: The function pat_ref_set() now has atomic behavior
+ - MINOR: regex: The pointer regstr in the struct regex is no longer used.
+ - MINOR: cli: Block the usage of the command "acl add" in many cases.
+ - MINOR: doc: Update the documentation about the map and acl
+ - MINOR: pattern: index duplicates
+ - MINOR: configuration: File and line propagation
+ - MINOR: dumpstat/conf: display all the configuration lines that using pattern reference
+ - MINOR: standard: Disable ip resolution during the runtime
+ - MINOR: pattern: Remove the flag "PAT_F_FROM_FILE".
+ - MINOR: pattern: forbid dns resolutions
+ - DOC: document "get map" / "get acl" on the CLI
+ - MEDIUM: acl: Change the acl register struct
+ - BUG/MEDIUM: acl: boolean only matches were broken by recent changes
+ - DOC: pattern: pattern organisation schematics
+ - MINOR: pattern/cli: Update used terms in documentation and cli
+ - MINOR: cli: remove information about acl or map owner.
+ - MINOR: session: don't always assume there's a listener
+ - MINOR: pattern: Add function to prune and reload pattern list.
+ - MINOR: standard: Add ipv6 support in the function url2sa().
+ - MEDIUM: config: Dynamic sections.
+ - BUG/MEDIUM: stick-table: fix IPv4-to-IPv6 conversion in src_* fetches
+ - MINOR: http: Add the "language" converter to for use with accept-language
+ - BUG/MINOR: log: Don't dump empty unique-id
+ - BUG/MAJOR: session: fix a possible crash with src_tracked
+ - DOC: Update "language" documentation
+ - MINOR: http: add the function "del-header" to the directives http-request and http-response
+ - DOC: add some information on capture.(req|res).hdr
+ - MINOR: http: capture.req.method and capture.req.uri
+ - MINOR: http: optimize capture.req.method and capture.req.uri
+ - MINOR: session: clean up the connection free code
+ - BUG/MEDIUM: checks: immediately report a connection success
+ - MEDIUM: connection: don't use real send() flags in snd_buf()
+ - OPTIM: ssl: implement dynamic record size adjustment
+ - MINOR: stats: report exact last session time in backend too
+ - BUG/MEDIUM: stats: the "lastsess" field must appear last in the CSV.
+ - BUG/MAJOR: check: fix memory leak in "tcp-check connect" over SSL
+ - BUG/MINOR: channel: initialize xfer_small/xfer_large on new buffers
+ - MINOR: channel: add the date of last read in the channel
+ - MEDIUM: stream-int: automatically disable CF_STREAMER flags after idle
+ - MINOR: ssl: add DEFAULT_SSL_MAX_RECORD to set the record size at build time
+ - MINOR: config: make the stream interface idle timer user-configurable
+ - MINOR: config: add global directives to set default SSL ciphers
+ - MINOR: sample: add a rand() sample fetch to return a sample.
+ - BUG/MEDIUM: config: immediately abort if peers section has no name
+ - BUG/MINOR: ssl: fix syntax in config error message
+ - BUG/MEDIUM: ssl: always send a full buffer after EAGAIN
+ - BUG/MINOR: config: server on-marked-* statement is ignored in default-server
+ - BUG/MEDIUM: backend: prefer-last-server breaks redispatch
+ - BUG/MEDIUM: http: continue to emit 503 on keep-alive to different server
+ - MEDIUM: acl: fix pattern type for payload / payload_lv
+ - BUG/MINOR: config: fix a crash on startup when a disabled backend references a peer
+ - BUG/MEDIUM: compression: fix the output type of the compressor name
+ - BUG/MEDIUM: http: don't start to forward request data before the connect
+ - MINOR: http: release compression context only in http_end_txn()
+ - MINOR: protect ebimtree/ebistree against multiple inclusions
+ - MEDIUM: proxy: create a tree to store proxies by name
+ - MEDIUM: proxy: make findproxy() use trees to look up proxies
+ - MEDIUM: proxy: make get_backend_server() use findproxy() to lookup proxies
+ - MEDIUM: stick-table: lookup table names using trees.
+ - MEDIUM: config: faster lookup for duplicated proxy name
+ - CLEANUP: acl: remove obsolete test in parse_acl_expr()
+ - MINOR: sample: move smp_to_type to sample.c
+ - MEDIUM: compression: consider the "q=" attribute in Accept-Encoding
+ - REORG: cfgparse: move server keyword parsing to server.c
+ - BUILD: adjust makefile for AIX 5.1
+ - BUG/MEDIUM: pattern: fix wrong definition of the pat_prune_fcts array
+ - CLEANUP: pattern: move array definitions to proto/ and not types/
+ - BUG/MAJOR: counters: check for null-deref when looking up an alternate table
+ - BUILD: ssl: previous patch failed
+ - BUILD/MEDIUM: standard: get rid of the last strcpy()
+ - BUILD/MEDIUM: standard: get rid of sprintf()
+ - BUILD/MEDIUM: cfgparse: get rid of sprintf()
+ - BUILD/MEDIUM: checks: get rid of sprintf()
+ - BUILD/MEDIUM: http: remove calls to sprintf()
+ - BUG/MEDIUM: systemd-wrapper: fix locating of haproxy binary
+ - BUILD/MINOR: ssl: remove one call to sprintf()
+ - MEDIUM: http: don't reject anymore message bodies not containing the url param
+ - MEDIUM: http: wait for the first chunk or message body length in http_process_body
+ - CLEANUP: http: rename http_process_request_body()
+ - CLEANUP: http: prepare dedicated processing for chunked encoded message bodies
+ - MINOR: http: make msg->eol carry the last CRLF length
+ - MAJOR: http: do not use msg->sol while processing messages or forwarding data
+ - MEDIUM: http: http_parse_chunk_crlf() must not advance the buffer pointer
+ - MAJOR: http: don't update msg->sov anymore while processing the body
+ - MINOR: http: add a small helper to compute the amount of body bytes present
+ - MEDIUM: http: add a small helper to compute how far to rewind to find headers
+ - MINOR: http: add a small helper to compute how far to rewind to find URI
+ - MEDIUM: http: small helpers to compute how far to rewind to find BODY and DATA
+ - MAJOR: http: reset msg->sov after headers are forwarded
+ - MEDIUM: http: forward headers again while waiting for connection to complete
+ - BUG/MINOR: http: deinitialize compression after a parsing error
+ - BUG/MINOR: http: deinitialize compression after a compression error
+ - MEDIUM: http: headers must be forwarded even if data was already inspected
+ - MAJOR: http: re-enable compression on chunked encoding
+ - MAJOR: http/compression: fix chunked-encoded response processing
+ - MEDIUM: http: cleanup: centralize a little bit HTTP compression end
+ - MEDIUM: http: start to centralize the forwarding code
+ - MINOR: http: further cleanups of response forwarding function
+ - MEDIUM: http: only allocate the temporary compression buffer when needed
+ - MAJOR: http: centralize data forwarding in the request path
+ - CLEANUP: http: document the response forwarding states
+ - CLEANUP: http: remove all calls to http_silent_debug()
+ - DOC: internal: add some reminders about HTTP parsing and pointer states
+ - BUG/MAJOR: http: fix bug in parse_qvalue() when selecting compression algo
+ - BUG/MINOR: stats: last session was not always set
+ - DOC: add pointer to the Cyril's HTML doc in the README
+ - MEDIUM: config: relax use_backend check to make the condition optional
+ - MEDIUM: config: report misplaced http-request rules
+ - MEDIUM: config: report misplaced use-server rules
+ - DOC: update roadmap with what was done.
+
+2014/02/03 : 1.5-dev22
+ - MEDIUM: tcp-check new feature: connect
+ - MEDIUM: ssl: Set verify 'required' as global default for servers side.
+ - MINOR: ssl: handshake optim for long certificate chains.
+ - BUG/MINOR: pattern: pattern comparison executed twice
+ - BUG/MEDIUM: map: segmentation fault with the stats's socket command "set map ..."
+ - BUG/MEDIUM: pattern: Segfault in binary parser
+ - MINOR: pattern: move functions for grouping pat_match_* and pat_parse_* and add documentation.
+ - MINOR: standard: parse_binary() returns the length consumed and its documentation is updated
+ - BUG/MINOR: payload: the patterns of the acl "req.ssl_ver" are not parsed with the correct function.
+ - BUG/MEDIUM: pattern: "pat_parse_dotted_ver()" set bad expect_type.
+ - BUG/MINOR: sample: The c_str2int converter does not fail if the entry is not an integer
+ - BUG/MEDIUM: http/auth: Sometimes the authentication credentials can be mixed between two requests
+ - MINOR: doc: Bad cli function name.
+ - MINOR: http: smp_fetch_capture_header_* fetch captured headers
+ - BUILD: last release inadvertently prepended a "+" in front of the date
+ - BUG/MEDIUM: stream-int: fix the keep-alive idle connection handler
+ - BUG/MEDIUM: backend: do not re-initialize the connection's context upon reuse
+ - BUG: Revert "OPTIM/MEDIUM: epoll: fuse active events into polled ones during polling changes"
+ - BUG/MINOR: checks: successful check completion must not re-enable MAINT servers
+ - MINOR: http: try to stick to same server after status 401/407
+ - BUG/MINOR: http: always disable compression on HTTP/1.0
+ - OPTIM: poll: restore polling after a poll/stop/want sequence
+ - OPTIM: http: don't stop polling for read on the client side after a request
+ - BUG/MEDIUM: checks: unchecked servers could not be enabled anymore
+ - BUG/MEDIUM: stats: the web interface must check the tracked servers before enabling
+ - BUG/MINOR: channel: CHN_INFINITE_FORWARD must be unsigned
+ - BUG/MINOR: stream-int: do not clear the owner upon unregister
+ - MEDIUM: stats: add support for HTTP keep-alive on the stats page
+ - BUG/MEDIUM: stats: fix HTTP/1.0 breakage introduced in previous patch
+ - Revert "MEDIUM: stats: add support for HTTP keep-alive on the stats page"
+ - MAJOR: channel: add a new flag CF_WAKE_WRITE to notify the task of writes
+ - OPTIM: session: set the READ_DONTWAIT flag when connecting
+ - BUG/MINOR: http: don't clear the SI_FL_DONT_WAKE flag between requests
+ - MINOR: session: factor out the connect time measurement
+ - MEDIUM: session: prepare to support earlier transitions to the established state
+ - MEDIUM: stream-int: make si_connect() return an established state when possible
+ - MINOR: checks: use an inline function for health_adjust()
+ - OPTIM: session: put unlikely() around the freewheeling code
+ - MEDIUM: config: report a warning when multiple servers have the same name
+ - BUG: Revert "OPTIM: poll: restore polling after a poll/stop/want sequence"
+ - BUILD/MINOR: listener: remove a glibc warning on accept4()
+ - BUG/MAJOR: connection: fix mismatch between rcv_buf's API and usage
+ - BUILD: listener: fix recent accept4() again
+ - BUG/MAJOR: ssl: fix breakage caused by recent fix abf08d9
+ - BUG/MEDIUM: polling: ensure we update FD status when there's no more activity
+ - MEDIUM: listener: fix polling management in the accept loop
+ - MINOR: protocol: improve the proto->drain() API
+ - MINOR: connection: add a new conn_drain() function
+ - MEDIUM: tcp: report in tcp_drain() that lingering is already disabled on close
+ - MEDIUM: connection: update callers of ctrl->drain() to use conn_drain()
+ - MINOR: connection: add more error codes to report connection errors
+ - MEDIUM: tcp: report connection error at the connection level
+ - MEDIUM: checks: make use of chk_report_conn_err() for connection errors
+ - BUG/MEDIUM: unique_id: HTTP request counter is not stable
+ - DOC: fix misleading information about SIGQUIT
+ - BUG/MAJOR: fix freezes during compression
+ - BUG/MEDIUM: stream-interface: don't wake the task up before end of transfer
+ - BUILD: fix VERDATE exclusion regex
+ - CLEANUP: polling: rename "spec_e" to "state"
+ - DOC: add a diagram showing polling state transitions
+ - REORG: polling: rename "spec_e" to "state" and "spec_p" to "cache"
+ - REORG: polling: rename "fd_spec" to "fd_cache"
+ - REORG: polling: rename the cache allocation functions
+ - REORG: polling: rename "fd_process_spec_events()" to "fd_process_cached_events()"
+ - MAJOR: polling: rework the whole polling system
+ - MAJOR: connection: remove the CO_FL_WAIT_{RD,WR} flags
+ - MEDIUM: connection: remove conn_{data,sock}_poll_{recv,send}
+ - MEDIUM: connection: add check for readiness in I/O handlers
+ - MEDIUM: stream-interface: the polling flags must always be updated in chk_snd_conn
+ - MINOR: stream-interface: no need to call fd_stop_both() on error
+ - MEDIUM: connection: no need to recheck FD state
+ - CLEANUP: connection: use conn_ctrl_ready() instead of checking the flag
+ - CLEANUP: connection: use conn_xprt_ready() instead of checking the flag
+ - CLEANUP: connection: fix comments in connection.h to reflect new behaviour.
+ - OPTIM: raw-sock: don't speculate after a short read if polling is enabled
+ - MEDIUM: polling: centralize polled events processing
+ - MINOR: polling: create function fd_compute_new_polled_status()
+ - MINOR: cli: add more information to the "show info" output
+ - MEDIUM: listener: add support for limiting the session rate in addition to the connection rate
+ - MEDIUM: listener: apply a limit on the session rate submitted to SSL
+ - REORG: stats: move the stats socket states to dumpstats.c
+ - MINOR: cli: add the new "show pools" command
+ - BUG/MEDIUM: counters: flush content counters after each request
+ - BUG/MEDIUM: counters: fix stick-table entry leak when using track-sc2 in connection
+ - MINOR: tools: add very basic support for composite pointers
+ - MEDIUM: counters: stop relying on session flags at all
+ - BUG/MINOR: cli: fix missing break in command line parser
+ - BUG/MINOR: config: correctly report when log-format headers require HTTP mode
+ - MAJOR: http: update connection mode configuration
+ - MEDIUM: http: make keep-alive + httpclose be passive mode
+ - MAJOR: http: switch to keep-alive mode by default
+ - BUG/MEDIUM: http: fix regression caused by recent switch to keep-alive by default
+ - BUG/MEDIUM: listener: improve detection of non-working accept4()
+ - BUILD: listener: add fcntl.h and unistd.h
+ - BUG/MINOR: raw_sock: correctly set the MSG_MORE flag
+
+2013/12/17 : 1.5-dev21
+ - MINOR: stats: don't use a monospace font to report numbers
+ - MINOR: session: remove debugging code
+ - BUG/MAJOR: patterns: fix double free caused by loading strings from files
+ - MEDIUM: http: make option http_proxy automatically rewrite the URL
+ - BUG/MEDIUM: http: cook_cnt() forgets to set its output type
+ - BUG/MINOR: stats: correctly report throttle rate of low weight servers
+ - BUG/MEDIUM: checks: servers must not start in slowstart mode
+ - BUG/MINOR: acl: parser must also stop at comma on ACL-only keywords
+ - MEDIUM: stream-int: implement a very simplistic idle connection manager
+ - DOC: update the ROADMAP file
+
+2013/12/16 : 1.5-dev20
+ - DOC: add missing options to the manpage
+ - DOC: add manpage references to all system calls
+ - DOC: update manpage reference to haproxy-en.txt
+ - DOC: remove -s and -l options from the manpage
+ - DOC: missing information for the "description" keyword
+ - DOC: missing http-send-name-header keyword in keyword table
+ - MINOR: tools: function my_memmem() to lookup binary contents
+ - MEDIUM: checks: add send/expect tcp based check
 - MEDIUM: backend: Enhance hash-type directive with algorithm options
+ - MEDIUM: backend: Implement avalanche as a modifier of the hashing functions.
+ - DOC: Documentation for hashing function, with test results.
+ - BUG/MEDIUM: ssl: potential memory leak using verifyhost
+ - BUILD: ssl: compilation issue with openssl v0.9.6.
+ - BUG/MINOR: ssl: potential memory leaks using ssl_c_key_alg or ssl_c_sig_alg.
+ - MINOR: ssl: optimization of verifyhost on wildcard certificates.
+ - BUG/MINOR: ssl: verifyhost does not match empty strings on wildcard.
+ - MINOR: ssl: Add statement 'verifyhost' to "server" statements
+ - CLEANUP: session: remove event_accept() which was not used anymore
+ - BUG/MINOR: deinit: free fdinfo while doing cleanup
+ - DOC: minor typo fix in documentation
+ - BUG/MEDIUM: server: set the macro for server's max weight SRV_UWGHT_MAX to SRV_UWGHT_RANGE
+ - BUG/MINOR: use the same check condition for server as other algorithms
+ - DOC: fix typo in comments
+ - BUG/MINOR: deinit: free server map which is allocated in init_server_map()
+ - CLEANUP: stream_interface: cleanup loop information in si_conn_send_loop()
+ - MINOR: buffer: align the last output line of buffer_dump()
+ - MINOR: buffer: align the last output line if there are less than 8 characters left
+ - DOC: stick-table: modify the description
+ - OPTIM: stream_interface: return directly if the connection flag CO_FL_ERROR has been set
+ - CLEANUP: code style: use tabs to indent codes
+ - DOC: checkcache: block responses with cacheable cookies
+ - BUG/MINOR: check_config_validity: check the returned value of stktable_init()
+ - MEDIUM: haproxy-systemd-wrapper: Use haproxy in same directory
+ - MEDIUM: systemd-wrapper: Kill child processes when interrupted
+ - LOW: systemd-wrapper: Write debug information to stdout
+ - BUG/MINOR: http: fix "set-tos" not working in certain configurations
+ - MEDIUM: http: add IPv6 support for "set-tos"
+ - DOC: ssl: update build instructions to use new SSL_* variables
+ - BUILD/MINOR: systemd: fix compiler warning about unused result
 - url32+src - like base32+src but for the whole URL including parameters
+ - BUG/MINOR: fix forcing fastinter in "on-error"
+ - CLEANUP: Make parameters of srv_downtime and srv_getinter const
+ - CLEANUP: Remove unused 'last_slowstart_change' field from struct peer
+ - MEDIUM: Split up struct server's check element
+ - MEDIUM: Move result element to struct check
 - MEDIUM: Parameterise functions over the check of a server
+ - MEDIUM: cfgparse: Factor out check initialisation
+ - MEDIUM: Add state to struct check
+ - MEDIUM: Move health element to struct check
+ - MEDIUM: Add helper for task creation for checks
+ - MEDIUM: Add helper function for failed checks
+ - MEDIUM: Log agent fail, stopped or down as info
+ - MEDIUM: Remove option lb-agent-chk
+ - MEDIUM: checks: Add supplementary agent checks
+ - MEDIUM: Do not mark a server as down if the agent is unavailable
+ - MEDIUM: Set rise and fall of agent checks to 1
+ - MEDIUM: Add enable and disable agent unix socket commands
+ - MEDIUM: Add DRAIN state and report it on the stats page
+ - BUILD/MINOR: missing header file
+ - CLEANUP: regex: Create regex_comp function that compiles regex using compilation options
 - CLEANUP: The function "regex_exec" needs the string length, but in many cases callers pass null-terminated strings.
+ - MINOR: http: some exported functions were not in the header file
+ - MINOR: http: change url_decode to return the size of the decoded string.
+ - BUILD/MINOR: missing header file
 - BUG/MEDIUM: sample: The function v4tov6 cannot handle overlapping input and output
+ - BUG/MINOR: arg: fix error reporting for add-header/set-header sample fetch arguments
+ - MINOR: sample: export the generic sample conversion parser
+ - MINOR: sample: export sample_casts
+ - MEDIUM: acl: use the fetch syntax 'fetch(args),conv(),conv()' into the ACL keyword
+ - MINOR: stick-table: use smp_expr_output_type() to retrieve the output type of a "struct sample_expr"
+ - MINOR: sample: provide the original sample_conv descriptor struct to the argument checker function.
+ - MINOR: tools: Add a function to convert buffer to an ipv6 address
+ - MINOR: acl: export acl arrays
+ - MINOR: acl: Extract the pattern parsing and indexation from the "acl_read_patterns_from_file()" function
+ - MINOR: acl: Extract the pattern matching function
+ - MINOR: sample: Define new struct sample_storage
+ - MEDIUM: acl: associate "struct sample_storage" to each "struct acl_pattern"
+ - REORG: acl/pattern: extract pattern matching from the acl file and create pattern.c
+ - MEDIUM: pattern: create pattern expression
+ - MEDIUM: pattern: rename "acl" prefix to "pat"
+ - MEDIUM: sample: let the cast functions set their output type
+ - MINOR: sample: add a private field to the struct sample_conv
+ - MINOR: map: Define map types
+ - MEDIUM: sample: add the "map" converter
 - MEDIUM: http: The redirect strings follow the log format rules.
+ - BUG/MINOR: acl: acl parser does not recognize empty converter list
+ - BUG/MINOR: map: The map list was declared in the map.h file
+ - MINOR: map: Cleanup the initialisation of map descriptors.
+ - MEDIUM: map: merge identical maps
+ - BUG/MEDIUM: pattern: Pattern node has type of "struct pat_idx_elt" in place of "struct eb_node"
+ - BUG/MEDIUM: map: Bad map file parser
+ - CLEANUP/MINOR: standard: use the system define INET6_ADDRSTRLEN in place of MAX_IP6_LEN
+ - BUG/MEDIUM: sample: conversion from str to ipv6 may read data past end
+ - MINOR: map: export map_get_reference() function
+ - MINOR: pattern: Each pattern sets the expected input type
 - MEDIUM: acl: Last patch changes the output type
+ - MEDIUM: pattern: Extract the index process from the pat_parse_*() functions
+ - MINOR: standard: The function parse_binary() can use preallocated buffer
+ - MINOR: regex: Change the struct containing regex
 - MINOR: regex: Copy the original regex expression into a string.
+ - MINOR: pattern: add support for compiling patterns for lookups
+ - MINOR: pattern: make the pattern matching function return a pointer to the matched element
+ - MINOR: map: export parse output sample functions
+ - MINOR: pattern: add function to lookup a specific entry in pattern list
+ - MINOR: pattern/map: Each pattern must free the associated sample
+ - MEDIUM: dumpstat: make the CLI parser understand the backslash as an escape char
+ - MEDIUM: map: dynamic manipulation of maps
+ - BUG/MEDIUM: unique_id: junk in log on empty unique_id
+ - BUG/MINOR: log: junk at the end of syslog packet
+ - MINOR: Makefile: provide cscope rule
 - DOC: compression: chunks are not compressed anymore
+ - MEDIUM: session: disable lingering on the server when the client aborts
+ - BUG/MEDIUM: prevent gcc from moving empty keywords lists into BSS
+ - DOC: remove the comment saying that SSL certs are not checked on the server side
+ - BUG: counters: third counter was not stored if others unset
+ - BUG/MAJOR: http: don't emit the send-name-header when no server is available
+ - BUG/MEDIUM: http: "option checkcache" fails with the no-cache header
+ - BUG/MAJOR: http: sample prefetch code was not properly migrated
+ - BUG/MEDIUM: splicing: fix abnormal CPU usage with splicing
+ - BUG/MINOR: stream_interface: don't call chk_snd() on polled events
+ - OPTIM: splicing: use splice() for the last block when relevant
+ - MEDIUM: sample: handle comma-delimited converter list
+ - MINOR: sample: fix sample_process handling of unstable data
+ - CLEANUP: acl: move the 3 remaining sample fetches to samples.c
+ - MINOR: sample: add a new "date" fetch to return the current date
+ - MINOR: samples: add the http_date([<offset>]) sample converter.
+ - DOC: minor improvements to the part on the stats socket.
+ - MEDIUM: sample: systematically pass the keyword pointer to the keyword
+ - MINOR: payload: split smp_fetch_rdp_cookie()
+ - MINOR: counters: factor out smp_fetch_sc*_tracked
+ - MINOR: counters: provide a generic function to retrieve a stkctr for sc* and src.
+ - MEDIUM: counters: factor out smp_fetch_sc*_get_gpc0
+ - MEDIUM: counters: factor out smp_fetch_sc*_gpc0_rate
+ - MEDIUM: counters: factor out smp_fetch_sc*_inc_gpc0
+ - MEDIUM: counters: factor out smp_fetch_sc*_clr_gpc0
+ - MEDIUM: counters: factor out smp_fetch_sc*_conn_cnt
+ - MEDIUM: counters: factor out smp_fetch_sc*_conn_rate
+ - MEDIUM: counters: factor out smp_fetch_sc*_conn_cur
+ - MEDIUM: counters: factor out smp_fetch_sc*_sess_cnt
+ - MEDIUM: counters: factor out smp_fetch_sc*_sess_rate
+ - MEDIUM: counters: factor out smp_fetch_sc*_http_req_cnt
+ - MEDIUM: counters: factor out smp_fetch_sc*_http_req_rate
+ - MEDIUM: counters: factor out smp_fetch_sc*_http_err_cnt
+ - MEDIUM: counters: factor out smp_fetch_sc*_http_err_rate
+ - MEDIUM: counters: factor out smp_fetch_sc*_kbytes_in
+ - MEDIUM: counters: factor out smp_fetch_sc*_bytes_in_rate
+ - MEDIUM: counters: factor out smp_fetch_sc*_kbytes_out
+ - MEDIUM: counters: factor out smp_fetch_sc*_bytes_out_rate
+ - MEDIUM: counters: factor out smp_fetch_sc*_trackers
+ - MINOR: session: make the number of stick counter entries more configurable
+ - MEDIUM: counters: support passing the counter number as a fetch argument
+ - MEDIUM: counters: support looking up a key in an alternate table
+ - MEDIUM: cli: adjust the method for feeding frequency counters in tables
+ - MINOR: cli: make it possible to enter multiple values at once with "set table"
+ - MINOR: payload: allow the payload sample fetches to retrieve arbitrary lengths
+ - BUG/MINOR: cli: "clear table" must not kill entries that don't match condition
+ - MINOR: ssl: use MAXPATHLEN instead of PATH_MAX
+ - MINOR: config: warn when a server with no specific port uses rdp-cookie
+ - BUG/MEDIUM: unique_id: HTTP request counter must be unique!
+ - DOC: add a mention about the limited chunk size
+ - BUG/MEDIUM: fix broken send_proxy on FreeBSD
+ - MEDIUM: stick-tables: flush old entries upon soft-stop
+ - MINOR: tcp: add new "close" action for tcp-response
+ - MINOR: payload: provide the "res.len" fetch method
+ - BUILD: add SSL_INC/SSL_LIB variables to force the path to openssl
+ - MINOR: http: compute response time before processing headers
+ - BUG/MINOR: acl: fix improper string size assignment in proxy argument
+ - BUG/MEDIUM: http: accept full buffers on smp_prefetch_http
+ - BUG/MINOR: acl: implicit arguments of ACL keywords were not properly resolved
+ - BUG/MEDIUM: session: risk of crash on out of memory conditions
+ - BUG/MINOR: peers: set the accept date in outgoing connections
+ - BUG/MEDIUM: tcp: do not skip tracking rules on second pass
+ - BUG/MEDIUM: acl: do not evaluate next terms after a miss
+ - MINOR: acl: add a warning when an ACL keyword is used without any value
+ - MINOR: tcp: don't use tick_add_ifset() when timeout is known to be set
+ - BUG/MINOR: acl: remove patterns from the tree before freeing them
+ - MEDIUM: backend: add support for the wt6 hash
+ - OPTIM/MEDIUM: epoll: fuse active events into polled ones during polling changes
+ - OPTIM/MINOR: mark the source address as already known on accept()
+ - BUG/MINOR: stats: don't count tarpitted connections twice
+ - CLEANUP: http: homogenize processing of denied req counter
+ - CLEANUP: http: merge error handling for req* and http-request *
+ - BUG/MEDIUM: http: fix possible parser crash when parsing erroneous "http-request redirect" rules
+ - BUG/MINOR: http: fix build warning introduced with url32/url32_src
+ - BUG/MEDIUM: checks: fix slow start regression after fix attempt
+ - BUG/MAJOR: server: weight calculation fails for map-based algorithms
+ - MINOR: stats: report correct throttling percentage for servers in slowstart
+ - OPTIM: connection: fold the error handling with handshake handling
+ - MINOR: peers: accept to learn strings of different lengths
+ - BUG/MAJOR: fix haproxy crash when using server tracking instead of checks
+ - BUG/MAJOR: check: fix haproxy crash during soft-stop/soft-start
+ - BUG/MINOR: stats: do not report "via" on tracking servers in maintenance
+ - BUG/MINOR: connection: fix typo in error message report
+ - BUG/MINOR: backend: fix target address retrieval in transparent mode
+ - BUG/MINOR: config: report the correct track-sc number in tcp-rules
+ - BUG/MINOR: log: fix log-format parsing errors
+ - DOC: add some information about how to apply converters to samples
+ - MINOR: acl/pattern: use types different from int to clarify who does what.
+ - MINOR: pattern: import acl_find_match_name() into pattern.h
+ - MEDIUM: stick-tables: support automatic conversion from ipv4<->ipv6
+ - MEDIUM: log-format: relax parsing of '%' followed by unsupported characters
+ - BUG/MINOR: http: usual deinit stuff in last commit
 - BUILD: log: silence a warning about isblank() with latest patches
+ - BUG/MEDIUM: checks: fix health check regression causing them to depend on declaration order
+ - BUG/MEDIUM: checks: fix a long-standing issue with reporting connection errors
+ - BUG/MINOR: checks: don't consider errno and use conn->err_code
+ - BUG/MEDIUM: checks: also update the DRAIN state from the web interface
+ - MINOR: stats: remove some confusion between the DRAIN state and NOLB
+ - BUG/MINOR: tcp: check that no error is pending during a connect probe
+ - BUG/MINOR: connection: check EINTR when sending a PROXY header
+ - MEDIUM: connection: set the socket shutdown flags on socket errors
+ - BUG/MEDIUM: acl: fix regression introduced by latest converters support
+ - MINOR: connection: clear errno prior to checking for errors
+ - BUG/MINOR: checks: do not trust errno in write event before any syscall
+ - MEDIUM: checks: centralize error reporting
+ - OPTIM: checks: don't poll on recv when using plain TCP connects
+ - OPTIM: checks: avoid setting SO_LINGER twice
+ - MINOR: tools: add a generic binary hex string parser
+ - BUG/MEDIUM: checks: tcp-check: do not poll when there's nothing to send
+ - BUG/MEDIUM: check: tcp-check might miss some outgoing data when socket buffers are full
+ - BUG/MEDIUM: args: fix double free on error path in argument expression parser
+ - BUG/MINOR: acl: fix sample expression error reporting
+ - BUG/MINOR: checks: tcp-check actions are enums, not flags
+ - MEDIUM: checks: make tcp-check perform multiple send() at once
+ - BUG/MEDIUM: stick: completely remove the unused flag from the store entries
+ - OPTIM: ebtree: pack the struct eb_node to avoid holes on 64-bit
+ - BUG/MEDIUM: stick-tables: complete the latest fix about store-responses
+ - CLEANUP: stream_interface: remove unused field err_loc
+ - MEDIUM: stats: don't use conn->xprt_st anymore
+ - MINOR: session: add a simple function to retrieve a session from a task
+ - MEDIUM: stats: don't use conn->xprt_ctx anymore
+ - MEDIUM: peers: don't rely on conn->xprt_ctx anymore
+ - MINOR: http: prevent smp_fetch_url_{ip,port} from using si->conn
+ - MINOR: connection: make it easier to emit proxy protocol for unknown addresses
+ - MEDIUM: stats: prepare the HTTP stats I/O handler to support more states
+ - MAJOR: stats: move the HTTP stats handling to its applet
+ - MEDIUM: stats: move request argument processing to the final step
+ - MEDIUM: session: detect applets from the session by using s->target
+ - MAJOR: session: check for a connection to an applet in sess_prepare_conn_req()
+ - MAJOR: session: pass applet return traffic through the response analysers
+ - MEDIUM: stream-int: split the shutr/shutw functions between applet and conn
+ - MINOR: stream-int: make the shutr/shutw functions void
+ - MINOR: obj: provide a safe and an unsafe access to pointed objects
+ - MINOR: connection: add a field to store an object type
+ - MINOR: connection: always initialize conn->objt_type to OBJ_TYPE_CONN
+ - MEDIUM: stream interface: move the peers' ptr into the applet context
+ - MINOR: stream-interface: move the applet context to its own struct
+ - MINOR: obj: introduce a new type appctx
+ - MINOR: stream-int: rename ->applet to ->appctx
+ - MINOR: stream-int: split si_prepare_embedded into si_prepare_none and si_prepare_applet
+ - MINOR: stream-int: add a new pointer to the end point
+ - MEDIUM: stream-interface: set the pointer to the applet into the applet context
+ - MAJOR: stream interface: remove the ->release function pointer
+ - MEDIUM: stream-int: make ->end point to the connection or the appctx
+ - CLEANUP: stream-int: remove obsolete si_ctrl function
+ - MAJOR: stream-int: stop using si->conn and use si->end instead
+ - MEDIUM: stream-int: do not allocate a connection in parallel to applets
+ - MEDIUM: session: attach incoming connection to target on embryonic sessions
+ - MINOR: connection: add conn_init() to (re)initialize a connection
+ - MINOR: checks: call conn_init() to properly initialize the connection.
+ - MINOR: peers: make use of conn_init() to initialize the connection
+ - MINOR: session: use conn_init() to initialize the connections
+ - MINOR: http: use conn_init() to reinitialize the server connection
+ - MEDIUM: connection: replace conn_prepare with conn_assign
+ - MINOR: get rid of si_takeover_conn()
+ - MINOR: connection: add conn_new() / conn_free()
+ - MAJOR: connection: add two new flags to indicate readiness of control/transport
+ - MINOR: stream-interface: introduce si_reset() and si_set_state()
+ - MINOR: connection: reintroduce conn_prepare to set the protocol and transport
+ - MINOR: connection: replace conn_assign with conn_attach
+ - MEDIUM: stream-interface: introduce si_attach_conn to replace si_prepare_conn
+ - MAJOR: stream interface: dynamically allocate the outgoing connection
+ - MEDIUM: connection: move the send_proxy offset to the connection
+ - MINOR: connection: check for send_proxy during the connect(), not the SI
+ - MEDIUM: connection: merge the send_proxy and local_send_proxy calls
+ - MEDIUM: stream-int: replace occurrences of si->appctx with si_appctx()
+ - MEDIUM: stream-int: return the allocated appctx in stream_int_register_handler()
+ - MAJOR: stream-interface: dynamically allocate the applet context
+ - MEDIUM: session: automatically register the applet designated by the target
+ - MEDIUM: stats: delay appctx initialization
+ - CLEANUP: peers: use less confusing state/status code names
+ - MEDIUM: peers: delay appctx initialization
+ - MINOR: stats: provide some appctx information in "show sess all"
+ - DIET/MINOR: obj: pack the obj_type enum to 8 bits
+ - DIET/MINOR: connection: rearrange a few fields to save 8 bytes in the struct
+ - DIET/MINOR: listener: rearrange a few fields in struct listener to save 16 bytes
+ - DIET/MINOR: proxy: rearrange a few fields in struct proxy to save 16 bytes
+ - DIET/MINOR: session: reduce the struct session size by 8 bytes
+ - DIET/MINOR: stream-int: rearrange a few fields in struct stream_interface to save 8 bytes
+ - DIET/MINOR: http: reduce the size of struct http_txn by 8 bytes
+ - MINOR: http: switch the http state to an enum
+ - MINOR: http: use an enum for the auth method in http_auth_data
+ - DIET/MINOR: task: reduce struct task size by 8 bytes
 - MINOR: stream_interface: add reporting of resource allocation errors
+ - MINOR: session: report lack of resources using the new stream-interface's error code
+ - BUILD: simplify the date and version retrieval in the makefile
+ - BUILD: prepare the makefile to skip format lines in SUBVERS and VERDATE
+ - BUILD: use format tags in VERDATE and SUBVERS files
+ - BUG/MEDIUM: channel: bo_getline() must wait for \n until buffer is full
+ - CLEANUP: check: server port is unsigned
 - BUG/MEDIUM: checks: agent doesn't get the response if the server does not close
+ - MINOR: tools: buf2ip6 must not modify output on failure
+ - MINOR: pattern: do not assign SMP_TYPES by default to patterns
+ - MINOR: sample: make sample_parse_expr() use memprintf() to report parse errors
+ - MINOR: arg: improve wording on error reporting
+ - BUG/MEDIUM: sample: simplify and fix the argument parsing
+ - MEDIUM: acl: fix the argument parser to let the lower layer report detailed errors
+ - MEDIUM: acl: fix the initialization order of the ACL expression
+ - CLEANUP: acl: remove useless blind copy-paste from sample converters
+ - TESTS: add regression tests for ACL and sample expression parsers
+ - BUILD: time: adapt the type of TV_ETERNITY to the local system
+ - MINOR: chunks: allocate the trash chunks before parsing the config
+ - BUILD: definitely silence some stupid GCC warnings
+ - MINOR: chunks: always initialize the output chunk in get_trash_chunk()
+ - MINOR: checks: improve handling of the servers tracking chain
+ - REORG: checks: retrieve the check-specific defines from server.h to checks.h
+ - MINOR: checks: use an enum instead of flags to report a check result
+ - MINOR: checks: rename the state flags
+ - MINOR: checks: replace state DISABLED with CONFIGURED and ENABLED
+ - MINOR: checks: use check->state instead of srv->state & SRV_CHECKED
+ - MINOR: checks: fix agent check interval computation
+ - MINOR: checks: add a PAUSED state for the checks
+ - MINOR: checks: create the agent tasks even when no check is configured
+ - MINOR: checks: add a flag to indicate what check is an agent
+ - MEDIUM: checks: enable agent checks even if health checks are disabled
+ - BUG/MEDIUM: checks: ensure we can enable a server after boot
+ - BUG/MEDIUM: checks: tracking servers must not inherit the MAINT flag
+ - BUG/MAJOR: session: repair tcp-request connection rules
+ - BUILD: fix SUBVERS extraction in the Makefile
+ - BUILD: pattern: silence a warning about uninitialized value
+ - BUILD: log: fix build warning on Solaris
+ - BUILD: dumpstats: fix build error on Solaris
+ - DOC: move option pgsql-check to the correct place
+ - DOC: move option tcp-check to the proper place
+ - MINOR: connection: add simple functions to report connection readiness
+ - MEDIUM: connection: centralize handling of nolinger in fd management
+ - OPTIM: http: set CF_READ_DONTWAIT on response message
+ - OPTIM: http: do not re-enable reading on client side while closing the server side
+ - MINOR: config: add option http-keep-alive
+ - MEDIUM: connection: inform si_alloc_conn() whether existing conn is OK or not
+ - MAJOR: stream-int: handle the connection reuse in si_connect()
+ - MAJOR: http: add the keep-alive transition on the server side
+ - MAJOR: backend: enable connection reuse
+ - MINOR: http: add option prefer-last-server
+ - MEDIUM: http: do not report connection errors for second and further requests
+
+2013/06/17 : 1.5-dev19
+ - MINOR: stats: remove the autofocus on the scope input field
+ - BUG/MEDIUM: Fix crt-list file parsing error: filtered name was ignored.
+ - BUG/MEDIUM: ssl: EDH ciphers are not usable if no DH parameters present in pem file.
 - BUG/MEDIUM: shctx: makes the code independent of the SSL runtime version.
+ - MEDIUM: ssl: improve crt-list format to support negation
+ - BUG: ssl: fix crt-list for clients not supporting SNI
+ - MINOR: stats: show soft-stopped servers in different color
+ - BUG/MINOR: config: "source" does not work in defaults section
+ - BUG: regex: fix pcre compile error when using JIT
+ - MINOR: ssl: add pattern fetch 'ssl_c_sha1'
+ - BUG: ssl: send payload gets corrupted if tune.ssl.maxrecord is used
+ - MINOR: show PCRE version and JIT status in -vv
+ - BUG/MINOR: jit: don't rely on USE flag to detect support
+ - DOC: readme: add suggestion to link against static openssl
+ - DOC: examples: provide simplified ssl configuration
+ - REORG: tproxy: prepare the transparent proxy defines for accepting other OSes
+ - MINOR: tproxy: add support for FreeBSD
+ - MINOR: tproxy: add support for OpenBSD
+ - DOC: examples: provide an example of transparent proxy configuration for FreeBSD 8
+ - CLEANUP: fix minor typo in error message.
+ - CLEANUP: fix missing include <string.h> in proto/listener.h
+ - CLEANUP: protect checks.h from multiple inclusions
+ - MINOR: compression: acl "res.comp" and fetch "res.comp_algo"
+ - BUG/MINOR: http: add-header/set-header did not accept the ACL condition
+ - BUILD: mention in the Makefile that USE_PCRE_JIT is for libpcre >= 8.32
+ - BUG/MEDIUM: splicing is broken since 1.5-dev12
+ - BUG/MAJOR: acl: add implicit arguments to the resolve list
+ - BUG/MINOR: tcp: fix error reporting for TCP rules
+ - CLEANUP: peers: remove a bit of spaghetti to prepare for the next bugfix
+ - MINOR: stick-table: allow to allocate an entry without filling it
+ - BUG/MAJOR: peers: fix an overflow when syncing strings larger than 16 bytes
+ - MINOR: session: only call http_send_name_header() when changing the server
+ - MINOR: tcp: report the erroneous word in tcp-request track*
+ - BUG/MAJOR: backend: consistent hash can loop forever in certain circumstances
+ - BUG/MEDIUM: log: fix regression on log-format handling
+ - MEDIUM: log: report file name, line number, and directive name with log-format errors
+ - BUG/MINOR: cli: "clear table" did not work anymore without a key
+ - BUG/MINOR: cli: "clear table xx data.xx" does not work anymore
+ - BUG/MAJOR: http: compression still has defects on chunked responses
+ - BUG/MINOR: stats: fix confirmation links on the stats interface
+ - BUG/MINOR: stats: the status bar does not appear anymore after a change
+ - BUG/MEDIUM: stats: allocate the stats frontend also on "stats bind-process"
+ - BUG/MEDIUM: stats: fix a regression when dealing with POST requests
+ - BUG/MINOR: fix unterminated ACL array in compression
+ - BUILD: last fix broke non-linux platforms
+ - MINOR: init: indicate the SSL runtime version on -vv.
+ - BUG/MEDIUM: compression: the deflate algorithm must use global settings as well
+ - BUILD: stdbool is not portable (again)
+ - DOC: readme: add a small reminder about restrictions to respect in the code
+ - MINOR: ebtree: add new eb_next_dup/eb_prev_dup() functions to visit duplicates
+ - BUG/MINOR: acl: fix a double free during exit when using PCRE_JIT
+ - DOC: fix wrong copy-paste in the rspdel example
+ - MINOR: counters: make it easier to extend the amount of tracked counters
+ - MEDIUM: counters: add support for tracking a third counter
+ - MEDIUM: counters: add a new "gpc0_rate" counter in stick-tables
+ - BUG/MAJOR: http: always ensure response buffer has some room for a response
+ - MINOR: counters: add fetch/acl sc*_tracked to indicate whether a counter is tracked
+ - MINOR: defaults: allow REQURI_LEN and CAPTURE_LEN to be redefined
+ - MINOR: log: add a new flag 'L' for locally processed requests
+ - MINOR: http: add full-length header fetch methods
+ - MEDIUM: protocol: implement a "drain" function in protocol layers
+ - MEDIUM: http: add a new "http-response" ruleset
+ - MEDIUM: http: add the "set-nice" action to http-request and http-response
+ - MEDIUM: log: add a log level override value in struct session
+ - MEDIUM: http: add support for action "set-log-level" in http-request/http-response
+ - MEDIUM: http: add support for "set-tos" in http-request/http-response
+ - MEDIUM: http: add the "set-mark" action on http-request/http-response rules
+ - MEDIUM: tcp: add "tcp-request connection expect-proxy layer4"
+ - MEDIUM: acl: automatically detect the type of certain fetches
+ - MEDIUM: acl: remove a lot of useless ACLs that are equivalent to their fetches
+ - MEDIUM: acl: remove 15 additional useless ACLs that are equivalent to their fetches
+ - DOC: major reorg of ACL + sample fetch
+ - CLEANUP: http: remove the bogus urlp_ip ACL match
+ - MINOR: acl: add the new "env()" fetch method to retrieve an environment variable
+ - BUG/MINOR: acl: correctly consider boolean fetches when doing casts
+ - BUG/CRITICAL: fix a possible crash when using negative header occurrences
+ - DOC: update ROADMAP file
+ - MEDIUM: counters: use sc0/sc1/sc2 instead of sc1/sc2/sc3
+ - MEDIUM: stats: add proxy name filtering on the statistic page
+
+2013/04/03 : 1.5-dev18
+ - DOCS: Add explanation of intermediate certs to crt parameter
+ - DOC: typo and minor fixes in compression paragraph
+ - MINOR: config: http-request configuration error message misses new keywords
+ - DOC: minor typo fix in documentation
+ - BUG/MEDIUM: ssl: ECDHE ciphers not usable without named curve configured.
+ - MEDIUM: ssl: add bind-option "strict-sni"
+ - MEDIUM: ssl: add mapping from SNI to cert file using "crt-list"
+ - MEDIUM: regex: Use PCRE JIT in acl
+ - DOC: simplify bind option "interface" explanation
+ - DOC: tfo: bump required kernel to linux-3.7
+ - BUILD: add explicit support for TFO with USE_TFO
+ - MEDIUM: New cli option -Ds for systemd compatibility
+ - MEDIUM: add haproxy-systemd-wrapper
+ - MEDIUM: add systemd service
+ - BUG/MEDIUM: systemd-wrapper: don't leak zombie processes
+ - BUG/MEDIUM: remove supplementary groups when changing gid
+ - BUG/MEDIUM: config: fix parser crash with bad bind or server address
+ - BUG/MINOR: Correct logic in cut_crlf()
+ - CLEANUP: checks: Make desc argument to set_server_check_status const
+ - CLEANUP: dumpstats: Make cli_release_handler() static
+ - MEDIUM: server: Break out set weight processing code
+ - MEDIUM: server: Allow relative weights greater than 100%
+ - MEDIUM: server: Tighten up parsing of weight string
+ - MEDIUM: checks: Add agent health check
+ - BUG/MEDIUM: ssl: openssl 0.9.8 doesn't open /dev/random before chroot
+ - BUG/MINOR: time: frequency counters are not totally accurate
+ - BUG/MINOR: http: don't process abortonclose when request was sent
+ - BUG/MEDIUM: stream_interface: don't close outgoing connections on shutw()
+ - BUG/MEDIUM: checks: ignore late resets after valid responses
+ - DOC: fix bogus recommendation on usage of gpc0 counter
+ - BUG/MINOR: http-compression: lookup Cache-Control in the response, not the request
+ - MINOR: signal: don't block SIGPROF by default
+ - OPTIM: epoll: make use of EPOLLRDHUP
+ - OPTIM: splice: detect shutdowns and avoid splice() == 0
+ - OPTIM: splice: assume by default that splice is working correctly
+ - BUG/MINOR: log: temporary fix for lost SSL info in some situations
+ - BUG/MEDIUM: peers: only the last peers section was used by tables
+ - BUG/MEDIUM: config: verbosely reject peers sections with multiple local peers
+ - BUG/MINOR: epoll: use a fixed maxevents argument in epoll_wait()
+ - BUG/MINOR: config: fix improper check for failed memory alloc in ACL parser
+ - BUG/MINOR: config: free peer's address when exiting upon parsing error
+ - BUG/MINOR: config: check the proper variable when parsing log minlvl
+ - BUG/MEDIUM: checks: ensure the health_status is always within bounds
+ - BUG/MINOR: cli: show sess should always validate s->listener
+ - BUG/MINOR: log: improper NULL return check on utoa_pad()
+ - CLEANUP: http: remove a useless null check
+ - CLEANUP: tcp/unix: remove useless NULL check in {tcp,unix}_bind_listener()
+ - BUG/MEDIUM: signal: signal handler does not properly check for signal bounds
+ - BUG/MEDIUM: tools: off-by-one in quote_arg()
+ - BUG/MEDIUM: uri_auth: missing NULL check and memory leak on memory shortage
+ - BUG/MINOR: unix: remove the 'level' field from the ux struct
+ - CLEANUP: http: don't try to deinitialize http compression if it fails before init
+ - CLEANUP: config: slowstart is never negative
+ - CLEANUP: config: maxcompcpuusage is never negative
+ - BUG/MEDIUM: log: emit '-' for empty fields again
+ - BUG/MEDIUM: checks: fix a race condition between checks and observe layer7
+ - BUILD: fix a warning emitted by isblank() on non-c99 compilers
+ - BUILD: improve the makefile's support for libpcre
+ - MEDIUM: halog: add support for counting per source address (-ic)
+ - MEDIUM: tools: make str2sa_range support all address syntaxes
+ - MEDIUM: config: make use of str2sa_range() instead of str2sa()
+ - MEDIUM: config: use str2sa_range() to parse server addresses
+ - MEDIUM: config: use str2sa_range() to parse peers addresses
+ - MINOR: tests: add a config file to ease address parsing tests.
+ - MINOR: ssl: add a global tunable for the max SSL/TLS record size
+ - BUG/MINOR: syscall: fix NR_accept4 system call on sparc/linux
+ - BUILD/MINOR: syscall: add definition of NR_accept4 for ARM
+ - MINOR: config: report missing peers section name
+ - BUG/MEDIUM: tools: fix bad character handling in str2sa_range()
+ - BUG/MEDIUM: stats: never apply "unix-bind prefix" to the global stats socket
+ - MINOR: tools: prepare str2sa_range() to return an error message
+ - BUG/MEDIUM: checks: don't call connect() on unsupported address families
+ - MINOR: tools: prepare str2sa_range() to accept a prefix
+ - MEDIUM: tools: make str2sa_range() parse unix addresses too
+ - MEDIUM: config: make str2listener() use str2sa_range() to parse unix addresses
+ - MEDIUM: config: use a single str2sa_range() call to parse bind addresses
+ - MEDIUM: config: use str2sa_range() to parse log addresses
+ - CLEANUP: tools: remove str2sun() which is not used anymore.
+ - MEDIUM: config: add complete support for str2sa_range() in dispatch
+ - MEDIUM: config: add complete support for str2sa_range() in server addr
+ - MEDIUM: config: add complete support for str2sa_range() in 'server'
+ - MEDIUM: config: add complete support for str2sa_range() in 'peer'
+ - MEDIUM: config: add complete support for str2sa_range() in 'source' and 'usesrc'
+ - CLEANUP: minor cleanup in str2sa_range() and str2ip()
+ - CLEANUP: config: do not use multiple errmsg at once
+ - MEDIUM: tools: support specifying explicit address families in str2sa_range()
+ - MAJOR: listener: support inheriting a listening fd from the parent
+ - MAJOR: tools: support environment variables in addresses
+ - BUG/MEDIUM: http: add-header should not emit "-" for empty fields
+ - BUG/MEDIUM: config: ACL compatibility check on "redirect" was wrong
+ - BUG/MEDIUM: http: fix another issue caused by http-send-name-header
+ - DOC: mention the new HTTP 307 and 308 redirect statuses
+ - MEDIUM: poll: do not use FD_* macros anymore
+ - BUG/MAJOR: ev_select: disable the select() poller if maxsock > FD_SETSIZE
+ - BUG/MINOR: acl: ssl_fc_{alg,use}_keysize must parse integers, not strings
+ - BUG/MINOR: acl: ssl_c_used, ssl_fc{,_has_crt,_has_sni} take no pattern
+ - BUILD: fix usual isdigit() warning on solaris
+ - BUG/MEDIUM: tools: vsnprintf() is not always reliable on Solaris
+ - OPTIM: buffer: remove one jump in buffer_count()
+ - OPTIM: http: improve branching in chunk size parser
+ - OPTIM: http: optimize the response forward state machine
+ - BUILD: enable poll() by default in the makefile
+ - BUILD: add explicit support for Mac OS/X
+ - BUG/MAJOR: http: use a static storage for sample fetch context
+ - BUG/MEDIUM: ssl: improve error processing and reporting in ssl_sock_load_cert_list_file()
+ - BUG/MAJOR: http: fix regression introduced by commit a890d072
+ - BUG/MAJOR: http: fix regression introduced by commit d655ffe
+ - BUG/CRITICAL: using HTTP information in tcp-request content may crash the process
+ - MEDIUM: acl: remove flag ACL_MAY_LOOKUP which is improperly used
+ - MEDIUM: samples: use new flags to describe compatibility between fetches and their usages
+ - MINOR: log: indicate it when some unreliable sample fetches are logged
+ - MEDIUM: samples: move payload-based fetches and ACLs to their own file
+ - MINOR: backend: rename sample fetch functions and declare the sample keywords
+ - MINOR: frontend: rename sample fetch functions and declare the sample keywords
+ - MINOR: listener: rename sample fetch functions and declare the sample keywords
+ - MEDIUM: http: unify acl and sample fetch functions
+ - MINOR: session: rename sample fetch functions and declare the sample keywords
+ - MAJOR: acl: make all ACLs reference the fetch function via a sample.
+ - MAJOR: acl: remove the arg_mask from the ACL definition and use the sample fetch's
+ - MAJOR: acl: remove fetch argument validation from the ACL struct
+ - MINOR: http: add new direction-explicit sample fetches for headers and cookies
+ - MINOR: payload: add new direction-explicit sample fetches
+ - CLEANUP: acl: remove ACL hooks which were never used
+ - MEDIUM: proxy: remove acl_requires and just keep a flag "http_needed"
+ - MINOR: sample: provide a function to report the name of a sample check point
+ - MAJOR: acl: convert all ACL requires to SMP use+val instead of ->requires
+ - CLEANUP: acl: remove unused references to ACL_USE_*
+ - MINOR: http: replace acl_parse_ver with acl_parse_str
+ - MEDIUM: acl: move the ->parse, ->match and ->smp fields to acl_expr
+ - MAJOR: acl: add option -m to change the pattern matching method
+ - MINOR: acl: remove the use_count in acl keywords
+ - MEDIUM: acl: have a pointer to the keyword name in acl_expr
+ - MEDIUM: acl: support using sample fetches directly in ACLs
+ - MEDIUM: http: remove val_usr() to validate user_lists
+ - MAJOR: sample: maintain a per-proxy list of the fetch args to resolve
+ - MINOR: ssl: add support for the "alpn" bind keyword
+ - MINOR: http: status code 303 is HTTP/1.1 only
+ - MEDIUM: http: implement redirect 307 and 308
+ - MINOR: http: status 301 should not be marked non-cacheable
+
+2012/12/28 : 1.5-dev17
+ - MINOR: ssl: Setting global tune.ssl.cachesize value to 0 disables SSL session cache.
+ - BUG/MEDIUM: stats: fix stats page regression introduced by commit 20b0de5
+ - BUG/MINOR: stats: last fix was still wrong
+ - BUG/MINOR: stats: http-request rules still don't cope with stats
+ - BUG/MINOR: http: http-request add-header emits a corrupted header
+ - BUG/MEDIUM: stats: disable request analyser when processing POST or HEAD
+ - BUG/MINOR: log: make log-format, unique-id-format and add-header more independent
+ - BUILD: log: unused variable svid
+ - CLEANUP: http: rename the misleading http_check_access_rule
+ - MINOR: http: move redirect rule processing to its own function
+ - REORG: config: move the http redirect rule parser to proto_http.c
+ - MEDIUM: http: add support for "http-request redirect" rules
+ - MEDIUM: http: add support for "http-request tarpit" rule
+
+2012/12/24 : 1.5-dev16
+ - BUG/MEDIUM: ssl: Prevent ssl error from affecting other connections.
+ - BUG/MINOR: ssl: error is not reported if it occurs simultaneously with peer close detection.
+ - MINOR: ssl: add fetch and acl "ssl_c_used" to check if current SSL session uses a client certificate.
+ - MINOR: contrib: make the iprange tool grep for addresses
+ - CLEANUP: polling: gcc doesn't always optimize constants away
+ - OPTIM: poll: optimize fd management functions for low register count CPUs
+ - CLEANUP: poll: remove a useless double-check on fdtab[fd].owner
+ - OPTIM: epoll: use a temp variable for intermediary flag computations
+ - OPTIM: epoll: current fd does not count as a new one
+ - BUG/MINOR: poll: the I/O handler was called twice for polled I/Os
+ - MINOR: http: make resp_ver and status ACLs check for the presence of a response
+ - BUG/MEDIUM: stream-interface: fix possible stalls during transfers
+ - BUG/MINOR: stream_interface: don't return when the fd is already set
+ - BUG/MEDIUM: connection: always update connection flags prior to computing polling
+ - CLEANUP: buffer: use buffer_empty() instead of buffer_len()==0
+ - BUG/MAJOR: stream_interface: fix occasional data transfer freezes
+ - BUG/MEDIUM: stream_interface: fix another case where the reader might not be woken up
+ - BUG/MINOR: http: don't abort client connection on premature responses
+ - BUILD: no need to clean up when making git-tar
+ - MINOR: log: add a tag for amount of bytes uploaded from client to server
+ - BUG/MEDIUM: log: fix possible segfault during config parsing
+ - MEDIUM: log: change a few log tokens to make them easier to remember
+ - BUG/MINOR: log: add_to_logformat_list() used the wrong constants
+ - MEDIUM: log-format: make the format parser more robust and more extensible
+ - MINOR: sample: support cast from bool to string
+ - MINOR: samples: add a function to fetch and convert any sample to a string
+ - MINOR: log: add lf_text_len
+ - MEDIUM: log: add the ability to include samples in logs
+ - REORG: stats: massive code reorg and cleanup
+ - REORG: stats: move the HTTP header injection to proto_http
+ - REORG: stats: functions are now HTTP/CLI agnostic
+ - BUG/MINOR: log: fix regression introduced by commit 8a3f52
+ - MINOR: chunks: centralize the trash chunk allocation
+ - MEDIUM: stats: use hover boxes instead of title to report details
+ - MEDIUM: stats: use multi-line tips to display detailed counters
+ - MINOR: tools: simplify the use of the int to ascii macros
+ - MINOR: stats: replace STAT_FMT_CSV with STAT_FMT_HTML
+ - MINOR: http: prepare to support more http-request actions
+ - MINOR: log: make parse_logformat_string() take a const char *
+ - MEDIUM: http: add http-request 'add-header' and 'set-header' to build headers
+
+2012/12/12 : 1.5-dev15
+ - DOC: add a few precisions on compression
+ - BUG/MEDIUM: ssl: Fix handshake failure on session resumption with client cert.
+ - BUG/MINOR: ssl: One free session in cache remains unused.
+ - BUG/MEDIUM: ssl: first outgoing connection would fail with {ca,crt}-ignore-err
+ - MEDIUM: ssl: manage shared cache by blocks for huge sessions.
+ - MINOR: acl: add fetch for server session rate
+ - BUG/MINOR: compression: Content-Type is case insensitive
+ - MINOR: compression: disable on multipart or status != 200
+ - BUG/MINOR: http: don't report client aborts as server errors
+ - MINOR: stats: compute the ratio of compressed response based on 2xx responses
+ - MINOR: http: factor out the content-type checks
+ - BUG/MAJOR: stats: correctly check for a possible divide error when showing compression ratios
+ - BUILD: ssl: OpenSSL 0.9.6 has no renegotiation
+ - BUG/MINOR: http: disable compression when message has no body
+ - MINOR: compression: make the stats a bit more robust
+ - BUG/MEDIUM: comp: DEFAULT_MAXZLIBMEM was expressed in bytes and not megabytes
+ - MINOR: connection: don't remove failed handshake flags
+ - MEDIUM: connection: add an error code in connections
+ - MEDIUM: connection: add minimal error reporting in logs for incomplete connections
+ - MEDIUM: connection: add error reporting for the PROXY protocol header
+ - MEDIUM: connection: add error reporting for the SSL
+ - DOC: document the connection error format in logs
+ - BUG/MINOR: http: don't log a 503 on client errors while waiting for requests
+ - BUILD: stdbool is not portable
+ - BUILD: ssl: NAME_MAX is not portable, use MAXPATHLEN instead
+ - BUG/MAJOR: raw_sock: must check error code on hangup
+ - BUG/MAJOR: polling: do not set speculative events on ERR nor HUP
+ - BUG/MEDIUM: session: fix FD leak when transport layer logging is enabled
+ - MINOR: stats: add a few more information on session dump
+ - BUG/MINOR: tcp: set the ADDR_TO_SET flag on outgoing connections
+ - CLEANUP: connection: remove unused server/proxy/task/si_applet declarations
+ - BUG/MEDIUM: tcp: process could theoretically crash on lack of source ports
+ - MINOR: cfgparse: mention "interface" in the list of allowed "source" options
+ - MEDIUM: connection: introduce "struct conn_src" for servers and proxies
+ - CLEANUP: proto_tcp: use the same code to bind servers and backends
+ - CLEANUP: backend: use the same tproxy address selection code for servers and backends
+ - BUG/MEDIUM: stick-tables: conversions to strings were broken in dev13
+ - MEDIUM: proto_tcp: add support for tracking L7 information
+ - MEDIUM: counters: add sc1_trackers/sc2_trackers
+ - MINOR: http: add the "base32" pattern fetch function
+ - MINOR: http: add the "base32+src" fetch method.
+ - CLEANUP: session: use an array for the stick counters
+ - BUG/MINOR: proto_tcp: fix parsing of "table" in track-sc1/2
+ - BUG/MINOR: proto_tcp: bidirectional fetches not supported anymore in track-sc1/2
+ - BUG/MAJOR: connection: always recompute polling status upon I/O
+ - BUG/MINOR: connection: remove a few synchronous calls to polling updates
+ - MINOR: config: improve error checking on TCP stick-table tracking
+ - DOC: add some clarifications to the readme
+
+2012/11/26 : 1.5-dev14
+ - DOC: fix minor typos
+ - BUG/MEDIUM: compression: does not forward trailers
+ - MINOR: buffer_dump with ASCII
+ - BUG/MEDIUM: checks: mark the check as stopped after a connect error
+ - BUG/MEDIUM: checks: ensure we completely disable polling upon success
+ - BUG/MINOR: checks: don't mark the FD as closed before transport close
+ - MEDIUM: checks: avoid accumulating TIME_WAITs during checks
+ - MINOR: cli: report the msg state in full text in "show sess $PTR"
+ - CLEANUP: checks: rename some server check flags
+ - MAJOR: checks: rework completely bogus state machine
+ - BUG/MINOR: checks: slightly clean the state machine up
+ - MEDIUM: checks: avoid waking the application up for pure TCP checks
+ - MEDIUM: checks: close the socket as soon as we have a response
+ - BUG/MAJOR: checks: close FD on all timeouts
+ - MINOR: checks: fix recv polling after connect()
+ - MEDIUM: connection: provide a common conn_full_close() function
+ - BUG/MEDIUM: checks: prevent TIME_WAITs from appearing also on timeouts
+ - BUG/MAJOR: peers: the listener's maxaccept was not set and caused loops
+ - MINOR: listeners: make the accept loop more robust when maxaccept==0
+ - BUG/MEDIUM: acl: correctly resolve all args, not just the first one
+ - BUG/MEDIUM: acl: make prune_acl_expr() correctly free ACL expressions upon exit
+ - BUG/MINOR: stats: fix inversion of the report of a check in progress
+ - MEDIUM: tcp: add explicit support for delayed ACK in connect()
+ - BUG/MEDIUM: connection: always disable polling upon error
+ - MINOR: connection: abort earlier when errors are detected
+ - BUG/MEDIUM: checks: report handshake failures
+ - BUG/MEDIUM: connection: local_send_proxy must wait for connection to establish
+ - MINOR: tcp: add support for the "v6only" bind option
+ - MINOR: stats: also report the computed compression savings in html stats
+ - MINOR: stats: report the total number of compressed responses per front/back
+ - MINOR: tcp: add support for the "v4v6" bind option
+ - DOC: stats: document the comp_rsp stats column
+ - BUILD: buffer: fix another isprint() warning on solaris
+ - MINOR: cli: add support for the "show sess all" command
+ - BUG/MAJOR: cli: show sess <id> may randomly corrupt the back-ref list
+ - MINOR: cli: improve output format for show sess $ptr
+
+2012/11/22 : 1.5-dev13
+ - BUILD: fix build issue without USE_OPENSSL
+ - BUILD: fix compilation error with DEBUG_FULL
+ - DOC: ssl: remove prefer-server-ciphers documentation
+ - DOC: ssl: surround keywords with quotes
+ - DOC: fix minor typo on http-send-name-header
+ - BUG/MEDIUM: acls using IPv6 subnets patterns incorrectly match IPs
+ - BUG/MAJOR: fix a segfault on option http_proxy and url_ip acl
+ - MEDIUM: http: accept IPv6 values with (s)hdr_ip acl
+ - BUILD: report zlib support in haproxy -vv
+ - DOC: compression: add some details and clean up the formatting
+ - DOC: Change is_ssl acl to ssl_fc acl in example
+ - DOC: make it clear what the HTTP request size is
+ - MINOR: ssl: try to load Diffie-Hellman parameters from cert file
+ - DOC: ssl: update 'crt' statement on 'bind' about Diffie-Hellman parameters loading
+ - MINOR: ssl: add elliptic curve Diffie-Hellman support for ssl key generation
+ - DOC: ssl: add 'ecdhe' statement on 'bind'
+ - MEDIUM: ssl: add client certificate authentication support
+ - DOC: ssl: add 'verify', 'cafile' and 'crlfile' statements on 'bind'
+ - MINOR: ssl: add fetch and ACL 'client_crt' to test a client cert is present
+ - DOC: ssl: add fetch and ACL 'client_cert'
+ - MINOR: ssl: add ignore verify errors options
+ - DOC: ssl: add 'ca-ignore-err' and 'crt-ignore-err' statements on 'bind'
+ - MINOR: ssl: add fetch and ACL 'ssl_verify_result'
+ - DOC: ssl: add fetch and ACL 'ssl_verify_result'
+ - MINOR: ssl: add fetches and ACLs to return verify errors
+ - DOC: ssl: add fetches and ACLs 'ssl_verify_crterr', 'ssl_verify_caerr', and 'ssl_verify_crterr_depth'
+ - MINOR: ssl: disable shared memory and locks on session cache if nbproc == 1
+ - MINOR: ssl: add build param USE_PRIVATE_CACHE to build cache without shared memory
+ - MINOR: ssl: add statements 'notlsv11' and 'notlsv12' and rename 'notlsv1' to 'notlsv10'.
+ - DOC: ssl: add statements 'notlsv11' and 'notlsv12' and rename 'notlsv1' to 'notlsv10'.
+ - MEDIUM: config: authorize frontend and listen without bind.
+ - MINOR: ssl: add statement 'no-tls-tickets' on bind to disable stateless session resumption
+ - DOC: ssl: add 'no-tls-tickets' statement documentation.
+ - BUG/MINOR: ssl: Fix CRL check was not enabled when crlfile was specified.
+ - BUG/MINOR: build: Fix compilation issue on openssl 0.9.6 due to missing CRL feature.
+ - BUG/MINOR: conf: Fix 'maxsslconn' statement error if built without OPENSSL.
+ - BUG/MINOR: build: Fix failure with USE_OPENSSL=1 and USE_FUTEX=1 on archs i486 and i686.
+ - MINOR: ssl: remove prefer-server-ciphers statement and set it as the default on ssl listeners.
+ - BUG/MEDIUM: ssl: subsequent handshakes fail after server configuration changes
+ - MINOR: ssl: add 'crt-base' and 'ca-base' global statements.
+ - MEDIUM: conf: rename 'nosslv3' and 'notlsvXX' statements 'no-sslv3' and 'no-tlsvXX'.
+ - MEDIUM: conf: rename 'cafile' and 'crlfile' statements 'ca-file' and 'crl-file'
+ - MINOR: ssl: use bit fields to store ssl options instead of one int each
+ - MINOR: ssl: add 'force-sslv3' and 'force-tlsvXX' statements on bind.
+ - MINOR: ssl: add 'force-sslv3' and 'force-tlsvXX' statements on server
+ - MINOR: ssl: add defines LISTEN_DEFAULT_CIPHERS and CONNECT_DEFAULT_CIPHERS.
+ - BUG/MINOR: ssl: Fix issue on server statements 'no-tls*' and 'no-sslv3'
+ - MINOR: ssl: move ssl context init for servers from cfgparse.c to ssl_sock.c
+ - MEDIUM: ssl: reject ssl server keywords in default-server statement
+ - MINOR: ssl: add statement 'no-tls-tickets' on server side.
+ - MINOR: ssl: add statements 'verify', 'ca-file' and 'crl-file' on servers.
+ - DOC: Fix rename of options cafile and crlfile to ca-file and crl-file.
+ - MINOR: sample: manage binary to string type conversion in stick-table and samples.
+ - MINOR: acl: add parse and match primitives to use binary type on ACLs
+ - MINOR: sample: export 'sample_get_trash_chunk(void)'
+ - MINOR: conf: rename all ssl modules fetches using prefix 'ssl_fc' and 'ssl_c'
+ - MINOR: ssl: add pattern and ACLs fetches 'ssl_fc_protocol', 'ssl_fc_cipher', 'ssl_fc_use_keysize' and 'ssl_fc_alg_keysize'
+ - MINOR: ssl: add pattern fetch 'ssl_fc_session_id'
+ - MINOR: ssl: add pattern and ACLs fetches 'ssl_c_version' and 'ssl_f_version'
+ - MINOR: ssl: add pattern and ACLs fetches 'ssl_c_s_dn', 'ssl_c_i_dn', 'ssl_f_s_dn' and 'ssl_c_i_dn'
+ - MINOR: ssl: add pattern and ACLs 'ssl_c_sig_alg' and 'ssl_f_sig_alg'
+ - MINOR: ssl: add pattern and ACLs fetches 'ssl_c_key_alg' and 'ssl_f_key_alg'
+ - MINOR: ssl: add pattern and ACLs fetches 'ssl_c_notbefore', 'ssl_c_notafter', 'ssl_f_notbefore' and 'ssl_f_notafter'
+ - MINOR: ssl: add 'crt' statement on server.
+ - MINOR: ssl: checks the consistency of a private key with the corresponding certificate
+ - BUG/MEDIUM: ssl: review polling on reneg.
+ - BUG/MEDIUM: ssl: Fix some reneg cases not correctly handled.
+ - BUG/MEDIUM: ssl: Fix sometimes reneg fails if requested by server.
+ - MINOR: build: allow packagers to specify the ssl cache size
+ - MINOR: conf: add warning if ssl is not enabled and a certificate is present on bind.
+ - MINOR: ssl: Add tune.ssl.lifetime statement in global.
+ - MINOR: compression: Enable compression for IE6 w/SP2, IE7 and IE8
+ - BUG: http: revert broken optimisation from 82fe75c1a79dac933391501b9d293bce34513755
+ - DOC: duplicate ssl_sni section
+ - MEDIUM: HTTP compression (zlib library support)
+ - CLEANUP: use struct comp_ctx instead of union
+ - BUILD: remove dependency to zlib.h
+ - MINOR: compression: memlevel and windowsize
+ - MEDIUM: use pool for zlib
+ - MINOR: compression: try init in cfgparse.c
+ - MINOR: compression: init before deleting headers
+ - MEDIUM: compression: limit RAM usage
+ - MINOR: compression: tune.comp.maxlevel
+ - MINOR: compression: maximum compression rate limit
+ - MINOR: log-format: check number of arguments in cfgparse.c
+ - BUG/MEDIUM: compression: no Content-Type header but type in configuration
+ - BUG/MINOR: compression: deinit zlib only when required
+ - MEDIUM: compression: don't compress when no data
+ - MEDIUM: compression: use pool for comp_ctx
+ - MINOR: compression: rate limit in 'show info'
+ - MINOR: compression: report zlib memory usage
+ - BUG/MINOR: compression: dynamic level increase
+ - DOC: compression: unsupported cases.
+ - MINOR: compression: CPU usage limit
+ - MEDIUM: http: add "redirect scheme" to ease HTTP to HTTPS redirection
+ - BUG/MAJOR: ssl: missing tests in ACL fetch functions
+ - MINOR: config: add a function to indent error messages
+ - REORG: split "protocols" files into protocol and listener
+ - MEDIUM: config: replace ssl_conf by bind_conf
+ - CLEANUP: listener: remove unused conf->file and conf->line
+ - MEDIUM: listener: add a minimal framework to register "bind" keyword options
+ - MEDIUM: config: move the "bind" TCP parameters to proto_tcp
+ - MEDIUM: move bind SSL parsing to ssl_sock
+ - MINOR: config: improve error reporting for "bind" lines
+ - MEDIUM: config: move the common "bind" settings to listener.c
+ - MEDIUM: config: move all unix-specific bind keywords to proto_uxst.c
+ - MEDIUM: config: enumerate full list of registered "bind" keywords upon error
+ - MINOR: listener: add a scope field in the bind keyword lists
+ - MINOR: config: pass the file and line to config keyword parsers
+ - MINOR: stats: fill the file and line numbers in the stats frontend
+ - MINOR: config: set the bind_conf entry on listeners created from a "listen" line.
+ - MAJOR: listeners: use dual-linked lists to chain listeners with frontends
+ - REORG: listener: move unix perms from the listener to the bind_conf
+ - BUG: backend: balance hdr was broken since 1.5-dev11
+ - MINOR: standard: make memprintf() support a NULL destination
+ - MINOR: config: make str2listener() use memprintf() to report errors.
+ - MEDIUM: stats: remove the stats_sock struct from the global struct
+ - MINOR: ssl: set the listeners' data layer to ssl during parsing
+ - MEDIUM: stats: make use of the standard "bind" parsers to parse global socket
+ - DOC: move bind options to their own section
+ - DOC: stats: refer to "bind" section for "stats socket" settings
+ - DOC: fix index to reference bind and server options
+ - BUG: http: do not print garbage on invalid requests in debug mode
+ - BUG/MINOR: config: check the proper pointer to report unknown protocol
+ - CLEANUP: connection: offer conn_prepare() to set up a connection
+ - CLEANUP: config: fix typo inteface => interface
+ - BUG: stats: fix regression introduced by commit 4348fad1
+ - MINOR: cli: allow to set frontend maxconn to zero
+ - BUG/MAJOR: http: chunk parser was broken with buffer changes
+ - MEDIUM: monitor: simplify handling of monitor-net and mode health
+ - MINOR: connection: add a pointer to the connection owner
+ - MEDIUM: connection: make use of the owner instead of container_of
+ - BUG/MINOR: ssl: report the L4 connection as established when possible
+ - BUG/MEDIUM: proxy: must not try to stop disabled proxies upon reload
+ - BUG/MINOR: config: use a copy of the file name in proxy configurations
+ - BUG/MEDIUM: listener: don't pause protocols that do not support it
+ - MEDIUM: proxy: add the global frontend to the list of normal proxies
+ - BUG/MINOR: epoll: correctly disable FD polling in fd_rem()
+ - MINOR: signal: really ignore signals configured with no handler
+ - MINOR: buffers: add a few functions to write chars, strings and blocks
+ - MINOR: raw_sock: always report asynchronous connection errors
+ - MEDIUM: raw_sock: improve connection error reporting
+ - REORG: connection: rename the data layer the "transport layer"
+ - REORG: connection: rename app_cb "data"
+ - MINOR: connection: provide a generic data layer wakeup callback
+ - MINOR: connection: split conn_prepare() in two functions
+ - MINOR: connection: add an init callback to the data_cb struct
+ - MEDIUM: session: use a specific data_cb for embryonic sessions
+ - MEDIUM: connection: use a generic data-layer init() callback
+ - MEDIUM: connection: reorganize connection flags
+ - MEDIUM: connection: only call the data->wake callback on activity
+ - MEDIUM: connection: make it possible for data->wake to return an error
+ - MEDIUM: session: register a data->wake callback to process errors
+ - MEDIUM: connection: don't call the data->init callback upon error
+ - MEDIUM: connection: it's not the data layer's role to validate the connection
+ - MEDIUM: connection: automatically disable polling on error
+ - REORG: connection: move the PROXY protocol management to connection.c
+ - MEDIUM: connection: add a new local send-proxy transport callback
+ - MAJOR: checks: make use of the connection layer to send checks
+ - REORG: server: move the check-specific parts into a check subsection
+ - MEDIUM: checks: use real buffers to store requests and responses
+ - MEDIUM: check: add the ctrl and transport layers in the server check structure
+ - MAJOR: checks: completely use the connection transport layer
+ - MEDIUM: checks: add the "check-ssl" server option
+ - MEDIUM: checks: enable the PROXY protocol with health checks
+ - CLEANUP: checks: remove minor warnings for assigned but not used variables
+ - MEDIUM: tcp: enable TCP Fast Open on systems which support it
+ - BUG: connection: fix regression from commit 9e272bf9
+ - CLEANUP: cttproxy: remove a warning on undeclared close()
+ - BUG/MAJOR: ensure that hdr_idx is always reserved when L7 fetches are used
+ - MEDIUM: listener: add support for linux's accept4() syscall
+ - MINOR: halog: sort output by cookie code
+ - BUG/MINOR: halog: -ad/-ac report the correct number of output lines
+ - BUG/MINOR: halog: fix help message for -ut/-uto
+ - MINOR: halog: add a parameter to limit output line count
+ - BUILD: accept4: move the socketcall declaration outside of accept4()
+ - MINOR: server: add minimal infrastructure to parse keywords
+ - MINOR: standard: make indent_msg() support empty messages
+ - MEDIUM: server: check for registered keywords when parsing unknown keywords
+ - MEDIUM: server: move parsing of keyword "id" to server.c
+ - BUG/MEDIUM: config: check-send-proxy was ignored if SSL was not builtin
+ - MEDIUM: ssl: move "server" keyword SSL options parsing to ssl_sock.c
+ - MEDIUM: log: suffix the frontend's name with '~' when using SSL
+ - MEDIUM: connection: always unset the transport layer upon close
+ - BUG/MINOR: session: fix some leftover from debug code
+ - BUG/MEDIUM: session: enable the conn_session_update() callback
+ - MEDIUM: connection: add a flag to hold the transport layer
+ - MEDIUM: log: add a new LW_XPRT flag to pin the transport layer
+ - MINOR: log: make lf_text use a const char *
+ - MEDIUM: log: report SSL ciphers and version in logs using logformat %sslc/%sslv
+ - REORG: http: rename msg->buf to msg->chn since it's a channel
+ - CLEANUP: http: use 'chn' to name channel variables, not 'buf'
+ - CLEANUP: channel: use 'chn' instead of 'buf' as local variable names
+ - CLEANUP: tcp: use 'chn' instead of 'buf' or 'b' for channel pointer names
+ - CLEANUP: stream_interface: use 'chn' instead of 'b' to name channel pointers
+ - CLEANUP: acl: use 'chn' instead of 'b' to name channel pointers
+ - MAJOR: channel: replace the struct buffer with a pointer to a buffer
+ - OPTIM: channel: reorganize struct members to improve cache efficiency
+ - CLEANUP: session: remove term_trace which is not used anymore
+ - OPTIM: session: reorder struct session fields
+ - OPTIM: connection: pack the struct target
+ - DOC: document relations between internal entities
+ - MINOR: ssl: add 'ssl_npn' sample/acl to extract TLS/NPN information
+ - BUILD: ssl: fix shctx build on older compilers
+ - MEDIUM: ssl: add support for the "npn" bind keyword
+ - BUG: ssl: fix ssl_sni ACLs to correctly process regular expressions
+ - MINOR: chunk: provide string compare functions
+ - MINOR: sample: accept fetch keywords without parenthesis
+ - MEDIUM: sample: pass an empty list instead of a null for fetch args
+ - MINOR: ssl: improve socket behaviour upon handshake abort.
+ - BUG/MEDIUM: http: set DONTWAIT on data when switching to tunnel mode
+ - MEDIUM: listener: provide a fallback for accept4() when not supported
+ - BUG/MAJOR: connection: risk of crash on certain tricky close scenario
+ - MEDIUM: cli: allow the stats socket to be bound to a specific set of processes
+ - OPTIM: channel: inline channel_forward's fast path
+ - OPTIM: http: inline http_parse_chunk_size() and http_skip_chunk_crlf()
+ - OPTIM: tools: inline hex2i()
+ - CLEANUP: http: rename HTTP_MSG_DATA_CRLF state
+ - MINOR: compression: automatically disable compression for older browsers
+ - MINOR: compression: optimize memLevel to improve byte rate
+ - BUG/MINOR: http: compression should consider all Accept-Encoding header values
+ - BUILD: fix coexistence of openssl and zlib
+ - MINOR: ssl: add pattern and ACLs fetches 'ssl_c_serial' and 'ssl_f_serial'
+ - BUG/MEDIUM: command-line option -D must have precedence over "debug"
+ - MINOR: tools: add a clear_addr() function to unset an address
+ - BUG/MEDIUM: tcp: transparent bind to the source only when address is set
+ - CLEANUP: remove trashlen
+ - MAJOR: session: detach the connections from the stream interfaces
+ - DOC: update document describing relations between internal entities
+ - BUILD: make it possible to specify ZLIB path
+ - MINOR: compression: add an offload option to remove the Accept-Encoding header
+ - BUG: compression: disable auto-close and enable MSG_MORE during transfer
+ - CLEANUP: completely remove trashlen
+ - MINOR: chunk: add a function to reset a chunk
+ - CLEANUP: replace chunk_printf() with chunk_appendf()
+ - MEDIUM: make the trash be a chunk instead of a char *
+ - MEDIUM: remove remains of BUFSIZE in HTTP auth and sample conversions
+ - MEDIUM: stick-table: allocate the table key of size buffer size
+ - BUG/MINOR: stream_interface: don't loop over ->snd_buf()
+ - BUG/MINOR: session: ensure that we don't retry connection if some data were sent
+ - OPTIM: session: don't process the whole session when only timers need a refresh
+ - BUG/MINOR: session: mark the handshake as complete earlier
+ - MAJOR: connection: remove the CO_FL_CURR_*_POL flag
+ - BUG/MAJOR: always clear the CO_FL_WAIT_* flags after updating polling flags
+ - MAJOR: sepoll: make the poller totally event-driven
+ - OPTIM: stream_interface: disable reading when CF_READ_DONTWAIT is set
+ - BUILD: compression: remove a build warning
+ - MEDIUM: fd: don't unset fdtab[].updated upon delete
+ - REORG: fd: move the speculative I/O management from ev_sepoll
+ - REORG: fd: move the fd state management from ev_sepoll
+ - REORG: fd: centralize the processing of speculative events
+ - BUG: raw_sock: also consider ENOTCONN in addition to EAGAIN
+ - BUILD: stream_interface: remove si_fd() and its references
+ - BUILD: compression: enable build in BSD and OSX Makefiles
+ - MAJOR: ev_select: make the poller support speculative events
+ - MAJOR: ev_poll: make the poller support speculative events
+ - MAJOR: ev_kqueue: make the poller support speculative events
+ - MAJOR: polling: replace epoll with sepoll and remove sepoll
+ - MAJOR: polling: remove unused callbacks from the poller struct
+ - MEDIUM: http: refrain from sending "Connection: close" when Upgrade is present
+ - CLEANUP: channel: remove any reference of the hijackers
+ - CLEANUP: stream_interface: remove the external task type target
+ - MAJOR: connection: replace struct target with a pointer to an enum
+ - BUG: connection: fix typo in previous commit
+ - BUG: polling: don't skip polled events in the spec list
+ - MINOR: splice: disable it when the system returns EBADF
+ - MINOR: build: allow packagers to specify the default maxzlibmem
+ - BUG: halog: fix broken output limitation
+ - BUG: proxy: fix server name lookup in get_backend_server()
+ - BUG: compression: do not always increment the round counter on allocation failure
+ - BUG/MEDIUM: compression: release the zlib pools between keep-alive requests
+ - MINOR: global: don't prevent nbproc from being redefined
+ - MINOR: config: support process ranges for "bind-process"
+ - MEDIUM: global: add support for CPU binding on Linux ("cpu-map")
+ - MINOR: ssl: rename and document the tune.ssl.cachesize option
+ - DOC: update the PROXY protocol spec to support v2
+ - MINOR: standard: add a simple popcount function
+ - MEDIUM: adjust the maxaccept per listener depending on the number of processes
+ - BUG: compression: properly disable compression when content-type does not match
+ - MINOR: cli: report connection status in "show sess xxx"
+ - BUG/MAJOR: stream_interface: certain workloads could get stuck
+ - BUILD: cli: fix build when SSL is enabled
+ - MINOR: cli: report the fd state in "show sess xxx"
+ - MINOR: cli: report an error message on missing argument to compression rate
+ - MINOR: http: add some debugging functions to pretty-print msg state names
+ - BUG/MAJOR: stream_interface: read0 not always handled since dev12
+ - DOC: documentation on http header capture is wrong
+ - MINOR: http: allow the cookie capture size to be changed
+ - DOC: http header capture has not been limited in size for a long time
+ - DOC: update readme with build methods for BSD
+ - BUILD: silence a warning on Solaris about usage of isdigit()
+ - MINOR: stats: report HTTP compression stats per frontend and per backend
+ - MINOR: log: add '%Tl' to log-format
+ - MINOR: samples: update the url_param fetch to match parameters in the path
+
+2012/09/10 : 1.5-dev12
+ - CONTRIB: halog: sort URLs by avg bytes_read or total bytes_read
+ - MEDIUM: ssl: add support for prefer-server-ciphers option
+ - MINOR: IPv6 support for transparent proxy
+ - MINOR: protocol: add SSL context to listeners if USE_OPENSSL is defined
+ - MINOR: server: add SSL context to servers if USE_OPENSSL is defined
+ - MEDIUM: connection: add a new handshake flag for SSL (CO_FL_SSL_WAIT_HS).
+ - MEDIUM: ssl: add new files ssl_sock.[ch] to provide the SSL data layer
+ - MEDIUM: config: add the 'ssl' keyword on 'bind' lines
+ - MEDIUM: config: add support for the 'ssl' option on 'server' lines
+ - MEDIUM: ssl: protect against client-initiated renegotiation
+ - BUILD: add optional support for SSL via the USE_OPENSSL flag
+ - MEDIUM: ssl: add shared memory session cache implementation.
+ - MEDIUM: ssl: replace OpenSSL's session cache with the shared cache
+ - MINOR: ssl: add global setting tune.sslcachesize to set SSL session cache size.
+ - MEDIUM: ssl: add support for SNI and wildcard certificates
+ - DOC: Typos cleanup
+ - DOC: fix name for "option independant-streams"
+ - DOC: specify the default value for maxconn in the context of a proxy
+ - BUG/MINOR: to_log erased with unique-id-format
+ - LICENSE: add licence exception for OpenSSL
+ - BUG/MAJOR: cookie prefix doesn't support cookie-less servers
+ - BUILD: add an AIX 5.2 (and later) target.
+ - MEDIUM: fd/si: move peeraddr from struct fdinfo to struct connection
+ - MINOR: halog: use the more recent dual-mode fgets2 implementation
+ - BUG/MEDIUM: ebtree: ebmb_insert() must not call cmp_bits on full-length matches
+ - CLEANUP: halog: make clean should also remove .o files
+ - OPTIM: halog: make use of memchr() on platforms which provide a fast one
+ - OPTIM: halog: improve cold-cache behaviour when loading a file
+ - BUG/MINOR: ACL implicit arguments must be created with unresolved flag
+ - MINOR: replace acl_fetch_{path,url}* with smp_fetch_*
+ - MEDIUM: pattern: add the "base" sample fetch method
+ - OPTIM: i386: make use of kernel-mode-linux when available
+ - BUG/MINOR: tarpit: fix condition to return the HTTP 500 message
+ - BUG/MINOR: polling: some events were not set in various pollers
+ - MINOR: http: add the urlp_val ACL match
+ - BUG: stktable: tcp_src_to_stktable_key() must return NULL on invalid families
+ - MINOR: stats/cli: add plans to support more stick-table actions
+ - MEDIUM: stats/cli: add support for "set table key" to enter values
+ - REORG/MEDIUM: fd: remove FD_STCLOSE from struct fdtab
+ - REORG/MEDIUM: fd: remove checks for FD_STERROR in ev_sepoll
+ - REORG/MEDIUM: fd: get rid of FD_STLISTEN
+ - REORG/MINOR: connection: move declaration to its own include file
+ - REORG/MINOR: checks: put a struct connection into the server
+ - MINOR: connection: add flags to the connection struct
+ - MAJOR: get rid of fdtab[].state and use connection->flags instead
+ - MINOR: fd: add a new I/O handler to fdtab
+ - MEDIUM: polling: prepare to call the iocb() function when defined.
+ - MEDIUM: checks: make use of fdtab->iocb instead of cb[]
+ - MEDIUM: protocols: use the generic I/O callback for accept callbacks
+ - MINOR: connection: add a handler for fd-based connections
+ - MAJOR: connection: replace direct I/O callbacks with the connection callback
+ - MINOR: fd: make fdtab->owner a connection and not a stream_interface anymore
+ - MEDIUM: connection: remove the FD_POLL_* flags only once
+ - MEDIUM: connection: extract the send_proxy callback from proto_tcp
+ - MAJOR: tcp: remove the specific I/O callbacks for TCP connection probes
+ - CLEANUP: remove the now unused fdtab direct I/O callbacks
+ - MAJOR: remove the stream interface and task management code from sock_*
+ - MEDIUM: stream_interface: pass connection instead of fd in sock_ops
+ - MEDIUM: stream_interface: centralize the SI_FL_ERR management
+ - MAJOR: connection: add a new CO_FL_CONNECTED flag
+ - MINOR: rearrange tcp_connect_probe() and fix wrong return codes
+ - MAJOR: connection: call data layer handshakes from the handler
+ - MEDIUM: fd: remove the EV_FD_COND_* primitives
+ - MINOR: sock_raw: move calls to si_data_close upper
+ - REORG: connection: replace si_data_close() with conn_data_close()
+ - MEDIUM: sock_raw: introduce a read0 callback that is different from shutr
+ - MAJOR: stream_int: use common stream_int_shut*() functions regardless of the data layer
+ - MAJOR: fd: replace all EV_FD_* macros with new fd_*_* inline calls
+ - MEDIUM: fd: add fd_poll_{recv,send} for use when explicit polling is required
+ - MEDIUM: connection: add definitions for dual polling mechanisms
+ - MEDIUM: connection: make use of the new polling functions
+ - MAJOR: make use of conn_{data|sock}_{poll|stop|want}* in connection handlers
+ - MEDIUM: checks: don't use FD_WAIT_* anymore
+ - MINOR: fd: get rid of FD_WAIT_*
+ - MEDIUM: stream_interface: offer a generic function for connection updates
+ - MEDIUM: stream-interface: offer a generic chk_rcv function for connections
+ - MEDIUM: stream-interface: add a snd_buf() callback to sock_ops
+ - MEDIUM: stream-interface: provide a generic stream_int_chk_snd_conn() function
+ - MEDIUM: stream-interface: provide a generic si_conn_send_cb callback
+ - MEDIUM: stream-interface: provide a generic stream_sock_read0() function
+ - REORG/MAJOR: use "struct channel" instead of "struct buffer"
+ - REORG/MAJOR: extract "struct buffer" from "struct channel"
+ - MINOR: connection: provide conn_{data|sock}_{read0|shutw} functions
+ - REORG: sock_raw: rename the files raw_sock*
+ - MAJOR: raw_sock: extract raw_sock_to_buf() from raw_sock_read()
+ - MAJOR: raw_sock: temporarily disable splicing
+ - MINOR: stream-interface: add an rcv_buf callback to sock_ops
+ - REORG: stream-interface: move sock_raw_read() to si_conn_recv_cb()
+ - MAJOR: connection: split the send call into connection and stream interface
+ - MAJOR: stream-interface: restore splicing mechanism
+ - MAJOR: stream-interface: make conn_notify_si() more robust
+ - MEDIUM: proxy-proto: don't use buffer flags in conn_si_send_proxy()
+ - MAJOR: stream-interface: don't commit polling changes in every callback
+ - MAJOR: stream-interface: fix splice not to call chk_snd by itself
+ - MEDIUM: stream-interface: don't remove WAIT_DATA when a handshake is in progress
+ - CLEANUP: connection: split sock_ops into data_ops, app_cp and si_ops
+ - REORG: buffers: split buffers into chunk,buffer,channel
+ - MAJOR: channel: remove the BF_OUT_EMPTY flag
+ - REORG: buffer: move buffer_flush, b_adv and b_rew to buffer.h
+ - MINOR: channel: rename bi_full to channel_full as it checks the whole channel
+ - MINOR: buffer: provide a new buffer_full() function
+ - MAJOR: channel: stop relying on BF_FULL to take action
+ - MAJOR: channel: remove the BF_FULL flag
+ - REORG: channel: move buffer_{replace,insert_line}* to buffer.{c,h}
+ - CLEANUP: channel: use CF_/CHN_ prefixes instead of BF_/BUF_
+ - CLEANUP: channel: use "channel" instead of "buffer" in function names
+ - REORG: connection: move the target pointer from si to connection
+ - MAJOR: connection: move the addr field from the stream_interface
+ - MEDIUM: stream_interface: remove CAP_SPLTCP/CAP_SPLICE flags
+ - MEDIUM: proto_tcp: remove any dependence on stream_interface
+ - MINOR: tcp: replace tcp_src_to_stktable_key with addr_to_stktable_key
+ - MEDIUM: connection: add an ->init function to data layer
+ - MAJOR: session: introduce embryonic sessions
+ - MAJOR: connection: make the PROXY decoder a handshake handler
+ - CLEANUP: frontend: remove the old proxy protocol decoder
+ - MAJOR: connection: rearrange the polling flags.
+ - MEDIUM: connection: only call tcp_connect_probe when nothing was attempted yet
+ - MEDIUM: connection: complete the polling cleanups
+ - MEDIUM: connection: avoid calling handshakes when polling is required
+ - MAJOR: stream_interface: continue to update data polling flags during handshakes
+ - CLEANUP: fd: remove fdtab->flags
+ - CLEANUP: fdtab: flatten the struct and merge the spec struct with the rest
+ - CLEANUP: includes: fix includes for a number of users of fd.h
+ - MINOR: ssl: disable TCP quick-ack by default on SSL listeners
+ - MEDIUM: config: add a "ciphers" keyword to set SSL cipher suites
+ - MEDIUM: config: add "nosslv3" and "notlsv1" on bind and server lines
+ - BUG: ssl: mark the connection as waiting for an SSL connection during the handshake
+ - BUILD: http: rename error_message http_error_message to fix conflicts on RHEL
+ - BUILD: ssl: fix shctx build on RHEL with futex
+ - BUILD: include sys/socket.h to fix build failure on FreeBSD
+ - BUILD: fix build error without SSL (ssl_cert)
+ - BUILD: ssl: use MAP_ANON instead of MAP_ANONYMOUS
+ - BUG/MEDIUM: workaround an eglibc bug which truncates the pidfiles when nbproc > 1
+ - MEDIUM: config: support per-listener backlog and maxconn
+ - MINOR: session: do not send an HTTP/500 error on SSL sockets
+ - MEDIUM: config: implement maxsslconn in the global section
+ - BUG: tcp: close socket fd upon connect error
+ - MEDIUM: connection: improve error handling around the data layer
+ - MINOR: config: make the tasks "nice" value configurable on "bind" lines.
+ - BUILD: shut a gcc warning introduced by commit 269ab31
+ - MEDIUM: config: centralize handling of SSL config per bind line
+ - BUILD: makefile: report USE_OPENSSL status in build options
+ - BUILD: report openssl build settings in haproxy -vv
+ - MEDIUM: ssl: add sample fetches for is_ssl, ssl_has_sni, ssl_sni_*
+ - DOC: add a special acknowledgement for the stud project
+ - DOC: add missing SSL options for servers and listeners
+ - BUILD: automatically add -lcrypto for SSL
+ - DOC: add some info about openssl build in the README
+
+2012/06/04 : 1.5-dev11
+ - BUG/MEDIUM: option forwardfor if-none doesn't work with some configurations
+ - BUG/MAJOR: trash must always be the size of a buffer
+ - DOC: fix minor regex example issue and improve doc on stats
+ - MINOR: stream_interface: add a pointer to the listener for TARG_TYPE_CLIENT
+ - MEDIUM: protocol: add a pointer to struct sock_ops to the listener struct
+ - MINOR: checks: add on-marked-up option
+ - MINOR: balance uri: added 'whole' parameter to include query string in hash calculation
+ - MEDIUM: stream_interface: remove the si->init
+ - MINOR: buffers: add a rewind function
+ - BUG/MAJOR: fix regression on content-based hashing and http-send-name-header
+ - MAJOR: http: stop using msg->sol outside the parsers
+ - CLEANUP: http: make it more obvious that msg->som is always null outside of chunks
+ - MEDIUM: http: get rid of msg->som which is not used anymore
+ - MEDIUM: http: msg->sov and msg->sol will never wrap
+ - BUG/MAJOR: checks: don't call set_server_status_* when no LB algo is set
+ - BUG/MINOR: stop connect timeout when connect succeeds
+ - REORG: move the send-proxy code to tcp_connect_write()
+ - REORG/MINOR: session: detect the TCP monitor checks at the protocol accept
+ - MINOR: stream_interface: introduce a new "struct connection" type
+ - REORG/MINOR: stream_interface: move si->fd to struct connection
+ - REORG/MEDIUM: stream_interface: move applet->state and private to connection
+ - MINOR: stream_interface: add a data channel close function
+ - MEDIUM: stream_interface: call si_data_close() before releasing the si
+ - MINOR: peers: use the socket layer operations from the peer instead of sock_raw
+ - BUG/MINOR: checks: expire on timeout.check if smaller than timeout.connect
+ - MINOR: add a new function call tracer for debugging purposes
+ - BUG/MINOR: perform_http_redirect also needs to rewind the buffer
+ - BUG/MAJOR: b_rew() must pass a signed offset to b_ptr()
+ - BUG/MEDIUM: register peer sync handler in the proper order
+ - BUG/MEDIUM: buffers: fix bi_putchr() to correctly advance the pointer
+ - BUG/MINOR: fix option httplog validation with TCP frontends
+ - BUG/MINOR: log: don't report logformat errors in backends
+ - REORG/MINOR: use dedicated proxy flags for the cookie handling
+ - BUG/MINOR: config: do not report twice the incompatibility between cookie and non-http
+ - MINOR: http: add support for "httponly" and "secure" cookie attributes
+ - BUG/MEDIUM: ensure that unresolved arguments are freed exactly once
+ - BUG/MINOR: commit 196729ef used wrong condition resulting in freeing constants
+ - MEDIUM: stats: add support for soft stop/soft start in the admin interface
+ - MEDIUM: stats: add the ability to kill sessions from the admin interface
+ - BUILD: add support for linux kernels >= 2.6.28
+
+2012/05/14 : 1.5-dev10
+ - BUG/MINOR: stats admin: "Unexpected result" was displayed unconditionally
+ - BUG/MAJOR: acl: http_auth_group() must not accept any user from the userlist
+ - CLEANUP: auth: make the code build again with DEBUG_AUTH
+ - BUG/MEDIUM: config: don't crash at config load time on invalid userlist names
+ - REORG: use the name sock_raw instead of stream_sock
+ - MINOR: stream_interface: add a client target : TARG_TYPE_CLIENT
+ - BUG/MEDIUM: stream_interface: restore get_src/get_dst
+ - CLEANUP: sock_raw: remove last references to stream_sock
+ - CLEANUP: stream_interface: stop exporting socket layer functions
+ - MINOR: stream_interface: add an init callback to sock_ops
+ - MEDIUM: stream_interface: derive the socket operations from the target
+ - MAJOR: fd: remove the need for the socket layer to recheck the connection
+ - MINOR: session: call the socket layer init function when a session establishes
+ - MEDIUM: session: add support for tunnel timeouts
+ - MINOR: standard: add a new debug macro : fddebug()
+ - CLEANUP: fd: remove unused cb->b pointers in the struct fdtab
+ - OPTIM: proto_http: don't enable quick-ack on empty buffers
+ - OPTIM/MAJOR: ev_sepoll: process spec events after polled events
+ - OPTIM/MEDIUM: stream_interface: add a new SI_FL_NOHALF flag
+
+2012/05/08 : 1.5-dev9
+ - MINOR: Add release callback to si_applet
+ - CLEANUP: Fix some minor typos
+ - MINOR: Add TO/FROM_SET flags to struct stream_interface
+ - CLEANUP: Fix some minor whitespace issues
+ - MINOR: stats admin: allow unordered parameters in POST requests
+ - CLEANUP: fix typo in findserver() log message
+ - MINOR: stats admin: use the backend id instead of its name in the form
+ - MINOR: stats admin: reduce memcmp()/strcmp() calls on status codes
+ - DOC: cleanup indentation, alignment, columns and chapters
+ - DOC: fix some keywords arguments documentation
+ - MINOR: cli: display the 4 IP addresses and ports on "show sess XXX"
+ - BUG/MAJOR: log: possible segfault with logformat
+ - MEDIUM: log: split of log_format generation
+ - MEDIUM: log: New format-log flags: %Fi %Fp %Si %Sp %Ts %rt %H %pid
+ - MEDIUM: log: Unique ID
+ - MINOR: log: log-format: usable without httplog and tcplog
+ - BUG/MEDIUM: balance source did not properly hash IPv6 addresses
+ - MINOR: contrib/iprange: add a network IP range to mask converter
+ - MEDIUM: session: implement the "use-server" directive
+ - MEDIUM: log: add a new cookie flag 'U' to report situations where cookie is not used
+ - MEDIUM: http: make extract_cookie_value() iterate over cookie values
+ - MEDIUM: http: add cookie and scookie ACLs
+ - CLEANUP: lb_first: add reference to a paper describing the original idea
+ - MEDIUM: stream_sock: add a get_src and get_dst callback and remove SN_FRT_ADDR_SET
+ - BUG/MINOR: acl: req_ssl_sni would randomly fail if a session ID is present
+ - BUILD: http: make extract_cookie_value() return an int not size_t
+ - BUILD: http: stop gcc-4.1.2 from complaining about possibly uninitialized values
+ - CLEANUP: http: message parser must ignore HTTP_MSG_ERROR
+ - MINOR: standard: add a memprintf() function to build formatted error messages
+ - CLEANUP: remove a few warning about unchecked return values in debug code
+ - MEDIUM: move message-related flags from transaction to message
+ - DOC: add a diagram to explain how circular buffers work
+ - MAJOR: buffer rework: replace ->send_max with ->o
+ - MAJOR: buffer: replace buf->l with buf->{o+i}
+ - MINOR: buffers: provide simple pointer normalization functions
+ - MINOR: buffers: remove unused function buffer_contig_data()
+ - MAJOR: buffers: replace buf->w with buf->p - buf->o
+ - MAJOR: buffers: replace buf->r with buf->p + buf->i
+ - MAJOR: http: move buffer->lr to http_msg->next
+ - MAJOR: http: change msg->{som,col,sov,eoh} to be relative to buffer origin
+ - CLEANUP: http: remove unused http_msg->col
+ - MAJOR: http: turn http_msg->eol to a buffer-relative offset
+ - MEDIUM: http: add a pointer to the buffer in http_msg
+ - MAJOR: http: make http_msg->sol relative to buffer's origin
+ - MEDIUM: http: http_send_name_header: remove references to msg and buffer
+ - MEDIUM: http: remove buffer arg in a few header manipulation functions
+ - MEDIUM: http: remove buffer arg in http_capture_bad_message
+ - MEDIUM: http: remove buffer arg in http_msg_analyzer
+ - MEDIUM: http: remove buffer arg in http_upgrade_v09_to_v10
+ - MEDIUM: http: remove buffer arg in http_buffer_heavy_realign
+ - MEDIUM: http: remove buffer arg in chunk parsing functions
+ - MINOR: http: remove useless wrapping checks in http_msg_analyzer
+ - MEDIUM: buffers: fix unsafe use of buffer_ignore at some places
+ - MEDIUM: buffers: add new pointer wrappers and get rid of almost all buffer_wrap_add calls
+ - MEDIUM: buffers: implement b_adv() to advance a buffer's pointer
+ - MEDIUM: buffers: rename a number of buffer management functions
+ - MEDIUM: http: add a prefetch function for ACL pattern fetch
+ - MEDIUM: http: make all ACL fetch function use acl_prefetch_http()
+ - BUG/MINOR: http_auth: ACLs are volatile, not permanent
+ - MEDIUM: http/acl: merge all request and response ACL fetches of headers and cookies
+ - MEDIUM: http/acl: make acl_fetch_hdr_{ip,val} rely on acl_fetch_hdr()
+ - MEDIUM: add a new typed argument list parsing framework
+ - MAJOR: acl: make use of the new argument parsing framework
+ - MAJOR: acl: store the ACL argument types in the ACL keyword declaration
+ - MEDIUM: acl: acl_find_target() now resolves arguments based on their types
+ - MAJOR: acl: make acl_find_targets also resolve proxy names at config time
+ - MAJOR: acl: ensure that implicit table and proxies are valid
+ - MEDIUM: acl: remove unused tests for missing args when args are mandatory
+ - MEDIUM: pattern: replace type pattern_arg with type arg
+ - MEDIUM: pattern: get rid of arg_i in all functions making use of arguments
+ - MEDIUM: pattern: use the standard arg parser
+ - MEDIUM: pattern: add an argument validation callback to pattern descriptors
+ - MEDIUM: pattern: report the precise argument parsing error when known.
+ - MEDIUM: acl: remove the ACL_TEST_F_NULL_MATCH flag
+ - MINOR: pattern: add a new 'sample' type to store fetched data
+ - MEDIUM: pattern: add new sample types to replace pattern types
+ - MAJOR: acl: make use of the new sample struct and get rid of acl_test
+ - MEDIUM: pattern/acl: get rid of temp_pattern in ACLs
+ - MEDIUM: acl: get rid of the SET_RES flags
+ - MEDIUM: get rid of SMP_F_READ_ONLY and SMP_F_MUST_FREE
+ - MINOR: pattern: replace struct pattern with struct sample
+ - MEDIUM: pattern: integrate pattern_data into sample and use sample everywhere
+ - MEDIUM: pattern: retrieve the sample type in the sample, not in the keyword description
+ - MEDIUM: acl/pattern: switch rdp_cookie functions stack up-down
+ - MEDIUM: acl: replace acl_expr with args in acl fetch_* functions
+ - MINOR: tcp: replace acl_fetch_rdp_cookie with smp_fetch_rdp_cookie
+ - MEDIUM: acl/pattern: use the same direction scheme
+ - MEDIUM: acl/pattern: start merging common sample fetch functions
+ - MEDIUM: pattern: ensure that sample types always cast into other types.
+ - MEDIUM: acl/pattern: factor out the src/dst address fetches
+ - MEDIUM: acl: implement payload and payload_lv
+ - CLEANUP: pattern: ensure that payload and payload_lv always stay in the buffer
+ - MINOR: stick_table: centralize the handling of empty keys
+ - MINOR: pattern: centralize handling of unstable data in pattern_process()
+ - MEDIUM: pattern: use smp_fetch_rdp_cookie instead of the pattern specific version
+ - MINOR: acl: set SMP_OPT_ITERATE on fetch functions
+ - MINOR: acl: add a val_args field to keywords
+ - MINOR: proto_tcp: validate arguments of payload and payload_lv ACLs
+ - MEDIUM: http: merge acl and pattern header fetch functions
+ - MEDIUM: http: merge ACL and pattern cookie fetches into a single one
+ - MEDIUM: acl: report parsing errors to the caller
+ - MINOR: arg: improve error reporting on invalid arguments
+ - MINOR: acl: report errors encountered when loading patterns from files
+ - MEDIUM: acl: extend the pattern parsers to report meaningful errors
+ - REORG: use the name "sample" instead of "pattern" to designate extracted data
+ - REORG: rename "pattern" files
+ - MINOR: acl: add types to ACL patterns
+ - MINOR: standard: add an IPv6 parsing function (str62net)
+ - MEDIUM: acl: support IPv6 address matching
+ - REORG: stream_interface: create a struct sock_ops to hold socket operations
+ - REORG/MEDIUM: move protocol->{read,write} to sock_ops
+ - REORG/MEDIUM: stream_interface: initialize socket ops from descriptors
+ - REORG/MEDIUM: replace stream interface protocol functions by a proto pointer
+ - REORG/MEDIUM: move the default accept function from sockstream to protocols.c
+ - MEDIUM: proto_tcp: remove src6 and dst6 pattern fetch methods
+ - BUG/MINOR: http: error snapshots are wrong if buffer wraps
+ - BUG/MINOR: http: ensure that msg->err_pos is always relative to buf->p
+ - MEDIUM: http: improve error capture reports
+ - MINOR: acl: add the cook_val() match to match a cookie against an integer
+ - BUG/MEDIUM: send_proxy: fix initialisation of send_proxy_ofs
+ - MEDIUM: memory: add the ability to poison memory at run time
+ - BUG/MEDIUM: log: ensure that unique_id is properly initialized
+ - MINOR: cfgparse: use a common errmsg pointer for all parsers
+ - MEDIUM: cfgparse: make backend_parse_balance() use memprintf to report errors
+ - MEDIUM: cfgparse: use the new error reporting framework for remaining cfg_keywords
+ - MINOR: http: replace http_message_realign() with buffer_slow_realign()
+
+2012/03/26 : 1.5-dev8
+ - MINOR: patch for minor typo (ressources/resources)
+ - MEDIUM: http: add support for sending the server's name in the outgoing request
+ - DOC: mention that default checks are TCP connections
+ - BUG/MINOR: fix options forwardfor if-none when an alternative header name is specified
+ - CLEANUP: Make check_statuses, analyze_statuses and process_chk static
+ - CLEANUP: Fix HCHK spelling errors
+ - BUG/MINOR: fix typo in processing of http-send-name-header
+ - MEDIUM: log: Use linked lists for loggers
+ - BUILD: fix declaration inside a scope block
+ - REORG: log: split send_log function
+ - MINOR: config: Parse the string of the log-format config keyword
+ - MINOR: add ultoa, ulltoa, ltoa, lltoa implementations
+ - MINOR: Date and time functions that don't use snprintf
+ - MEDIUM: log: make http_sess_log use log_format
+ - DOC: log-format documentation
+ - MEDIUM: log: use log_format for mode tcplog
+ - MEDIUM: log-format: backend source address %Bi %Bp
+ - BUG/MINOR: log-format: fix %o flag
+ - BUG/MEDIUM: bad length in log_format and __send_log
+ - MINOR: logformat %st is signed
+ - BUILD/MINOR: fix the source URL in the spec file
+ - DOC: acl is http_first_req, not http_req_first
+ - BUG/MEDIUM: don't trim last spaces from headers consisting only of spaces
+ - MINOR: acl: add new matches for header/path/url length
+ - BUILD: halog: make halog build on solaris
+ - BUG/MINOR: don't use a wrong port when connecting to a server with mapped ports
+ - MINOR: remove the client/server side distinction in SI addresses
+ - MINOR: halog: add support for matching queued requests
+ - DOC: indicate that cookie "prefix" and "indirect" should not be mixed
+ - OPTIM/MINOR: move struct sockaddr_storage to the tail of structs
+ - OPTIM/MINOR: make it possible to change pipe size (tune.pipesize)
+ - BUILD/MINOR: silence a build warning in src/pipe.c (fcntl)
+ - OPTIM/MINOR: move the hdr_idx pools out of the proxy struct
+ - MEDIUM: tune.http.maxhdr makes it possible to configure the maximum number of HTTP headers
+ - BUG/MINOR: fix a segfault when parsing a config with undeclared peers
+ - CLEANUP: rename possibly confusing struct field "tracked"
+ - BUG/MEDIUM: checks: fix slowstart behaviour when server tracking is in use
+ - MINOR: config: tolerate server "cookie" setting in non-HTTP mode
+ - MEDIUM: buffers: add some new primitives and rework existing ones
+ - BUG: buffers: don't return a negative value on buffer_total_space_res()
+ - MINOR: buffers: make buffer_pointer() support negative pointers too
+ - CLEANUP: kill buffer_replace() and use an inline instead
+ - BUG: tcp: option nolinger does not work on backends
+ - CLEANUP: ebtree: remove a few annoying signedness warnings
+ - CLEANUP: ebtree: clarify licence and update to 6.0.6
+ - CLEANUP: ebtree: remove 4-year old harmless typo in duplicates insertion code
+ - CLEANUP: ebtree: remove another typo, a wrong initialization in insertion code
+ - BUG: ebtree: ebst_lookup() could return the wrong entry
+ - OPTIM: stream_sock: reduce the amount of in-flight spliced data
+ - OPTIM: stream_sock: save a failed recv syscall when splice returns EAGAIN
+ - MINOR: acl: add support for TLS server name matching using SNI
+ - BUG: http: re-enable TCP quick-ack upon incomplete HTTP requests
+ - BUG: proto_tcp: don't try to bind to a foreign address if sin_family is unknown
+ - MINOR: pattern: export the global temporary pattern
+ - CLEANUP: patterns: get rid of pattern_data_setstring()
+ - MEDIUM: acl: use temp_pattern to store fetched information in the "method" match
+ - MINOR: acl: include pattern.h to make pattern migration more transparent
+ - MEDIUM: pattern: change the pattern data integer from unsigned to signed
+ - MEDIUM: acl: use temp_pattern to store any integer-type information
+ - MEDIUM: acl: use temp_pattern to store any address-type information
+ - CLEANUP: acl: integer part of acl_test is not used anymore
+ - MEDIUM: acl: use temp_pattern to store any string-type information
+ - CLEANUP: acl: remove last data fields from the acl_test struct
+ - MEDIUM: http: replace get_ip_from_hdr2() with http_get_hdr()
+ - MEDIUM: patterns: the hdr() pattern is now of type string
+ - DOC: add minimal documentation on how ACLs work internally
+ - DOC: add a coding-style file
+ - OPTIM: halog: keep a fast path for the lines-count only
+ - CLEANUP: silence a warning when building on sparc
+ - BUG: http: tighten the list of allowed characters in a URI
+ - MEDIUM: http: block non-ASCII characters in URIs by default
+ - DOC: add some documentation from RFC3986 about URI format
+ - BUG/MINOR: cli: correctly remove the whole table on "clear table"
+ - BUG/MEDIUM: correctly disable servers tracking other disabled servers.
+ - BUG/MEDIUM: zero-weight servers must not dequeue requests from the backend
+ - MINOR: halog: add some help on the command line
+ - BUILD: fix build error on FreeBSD
+ - BUG: fix double free in peers config error path
+ - MEDIUM: improve config check return codes
+ - BUILD: make it possible to look for pcre in the default system paths
+ - MINOR: config: emit a warning when 'default_backend' masks servers
+ - MINOR: backend: rework the LC definition to support other connection-based algos
+ - MEDIUM: backend: add the 'first' balancing algorithm
+ - BUG: fix httplog trailing LF
+ - MEDIUM: increase chunk-size limit to 2GB-1
+ - BUG: queue: fix dequeueing sequence on HTTP keep-alive sessions
+ - BUG: http: disable TCP delayed ACKs when forwarding content-length data
+ - BUG: checks: fix server maintenance exit sequence
+ - BUG/MINOR: stream_sock: don't remove BF_EXPECT_MORE and BF_SEND_DONTWAIT on partial writes
+ - DOC: enumerate valid status codes for "observe layer7"
+ - MINOR: buffer: switch a number of buffer args to const
+ - CLEANUP: silence signedness warning in acl.c
+ - BUG: stream_sock: si->release was not called upon shutw()
+ - MINOR: log: use "%ts" to log term status only and "%tsc" to log with cookie
+ - BUG/CRITICAL: log: fix risk of crash in development snapshot
+ - BUG/MAJOR: possible crash when using capture headers on TCP frontends
+ - MINOR: config: disable header captures in TCP mode and complain
+
+2011/09/10 : 1.5-dev7
+ - [BUG] fix binary stick-tables
+ - [MINOR] http: *_dom matching header functions now also split on ":"
+ - [BUG] checks: fix support of Mysqld >= 5.5 for mysql-check
+ - [MINOR] acl: add srv_conn acl to count connections on a specific backend server
+ - [MINOR] check: add redis check support
+ - [DOC] small fixes to clearly distinguish between keyword and variables
+ - [MINOR] halog: add support for termination code matching (-tcn/-TCN)
+ - [DOC] Minor spelling fixes and grammatical enhancements
+ - [CLEANUP] dumpstats: make symbols static where possible
+ - [MINOR] Break out dumping table
+ - [MINOR] Break out processing of clear table
+ - [MINOR] Allow listing of stick table by key
+ - [MINOR] Break out all stick table socat command parsing
+ - [MINOR] More flexible clearing of stick table
+ - [MINOR] Allow showing and clearing by key of ipv6 stick tables
+ - [MINOR] Allow showing and clearing by key of integer stick tables
+ - [MINOR] Allow showing and clearing by key of string stick tables
+ - [CLEANUP] Remove assigned but unused variables
+ - [CLEANUP] peers.h: fix declarations
+ - [CLEANUP] session.c: Make functions static where possible
+ - [MINOR] Add active connection list to server
+ - [MINOR] Allow shutdown of sessions when a server becomes unavailable
+ - [MINOR] Add down termination condition
+ - [MINOR] Make appsess{,ion}_refresh static
+ - [MINOR] Add rdp_cookie pattern fetch function
+ - [CLEANUP] Remove unnecessary casts
+ - [MINOR] Add non-stick server option
+ - [MINOR] Consistently use error in tcp_parse_tcp_req()
+ - [MINOR] Consistently free expr on error in cfg_parse_listen()
+ - [MINOR] Free rdp_cookie_name on deinit()
+ - [MINOR] Free tcp rules on deinit()
+ - [MINOR] Free stick table pool on deinit()
+ - [MINOR] Free stick rules on deinit()
+ - [MEDIUM] Fix stick-table replication on soft-restart
+ - [MEDIUM] Correct ipmask() logic
+ - [MINOR] Correct typo in table dump examples
+ - [MINOR] Fix build error in stream_int_register_handler()
+ - [MINOR] Use DPRINTF in assign_server()
+ - [BUG] checks: http-check expect could fail a check on multi-packet responses
+ - [DOC] fix minor typo in the "dispatch" doc
+ - [BUG] proto_tcp: fix address binding on remote source
+ - [MINOR] http: don't report the "haproxy" word on the monitoring response
+ - [REORG] http: move HTTP error codes back to proto_http.h
+ - [MINOR] http: make the "HTTP 200" status code configurable.
+ - [MINOR] http: partially revert the chunking optimization for now
+ - [MINOR] stream_sock: always clear BF_EXPECT_MORE upon complete transfer
+ - [CLEANUP] stream_sock: remove unneeded FL_TCP and factor out test
+ - [MEDIUM] http: add support for "http-no-delay"
+ - [OPTIM] http: optimize chunking again in non-interactive mode
+ - [OPTIM] stream_sock: avoid fast-forwarding of partial data
+ - [OPTIM] stream_sock: don't use splice on too small payloads
+ - [MINOR] config: make it possible to specify a cookie even without a server
+ - [BUG] stats: support url-encoded forms
+ - [MINOR] config: automatically compute a default fullconn value
+ - [CLEANUP] config: remove some left-over printf debugging code from previous patch
+ - [DOC] add missing entry for stick store-response
+ - [MEDIUM] http: add support for 'cookie' and 'set-cookie' patterns
+ - [BUG] halog: correctly handle truncated last line
+ - [MINOR] halog: make SKIP_CHAR stop on field delimiters
+ - [MINOR] halog: add support for HTTP log matching (-H)
+ - [MINOR] halog: gain back performance before SKIP_CHAR fix
+ - [OPTIM] halog: cache some common fields positions
+ - [OPTIM] halog: check once for correct line format and reuse the pointer
+ - [OPTIM] halog: remove many 'if' by using a function pointer for the filters
+ - [OPTIM] halog: remove support for tab delimiters in input data
+ - [BUG] session: risk of crash on out of memory (1.5-dev regression)
+ - [MINOR] session: try to emit a 500 response on memory allocation errors
+ - [OPTIM] stream_sock: reduce the default number of accepted connections at once
+ - [BUG] stream_sock: disable listener when system resources are exhausted
+ - [MEDIUM] proxy: add a PAUSED state to listeners and move socket tricks out of proxy.c
+ - [BUG] stream_sock: ensure orphan listeners don't accept too many connections
+ - [MINOR] listeners: add listen_full() to mark a listener full
+ - [MINOR] listeners: add support for queueing resource limited listeners
+ - [MEDIUM] listeners: put listeners in queue upon resource shortage
+ - [MEDIUM] listeners: queue proxy-bound listeners at the proxy's
+ - [MEDIUM] listeners: don't stop proxies when global maxconn is reached
+ - [MEDIUM] listeners: don't change listeners states anymore in maintain_proxies
+ - [CLEANUP] proxy: rename a few proxy states (PR_STIDLE and PR_STRUN)
+ - [MINOR] stats: report a "WAITING" state for sockets waiting for resource
+ - [MINOR] proxy: make session rate-limit more accurate
+ - [MINOR] sessions: only wake waiting listeners up if rate limit is OK
+ - [BUG] proxy: peers must only be stopped once, not upon every call to maintain_proxies
+ - [CLEANUP] proxy: merge maintain_proxies() operation inside a single loop
+ - [MINOR] task: new function task_schedule() to schedule a wake up
+ - [MAJOR] proxy: finally get rid of maintain_proxies()
+ - [BUG] proxy: stats frontend and peers were missing many initializers
+ - [MEDIUM] listeners: add a global listener management task
+ - [MINOR] proxy: make findproxy() return proxies from numeric IDs too
+ - [DOC] fix typos, "#" is a sharp, not a dash
+ - [MEDIUM] stats: add support for changing frontend's maxconn at runtime
+ - [MEDIUM] checks: group health checks methods by values and save option bits
+ - [MINOR] session-counters: add the ability to clear the counters
+ - [BUG] check: http-check expect + regex would crash in defaults section
+ - [MEDIUM] http: make x-forwarded-for addition conditional
+ - [REORG] build: move syscall redefinition to specific places
+ - [CLEANUP] update the year in the copyright banner
+ - [BUG] possible crash in 'show table' on stats socket
+ - [BUG] checks: use the correct destination port for sending checks
+ - [BUG] backend: risk of picking a wrong port when mapping is used with crossed families
+ - [MINOR] make use of set_host_port() and get_host_port() to get rid of family mismatches
+ - [DOC] fixed a few "sensible" -> "sensitive" errors
+ - [MINOR] make use of addr_to_str() and get_host_port() to replace many inet_ntop()
+ - [BUG] http: trailing white spaces must also be trimmed after headers
+ - [MINOR] stats: display "<NONE>" instead of the frontend name when unknown
+ - [MINOR] http: take a capture of too large requests and responses
+ - [MINOR] http: take a capture of truncated responses
+ - [MINOR] http: take a capture of bad content-lengths.
+ - [DOC] add a few old and uncommitted docs
+ - [CLEANUP] cfgparse: fix reported options for the "bind" keyword
+ - [MINOR] halog: add -hs/-HS to filter by HTTP status code range
+ - [MINOR] halog: support backslash-escaped quotes
+ - [CLEANUP] remove dirty left-over of a debugging message
+ - [MEDIUM] stats: disable complex socket reservation for stats socket
+ - [CLEANUP] remove a useless test in manage_global_listener_queue()
+ - [MEDIUM] stats: add the "set maxconn" setting to the command line interface
+ - [MEDIUM] add support for global.maxconnrate to limit the per-process conn rate.
+ - [MINOR] stats: report the current and max global connection rates
+ - [MEDIUM] stats: add the ability to adjust the global maxconnrate
+ - [BUG] peers: don't pre-allocate 65000 connections to each peer
+ - [MEDIUM] don't limit peers nor stats socket to maxconn nor maxconnrate
+ - [BUG] peers: the peer frontend must not emit any log
+ - [CLEANUP] proxy: make pause_proxy() perform the required controls and emit the logs
+ - [BUG] peers: don't keep a peers section which has a NULL frontend
+ - [BUG] peers: ensure the peers are resumed if they were paused
+ - [MEDIUM] stats: add the ability to enable/disable/shutdown a frontend at runtime
+ - [MEDIUM] session: make session_shutdown() an independent function
+ - [MEDIUM] stats: offer the possibility to kill a session from the CLI
+ - [CLEANUP] stats: centralize tests for backend/server inputs on the CLI
+ - [MEDIUM] stats: offer the possibility to kill sessions by server
+ - [MINOR] halog: do not consider byte 0x8A as end of line
+ - [MINOR] frontend: ensure debug message length is always initialized
+ - [OPTIM] halog: make fgets parse more bytes by blocks
+ - [OPTIM] halog: add assembly version of the field lookup code
+ - [MEDIUM] poll: add a measurement of idle vs work time
+ - [CLEANUP] startup: report only the basename in the usage message
+ - [MINOR] startup: add an option to change to a new directory
+ - [OPTIM] task: don't scan the run queue if we know it's empty
+ - [BUILD] stats: stdint is not present on solaris
+ - [DOC] update the README file to reflect new naming rules for patches
+ - [MINOR] stats: report the number of requests intercepted by the frontend
+ - [DOC] update ROADMAP file
+
+2011/04/08 : 1.5-dev6
+ - [BUG] stream_sock: use get_addr_len() instead of sizeof() on sockaddr_storage
+ - [BUG] TCP source tracking was broken with IPv6 changes
+ - [BUG] stick-tables did not work when converting IPv6 to IPv4
+ - [CRITICAL] fix risk of crash when dealing with space in response cookies
+
+2011/03/29 : 1.5-dev5
+ - [BUG] standard: is_addr return value for IPv4 was inverted
+ - [MINOR] update comment about IPv6 support for server
+ - [MEDIUM] use getaddrinfo to resolve names if gethostbyname fails
+ - [DOC] update IPv6 support for bind
+ - [DOC] document IPv6 support for server
+ - [DOC] fix a minor typo
+ - [MEDIUM] IPv6 support for syslog
+ - [DOC] document IPv6 support for syslog
+ - [MEDIUM] IPv6 support for stick-tables
+ - [DOC] document IPv6 support for stick-tables
+ - [DOC] update ROADMAP file
+ - [BUG] session: src_conn_cur was returning src_conn_cnt instead
+ - [MINOR] frontend: add a make_proxy_line function
+ - [MEDIUM] stream_sock: add support for sending the proxy protocol header line
+ - [MEDIUM] server: add support for the "send-proxy" option
+ - [DOC] update the spec on the proxy protocol
+ - [BUILD] proto_tcp: fix build issue with CTTPROXY
+ - [DOC] update ROADMAP file
+ - [MEDIUM] config: rework the IPv4/IPv6 address parser to support host-only addresses
+ - [MINOR] cfgparse: better report wrong listening addresses and make use of str2sa_range
+ - [BUILD] add the USE_GETADDRINFO build option
+ - [TESTS] provide a test case for various address formats
+ - [BUG] session: conn_retries was not always initialized
+ - [BUG] log: retrieve the target from the session, not the SI
+ - [BUG] http: fix possible incorrect forwarded wrapping chunk size (take 2)
+ - [MINOR] tools: add two macros MID_RANGE and MAX_RANGE
+ - [BUG] http: fix content-length handling on 32-bit platforms
+ - [OPTIM] buffers: uninline buffer_forward()
+ - [BUG] stream_sock: fix handling for server side PROXY protocol
+ - [MINOR] acl: add support for table_cnt and table_avl matches
+ - [DOC] update ROADMAP file
+
+2011/03/13 : 1.5-dev4
+ - [MINOR] cfgparse: Check whether the path given for the stats socket actually fits into the sockaddr_un structure to avoid truncation.
+ - [MINOR] unix sockets : inherit the backlog size from the listener
+ - [CLEANUP] unix sockets : move create_uxst_socket() in uxst_bind_listener()
+ - [DOC] fix a minor typo
+ - [DOC] fix ignore-persist documentation
+ - [MINOR] add warnings on features not compatible with multi-process mode
+ - [BUG] http: fix http-pretend-keepalive and httpclose/tunnel mode
+ - [MINOR] stats: add support for several packets in stats admin
+ - [BUG] stats: admin commands must check the proxy state
+ - [BUG] stats: admin web interface must check the proxy state
+ - [MINOR] http: add pattern extraction method to stick on query string parameter
+ - [MEDIUM] add internal support for IPv6 server addresses
+ - [MINOR] acl: add be_id/srv_id to match backend's and server's id
+ - [MINOR] log: add support for passing the forwarded hostname
+ - [MINOR] log: ability to override the syslog tag
+ - [MINOR] checks: add PostgreSQL health check
+ - [DOC] update ROADMAP file
+ - [BUILD] pattern: use 'int' instead of 'int32_t'
+ - [OPTIM] linux: add support for bypassing libc to force using vsyscalls
+ - [BUG] debug: report the correct poller list in verbose mode
+ - [BUG] capture: do not capture a cookie if there is no memory left
+ - [BUG] appsession: fix possible double free in case of out of memory
+ - [CRITICAL] cookies: mixing cookies in indirect mode and appsession can crash the process
+ - [BUG] http: correctly update the header list when removing two consecutive headers
+ - [BUILD] add the CPU=native and ARCH=32/64 build options
+ - [BUILD] add -fno-strict-aliasing to fix warnings with gcc >= 4.4
+ - [CLEANUP] hash: move the avalanche hash code globally available
+ - [MEDIUM] hash: add support for an 'avalanche' hash-type
+ - [DOC] update roadmap file
+ - [BUG] http: do not re-enable the PROXY analyser on keep-alive
+ - [OPTIM] http: don't send each chunk in a separate packet
+ - [DOC] fix minor typos reported recently in the peers section
+ - [DOC] fix another typo in the doc
+ - [MINOR] stats: report HTTP message state and buffer flags in error dumps
+ - [BUG] http chunking: don't report a parsing error on connection errors
+ - [BUG] stream_interface: truncate buffers when sending error messages
+ - [MINOR] http: support wrapping messages in error captures
+ - [MINOR] http: capture incorrectly chunked message bodies
+ - [MINOR] stats: add global event ID and count
+ - [BUG] http: analyser optimizations broke pipelining
+ - [CLEANUP] frontend: only apply TCP-specific settings to TCP/TCP6 sockets
+ - [BUG] http: fix incorrect error reporting during data transfers
+ - [CRITICAL] session: correctly leave turn-around and queue states on abort
+ - [BUG] session: release slot before processing pending connections
+ - [MINOR] tcp: add support for dynamic MSS setting
+ - [BUG] stick-table: correctly terminate string keys during lookups
+ - [BUG] acl: fix handling of empty lines in pattern files
+ - [BUG] stick-table: use the private buffer when padding strings
+ - [BUG] ebtree: fix ebmb_lookup() with len smaller than the tree's keys
+ - [OPTIM] ebtree: ebmb_lookup: reduce stack usage by moving the return code out of the loop
+ - [OPTIM] ebtree: inline ebst_lookup_len and ebis_lookup_len
+ - [REVERT] undo the stick-table string key lookup fixes
+ - [MINOR] http: improve url_param pattern extraction to ignore empty values
+ - [BUILD] frontend: shut a warning with TCP_MAXSEG
+ - [BUG] http: update the header list's tail when removing the last header
+ - [DOC] fix minor typo in the proxy protocol doc
+ - [DOC] fix typos (http-request instead of http-check)
+ - [BUG] http: use correct ACL pointer when evaluating authentication
+ - [BUG] cfgparse: correctly count one socket per port in ranges
+ - [BUG] startup: set the rlimits before binding ports, not after.
+ - [BUG] acl: srv_id must return no match when the server is NULL
+ - [MINOR] acl: add ability to check for internal response-only parameters
+ - [MINOR] acl: srv_id is only valid in responses
+ - [MINOR] config: warn if response-only conditions are used in "redirect" rules
+ - [BUG] acl: fd leak when reading patterns from file
+ - [DOC] fix minor typo in "usesrc"
+ - [BUG] http: fix possible incorrect forwarded wrapping chunk size
+ - [BUG] http: fix computation of message body length after forwarding has started
+ - [BUG] http: balance url_param did not work with first parameters on POST
+ - [TESTS] update the url_param regression test to test check_post too
+ - [DOC] update ROADMAP
+ - [DOC] internal: reflect the fact that SI_ST_ASS is transient
+ - [BUG] config: don't crash on empty pattern files.
+ - [MINOR] stream_interface: make use of an applet descriptor for IO handlers
+ - [REORG] stream_interface: move the st0, st1 and private members to the applet
+ - [REORG] stream_interface: split the struct members in 3 parts
+ - [REORG] session: move client and server address to the stream interface
+ - [REORG] tcp: make tcpv4_connect_server() take the target address from the SI
+ - [MEDIUM] stream_interface: store the target pointer and type
+ - [CLEANUP] stream_interface: remove the applet.handler pointer
+ - [MEDIUM] log: take the logged server name from the stream interface
+ - [CLEANUP] session: remove data_source from struct session
+ - [CLEANUP] stats: make all dump functions only rely on the stream interface
+ - [REORG] session: move the data_ctx struct to the stream interface's applet
+ - [MINOR] proxy: add PR_O2_DISPATCH to detect dispatch mode
+ - [MINOR] cfgparse: only keep one of dispatch, transparent, http_proxy
+ - [MINOR] session: add a pointer to the new target into the session
+ - [MEDIUM] session: remove s->prev_srv which is not needed anymore
+ - [CLEANUP] stream_interface: use inline functions to manipulate targets
+ - [MAJOR] session: remove the ->srv pointer from struct session
+ - [MEDIUM] stats: split frontend and backend stats
+ - [MEDIUM] http: always evaluate http-request rules before stats http-request
+ - [REORG] http: move the http-request rules to proto_http
+ - [BUG] http: stats were not incremented on http-request deny
+ - [MINOR] checks: report it if checks fail due to socket creation error
+
+2010/11/11 : 1.5-dev3
+ - [DOC] fix http-request documentation
+ - [MEDIUM] enable/disable servers from the stats web interface
+ - [MEDIUM] stats: add an admin level
+ - [DOC] stats: document the "stats admin" statement
+ - [MINOR] startup: print the proxy socket which caused an error
+ - [CLEANUP] Remove unneeded chars allocation
+ - [MINOR] config: detect options not supported due to compilation options
+ - [MINOR] Add pattern's fetchs payload and payload_lv
+ - [MINOR] frontend: improve accept-proxy header parsing
+ - [MINOR] frontend: add tcpv6 support on accept-proxy bind
+ - [MEDIUM] Enhance error message management on binds
+ - [MINOR] Manage unix socket source field on logs
+ - [MINOR] Manage unix socket source field on session dump on sock stats
+ - [MINOR] Support of unix listener sockets for debug and log event messages on frontend.c
+ - [MINOR] Add some tests on sockets family for port remapping and mode transparent.
+ - [MINOR] Manage socket type unix for some logs
+ - [MINOR] Enhance controls of socket's family on acls and pattern fetch
+ - [MINOR] Support listener's sockets unix on http logs.
+ - [MEDIUM] Add support for binding on unix sockets.
+ - [BUG] stick table purge failure if size less than 255
+ - [BUG] stick table entries expire on counters updates/read or show table, even if there is no "expire" parameter
+ - [MEDIUM] Implement tcp inspect response rules
+ - [DOC] tcp-response content and inspect
+ - [MINOR] new acls fetch req_ssl_hello_type and rep_ssl_hello_type
+ - [DOC] acls rep_ssl_hello and req_ssl_hello
+ - [MEDIUM] Create new protected pattern types CONSTSTRING and CONSTDATA to force memcpy if data from protected areas need to be manipulated.
+ - [DOC] new type binary in stick-table
+ - [DOC] stick store-response and new patterns payload and payload_lv
+ - [MINOR] Manage all types (ip, integer, string, binary) on cli "show table" command
+ - [MEDIUM] Create updates tree on stick table to manage sync.
+ - [MAJOR] Add new files src/peer.c, include/proto/peers.h and include/types/peers.h for sync stick table management
+ - [MEDIUM] Manage peers section parsing and stick table registration on peers.
+ - [MEDIUM] Manage soft stop on peers proxy
+ - [DOC] add documentation for peers section
+ - [MINOR] checks: add support for LDAPv3 health checks
+ - [MINOR] add better support to "mysql-check"
+ - [BUG] Restore info about available active/backup servers
+ - [CONTRIB] Update haproxy.pl
+ - [CONTRIB] Update Cacti Templates
+ - [CONTRIB] add templates for Cacti.
+ - [BUG] http: don't consider commas as a header delimiter within quotes
+ - [MINOR] support a global jobs counter
+ - [DOC] add a summary about cookie incompatibilities between specs and browsers
+ - [DOC] fix description of cookie "insert" and "indirect" modes
+ - [MEDIUM] http: fix space handling in the request cookie parser
+ - [MEDIUM] http: fix space handling in the response cookie parser
+ - [DOC] fix typo in the queue() definition (backend, not frontend)
+ - [BUG] deinit: unbind listeners before freeing them
+ - [BUG] stream_interface: only call si->release when both dirs are closed
+ - [MEDIUM] buffers: rework the functions to exchange between SI and buffers
+ - [DOC] fix typo in the avg_queue() and be_conn() definition (backend, not frontend)
+ - [MINOR] halog: add '-tc' to sort by termination codes
+ - [MINOR] halog: skip non-traffic logs for -st and -tc
+ - [BUG] stream_sock: cleanly disable the listener in case of resource shortage
+ - [BUILD] stream_sock: previous fix lacked the #include, causing a warning.
+ - [DOC] bind option is "defer-accept", not "defer_accept"
+ - [DOC] missing index entry for http-check send-state
+ - [DOC] tcp-request inspect-delay is for backends too
+ - [BUG] ebtree: string_equal_bits() could return garbage on identical strings
+ - [BUG] stream_sock: try to flush any extra pending request data after a POST
+ - [BUILD] proto_http: eliminate some build warnings with gcc-2.95
+ - [MEDIUM] make it possible to combine http-pretend-keepalive with httpclose
+ - [MEDIUM] tcp-request : don't wait for inspect-delay to expire when the buffer is full
+ - [MEDIUM] checks: add support for HTTP contents lookup
+ - [TESTS] add test-check-expect to test various http-check methods
+ - [MINOR] global: add "tune.chksize" to change the default check buffer size
+ - [MINOR] cookie: add options "maxidle" and "maxlife"
+ - [MEDIUM] cookie: support client cookies with some contents appended to their value
+ - [MINOR] http: make some room in the transaction flags to extend cookies
+ - [MINOR] cookie: add the expired (E) and old (O) flags for request cookies
+ - [MEDIUM] cookie: reassign set-cookie status flags to store more states
+ - [MINOR] add encode/decode function for 30-bit integers from/to base64
+ - [MEDIUM] cookie: check for maxidle and maxlife for incoming dated cookies
+ - [MEDIUM] cookie: set the date in the cookie if needed
+ - [DOC] document the cookie maxidle and maxlife parameters
+ - [BUG] checks: don't log backend down for all zero-weight servers
+ - [MEDIUM] checks: set server state to one state from failure when leaving maintenance
+ - [BUG] config: report correct keywords for "observe"
+ - [MINOR] checks: ensure that we can inherit binary checks from the defaults section
+ - [MINOR] acl: add the http_req_first match
+ - [DOC] fix typos about bind-process syntax
+ - [BUG] cookie: correctly unset default cookie parameters
+ - [MINOR] cookie: add support for the "preserve" option
+ - [BUG] ebtree: fix duplicate strings insertion
+ - [CONTRIB] halog: report per-url counts, errors and times
+ - [CONTRIB] halog: minor speed improvement in timer parser
+ - [MINOR] buffers: add a new request analyser flag for PROXY mode
+ - [MINOR] listener: add the "accept-proxy" option to the "bind" keyword
+ - [MINOR] standard: add read_uint() to parse a delimited unsigned integer
+ - [MINOR] standard: change arg type from const char* to char*
+ - [MINOR] frontend: add a new analyser to parse a proxied connection
+ - [MEDIUM] session: call the frontend_decode_proxy analyser on proxied connections
+ - [DOC] add the proxy protocol's specifications
+ - [DOC] document the 'accept-proxy' bind option
+ - [MINOR] cfgparse: report support of <path> for the 'bind' statements
+ - [DOC] add references to unix socket handling
+ - [MINOR] move MAXPATHLEN definition to compat.h
+ - [MEDIUM] unix sockets: cleanup the error reporting path
+ - [BUG] session: don't stop forwarding of data upon last packet
+ - [CLEANUP] accept: replace some inappropriate Alert() calls with send_log()
+ - [BUILD] peers: shut a printf format warning (key_size is a size_t)
+ - [BUG] accept: don't close twice upon error
+ - [OPTIM] session: don't recheck analysers when buffer flags have not changed
+ - [OPTIM] stream_sock: don't clear FDs that are already cleared
+ - [BUG] proto_tcp: potential bug on pattern fetch dst and dport
+
+2010/08/28 : 1.5-dev2
+ - [MINOR] startup: release unused structs after forking
+ - [MINOR] startup: don't wait for nothing when no old pid remains
+ - [CLEANUP] reference product branch 1.5
+ - [MEDIUM] signals: add support for registering functions and tasks
+ - [MEDIUM] signals: support redistribution of signal zero when stopping
+ - [BUG] http: don't set auto_close if more data are expected
+
+2010/08/25 : 1.5-dev1
+ - [BUG] stats: session rate limit gets garbaged in the stats
+ - [DOC] mention 'option http-server-close' effect in Tq section
+ - [DOC] summarize and highlight persistent connections behaviour
+ - [DOC] add configuration samples
+ - [BUG] http: dispatch and http_proxy modes were broken for a long time
+ - [BUG] http: the transaction must be initialized even in TCP mode
+ - [BUG] tcp: dropped connections must be counted as "denied" not "failed"
+ - [BUG] consistent hash: balance on all servers, not only 2 !
+ - [CONTRIB] halog: report per-server status codes, errors and response times
+ - [BUG] http: the transaction must be initialized even in TCP mode (part 2)
+ - [BUG] client: always ensure to zero rep->analysers
+ - [BUG] session: clear BF_READ_ATTACHED before next I/O
+ - [BUG] http: automatically close response if req is aborted
+ - [BUG] proxy: connection rate limiting was eating lots of CPU
+ - [BUG] http: report correct flags in case of client aborts during body
+ - [TESTS] refine non-regression tests and add 4 new tests
+ - [BUG] debug: wrong pointer was used to report a status line
+ - [BUG] debug: correctly report truncated messages
+ - [DOC] document the "dispatch" keyword
+ - [BUG] stick_table: fix possible memory leak in case of connection error
+ - [CLEANUP] acl: use 'L6' instead of 'L4' in ACL flags relying on contents
+ - [MINOR] accept: count the incoming connection earlier
+ - [CLEANUP] tcp: move some non tcp-specific layer6 processing out of proto_tcp
+ - [CLEANUP] client: move some ACLs away to their respective locations
+ - [CLEANUP] rename client -> frontend
+ - [MEDIUM] separate protocol-level accept() from the frontend's
+ - [MINOR] proxy: add a list to hold future layer 4 rules
+ - [MEDIUM] config: parse tcp layer4 rules (tcp-request accept/reject)
+ - [MEDIUM] tcp: check for pure layer4 rules immediately after accept()
+ - [OPTIM] frontend: tell the compiler that errors are unlikely to occur
+ - [MEDIUM] frontend: check for LI_O_TCP_RULES in the listener
+ - [MINOR] frontend: only check for monitor-net rules if LI_O_CHK_MONNET is set
+ - [CLEANUP] buffer->cto is not used anymore
+ - [MEDIUM] session: finish session establishment sequence with I/O handlers
+ - [MEDIUM] session: initialize server-side timeouts after connect()
+ - [MEDIUM] backend: initialize the server stream_interface upon connect()
+ - [MAJOR] frontend: don't initialize the server-side stream_int anymore
+ - [MEDIUM] session: move the conn_retries attribute to the stream interface
+ - [MEDIUM] session: don't assign conn_retries upon accept() anymore
+ - [MINOR] frontend: rely on the frontend and not the backend for INDEPSTR
+ - [MAJOR] frontend: reorder the session initialization upon accept
+ - [MINOR] proxy: add an accept() callback for the application layer
+ - [MAJOR] frontend: split accept() into frontend_accept() and session_accept()
+ - [MEDIUM] stats: rely on the standard session_accept() function
+ - [MINOR] buffer: refine the flags that may wake an analyser up.
+ - [MINOR] stream_sock: don't dereference a non-existing frontend
+ - [MINOR] session: differentiate between accepted connections and received connections
+ - [MEDIUM] frontend: count the incoming connection earlier
+ - [MINOR] frontend: count denied TCP requests separately
+ - [CLEANUP] stick_table: add/clarify some comments
+ - [BUILD] memory: add a few missing parenthesis to the pool management macros
+ - [MINOR] stick_table: add support for variable-sized data
+ - [CLEANUP] stick_table: rename some stksess struct members to avoid confusion
+ - [CLEANUP] stick_table: move pattern to key functions to stick_table.c
+ - [MEDIUM] stick_table: add room for extra data types
+ - [MINOR] stick_table: add support for "conn_cum" data type.
+ - [MEDIUM] stick_table: don't overwrite data when storing an entry
+ - [MINOR] config: initialize stick tables after all the parsing
+ - [MINOR] stick_table: provide functions to return stksess data from a type
+ - [MEDIUM] stick_table: move the server ID to a generic data type
+ - [MINOR] stick_table: enable it for frontends too
+ - [MINOR] stick_table: export the stick_table_key
+ - [MINOR] tcp: add per-source connection rate limiting
+ - [MEDIUM] stick_table: separate storage and update of session entries
+ - [MEDIUM] stick-tables: add a reference counter to each entry
+ - [MINOR] session: add a pointer to the tracked counters for the source
+ - [CLEANUP] proto_tcp: make the config parser a little bit more flexible
+ - [BUG] config: report the correct proxy type in tcp-request errors
+ - [MINOR] config: provide a function to quote args in a more friendly way
+ - [BUG] stick_table: the fix for the memory leak caused a regression
+ - [MEDIUM] backend: support servers on 0.0.0.0
+ - [BUG] stick-table: correctly refresh expiration timers
+ - [MEDIUM] stream-interface: add a ->release callback
+ - [MINOR] proxy: add a "parent" member to the structure
+ - [MEDIUM] session: make it possible to call an I/O handler on both SI
+ - [MINOR] tools: add a fast div64_32 function
+ - [MINOR] freq_ctr: add new types and functions for periods different from 1s
+ - [MINOR] errors: provide new status codes for config parsing functions
+ - [BUG] http: denied requests must not be counted as denied resps in listeners
+ - [MINOR] tools: add a get_std_op() function to parse operators
+ - [MEDIUM] acl: make use of get_std_op() to parse integer ranges
+ - [MAJOR] stream_sock: better wakeup conditions on read()
+ - [BUG] session: analysers must be checked when SI state changes
+ - [MINOR] http: reset analysers to listener's, not frontend's
+ - [MEDIUM] session: support "tcp-request content" rules in backends
+ - [BUILD] always match official tags when doing git-tar
+ - [MAJOR] stream_interface: fix the wakeup conditions for embedded iohandlers
+ - [MEDIUM] buffer: make buffer_feed* support writing non-contiguous chunks
+ - [MINOR] tcp: src_count acl does not have a permanent result
+ - [MAJOR] session: add track-counters to track counters related to the session
+ - [MINOR] stick-table: provide a table lookup function
+ - [MINOR] stick-table: use suffix "_cnt" for cumulated counts
+ - [MEDIUM] session: move counter ACL fetches from proto_tcp
+ - [MEDIUM] session: add concurrent connections counter
+ - [MEDIUM] session: add data in and out volume counters
+ - [MINOR] session: add the trk_conn_cnt ACL keyword to track connection counts
+ - [MEDIUM] session-counters: automatically update tracked connection count
+ - [MINOR] session: add the trk_conn_cur ACL keyword to track concurrent connection
+ - [MINOR] session: add trk_kbytes_* ACL keywords to track data size
+ - [MEDIUM] session: add a counter on the cumulated number of sessions
+ - [MINOR] config: support a comma-separated list of store data types in stick-table
+ - [MEDIUM] stick-tables: add support for arguments to data_types
+ - [MEDIUM] stick-tables: add stored data argument type checking
+ - [MEDIUM] session counters: add conn_rate and sess_rate counters
+ - [MEDIUM] session counters: add bytes_in_rate and bytes_out_rate counters
+ - [MINOR] stktable: add a stktable_update_key() function
+ - [MINOR] session-counters: add a general purpose counter (gpc0)
+ - [MEDIUM] session-counters: add HTTP req/err tracking
+ - [MEDIUM] stats: add "show table [<name>]" to dump a stick-table
+ - [MEDIUM] stats: add "clear table <name> key <value>" to clear table entries
+ - [CLEANUP] stick-table: declare stktable_data_types as extern
+ - [MEDIUM] stick-table: make use of generic types for stored data
+ - [MINOR] stats: correctly report errors on "show table" and "clear table"
+ - [MEDIUM] stats: add the ability to dump table entries matching criteria
+ - [DOC] configuration: document all the new tracked counters
+ - [DOC] stats: document "show table" and "clear table"
+ - [MAJOR] session-counters: split FE and BE track counters
+ - [MEDIUM] tcp: accept the "track-counters" in "tcp-request content" rules
+ - [MEDIUM] session counters: automatically remove expired entries.
+ - [MEDIUM] config: replace 'tcp-request <action>' with "tcp-request connection"
+ - [MEDIUM] session-counters: make it possible to count connections from frontend
+ - [MINOR] session-counters: use "track-sc{1,2}" instead of "track-{fe,be}-counters"
+ - [MEDIUM] session-counters: correctly unbind the counters tracked by the backend
+ - [CLEANUP] stats: use stksess_kill() to remove table entries
+ - [DOC] update the references to session counters and to tcp-request connection
+ - [DOC] cleanup: split a few long lines
+ - [MEDIUM] http: forward client's close when abortonclose is set
+ - [BUG] queue: don't dequeue proxy-global requests on disabled servers
+ - [BUG] stats: global stats timeout may be specified before stats socket.
+ - [BUG] conf: add tcp-request content rules to the correct list
+
+2010/05/23 : 1.5-dev0
+ - exact copy of 1.4.6
+
+2010/05/16 : 1.4.6
+ - [BUILD] ebtree: update to v6.0.1 to remove references to dprintf()
+ - [CLEANUP] acl: make use of eb_is_empty() instead of open coding the tree's emptiness test
+ - [MINOR] acl: add srv_is_up() to check that a specific server is up or not
+ - [DOC] add a few precisions about the use of RDP cookies
+
+2010/05/13 : 1.4.5
+ - [DOC] report minimum kernel version for tproxy in the Makefile
+ - [MINOR] add the "ignore-persist" option to conditionally ignore persistence
+ - [DOC] add the "ignore-persist" option to conditionally ignore persistence
+ - [DOC] fix ignore-persist/force-persist documentation
+ - [BUG] cttproxy: socket fd leakage in check_cttproxy_version
+ - [DOC] doc/configuration.txt: fix typos
+ - [MINOR] option http-pretend-keepalive is both for FEs and BEs
+ - [MINOR] fix possible crash in debug mode with invalid responses
+ - [MINOR] halog: add support for statistics on status codes
+ - [OPTIM] halog: use a faster zero test in fgets()
+ - [OPTIM] halog: minor speedup by using unlikely()
+ - [OPTIM] halog: speed up fgets2-64 by about 10%
+ - [DOC] refresh the README file and merge the CONTRIB file into it
+ - [MINOR] acl: support loading values from files
+ - [MEDIUM] ebtree: upgrade to version 6.0
+ - [MINOR] acl trees: add flags and union members to store values in trees
+ - [MEDIUM] acl: add ability to insert patterns in trees
+ - [MEDIUM] acl: add tree-based lookups of exact strings
+ - [MEDIUM] acl: add tree-based lookups of networks
+ - [MINOR] acl: ignore empty lines and comments in pattern files
+ - [MINOR] stick-tables: add support for "stick on hdr"
+
+2010/04/07 : 1.4.4
+ - [BUG] appsession should match the whole cookie name
+ - [CLEANUP] proxy: move PR_O_SSL3_CHK to options2 to release one flag
+ - [MEDIUM] backend: move the transparent proxy address selection to backend
+ - [MINOR] add very fast IP parsing functions
+ - [MINOR] add new tproxy flags for dynamic source address binding
+ - [MEDIUM] add ability to connect to a server from an IP found in a header
+ - [BUILD] config: last patch breaks build without CONFIG_HAP_LINUX_TPROXY
+ - [MINOR] http: make it possible to pretend keep-alive when doing close
+ - [MINOR] config: report "default-server" instead of "(null)" in error messages
+
+2010/03/30 : 1.4.3
+ - [CLEANUP] stats: remove printf format warning in stats_dump_full_sess_to_buffer()
+ - [MEDIUM] session: better fix for connection to servers with closed input
+ - [DOC] indicate in the doc how to bind to port ranges
+ - [BUG] backend: L7 hashing must not be performed on incomplete requests
+ - [TESTS] add a simple program to test connection resets
+ - [MINOR] cli: "show errors" should display "backend <NONE>" when backend was not used
+ - [MINOR] config: emit warnings when HTTP-only options are used in TCP mode
+ - [MINOR] config: allow "slowstart 0s"
+ - [BUILD] 'make tags' did not consider files ending in '.c'
+ - [MINOR] checks: add the ability to disable a server in the config
+
+2010/03/17 : 1.4.2
+ - [CLEANUP] product branch update
+ - [DOC] Some more documentation cleanups
+ - [BUG] clf logs segfault when capturing a non-existent header
+ - [OPTIM] config: only allocate check buffer when checks are enabled
+ - [MEDIUM] checks: support multi-packet health check responses
+ - [CLEANUP] session: remove duplicate test
+ - [BUG] http: don't wait for response data to leave buffer if client has left
+ - [MINOR] proto_uxst: set accept_date upon accept() to the wall clock time
+ - [MINOR] stats: don't send empty lines in "show errors"
+ - [MINOR] stats: make the data dump function reusable for other purposes
+ - [MINOR] stats socket: add show sess <id> to dump details about a session
+ - [BUG] stats: connection reset counters must be plain ascii, not HTML
+ - [BUG] url_param hash may return a down server
+ - [MINOR] force null-termination of hostname
+ - [MEDIUM] connect to servers even when the input has already been closed
+ - [BUG] don't merge anonymous ACLs !
+ - [BUG] config: fix endless loop when parsing "on-error"
+ - [MINOR] http: don't mark a server as failed when it returns 501/505
+ - [OPTIM] checks: try to detect the end of response without polling again
+ - [BUG] checks: don't report an error when recv() returns an error after data
+ - [BUG] checks: don't abort when second poll returns an error
+ - [MINOR] checks: make shutdown() silently fail
+ - [BUG] http: fix truncated responses on chunk encoding when size divides buffer size
+ - [BUG] init: unconditionally catch SIGPIPE
+ - [BUG] checks: don't wait for a close to start parsing the response
+
+2010/03/04 : 1.4.1
+ - [BUG] Clear-cookie path issue
+ - [DOC] fix typo on stickiness rules
+ - [BUILD] fix BSD and OSX makefiles for missing files
+ - [BUILD] includes order breaks OpenBSD build
+ - [BUILD] fix some build warnings on Solaris with is* macros
+ - [BUG] logs: don't report "last data" when we have just closed after an error
+ - [BUG] logs: don't report "proxy request" when server closes early
+ - [BUILD] fix platform-dependant build issues related to crypt()
+ - [STATS] count transfer aborts caused by client and by server
+ - [STATS] frontend requests were not accounted for failed requests
+ - [MINOR] report total number of processed connections when stopping a proxy
+ - [DOC] be more clear about the limitation to one single monitor-net entry
+
+2010/02/26 : 1.4.0
+ - [MINOR] stats: report maint state for tracking servers too
+ - [DOC] fix summary to add pattern extraction
+ - [DOC] Documentation cleanups
+ - [BUG] cfgparse memory leak and missing free calls in deinit()
+ - [BUG] pxid/puid/luid: don't shift IDs when some of them are forced
+ - [EXAMPLES] add auth.cfg
+ - [BUG] uri_auth: ST_SHLGNDS should be 0x00000008 not 0x0000008
+ - [BUG] uri_auth: do not attempt to convert uri_auth -> http-request more than once
+ - [BUILD] auth: don't use unnamed unions
+ - [BUG] config: report unresolvable host names as errors
+ - [BUILD] fix build breakage with DEBUG_FULL
+ - [DOC] fix a typo about timeout check and clarify the explanation.
+ - [MEDIUM] http: don't use trash to realign large buffers
+ - [STATS] report HTTP requests (total and rate) in frontends
+ - [STATS] separate frontend and backend HTTP stats
+ - [MEDIUM] http: revert to use a swap buffer for realignment
+ - [MINOR] stats: report the request rate in frontends as cell titles
+ - [MINOR] stats: mark areas with an underline when tooltips are available
+ - [DOC] reorder some entries to maintain the alphabetical order
+ - [DOC] cleanup of the keyword matrix
+
+2010/02/02 : 1.4-rc1
+ - [MEDIUM] add a maintenance mode to servers
+ - [MINOR] http-auth: last fix was wrong
+ - [CONTRIB] add base64rev-gen.c that was used to generate the base64rev table.
+ - [MINOR] Base64 decode
+ - [MINOR] generic auth support with groups and encrypted passwords
+ - [MINOR] add ACL_TEST_F_NULL_MATCH
+ - [MINOR] http-request: allow/deny/auth support for frontend/backend/listen
+ - [MINOR] acl: add http_auth and http_auth_group
+ - [MAJOR] use the new auth framework for http stats
+ - [DOC] add info about userlists, http-request and http_auth/http_auth_group acls
+ - [STATS] make it possible to change a CLI connection timeout
+ - [BUG] patterns: copy-paste typo in type conversion arguments
+ - [MINOR] pattern: make the converter more flexible by supporting void* and int args
+ - [MINOR] standard: str2mask: string to netmask converter
+ - [MINOR] pattern: add support for argument parsers for converters
+ - [MINOR] pattern: add the "ipmask()" converting function
+ - [MINOR] config: off-by-one in "stick-table" after list of converters
+ - [CLEANUP] acl, patterns: make use of my_strndup() instead of malloc+memcpy
+ - [BUG] restore accidentally removed line in last patch !
+ - [MINOR] checks: make the HTTP check code add the CRLF itself
+ - [MINOR] checks: add the server's status in the checks
+ - [BUILD] halog: make without arch-specific optimizations
+ - [BUG] halog: fix segfault in case of empty log in PCT mode (cherry picked from commit fe362fe4762151d209b9656639ee1651bc2b329d)
+ - [MINOR] http: disable keep-alive when process is going down
+ - [MINOR] acl: add build_acl_cond() to make it easier to add ACLs in config
+ - [CLEANUP] config: use build_acl_cond() instead of parse_acl_cond()
+ - [CLEANUP] config: use warnif_cond_requires_resp() to check for bad ACLs
+ - [MINOR] prepare req_*/rsp_* to receive a condition
+ - [CLEANUP] config: specify correct const char types to warnif_* functions
+ - [MEDIUM] config: factor out the parsing of 20 req*/rsp* keywords
+ - [MEDIUM] http: make the request filter loop check for optional conditions
+ - [MEDIUM] http: add support for conditional request filter execution
+ - [DOC] add some build info about the AIX platform (cherry picked from commit e41914c77edbc40aebf827b37542d37d758e371e)
+ - [MEDIUM] http: add support for conditional request header addition
+ - [MEDIUM] http: add support for conditional response header rewriting
+ - [DOC] add some missing ACLs about response header matching
+ - [MEDIUM] http: add support for proxy authentication
+ - [MINOR] http-auth: make the 'unless' keyword work as expected
+ - [CLEANUP] config: use build_acl_cond() to simplify http-request ACL parsing
+ - [MEDIUM] add support for anonymous ACLs
+ - [MEDIUM] http: switch to tunnel mode after status 101 responses
+ - [MEDIUM] http: stricter processing of the CONNECT method
+ - [BUG] config: reset check request to avoid double free when switching to ssl/sql
+ - [MINOR] config: fix too large ssl-hello-check message.
+ - [BUG] fix error response in case of server error
+
+2010/01/25 : 1.4-dev8
+ - [CLEANUP] Keep in sync "defaults" support between documentation and code
+ - [MEDIUM] http: add support for Proxy-Connection header
+ - [CRITICAL] buffers: buffer_insert_line2 must not change the ->w entry
+ - [MINOR] http: remove a copy-paste typo in transaction cleaning
+ - [BUG] http: trim any excess buffer data when recycling a connection
+
+2010/01/25 : 1.4-dev7
+ - [BUG] appsession: possible memory leak in case of out of memory condition
+ - [MINOR] config: don't accept 'appsession' in defaults section
+ - [MINOR] Add function to parse a size in configuration
+ - [MEDIUM] Add stick table (persistence) management functions and types
+ - [MEDIUM] Add pattern fetch management types and functions
+ - [MEDIUM] Add src dst and dport pattern fetches.
+ - [MEDIUM] Add stick table configuration and init.
+ - [MEDIUM] Add stick and store rules analysers.
+ - [MINOR] add option "mysql-check" to use MySQL health checks
+ - [BUG] health checks: fix requeued message
+ - [OPTIM] remove SSP_O_VIA and SSP_O_STATUS
+ - [BUG] checks: fix newline termination
+ - [MINOR] acl: add fe_id/so_id to match frontend's and socket's id
+ - [BUG] appsession's sessid must be reset at end of transaction
+ - [BUILD] appsession did not build anymore under gcc-2.95
+ - [BUG] server redirection used an uninitialized string.
+ - [MEDIUM] http: fix handling of message pointers
+ - [MINOR] http: fix double slash prefix with server redirect
+ - [MINOR] http redirect: add the ability to append a '/' to the URL
+ - [BUG] stream_interface: fix retnclose and remove cond_close
+ - [MINOR] http redirect: don't explicitly state keep-alive on 1.1
+ - [MINOR] http: move appsession 'sessid' from session to http_txn
+ - [OPTIM] reorder http_txn to optimize cache lines placement
+ - [MINOR] http: differentiate waiting for new request and waiting for a complete request
+ - [MINOR] http: add a separate "http-keep-alive" timeout
+ - [MINOR] config: remove undocumented and buggy 'timeout appsession'
+ - [DOC] fix various too large lines
+ - [DOC] remove several trailing spaces
+ - [DOC] add the doc about stickiness
+ - [BUILD] remove a warning in standard.h on AIX
+ - [BUG] checks: chars are unsigned on AIX, check was always true
+ - [CLEANUP] stream_sock: MSG_NOSIGNAL is only for send(), not recv()
+ - [BUG] check: we must not check for error before reading a response
+ - [BUG] buffers: remove remains of wrong obsolete length check
+ - [OPTIM] stream_sock: don't shutdown(write) when the socket is in error
+ - [BUG] http: don't count req errors on client resets or t/o during keep-alive
+ - [MEDIUM] http: don't switch to tunnel mode upon close
+ - [DOC] add documentation about connection header processing
+ - [MINOR] http: add http_remove_header2() to remove a header value.
+ - [MINOR] tools: add a "word_match()" function to match words and ignore spaces
+ - [MAJOR] http: rework request Connection header handling
+ - [MAJOR] http: rework response Connection header handling
+ - [MINOR] add the ability to force kernel socket buffer size.
+ - [BUG] http_server_error() must not purge a previous pending response
+ - [OPTIM] http: don't delay response if next request is incomplete
+ - [MINOR] add the "force-persist" statement to force persistence on down servers
+ - [MINOR] http: logs must report persistent connections to down servers
+ - [BUG] buffer_replace2 must never change the ->w entry
+
+2010/01/08 : 1.4-dev6
+ - [BUILD] warning in stream_interface.h
+ - [BUILD] warning ultoa_r returns char *
+ - [MINOR] hana: only report stats if it is enabled
+ - [MINOR] stats: add "a link" & "a href" for sockets
+ - [MINOR] stats: add show-legends to report additional information
+ - [MEDIUM] default-server support
+ - [BUG] add 'observer', 'on-error', 'error-limit' to supported options list
+ - [MINOR] stats: add href to tracked server
+ - [BUG] stats: show UP/DOWN status also in tracking servers
+ - [DOC] Restore ability to search a keyword at the beginning of a line
+ - [BUG] stats: cookie should be reported under backend not under proxy
+ - [BUG] cfgparser/stats: fix error message
+ - [BUG] http: disable auto-closing during chunk analysis
+ - [BUG] http: fix hopefully last closing issue on data forwarding
+ - [DEBUG] add an http_silent_debug function to debug HTTP states
+ - [MAJOR] http: fix again the forward analysers
+ - [BUG] http_process_res_common() must not skip the forward analyser
+ - [BUG] http: some possible missed close remain in the forward chain
+ - [BUG] http: redirect needed to be updated after recent changes
+ - [BUG] http: don't set no-linger on response in case of forced close
+ - [MEDIUM] http: restore the original behaviour of option httpclose
+ - [TESTS] add a file to test various connection modes
+ - [BUG] http: check options before the connection header
+ - [MAJOR] session: fix the order by which the analysers are run
+ - [MEDIUM] session: also consider request analysers added during response
+ - [MEDIUM] http: make safer use of the DONT_READ and AUTO_CLOSE flags
+ - [BUG] http: memory leak with captures when using keep-alive
+ - [BUG] http: fix for capture memory leak was incorrect
+ - [MINOR] http redirect: use proper call to return last response
+ - [MEDIUM] http: wait for some flush of the response buffer before a new request
+ - [MEDIUM] session: limit the number of analyser loops
+
+2010/01/03 : 1.4-dev5
+ - [MINOR] server tracking: don't care about the tracked server's mode
+ - [MEDIUM] appsession: add "len", "prefix" and "mode" options
+ - [MEDIUM] appsession: add the "request-learn" option
+ - [BUG] Configuration parser bug when escaping characters
+ - [MINOR] CSS & HTML fun
+ - [MINOR] Collect & provide http response codes received from servers
+ - [BUG] Fix silly typo: hspr_other -> hrsp_other
+ - [MINOR] Add "a name" to stats page
+ - [MINOR] add additional "a href"s to stats page
+ - [MINOR] Collect & provide http response codes for frontends, fix backends
+ - [DOC] some small spell fixes and unifications
+ - [MEDIUM] Decrease server health based on http responses / events, version 3
+ - [BUG] format '%d' expects type 'int', but argument 5 has type 'long int'
+ - [BUG] config: fix erroneous check on cookie domain names, again
+ - [BUG] Healthchecks: get a proper error code if connection cannot be completed immediately
+ - [DOC] trivial fix for man page
+ - [MINOR] config: report all supported options for the "bind" keyword
+ - [MINOR] tcp: add support for the defer_accept bind option
+ - [MINOR] unix socket: report the socket path in case of bind error
+ - [CONTRIB] halog: support searching by response time
+ - [DOC] add a reminder about obsolete documents
+ - [DOC] point to 1.4 doc, not 1.3
+ - [DOC] option tcp-smart-connect was missing from index
+ - [MINOR] http: detect connection: close earlier
+ - [CLEANUP] sepoll: clean up the fd_clr/fd_set functions
+ - [OPTIM] move some rarely used fields out of fdtab
+ - [MEDIUM] fd: merge fd_list into fdtab
+ - [MAJOR] buffer: flag BF_DONT_READ to disable reads when not required
+ - [MINOR] http: add new transaction flags for keep-alive and content-length
+ - [MEDIUM] http request: parse connection, content-length and transfer-encoding
+ - [MINOR] http request: update the TX_SRV_CONN_KA flag on rewrite
+ - [MINOR] http request: simplify the test of no-data
+ - [MEDIUM] http request: simplify POST length detection
+ - [MEDIUM] http request: make use of pre-parsed transfer-encoding header
+ - [MAJOR] http: create the analyser which waits for a response
+ - [MINOR] http: pre-set the persistent flags in the transaction
+ - [MEDIUM] http response: check body length and set transaction flags
+ - [MINOR] http response: update the TX_CLI_CONN_KA flag on rewrite
+ - [MINOR] http: remove the last call to stream_int_return
+ - [IMPORT] import ebtree v5.0 into directory ebtree/
+ - [MEDIUM] build: switch ebtree users to use new ebtree version
+ - [CLEANUP] ebtree: remove old unused files
+ - [BUG] definitely fix regparm issues between haproxy core and ebtree
+ - [CLEANUP] ebtree: cast to char * to get rid of gcc warning
+ - [BUILD] missing #ifndef in ebmbtree.h
+ - [BUILD] missing #ifndef in ebsttree.h
+ - [MINOR] tools: add hex2i() function to convert hex char to int
+ - [MINOR] http: create new MSG_BODY sub-states
+ - [BUG] stream_sock: BUF_INFINITE_FORWARD broke splice on 64-bit platforms
+ - [DOC] option is "defer-accept", not "defer_accept"
+ - [MINOR] http: keep pointer to beginning of data
+ - [BUG] x-original-to: name was not set in default instance
+ - [MINOR] http: detect tunnel mode and set it in the session
+ - [BUG] config: fix error message when config file is not found
+ - [BUG] config: fix wrong handling of too large argument count
+ - [BUG] config: disable 'option httplog' on TCP proxies
+ - [BUG] config: fix erroneous check on cookie domain names
+ - [BUG] config: cookie domain was ignored in defaults sections
+ - [MINOR] config: support passing multiple "domain" statements to cookies
+ - [MINOR] ebtree: add functions to lookup non-null terminated strings
+ - [MINOR] config: don't report error on all subsequent files on failure
+ - [BUG] second fix for the printf format warning
+ - [BUG] check_post: limit analysis to the buffer length
+ - [MEDIUM] http: process request body in a specific analyser
+ - [MEDIUM] backend: remove HTTP POST parsing from get_server_ph_post()
+ - [MAJOR] http: completely process the "connection" header
+ - [MINOR] http: only consider chunk encoding with HTTP/1.1
+ - [MAJOR] buffers: automatically compute the maximum buffer length
+ - [MINOR] http: move the http transaction init/cleanup code to proto_http
+ - [MINOR] http: move 1xx handling earlier to eliminate a lot of ifs
+ - [MINOR] http: introduce a new synchronisation state : HTTP_MSG_DONE
+ - [MEDIUM] http: rework chunk-size parser
+ - [MEDIUM] http: add a new transaction flag indicating if we know the transfer length
+ - [MINOR] buffers: add buffer_ignore() to skip some bytes
+ - [BUG] http: offsets are relative to the buffer, not to ->som
+ - [MEDIUM] http: automatically re-align request buffer
+ - [BUG] http: body parsing must consider the start of message
+ - [MINOR] new function stream_int_cond_close()
+ - [MAJOR] http: implement body parser
+ - [BUG] http: typos on several unlikely() around header insertion
+ - [BUG] stream_sock: wrong max computation on recv
+ - [MEDIUM] http: rework the buffer alignment logic
+ - [BUG] buffers: wrong size calculation for displaced data
+ - [MINOR] stream_sock: prepare for closing when all pending data are sent
+ - [MEDIUM] http: add two more states for the closing period
+ - [MEDIUM] http: properly handle "option forceclose"
+ - [MINOR] stream_sock: add SI_FL_NOLINGER for faster close
+ - [MEDIUM] http: make forceclose use SI_FL_NOLINGER
+ - [MEDIUM] session: set SI_FL_NOLINGER when aborting on write timeouts
+ - [MEDIUM] http: add some SI_FL_NOLINGER around server errors
+ - [MINOR] config: option forceclose is valid in frontends too
+ - [BUILD] halog: insufficient include path in makefile
+ - [MEDIUM] http: make the analyser not rely on msg being initialized anymore
+ - [MEDIUM] http: make the parsers able to wait for a buffer flush
+ - [MAJOR] http: add support for option http-server-close
+ - [BUG] http: ensure we abort data transfer on write error
+ - [BUG] last fix was overzealous and disabled server-close
+ - [BUG] http: fix erroneous trailers size computation
+ - [MINOR] stream_sock: enable MSG_MORE when forwarding finite amount of data
+ - [OPTIM] http: set MSG_MORE on response when a pipelined request is pending
+ - [BUG] http: redirects were broken by chunk changes
+ - [BUG] http: the request URI pointer is relative to the buffer
+ - [OPTIM] http: don't immediately enable reading on request
+ - [MINOR] http: move redirect messages to HTTP/1.1 with a content-length
+ - [BUG] http: take care of errors, timeouts and aborts during the data phase
+ - [MINOR] http: don't wait for sending requests to the server
+ - [MINOR] http: make the conditional redirect support keep-alive
+ - [BUG] http: fix cookie parser to support spaces and commas in values
+ - [MINOR] config: some options were missing for "redirect"
+ - [MINOR] redirect: add support for unconditional rules
+ - [MINOR] config: centralize proxy struct initialization
+ - [MEDIUM] config: remove the limitation of 10 reqadd/rspadd statements
+ - [MEDIUM] config: remove the limitation of 10 config files
+ - [CLEANUP] http: remove a remaining impossible condition
+ - [OPTIM] http: optimize a bit the construct of the forward loops
+
+2009/10/12 : 1.4-dev4
+ - [DOC] add missing rate_lim and rate_max
+ - [MAJOR] struct chunk rework
+ - [MEDIUM] Health check reporting code rework + health logging, v3
+ - [BUG] check if rise/fall has an argument and it is > 0
+ - [MINOR] health checks logging unification
+ - [MINOR] add "description", "node" and "show-node"/"show-desc", remove "node-name", v2
+ - [MINOR] Allow dots in show-node & add "white-space: nowrap" in th.pxname.
+ - [DOC] Add information about http://haproxy.1wt.eu/contrib.html
+ - [MINOR] Introduce include/types/counters.h
+ - [CLEANUP] Move counters to dedicated structures
+ - [MINOR] Add "clear counters" to clear statistics counters
+ - [MEDIUM] Collect & provide separate statistics for sockets, v2
+ - [BUG] Fix NULL pointer dereference in stats_check_uri_auth(), v2
+ - [MINOR] acl: don't report valid acls as potential mistakes
+ - [MINOR] Add cut_crlf(), ltrim(), rtrim() and alltrim()
+ - [MINOR] Add chunk_htmlencode and chunk_asciiencode
+ - [MINOR] Capture & display more data from health checks, v2
+ - [BUG] task.c: don't assign last_timer to node-less entries
+ - [BUG] http stats: large outputs sometimes got some parts chopped off
+ - [MINOR] backend: export some functions to recount servers
+ - [MINOR] backend: uninline some LB functions
+ - [MINOR] include time.h from freq_ctr.h as it uses "now".
+ - [CLEANUP] backend: move LB algos to individual files
+ - [MINOR] lb_map: reorder code in order to ease integration of new hash functions
+ - [CLEANUP] proxy: move last lb-specific bits to their respective files
+ - [MINOR] backend: separate declarations of LB algos from their lookup method
+ - [MINOR] backend: reorganize the LB algorithm selection
+ - [MEDIUM] backend: introduce the "static-rr" LB algorithm
+ - [MINOR] report list of supported pollers with -vv
+ - [DOC] log-health-checks is an option, not a directive
+ - [MEDIUM] new option "independant-streams" to stop updating read timeout on writes
+ - [BUG] stats: don't call buffer_shutw(), but ->shutw() instead
+ - [MINOR] stats: strip CR and LF from the input command line
+ - [BUG] don't refresh timeouts late after detected activity
+ - [MINOR] stats_dump_errors_to_buffer: use buffer_feed_chunk()
+ - [MINOR] stats_dump_sess_to_buffer: use buffer_feed_chunk()
+ - [MINOR] stats: make stats_dump_raw_to_buffer() use buffer_feed_chunk
+ - [MEDIUM] stats: don't use s->ana_state anymore
+ - [MINOR] remove now obsolete ana_state from the session struct
+ - [MEDIUM] stats: make HTTP stats use an I/O handler
+ - [MEDIUM] stream_int: adjust WAIT_ROOM handling
+ - [BUG] config: look for ID conflicts in all sockets, not only last ones.
+ - [MINOR] config: reference file and line with any listener/proxy/server declaration
+ - [MINOR] config: report places of duplicate names or IDs
+ - [MINOR] config: add pointer to file name in block/redirect/use_backend/monitor rules
+ - [MINOR] tools: add a new get_next_id() function
+ - [MEDIUM] config: automatically find unused IDs for proxies, servers and listeners
+ - [OPTIM] counters: move some max numbers to the counters struct
+ - [BUG] counters: fix segfault on missing counters for a listener
+ - [MEDIUM] backend: implement consistent hashing variation
+ - [MINOR] acl: add fe_conn, be_conn, queue, avg_queue
+ - [MINOR] stats: use 'clear counters all' to clear all values
+ - [MEDIUM] add access restrictions to the stats socket
+ - [MINOR] buffers: add buffer_feed2() and make buffer_feed() measure string length
+ - [MINOR] proxy: provide function to retrieve backend/server pointers
+ - [MINOR] add the "initial weight" to the server struct.
+ - [MEDIUM] stats: add the "get weight" command to report a server's weight
+ - [MEDIUM] stats: add the "set weight" command
+ - [BUILD] add a 'make tags' target
+ - [MINOR] stats: add support for numeric IDs in set weight/get weight
+ - [MINOR] stats: use a dedicated state to output static data
+ - [OPTIM] stats: check free space before trying to print
+
+2009/09/24 : 1.4-dev3
+ - [BUILD] compilation of haproxy-1.4-dev2 on FreeBSD
+ - [MEDIUM] Collect & show information about last health check, v3
+ - [MINOR] export the hostname variable so that all the code can access it
+ - [MINOR] stats: add a new node-name setting
+ - [MEDIUM] remove old experimental tcpsplice option
+ - [BUILD] fix build for systems without SOL_TCP
+ - [MEDIUM] move connection establishment from backend to the SI.
+ - [MEDIUM] make the global stats socket part of a frontend
+ - [MEDIUM] session: account per-listener connections
+ - [MINOR] session: switch to established state if no connect function
+ - [MEDIUM] make the unix stats sockets use the generic session handler
+ - [CLEANUP] unix: remove uxst_process_session()
+ - [CLEANUP] move remaining stats sockets code to dumpstats
+ - [MINOR] move the initial task's nice value to the listener
+ - [MINOR] cleanup set_session_backend by using pre-computed analysers
+ - [MINOR] set s->srv_error according to the analysers
+ - [MEDIUM] set rep->analysers from fe and be analysers
+ - [MEDIUM] replace BUFSIZE with buf->size in computations
+ - [MEDIUM] make it possible to change the buffer size in the configuration
+ - [MEDIUM] report error on buffer writes larger than buffer size
+ - [MEDIUM] stream_interface: add and use ->update function to resync
+ - [CLEANUP] remove ifdef MSG_NOSIGNAL and define it instead
+ - [MEDIUM] remove TCP_CORK and make use of MSG_MORE instead
+ - [BUG] tarpit did not work anymore
+ - [MINOR] acl: add support for hdr_ip to match IP addresses in headers
+ - [MAJOR] buffers: fix misuse of the BF_SHUTW_NOW flag
+ - [MINOR] buffers: provide more functions to handle buffer data
+ - [MEDIUM] buffers: provide new buffer_feed*() function
+ - [MINOR] buffers: add peekchar and peekline functions for stream interfaces
+ - [MINOR] buffers: provide buffer_si_putchar() to send a char from a stream interface
+ - [BUG] buffer_forward() would not correctly consider data already scheduled
+ - [MINOR] buffers: add buffer_cut_tail() to cut only unsent data
+ - [MEDIUM] stream_interface: make use of buffer_cut_tail() to report errors
+ - [MAJOR] http: add support for HTTP 1xx informational responses
+ - [MINOR] buffers: inline buffer_si_putchar()
+ - [MAJOR] buffers: split BF_WRITE_ENA into BF_AUTO_CONNECT and BF_AUTO_CLOSE
+ - [MAJOR] buffers: fix the BF_EMPTY flag's meaning
+ - [BUG] stream_interface: SI_ST_CLO must have buffers SHUT
+ - [MINOR] stream_sock: don't set SI_FL_WAIT_DATA if BF_SHUTW_NOW is set
+ - [MEDIUM] add support for infinite forwarding
+ - [BUILD] stream_interface: fix conflicting declaration
+ - [BUG] buffers: buffer_forward() must not always clear BF_OUT_EMPTY
+ - [BUG] variable buffer size ignored at initialization time
+ - [MINOR] ensure that buffer_feed() and buffer_skip() set BF_*_PARTIAL
+ - [BUG] fix buffer_skip() and buffer_si_getline() to correctly handle wrap-arounds
+ - [MINOR] stream_interface: add SI_FL_DONT_WAKE flag
+ - [MINOR] stream_interface: add iohandler callback
+ - [MINOR] stream_interface: add functions to support running as internal/external tasks
+ - [MEDIUM] session: call iohandler for embedded tasks (applets)
+ - [MINOR] add a ->private member to the stream_interface
+ - [MEDIUM] stats: prepare the connection for closing before dumping
+ - [MEDIUM] stats: replace the stats socket analyser with an SI applet
+
+2009/08/09 : 1.4-dev2
+ - [BUG] task: fix possible crash when some timeouts are not configured
+ - [BUG] log: option tcplog would log to global if no logger was defined
+
+2009/07/29 : 1.4-dev1
+ - [MINOR] acl: add support for matching of RDP cookies
+ - [MEDIUM] add support for RDP cookie load-balancing
+ - [MEDIUM] add support for RDP cookie persistence
+ - [MINOR] add a new CLF log format
+ - [MINOR] startup: don't imply -q with -D
+ - [BUG] ensure that we correctly re-start old process in case of error
+ - [MEDIUM] add support for binding to source port ranges during connect
+ - [MINOR] config: track "no option"/"option" changes
+ - [MINOR] config: support resetting options to default values
+ - [MEDIUM] implement option tcp-smart-accept at the frontend
+ - [MEDIUM] stream_sock: implement tcp-cork for use during shutdowns on Linux
+ - [MEDIUM] implement tcp-smart-connect option at the backend
+ - [MEDIUM] add support for TCP MSS adjustment for listeners
+ - [MEDIUM] support setting a server weight to zero
+ - [MINOR] make DEFAULT_MAXCONN user-configurable at build time
+ - [MAJOR] session: don't clear buffer status flags anymore
+ - [MAJOR] session: only check for timeouts when they have just occurred.
+ - [MAJOR] session: simplify buffer error handling
+ - [MEDIUM] config: split parser and checker in two functions
+ - [MEDIUM] config: support loading multiple configuration files
+ - [MEDIUM] stream_sock: don't close prematurely when nolinger is set
+ - [MEDIUM] session: rework buffer analysis to permit permanent analysers
+ - [MEDIUM] splice: set the capability on each stream_interface
+ - [BUG] http: redirect rules were processed too early
+ - [CLEANUP] remove unused DEBUG_PARSE_NO_SPEEDUP define
+ - [MEDIUM] http: split request waiter from request processor
+ - [MEDIUM] session: tell analysers what bit they were called for
+ - [MAJOR] http: complete splitting of the remaining stages
+ - [MINOR] report in the proxies the requirements for ACLs
+ - [MINOR] http: rely on proxy->acl_requires to allocate hdr_idx
+ - [MINOR] acl: add HTTP protocol detection (req_proto_http)
+ - [MINOR] prepare callers of session_set_backend to handle errors
+ - [BUG] default ACLs did not properly set the ->requires flag
+ - [MEDIUM] allow a TCP frontend to switch to an HTTP backend
+ - [MINOR] ensure we can jump from switching rules to http without data
+ - [MINOR] http: take http request timeout from the backend
+ - [MINOR] allow TCP inspection rules to make use of HTTP ACLs
+ - [BUILD] report commit date and not author's date as build date
+ - [MINOR] acl: don't complain anymore when using L7 acls in TCP
+ - [BUG] stream_sock: always shutdown(SHUT_WR) before closing
+ - [BUG] stream_sock: don't stop reading when the poller reports an error
+ - [BUG] config: tcp-request content only accepts "if" or "unless"
+ - [BUG] task: fix possible timer drift after update
+ - [MINOR] apply tcp-smart-connect option for the checks too
+ - [MINOR] stats: better displaying in MSIE
+ - [MINOR] config: improve error reporting in global section
+ - [MINOR] config: improve error reporting in listen sections
+ - [MINOR] config: the "capture" keyword is not allowed in backends
+ - [MINOR] config: improve error reporting when checking configuration
+ - [BUILD] fix a minor build warning on AIX
+ - [BUILD] use "git cmd" instead of "git-cmd"
+ - [CLEANUP] report 2009 not 2008 in the copyright banner.
+ - [MINOR] print usage on the stats sockets upon invalid commands
+ - [MINOR] acl: detect and report potential mistakes in ACLs
+ - [BUILD] fix incorrect printf arg count with tcp_splice
+ - [BUG] fix random pauses on last segment of a series
+ - [BUILD] add support for build under Cygwin
+
+2009/06/09 : 1.4-dev0
+ - exact copy of 1.3.18
+
+2009/05/10 : 1.3.18
+ - [MEDIUM] add support for "balance hdr(name)"
+ - [CLEANUP] give a little bit more information in error message
+ - [MINOR] add X-Original-To: header
+ - [BUG] x-original-to: fix missing initialization to default value
+ - [BUILD] spec file: fix broken pipe during rpmbuild and add man file
+ - [MINOR] improve reporting of misplaced acl/reqxxx rules
+ - [MEDIUM] http: add options to ignore invalid header names
+ - [MEDIUM] http: capture invalid requests/responses even if accepted
+ - [BUILD] add format(printf) to printf-like functions
+ - [MINOR] fix several printf formats and missing arguments
+ - [BUG] stats: total and lbtot are unsigned
+ - [MINOR] fix a few remaining printf-like formats on 64-bit platforms
+ - [CLEANUP] remove unused make option from haproxy.spec
+ - [BUILD] make it possible to pass alternative arch at build time
+ - [MINOR] switch all stat counters to 64-bit
+ - [MEDIUM] ensure we don't recursively call pool_gc2()
+ - [CRITICAL] uninitialized response field can sometimes cause crashes
+ - [BUG] fix wrong pointer arithmetics in HTTP message captures
+ - [MINOR] rhel init script : support the reload operation
+ - [MINOR] add basic signal handling functions
+ - [BUILD] add signal.o to all makefiles
+ - [MEDIUM] call signal_process_queue from run_poll_loop
+ - [MEDIUM] pollers: don't wait if a signal is pending
+ - [MEDIUM] convert all signals to asynchronous signals
+ - [BUG] O(1) pollers should check their FD before closing it
+ - [MINOR] don't close stdio fds twice
+ - [MINOR] add options dontlog-normal and log-separate-errors
+ - [DOC] minor fixes and rearrangements
+ - [BUG] fix parser crash on unconditional tcp content rules
+ - [DOC] rearrange the configuration manual and add a summary
+ - [MINOR] standard: provide a new 'my_strndup' function
+ - [MINOR] implement per-logger log level limitation
+ - [MINOR] compute the max of sessions/s on fe/be/srv
+ - [MINOR] stats: report max sessions/s and limit in CSV export
+ - [MINOR] stats: report max sessions/s and limit in HTML stats
+ - [MINOR] stats/html: use the arial font before helvetica
+
+2009/03/29 : 1.3.17
+ - Update specfile to build for v2.6 kernel.
+ - [BUG] reset the stream_interface connect timeout upon connect or error
+ - [BUG] reject unix accepts when connection limit is reached
+ - [MINOR] show sess: report number of calls to each task
+ - [BUG] don't call epoll_ctl() on closed sockets
+ - [BUG] stream_sock: disable I/O on fds reporting an error
+ - [MINOR] sepoll: don't count two events on the same FD.
+ - [MINOR] show sess: report a lot more information about sessions
+ - [BUG] stream_sock: check for shut{r,w} before refreshing some timeouts
+ - [BUG] don't set an expiration date directly from now_ms
+ - [MINOR] implement ulltoh() to write HTML-formatted numbers
+ - [MINOR] stats/html: group digits by 3 to clarify numbers
+ - [BUILD] remove haproxy-small.spec
+ - [BUILD] makefile: remove unused references to linux24eold and EPOLL_CTL_WORKAROUND
+
+2009/03/22 : 1.3.16
+ - [BUILD] Fixed Makefile for linking pcre
+ - [CONTRIB] selinux policy for haproxy
+ - [MINOR] show errors: encode backslash as well as non-ascii characters
+ - [MINOR] cfgparse: some cleanups in the consistency checks
+ - [MINOR] cfgparse: set backends to "balance roundrobin" by default
+ - [MINOR] tcp-inspect: permit the use of no-delay inspection
+ - [MEDIUM] reverse internal proxy declaration order to match configuration
+ - [CLEANUP] config: catch and report some possibly wrong rule ordering
+ - [BUG] connect timeout is in the stream interface, not the buffer
+ - [BUG] session: errors were not reported in termination flags in TCP mode
+ - [MINOR] tcp_request: let the caller take care of errors and timeouts
+ - [CLEANUP] http: remove some commented out obsolete code in process_response
+ - [MINOR] update ebtree to version 4.1
+ - [MEDIUM] scheduler: get rid of the 4 trees thanks and use ebtree v4.1
+ - [BUG] sched: don't leave the 3 last tasks unprocessed when niced tasks are present
+ - [BUG] scheduler: fix improper handling of duplicates in __task_queue()
+ - [MINOR] sched: permit a task to stay up between calls
+ - [MINOR] task: keep a task count and clean up task creators
+ - [MINOR] stats: report number of tasks (active and running)
+ - [BUG] server check intervals must not be null
+ - [OPTIM] stream_sock: don't retry to read after a large read
+ - [OPTIM] buffer: new BF_READ_DONTWAIT flag reduces EAGAIN rates
+ - [MEDIUM] session: don't resync FSMs on non-interesting changes
+ - [BUG] check for global.maxconn before doing accept()
+ - [OPTIM] sepoll: do not re-check whole list upon accepts
+
+2009/03/09 : 1.3.16-rc2
+ - [BUG] stream_sock: write timeout must be updated when forwarding !
+
+2009/03/09 : 1.3.16-rc1
+ - appsessions: cleanup DEBUG_HASH and initialize request_counter
+ - [MINOR] acl: add new keyword "connslots"
+ - [MINOR] cfgparse: fix off-by-2 in error message size
+ - [BUILD] fix build with gcc 4.3
+ - [BUILD] fix MANDIR default location to match documentation
+ - [TESTS] add a debug patch to help trigger the stats bug
+ - [BUG] Flush buffers also where there are exactly 0 bytes left
+ - [MINOR] Allow to specify a domain for a cookie
+ - [BUG/CLEANUP] cookiedomain -> cookie_domain rename + free(p->cookie_domain)
+ - [MEDIUM] Fix memory freeing at exit
+ - [MEDIUM] Fix memory freeing at exit, part 2
+ - [BUG] Fix listen & more of 2 couples <ip>:<port>
+ - [DOC] remove buggy comment for use_backend
+ - [CRITICAL] fix server state tracking: it was O(n!) instead of O(n)
+ - [MEDIUM] add support for URI hash depth and length limits
+ - [MINOR] permit renaming of x-forwarded-for header
+ - [BUILD] fix Makefile.bsd and Makefile.osx for stream_interface
+ - [BUILD] Haproxy won't compile if DEBUG_FULL is defined
+ - [MEDIUM] upgrade to ebtree v4.0
+ - [DOC] update the README file with new build options
+ - [MEDIUM] reduce risk of event starvation in ev_sepoll
+ - [MEDIUM] detect streaming buffers and tag them as such
+ - [MEDIUM] add support for conditional HTTP redirection
+ - [BUILD] make install should depend on haproxy not "all"
+ - [DEBUG] add a TRACE macro to facilitate runtime data extraction
+ - [BUG] event pollers must not wait if a task exists in the run queue
+ - [BUG] queue management: wake oldest request in queues
+ - [BUG] log: reported queue position was off-by-one
+ - [BUG] fix the dequeuing logic to ensure that all requests get served
+ - [DOC] documentation for the "retries" parameter was missing.
+ - [MEDIUM] implement a monotonic internal clock
+ - [MEDIUM] further improve monotonic clock by checking forward jumps
+ - [OPTIM] add branch prediction hints in list manipulations
+ - [MAJOR] replace ultree with ebtree in wait-queues
+ - [BUG] we could segfault during exit while freeing uri_auths
+ - [BUG] wqueue: perform proper timeout comparisons with wrapping values
+ - [MINOR] introduce now_ms, the current date in milliseconds
+ - [BUG] disable buffer read timeout when reading stats
+ - [MEDIUM] rework the wait queue mechanism
+ - [BUILD] change declaration of base64tab to fix build with Intel C++
+ - [OPTIM] shrink wake_expired_tasks() by using task_wakeup()
+ - [MAJOR] use an ebtree instead of a list for the run queue
+ - [MEDIUM] introduce task->nice and boost access to statistics
+ - [OPTIM] task_queue: assume most consecutive timers are equal
+ - [BUILD] silent a warning in unlikely() with gcc 4.x
+ - [MAJOR] convert all expiration timers from timeval to ticks
+ - [BUG] use_backend would not correctly consider "unless"
+ - [TESTS] added test-acl.cfg to test some ACL combinations
+ - [MEDIUM] add support for configuration keyword registration
+ - [MEDIUM] modularize the global "stats" keyword configuration parser
+ - [MINOR] cfgparse: add support for warnings in external functions
+ - [MEDIUM] modularize the "timeout" keyword configuration parser
+ - [MAJOR] implement tcp request content inspection
+ - [MINOR] acl: add a new parsing function: parse_dotted_ver
+ - [MINOR] acl: add req_ssl_ver in TCP, to match an SSL version
+ - [CLEANUP] remove unused include/types/client.h
+ - [CLEANUP] remove many #include <types/xxx> from C files
+ - [CLEANUP] remove dependency on obsolete INTBITS macro
+ - [DOC] document the new "tcp-request" keyword and associated ACLs
+ - [MINOR] acl: add REQ_CONTENT to the list of default acls
+ - [MEDIUM] acl: permit fetch() functions to set the result themselves
+ - [MEDIUM] acl: get rid of dummy values in always_true/always_false
+ - [MINOR] acl: add the "wait_end" acl verb
+ - [MEDIUM] acl: enforce ACL type checking
+ - [MEDIUM] acl: set types on all currently known ACL verbs
+ - [MEDIUM] acl: when possible, report the name and requirements of ACLs in warnings
+ - [CLEANUP] remove 65 useless NULL checks before free
+ - [MEDIUM] memory: update pool_free2() to support NULL pointers
+ - [MEDIUM] buffers: ensure buffer_shut* are properly called upon shutdowns
+ - [MEDIUM] process_srv: rely on buffer flags for client shutdown
+ - [MEDIUM] process_srv: don't rely at all on client state
+ - [MEDIUM] process_cli: don't rely at all on server state
+ - [BUG] fix segfault with url_param + check_post
+ - [BUG] server timeout was not considered in some circumstances
+ - [BUG] client timeout incorrectly rearmed while waiting for server
+ - [MAJOR] kill CL_STINSPECT and CL_STHEADERS (step 1)
+ - [MAJOR] get rid of SV_STANALYZE (step 2)
+ - [MEDIUM] simplify and centralize request timeout cancellation and request forwarding
+ - [MAJOR] completely separate HTTP and TCP states on the request path
+ - [BUG] fix recently introduced loop when client closes early
+ - [MAJOR] get rid of the SV_STHEADERS state
+ - [MAJOR] better separation of response processing and server state
+ - [MAJOR] clearly separate HTTP response processing from TCP server state
+ - [MEDIUM] remove unused references to {CL|SV}_STSHUT*
+ - [MINOR] term_trace: add better instrumentations to trace the code
+ - [BUG] ev_sepoll: closed file descriptors could persist in the spec list
+ - [BUG] process_response must not enable the read FD
+ - [BUG] buffers: remove BF_MAY_CONNECT and fix forwarding issue
+ - [BUG] process_response: do not touch srv_state
+ - [BUG] maintain_proxies must not disable backends
+ - [CLEANUP] get rid of BF_SHUT*_PENDING
+ - [MEDIUM] buffers: add BF_EMPTY and BF_FULL to remove dependency on req/rep->l
+ - [MAJOR] process_session: rely only on buffer flags
+ - [MEDIUM] use buffer->wex instead of buffer->cex for connect timeout
+ - [MEDIUM] centralize buffer timeout checks at the top of process_session
+ - [MINOR] ensure the termination flags are set by process_xxx
+ - [MEDIUM] session: move the analysis bit field to the buffer
+ - [OPTIM] process_cli/process_srv: reduce the number of tests
+ - [BUG] regparm is broken on gcc < 3
+ - [BUILD] fix warning in proto_tcp.c with gcc >= 4
+ - [MEDIUM] merge inspect_exp and txn->exp into request buffer
+ - [BUG] process_cli/process_srv: don't call shutdown when already done
+ - [BUG] process_request: HTTP body analysis must return zero if missing data
+ - [TESTS] test-fsm: 22 regression tests for state machines
+ - [BUG] Fix empty X-Forwarded-For header name when set in defaults section
+ - [BUG] fix harmless but wrong fd insertion sequence
+ - [MEDIUM] make it possible for analysers to follow the whole session
+ - [MAJOR] rework of the server FSM
+ - [OPTIM] remove useless fd_set(read) upon shutdown(write)
+ - [MEDIUM] massive cleanup of process_srv()
+ - [MEDIUM] second level of code cleanup for process_srv_data
+ - [MEDIUM] third cleanup and optimization of process_srv_data()
+ - [MEDIUM] process_srv_data: ensure that we always correctly re-arm timeouts
+ - [MEDIUM] stream_sock_process_data moved to stream_sock.c
+ - [MAJOR] make the client side use stream_sock_process_data()
+ - [MEDIUM] split stream_sock_process_data
+ - [OPTIM] stream_sock_read must check for null-reads more often
+ - [MINOR] only call flow analysers when their read side is connected.
+ - [MEDIUM] reintroduce BF_HIJACK with produce_content
+ - [MINOR] re-arrange buffer flags and rename some of them
+ - [MINOR] do not check for BF_SHUTR when computing write timeout
+ - [OPTIM] ev_sepoll: detect newly created FDs and check them once
+ - [OPTIM] reduce the number of calls to task_wakeup()
+ - [OPTIM] force inlining of large functions with gcc >= 3
+ - [MEDIUM] indicate a reason for a task wakeup
+ - [MINOR] change type of fdtab[]->owner to void*
+ - [MAJOR] make stream sockets aware of the stream interface
+ - [MEDIUM] stream interface: add the ->shutw method as well as in and out buffers
+ - [MEDIUM] buffers: add BF_READ_ATTACHED and BF_ANA_TIMEOUT
+ - [MEDIUM] process_session: make use of the new buffer flags
+ - [CLEANUP] process_session: move debug outputs out of the critical loop
+ - [MEDIUM] move QUEUE and TAR timers to stream interfaces
+ - [OPTIM] add compiler hints in tick_is_expired()
+ - [MINOR] add buffer_check_timeouts() to check what timeouts have fired.
+ - [MEDIUM] use buffer_check_timeouts instead of stream_sock_check_timeouts()
+ - [MINOR] add an expiration flag to the stream_sock_interface
+ - [MAJOR] migrate the connection logic to stream interface
+ - [MAJOR] add a connection error state to the stream_interface
+ - [MEDIUM] add the SN_CURR_SESS flag to the session to track open sessions
+ - [MEDIUM] continue layering cleanups.
+ - [MEDIUM] stream_interface: added a DISconnected state between CON/EST and CLO
+ - [MEDIUM] remove stream_sock_update_data()
+ - [MINOR] maintain a global session list in order to ease debugging
+ - [BUG] shutw must imply close during a connect
+ - [MEDIUM] process shutw during connection attempt
+ - [MEDIUM] make the stream interface control the SHUT{R,W} bits
+ - [MAJOR] complete layer4/7 separation
+ - [CLEANUP] move the session-related functions to session.c
+ - [MINOR] call session->do_log() for logging
+ - [MINOR] replace the ambiguous client_return function by stream_int_return
+ - [MINOR] replace client_retnclose() with stream_int_retnclose()
+ - [MINOR] replace srv_close_with_err() with http_server_error()
+ - [MEDIUM] make the http server error function a pointer in the session
+ - [CLEANUP] session.c: removed some migration left-overs in sess_establish()
+ - [MINOR] stream_sock_data_finish() should not expose fd
+ - [MEDIUM] extract TCP request processing from HTTP
+ - [MEDIUM] extract the HTTP tarpit code from process_request().
+ - [MEDIUM] move the HTTP request body analyser out of process_request().
+ - [MEDIUM] rename process_request to http_process_request
+ - [BUG] fix forgotten server session counter
+ - [MINOR] declare process_session in session.h, not proto_http.h
+ - [MEDIUM] first pass of lifting to proto_uxst.c:uxst_event_accept()
+ - [MINOR] add an analyser code for UNIX stats request
+ - [MINOR] pre-set analyser flags on the listener at registration time
+ - [BUG] do not forward close from cons to prod with analysers
+ - [MEDIUM] ensure that sock->shutw() also closes read for init states
+ - [MINOR] add an analyser state in struct session
+ - [MAJOR] make unix sockets work again with stats
+ - [MEDIUM] remove cli_fd, srv_fd, cli_state and srv_state from the session
+ - [MINOR] move the listener reference from fd to session
+ - [MEDIUM] reference the current hijack function in the buffer itself
+ - [MINOR] slightly rebalance stats_dump_{raw,http}
+ - [MINOR] add a new back-reference type : struct bref
+ - [MINOR] add back-references to sessions for later use by a dumper.
+ - [MEDIUM] add support for "show sess" in unix stats socket
+ - [BUG] do not release the connection slot during a retry
+ - [BUG] dynamic connection throttling could return a max of zero conns
+ - [BUG] do not try to pause backends during reload
+ - [BUG] ensure that listeners from disabled proxies are correctly unbound.
+ - [BUG] acl-related keywords are not allowed in defaults sections
+ - [BUG] cookie capture is declared in the frontend but checked on the backend
+ - [BUG] critical errors should be reported even in daemon mode
+ - [MINOR] redirect: add support for the "drop-query" option
+ - [MINOR] redirect: add support for "set-cookie" and "clear-cookie"
+ - [MINOR] redirect: in prefix mode a "/" means not to change the URI
+ - [BUG] do not dequeue requests on a dead server
+ - [BUG] do not dequeue the backend's pending connections on a dead server
+ - [MINOR] stats: indicate if a task is running in "show sess"
+ - [BUG] check timeout must not be changed if timeout.check is not set
+ - [BUG] "option transparent" is for backend, not frontend !
+ - [MINOR] transfer errors were not reported anymore in data phase
+ - [MEDIUM] add a send limit to a buffer
+ - [MEDIUM] don't report buffer timeout when there is I/O activity
+ - [MEDIUM] indicate when we don't care about read timeout
+ - [MINOR] add flags to indicate when a stream interface is waiting for space/data
+ - [MEDIUM] enable inter-stream_interface wakeup calls
+ - [MAJOR] implement autonomous inter-socket forwarding
+ - [MINOR] add the splice_len member to the buffer struct in preparation of splice support
+ - [MEDIUM] stream_sock: factor out the return path in case of no-writes
+ - [MEDIUM] i/o: rework ->to_forward and ->send_max
+ - [OPTIM] stream_sock: do not ask for polling on EAGAIN if we have read
+ - [OPTIM] buffer: replace rlim by max_len
+ - [OPTIM] stream_sock: factor out the buffer full handling out of the loop
+ - [CLEANUP] replace a few occurrences of (flags & X) && !(flags & Y)
+ - [CLEANUP] stream_sock: move the write-nothing condition out of the loop
+ - [MEDIUM] split stream_sock_write() into callback and core functions
+ - [MEDIUM] stream_sock_read: call ->chk_snd whenever there are data pending
+ - [MINOR] stream_sock: fix a few wrong empty calculations
+ - [MEDIUM] stream_sock: try to send pending data on chk_snd()
+ - [MINOR] global.maxpipes: add the ability to reserve file descriptors for pipes
+ - [MEDIUM] splice: add configuration options and set global.maxpipes
+ - [MINOR] introduce structures required to support Linux kernel splicing
+ - [MEDIUM] add definitions for Linux kernel splicing
+ - [MAJOR] complete support for linux 2.6 kernel splicing
+ - [BUG] reserve some pipes for backends with splice enabled
+ - [MEDIUM] splice: add hints to support older buggy kernels
+ - [MEDIUM] introduce pipe pools
+ - [MEDIUM] splice: make use of pipe pools
+ - [STATS] report pipe usage in the statistics
+ - [OPTIM] make global.maxpipes default to global.maxconn/4 when not specified
+ - [BUILD] fix snapshot date extraction with negative timezones
+ - [MEDIUM] move global tuning options to the global structure
+ - [MEDIUM] splice: add the global "nosplice" option
+ - [BUILD] add USE_LINUX_SPLICE to enable LINUX_SPLICE on linux 2.6
+ - [BUG] we must not exit if protocol binding only returns a warning
+ - [MINOR] add support for bind interface name
+ - [BUG] inform the user when root is expected but not set
+ - [MEDIUM] add support for source interface binding
+ - [MEDIUM] add support for source interface binding at the server level
+ - [MEDIUM] implement bind-process to limit service presence by process
+ - [DOC] document maxpipes, nosplice, option splice-{auto,request,response}
+ - [DOC] filled the logging section of the configuration manual
+ - [DOC] document HTTP status codes
+ - [DOC] document a few missing info about errorfile
+ - [BUG] fix random memory corruption using "show sess"
+ - [BUG] fix unix socket processing of interrupted output
+ - [DOC] add diagrams of queuing and future ACL design
+ - [BUILD] proto_http did not build on gcc-2.95
+ - [BUG] the "source" keyword must first clear optional settings
+ - [BUG] global.tune.maxaccept must be limited even in mono-process mode
+ - [MINOR] ensure that http_msg_analyzer updates pointer to invalid char
+ - [MEDIUM] store a complete dump of request and response errors in proxies
+ - [MEDIUM] implement error dump on unix socket with "show errors"
+ - [DOC] document "show errors"
+ - [MINOR] errors dump must use user-visible date, not internal date.
+ - [MINOR] time: add __usec_to_1024th to convert usecs to 1024th of second
+ - [MINOR] add curr_sec_ms and curr_sec_ms_scaled for current second.
+ - [MEDIUM] measure and report session rate on frontend, backends and servers
+ - [BUG] the "connslots" keyword was matched as "connlots"
+ - [MINOR] acl: add 2 new verbs: fe_sess_rate and be_sess_rate
+ - [MEDIUM] implement "rate-limit sessions" for the frontend
+ - [BUG] interface binding: length must include the trailing zero
+ - [BUG] typo in timeout error reporting : report *res and not *err
+ - [OPTIM] maintain_proxies: only wake up when the frontend will be ready
+ - [OPTIM] rate-limit: cleaner behaviour on low rates and reduce consumption
+ - [BUG] switch server-side stream interface to close in case of abort
+ - [CLEANUP] remove last references to term_trace
+ - [OPTIM] freq_ctr: do not rotate the counters when reading
+ - [BUG] disable any analysers for monitoring requests
+ - [BUG] rate-limit in defaults section was ignored
+ - [BUG] task: fix handling of duplicate keys
+ - [OPTIM] task: don't unlink a task from a wait queue when waking it up
+ - [OPTIM] displace tasks in the wait queue only if absolutely needed
+ - [MEDIUM] minor update to the task api: let the scheduler queue itself
+ - [BUG] event_accept() must always wake the task up, even in health mode
+ - [CLEANUP] task: distinguish between clock ticks and timers
+ - [OPTIM] task: reduce the number of calls to task_queue()
+ - [OPTIM] do not re-check req buffer when only response has changed
+ - [CLEANUP] don't enable kernel splicing when socket is closed
+ - [CLEANUP] buffer_flush() was misleading, rename it as buffer_erase
+ - [MINOR] buffers: implement buffer_flush()
+ - [MEDIUM] rearrange forwarding condition to enable splice during analysis
+ - [BUILD] build fixes for Solaris
+ - [BUILD] proto_http did not build on gcc-2.95 (again)
+ - [CONTRIB] halog: fast log parser for haproxy
+ - [CONTRIB] halog: faster fgets() and add support for percentile reporting
+
+2008/04/19 : 1.3.15
+ - [BUILD] Added support for 'make install'
+ - [BUILD] Added 'install-man' make target for installing the man page
+ - [BUILD] Added 'install-bin' make target
+ - [BUILD] Added 'install-doc' make target
+ - [BUILD] Removed "/" after '$(DESTDIR)' in install targets
+ - [BUILD] Changed 'install' target to install the binaries first
+ - [BUILD] Replace hardcoded 'LD = gcc' with 'LD = $(CC)'
+ - [MEDIUM]: Inversion for options
+ - [MEDIUM]: Count retries and redispatches also for servers, fix redistribute_pending, extend logs, %d->%u cleanup
+ - [BUG]: Restore clearing t->logs.bytes
+ - [MEDIUM]: rework checks handling
+ - [DOC] Update a "contrib" file with a hint about a scheme used for formatting subjects
+ - [MEDIUM] Implement "track [<backend>/]<server>"
+ - [MINOR] Implement persistent id for proxies and servers
+ - [BUG] Don't increment server connections too much + fix retries
+ - [MEDIUM]: Prevent redispatcher from selecting the same server, version #3
+ - [MAJOR] proto_uxst rework -> SNMP support
+ - [BUG] appsession lookup in URL does not work
+ - [BUG] transparent proxy address was ignored in backend
+ - [BUG] hot reconfiguration failed because of a wrong error check
+ - [DOC] big update to the configuration manual
+ - [DOC] large update to the configuration manual
+ - [DOC] document more options
+ - [BUILD] major rework of the GNU Makefile
+ - [STATS] add support for "show info" on the unix socket
+ - [DOC] document options forwardfor to logasap
+ - [MINOR] add support for the "backlog" parameter
+ - [OPTIM] introduce global parameter "tune.maxaccept"
+ - [MEDIUM] introduce "timeout http-request" in frontends
+ - [MINOR] tarpit timeout is also allowed in backends
+ - [BUG] increment server connections for each connect()
+ - [MEDIUM] add a turn-around state of one second after a connection failure
+ - [BUG] fix typo in redispatched connection
+ - [DOC] document options nolinger to ssl-hello-chk
+ - [DOC] added documentation for "option tcplog" to "use_backend"
+ - [BUG] connect_server: server might not exist when sending error report
+ - [MEDIUM] support fully transparent proxy on Linux (USE_LINUX_TPROXY)
+ - [MEDIUM] add non-local bind to connect() on Linux
+ - [MINOR] add transparent proxy support for balabit's Tproxy v4
+ - [BUG] use backend's source and not server's source with tproxy
+ - [BUG] fix overlapping server flags
+ - [MEDIUM] fix server health checks source address selection
+ - [BUG] build failed on CONFIG_HAP_LINUX_TPROXY without CONFIG_HAP_CTTPROXY
+ - [DOC] added "server", "source" and "stats" keywords
+ - [DOC] all server parameters have been documented
+ - [DOC] document all req* and rsp* keywords.
+ - [DOC] added documentation about HTTP header manipulations
+ - [BUG] log response byte count, not request
+ - [BUILD] code did not build in full debug mode
+ - [BUG] fix truncated responses with sepoll
+ - [MINOR] use s->frt_addr as the server's address in transparent proxy
+ - [MINOR] fix configuration hint about timeouts
+ - [DOC] minor cleanup of the doc and notice to contributors
+ - [MINOR] report correct section type for unknown keywords.
+ - [BUILD] update MacOS Makefile to build on newer versions
+ - [DOC] fix erroneous "useallbackups" option in the doc
+ - [DOC] applied small fixes from early readers
+ - [MINOR] add configuration support for "redir" server keyword
+ - [MEDIUM] completely implement the server redirection method
+ - [TESTS] add a test case for the server redirection mechanism
+ - [DOC] add a configuration entry for "server ... redir <prefix>"
+ - [BUILD] backend.c and checks.c did not build without tproxy !
+ - Revert "[BUILD] backend.c and checks.c did not build without tproxy !"
+ - [BUILD] backend.c and checks.c did not build without tproxy !
+ - [OPTIM] used unsigned ints for HTTP state and message offsets
+ - [OPTIM] GCC4's builtin_expect() is suboptimal
+ - [BUG] failed conns were sometimes incremented in the frontend!
+ - [BUG] timeout.check was not pre-set to eternity
+ - [TESTS] add test-pollers.cfg to easily report pollers in use
+ - [BUG] do not apply timeout.connect in checks if unset
+ - [BUILD] ensure that makefile understands USE_DLMALLOC=1
+ - [MINOR] silence gcc about a wrong warning
+ - [CLEANUP] update .gitignore to ignore more temporary files
+ - [CLEANUP] report dlmalloc's source path only if explicitly specified
+ - [BUG] str2sun could leak a small buffer in case of error during parsing
+ - [BUG] option allbackups was not working anymore in roundrobin mode
+ - [MAJOR] implementation of the "leastconn" load balancing algorithm
+ - [BUILD] ensure that users don't build without setting the target anymore.
+ - [DOC] document the leastconn LB algo
+ - [MEDIUM] fix stats socket limitation to 16 kB
+ - [DOC] fix unescaped space in httpchk example.
+ - [BUG] fix double-decrement of server connections
+ - [TESTS] add a test case for port mapping
+ - [TESTS] add a benchmark for integer hashing
+ - [TESTS] add new methods in ip-hash test file
+ - [MAJOR] implement parameter hashing for POST requests
+
+2007/12/06 : 1.3.14
+ - New option http_proxy (Alexandre Cassen)
+ - add support for "maxqueue" to limit server queue overload (Elijah Epifanov)
+ - Check for duplicated conflicting proxies (Krzysztof Oledzki)
+ - stats: report server and backend cumulated downtime (Krzysztof Oledzki)
+ - use backends only with use_backend directive (Krzysztof Oledzki)
+ - Handle long lines properly (Krzysztof Oledzki)
+ - Implement and use generic findproxy and relax duplicated proxy check (Krzysztof Oledzki)
+ - continuous statistics (Krzysztof Oledzki)
+ - add support for logging via a UNIX socket (Robert Tsai)
+ - fix error checking in strl2ic/strl2uic()
+ - fix calls to localtime()
+ - provide easier-to-use ultoa_* functions
+ - provide easy-to-use limit_r and LIM2A* macros
+ - add a simple test for the status page
+ - move error codes to common/errors.h
+ - silence warning about LIST_* being redefined on OpenBSD
+ - add socket address length to the protocols
+ - group PR_O_BALANCE_* bits into a checkable value
+ - externalize the "balance" option parser to backend.c
+ - introduce the "url_param" balance method
+ - make default_backend work in TCP mode too
+ - disable warning about localtime_r on Solaris
+ - adjust error messages about conflicting proxies
+ - avoid calling some layer7 functions if not needed
+ - simplify error path in event_accept()
+ - add an options field to the listeners
+ - added a new state to listeners
+ - unbind_listener() must use fd_delete() and not close()
+ - add a generic unbind_listener() primitive
+ - add a generic delete_listener() primitive
+ - add a generic unbind_all_listeners() primitive
+ - create proto_tcp and move initialization of proxy listeners
+ - stats: report numerical process ID, proxy ID and server ID
+ - relative_pid was not initialized
+ - missing header names in raw stats output
+ - fix missing parenthesis in check_response_for_cacheability
+ - small optimization on session_process_counters()
+ - merge ebtree version 3.0
+ - make ebtree headers multiple-include compatible
+ - ebtree: include config.h for REGPRM*
+ - differentiate between generic LB params and map-specific ones
+ - add a weight divisor to the struct proxy
+ - implement the Fast Weighted Round Robin (FWRR) algo
+ - include filltab25.c to experiment on FWRR for dynamic weights
+ - merge test-fwrr.cfg to validate dynamic weights
+ - move the load balancing algorithm to be->lbprm.algo
+ - change server check result to a bit field
+ - implement "http-check disable-on-404" for graceful shutdown
+ - secure the calling conditions of ->set_server_status_{up,down}
+ - report disabled servers as "NOLB" when they are still UP
+ - document the "http-check disable-on-404" option
+ - http-check disable-on-404 is not limited to HTTP mode
+ - add a test file for disable-on-404
+ - use distinct bits per load-balancing algorithm type
+ - implement the slowstart parameter for servers
+ - document the server's slowstart parameter
+ - stats: report the server warm up status in a "throttle" column
+ - fix 2 minor issues on AIX
+ - add the "nbsrv" ACL verb
+ - add the "fail" condition to monitor requests
+ - remove a warning from gcc due to htons() in standard.c
+ - fwrr: ensure that we never overflow in placements
+ - store the build options to report with -vv
+ - fix the status return of the init script (R.I. Pienaar)
+ - stats: real time monitoring script for unix socket (Prizee)
+ - document "nbsrv" and "monitor fail"
+ - restrict the set of allowed characters for identifiers
+ - implement a time parsing function
+ - add support for time units in the configuration
+ - add a bit of documentation about timers
+ - introduce separation between contimeout, and tarpit + queue
+ - introduce the "timeout" keyword
+ - grouped all timeouts in one structure
+ - slowstart is in ms, not seconds
+ - slowstart: ensure we don't start with a null weight
+ - report the number of times each server was selected
+ - fix build on AIX due to recent log changes
+ - fix build on Solaris due to recent log changes
+
+2007/10/18 : 1.3.13
+ - replace the code under O'Reilly license (Arnaud Cornet)
+ - add a small man page (Arnaud Cornet)
+ - stats: report haproxy's version by default (Krzysztof Oledzki)
+ - stats: count server retries and redispatches (Krzysztof Oledzki)
+ - core: added easy support for Doug Lea's malloc (dlmalloc)
+ - core: fade out memory usage when stopping proxies
+ - core: moved the sockaddr pointer to the fdtab structure
+ - core: add generic protocol support
+ - core: implement client-side support for PF_UNIX sockets
+ - stats: implement the CSV output
+ - stats: add a link to the CSV export HTML page
+ - stats: implement the statistics output on a unix socket
+ - config: introduce the "stats" keyword in global section
+ - build: centralize version and date into one file for each
+ - tests: added a new hash algorithm
+
+2007/10/18 : 1.3.12.3
+ - add the "nolinger" option to disable data lingering (Alexandre Cassen)
+ - fix double-free during clean exit (Krzysztof Oledzki)
+ - prevent the system from sending an RST when closing health-checks
+ (Krzysztof Oledzki)
+ - do not add a cache-control header when on non-cacheable responses
+ (Krzysztof Oledzki)
+ - spread health checks even more (Krzysztof Oledzki)
+ - stats: scope "." must match the backend and not the frontend
+ - fixed call to chroot() during startup
+ - fix wrong timeout computation in event_accept()
+ - remove condition for exit() under fork() failure
+
+2007/09/20 : 1.3.12.2
+ - fix configuration sanity checks for TCP listeners
+ - set the log socket receive window to zero bytes
+ - pre-initialize timeouts to infinity, not zero
+ - fix the SIGHUP message not to alert on server-less proxies
+ - timeouts and retries could be ignored when switching backend
+ - added a file to check that "retries" works.
+ - O'Reilly has clarified its license
+
+2007/09/05 : 1.3.12.1
+ - spec I/O: fix allocations of spec entries for an FD
+ - ensure we never overflow in chunk_printf()
+ - improve behaviour with large number of servers per proxy
+ - add support for "stats refresh <interval>"
+ - stats page: added links for 'refresh' and 'hide down'
+ - fix backend's weight in the stats page.
+ - the "stats" keyword is not allowed in a pure frontend.
+ - provide a test configuration file for stats and checks
+
+2007/06/17 : 1.3.12
+ - fix segfault at exit when using captures
+ - bug: negation in ACL conds was not cleared between terms
+ - errorfile: use a local file to feed error messages
+ - acl: support '-i' to ignore case when matching
+ - acl: smarter integer comparison with operators eq,lt,gt,le,ge
+ - acl: support matching on 'path' component
+ - acl: implement matching on header values
+ - acl: distinguish between request and response headers
+ - acl: permit to return any header when no name specified
+ - acl: provide default ACLs
+ - added the 'use_backend' keyword for full content-switching
+ - acl: specify the direction during fetches
+ - acl: provide the argument length for fetch functions
+ - acl: provide a reference to the expr to fetch()
+ - improve memory freeing upon exit
+ - str2net() must not change the const char *
+ - silence warnings about 'is*' macros from ctype.h on Solaris
+
+2007/06/03 : 1.3.11.4
+ - do not re-arm read timeout in SHUTR state !
+ - optimize I/O by detecting system starvation
+ - the epoll FD must not be shared between processes
+ - limit the number of events returned by *poll*
+
+2007/05/14 : 1.3.11.3
+ - pre-initialize timeouts with tv_eternity during parsing
+
+2007/05/14 : 1.3.11.2
+ - fixed broken health-checks since switch to timeval
+
+2007/05/14 : 1.3.11.1
+ - fixed ev_kqueue which was forgotten during the switch to timeval
+ - allowed null timeouts for past events in select
+
+2007/05/14 : 1.3.11
+ - fixed ev_sepoll again by rewriting the state machine
+ - switched all timeouts to timevals instead of milliseconds
+ - improved memory management using mempools v2.
+ - several minor optimizations
+
+2007/05/09 : 1.3.10.2
+ - fixed build on OpenBSD (missing types.h)
+
+2007/05/09 : 1.3.10.1
+ - fixed sepoll transition matrix (two states were missing)
+
+2007/05/08 : 1.3.10
+ - several fixes in ev_sepoll
+ - fixed some expiration dates on some tasks
+ - fixed a bug in connection establishment detection due to speculative I/O
+ - fixed rare bug occurring on TCP with early close (reported by Andy Smith)
+ - implemented URI hashing algorithm (Guillaume Dallaire)
+ - implemented SMTP health checks (Peter van Dijk)
+ - replaced the rbtree with ul2tree from old scheduler project
+ - new framework for generic ACL support
+ - added the 'acl' and 'block' keywords to the config language
+ - added several ACL criteria and matches (IP, port, URI, ...)
+ - cleaned up and better modularization for some time functions
+ - fixed list macros
+ - fixed useless memory allocation in str2net()
+ - store the original destination address in the session
+
+2007/04/15 : 1.3.9
+ - modularized the polling mechanisms and use function pointers instead
+ of macros at many places
+ - implemented support for FreeBSD's kqueue() polling mechanism
+ - fixed a warning on OpenBSD : MIN/MAX redefined
+ - change socket registration order at startup to accommodate kqueue.
+ - several makefile cleanups to support old shells
+ - fix build with limits.h once for all
+ - ev_epoll: do not rely on fd_sets anymore, use changes stacks instead.
+ - fdtab now holds the results of polling
+ - implemented support for speculative I/O processing with epoll()
+ - remove useless calls to shutdown(SHUT_RD), resulting in small speed boost
+ - auto-registering of pollers at load time
+
+2007/04/03 : 1.3.8.2
+ - rewriting either the status line or request line could crash the
+ process due to a pointer which ought to be reset before parsing.
+ - rewriting the status line in the response did not work, it caused
+ a 502 Bad Gateway due to an erroneous state during parsing
+
+2007/04/01 : 1.3.8.1
+ - fix reqadd when no option httpclose is used.
+ - removed now unused fiprm and beprm from proxies
+ - split logs into two versions : TCP and HTTP
+ - added some docs about http headers storage and acls
+ - added a VIM script for syntax color highlighting (Bruno Michel)
+
+2007/03/25 : 1.3.8
+ - fixed several bugs which might have caused a crash with bad configs
+ - several optimizations in header processing
+ - many progresses towards transaction-based processing
+ - option forwardfor may be used in frontends
+ - completed HTTP response processing
+ - some code refactoring between request and response processing
+ - new HTTP header manipulation functions
+ - optimizations on the recv() patch to reduce CPU usage under very
+ high data rates.
+ - more user-friendly help about the 'usesrc' keyword (CTTPROXY)
+ - username/groupname support from Marcus Rueckert
+ - added the "except" keyword to the "forwardfor" option (Bryan German)
+ - support for health-checks on other addresses (Fabrice Dulaunoy)
+ - makefile for MacOS 10.4 / Darwin (Dan Zinngrabe)
+ - do not insert "Connection: close" in HTTP/1.0 messages
+
+2007/01/26 : 1.3.7
+ - fix critical bug introduced with 1.3.6 : an empty request header
+ may lead to a crash due to missing pointer assignment
+ - hdr_idx might be left uninitialized in debug mode
+ - fixed build on FreeBSD due to missing fd_set declaration
+
+2007/01/22 : 1.3.6.1
+ - change in the header chaining broke cookies and authentication
+
+2007/01/22 : 1.3.6
+ - stats now support the HEAD method too
+ - extracted http request from the session
+ - huge rework of the HTTP parser which is now a 28-state FSM.
+ - linux-style likely/unlikely macros for optimization hints
+ - do not create a server socket when there's no server
+ - imported lots of docs
+
+2007/01/07 : 1.3.5
+ - stats: swap color sets for active and backup servers
+ - try to guess server check port when unset
+ - added complete support and doc for TCP Splicing
+ - replace the wait-queue linked list with an rbtree.
+ - a few bugfixes and cleanups
+
+2007/01/02 : 1.3.4
+ - support for cttproxy on the server side to present the client
+ address to the server.
+ - added support for SO_REUSEPORT on Linux (needs kernel patch)
+ - new RFC2616-compliant HTTP request parser with header indexing
+ - split proxies in frontends, rulesets and backends
+ - implemented the 'req[i]setbe' to select a backend depending
+ on the contents
+ - added the 'default_backend' keyword to select a default BE.
+ - new stats page featuring FEs and BEs + bytes in both dirs
+ - improved log format to indicate the backend and the time in ms.
+ - lots of cleanups
+
+2006/10/15 : 1.3.3
+ - fix broken redispatch option in case the connection has already
+ been marked "in progress" (ie: nearly always).
+ - support regparm on x86 to speed up some often called functions
+ - removed a few useless calls to gettimeofday() in log functions.
+ - lots of 'const char*' cleanups
+ - turn every FD_* into functions which are faster on recent CPUs
+
+2006/09/03 : 1.3.2
+ - started the changes towards I/O completion callbacks. stream_sock* have
+ replaced event_*.
+ - added the new "reqtarpit" and "reqitarpit" protection features
+
+2006/07/09 : 1.3.1 (1.2.15)
+ - now, haproxy warns about missing timeout during startup to try to
+ eliminate all those buggy configurations.
+ - added "Content-Type: text/html" in responses wherever appropriate, as
+ suggested by Cameron Simpson.
+ - implemented "option ssl-hello-chk" to use SSLv3 CLIENT HELLO messages to
+ test server's health
+ - implemented "monitor-uri" so that haproxy can reply to a specific URI with
+ an "HTTP/1.0 200 OK" response. This is useful to validate multiple proxies
+ at once.
+
+2006/06/29 : 1.3.0
+ - exploded the whole file into multiple .c and .h files. No functional
+ difference is expected at all.
+ - fixed a bug by which neither stats nor error messages could be returned if
+ 'clitimeout' was missing.
+
+2006/05/21 : 1.2.14
+ - new HTML status report with the 'stats' keyword.
+ - added the 'abortonclose' option to better resist traffic surges
+ - implemented dynamic traffic regulation with the 'minconn' option
+ - show request time on denied requests
+ - definitely fixed hot reconf on OpenBSD by the use of SO_REUSEPORT
+ - now a proxy instance is allowed to run without servers, which is
+ useful to dedicate one instance to stats
+ - added lots of error counters
+ - a missing parenthesis prevented matching of cacheable cookies
+ - a missing parenthesis in poll_loop() might have caused missed events.
+
+2006/05/14 : 1.2.13.1
+ - an uninitialized field in the struct session could cause a crash when
+ the session was freed. This has been encountered on Solaris only.
+ - Solaris and OpenBSD no not support shutdown() on listening socket. Let's
+ be nice to them by performing a soft stop if pause fails.
+
+2006/05/13 : 1.2.13
+ - 'maxconn' server parameter to do per-server session limitation
+ - queueing to support non-blocking session limitation
+ - fixed removal of cookies for cookie-less servers such as backup servers
+ - two separate wait queues for expirable and non-expirable tasks provide
+ better performance with lots of sessions.
+ - some code cleanups and performance improvements
+ - made state dumps a bit more verbose
+ - fixed missing checks for NULL srv in dispatch mode
+ - load balancing on backup servers was not possible in source hash mode.
+ - two session flags shared the same bit, but fortunately they were not
+ compatible.
+
+2006/04/15 : 1.2.12
+ Very few changes preparing for more important changes to support per-server
+ session limitations and queueing :
+ - ignore leading empty lines in HTTP requests as suggested by RFC2616.
+ - added the 'weight' parameter to the servers, limited to 1..256. It applies
+ to roundrobin and source hash.
+ - the optional '-s' option could clobber '-st' and '-sf' if compiled in.
+
+2006/03/30 : 1.2.11.1
+ - under some conditions, it might have been possible that when the
+ last dead server became available, it would not have been used
+ until another one changed state. This could not be reproduced
+ at all, but seems possible from the code.
+
+2006/03/25 : 1.2.11
+ - added the '-db' command-line option to disable backgrounding.
+ - added the -sf/-st command-line arguments which are used to specify
+ a list of pids to send a FINISH or TERMINATE signal upon startup.
+ They will also be asked to release their port if a bind fails.
+ - reworked the startup mechanism to allow the sending of a signal to a list
+ of old pids if a socket cannot be bound, with a retry for a limited amount
+ of time (1 second by default).
+ - added the ability to enforce limits on memory usage.
+ - added the 'source' load-balancing algorithm which uses the source IP(v4|v6)
+ - re-architectured the server round-robin mechanism to ease integration of
+ other algorithms. It now relies on the number of active and backup servers.
+ - added a counter for the number of active and backup servers, and report
+ these numbers upon SIGHUP or state change.
+
+2006/03/23 : 1.2.10.1
+ - while fixing the backup server round-robin "feature", a new bug was
+ introduced which could miss some backup servers.
+ - the displayed proxy name was wrong when dumping upon SIGHUP.
+
+2006/03/19 : 1.2.10
+ - assert.h is needed when DEBUG is defined.
+ - ENORMOUS long standing bug affecting the epoll polling system :
+ event_data is a union, not a structure !
+ - Make fd management more robust and easier to debug. Also some
+ micro-optimisations.
+ - Limit the number of consecutive accept() in multi-process mode.
+ This produces a more evenly distributed load across the processes and
+ slightly improves performance by reducing bottlenecks.
+ - Make health-checks be more regular, and faster to retry after a timeout.
+ - Fixed some messages to ease parsing of alerts.
+ - provided a patch to enable epoll on RHEL3 kernels.
+ - Separated OpenBSD build from the main Makefile into a new one.
+
+2006/03/15 : 1.2.9
+ - haproxy could not be stopped after being paused; it had to be woken up
+ first. This has been fixed.
+ - the 'ulimit-n' parameter is now optional and by default computed from
+ maxconn + the number of listeners + the number of health-checks.
+ - it is now possible to specify a maximum number of connections at build
+ time with the SYSTEM_MAXCONN define. The value set in the configuration
+ file will then be limited to this value, and only the command-line '-n'
+ option will be able to bypass it. It will prevent against accidental
+ high memory usage on small systems.
+ - RFC2616 expects that any HTTP agent accepts multi-line headers. Earlier
+ versions did not detect a line beginning with a space as the continuation
+ of previous header. It is now correct.
+ - health checks sent to servers configured with identical intervals were
+ sent in perfect synchronisation because the initial time was the same
+ for all. This could induce high load peaks when fragile servers were
+ hosting tens of instances for the same application. Now the load is
+ spread evenly across the smallest interval amongst a listener.
+ - a new 'forceclose' option was added to make the proxy close the outgoing
+ channel to the server once it has sent all its headers and the server
+ starts responding. This helps some servers which don't close upon the
+ 'Connection: close' header. It implies 'option httpclose'.
+ - there was a bug in the way the backup servers were handled. They were
+ erroneously load-balanced while the doc said the opposite. Since
+ load-balancing across backup servers is one of the features some people have
+ been asking for, the problem was fixed to reflect the documented
+ behaviour and a new option 'allbackups' was introduced to provide the
+ feature to those who need it.
+ - a never ending connect() could lead to a fast select() loop if its
+ timeout times the number of retransmits exceeded the server read or write
+ timeout, because the latter was used to compute select()'s timeout while
+ the connection timeout was not reached.
+ - now we initialize the libc's localtime structures very early so that even
+ under OOM conditions, we can still send dated error messages without
+ segfaulting.
+ - the 'daemon' mode implies 'quiet' and disables 'verbose' because file
+ descriptors are closed.
+
+2006/01/29 : 1.2.8
+ - fixed a nasty bug affecting poll/epoll which could return unmodified data
+ from the server to the client, and sometimes lead to memory corruption
+ crashing the process.
+ - added the new pause/play mechanism with SIGTTOU/SIGTTIN for hot-reconf.
+
+2005/12/18 : 1.2.7.1
+ - the "retries" option was ignored because connect() could not return an
+ error if the connection failed before the timeout.
+ - TCP health-checks could not detect a connection refused in poll/epoll
+ mode.
+
+2005/11/13 : 1.2.7
+ - building with -DUSE_PCRE should include PCRE headers and not regex.h. At
+ least on Solaris, this caused the libc's regex primitives to be used instead
+ of PCRE, which caused trouble on group references. This is now fixed.
+ - delayed the quiet mode during startup so that most of the startup alerts can
+ be displayed even in quiet mode.
+ - display an alert when a listener has no address, invalid or no port, or when
+ there are no enabled listeners upon startup.
+ - added "static-pcre" to the list of supported regex options in the Makefile.
+
+2005/10/09 : 1.2.7rc (1.1.33rc)
+ - second batch of socklen_t changes.
+ - clean-ups from Cameron Simpson.
+ - because tv_remain() does not know about eternity, using no timeout can
+ make select() spin around a null time-out. Bug reported by Cameron Simpson.
+ - client read timeout was not properly initialized to eternity after an
+ accept() if it was not set in the config. It remained undetected so long
+ because eternity is 0 and newly allocated pages are zeroed by the system.
+ - do not call get_original_dst() when not in transparent mode.
+ - implemented a workaround for a bug in certain epoll() implementations on
+ linux-2.4 kernels (epoll-lt <= 0.21).
+ - implemented TCP keepalive with new options : tcpka, clitcpka, srvtcpka.
+
+2005/08/07 : 1.2.6
+ - clean-up patch from Alexander Lazic fixes build on Debian 3.1 (socklen_t).
+
+2005/07/06 : 1.2.6-pre5 (1.1.32)
+ - added the number of active sessions (proxy/process) in the logs
+
+2005/07/06 : 1.2.6-pre4 (1.1.32-pre4)
+ - the time-out fix introduced in 1.1.25 caused a corner case where it was
+ possible for a client to keep a connection maintained regardless of the
+ timeout if the server closed the connection during the HEADER phase,
+ while the client ignored the close request while doing nothing in the
+ other direction. This has been fixed now by ensuring that read timeouts
+ are re-armed when switching to any SHUTW state.
+
+2005/07/05 : 1.2.6-pre3 (1.1.32-pre3)
+ - enhanced error reporting in the logs. Now the proxy will precisely detect
+ various error conditions related to the system and/or process limits, and
+ generate LOG_EMERG logs indicating that a resource has been exhausted.
+ - logs will contain two new characters for the error cause : 'R' indicates
+ a resource exhausted, and 'I' indicates an internal error, though this
+ one should never happen.
+ - server connection timeouts can now be reported in the logs (sC), as well
+ as connections refused because of maxconn limitations (PC).
+
+2005/07/05 : 1.2.6-pre2 (1.1.32-pre2)
+ - new global configuration keyword "ulimit-n" may be used to raise the FD
+ limit to usable values.
+ - a warning is now displayed on startup if the FD limit is lower than the
+ configured maximum number of sockets.
+
+2005/07/05 : 1.2.6-pre1 (1.1.32-pre1)
+ - new configuration keyword "monitor-net" makes it possible to be monitored
+ by external devices which connect to the proxy without being logged nor
+ forwarded to any server. Particularly useful on generic TCPv4 relays.
+
+2005/06/21 : 1.2.5.2
+ - fixed build on PPC where chars are unsigned by default
+
+2005/05/02 : 1.2.5.1
+ - dirty hack to fix a bug introduced with epoll : if we close an FD and
+ immediately reassign it to another session through a connect(), the
+ Prev{Read,Write}Events are not updated, which causes trouble detecting
+ changes, thus leading to many timeouts at high loads.
+
+2005/04/30 : 1.2.5 (1.1.31)
+ - changed the runtime argument to disable epoll() to '-de'
+ - changed the runtime argument to disable poll() to '-dp'
+ - added global options 'nopoll' and 'noepoll' to do the same at the
+ configuration level.
+ - added a 'linux24e' target to the Makefile for Linux 2.4 systems patched to
+ support epoll().
+ - changed default FD_SETSIZE to 65536 on Solaris (default=1024)
+ - conditioned signal redirection to #ifdef DEBUG_MEMORY
+
+2005/04/26 : 1.2.5-pre4
+ - made epoll() support a compile-time option : ENABLE_EPOLL
+ - provided a very little libc replacement for a possibly missing epoll()
+ implementation which can be enabled by -DUSE_MY_EPOLL
+ - implemented the poll() poller, which can be enabled with -DENABLE_POLL.
+ The equivalent runtime argument becomes '-P'. A few tests show that it
+ performs like select() with many fds, but slightly slower (certainly
+ because of the higher amount of memory involved).
+ - separated the 3 polling methods and the tasks scheduler into 4 distinct
+ functions which makes the code a lot more modular.
+ - moved some event tables to private static declarations inside the poller
+ functions.
+ - the poller functions can now initialize themselves, run, and cleanup.
+ - changed the runtime argument to enable epoll() to '-E'.
+ - removed buggy epoll_ctl() code in the client_retnclose() function. This
+ function was never meant to remove anything.
+ - fixed a typo which caused glibc to yell about a double free on exit.
+ - removed error checking after epoll_ctl(DEL) because we can never know if
+ the fd is still active or already closed.
+ - added a few entries in the makefile
+
+2005/04/25 : 1.2.5-pre3
+ - experimental epoll() support (use temporary '-e' argument)
+
+2005/04/24 : 1.2.5-pre2
+ - implemented the HTTP 303 code for error redirection. This forces the
+ browser to fetch the given URI with a GET request. The new keyword for
+ this is 'errorloc303', and a new 'errorloc302' keyword has been created
+ to make them easily distinguishable.
+ - added more controls in the parser for valid use of '\x' sequence.
+ - few fixes from Alex & Klaus
+
+2005/02/17 : 1.2.5-pre1
+ - fixed a few errors in the documentation
+
+2005/02/13
+ - do not pre-initialize unused file-descriptors before select() anymore.
+
+2005/01/22 : 1.2.4
+ - merged Alexander Lazic's and Klaus Wagner's work on application
+ cookie-based persistence. Since this is the first merge, this version is
+ not intended for general use and reports are more than welcome. Some
+ documentation is really needed though.
+
+2005/01/22 : 1.2.3 (1.1.30)
+ - add an architecture guide to the documentation
+ - released without any changes
+
+2004/12/26 : 1.2.3-pre1 (1.1.30-pre1)
+ - increased default BUFSIZE to 16 kB to accept max headers of 8 kB which is
+ compatible with Apache. This limit can be configured in the makefile now.
+ Thanks to Eric Fehr for the checks.
+ - added a per-server "source" option which now makes it possible to bind to
+ a different source for each (potentially identical) server.
+ - changed cookie-based server selection slightly to allow several servers to
+ share a same cookie, thus making it possible to associate backup servers to
+ live servers and ease soft-stop for maintenance periods. (Alexander Lazic)
+ - added the cookie 'prefix' mode which makes it possible to use persistence
+ with thin clients which support only one cookie. The server name is prefixed
+ before the application cookie, and stripped off again on the way back.
+ - fixed the order of servers within an instance to match documentation. Now
+ the servers are *really* used in the order of their declaration. This is
+ particularly important when multiple backup servers are in use.
+
+2004/10/18 : 1.2.2 (1.1.29)
+ - fixed a bug where a TCP connection would be logged twice if the 'logasap'
+ option was enabled without the 'tcplog' option.
+ - encode_string() would use hdr_encode_map instead of the map argument.
+
+2004/08/10 : (1.1.29-pre2)
+ - the logged request is now encoded with '#XX' for unprintable characters
+ - new keywords 'capture request header' and 'capture response header' enable
+ logging of arbitrary HTTP headers in requests and responses
+ - removed "-DSOLARIS" after replacing the last inet_aton() with inet_pton()
+
+2004/06/06 : 1.2.1 (1.1.28)
+ - added the '-V' command line option to verbosely report errors even though
+ the -q or 'quiet' options are specified. This is useful with '-c'.
+ - added a Red Hat init script and a .spec from Simon Matter <simon.matter@invoca.ch>
+
+2004/06/05 :
+ - added the "logasap" option which produces a log without waiting for the data
+ to be transferred from the server to the client.
+ - added the "httpclose" option which removes any "connection:" header and adds
+ "Connection: close" in both directions.
+ - added the 'checkcache' option which blocks cacheable responses containing
+ dangerous headers, such as 'set-cookie'.
+ - added 'rspdeny' and 'rspideny' to block certain responses to avoid
+ sensitive information leaks from servers.
+
+2004/04/18 :
+ - send an EMERG log when no server is available for a given proxy
+ - added the '-c' command line option to syntactically check the
+ configuration file without starting the service.
+
+2003/11/09 : 1.2.0
+ - the same as 1.1.27 + IPv6 support on the client side
+
+2003/10/27 : 1.1.27
+ - the configurable HTTP health check introduced in 1.1.23 revealed a shameful
+ bug : the code still assumed that HTTP requests were the same size as the
+ original ones (22 bytes), and failed if they were not.
+ - added support for pidfiles.
+
+2003/10/22 : 1.1.26
+ - the fix introduced in 1.1.25 for client timeouts while waiting for servers
+ broke almost all compatibility with POST requests, because the proxy
+ stopped reading anything from the client as soon as it got all of its
+ headers.
+
+2003/10/15 : 1.1.25
+ - added the 'tcplog' option, which provides enhanced, HTTP-like logs for
+ generic TCP proxies, or lighter logs for HTTP proxies.
+ - fixed a time-out condition wrongly reported as client time-out in data
+ phase if the client timeout was lower than the connect timeout times the
+ number of retries.
+
+2003/09/21 : 1.1.24
+ - if a client sent a full request then shut its write connection down, then
+ the request was aborted. This case was detected only when using haproxy
+ both as health-check client and as a server.
+ - if 'option httpchk' is used in a 'health' mode server, then responses will
+ change from 'OK' to 'HTTP/1.0 200 OK'.
+ - fixed a Linux-only bug in case of HTTP server health-checks, where a single
+ server response followed by a close could be ignored, and the server seen
+ as failed.
+
+2003/09/19 : 1.1.23
+ - fixed a stupid bug introduced in 1.1.22 which caused second and subsequent
+ 'default' sections to keep previous parameters, and not initialize logs
+ correctly.
+ - fixed a second stupid bug introduced in 1.1.22 which caused configurations
+ relying on 'dispatch' mode to segfault at the first connection.
+ - 'option httpchk' now supports method, HTTP version and a few headers.
+ - now, 'option httpchk', 'cookie' and 'capture' can be specified in
+ 'defaults' section
+
+2003/09/10 : 1.1.22
+ - 'listen' now supports optional address:port-range lists
+ - 'bind' introduced to add new listen addresses
+ - fixed a bug which caused a session to be kept established on a server till
+ it timed out if the client closed during the DATA phase.
+ - the port part of each server address can now be empty to make the proxy
+ connect to the server on the same port it was connected to, be an absolute
+ unsigned number to reflect a single port (as in older versions), or an
+ explicitly signed number (+N/-N) to indicate that this offset must be
+ applied to the port the proxy was connected to, when connecting to the
+ server.
+ - the 'port' server option allows the user to specify a different
+ health-check port than the service one. It is mandatory when only relative
+ ports have been specified and check is required. By default, the checks are
+ sent to the service port.
+ - new 'defaults' section which is rather similar to 'listen' except that all
+ values are only used as default values for future 'listen' sections, until
+ a new 'defaults' resets them. At the moment, server options, regexes,
+ cookie names and captures cannot be set in the 'defaults' section.
+
+2003/05/06 : 1.1.21
+ - changed the debug output format so that it now includes the session unique
+ ID followed by the instance name at the beginning of each line.
+ - in debug mode, accept now shows the client's IP and port.
+ - added 3 small debugging scripts to search and pretty-print debug output
+ - changed the default health check request to "OPTIONS /" instead of
+ "OPTIONS *" since not all servers implement the later one.
+ - "option httpchk" now accepts an optional parameter allowing the user to
+ specify a URI other than '/' during health-checks.
+
+2003/04/21 : 1.1.20
+ - fixed two problems with time-outs, one where a server would be logged as
+ timed out during transfers that take longer to complete than the fixed
+ time-out, and one where clients were logged as timed-out during the data
+ phase because they didn't have anything to send. This sometimes caused
+ slow client connections to close too early while in fact there was no
+ problem. The proper fix would be to have a per-fd time-out with
+ conditions depending on the state of the HTTP FSM.
+
+2003/04/16 : 1.1.19
+ - haproxy was NOT RFC compliant because it was case-sensitive on HTTP
+ "Cookie:" and "Set-Cookie:" headers. This caused JVM 1.4 to fail on
+ cookie persistence because it uses "cookie:". Two memcmp() have been
+ replaced with strncasecmp().
+
+2003/04/02 : 1.1.18
+ - Haproxy can be compiled with PCRE regex instead of libc regex, by setting
+ REGEX=pcre on the make command line.
+ - HTTP health-checks now use "OPTIONS *" instead of "OPTIONS /".
+ - when explicit source address binding is required, it is now also used for
+ health-checks.
+ - added 'reqpass' and 'reqipass' to allow certain headers but not the request
+ itself.
+ - factored several strings to reduce binary size by about 2 kB.
+ - replaced setreuid() and setregid() with more standard setuid() and setgid().
+ - added 4 status flags to the log line indicating who ended the connection
+ first, the sessions state, the validity of the cookie, and action taken on
+ the set-cookie header.
+
+2002/10/18 : 1.1.17
+ - add the notion of "backup" servers, which are used only when all other
+ servers are down.
+ - make Set-Cookie return "" instead of "(null)" when the server has no
+ cookie assigned (useful for backup servers).
+ - "log" now supports an optionnal level name (info, notice, err ...) above
+ which nothing is sent.
+ - replaced some strncmp() with memcmp() for better efficiency.
+ - added "capture cookie" option which logs client and/or server cookies
+ - cleaned up/down messages and dump servers states upon SIGHUP
+ - added a redirection feature for errors : "errorloc <errnum> <url>"
+ - now we won't insist on connecting to a dead server, even with a cookie,
+ unless option "persist" is specified.
+ - added HTTP/408 response for client request time-out and HTTP/50[234] for
+ server reply time-out or errors.
+
+2002/09/01 : 1.1.16
+ - implement HTTP health checks when option "httpchk" is specified.
+
+2002/08/07 : 1.1.15
+ - replaced setpgid()/setpgrp() with setsid() for better portability, because
+ setpgrp() doesn't have the same meaning under Solaris, Linux, and OpenBSD.
+
+2002/07/20 : 1.1.14
+ - added "postonly" cookie mode
+
+2002/07/15 : 1.1.13
+ - tv_diff used inverted parameters which led to negative times !
+
+2002/07/13 : 1.1.12
+ - fixed stats monitoring, and optimized some tv_* for most common cases.
+ - replaced temporary 'newhdr' with 'trash' to reduce stack size
+ - made HTTP errors more HTML-friendly.
+ - renamed strlcpy() to strlcpy2() because of a slight difference between
+ their behaviour (return value), to avoid confusion.
+ - restricted HTTP messages to HTTP proxies only
+ - added a 502 message when the connection has been refused by the server,
+ to prevent clients from believing this is a zero-byte HTTP 0.9 reply.
+ - changed 'Cache-control:' from 'no-cache="set-cookie"' to 'private' when
+ inserting a cookie, because some caches (apache) don't understand it.
+ - fixed processing of server headers when client is in SHUTR state
+
+2002/07/04 :
+ - automatically close fd's 0,1 and 2 when going daemon ; setpgrp() after
+ setpgid()
+
+2002/06/04 : 1.1.11
+ - fixed multi-cookie handling in client request to allow clean deletion
+ in insert+indirect mode. Now, only the server cookie is deleted and not
+ all the header. Should now be compliant to RFC2965.
+ - added a "nocache" option to "cookie" to specify that we explicitly want
+ to add a "cache-control" header when we add a cookie.
+ It is also possible to add an "Expires: <old-date>" to keep compatibility
+ with old/broken caches.
+
+2002/05/10 : 1.1.10
+ - if a cookie is used in insert+indirect mode, it's desirable that the
+ servers don't see it. It was not possible to remove it correctly
+ with regexps, so now it's removed automatically.
+
+2002/04/19 : 1.1.9
+ - don't use snprintf()'s return value as an end of message since it may
+ be larger. This caused bus errors and segfaults in internal libc's
+ getenv() during localtime() in send_log().
+ - removed dead insecure send_syslog() function and all references to it.
+ - fixed warnings on Solaris due to buggy implementation of isXXXX().
+
+2002/04/18 : 1.1.8
+ - option "dontlognull"
+ - fixed "double space" bug in config parser
+ - fixed an uninitialized server field in case of dispatch
+ with no existing server which could cause a segfault during
+ logging.
+ - the pid logged was always the father's, which was wrong for daemons.
+ - fixed wrong level "LOG_INFO" for message "proxy started".
+
+2002/04/13 :
+ - http logging is now complete :
+ - ip:port, date, proxy, server
+ - req_time, conn_time, hdr_time, tot_time
+ - status, size, request
+ - source address
+
+2002/04/12 : 1.1.7
+ - added option forwardfor
+ - added reqirep, reqidel, reqiallow, reqideny, rspirep, rspidel
+ - added "log global" in "listen" section.
+
+2002/04/09 :
+ - added a new "global" section :
+ - logs
+ - debug, quiet, daemon modes
+ - uid, gid, chroot, nbproc, maxconn
+
+2002/04/08 : 1.1.6
+ - regexes are now chained and not limited anymore.
+ - unavailable server now returns HTTP/502.
+ - increased per-line args limit to 40
+ - added reqallow/reqdeny to block some request on matches
+ - added HTTP 400/403 responses
+
+2002/04/03 : 1.1.5
+ - connection logging displayed incorrect source address.
+ - added proxy start/stop and server up/down log events.
+ - replaced log message short buffers with larger trash.
+ - enlarged buffer to 8 kB and replace buffer to 4 kB.
+
+2002/03/25 : 1.1.4
+ - made rise/fall/interval time configurable
+
+2002/03/22 : 1.1.3
+ - fixed a bug : cr_expire and cw_expire were inverted in CL_STSHUT[WR]
+ which could lead to loops.
+
+2002/03/21 : 1.1.2
+ - fixed a bug in buffer management where we could have a loop
+ between event_read() and process_{cli|srv} if R==BUFSIZE-MAXREWRITE.
+ => implemented an adjustable buffer limit.
+ - fixed a bug : expiration of tasks in wait queue timeout is used again,
+ and running tasks are skipped.
+ - added some debug lines for accept events.
+ - send warnings for servers up/down.
+
+2002/03/12 : 1.1.1
+ - fixed a bug in total failure handling
+ - fixed a bug in timestamp comparison within same second (tv_cmp_ms)
+
+2002/03/10 : 1.1.0
+ - fixed a few timeout bugs
+ - rearranged the task scheduler subsystem to improve performance,
+ add new tasks, and make it easier to later port to librt ;
+ - allow multiple accept() for one select() wake up ;
+ - implemented internal load balancing with basic health-check ;
+ - cookie insertion and header add/replace/delete, with better strings
+ support.
+
+2002/03/08
+ - reworked buffer handling to fix a few rewrite bugs, and
+ improve overall performance.
+ - implement the "purge" option to delete server cookies in direct mode.
+
+2002/03/07
+ - fixed some error cases where the maxfd was not decreased.
+
+2002/02/26
+ - now supports transparent proxying, at least on linux 2.4.
+
+2002/02/12
+ - soft stop works again (fixed select timeout computation).
+ - it seems that TCP proxies sometimes cannot timeout.
+ - added a "quiet" mode.
+ - enforce file descriptor limitation on socket() and accept().
+
+2001/12/30 : release of version 1.0.2 : fixed a bug in header processing
+2001/12/19 : release of version 1.0.1 : no MSG_NOSIGNAL on solaris
+2001/12/16 : release of version 1.0.0.
+2001/12/16 : added syslog capability for each accepted connection.
+2001/11/19 : corrected premature end of files and occasional SIGPIPE.
+2001/10/31 : added health-check type servers (mode health) which replies OK then closes.
+2001/10/30 : added the ability to support standard TCP proxies and HTTP proxies
+ with or without cookies (use keyword http for this).
+2001/09/01 : added client/server header replacing with regexps.
+ eg:
+ cliexp ^(Host:\ [^:]*).* Host:\ \1:80
+ srvexp ^Server:\ .* Server:\ Apache
+2000/11/29 : first fully working release with complete FSMs and timeouts.
+2000/11/28 : major rewrite
+2000/11/26 : first write
--- /dev/null
+ HOW TO GET YOUR CODE ACCEPTED IN HAPROXY
+ READ THIS CAREFULLY BEFORE SUBMITTING CODE
+
+THIS DOCUMENT PROVIDES SOME RULES TO FOLLOW WHEN SENDING CONTRIBUTIONS. PATCHES
+NOT FOLLOWING THESE RULES WILL SIMPLY BE REJECTED IN ORDER TO PROTECT ALL OTHER
+RESPECTFUL CONTRIBUTORS' VALUABLE TIME.
+
+
+Background
+----------
+
+During the development cycle of version 1.6, much more time was spent reviewing
+poor quality submissions, fixing them and troubleshooting the bugs they
+introduced than doing any development work. This is not acceptable as it ends
+up with people actually slowing down the project for the features they're the
+only ones interested in. On the other end of the scale, there are people who
+make the effort of polishing their work to contribute excellent quality work
+which doesn't even require a review. Contrary to what newcomers may think, it's
+very easy to reach that level of quality and get your changes accepted quickly,
+even late in the development cycle. It only requires that you do your homework
+and not rely on others to do it for you. The most important point is that
+HAProxy is a community-driven project, and all participants must respect each
+other's time and work.
+
+
+Preparation
+-----------
+
+It is possible that you'll want to add a specific feature to satisfy your needs
+or one of your customers'. Contributions are welcome, however maintainers are
+often very picky about changes. Patches that change massive parts of the code,
+or that touch the core parts without any good reason will generally be rejected
+if those changes have not been discussed first.
+
+The proper place to discuss your changes is the HAProxy Mailing List. There are
+enough skilled readers to catch hazardous mistakes and to suggest improvements.
+There is no other place where you'll find as many skilled people on the project,
+and these people can help you get your code integrated quickly. You can
+subscribe to it by sending an empty e-mail at the following address :
+
+ haproxy+subscribe@formilux.org
+
+If you have an idea about something to implement, *please* discuss it on the
+list first. It has already happened several times that two people did the same
+thing simultaneously. This is a waste of time for both of them. It's also very
+common to see some changes rejected because they're done in a way that will
+conflict with future evolutions, or that does not leave a good feeling. It's
+always unpleasant for the person who did the work, and it is unpleasant in
+general because people's time and efforts are valuable and would be better
+spent working on something else. That would not happen if these were discussed
+first. There is no problem posting work in progress to the list, it happens
+quite often in fact. Also, don't waste your time with the doc when submitting
+patches for review, only add the doc with the patch you consider ready to merge.
+
+Another important point concerns code portability. Haproxy requires gcc as the
+C compiler, and may or may not work with other compilers. However it's known to
+build using gcc 2.95 or any later version. As such, it is important to keep in
+mind that certain facilities offered by recent versions must not be used in the
+code :
+
+ - declarations mixed in the code (requires gcc >= 3.x and is a bad practice)
+ - GCC builtins without checking for their availability based on version and
+ architecture ;
+ - assembly code without any alternate portable form for other platforms
+ - use of stdbool.h, "bool", "false", "true" : simply use "int", "0", "1"
+ - in general, anything which requires C99 (such as declaring variables in
+ "for" statements)
+
+Since most of these restrictions are just a matter of coding style, it is
+normally not a problem to comply.
+
+If your work is very confidential and you can't publicly discuss it, you can
+also mail willy@haproxy.org directly about it, but your mail may be waiting
+several days in the queue before you get a response. Resend your mail if you
+don't get a response within one week.
+
+If you'd like a feature to be added but you think you don't have the skills to
+implement it yourself, you should follow these steps :
+
+ 1. discuss the feature on the mailing list. It is possible that someone
+ else has already implemented it, or that someone will tell you how to
+ proceed without it, or even why not to do it. It is also possible that
+ in fact it's quite easy to implement and people will guide you through
+ the process. That way you'll finally have YOUR patch merged, providing
+ the feature YOU need.
+
+ 2. if you really can't code it yourself after discussing it, then you may
+ consider contacting someone to do the job for you. Some people on the
+ list might sometimes be OK with trying to do it.
+
+
+Rules : the 12 laws of patch contribution
+-----------------------------------------
+
+People contributing patches must apply the following rules. They may sound
+heavy at first, but they are mostly common sense, and contributors stop
+thinking about them after a few patches.
+
+1) Before modifying some code, you have read the LICENSE file ("main license")
+ coming with the sources, and all the files this file references. Certain
+ files may be covered by different licenses, in which case it will be
+ indicated in the files themselves. In any case, you agree to respect these
+ licenses and to contribute your changes under the same licenses. If you want
+ to create new files, they will be under the main license, or any license of
+ your choice that you have verified to be compatible with the main license,
+ and that will be explicitly mentioned in the affected files. The project's
+ maintainers are free to reject contributions proposing license changes they
+ feel are not appropriate or could cause future trouble.
+
+2) Your work may only be based on the latest development version. No development
+ is made on a stable branch. If your work needs to be applied to a stable
+ branch, it will first be applied to the development branch and only then will
+ be backported to the stable branch. You are responsible for ensuring that
+ your work correctly applies to the development version. If at any moment you
+ are going to work on restructuring something important which may impact other
+ contributors, the rule that applies is that the first sent is the first
+ served. However it is considered good practice and politeness to warn others
+ in advance if you know you're going to make changes that may force them to
+ re-adapt their code, because they probably did not expect to have to spend
+ more time discovering your changes and rebasing their work.
+
+3) You have read and understood "doc/codingstyle.txt", and you're actively
+ determined to respect it and to enforce it on your coworkers if you're going
+ to submit a team's work. We don't care what text editor you use, whether it's
+ a hex editor, cat, vi, emacs, Notepad, Word, or even Eclipse. The editor is
+ only the interface between you and the text file. What matters is what is in
+ the text file in the end. The editor is not an excuse for submitting poorly
+ indented code, which only proves that the person has no consideration for
+ quality and/or has done it in a hurry (probably worse). Please note that most
+ bugs were found in low-quality code. Reviewers know this and tend to be much
+ more reluctant to accept poorly formatted code because, from experience, they
+ won't trust their author's ability to write correct code. It is also worth
+ noting that poor quality code is painful to read and may result in nobody
+ willing to waste their time even reviewing your work.
+
+4) The time it takes for you to polish your code is always much smaller than the
+ time it takes others to do it for you, because they always have to wonder if
+ what they see is intended (meaning they didn't understand something) or if it
+ is a mistake that needs to be fixed. And since there are fewer reviewers than
+ submitters, it is vital to spread the effort closer to where the code is
+ written and not closer to where it gets merged. For example if you have to
+ write a report for a customer that your boss wants to review before you send
+ it to the customer, would you throw a pile of paper on his desk with stains,
+ typos and copy-pastes everywhere? Would you say "come on, OK I made a mistake
+ in the company's name but they will find it by themselves, it's obvious it
+ comes from us"? No. When in doubt, simply ask for help on the mailing list.
+
+5) There are four levels of importance of quality in the project :
+
+ - The most important one, and by far, is the quality of the user-facing
+ documentation. This is the first contact for most users and it immediately
+ gives them an accurate idea of how the project is maintained. Dirty docs
+ necessarily belong to a dirty project. Be careful about the way the text you
+ add is presented and indented. Be very careful about typos and usual mistakes
+ such as double consonants when only one is needed or "it's" instead of
+ "its", and don't mix US English and UK English in the same paragraph, etc.
+ When in doubt, check in a dictionary. Fixes for existing typos in the doc
+ are always welcome and chasing them is a good way to become familiar with
+ the project and to get other participants' respect and consideration.
+
+ - The second most important level is user-facing messages emitted by the
+ code. You must try to see all the messages your code produces to ensure
+ they are understandable outside of the context where you wrote them,
+ because the user often doesn't expect them. That's true for warnings, and
+ that's even more important for errors which prevent the program from
+ working and which require an immediate and well understood fix in the
+ configuration. It's much better to say "line 35: compression level must be
+ an integer between 1 and 9" than "invalid argument at line 35". In HAProxy,
+ error handling roughly represents half of the code, and that's about 3/4 of
+ the configuration parser. Take the time to do something you're proud of. A
+ good rule of thumb is to keep in mind that your code talks to a human and
+ tries to teach him/her how to proceed. It must then speak like a human.
+
+ - The third most important level is the code and its accompanying comments,
+ including the commit message which is a complement to your code and
+ comments. It's important for all other contributors that the code is
+ readable, fluid, understandable and that the commit message describes what
+ was done, the choices made, the possible alternatives you thought about,
+ the reason for picking this one and its limits if any. Comments should be
+ written where it's easy to have a doubt or after some error cases have been
+ wiped out and you want to explain what possibilities remain. All functions
+ must have a comment indicating what they take on input and what they
+ provide on output. Please adjust the comments when you copy-paste a
+ function or change its prototype, this type of lazy mistake is too common
+ and very confusing when reading code later to debug an issue. Do not forget
+ that others will feel really angry at you when they have to dig into your
+ code for a bug that your code caused and they feel like this code is dirty
+ or confusing, that the commit message doesn't explain anything useful and
+ that the patch should never have been accepted in the first place. That
+ will strongly impact your reputation and will definitely affect your
+ chances to contribute again!
+
+ - The fourth level of importance is in the technical documentation that you
+ may want to add with your code. Technical documentation is always welcome
+ as it helps others make the best use of your work and to go exactly in the
+ direction you thought about during the design. This is also what reduces
+ the risk that your design gets changed in the near future due to a misuse
+ and/or a poor understanding. All such documentation is actually considered
+ as a bonus. It is more important that this documentation exists than that
+ it looks clean. Sometimes just copy-pasting your draft notes in a file to
+ keep a record of design ideas is better than losing them. Please do your
+ best so that others can read your doc. If these docs require a special
+ tool such as a graphics utility, ensure that the file name makes it
+ unambiguous how to process it. So there are no rules here for the contents,
+ except one. Please write the date in your file. Design docs tend to stay
+ forever and to remain long after they become obsolete. At this point that
+ can cause harm more than it can help. Writing the date in the document
+ helps developers guess the degree of validity and/or compare them with the
+ date of certain commits touching the same area.
+
+6) All text files and commit messages are written using the US-ASCII charset.
+ Please be careful that your contributions do not contain any character not
+ printable using this charset, as they will render differently in different
+ editors and/or terminals. Avoid latin1 and, more importantly, UTF-8, which
+ some editors tend to abuse to replace some US-ASCII characters with their
+ typographic equivalents, which aren't readable anymore in other editors. The
+ only place where alternative charsets are tolerated is in your name in the
+ commit message, but it's at your own risk as it can be mangled during the
+ merge. Anyway if you have an e-mail address, you probably have a valid
+ US-ASCII representation for it as well.
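   A quick way to check this before sending is to scan for bytes outside the
   printable US-ASCII range. The sketch below is only illustrative (the file
   names and contents are invented for the example):

```shell
# Illustrative check: flag any byte outside printable US-ASCII (0x20-0x7e).
# The two files are made up; point the grep at your own patch file instead.
printf 'plain ascii text\n' > clean.txt
printf 'curly \342\200\234quotes\342\200\235\n' > fancy.txt
# LC_ALL=C makes grep match raw bytes instead of multibyte characters.
LC_ALL=C grep -n '[^ -~]' clean.txt || echo "clean.txt: US-ASCII only"
LC_ALL=C grep -n '[^ -~]' fancy.txt && echo "fancy.txt: fix before sending"
```

   The same one-liner works on commit messages exported with "git show".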
+
+7) Be careful about comments when you move code around. It's not acceptable that
+ a block of code is moved to another place leaving irrelevant comments at the
+ old place, just like it's not acceptable that a function is duplicated without
+ the comments being adjusted. The example below started to become quite common
+ during the 1.6 cycle, it is not acceptable and wastes everyone's time :
+
+ /* Parse switching <str> to build rule <rule>. Returns 0 on error. */
+ int parse_switching_rule(const char *str, struct rule *rule)
+ {
+ ...
+ }
+
+ /* Parse switching <str> to build rule <rule>. Returns 0 on error. */
+ void execute_switching_rule(struct rule *rule)
+ {
+ ...
+ }
+
+ This patch is not acceptable either (and it's unfortunately not that rare) :
+
+ + if (!session || !arg || list_is_empty(&session->rules->head))
+ + return 0;
+ +
+ /* Check if session->rules is valid before dereferencing it */
+ if (!session->rules_allocated)
+ return 0;
+
+ - if (!arg || list_is_empty(&session->rules->head))
+ - return 0;
+ -
+
+8) Limit the length of your identifiers in the code. When your identifiers start
+ to sound like sentences, it's very hard for the reader to keep on track with
+ what operation they are observing. Also long names force expressions to fit
+ on several lines which also cause some difficulties to the reader. See the
+ example below :
+
+ int file_name_len_including_global_path;
+ int file_name_len_without_global_path;
+ int global_path_len_or_zero_if_default;
+
+ if (global_path)
+ global_path_len_or_zero_if_default = strlen(global_path);
+ else
+ global_path_len_or_zero_if_default = 0;
+
+ file_name_len_without_global_path = strlen(file_name);
+ file_name_len_including_global_path =
+ file_name_len_without_global_path + 1 + /* for '/' */
+ global_path_len_or_zero_if_default ?
+ global_path_len_or_zero_if_default : default_path_len;
+
+ Compare it to this one :
+
+ int f, p;
+
+ p = global_path ? strlen(global_path) : default_path_len;
+ f = p + 1 + strlen(file_name); /* 1 for '/' */
+
+ A good rule of thumb is that if your identifiers start to contain more than
+ 3 words or more than 15 characters, they can become confusing. For function
+ names it's less important especially if these functions are rarely used or
+ are used in a complex context where it is important to differentiate between
+ their multiple variants.
+
+9) Your patches should be sent in "diff -up" format, which is also the format
+ used by Git. This means the "unified" diff format must be used exclusively,
+ and with the function name printed in the diff header of each block. That
+ significantly helps during reviews. Keep in mind that most reviews are done
+ on the patch and not on the code after applying the patch. Your diff must
+ keep some context (3 lines above and 3 lines below) so that there's no doubt
+ where the code has to be applied. Don't change code outside of the context of
+ your patch (eg: take care of not adding/removing empty lines once you remove
+ your debugging code). If you are using Git (which is strongly recommended),
+ please produce your patches using "git format-patch" and not "git diff", and
+ always use "git show" after doing a commit to ensure it looks good, and
+ enable syntax coloring that will automatically report in red the trailing
+ spaces or tabs that your patch added to the code and that must absolutely be
+ removed. These ones cause a real pain to apply patches later because they
+ mangle the context in an invisible way. Such patches with trailing spaces at
+ end of lines will be rejected.
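   As a sketch of the recommended workflow (the repository, file names and
   commit subjects below are invented for the example), a reviewable patch is
   produced like this:

```shell
# Illustrative only: build a throwaway repo, commit a change, and produce
# a review-ready patch file with "git format-patch" (unified diff format).
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q demo
cd demo
git config user.email "you@example.com"
git config user.name "Your Name"
printf 'line one\n' > notes.txt
git add notes.txt
git commit -qm "DOC: demo: add initial notes file"
printf 'line two\n' >> notes.txt
git commit -qam "DOC: demo: mention the second line"
# One patch file per commit; the review happens on this file.
git format-patch -1 HEAD
git show HEAD >/dev/null   # sanity-check the commit before sending
ls *.patch
```

   Reviewing "git show" output before sending is also when trailing spaces
   become visible if syntax coloring is enabled.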
+
+10) Please cut your work into series of patches that can be independently
+ reviewed
+ and merged. Each patch must do something on its own that you can explain to
+ someone without being ashamed of what you did. For example, you must not say
+ "This is the patch that implements SSL, it was tricky". There's clearly
+ something wrong there, your patch will be huge, will definitely break things
+ and nobody will be able to figure what exactly introduced the bug. However
+ it's much better to say "I needed to add some fields in the session to store
+ the SSL context so this patch does this and doesn't touch anything else, so
+ it's safe". Also when dealing with series, you will sometimes fix a bug that
+ one of your patches introduced. Please do merge these fixes (eg: using git
+ rebase -i and squash or fixup), as it is not acceptable to see patches which
+ introduce known bugs even if they're fixed later. Another benefit of cleanly
+ splitting patches is that if some of your patches need to be reworked after
+ a review, the other ones can still be merged so that you don't need to care
+ about them anymore. When sending multiple patches for review, prefer to send
+ one e-mail per patch rather than all patches in one e-mail. The reason is that
+ not everyone is skilled in all areas nor has the time to review everything
+ at once. With one patch per e-mail, it's easy to comment on a single patch
+ without giving an opinion on the other ones, especially if a long thread
+ starts about one specific patch on the mailing list. "git send-email" does
+ that for you, though it requires a few tries before getting it right.
+
+11) Please properly format your commit messages. While it's not strictly
+ required to use Git, it is strongly recommended because it helps you do the
+ cleanest job with the least effort. Patches always have the format of an
+ e-mail made of a subject, a description and the actual patch. If you're
+ sending a patch as an e-mail formatted this way, it can quickly be applied
+ with limited effort so that's acceptable. But in any case, it is important
+ that there is a clean description of what the patch does, the motivation for
+ what it does, why it's the best way to do it, its impacts, and what it does
+ not yet cover. Also, in HAProxy, like many projects which take great care
+ of maintaining stable branches, patches are reviewed later so that some of
+ them can be backported to stable releases. While reviewing hundreds of
+ patches can seem cumbersome, with a proper formatting of the subject line it
+ actually becomes very easy. For example, here's how one can find patches
+ that need to be reviewed for backports (bugs and doc) since commit
+ ID 827752e :
+
+ $ git log --oneline 827752e.. | grep 'BUG\|DOC'
+ 0d79cf6 DOC: fix function name
+ bc96534 DOC: ssl: missing LF
+ 10ec214 BUG/MEDIUM: lua: the lua fucntion Channel:close() causes a segf
+ bdc97a8 BUG/MEDIUM: lua: outgoing connection was broken since 1.6-dev2
+ ba56d9c DOC: mention support for RFC 5077 TLS Ticket extension in start
+ f1650a8 DOC: clarify some points about SSL and the proxy protocol
+ b157d73 BUG/MAJOR: peers: fix current table pointer not re-initialized
+ e1ab808 BUG/MEDIUM: peers: fix wrong message id on stick table updates
+ cc79b00 BUG/MINOR: ssl: TLS Ticket Key rotation broken via socket comma
+ d8e42b6 DOC: add new file intro.txt
+ c7d7607 BUG/MEDIUM: lua: bad error processing
+ 386a127 DOC: match several lua configuration option names to those impl
+ 0f4eadd BUG/MEDIUM: counters: ensure that src_{inc,clr}_gpc0 creates a
+
+ It is made possible by the fact that subject lines are properly formatted and
+ always respect the same principle : one part indicating the nature and
+ severity of the patch, another one indicating which subsystem is affected,
+ and the last one a succinct description of the change, with the important
+ part at the beginning so that it's obvious what it does even when lines are
+ truncated like above. The whole stable maintenance process relies on this.
+ For this reason, it is mandatory to respect some easy rules regarding the
+ way the subject is built. Please see the section below for more information
+ regarding this formatting.
+
+12) When submitting changes, please always CC the mailing list address so that
+ everyone gets a chance to spot any issue in your code. It will also serve
+ as an advertisement for your work, you'll get more testers quicker and
+ you'll feel better knowing that people really use your work. It is also
+ important to CC any author mentioned in the file you change, or any subsystem
+ maintainer whose address is mentioned in a MAINTAINERS file. Not everyone
+ reads the list on a daily basis so it's very easy to miss some changes.
+ Don't consider it a failure when a reviewer tells you you have to modify
+ your patch; actually it's a success because now you know what is missing for
+ your work to get accepted. That's why you should not hesitate to CC enough
+ people. But don't copy people who have nothing to do with your work area just
+ because you found their address on the list. That's the best way to appear
+ careless about their time and to make them reject your changes in the future.
+
+
+Patch classifying rules
+-----------------------
+
+There are 3 criteria of particular importance in any patch :
+ - its nature (is it a fix for a bug, a new feature, an optimization, ...)
+ - its importance, which generally reflects the risk of merging/not merging it
+ - what area it applies to (eg: http, stats, startup, config, doc, ...)
+
+It's important to make these 3 criteria easy to spot in the patch's subject,
+because it's the first (and sometimes the only) thing which is read when
+reviewing patches to find which ones need to be backported to older versions.
+It also helps when trying to find which patch is the most likely to have caused
+a regression.
+
+Specifically, bugs must be clearly easy to spot so that they're never missed.
+Any patch fixing a bug must have the "BUG" tag in its subject. Most common
+patch types include :
+
+ - BUG fix for a bug. The severity of the bug should also be indicated
+ when known. Similarly, if a backport is needed to older versions,
+ it should be indicated on the last line of the commit message. If
+ the bug has been identified as a regression brought by a specific
+ patch or version, this indication will be appreciated too. New
+ maintenance releases are generally emitted when a few of these
+ patches are merged. If the bug is a vulnerability for which a CVE
+ identifier was assigned before you publish the fix, you can mention
+ it in the commit message, it will help distro maintainers.
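
As an illustration (everything below is made up : the subsystem, the commit ID
and the versions are hypothetical), a bug fix commit following these rules
could look like this :

```shell
# Hypothetical bug-fix commit : tagged subject line, description of the cause
# and of the fix, then the regression/backport hints on the last lines.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git -c user.name=You -c user.email=you@example.com \
    commit -q --allow-empty -F - <<'EOF'
BUG/MEDIUM: example: do not dereference a NULL buffer on empty input

The example parser dereferenced the request buffer without checking its
length first, causing a crash on an empty request. Check the length
before dereferencing.

This bug was introduced by commit deadbee ("MEDIUM: example: rework the
parser") and the fix must be backported to 1.6 and 1.5.
EOF
git log -1 --format=%s   # shows the tagged subject line
```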
+
+ - CLEANUP code cleanup, silencing of warnings, etc... theoretically no
+ impact. These patches will rarely be seen in stable branches,
+ though they may appear when they remove some annoyance or when
+ they make backporting easier. By nature, a cleanup is always of
+ minor importance and there is no need to mention it.
+
+ - DOC updates to any of the documentation files, including README. Many
+ documentation updates are backported since they don't impact the
+ product's stability and may help users avoid bugs. So please
+ indicate in the commit message if a backport is desired. When a
+ feature gets documented, it's preferred that the doc comes in the
+ same patch as the feature or right after it, but not before.
+ Otherwise it becomes confusing for someone working on a code base
+ that includes only the doc patch, who won't understand why a
+ documented feature does not work as documented.
+
+ - REORG code reorganization. Some blocks may be moved to other places,
+ some important checks might be swapped, etc... These changes
+ always present a risk of regression. For this reason, they should
+ never be mixed with any bug fix nor functional change. Code is
+ only moved as-is. Indicating the risk of breakage is highly
+ recommended. Minor breakage is tolerated in such patches if trying
+ to fix it at once would make the whole change even more confusing.
+ That may happen for example when some #ifdefs need to be propagated
+ into every file affected by the change.
+
+ - BUILD updates or fixes for build issues. Changes to makefiles also fall
+ into this category. The risk of breakage should be indicated if
+ known. It is also appreciated to indicate what platforms and/or
+ configurations were tested after the change.
+
+ - OPTIM some code was optimised. Sometimes if the regression risk is very
+ low and the gains significant, such patches may be merged in the
+ stable branch. Depending on the amount of code changed or replaced
+ and the level of trust the author has in the change, the risk of
+ regression should be indicated.
+
+ - RELEASE release of a new version (development or stable).
+
+ - LICENSE licensing updates (may impact distro packagers).
+
+
+When the patch cannot be categorized, it's best not to put any type tag. This is
+commonly the case for new features, which development versions are mostly made
+of.
+
+Additionally, the importance of the patch or severity of the bug it fixes must
+be indicated when relevant. A single upper-case word is preferred, among :
+
+ - MINOR minor change, very low risk of impact. It is often the case for
+ code additions that don't touch live code. As a rule of thumb, a
+ patch tagged "MINOR" is safe enough to be backported to stable
+ branches. For a bug, it generally indicates an annoyance, nothing
+ more.
+
+ - MEDIUM medium risk, may cause unexpected regressions of low importance or
+ which may quickly be discovered. In short, the patch is safe but
+ touches working areas and it is always possible that you missed
+ something you didn't know existed (eg: adding a "case" entry or
+ an error message after adding an error code to an enum). For a bug,
+ it generally indicates something odd which requires changing the
+ configuration in an undesired way to work around the issue.
+
+ - MAJOR major risk of hidden regression. This happens when large parts of
+ the code are rearranged, when new timeouts are introduced, when
+ sensitive parts of the session scheduling are touched, etc... We
+ should only exceptionally find such patches in stable branches when
+ there is no other option to fix a design issue. For a bug, it
+ indicates severe reliability issues for which workarounds are
+ identified with or without performance impacts.
+
+ - CRITICAL medium-term reliability or security is at risk and workarounds,
+ if they exist, might not always be acceptable. An upgrade is
+ absolutely required. A maintenance release may be emitted even if
+ only one of these bugs is fixed. Note that this tag is only used
+ with bugs. Such patches must indicate what is the first version
+ affected, and if known, the commit ID which introduced the issue.
+
+The expected length of the commit message grows with the importance of the
+change. While a MINOR patch may sometimes be described in 1 or 2 lines, MAJOR
+or CRITICAL patches cannot have less than 10-15 lines describing exactly the
+impacts, otherwise the submitter's work will be considered rough sabotage.
+
+For BUILD, DOC and CLEANUP types, this tag is not always relevant and may be
+omitted.
+
+The area the patch applies to is quite important, because some areas are known
+to be similar in older versions, suggesting a backport might be desirable, and
+conversely, some areas are known to be specific to one version. The area is a
+single-word lowercase name the contributor finds clear enough to describe what
+part is being touched. The following tags are suggested but not exhaustive :
+
+ - examples example files. Be careful, sometimes these files are packaged.
+
+ - tests regression test files. No code is affected, no need to upgrade.
+
+ - init initialization code, arguments parsing, etc...
+
+ - config configuration parser, mostly used when adding new config keywords
+
+ - http the HTTP engine
+
+ - stats the stats reporting engine
+
+ - cli the stats socket CLI
+
+ - checks the health checks engine (eg: when adding new checks)
+
+ - sample the sample fetch system (new fetch or converter functions)
+
+ - acl the ACL processing core or some ACLs from other areas
+
+ - peers the peer synchronization engine
+
+ - lua the Lua scripting engine
+
+ - listeners everything related to incoming connection settings
+
+ - frontend everything related to incoming connection processing
+
+ - backend everything related to LB algorithms and server farm
+
+ - session session processing and flags (very sensitive, be careful)
+
+ - server server connection management, queueing
+
+ - ssl the SSL/TLS interface
+
+ - proxy proxy maintenance (start/stop)
+
+ - log log management
+
+ - poll any of the pollers
+
+ - halog the halog sub-component in the contrib directory
+
+ - contrib any addition to the contrib directory
+
+Other names may be invented when more precise indications are meaningful, for
+instance : "cookie" which indicates cookie processing in the HTTP core. Last,
+indicating the name of the affected file is also a good way to quickly spot
+changes. Many commits were already tagged with "stream_sock" or "cfgparse" for
+instance.
+
+It is required that the type of change, and the severity when relevant, be
+indicated in the patch subject, along with the touched area when applicable.
+Most of the time, all 3 will be present. The first two criteria should come
+before a first colon (':'). If both are present, they should be delimited
+with a slash ('/'). The 3rd criterion (the area) should appear next, also
+followed by a colon. Thus, all of the following messages are valid :
+
+Examples of messages :
+ - DOC: document options forwardfor to logasap
+ - DOC/MAJOR: reorganize the whole document and change indenting
+ - BUG: stats: connection reset counters must be plain ascii, not HTML
+ - BUG/MINOR: stats: connection reset counters must be plain ascii, not HTML
+ - MEDIUM: checks: support multi-packet health check responses
+ - RELEASE: Released version 1.4.2
+ - BUILD: stats: stdint is not present on solaris
+ - OPTIM/MINOR: halog: make fgets parse more bytes by blocks
+ - REORG/MEDIUM: move syscall redefinition to specific places
+
+Please do not use square brackets anymore around the tags, because they induce
+more work when merging patches, which need to be hand-edited not to lose the
+enclosed part.
+
+In fact, about the only square-bracket tag that still makes sense is '[RFC]'
+at the beginning of the subject, when you're asking for someone to review your
+change before getting it merged. If the patch is OK to be merged, then it can
+be merged as-is and the '[RFC]' tag will automatically be removed. If you don't
+want it to be merged at all, you can simply state it in the message, or use an
+alternate 'WIP/' prefix in front of your tag ("work in progress").
+
+The tags are not rigid, follow your intuition first, and they may be readjusted
+when your patch is merged. It may happen that the same patch has a different tag
+in two distinct branches. The reason is that a bug in one branch may just be a
+cleanup or safety measure in the other one because the code cannot be triggered.
+
+
+Working with Git
+----------------
+
+For a more efficient interaction between the mainline code and your code, you
+are strongly encouraged to try the Git version control system :
+
+ http://git-scm.com/
+
+It's very fast, lightweight and lets you undo/redo your work as often as you
+want, without making your mistakes visible to the rest of the world. It will
+definitely help you contribute quality code and take other people's feedback
+into consideration. In order to clone the HAProxy Git repository :
+
+ $ git clone http://git.haproxy.org/git/haproxy.git/ (development)
+
+If you decide to use Git for your developments, then your commit messages will
+have the subject line in the format described above, then the whole description
+of your work (mainly why you did it) will be in the body. You can directly send
+your commits to the mailing list, the format is convenient to read and process.
+
+It is recommended to create a branch for your work that is based on the master
+branch :
+
+ $ git checkout -b 20150920-fix-stats master
+
+You can then do your work and even experiment with multiple alternatives if you
+are not completely sure that your solution is the best one :
+
+ $ git checkout -b 20150920-fix-stats-v2
+
+Then reorder/merge/edit your patches :
+
+ $ git rebase -i master
+
+When you think you're ready, reread your whole patchset to ensure there is no
+formatting or style issue :
+
+ $ git show master..
+
+And once you're satisfied, you should update your master branch to be sure that
+nothing changed during your work (only needed if you left it unattended for days
+or weeks) :
+
+ $ git checkout -b 20150920-fix-stats-rebased
+ $ git fetch origin master:master
+ $ git rebase master
+
+You can then build a list of patches ready for submission like this :
+
+ $ git format-patch master
+
+The output files are the patches ready to be sent over e-mail, either via a
+regular e-mail or via git send-email (carefully check the man page). Don't
+destroy your other work branches until your patches get merged, it may happen
+that earlier designs will be preferred for various reasons. Patches should be
+sent to the mailing list : haproxy@formilux.org and CCed to relevant subsystem
+maintainers or authors of the modified files if their address appears at the
+top of the file.
+
+Please don't send pull requests, they are really inconvenient. First, a pull
+implies a merge operation and the code doesn't move fast enough to justify the
+use of merges. Second, pull requests are not easily commented on by the
+project's participants, contrary to e-mails where anyone is allowed to have an
+opinion and to express it.
+
+-- end
--- /dev/null
+HAPROXY's license - 2006/06/15
+
+Historically, haproxy has been covered by GPL version 2. However, an issue
+appeared with the GPL which would prevent external non-GPL code from being
+built using the headers provided with haproxy. My long-term goal is to build a
+core system able to load external modules to support specific application
+protocols.
+
+Since some protocols are found in rare environments (finance, industry, ...),
+some of them might be accessible only after signing an NDA. Enforcing GPL on
+such modules would only prevent them from ever being implemented, while not
+providing anything useful to ordinary users.
+
+For this reason, I *want* to be able to support binary-only external modules
+when needed, with a GPL core and GPL modules for standard protocols, so that
+people fixing bugs don't keep them secret to try to stay ahead of the
+competition.
+
+The solution was then to apply the LGPL license to the exportable include
+files, while keeping the GPL for all the rest. This way, it still is mandatory
+to redistribute modified code upon customer request, but at the same time, it
+is expressly permitted to write, compile, link and load non-GPL code using the
+LGPL header files and not to distribute them if it causes a legal problem.
+
+Of course, users are strongly encouraged to continue the work under GPL as long
+as possible, since this license has allowed useful enhancements, contributions
+and fixes from talented people around the world.
+
+Due to the incompatibility between the GPL and the OpenSSL license, you must
+apply the GPL/LGPL license with the following exception:
+This program is released under the GPL with the additional exemption that
+compiling, linking, and/or using OpenSSL is allowed.
+
+The text of the licenses lies in the "doc" directory. All the files provided in
+this package are covered by the GPL unless expressly stated otherwise in them.
+Every patch or contribution provided by external people will by default comply
+with the license of the files it affects, or be rejected.
+
+Willy Tarreau - w@1wt.eu
--- /dev/null
+This file contains a list of people who are responsible for certain parts of
+the HAProxy project and who have authority on them. This means that these
+people have to be consulted before doing any change in the parts they maintain,
+including when fixing bugs. These persons are allowed to reject any change on
+the parts they maintain, and in parallel they try their best to ensure these
+parts work well. Similarly, any change to these parts that is not validated
+by them will be rejected.
+
+The best way to deal with such subsystems when sending patches is to send the
+patches to the mailing list and to CC these people. When no maintainer is
+listed for a subsystem, you can simply send your changes the usual way. It is
+also a sign that, if you want to strengthen your skills on certain parts, you
+can yourself become a maintainer of the parts you care a lot about.
+
+Please do not ask them to troubleshoot your bugs; it's not their job, even
+though they may occasionally help as time permits.
+
+List of maintainers
+-------------------
+
+Lua
+Maintainer: Thierry Fournier <tfournier@arpalert.org>
+Files: src/hlua.c, include/*/hlua.h
+
+Maps and pattern matching
+Maintainer: Thierry Fournier <tfournier@arpalert.org>
+Files: src/maps.c, src/pattern.c, include/*/maps.h, include/*/pattern.h
+
+DNS
+Maintainer: Baptiste Assmann <bedis9@gmail.com>
+Files: src/dns.c, include/*/dns.h
+
+SSL
+Maintainer: Emeric Brun <ebrun@haproxy.com>
+Files: src/ssl_sock.c, include/*/ssl_sock.h
+
+Peers
+Maintainer: Emeric Brun <ebrun@haproxy.com>
+Files: src/peers.c, include/*/peers.h
+
+Doc to HTML converter (dconv)
+Maintainer: Cyril Bonté <cyril.bonte@free.fr>
+Files: doc/*.txt
+Note: ask Cyril before changing any doc's format or structure.
+
+Health checks
+Files: src/checks.c, include/*/checks.h
+Maintainers: Simon Horman for external-check, Baptiste Assmann for tcp-check
+Note: health checks are fragile and have been broken many times, so please
+ consult the relevant maintainers if you want to change these specific
+ parts.
+
+Mailers
+Maintainer: Simon Horman <horms@verge.net.au>
+Files: src/mailers.c, include/*/mailers.h
+
+DeviceAtlas device identification
+Maintainer: David Carlier <dcarlier@afilias.info>
+Files: src/da.c, include/*/da.h
+
--- /dev/null
+# This GNU Makefile supports different OS and CPU combinations.
+#
+# You should use it this way :
+# [g]make TARGET=os ARCH=arch CPU=cpu USE_xxx=1 ...
+#
+# Valid USE_* options are the following. Most of them are automatically set by
+# the TARGET, others have to be explicitly specified :
+# USE_DLMALLOC : enable use of dlmalloc (see DLMALLOC_SRC)
+# USE_EPOLL : enable epoll() on Linux 2.6. Automatic.
+# USE_GETSOCKNAME : enable getsockname() on Linux 2.2. Automatic.
+# USE_KQUEUE : enable kqueue() on BSD. Automatic.
+# USE_MY_EPOLL : redefine epoll_* syscalls. Automatic.
+# USE_MY_SPLICE : redefine the splice syscall if the build fails without it.
+# USE_NETFILTER : enable netfilter on Linux. Automatic.
+# USE_PCRE : enable use of libpcre for regex. Recommended.
+# USE_PCRE_JIT : enable JIT for faster regex on libpcre >= 8.32
+# USE_POLL : enable poll(). Automatic.
+# USE_PRIVATE_CACHE : disable shared memory cache of ssl sessions.
+# USE_PTHREAD_PSHARED : enable pthread process shared mutex on sslcache.
+# USE_REGPARM : enable regparm optimization. Recommended on x86.
+# USE_STATIC_PCRE : enable static libpcre. Recommended.
+# USE_TPROXY : enable transparent proxy. Automatic.
+# USE_LINUX_TPROXY : enable full transparent proxy. Automatic.
+# USE_LINUX_SPLICE : enable kernel 2.6 splicing. Automatic.
+# USE_LIBCRYPT : enable crypted passwords using -lcrypt
+# USE_CRYPT_H : set it if your system requires including crypt.h
+# USE_VSYSCALL : enable vsyscall on Linux x86, bypassing libc
+# USE_GETADDRINFO : use getaddrinfo() to resolve IPv6 host names.
+# USE_OPENSSL : enable use of OpenSSL. Recommended, but see below.
+# USE_LUA : enable Lua support.
+# USE_FUTEX : enable use of futex on kernel 2.6. Automatic.
+# USE_ACCEPT4 : enable use of accept4() on linux. Automatic.
+# USE_MY_ACCEPT4 : use our own implementation of accept4() if glibc < 2.10.
+# USE_ZLIB : enable zlib library support.
+# USE_SLZ : enable slz library instead of zlib (pick at most one).
+# USE_CPU_AFFINITY : enable pinning processes to CPU on Linux. Automatic.
+# USE_TFO : enable TCP fast open. Supported on Linux >= 3.7.
+# USE_NS : enable network namespace support. Supported on Linux >= 2.6.24.
+# USE_DL : enable it if your system requires -ldl. Automatic on Linux.
+# USE_DEVICEATLAS : enable the DeviceAtlas API.
+# USE_51DEGREES : enable third party device detection library from 51Degrees
+#
+# Options can be forced by specifying "USE_xxx=1" or can be disabled by using
+# "USE_xxx=" (empty string).
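
For example (illustrative invocations; pick the TARGET and USE_* flags
matching your system) :

```shell
# Typical invocations (sketch only, these need the haproxy sources) :
#   make TARGET=linux2628 CPU=native USE_OPENSSL=1 USE_PCRE=1 USE_ZLIB=1
#   make TARGET=linux2628 USE_POLL=    # force-disable an automatic option
# The empty "USE_xxx=" form works because a command-line variable overrides
# any assignment made inside a Makefile, as this standalone demo shows :
cd "$(mktemp -d)"
printf 'USE_POLL = implicit\nall:\n\t@echo "USE_POLL=[$(USE_POLL)]"\n' > demo.mk
make -s -f demo.mk              # prints USE_POLL=[implicit]
make -s -f demo.mk USE_POLL=    # prints USE_POLL=[]
```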
+#
+# Variables useful for packagers :
+# CC is set to "gcc" by default and is used for compilation only.
+# LD is set to "gcc" by default and is used for linking only.
+# ARCH may be useful to force build of 32-bit binary on 64-bit systems
+# CFLAGS is automatically set for the specified CPU and may be overridden.
+# LDFLAGS is automatically set to -g and may be overridden.
+# SMALL_OPTS may be used to specify some options to shrink memory usage.
+# DEBUG may be used to set some internal debugging options.
+# ADDINC may be used to complete the include path in the form -Ipath.
+# ADDLIB may be used to complete the library list in the form -Lpath -llib.
+# DEFINE may be used to specify any additional define, which will be reported
+# by "haproxy -vv" in CFLAGS.
+# SILENT_DEFINE may be used to specify other defines which will not be
+# reported by "haproxy -vv".
+# EXTRA is used to force building or not building some extra tools. By
+# default on Linux 2.6+, it contains "haproxy-systemd-wrapper".
+# DESTDIR is not set by default and is used for installation only.
+# It might be useful to set DESTDIR if you want to install haproxy
+# in a sandbox.
+# PREFIX is set to "/usr/local" by default and is used for installation only.
+# SBINDIR is set to "$(PREFIX)/sbin" by default and is used for installation
+# only.
+# MANDIR is set to "$(PREFIX)/share/man" by default and is used for
+# installation only.
+# DOCDIR is set to "$(PREFIX)/doc/haproxy" by default and is used for
+# installation only.
+#
+# Other variables :
+# DLMALLOC_SRC : build with dlmalloc, indicate the location of dlmalloc.c.
+# DLMALLOC_THRES : should match PAGE_SIZE on every platform (default: 4096).
+# PCREDIR : force the path to libpcre.
+# PCRE_LIB : force the lib path to libpcre (defaults to $PCREDIR/lib).
+# PCRE_INC : force the include path to libpcre ($PCREDIR/inc)
+# SSL_LIB : force the lib path to libssl/libcrypto
+# SSL_INC : force the include path to libssl/libcrypto
+# LUA_LIB : force the lib path to lua
+# LUA_INC : force the include path to lua
+# LUA_LIB_NAME : force the lib name (or automatically evaluated, by order of
+# priority : lua5.3, lua53, lua).
+# IGNOREGIT : ignore GIT commit versions if set.
+# VERSION : force haproxy version reporting.
+# SUBVERS : add a sub-version (eg: platform, model, ...).
+# VERDATE : force haproxy's release date.
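
For instance (paths purely illustrative), a packager would typically run a
staged installation such as "make install DESTDIR=/tmp/staging PREFIX=/usr"
after the build. The small standalone demo below only illustrates how SBINDIR
is derived from PREFIX and combined with DESTDIR, mirroring the defaults set
just below :

```shell
# Mini Makefile mirroring the PREFIX/SBINDIR/DESTDIR behaviour (demo only).
cd "$(mktemp -d)"
printf 'PREFIX = /usr/local\nSBINDIR = $(PREFIX)/sbin\nshow:\n\t@echo $(DESTDIR)$(SBINDIR)\n' > inst.mk
make -s -f inst.mk show                                    # /usr/local/sbin
make -s -f inst.mk show DESTDIR=/tmp/staging PREFIX=/usr   # /tmp/staging/usr/sbin
```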
+
+#### Installation options.
+DESTDIR =
+PREFIX = /usr/local
+SBINDIR = $(PREFIX)/sbin
+MANDIR = $(PREFIX)/share/man
+DOCDIR = $(PREFIX)/doc/haproxy
+
+#### TARGET system
+# Use TARGET=<target_name> to optimize for a specific target OS among the
+# following list (use the default "generic" if uncertain) :
+# generic, linux22, linux24, linux24e, linux26, linux2628, solaris,
+# freebsd, osx, openbsd, netbsd, cygwin, custom, aix51, aix52
+TARGET =
+
+#### TARGET CPU
+# Use CPU=<cpu_name> to optimize for a particular CPU, among the following
+# list :
+# generic, native, i586, i686, ultrasparc, custom
+CPU = generic
+
+#### Architecture, used when not building for native architecture
+# Use ARCH=<arch_name> to force build for a specific architecture. Known
+# architectures will lead to "-m32" or "-m64" being added to CFLAGS and
+# LDFLAGS. This can be required to build 32-bit binaries on 64-bit targets.
+# Currently, only 32, 64, x86_64, i386, i486, i586 and i686 are understood.
+ARCH =
+
+#### Toolchain options.
+# GCC is normally used both for compiling and linking.
+CC = gcc
+LD = $(CC)
+
+#### Debug flags (typically "-g").
+# Those flags only feed CFLAGS so it is not mandatory to use this form.
+DEBUG_CFLAGS = -g
+
+#### Compiler-specific flags that may be used to disable some negative over-
+# optimization or to silence some warnings. -fno-strict-aliasing is needed with
+# gcc >= 4.4.
+SPEC_CFLAGS = -fno-strict-aliasing -Wdeclaration-after-statement
+
+#### Memory usage tuning
+# If small memory footprint is required, you can reduce the buffer size. There
+# are 2 buffers per concurrent session, so 16 kB buffers will eat 32 MB memory
+# with 1000 concurrent sessions. Putting it slightly lower than a page size
+# will prevent the additional parameters from going beyond a page. 8030 bytes is
+# exactly 5.5 TCP segments of 1460 bytes and is generally good. Useful tuning
+# macros include :
+# SYSTEM_MAXCONN, BUFSIZE, MAXREWRITE, REQURI_LEN, CAPTURE_LEN.
+# Example: SMALL_OPTS = -DBUFSIZE=8030 -DMAXREWRITE=1030 -DSYSTEM_MAXCONN=1024
+SMALL_OPTS =
+
+#### Debug settings
+# You can enable debugging on specific code parts by setting DEBUG=-DDEBUG_xxx.
+# Currently defined DEBUG macros include DEBUG_FULL, DEBUG_MEMORY, DEBUG_FSM,
+# DEBUG_HASH and DEBUG_AUTH. Please check sources for exact meaning or do not
+# use at all.
+DEBUG =
+
+#### Trace options
+# Use TRACE=1 to trace function calls to file "trace.out" or to stderr if not
+# possible.
+TRACE =
+
+#### Additional include and library dirs
+# Redefine this if you want to add some special PATH to include/libs
+ADDINC =
+ADDLIB =
+
+#### Specific macro definitions
+# Use DEFINE=-Dxxx to set any tunable macro. Anything declared here will appear
+# in the build options reported by "haproxy -vv". Use SILENT_DEFINE if you do
+# not want to pollute the report with complex defines.
+# The following settings might be of interest when SSL is enabled :
+# LISTEN_DEFAULT_CIPHERS is a cipher suite string used to set the default SSL
+# ciphers on "bind" lines instead of using OpenSSL's defaults.
+# CONNECT_DEFAULT_CIPHERS is a cipher suite string used to set the default
+# SSL ciphers on "server" lines instead of using OpenSSL's defaults.
+DEFINE =
+SILENT_DEFINE =
+
+#### extra programs to build (eg: haproxy-systemd-wrapper)
+# Force this to enable building extra programs or to disable them.
+# It's automatically appended depending on the targets.
+EXTRA =
+
+#### CPU-dependent optimizations
+# Some CFLAGS are set by default depending on the target CPU. Those flags only
+# feed CPU_CFLAGS, which in turn feed CFLAGS, so it is not mandatory to use
+# them. You should not have to change these options. Better use CPU_CFLAGS or
+# even CFLAGS instead.
+CPU_CFLAGS.generic = -O2
+CPU_CFLAGS.native = -O2 -march=native
+CPU_CFLAGS.i586 = -O2 -march=i586
+CPU_CFLAGS.i686 = -O2 -march=i686
+CPU_CFLAGS.ultrasparc = -O6 -mcpu=v9 -mtune=ultrasparc
+CPU_CFLAGS = $(CPU_CFLAGS.$(CPU))
+
+#### ARCH-dependent flags, may be overridden by CPU flags
+ARCH_FLAGS.32 = -m32
+ARCH_FLAGS.64 = -m64
+ARCH_FLAGS.i386 = -m32 -march=i386
+ARCH_FLAGS.i486 = -m32 -march=i486
+ARCH_FLAGS.i586 = -m32 -march=i586
+ARCH_FLAGS.i686 = -m32 -march=i686
+ARCH_FLAGS.x86_64 = -m64 -march=x86-64
+ARCH_FLAGS = $(ARCH_FLAGS.$(ARCH))
+
+#### Common CFLAGS
+# These CFLAGS contain general optimization options, CPU-specific optimizations
+# and debug flags. They may be overridden by some distributions which prefer to
+# set all of them at once instead of playing with the CPU and DEBUG variables.
+CFLAGS = $(ARCH_FLAGS) $(CPU_CFLAGS) $(DEBUG_CFLAGS) $(SPEC_CFLAGS)
+
+#### Common LDFLAGS
+# These LDFLAGS are used as the first "ld" options, regardless of any library
+# path or any other option. They may be changed to add any linker-specific
+# option at the beginning of the ld command line.
+LDFLAGS = $(ARCH_FLAGS) -g
+
+#### Target system options
+# Depending on the target platform, some options are set, as well as some
+# CFLAGS and LDFLAGS. The USE_* values are set to "implicit" so that they are
+# not reported in the build options string. You should not have to change
+# anything there. poll() is always supported, unless explicitly disabled by
+# passing USE_POLL="" on the make command line.
+USE_POLL = default
+
+ifeq ($(TARGET),generic)
+ # generic system target has nothing specific
+ USE_POLL = implicit
+ USE_TPROXY = implicit
+else
+ifeq ($(TARGET),linux22)
+ # This is for Linux 2.2
+ USE_GETSOCKNAME = implicit
+ USE_POLL = implicit
+ USE_TPROXY = implicit
+ USE_LIBCRYPT = implicit
+ USE_DL = implicit
+else
+ifeq ($(TARGET),linux24)
+ # This is for standard Linux 2.4 with netfilter but without epoll()
+ USE_GETSOCKNAME = implicit
+ USE_NETFILTER = implicit
+ USE_POLL = implicit
+ USE_TPROXY = implicit
+ USE_LIBCRYPT = implicit
+ USE_DL = implicit
+else
+ifeq ($(TARGET),linux24e)
+ # This is for enhanced Linux 2.4 with netfilter and epoll() patch > 0.21
+ USE_GETSOCKNAME = implicit
+ USE_NETFILTER = implicit
+ USE_POLL = implicit
+ USE_EPOLL = implicit
+ USE_MY_EPOLL = implicit
+ USE_TPROXY = implicit
+ USE_LIBCRYPT = implicit
+ USE_DL = implicit
+else
+ifeq ($(TARGET),linux26)
+ # This is for standard Linux 2.6 with netfilter and standard epoll()
+ USE_GETSOCKNAME = implicit
+ USE_NETFILTER = implicit
+ USE_POLL = implicit
+ USE_EPOLL = implicit
+ USE_TPROXY = implicit
+ USE_LIBCRYPT = implicit
+ USE_FUTEX = implicit
+ EXTRA += haproxy-systemd-wrapper
+ USE_DL = implicit
+else
+ifeq ($(TARGET),linux2628)
+ # This is for standard Linux >= 2.6.28 with netfilter, epoll, tproxy and splice
+ USE_GETSOCKNAME = implicit
+ USE_NETFILTER = implicit
+ USE_POLL = implicit
+ USE_EPOLL = implicit
+ USE_TPROXY = implicit
+ USE_LIBCRYPT = implicit
+ USE_LINUX_SPLICE= implicit
+ USE_LINUX_TPROXY= implicit
+ USE_ACCEPT4 = implicit
+ USE_FUTEX = implicit
+ USE_CPU_AFFINITY= implicit
+ ASSUME_SPLICE_WORKS= implicit
+ EXTRA += haproxy-systemd-wrapper
+ USE_DL = implicit
+else
+ifeq ($(TARGET),solaris)
+ # This is for Solaris 8
+ # We also enable getaddrinfo() which works since solaris 8.
+ USE_POLL = implicit
+ TARGET_CFLAGS = -fomit-frame-pointer -DFD_SETSIZE=65536 -D_REENTRANT
+ TARGET_LDFLAGS = -lnsl -lsocket
+ USE_TPROXY = implicit
+ USE_LIBCRYPT = implicit
+ USE_CRYPT_H = implicit
+ USE_GETADDRINFO = implicit
+else
+ifeq ($(TARGET),freebsd)
+ # This is for FreeBSD
+ USE_POLL = implicit
+ USE_KQUEUE = implicit
+ USE_TPROXY = implicit
+ USE_LIBCRYPT = implicit
+else
+ifeq ($(TARGET),osx)
+ # This is for Mac OS/X
+ USE_POLL = implicit
+ USE_KQUEUE = implicit
+ USE_TPROXY = implicit
+ USE_LIBCRYPT = implicit
+else
+ifeq ($(TARGET),openbsd)
+ # This is for OpenBSD >= 3.0
+ USE_POLL = implicit
+ USE_KQUEUE = implicit
+ USE_TPROXY = implicit
+else
+ifeq ($(TARGET),netbsd)
+ # This is for NetBSD
+ USE_POLL = implicit
+ USE_KQUEUE = implicit
+ USE_TPROXY = implicit
+else
+ifeq ($(TARGET),aix51)
+ # This is for AIX 5.1
+ USE_POLL = implicit
+ USE_LIBCRYPT = implicit
+ TARGET_CFLAGS = -Dss_family=__ss_family
+ DEBUG_CFLAGS =
+else
+ifeq ($(TARGET),aix52)
+ # This is for AIX 5.2 and later
+ USE_POLL = implicit
+ USE_LIBCRYPT = implicit
+ TARGET_CFLAGS = -D_MSGQSUPPORT
+ DEBUG_CFLAGS =
+else
+ifeq ($(TARGET),cygwin)
+ # This is for Cygwin
+ # Cygwin adds IPv6 support only in version 1.7 (in beta right now).
+ USE_POLL = implicit
+ USE_TPROXY = implicit
+ TARGET_CFLAGS = $(if $(filter 1.5.%, $(shell uname -r)), -DUSE_IPV6 -DAF_INET6=23 -DINET6_ADDRSTRLEN=46, )
+endif # cygwin
+endif # aix52
+endif # aix51
+endif # netbsd
+endif # openbsd
+endif # osx
+endif # freebsd
+endif # solaris
+endif # linux2628
+endif # linux26
+endif # linux24e
+endif # linux24
+endif # linux22
+endif # generic
+
+
+#### Old-style REGEX library settings for compatibility with previous setups.
+# It is still possible to use REGEX=<regex_lib> to select an alternative regex
+# library. By default, we use libc's regex. On Solaris 8/Sparc, grouping seems
+# to be broken using libc, so consider using pcre instead. Supported values are
+# "libc", "pcre", and "static-pcre". Use of this method is deprecated in favor
+# of "USE_PCRE" and "USE_STATIC_PCRE" (see build options below).
+REGEX = libc
+
+ifeq ($(REGEX),pcre)
+USE_PCRE = 1
+$(warning WARNING! use of "REGEX=pcre" is deprecated, consider using "USE_PCRE=1" instead.)
+endif
+
+ifeq ($(REGEX),static-pcre)
+USE_STATIC_PCRE = 1
+$(warning WARNING! use of "REGEX=static-pcre" is deprecated, consider using "USE_STATIC_PCRE=1" instead.)
+endif
+
+#### Old-style TPROXY settings
+ifneq ($(findstring -DTPROXY,$(DEFINE)),)
+USE_TPROXY = 1
+$(warning WARNING! use of "DEFINE=-DTPROXY" is deprecated, consider using "USE_TPROXY=1" instead.)
+endif
+
+
+#### Determine version, sub-version and release date.
+# If GIT is found, and IGNOREGIT is not set, VERSION, SUBVERS and VERDATE are
+# extracted from the last commit. Otherwise, use the contents of the files
+# holding the same names in the current directory.
+
+ifeq ($(IGNOREGIT),)
+VERSION := $(shell [ -d .git/. ] && ref=`(git describe --tags --match 'v*' --abbrev=0) 2>/dev/null` && ref=$${ref%-g*} && echo "$${ref\#v}")
+ifneq ($(VERSION),)
+# OK git is there and works.
+SUBVERS := $(shell comms=`git log --format=oneline --no-merges v$(VERSION).. 2>/dev/null | wc -l | tr -dc '0-9'`; commit=`(git log -1 --pretty=%h --abbrev=6) 2>/dev/null`; [ $$comms -gt 0 ] && echo "-$$commit-$$comms")
+VERDATE := $(shell git log -1 --pretty=format:%ci | cut -f1 -d' ' | tr '-' '/')
+endif
+endif
+
+# Last commit version not found, take it from the files.
+ifeq ($(VERSION),)
+VERSION := $(shell cat VERSION 2>/dev/null || touch VERSION)
+endif
+ifeq ($(SUBVERS),)
+SUBVERS := $(shell (grep -v '\$$Format' SUBVERS 2>/dev/null || touch SUBVERS) | head -n 1)
+endif
+ifeq ($(VERDATE),)
+VERDATE := $(shell (grep -v '^\$$Format' VERDATE 2>/dev/null || touch VERDATE) | head -n 1 | cut -f1 -d' ' | tr '-' '/')
+endif
+
+#### Build options
+# Do not change these ones, enable USE_* variables instead.
+OPTIONS_CFLAGS =
+OPTIONS_LDFLAGS =
+OPTIONS_OBJS =
+
+# This variable collects all USE_* values except those set to "implicit". This
+# is used to report a list of all flags which were used to build this version.
+# Do not assign anything to it.
+BUILD_OPTIONS =
+
+# Return USE_xxx=$(USE_xxx) unless $(USE_xxx) = "implicit"
+# Usage:
+# BUILD_OPTIONS += $(call ignore_implicit,USE_xxx)
+ignore_implicit = $(patsubst %=implicit,,$(1)=$($(1)))
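+
+# For example (values purely illustrative): $(call ignore_implicit,USE_POLL)
+# expands to "USE_POLL=1" when USE_POLL was explicitly set to 1, and to an
+# empty string when USE_POLL was set to "implicit" by one of the targets
+# above, so implicit defaults never clutter BUILD_OPTIONS.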
+
+ifneq ($(USE_TCPSPLICE),)
+$(error experimental option USE_TCPSPLICE has been removed, check USE_LINUX_SPLICE)
+endif
+
+ifneq ($(USE_LINUX_SPLICE),)
+OPTIONS_CFLAGS += -DCONFIG_HAP_LINUX_SPLICE
+BUILD_OPTIONS += $(call ignore_implicit,USE_LINUX_SPLICE)
+endif
+
+ifneq ($(USE_TPROXY),)
+OPTIONS_CFLAGS += -DTPROXY
+BUILD_OPTIONS += $(call ignore_implicit,USE_TPROXY)
+endif
+
+ifneq ($(USE_LINUX_TPROXY),)
+OPTIONS_CFLAGS += -DCONFIG_HAP_LINUX_TPROXY
+BUILD_OPTIONS += $(call ignore_implicit,USE_LINUX_TPROXY)
+endif
+
+ifneq ($(USE_LIBCRYPT),)
+OPTIONS_CFLAGS += -DCONFIG_HAP_CRYPT
+BUILD_OPTIONS += $(call ignore_implicit,USE_LIBCRYPT)
+OPTIONS_LDFLAGS += -lcrypt
+endif
+
+ifneq ($(USE_CRYPT_H),)
+OPTIONS_CFLAGS += -DNEED_CRYPT_H
+BUILD_OPTIONS += $(call ignore_implicit,USE_CRYPT_H)
+endif
+
+ifneq ($(USE_GETADDRINFO),)
+OPTIONS_CFLAGS += -DUSE_GETADDRINFO
+BUILD_OPTIONS += $(call ignore_implicit,USE_GETADDRINFO)
+endif
+
+ifneq ($(USE_SLZ),)
+# Use SLZ_INC and SLZ_LIB to force path to zlib.h and libz.{a,so} if needed.
+SLZ_INC =
+SLZ_LIB =
+OPTIONS_CFLAGS += -DUSE_SLZ $(if $(SLZ_INC),-I$(SLZ_INC))
+BUILD_OPTIONS += $(call ignore_implicit,USE_SLZ)
+OPTIONS_LDFLAGS += $(if $(SLZ_LIB),-L$(SLZ_LIB)) -lslz
+endif
+
+ifneq ($(USE_ZLIB),)
+# Use ZLIB_INC and ZLIB_LIB to force path to zlib.h and libz.{a,so} if needed.
+ZLIB_INC =
+ZLIB_LIB =
+OPTIONS_CFLAGS += -DUSE_ZLIB $(if $(ZLIB_INC),-I$(ZLIB_INC))
+BUILD_OPTIONS += $(call ignore_implicit,USE_ZLIB)
+OPTIONS_LDFLAGS += $(if $(ZLIB_LIB),-L$(ZLIB_LIB)) -lz
+endif
+
+ifneq ($(USE_POLL),)
+OPTIONS_CFLAGS += -DENABLE_POLL
+OPTIONS_OBJS += src/ev_poll.o
+BUILD_OPTIONS += $(call ignore_implicit,USE_POLL)
+endif
+
+ifneq ($(USE_EPOLL),)
+OPTIONS_CFLAGS += -DENABLE_EPOLL
+OPTIONS_OBJS += src/ev_epoll.o
+BUILD_OPTIONS += $(call ignore_implicit,USE_EPOLL)
+endif
+
+ifneq ($(USE_MY_EPOLL),)
+OPTIONS_CFLAGS += -DUSE_MY_EPOLL
+BUILD_OPTIONS += $(call ignore_implicit,USE_MY_EPOLL)
+endif
+
+ifneq ($(USE_KQUEUE),)
+OPTIONS_CFLAGS += -DENABLE_KQUEUE
+OPTIONS_OBJS += src/ev_kqueue.o
+BUILD_OPTIONS += $(call ignore_implicit,USE_KQUEUE)
+endif
+
+ifneq ($(USE_VSYSCALL),)
+OPTIONS_OBJS += src/i386-linux-vsys.o
+OPTIONS_CFLAGS += -DCONFIG_HAP_LINUX_VSYSCALL
+BUILD_OPTIONS += $(call ignore_implicit,USE_VSYSCALL)
+endif
+
+ifneq ($(USE_CPU_AFFINITY),)
+OPTIONS_CFLAGS += -DUSE_CPU_AFFINITY
+BUILD_OPTIONS += $(call ignore_implicit,USE_CPU_AFFINITY)
+endif
+
+ifneq ($(USE_MY_SPLICE),)
+OPTIONS_CFLAGS += -DUSE_MY_SPLICE
+BUILD_OPTIONS += $(call ignore_implicit,USE_MY_SPLICE)
+endif
+
+ifneq ($(ASSUME_SPLICE_WORKS),)
+OPTIONS_CFLAGS += -DASSUME_SPLICE_WORKS
+BUILD_OPTIONS += $(call ignore_implicit,ASSUME_SPLICE_WORKS)
+endif
+
+ifneq ($(USE_ACCEPT4),)
+OPTIONS_CFLAGS += -DUSE_ACCEPT4
+BUILD_OPTIONS += $(call ignore_implicit,USE_ACCEPT4)
+endif
+
+ifneq ($(USE_MY_ACCEPT4),)
+OPTIONS_CFLAGS += -DUSE_MY_ACCEPT4
+BUILD_OPTIONS += $(call ignore_implicit,USE_MY_ACCEPT4)
+endif
+
+ifneq ($(USE_NETFILTER),)
+OPTIONS_CFLAGS += -DNETFILTER
+BUILD_OPTIONS += $(call ignore_implicit,USE_NETFILTER)
+endif
+
+ifneq ($(USE_GETSOCKNAME),)
+OPTIONS_CFLAGS += -DUSE_GETSOCKNAME
+BUILD_OPTIONS += $(call ignore_implicit,USE_GETSOCKNAME)
+endif
+
+ifneq ($(USE_REGPARM),)
+OPTIONS_CFLAGS += -DCONFIG_REGPARM=3
+BUILD_OPTIONS += $(call ignore_implicit,USE_REGPARM)
+endif
+
+ifneq ($(USE_DL),)
+BUILD_OPTIONS += $(call ignore_implicit,USE_DL)
+OPTIONS_LDFLAGS += -ldl
+endif
+
+# report DLMALLOC_SRC only if explicitly specified
+ifneq ($(DLMALLOC_SRC),)
+BUILD_OPTIONS += DLMALLOC_SRC=$(DLMALLOC_SRC)
+endif
+
+ifneq ($(USE_DLMALLOC),)
+BUILD_OPTIONS += $(call ignore_implicit,USE_DLMALLOC)
+ifeq ($(DLMALLOC_SRC),)
+DLMALLOC_SRC=src/dlmalloc.c
+endif
+endif
+
+ifneq ($(DLMALLOC_SRC),)
+# DLMALLOC_THRES may be changed to match PAGE_SIZE on every platform
+DLMALLOC_THRES = 4096
+OPTIONS_OBJS += src/dlmalloc.o
+endif
+
+ifneq ($(USE_OPENSSL),)
+# OpenSSL is packaged in various forms and with various dependencies.
+# In general -lssl is enough, but on some platforms, -lcrypto may be needed,
+# reason why it's added by default. Some even need -lz, then you'll need to
+# pass it in the "ADDLIB" variable if needed. If your SSL libraries are not
+# in the usual path, use SSL_INC=/path/to/inc and SSL_LIB=/path/to/lib.
+BUILD_OPTIONS += $(call ignore_implicit,USE_OPENSSL)
+OPTIONS_CFLAGS += -DUSE_OPENSSL $(if $(SSL_INC),-I$(SSL_INC))
+OPTIONS_LDFLAGS += $(if $(SSL_LIB),-L$(SSL_LIB)) -lssl -lcrypto
+OPTIONS_OBJS += src/ssl_sock.o src/shctx.o
+ifneq ($(USE_PRIVATE_CACHE),)
+OPTIONS_CFLAGS += -DUSE_PRIVATE_CACHE
+else
+ifneq ($(USE_PTHREAD_PSHARED),)
+OPTIONS_CFLAGS += -DUSE_PTHREAD_PSHARED
+OPTIONS_LDFLAGS += -lpthread
+else
+ifneq ($(USE_FUTEX),)
+OPTIONS_CFLAGS += -DUSE_SYSCALL_FUTEX
+endif
+endif
+endif
+endif
+
+ifneq ($(USE_LUA),)
+check_lua_lib = $(shell echo "int main(){}" | $(CC) -o /dev/null -x c - $(2) -l$(1) 2>/dev/null && echo $(1))
+
+BUILD_OPTIONS += $(call ignore_implicit,USE_LUA)
+OPTIONS_CFLAGS += -DUSE_LUA $(if $(LUA_INC),-I$(LUA_INC))
+LUA_LD_FLAGS := $(if $(LUA_LIB),-L$(LUA_LIB))
+ifeq ($(LUA_LIB_NAME),)
+# Try to automatically detect the Lua library
+LUA_LIB_NAME := $(firstword $(foreach lib,lua5.3 lua53 lua,$(call check_lua_lib,$(lib),$(LUA_LD_FLAGS))))
+ifeq ($(LUA_LIB_NAME),)
+$(error unable to automatically detect the Lua library name, you can enforce its name with LUA_LIB_NAME=<name> (where <name> can be lua5.3, lua53, lua, ...))
+endif
+endif
+
+OPTIONS_LDFLAGS += $(LUA_LD_FLAGS) -l$(LUA_LIB_NAME) -lm
+ifneq ($(USE_DL),)
+OPTIONS_LDFLAGS += -ldl
+endif
+OPTIONS_OBJS += src/hlua.o
+endif
+
+ifneq ($(USE_DEVICEATLAS),)
+ifeq ($(USE_PCRE),)
+$(error the DeviceAtlas module needs the PCRE library in order to compile)
+endif
+# Use DEVICEATLAS_SRC and possibly DEVICEATLAS_INC and DEVICEATLAS_LIB to force path
+# to DeviceAtlas headers and libraries if needed.
+DEVICEATLAS_SRC =
+DEVICEATLAS_INC = $(DEVICEATLAS_SRC)
+DEVICEATLAS_LIB = $(DEVICEATLAS_SRC)
+OPTIONS_OBJS += $(DEVICEATLAS_LIB)/json.o
+OPTIONS_OBJS += $(DEVICEATLAS_LIB)/dac.o
+OPTIONS_OBJS += src/da.o
+OPTIONS_CFLAGS += -DUSE_DEVICEATLAS $(if $(DEVICEATLAS_INC),-I$(DEVICEATLAS_INC))
+BUILD_OPTIONS += $(call ignore_implicit,USE_DEVICEATLAS)
+endif
+
+ifneq ($(USE_51DEGREES),)
+# Use 51DEGREES_SRC and possibly 51DEGREES_INC and 51DEGREES_LIB to force path
+# to 51degrees headers and libraries if needed.
+51DEGREES_SRC =
+51DEGREES_INC = $(51DEGREES_SRC)
+51DEGREES_LIB = $(51DEGREES_SRC)
+OPTIONS_OBJS += $(51DEGREES_LIB)/../cityhash/city.o
+OPTIONS_OBJS += $(51DEGREES_LIB)/51Degrees.o
+OPTIONS_OBJS += src/51d.o
+OPTIONS_CFLAGS += -DUSE_51DEGREES -DFIFTYONEDEGREES_NO_THREADING $(if $(51DEGREES_INC),-I$(51DEGREES_INC))
+BUILD_OPTIONS += $(call ignore_implicit,USE_51DEGREES)
+OPTIONS_LDFLAGS += $(if $(51DEGREES_LIB),-L$(51DEGREES_LIB)) -lm
+endif
+
+ifneq ($(USE_PCRE)$(USE_STATIC_PCRE)$(USE_PCRE_JIT),)
+# PCREDIR is used to automatically construct the PCRE_INC and PCRE_LIB paths,
+# by appending /include and /lib respectively. If your system does not use the
+# same sub-directories, simply force these variables instead of PCREDIR. It is
+# automatically detected but can be forced if required (for cross-compiling).
+# Forcing PCREDIR to an empty string will let the compiler use the default
+# locations.
+
+PCREDIR := $(shell pcre-config --prefix 2>/dev/null || echo /usr/local)
+ifneq ($(PCREDIR),)
+PCRE_INC := $(PCREDIR)/include
+PCRE_LIB := $(PCREDIR)/lib
+endif
+
+ifeq ($(USE_STATIC_PCRE),)
+# dynamic PCRE
+OPTIONS_CFLAGS += -DUSE_PCRE $(if $(PCRE_INC),-I$(PCRE_INC))
+OPTIONS_LDFLAGS += $(if $(PCRE_LIB),-L$(PCRE_LIB)) -lpcreposix -lpcre
+BUILD_OPTIONS += $(call ignore_implicit,USE_PCRE)
+else
+# static PCRE
+OPTIONS_CFLAGS += -DUSE_PCRE $(if $(PCRE_INC),-I$(PCRE_INC))
+OPTIONS_LDFLAGS += $(if $(PCRE_LIB),-L$(PCRE_LIB)) -Wl,-Bstatic -lpcreposix -lpcre -Wl,-Bdynamic
+BUILD_OPTIONS += $(call ignore_implicit,USE_STATIC_PCRE)
+endif
+# JIT PCRE
+ifneq ($(USE_PCRE_JIT),)
+OPTIONS_CFLAGS += -DUSE_PCRE_JIT
+BUILD_OPTIONS += $(call ignore_implicit,USE_PCRE_JIT)
+endif
+endif
+
+# TCP Fast Open
+ifneq ($(USE_TFO),)
+OPTIONS_CFLAGS += -DUSE_TFO
+BUILD_OPTIONS += $(call ignore_implicit,USE_TFO)
+endif
+
+# This one can be changed to look for ebtree files in an external directory
+EBTREE_DIR := ebtree
+
+#### Global compile options
+VERBOSE_CFLAGS = $(CFLAGS) $(TARGET_CFLAGS) $(SMALL_OPTS) $(DEFINE)
+COPTS = -Iinclude -I$(EBTREE_DIR) -Wall
+COPTS += $(CFLAGS) $(TARGET_CFLAGS) $(SMALL_OPTS) $(DEFINE) $(SILENT_DEFINE)
+COPTS += $(DEBUG) $(OPTIONS_CFLAGS) $(ADDINC)
+
+ifneq ($(VERSION)$(SUBVERS),)
+COPTS += -DCONFIG_HAPROXY_VERSION=\"$(VERSION)$(SUBVERS)\"
+endif
+
+ifneq ($(VERDATE),)
+COPTS += -DCONFIG_HAPROXY_DATE=\"$(VERDATE)\"
+endif
+
+ifneq ($(TRACE),)
+# if tracing is enabled, we want it to be as fast as possible
+TRACE_COPTS := $(filter-out -O0 -O1 -O2 -pg -finstrument-functions,$(COPTS)) -O3 -fomit-frame-pointer
+COPTS += -finstrument-functions
+endif
+
+ifneq ($(USE_NS),)
+OPTIONS_CFLAGS += -DCONFIG_HAP_NS
+BUILD_OPTIONS += $(call ignore_implicit,USE_NS)
+endif
+
+#### Global link options
+# These options are added at the end of the "ld" command line. Use LDFLAGS to
+# add options at the beginning of the "ld" command line if needed.
+LDOPTS = $(TARGET_LDFLAGS) $(OPTIONS_LDFLAGS) $(ADDLIB)
+
+ifeq ($(TARGET),)
+all:
+ @echo
+ @echo "Due to too many reports of suboptimized setups, building without"
+ @echo "specifying the target is no longer supported. Please specify the"
+ @echo "target OS in the TARGET variable, in the following form:"
+ @echo
+ @echo " $ make TARGET=xxx"
+ @echo
+ @echo "Please choose the target among the following supported list :"
+ @echo
+ @echo " linux2628, linux26, linux24, linux24e, linux22, solaris"
+ @echo " freebsd, openbsd, cygwin, custom, generic"
+ @echo
+ @echo "Use \"generic\" if you don't want any optimization, \"custom\" if you"
+ @echo "want to precisely tweak every option, or choose the target which"
+ @echo "matches your OS the most in order to gain the maximum performance"
+ @echo "out of it. Please check the Makefile in case of doubts."
+ @echo
+ @exit 1
+else
+all: haproxy $(EXTRA)
+endif
+
+OBJS = src/haproxy.o src/base64.o src/protocol.o \
+ src/uri_auth.o src/standard.o src/buffer.o src/log.o src/task.o \
+ src/chunk.o src/channel.o src/listener.o src/lru.o src/xxhash.o \
+ src/time.o src/fd.o src/pipe.o src/regex.o src/cfgparse.o src/server.o \
+ src/checks.o src/queue.o src/frontend.o src/proxy.o src/peers.o \
+ src/arg.o src/stick_table.o src/proto_uxst.o src/connection.o \
+ src/proto_http.o src/raw_sock.o src/backend.o \
+ src/lb_chash.o src/lb_fwlc.o src/lb_fwrr.o src/lb_map.o src/lb_fas.o \
+ src/stream_interface.o src/dumpstats.o src/proto_tcp.o src/applet.o \
+ src/session.o src/stream.o src/hdr_idx.o src/ev_select.o src/signal.o \
+ src/acl.o src/sample.o src/memory.o src/freq_ctr.o src/auth.o src/proto_udp.o \
+ src/compression.o src/payload.o src/hash.o src/pattern.o src/map.o \
+ src/namespace.o src/mailers.o src/dns.o src/vars.o
+
+EBTREE_OBJS = $(EBTREE_DIR)/ebtree.o \
+ $(EBTREE_DIR)/eb32tree.o $(EBTREE_DIR)/eb64tree.o \
+ $(EBTREE_DIR)/ebmbtree.o $(EBTREE_DIR)/ebsttree.o \
+ $(EBTREE_DIR)/ebimtree.o $(EBTREE_DIR)/ebistree.o
+
+ifneq ($(TRACE),)
+OBJS += src/trace.o
+endif
+
+WRAPPER_OBJS = src/haproxy-systemd-wrapper.o
+
+# Not used right now
+LIB_EBTREE = $(EBTREE_DIR)/libebtree.a
+
+haproxy: $(OBJS) $(OPTIONS_OBJS) $(EBTREE_OBJS)
+ $(LD) $(LDFLAGS) -o $@ $^ $(LDOPTS)
+
+haproxy-systemd-wrapper: $(WRAPPER_OBJS)
+ $(LD) $(LDFLAGS) -o $@ $^ $(LDOPTS)
+
+$(LIB_EBTREE): $(EBTREE_OBJS)
+ $(AR) rv $@ $^
+
+objsize: haproxy
+ @objdump -t $^|grep ' g '|grep -F '.text'|awk '{print $$5 FS $$6}'|sort
+
+%.o: %.c
+ $(CC) $(COPTS) -c -o $@ $<
+
+src/trace.o: src/trace.c
+ $(CC) $(TRACE_COPTS) -c -o $@ $<
+
+src/haproxy.o: src/haproxy.c
+ $(CC) $(COPTS) \
+ -DBUILD_TARGET='"$(strip $(TARGET))"' \
+ -DBUILD_ARCH='"$(strip $(ARCH))"' \
+ -DBUILD_CPU='"$(strip $(CPU))"' \
+ -DBUILD_CC='"$(strip $(CC))"' \
+ -DBUILD_CFLAGS='"$(strip $(VERBOSE_CFLAGS))"' \
+ -DBUILD_OPTIONS='"$(strip $(BUILD_OPTIONS))"' \
+ -c -o $@ $<
+
+src/haproxy-systemd-wrapper.o: src/haproxy-systemd-wrapper.c
+ $(CC) $(COPTS) \
+ -DSBINDIR='"$(strip $(SBINDIR))"' \
+ -c -o $@ $<
+
+src/dlmalloc.o: $(DLMALLOC_SRC)
+ $(CC) $(COPTS) -DDEFAULT_MMAP_THRESHOLD=$(DLMALLOC_THRES) -c -o $@ $<
+
+install-man:
+ install -d "$(DESTDIR)$(MANDIR)"/man1
+ install -m 644 doc/haproxy.1 "$(DESTDIR)$(MANDIR)"/man1
+
+EXCLUDE_DOCUMENTATION = lgpl gpl coding-style
+DOCUMENTATION = $(filter-out $(EXCLUDE_DOCUMENTATION),$(patsubst doc/%.txt,%,$(wildcard doc/*.txt)))
+
+install-doc:
+ install -d "$(DESTDIR)$(DOCDIR)"
+ for x in $(DOCUMENTATION); do \
+ install -m 644 doc/$$x.txt "$(DESTDIR)$(DOCDIR)" ; \
+ done
+
+install-bin: haproxy $(EXTRA)
+ install -d "$(DESTDIR)$(SBINDIR)"
+ install haproxy $(EXTRA) "$(DESTDIR)$(SBINDIR)"
+
+install: install-bin install-man install-doc
+
+uninstall:
+ rm -f "$(DESTDIR)$(MANDIR)"/man1/haproxy.1
+ for x in $(DOCUMENTATION); do \
+ rm -f "$(DESTDIR)$(DOCDIR)"/$$x.txt ; \
+ done
+ -rmdir "$(DESTDIR)$(DOCDIR)"
+ rm -f "$(DESTDIR)$(SBINDIR)"/haproxy
+ rm -f "$(DESTDIR)$(SBINDIR)"/haproxy-systemd-wrapper
+
+clean:
+ rm -f *.[oas] src/*.[oas] ebtree/*.[oas] haproxy test
+ for dir in . src include/* doc ebtree; do rm -f $$dir/*~ $$dir/*.rej $$dir/core; done
+ rm -f haproxy-$(VERSION).tar.gz haproxy-$(VERSION)$(SUBVERS).tar.gz
+ rm -f haproxy-$(VERSION) haproxy-$(VERSION)$(SUBVERS) nohup.out gmon.out
+ rm -f haproxy-systemd-wrapper
+
+tags:
+ find src include \( -name '*.c' -o -name '*.h' \) -print0 | \
+ xargs -0 etags --declarations --members
+
+cscope:
+ find src include -name "*.[ch]" -print | cscope -q -b -i -
+
+tar: clean
+ ln -s . haproxy-$(VERSION)$(SUBVERS)
+ tar --exclude=haproxy-$(VERSION)$(SUBVERS)/.git \
+ --exclude=haproxy-$(VERSION)$(SUBVERS)/haproxy-$(VERSION)$(SUBVERS) \
+ --exclude=haproxy-$(VERSION)$(SUBVERS)/haproxy-$(VERSION)$(SUBVERS).tar.gz \
+ -cf - haproxy-$(VERSION)$(SUBVERS)/* | gzip -c9 >haproxy-$(VERSION)$(SUBVERS).tar.gz
+ rm -f haproxy-$(VERSION)$(SUBVERS)
+
+git-tar:
+ git archive --format=tar --prefix="haproxy-$(VERSION)$(SUBVERS)/" HEAD | gzip -9 > haproxy-$(VERSION)$(SUBVERS).tar.gz
+
+version:
+ @echo "VERSION: $(VERSION)"
+ @echo "SUBVERS: $(SUBVERS)"
+ @echo "VERDATE: $(VERDATE)"
+
+# never use this one if you don't know what it is used for.
+update-version:
+ @echo "Ready to update the following versions :"
+ @echo "VERSION: $(VERSION)"
+ @echo "SUBVERS: $(SUBVERS)"
+ @echo "VERDATE: $(VERDATE)"
+ @echo "Press [ENTER] to continue or Ctrl-C to abort now.";read
+ echo "$(VERSION)" > VERSION
+ echo "$(SUBVERS)" > SUBVERS
+ echo "$(VERDATE)" > VERDATE
--- /dev/null
+ ----------------------
+ HAProxy how-to
+ ----------------------
+ version 1.6
+ willy tarreau
+ 2015/10/13
+
+
+1) How to build it
+------------------
+
+This version is a stable version, which means that it belongs to a branch
+which will receive fixes for bugs as they are discovered. Versions including
+the suffix "-dev" are development versions and should be avoided in
+production. If you are not used to building from sources or to following
+updates, it is recommended that you instead use the packages provided by your
+software vendor or Linux distribution. Most of them take this task seriously
+and do a good job of backporting important fixes. If for any reason you'd
+prefer a different version than the one packaged for your system, if you want
+to be certain to have all the fixes, or if you want commercial support, other
+choices are available at :
+
+ http://www.haproxy.com/
+
+To build haproxy, you will need :
+ - GNU make. Neither Solaris nor OpenBSD's make work with the GNU Makefile.
+ If you get many syntax errors when running "make", you may want to retry
+ with "gmake" which is the name commonly used for GNU make on BSD systems.
+ - GCC between 2.95 and 4.8. Others may work but have not been tested.
+ - GNU ld
+
+Also, you might want to build with libpcre support, which will provide a very
+efficient regex implementation and also work around some deficiencies in
+Solaris' implementation.
+
+To build haproxy, you have to choose your target OS amongst the following ones
+and assign it to the TARGET variable :
+
+ - linux22 for Linux 2.2
+ - linux24 for Linux 2.4 and above (default)
+ - linux24e for Linux 2.4 with support for a working epoll (> 0.21)
+ - linux26 for Linux 2.6 and above
+ - linux2628 for Linux 2.6.28, 3.x, and above (enables splice and tproxy)
+ - solaris for Solaris 8 or 10 (others untested)
+ - freebsd for FreeBSD 5 to 10 (others untested)
+ - netbsd for NetBSD
+ - osx for Mac OS/X
+ - openbsd for OpenBSD 3.1 and above
+ - aix51 for AIX 5.1
+ - aix52 for AIX 5.2
+ - cygwin for Cygwin
+ - generic for any other OS or version.
+ - custom to manually adjust every setting
+
+You may also choose your CPU to benefit from some optimizations. This is
+particularly important on UltraSparc machines. For this, you can assign
+one of the following choices to the CPU variable :
+
+ - i686 for Intel PentiumPro, Pentium 2 and above, AMD Athlon
+ - i586 for Intel Pentium, AMD K6, VIA C3.
+ - ultrasparc : Sun UltraSparc I/II/III/IV processor
+ - native : use the build machine's specific processor optimizations. Use with
+ extreme care, and never in virtualized environments (known to break).
+ - generic : any other processor or no CPU-specific optimization. (default)
+
+Alternatively, you may just set the CPU_CFLAGS value to the optimal GCC options
+for your platform.
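+
+For instance, to bypass the CPU presets entirely (the flags below are only a
+sketch, adjust them to your compiler) :
+
+ $ make TARGET=linux2628 CPU_CFLAGS="-O2 -mtune=generic"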
+
+You may want to build specific target binaries which do not match your native
+compiler's target. This is particularly true on 64-bit systems when you want
+to build a 32-bit binary. Use the ARCH variable for this purpose. Right now
+it only knows about a few x86 variants (i386,i486,i586,i686,x86_64), two
+generic ones (32,64) and sets -m32/-m64 as well as -march=<arch> accordingly.
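+
+For instance, a 32-bit build on an x86_64 machine could be requested this way
+(assuming the linux2628 target and that 32-bit libraries are installed) :
+
+ $ make TARGET=linux2628 ARCH=i386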
+
+If your system supports PCRE (Perl Compatible Regular Expressions), then you
+really should build with libpcre, which is between 2 and 10 times faster than
+other libc implementations. Regexes are used for header processing (deletion,
+rewriting, allow, deny). The only inconvenience of libpcre is that it is not
+yet widespread, so if you build for other systems, you might run into trouble
+if they don't have the dynamic library. In this situation, you should
+statically link libpcre into haproxy so that it does not need to be installed
+on target systems. Available build options for PCRE are :
+
+ - USE_PCRE=1 to use libpcre, in whatever form is available on your system
+ (shared or static)
+
+ - USE_STATIC_PCRE=1 to use a static version of libpcre even if the dynamic
+ one is available. This will enhance portability.
+
+ - with no option, use your OS libc's standard regex implementation (default).
+ Warning! group references on Solaris seem broken. Use static-pcre whenever
+ possible.
+
+If your system doesn't provide PCRE, you are encouraged to download it from
+http://www.pcre.org/ and build it yourself, it's fast and easy.
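+
+As a sketch, a build statically linking PCRE on a recent Linux could look
+like this :
+
+ $ make TARGET=linux2628 USE_STATIC_PCRE=1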
+
+Recent systems can resolve IPv6 host names using getaddrinfo(). This primitive
+is not present in all libcs and does not work in all of them either. Support in
+glibc was broken before 2.3. Some embedded libs may not work properly either;
+thus, support is disabled by default, meaning that some host names which only
+resolve as IPv6 addresses will not resolve and configs might emit an error
+during parsing. If you know that your OS libc has reliable support for
+getaddrinfo(), you can add USE_GETADDRINFO=1 on the make command line to enable
+it. This is the recommended option for most Linux distro packagers since it
+works fine on all recent mainstream distros. It is automatically enabled on
+Solaris 8 and above, as it's known to work.
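+
+For instance, to enable it on a recent glibc-based Linux :
+
+ $ make TARGET=linux2628 USE_GETADDRINFO=1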
+
+It is possible to add native support for SSL using the GNU makefile, by passing
+"USE_OPENSSL=1" on the make command line. The libssl and libcrypto libraries
+will automatically be linked with haproxy. Some systems also require libz, so
+if the build fails due to missing symbols such as deflateInit(), then try
+again with "ADDLIB=-lz".
+
+You are strongly encouraged to always use an up-to-date version of OpenSSL, as
+found on https://www.openssl.org/, as vulnerabilities are occasionally found
+and you don't want them on your systems. HAProxy is known to build correctly
+on all currently supported branches (0.9.8, 1.0.0, 1.0.1 and 1.0.2 at the time
+of writing). Branch 1.0.2 is recommended for the richest features.
+
+To link OpenSSL statically against haproxy, build OpenSSL with the no-shared
+keyword and install it to a local directory, so your system is not affected :
+
+ $ export STATICLIBSSL=/tmp/staticlibssl
+ $ ./config --prefix=$STATICLIBSSL no-shared
+ $ make && make install_sw
+
+When building haproxy, pass that path via SSL_INC and SSL_LIB to make and
+include additional libs with ADDLIB if needed (in this case for example libdl):
+
+ $ make TARGET=linux26 USE_OPENSSL=1 SSL_INC=$STATICLIBSSL/include SSL_LIB=$STATICLIBSSL/lib ADDLIB=-ldl
+
+It is also possible to include native support for zlib to benefit from HTTP
+compression. For this, pass "USE_ZLIB=1" on the "make" command line and ensure
+that zlib is present on the system. Alternatively it is possible to use libslz
+for faster and less memory-hungry, but slightly less efficient, compression,
+by passing "USE_SLZ=1".
+
+Zlib is commonly found on most systems, otherwise updates can be retrieved from
+http://www.zlib.net/. It is easy and fast to build. Libslz can be downloaded
+from http://1wt.eu/projects/libslz/ and is even easier to build.
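+
+As an illustration, either of these command lines enables compression
+support :
+
+ $ make TARGET=linux2628 USE_ZLIB=1
+ $ make TARGET=linux2628 USE_SLZ=1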
+
+By default, the DEBUG variable is set to '-g' to enable debug symbols. It is
+not wise to disable it on uncommon systems, because it's often the only way to
+get a complete core when you need one. Otherwise, you can set DEBUG to '-s' to
+strip the binary.
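+
+For instance, to build a stripped binary instead :
+
+ $ make TARGET=linux2628 DEBUG=-s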
+
+For example, I use this to build for Solaris 8 :
+
+ $ make TARGET=solaris CPU=ultrasparc USE_STATIC_PCRE=1
+
+And I build it this way on OpenBSD or FreeBSD :
+
+ $ gmake TARGET=freebsd USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1
+
+And on a classic Linux with SSL and ZLIB support (eg: Red Hat 5.x) :
+
+ $ make TARGET=linux26 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1
+
+And on a recent Linux >= 2.6.28 with SSL and ZLIB support :
+
+ $ make TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1
+
+In order to build a 32-bit binary on an x86_64 Linux system, with SSL support
+but without compression support, when OpenSSL itself requires ZLIB anyway :
+
+ $ make TARGET=linux26 ARCH=i386 USE_OPENSSL=1 ADDLIB=-lz
+
+The SSL stack supports session cache synchronization between all running
+processes. This involves some atomic and synchronization operations, which
+come in multiple flavors depending on the system and architecture :
+
+ Atomic operations :
+ - internal assembler versions for x86/x86_64 architectures
+
+ - gcc builtins for other architectures. Some architectures might not
+ be fully supported or might require a more recent version of gcc.
+ If your architecture is not supported, you will have to either use
+ pthreads if supported, or disable the shared cache.
+
+ - pthread (posix threads). Pthreads are very common but inter-process
+ support is not that common, and some older operating systems did not
+ report an error when enabling multi-process mode, so they used to
+ silently fail, possibly causing crashes. Linux's implementation is
+ fine. OpenBSD doesn't support them and doesn't build. FreeBSD 9 builds
+ and reports an error at runtime, while certain older versions might
+ silently fail. Pthreads are enabled using USE_PTHREAD_PSHARED=1.
+
+ Synchronization operations :
+ - internal spinlock : this mode is OS-independent, light but will not
+ scale well to many processes. However, accesses to the session cache
+ are rare enough that this mode could certainly always be used. This
+ is the default mode.
+
+ - Futexes, which are Linux-specific, highly scalable, lightweight mutexes
+ implemented in user-space with some limited assistance from the kernel.
+ This is the default on Linux 2.6 and above and is enabled by passing
+ USE_FUTEX=1.
+
+ - pthread (posix threads). See above.
+
+If none of these mechanisms is supported by your platform, you may need to
+build with USE_PRIVATE_CACHE=1 to totally disable SSL cache sharing. In that
+case, it is better not to run SSL on multiple processes.
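+
+Such a build could be requested this way (the target is shown as a sketch
+only) :
+
+ $ make TARGET=generic USE_OPENSSL=1 USE_PRIVATE_CACHE=1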
+
+If you need to pass other defines, includes, libraries, etc., then please
+check the Makefile to see which ones are available in your case, and
+use the USE_* variables in the Makefile.
+
+AIX 5.3 is known to work with the generic target. However, for the binary to
+also run on 5.2 or earlier, you need to build with DEFINE="-D_MSGQSUPPORT",
+otherwise __fd_select() will be used even though it is not present in the
+libc; this is easily addressed by using the "aix52" target instead. If you get
+build errors because of strange symbols or section mismatches, simply remove
+-g from DEBUG_CFLAGS.
+
+You can easily define your own target with the GNU Makefile. Unknown targets
+are processed with no default option except USE_POLL=default. So you can very
+well use that property to define your own set of options. USE_POLL can even be
+disabled by setting USE_POLL="". For example :
+
+ $ gmake TARGET=tiny USE_POLL="" TARGET_CFLAGS=-fomit-frame-pointer
+
+
+1.1) DeviceAtlas Device Detection
+---------------------------------
+
+In order to add DeviceAtlas Device Detection support, you need to download the
+API source code from https://deviceatlas.com/deviceatlas-haproxy-module and,
+once extracted, run :
+
+ $ make TARGET=<target> USE_PCRE=1 USE_DEVICEATLAS=1 DEVICEATLAS_SRC=<path to the API root folder>
+
+Optionally DEVICEATLAS_INC and DEVICEATLAS_LIB may be set to override the path
+to the include files and libraries respectively if they're not in the source
+directory.
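+
+For instance, with the headers and libraries installed outside the source
+tree (paths are placeholders) :
+
+ $ make TARGET=<target> USE_PCRE=1 USE_DEVICEATLAS=1 \
+        DEVICEATLAS_INC=<path to the headers> DEVICEATLAS_LIB=<path to the libs>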
+
+These are the supported DeviceAtlas directives (see doc/configuration.txt) :
+ - deviceatlas-json-file <path to the DeviceAtlas JSON data file>.
+ - deviceatlas-log-level <number> (0 to 3, level of information returned by
+ the API, 0 by default).
+ - deviceatlas-property-separator <character> (character used to separate the
+ properties produced by the API, | by default).
+
+Sample configuration :
+
+ global
+ deviceatlas-json-file <path to json file>
+
+ ...
+ frontend
+ bind *:8881
+ default_backend servers
+
+There are two distinct methods available, one which leverages all HTTP headers
+and one which uses only a single HTTP header for the detection. The former
+method is highly recommended and more accurate. There are several possible use
+cases.
+
+# To transmit the DeviceAtlas data downstream to the target application
+
+All HTTP headers via the sample / fetch
+
+ http-request set-header X-DeviceAtlas-Data %[da-csv-fetch(primaryHardwareType,osName,osVersion,browserName,browserVersion)]
+
+Single HTTP header (e.g. User-Agent) via the converter
+
+ http-request set-header X-DeviceAtlas-Data %[req.fhdr(User-Agent),da-csv-conv(primaryHardwareType,osName,osVersion,browserName,browserVersion)]
+
+# Mobile content switching with ACL
+
+All HTTP headers
+
+ acl is_mobile da-csv-fetch(mobileDevice) 1
+
+Single HTTP header
+
+ acl device_type_tablet req.fhdr(User-Agent),da-csv-conv(primaryHardwareType) "Tablet"
+
+
+Please find more information about DeviceAtlas and the detection methods at
+https://deviceatlas.com/resources.
+
+
+1.2) 51Degrees Device Detection
+-------------------------------
+
+You can also include 51Degrees for inbuilt device detection enabling attributes
+such as screen size (physical & pixels), supported input methods, release date,
+hardware vendor and model, browser information, and device price among many
+others. Such information can be used to improve the user experience of a web
+site by tailoring the page content, layout and business processes to the
+precise characteristics of the device. Such customisations improve profit by
+making it easier for customers to get to the information or services they
+need. Attributes of the device making a web request can be added to HTTP
+headers as configurable parameters.
+
+In order to enable 51Degrees download the 51Degrees source code from the
+official github repository :
+
+ git clone https://github.com/51Degrees/Device-Detection
+
+then run 'make' with USE_51DEGREES and 51DEGREES_SRC set. Both 51DEGREES_INC
+and 51DEGREES_LIB may additionally be used to force different paths for the
+.o and .h files, but will default to 51DEGREES_SRC. Make sure to replace
+'51D_REPO_PATH' with the path to the 51Degrees repository.
+
+51Degrees provides two different detection algorithms:
+
+ 1. Pattern - balances main memory usage and CPU.
+ 2. Trie - a very high performance detection solution which uses more main
+ memory than Pattern.
+
+To build with the 51Degrees Pattern algorithm, use the following command line:
+
+ $ make TARGET=<target> USE_51DEGREES=1 51DEGREES_SRC='51D_REPO_PATH'/src/pattern
+
+To build with the 51Degrees Trie algorithm, use the following command line:
+
+ $ make TARGET=<target> USE_51DEGREES=1 51DEGREES_SRC='51D_REPO_PATH'/src/trie
+
+A data file containing information about devices, browsers, operating systems
+and their associated signatures is then needed. 51Degrees provides a free
+database in the GitHub repository for this purpose. These free data files are located
+in '51D_REPO_PATH'/data with the extensions .dat for Pattern data and .trie for
+Trie data.
+
+The configuration file needs to set the following parameters:
+
+ global
+ 51degrees-data-file path to the Pattern or Trie data file
+ 51degrees-property-name-list list of 51Degrees properties to detect
+ 51degrees-property-separator separator to use between values
+ 51degrees-cache-size LRU-based cache size (disabled by default)
+
+The following is an example of the settings for Pattern.
+
+ global
+ 51degrees-data-file '51D_REPO_PATH'/data/51Degrees-LiteV3.2.dat
+ 51degrees-property-name-list IsTablet DeviceType IsMobile
+ 51degrees-property-separator ,
+ 51degrees-cache-size 10000
+
+HAProxy needs a way to pass device information to the backend servers. This is
+done by using the 51d converter or fetch method, which intercepts the HTTP
+headers and creates new ones. This is controlled in the frontend
+http-in section.
+
+The following is an example which adds two new HTTP headers prefixed with X-51D-
+
+ frontend http-in
+ bind *:8081
+ default_backend servers
+ http-request set-header X-51D-DeviceTypeMobileTablet %[51d.all(DeviceType,IsMobile,IsTablet)]
+ http-request set-header X-51D-Tablet %[51d.all(IsTablet)]
+
+Here, two headers are created with 51Degrees data, X-51D-DeviceTypeMobileTablet
+and X-51D-Tablet. Any number of headers can be created this way and can be
+named anything. 51d.all() invokes the 51Degrees fetch. It can be passed up to
+five names of properties whose values are to be returned. Values will be
+returned in the same order, separated by the 51degrees-property-separator
+configured earlier. If a property name can't be found, the value 'NoData' is
+returned instead.
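+
+For example, with the separator set to ',', the backend might receive a header
+value such as the following (the property values shown are hypothetical):
+
+    X-51D-DeviceTypeMobileTablet: SmartPhone,True,False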
+
+In addition to the device properties, three additional properties related to the
+validity of the result can be returned when used with the Pattern method. The
+following example shows how Method, Difference and Rank could be included as one
+new HTTP header X-51D-Stats.
+
+ frontend http-in
+ ...
+ http-request set-header X-51D-Stats %[51d.all(Method,Difference,Rank)]
+
+These values indicate how confident 51Degrees is in the result that was
+returned. More information is available on the 51Degrees web site at:
+
+ https://51degrees.com/support/documentation/pattern
+
+The above 51d.all fetch method uses all available HTTP headers for detection. A
+modest performance improvement can be obtained by only passing one HTTP header
+to the detection method with the 51d.single converter. The following example
+uses the User-Agent HTTP header only for detection.
+
+ frontend http-in
+ ...
+ http-request set-header X-51D-DeviceTypeMobileTablet %[req.fhdr(User-Agent),51d.single(DeviceType,IsMobile,IsTablet)]
+
+Any HTTP header could be used in place of User-Agent by changing the parameter
+provided to req.fhdr.
+
+When compiled to use the Trie detection method, the Trie format data file
+needs to be provided. Changing the extension of the data file from .dat to
+.trie will use the correct data.
+
+ global
+ 51degrees-data-file '51D_REPO_PATH'/data/51Degrees-LiteV3.2.trie
+
+When used with Trie the Method, Difference and Rank properties are not
+available.
+
+The free Lite data file contains information about screen size in pixels and
+whether the device is a mobile. A full list of available properties is located
+on the 51Degrees web site at:
+
+ https://51degrees.com/resources/property-dictionary
+
+Some properties are only available in the paid-for Premium and Enterprise
+versions of 51Degrees. These data sets not only contain more properties but
+are updated weekly or daily and contain signatures for 100,000s of different
+device combinations. For more information see the data options comparison web
+page:
+
+ https://51degrees.com/compare-data-options
+
+
+2) How to install it
+--------------------
+
+To install haproxy, you can either copy the single resulting binary to the
+place you want, or run :
+
+ $ sudo make install
+
+If you're packaging it for another system, you can specify its root directory
+in the usual DESTDIR variable.
+
+
+3) How to set it up
+-------------------
+
+There is some documentation in the doc/ directory :
+
+ - intro.txt : this is an introduction to haproxy, it explains what it is
+ and what it is not. Useful for beginners or to re-discover it when planning
+ for an upgrade.
+
+ - architecture.txt : this is the architecture manual. It is quite old and
+ does not cover the newer features, but it's still a good starting
+ point when you know what you want but don't know how to do it.
+
+ - configuration.txt : this is the configuration manual. It recalls a few
+ essential HTTP basic concepts, and details all the configuration file
+ syntax (keywords, units). It also describes the log and stats format. It
+ is normally always up to date. If you see that something is missing from
+ it, please report it as this is a bug. Please note that this file is
+ huge and that it's generally more convenient to review Cyril Bonté's
+ HTML translation online here :
+
+ http://cbonte.github.io/haproxy-dconv/configuration-1.6.html
+
+ - management.txt : it explains how to start haproxy, how to manage it at
+ runtime, how to manage it on multiple nodes, how to proceed with seamless
+ upgrades.
+
+ - gpl.txt / lgpl.txt : the copy of the licenses covering the software. See
+ the 'LICENSE' file at the top for more information.
+
+ - the rest is mainly for developers.
+
+There are also a number of nice configuration examples in the "examples"
+directory as well as on several sites and articles on the net which are linked
+to from the haproxy web site.
+
+
+4) How to report a bug
+----------------------
+
+It is possible that from time to time you'll find a bug. A bug is a case where
+what you see is not what is documented. Otherwise it can be a misdesign. If you
+find that something is poorly designed, please discuss it on the list (see the
+"how to contribute" section below). If you feel like you're proceeding right
+and haproxy doesn't obey, then first ask yourself whether it is possible that
+nobody before you has ever encountered this issue. If it's unlikely, then you
+probably have an issue in your setup. If in doubt, please consult the mailing
+list archives :
+
+ http://marc.info/?l=haproxy
+
+Otherwise, please try to gather the maximum amount of information to help
+reproduce the issue and send that to the mailing list :
+
+ haproxy@formilux.org
+
+Please include your configuration and logs. You can mask your IP addresses and
+passwords, we don't need them. But it's essential that you post your config if
+you want people to guess what is happening.
+
+Also, keep in mind that haproxy is designed to NEVER CRASH. If you see it die
+without any reason, then it definitely is a critical bug that must be reported
+and urgently fixed. It has happened a couple of times in the past, essentially
+on development versions running on new architectures. If you think your setup
+is fairly common, then it is possible that the issue is totally unrelated to haproxy.
+Anyway, if that happens, feel free to contact me directly, as I will give you
+instructions on how to collect a usable core file, and will probably ask for
+other captures that you'll not want to share with the list.
+
+
+5) How to contribute
+--------------------
+
+Please carefully read the CONTRIBUTING file that comes with the sources. It is
+mandatory.
+
+-- end
--- /dev/null
+Medium-long term roadmap - 2015/10/13
+
+Legend: '+' = done, '-' = todo, '*' = done except doc
+
+1.7 or later :
+ - return-html code xxx [ file "xxx" | text "xxx" ] if <acl>
+
+ - return-raw [ file "xxx" | text "xxx" ] if <acl>
+
+ - add the ability to only dump response errors to more easily detect
+ anomalies without being polluted with attacks in requests.
+
+ - have multi-criteria analysers which subscribe to req flags, rsp flags, and
+ stream interface changes. This would result in a single analyser to wait
+ for the end of data transfer in HTTP.
+
+ - support for time-ordered priority queues with ability to add an offset
+ based on request matching. Each session will have one ebtree node to be
+ attached to whatever queue the session is waiting in.
+
+ - add a flag in logs to indicate keep-alive requests ?
+
+ - make it possible to condition a timeout on an ACL (dynamic timeouts)
+
+ - forwardfor/originalto except with IPv6
+
+ - remove lots of remaining Alert() calls or ensure that they forward to
+ send_log() after the fork.
+
+ - tcp-request session
+
+ - tcp-request session expect-proxy {L4|L5} if ...
+
+ - wait on resource (time, mem, CPU, socket, server's conn, server's rate, ...)
+
+ - bandwidth limits
+
+ - buddy servers to build defined lists of failovers. Detect loops during
+ the config check.
+
+ server XXX buddy YYY
+ server YYY # may replace XXX when XXX fails
+
+ - spare servers : servers which are used in LB only when a minimum farm
+ weight threshold is not satisfied anymore. Useful for inter-site LB with
+ local pref by default.
+
+ - add support for event-triggered epoll, and maybe change all events handling
+ to pass through an event cache to handle temporarily disabled events.
+
+ - evaluate the changes required for multi-process+shared mem or multi-thread
+ +thread-local+fast locking.
+
+ - ability to decide whether to drain or kill sessions when putting a server
+ to maintenance mode => requires a per-server session list and the change
+ above.
+
+Old, maybe obsolete points :
+ - clarify licence by adding a 'MODULE_LICENCE("GPL")' or something equivalent.
+
+ - 3 memory models : failsafe (prealloc), normal (current), optimal (alloc on
+ demand)
+
+ - implement support for event-triggered epoll()
+
+ - verify if it would be worth implementing an epoll_ctl_batch() for Linux
+
+ - option minservers XXX : activates some spare servers when active servers
+ are insufficient
+
+ - initcwnd parameter for bind sockets : needed in kernel first
+
+ - have a callback function which would be called after a server is selected,
+ for header post-processing. That would be mainly used to remove then add
+ the server's name or cookie in a header so that the server knows it.
+
+Unsorted :
+ - outgoing log load-balancing (round-robin or hash among multiple servers)
+
+ - internal socket for "server XXX frontend:name"
+
+ - HTTP/2.0
+
+ - XML inspection (content-switching for SOAP requests)
+
+ - random cookie generator
+
+ - fastcgi to servers
+
+ - hot config reload
+
+ - RAM-based cache for small files
+
+ - RHI - BGP
+
+ - telnet/SSH cli
+
+ - dynamic memory allocation
+
+ - dynamic weights based on check response headers and traffic response time
+
+ - various kernel-level acceleration (multi-accept, ssplice, epoll2...)
--- /dev/null
+-$Format:%h$
+
--- /dev/null
+$Format:%ci$
+2015/12/25
--- /dev/null
+/*
+ * base64rev generator
+ *
+ * Copyright 2009-2010 Krzysztof Piotr Oledzki <ole@ans.pl>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <stdio.h>
+
+const char base64tab[65]="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
+char base64rev[128];
+
+#define base '#' /* arbitrarily chosen base value */
+#define B64MAX 64
+#define B64PADV B64MAX
+
+int main() {
+ char *p, c;
+ int i, min = 255, max = 0;
+
+ for (i = 0; i < sizeof(base64rev); i++)
+ base64rev[i] = base;
+
+ for (i = 0; i < B64MAX; i++) {
+ c = base64tab[i];
+
+ if (min > c)
+ min = c;
+
+ if (max < c)
+ max = c;
+ }
+
+ for (i = 0; i < B64MAX; i++) {
+ c = base64tab[i];
+
+ if (base+i+1 > 127) {
+ printf("Wrong base value @%d\n", i);
+ return 1;
+ }
+
+ base64rev[c - min] = base+i+1;
+ }
+
+ base64rev['=' - min] = base + B64PADV;
+
+ base64rev[max - min + 1] = '\0';
+
+ printf("#define B64BASE '%c'\n", base);
+ printf("#define B64CMIN '%c'\n", min);
+ printf("#define B64CMAX '%c'\n", max);
+ printf("#define B64PADV %u\n", B64PADV);
+
+ p = base64rev;
+ printf("const char base64rev[]=\"");
+ for (p = base64rev; *p; p++) {
+ if (*p == '\\')
+ printf("\\%c", *p);
+ else
+ printf("%c", *p);
+ }
+ printf("\"\n");
+
+ return 0;
+}
--- /dev/null
+EBTREE_DIR = ../../ebtree
+INCLUDE = -I../../include -I$(EBTREE_DIR)
+
+CC = gcc
+
+# note: it is recommended to also add -fomit-frame-pointer on i386
+OPTIMIZE = -O3
+
+# most recent glibc versions provide platform-specific optimizations that make
+# memchr faster than the generic C implementation (eg: SSE and prefetch
+# on x86_64). Try with and without. In general, on x86_64 it's better to
+# use memchr by enabling the define below.
+# DEFINE = -DUSE_MEMCHR
+DEFINE =
+
+OBJS = halog
+
+halog: halog.c fgets2.c
+ $(CC) $(OPTIMIZE) $(DEFINE) -o $@ $(INCLUDE) $(EBTREE_DIR)/ebtree.c $(EBTREE_DIR)/eb32tree.c $(EBTREE_DIR)/eb64tree.c $(EBTREE_DIR)/ebmbtree.c $(EBTREE_DIR)/ebsttree.c $(EBTREE_DIR)/ebistree.c $(EBTREE_DIR)/ebimtree.c $^
+
+clean:
+ rm -f $(OBJS) *.[oas]
--- /dev/null
+/*
+ * fast fgets() replacement for log parsing
+ *
+ * Copyright 2000-2012 Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this library; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ *
+ * This function manages its own buffer and returns a pointer to that buffer
+ * in order to avoid expensive memory copies. It also checks for line breaks
+ * 32 or 64 bits at a time. It could be improved a lot using mmap() but we
+ * would not be allowed to replace trailing \n with zeroes and we would be
+ * limited to small log files on 32-bit machines.
+ *
+ */
+
+#include <stdlib.h>
+#include <string.h>
+#include <stdio.h>
+#include <unistd.h>
+
+#ifndef FGETS2_BUFSIZE
+#define FGETS2_BUFSIZE (256*1024)
+#endif
+
+/* return non-zero if the integer contains at least one zero byte */
+static inline unsigned int has_zero32(unsigned int x)
+{
+ unsigned int y;
+
+ /* Principle: we want to perform 4 tests on one 32-bit int at once. For
+ * this, we have to simulate an SIMD instruction which we don't have by
+ * default. The principle is that a zero byte is the only one which
+ * will cause a 1 to appear on the upper bit of a byte/word/etc... when
+ * we subtract 1. So we can detect a zero byte if a one appears at any
+ * of the bits 7, 15, 23 or 31 where it was not. It takes only one
+ * instruction to test for the presence of any of these bits, but it is
+ * still complex to check for their initial absence. Thus, we'll
+ * proceed differently : we first save and clear only those bits, then
+ * we check in the final result if one of them is present and was not.
+ * The order of operations below is important to save registers and
+ * tests. The result is used as a boolean, so the last test must apply
+ * on the constant so that it can efficiently be inlined.
+ */
+#if defined(__i386__)
+ /* gcc on x86 loves copying registers over and over even on code that
+ * simple, so let's do it by hand to prevent it from doing so :-(
+ */
+ asm("lea -0x01010101(%0),%1\n"
+ "not %0\n"
+ "and %1,%0\n"
+ : "=a" (x), "=r"(y)
+ : "0" (x)
+ );
+ return x & 0x80808080;
+#else
+ y = x - 0x01010101; /* generate a carry */
+ x = ~x & y; /* clear the bits that were already set */
+ return x & 0x80808080;
+#endif
+}
+
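+/* Example (assuming ASCII): has_zero32(0x41424344) ("ABCD") returns zero
+ * because no byte is zero, while has_zero32(0x41004344) returns non-zero.
+ * The '\n' search below builds on this by first XOR-ing each word with
+ * 0x0A0A0A0A, which turns any '\n' byte into a zero byte.
+ */
+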
+/* return non-zero if the argument contains at least one zero byte. See principle above. */
+static inline unsigned long long has_zero64(unsigned long long x)
+{
+ unsigned long long y;
+
+ y = x - 0x0101010101010101ULL; /* generate a carry */
+ y &= ~x; /* clear the bits that were already set */
+ return y & 0x8080808080808080ULL;
+}
+
+static inline unsigned long has_zero(unsigned long x)
+{
+ return (sizeof(x) == 8) ? has_zero64(x) : has_zero32(x);
+}
+
+/* find a '\n' between <next> and <end>. Warning: may read slightly past <end>.
+ * If no '\n' is found, <end> is returned.
+ */
+static char *find_lf(char *next, char *end)
+{
+#if defined USE_MEMCHR
+ /* some recent libc use platform-specific optimizations to provide more
+ * efficient byte search than below (eg: glibc 2.11 on x86_64).
+ */
+ next = memchr(next, '\n', end - next);
+ if (!next)
+ next = end;
+#else
+ if (sizeof(long) == 4) { /* 32-bit system */
+ /* this is a speed-up, we read 32 bits at once and check for an
+ * LF character there. We stop if found then continue one at a
+ * time.
+ */
+ while (next < end && (((unsigned long)next) & 3) && *next != '\n')
+ next++;
+
+ /* Now next is multiple of 4 or equal to end. We know we can safely
+ * read up to 32 bytes past end if needed because they're allocated.
+ */
+ while (next < end) {
+ if (has_zero32(*(unsigned int *)next ^ 0x0A0A0A0A))
+ break;
+ next += 4;
+ if (has_zero32(*(unsigned int *)next ^ 0x0A0A0A0A))
+ break;
+ next += 4;
+ if (has_zero32(*(unsigned int *)next ^ 0x0A0A0A0A))
+ break;
+ next += 4;
+ if (has_zero32(*(unsigned int *)next ^ 0x0A0A0A0A))
+ break;
+ next += 4;
+ if (has_zero32(*(unsigned int *)next ^ 0x0A0A0A0A))
+ break;
+ next += 4;
+ if (has_zero32(*(unsigned int *)next ^ 0x0A0A0A0A))
+ break;
+ next += 4;
+ if (has_zero32(*(unsigned int *)next ^ 0x0A0A0A0A))
+ break;
+ next += 4;
+ if (has_zero32(*(unsigned int *)next ^ 0x0A0A0A0A))
+ break;
+ next += 4;
+ }
+ }
+ else { /* 64-bit system */
+ /* this is a speed-up, we read 64 bits at once and check for an
+ * LF character there. We stop if found then continue one at a
+ * time.
+ */
+ if (next <= end) {
+ /* max 3 bytes tested here */
+ while ((((unsigned long)next) & 3) && *next != '\n')
+ next++;
+
+ /* maybe we can skip 4 more bytes */
+ if ((((unsigned long)next) & 4) && !has_zero32(*(unsigned int *)next ^ 0x0A0A0A0AU))
+ next += 4;
+ }
+
+ /* now next is multiple of 8 or equal to end */
+ while (next <= (end-68)) {
+ if (has_zero64(*(unsigned long long *)next ^ 0x0A0A0A0A0A0A0A0AULL))
+ break;
+ next += 8;
+ if (has_zero64(*(unsigned long long *)next ^ 0x0A0A0A0A0A0A0A0AULL))
+ break;
+ next += 8;
+ if (has_zero64(*(unsigned long long *)next ^ 0x0A0A0A0A0A0A0A0AULL))
+ break;
+ next += 8;
+ if (has_zero64(*(unsigned long long *)next ^ 0x0A0A0A0A0A0A0A0AULL))
+ break;
+ next += 8;
+ if (has_zero64(*(unsigned long long *)next ^ 0x0A0A0A0A0A0A0A0AULL))
+ break;
+ next += 8;
+ if (has_zero64(*(unsigned long long *)next ^ 0x0A0A0A0A0A0A0A0AULL))
+ break;
+ next += 8;
+ if (has_zero64(*(unsigned long long *)next ^ 0x0A0A0A0A0A0A0A0AULL))
+ break;
+ next += 8;
+ if (has_zero64(*(unsigned long long *)next ^ 0x0A0A0A0A0A0A0A0AULL))
+ break;
+ next += 8;
+ }
+
+ /* maybe we can skip 4 more bytes */
+ if (!has_zero32(*(unsigned int *)next ^ 0x0A0A0A0AU))
+ next += 4;
+ }
+
+ /* We finish if needed : if <next> is below <end>, it means we
+ * found an LF in one of the 4 following bytes.
+ */
+ while (next < end) {
+ if (*next == '\n')
+ break;
+ next++;
+ }
+#endif
+ return next;
+}
+
+const char *fgets2(FILE *stream)
+{
+ static char buffer[FGETS2_BUFSIZE + 68]; /* Note: +32 is enough on 32-bit systems */
+ static char *end = buffer;
+ static char *line = buffer;
+ char *next;
+ int ret;
+
+ next = line;
+
+ while (1) {
+ next = find_lf(next, end);
+ if (next < end) {
+ const char *start = line;
+ *next = '\0';
+ line = next + 1;
+ return start;
+ }
+
+ /* we found an incomplete line. First, let's move the
+ * remaining part of the buffer to the beginning, then
+ * try to complete the buffer with a new read. We can't
+ * rely on <next> anymore because it went past <end>.
+ */
+ if (line > buffer) {
+ if (end != line)
+ memmove(buffer, line, end - line);
+ end = buffer + (end - line);
+ next = end;
+ line = buffer;
+ } else {
+ if (end == buffer + FGETS2_BUFSIZE)
+ return NULL;
+ }
+
+ ret = read(fileno(stream), end, buffer + FGETS2_BUFSIZE - end);
+
+ if (ret <= 0) {
+ if (end == line)
+ return NULL;
+
+ *end = '\0';
+ end = line; /* ensure we stop next time */
+ return line;
+ }
+
+ end += ret;
+ *end = '\n'; /* make parser stop ASAP */
+ /* search for '\n' again */
+ }
+}
+
+#ifdef BENCHMARK
+int main() {
+ const char *p;
+ unsigned int lines = 0;
+
+ while ((p=fgets2(stdin)))
+ lines++;
+ printf("lines=%d\n", lines);
+ return 0;
+}
+#endif
--- /dev/null
+/*
+ * haproxy log statistics reporter
+ *
+ * Copyright 2000-2012 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <errno.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <syslog.h>
+#include <string.h>
+#include <unistd.h>
+#include <ctype.h>
+#include <time.h>
+
+#include <eb32tree.h>
+#include <eb64tree.h>
+#include <ebistree.h>
+#include <ebsttree.h>
+
+#define SOURCE_FIELD 5
+#define ACCEPT_FIELD 6
+#define SERVER_FIELD 8
+#define TIME_FIELD 9
+#define STATUS_FIELD 10
+#define BYTES_SENT_FIELD 11
+#define TERM_CODES_FIELD 14
+#define CONN_FIELD 15
+#define QUEUE_LEN_FIELD 16
+#define METH_FIELD 17
+#define URL_FIELD 18
+#define MAXLINE 16384
+#define QBITS 4
+
+#define SEP(c) ((unsigned char)(c) <= ' ')
+#define SKIP_CHAR(p,c) do { while (1) { int __c = (unsigned char)*p++; if (__c == c) break; if (__c <= ' ') { p--; break; } } } while (0)
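+
+/* Example: with p pointing to "abc[def ", SKIP_CHAR(p, '[') leaves p on
+ * "def "; with p pointing to "abc def", it stops with p on the space
+ * instead (separators are never skipped).
+ */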
+
+/* [0] = err/date, [1] = req, [2] = conn, [3] = resp, [4] = data */
+static struct eb_root timers[5] = {
+ EB_ROOT_UNIQUE, EB_ROOT_UNIQUE, EB_ROOT_UNIQUE,
+ EB_ROOT_UNIQUE, EB_ROOT_UNIQUE,
+};
+
+struct timer {
+ struct eb32_node node;
+ unsigned int count;
+};
+
+struct srv_st {
+ unsigned int st_cnt[6]; /* 0xx to 5xx */
+ unsigned int nb_ct, nb_rt, nb_ok;
+ unsigned long long cum_ct, cum_rt;
+ struct ebmb_node node;
+ /* don't put anything else here, the server name will be there */
+};
+
+struct url_stat {
+ union {
+ struct ebpt_node url;
+ struct eb64_node val;
+ } node;
+ char *url;
+ unsigned long long total_time; /* sum(all reqs' times) */
+ unsigned long long total_time_ok; /* sum(all OK reqs' times) */
+ unsigned long long total_bytes_sent; /* sum(all bytes sent) */
+ unsigned int nb_err, nb_req;
+};
+
+#define FILT_COUNT_ONLY 0x01
+#define FILT_INVERT 0x02
+#define FILT_QUIET 0x04
+#define FILT_ERRORS_ONLY 0x08
+#define FILT_ACC_DELAY 0x10
+#define FILT_ACC_COUNT 0x20
+#define FILT_GRAPH_TIMERS 0x40
+#define FILT_PERCENTILE 0x80
+#define FILT_TIME_RESP 0x100
+
+#define FILT_INVERT_ERRORS 0x200
+#define FILT_INVERT_TIME_RESP 0x400
+
+#define FILT_COUNT_STATUS 0x800
+#define FILT_COUNT_SRV_STATUS 0x1000
+#define FILT_COUNT_TERM_CODES 0x2000
+
+#define FILT_COUNT_URL_ONLY 0x004000
+#define FILT_COUNT_URL_COUNT 0x008000
+#define FILT_COUNT_URL_ERR 0x010000
+#define FILT_COUNT_URL_TTOT 0x020000
+#define FILT_COUNT_URL_TAVG 0x040000
+#define FILT_COUNT_URL_TTOTO 0x080000
+#define FILT_COUNT_URL_TAVGO 0x100000
+
+#define FILT_HTTP_ONLY 0x200000
+#define FILT_TERM_CODE_NAME 0x400000
+#define FILT_INVERT_TERM_CODE_NAME 0x800000
+
+#define FILT_HTTP_STATUS 0x1000000
+#define FILT_INVERT_HTTP_STATUS 0x2000000
+#define FILT_QUEUE_ONLY 0x4000000
+#define FILT_QUEUE_SRV_ONLY 0x8000000
+
+#define FILT_COUNT_URL_BAVG 0x10000000
+#define FILT_COUNT_URL_BTOT 0x20000000
+
+#define FILT_COUNT_URL_ANY (FILT_COUNT_URL_ONLY|FILT_COUNT_URL_COUNT|FILT_COUNT_URL_ERR| \
+ FILT_COUNT_URL_TTOT|FILT_COUNT_URL_TAVG|FILT_COUNT_URL_TTOTO|FILT_COUNT_URL_TAVGO| \
+ FILT_COUNT_URL_BAVG|FILT_COUNT_URL_BTOT)
+
+#define FILT_COUNT_COOK_CODES 0x40000000
+#define FILT_COUNT_IP_COUNT 0x80000000
+
+#define FILT2_TIMESTAMP 0x01
+
+unsigned int filter = 0;
+unsigned int filter2 = 0;
+unsigned int filter_invert = 0;
+const char *line;
+int linenum = 0;
+int parse_err = 0;
+int lines_out = 0;
+int lines_max = -1;
+
+const char *fgets2(FILE *stream);
+
+void filter_count_url(const char *accept_field, const char *time_field, struct timer **tptr);
+void filter_count_ip(const char *source_field, const char *accept_field, const char *time_field, struct timer **tptr);
+void filter_count_srv_status(const char *accept_field, const char *time_field, struct timer **tptr);
+void filter_count_cook_codes(const char *accept_field, const char *time_field, struct timer **tptr);
+void filter_count_term_codes(const char *accept_field, const char *time_field, struct timer **tptr);
+void filter_count_status(const char *accept_field, const char *time_field, struct timer **tptr);
+void filter_graphs(const char *accept_field, const char *time_field, struct timer **tptr);
+void filter_output_line(const char *accept_field, const char *time_field, struct timer **tptr);
+void filter_accept_holes(const char *accept_field, const char *time_field, struct timer **tptr);
+
+void usage(FILE *output, const char *msg)
+{
+ fprintf(output,
+ "%s"
+ "Usage: halog [-h|--help] for long help\n"
+ " halog [-q] [-c] [-m <lines>]\n"
+ " {-cc|-gt|-pct|-st|-tc|-srv|-u|-uc|-ue|-ua|-ut|-uao|-uto|-uba|-ubt|-ic}\n"
+ " [-s <skip>] [-e|-E] [-H] [-rt|-RT <time>] [-ad <delay>] [-ac <count>]\n"
+ " [-v] [-Q|-QS] [-tcn|-TCN <termcode>] [ -hs|-HS [min][:[max]] ] [ -time [min][:[max]] ] < log\n"
+ "\n",
+ msg ? msg : ""
+ );
+}
+
+void die(const char *msg)
+{
+ usage(stderr, msg);
+ exit(1);
+}
+
+void help()
+{
+ usage(stdout, NULL);
+ printf(
+ "Input filters (several filters may be combined) :\n"
+ " -H only match lines containing HTTP logs (ignore TCP)\n"
+ " -E only match lines without any error (no 5xx status)\n"
+ " -e only match lines with errors (status 5xx or negative)\n"
+ " -rt|-RT <time> only match response times larger|smaller than <time>\n"
+ " -Q|-QS only match queued requests (any queue|server queue)\n"
+ " -tcn|-TCN <code> only match requests with/without termination code <code>\n"
+ " -hs|-HS <[min][:][max]> only match requests with HTTP status codes within/not\n"
+ " within min..max. Any of them may be omitted. Exact\n"
+ " code is checked for if no ':' is specified.\n"
+ " -time <[min][:max]> only match requests recorded between timestamps.\n"
+ " Any of them may be omitted.\n"
+ "Modifiers\n"
+ " -v invert the input filtering condition\n"
+ " -q don't report errors/warnings\n"
+ " -m <lines> limit output to the first <lines> lines\n"
+ "Output filters - only one may be used at a time\n"
+ " -c only report the number of lines that would have been printed\n"
+ " -pct output connect and response times percentiles\n"
+ " -st output number of requests per HTTP status code\n"
+ " -cc output number of requests per cookie code (2 chars)\n"
+ " -tc output number of requests per termination code (2 chars)\n"
+ " -srv output statistics per server (time, requests, errors)\n"
+ " -u* output statistics per URL (time, requests, errors)\n"
+ " Additional characters indicate the output sorting key :\n"
+ " -u : by URL, -uc : request count, -ue : error count\n"
+ " -ua : average response time, -ut : average total time\n"
+ " -uao, -uto: average times computed on valid ('OK') requests\n"
+ " -uba, -ubt: average bytes returned, total bytes returned\n"
+ );
+ exit(0);
+}
+
+
+/* return pointer to first char not part of current field starting at <p>. */
+
+#if defined(__i386__)
+/* this one is always faster on 32-bits */
+static inline const char *field_stop(const char *p)
+{
+ asm(
+ /* Look for spaces */
+ "4: \n\t"
+ "inc %0 \n\t"
+ "cmpb $0x20, -1(%0) \n\t"
+ "ja 4b \n\t"
+ "jz 3f \n\t"
+
+ /* we only get there for control chars 0..31. Leave if we find '\0' */
+ "cmpb $0x0, -1(%0) \n\t"
+ "jnz 4b \n\t"
+
+ /* return %0-1 = position of the last char we checked */
+ "3: \n\t"
+ "dec %0 \n\t"
+ : "=r" (p)
+ : "0" (p)
+ );
+ return p;
+}
+#else
+const char *field_stop(const char *p)
+{
+ unsigned char c;
+
+ while (1) {
+ c = *(p++);
+ if (c > ' ')
+ continue;
+ if (c == ' ' || c == 0)
+ break;
+ }
+ return p - 1;
+}
+#endif
+
+/* return field <field> (starting from 1) in string <p>. Only consider
+ * contiguous spaces (or tabs) as one delimiter. May return pointer to
+ * last char if field is not found. Equivalent to awk '{print $field}'.
+ */
+const char *field_start(const char *p, int field)
+{
+#ifndef PREFER_ASM
+ unsigned char c;
+ while (1) {
+ /* skip spaces */
+ while (1) {
+ c = *(p++);
+ if (c > ' ')
+ break;
+ if (c == ' ')
+ continue;
+ if (!c) /* end of line */
+ return p-1;
+ /* other char => new field */
+ break;
+ }
+
+ /* start of field */
+ field--;
+ if (!field)
+ return p-1;
+
+ /* skip this field */
+ while (1) {
+ c = *(p++);
+ if (c == ' ')
+ break;
+ if (c > ' ')
+ continue;
+ if (c == '\0')
+ return p - 1;
+ }
+ }
+#else
+ /* This version works optimally on i386 and x86_64 but the code above
+ * shows similar performance. However, depending on the version of GCC
+ * used, inlining rules change and it may have difficulties to make
+ * efficient use of this code at other locations and could result in
+ * worse performance (eg: gcc 4.4). You may want to experiment.
+ */
+ asm(
+ /* skip spaces */
+ "1: \n\t"
+ "inc %0 \n\t"
+ "cmpb $0x20, -1(%0) \n\t"
+ "ja 2f \n\t"
+ "jz 1b \n\t"
+
+ /* we only get there for control chars 0..31. Leave if we find '\0' */
+ "cmpb $0x0, -1(%0) \n\t"
+ "jz 3f \n\t"
+
+ /* start of field at [%0-1]. Check if we need to skip more fields */
+ "2: \n\t"
+ "dec %1 \n\t"
+ "jz 3f \n\t"
+
+ /* Look for spaces */
+ "4: \n\t"
+ "inc %0 \n\t"
+ "cmpb $0x20, -1(%0) \n\t"
+ "jz 1b \n\t"
+ "ja 4b \n\t"
+
+ /* we only get there for control chars 0..31. Leave if we find '\0' */
+ "cmpb $0x0, -1(%0) \n\t"
+ "jnz 4b \n\t"
+
+ /* return %0-1 = position of the last char we checked */
+ "3: \n\t"
+ "dec %0 \n\t"
+ : "=r" (p)
+ : "r" (field), "0" (p)
+ );
+ return p;
+#endif
+}
+
+/* keep only the <bits> higher bits of <i> */
+static inline unsigned int quantify_u32(unsigned int i, int bits)
+{
+ int high;
+
+ if (!bits)
+ return 0;
+
+ if (i)
+ high = fls_auto(i); // 1 to 32
+ else
+ high = 0;
+
+ if (high <= bits)
+ return i;
+
+ return i & ~((1 << (high - bits)) - 1);
+}
+
+/* keep only the <bits> higher bits of the absolute value of <i>, as well as
+ * its sign. */
+static inline int quantify(int i, int bits)
+{
+ if (i >= 0)
+ return quantify_u32(i, bits);
+ else
+ return -quantify_u32(-i, bits);
+}
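+
+/* Example with QBITS == 4: quantify(1234, 4) keeps the four highest
+ * significant bits of 1234 (0x4D2) and clears the rest, giving 1152 (0x480).
+ * Close timer values therefore collapse onto the same tree key.
+ */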
+
+/* Insert timer value <v> into tree <r>. A pre-allocated node must be passed
+ * in <alloc>. It may be NULL, in which case the function will allocate it
+ * itself. It will be reset to NULL once consumed. The caller is responsible
+ * for freeing the node once not used anymore. The node where the value was
+ * inserted is returned.
+ */
+struct timer *insert_timer(struct eb_root *r, struct timer **alloc, int v)
+{
+ struct timer *t = *alloc;
+ struct eb32_node *n;
+
+ if (!t) {
+ t = calloc(1, sizeof(*t));
+ if (unlikely(!t)) {
+ fprintf(stderr, "%s: not enough memory\n", __FUNCTION__);
+ exit(1);
+ }
+ }
+ t->node.key = quantify(v, QBITS); // keep only the higher QBITS bits
+
+ n = eb32i_insert(r, &t->node);
+ if (n == &t->node)
+ t = NULL; /* node inserted, will malloc next time */
+
+ *alloc = t;
+ return container_of(n, struct timer, node);
+}
+
+/* Insert value <v> into tree <r>. A pre-allocated node must be passed
+ * in <alloc>. It may be NULL, in which case the function will allocate it
+ * itself. It will be reset to NULL once consumed. The caller is responsible
+ * for freeing the node once not used anymore. The node where the value was
+ * inserted is returned.
+ */
+struct timer *insert_value(struct eb_root *r, struct timer **alloc, int v)
+{
+ struct timer *t = *alloc;
+ struct eb32_node *n;
+
+ if (!t) {
+ t = calloc(1, sizeof(*t));
+ if (unlikely(!t)) {
+ fprintf(stderr, "%s: not enough memory\n", __FUNCTION__);
+ exit(1);
+ }
+ }
+ t->node.key = v;
+
+ n = eb32i_insert(r, &t->node);
+ if (n == &t->node)
+ t = NULL; /* node inserted, will malloc next time */
+
+ *alloc = t;
+ return container_of(n, struct timer, node);
+}
+
+int str2ic(const char *s)
+{
+ int i = 0;
+ int j, k;
+
+ if (*s != '-') {
+ /* positive number */
+ while (1) {
+ j = (*s++) - '0';
+ k = i * 10;
+ if ((unsigned)j > 9)
+ break;
+ i = k + j;
+ }
+ } else {
+ /* negative number */
+ s++;
+ while (1) {
+ j = (*s++) - '0';
+ k = i * 10;
+ if ((unsigned)j > 9)
+ break;
+ i = k - j;
+ }
+ }
+
+ return i;
+}
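str2ic() parses a possibly-negative integer and simply stops at the first non-digit, with no error reporting: "12/34" yields 12, which is exactly what the timer-field parsers rely on. A straightforward reimplementation for illustration (`str2ic_demo` is a hypothetical name):

```c
#include <assert.h>

/* Parse a decimal integer with optional leading '-', stopping silently
 * at the first non-digit character. */
static int str2ic_demo(const char *s)
{
	int sign = 1, i = 0;

	if (*s == '-') {
		sign = -1;
		s++;
	}
	/* the unsigned compare rejects both chars below '0' and above '9' */
	while ((unsigned)(*s - '0') <= 9)
		i = i * 10 + (*s++ - '0');
	return sign * i;
}
```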
+
+
+/* Equivalent to strtoul with a length. */
+static inline unsigned int __strl2ui(const char *s, int len)
+{
+ unsigned int i = 0;
+ while (len-- > 0) {
+ i = i * 10 - '0';
+ i += (unsigned char)*s++;
+ }
+ return i;
+}
+
+unsigned int strl2ui(const char *s, int len)
+{
+ return __strl2ui(s, len);
+}
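The loop body in __strl2ui() folds the usual `i = i * 10 + (*s - '0')` into two unsigned operations; both forms are equivalent because `i*10 - '0' + c == i*10 + (c - '0')`. A copy of the loop, runnable on its own (`strl2ui_demo` is a hypothetical name):

```c
#include <assert.h>

/* Length-bounded decimal parser using the subtract-then-add trick from
 * __strl2ui() above. Assumes the input really contains digits. */
static unsigned int strl2ui_demo(const char *s, int len)
{
	unsigned int i = 0;

	while (len-- > 0) {
		i = i * 10 - '0';          /* pre-subtract the ASCII bias */
		i += (unsigned char)*s++;  /* then add the raw character */
	}
	return i;
}
```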
+
+/* Convert "[04/Dec/2008:09:49:40.555]" to an integer equivalent to the time of
+ * the day in milliseconds. It returns -1 for all unparsable values. The parser
+ * looks ugly but gcc emits far better code that way.
+ */
+int convert_date(const char *field)
+{
+ unsigned int h, m, s, ms;
+ unsigned char c;
+ const char *b, *e;
+
+ h = m = s = ms = 0;
+ e = field;
+
+ /* skip the date */
+ while (1) {
+ c = *(e++);
+ if (c == ':')
+ break;
+ if (!c)
+ goto out_err;
+ }
+
+ /* hour + ':' */
+ b = e;
+ while (1) {
+ c = *(e++) - '0';
+ if (c > 9)
+ break;
+ h = h * 10 + c;
+ }
+ if (c == (unsigned char)(0 - '0'))
+ goto out_err;
+
+ /* minute + ':' */
+ b = e;
+ while (1) {
+ c = *(e++) - '0';
+ if (c > 9)
+ break;
+ m = m * 10 + c;
+ }
+ if (c == (unsigned char)(0 - '0'))
+ goto out_err;
+
+ /* second + '.' or ']' */
+ b = e;
+ while (1) {
+ c = *(e++) - '0';
+ if (c > 9)
+ break;
+ s = s * 10 + c;
+ }
+ if (c == (unsigned char)(0 - '0'))
+ goto out_err;
+
+ /* if there's a '.', we have milliseconds */
+ if (c == (unsigned char)('.' - '0')) {
+ /* millisecond second + ']' */
+ b = e;
+ while (1) {
+ c = *(e++) - '0';
+ if (c > 9)
+ break;
+ ms = ms * 10 + c;
+ }
+ if (c == (unsigned char)(0 - '0'))
+ goto out_err;
+ }
+ return (((h * 60) + m) * 60 + s) * 1000 + ms;
+ out_err:
+ return -1;
+}
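A slower but obviously-correct equivalent of convert_date() using sscanf shows the expected result for a typical haproxy accept date (`convert_date_demo` is a hypothetical helper, kept only to document the conversion):

```c
#include <assert.h>
#include <stdio.h>

/* Convert "[04/Dec/2008:09:49:40.555]" to milliseconds since midnight,
 * or -1 if the field does not match. The milliseconds part is optional.
 */
static int convert_date_demo(const char *field)
{
	unsigned int d, y, h, m, s, ms = 0;
	char mon[4];

	if (sscanf(field, "[%u/%3s/%u:%u:%u:%u.%u]",
	           &d, mon, &y, &h, &m, &s, &ms) < 6)
		return -1;
	return (((h * 60) + m) * 60 + s) * 1000 + ms;
}
```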
+
+/* Convert "[04/Dec/2008:09:49:40.555]" to a Unix timestamp.
+ * It returns -1 for all unparsable values. The parser
+ * looks ugly but gcc emits far better code that way.
+ */
+int convert_date_to_timestamp(const char *field)
+{
+ unsigned int d, mo, y, h, m, s;
+ unsigned char c;
+ const char *b, *e;
+ time_t rawtime;
+ static struct tm * timeinfo;
+ static int last_res;
+
+ d = mo = y = h = m = s = 0;
+ e = field;
+
+ c = *(e++); // remove '['
+ /* day + '/' */
+ while (1) {
+ c = *(e++) - '0';
+ if (c > 9)
+ break;
+ d = d * 10 + c;
+ }
+ if (c == (unsigned char)(0 - '0'))
+ goto out_err;
+
+ /* month + '/' */
+ c = *(e++);
+ if (c == 'F') {
+ mo = 2;
+ e += 3;
+ } else if (c == 'S') {
+ mo = 9;
+ e += 3;
+ } else if (c == 'O') {
+ mo = 10;
+ e += 3;
+ } else if (c == 'N') {
+ mo = 11;
+ e += 3;
+ } else if (c == 'D') {
+ mo = 12;
+ e += 3;
+ } else if (c == 'A') {
+ c = *(e++);
+ if (c == 'p') {
+ mo = 4;
+ e += 2;
+ } else if (c == 'u') {
+ mo = 8;
+ e += 2;
+ } else
+ goto out_err;
+ } else if (c == 'J') {
+ c = *(e++);
+ if (c == 'a') {
+ mo = 1;
+ e += 2;
+ } else if (c == 'u') {
+ c = *(e++);
+ if (c == 'n') {
+ mo = 6;
+ e++;
+ } else if (c == 'l') {
+ mo = 7;
+ e++;
+ } else
+ goto out_err;
+ } else
+ goto out_err;
+ } else if (c == 'M') {
+ e++;
+ c = *(e++);
+ if (c == 'r') {
+ mo = 3;
+ e++;
+ } else if (c == 'y') {
+ mo = 5;
+ e++;
+ } else
+ goto out_err;
+ } else
+ goto out_err;
+
+ /* year + ':' */
+ while (1) {
+ c = *(e++) - '0';
+ if (c > 9)
+ break;
+ y = y * 10 + c;
+ }
+ if (c == (unsigned char)(0 - '0'))
+ goto out_err;
+
+ /* hour + ':' */
+ b = e;
+ while (1) {
+ c = *(e++) - '0';
+ if (c > 9)
+ break;
+ h = h * 10 + c;
+ }
+ if (c == (unsigned char)(0 - '0'))
+ goto out_err;
+
+ /* minute + ':' */
+ b = e;
+ while (1) {
+ c = *(e++) - '0';
+ if (c > 9)
+ break;
+ m = m * 10 + c;
+ }
+ if (c == (unsigned char)(0 - '0'))
+ goto out_err;
+
+ /* second + '.' or ']' */
+ b = e;
+ while (1) {
+ c = *(e++) - '0';
+ if (c > 9)
+ break;
+ s = s * 10 + c;
+ }
+
+ if (likely(timeinfo)) {
+ if (timeinfo->tm_min == m &&
+ timeinfo->tm_hour == h &&
+ timeinfo->tm_mday == d &&
+ timeinfo->tm_mon == mo - 1 &&
+ timeinfo->tm_year == y - 1900)
+ return last_res + s;
+ }
+ else {
+ time(&rawtime);
+ timeinfo = localtime(&rawtime);
+ }
+
+ timeinfo->tm_sec = 0;
+ timeinfo->tm_min = m;
+ timeinfo->tm_hour = h;
+ timeinfo->tm_mday = d;
+ timeinfo->tm_mon = mo - 1;
+ timeinfo->tm_year = y - 1900;
+ last_res = mktime(timeinfo);
+
+ return last_res + s;
+ out_err:
+ return -1;
+}
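convert_date_to_timestamp() avoids calling mktime() on every log line by caching the result for the current minute and only adding the seconds, since consecutive lines almost always fall in the same minute. A minimal sketch of that caching idea (`cached_mktime` is a hypothetical helper; field names are the same struct tm members the function uses):

```c
#include <assert.h>
#include <stdlib.h>
#include <time.h>

/* Return mktime() of <req> with seconds forced to 0, plus <sec>.
 * The expensive mktime() call is only made when the minute changes. */
static long cached_mktime(struct tm *req, int sec)
{
	static struct tm last;
	static long last_res;
	static int valid;

	if (!valid ||
	    last.tm_min  != req->tm_min  || last.tm_hour != req->tm_hour ||
	    last.tm_mday != req->tm_mday || last.tm_mon  != req->tm_mon  ||
	    last.tm_year != req->tm_year) {
		last = *req;
		last.tm_sec = 0;
		last_res = (long)mktime(&last);
		valid = 1;
	}
	return last_res + sec;
}
```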
+
+void truncated_line(int linenum, const char *line)
+{
+ if (!(filter & FILT_QUIET))
+ fprintf(stderr, "Truncated line %d: %s\n", linenum, line);
+}
+
+int main(int argc, char **argv)
+{
+ const char *b, *e, *p, *time_field, *accept_field, *source_field;
+ const char *filter_term_code_name = NULL;
+ const char *output_file = NULL;
+ int f, last, err;
+ struct timer *t = NULL;
+ struct eb32_node *n;
+ struct url_stat *ustat = NULL;
+ int val, test;
+ unsigned int uval;
+ int filter_acc_delay = 0, filter_acc_count = 0;
+ int filter_time_resp = 0;
+ int filt_http_status_low = 0, filt_http_status_high = 0;
+ int filt2_timestamp_low = 0, filt2_timestamp_high = 0;
+ int skip_fields = 1;
+
+ void (*line_filter)(const char *accept_field, const char *time_field, struct timer **tptr) = NULL;
+
+ argc--; argv++;
+ while (argc > 0) {
+ if (*argv[0] != '-')
+ break;
+
+ if (strcmp(argv[0], "-ad") == 0) {
+ if (argc < 2) die("missing option for -ad");
+ argc--; argv++;
+ filter |= FILT_ACC_DELAY;
+ filter_acc_delay = atol(*argv);
+ }
+ else if (strcmp(argv[0], "-ac") == 0) {
+ if (argc < 2) die("missing option for -ac");
+ argc--; argv++;
+ filter |= FILT_ACC_COUNT;
+ filter_acc_count = atol(*argv);
+ }
+ else if (strcmp(argv[0], "-rt") == 0) {
+ if (argc < 2) die("missing option for -rt");
+ argc--; argv++;
+ filter |= FILT_TIME_RESP;
+ filter_time_resp = atol(*argv);
+ }
+ else if (strcmp(argv[0], "-RT") == 0) {
+ if (argc < 2) die("missing option for -RT");
+ argc--; argv++;
+ filter |= FILT_TIME_RESP | FILT_INVERT_TIME_RESP;
+ filter_time_resp = atol(*argv);
+ }
+ else if (strcmp(argv[0], "-s") == 0) {
+ if (argc < 2) die("missing option for -s");
+ argc--; argv++;
+ skip_fields = atol(*argv);
+ }
+ else if (strcmp(argv[0], "-m") == 0) {
+ if (argc < 2) die("missing option for -m");
+ argc--; argv++;
+ lines_max = atol(*argv);
+ }
+ else if (strcmp(argv[0], "-e") == 0)
+ filter |= FILT_ERRORS_ONLY;
+ else if (strcmp(argv[0], "-E") == 0)
+ filter |= FILT_ERRORS_ONLY | FILT_INVERT_ERRORS;
+ else if (strcmp(argv[0], "-H") == 0)
+ filter |= FILT_HTTP_ONLY;
+ else if (strcmp(argv[0], "-Q") == 0)
+ filter |= FILT_QUEUE_ONLY;
+ else if (strcmp(argv[0], "-QS") == 0)
+ filter |= FILT_QUEUE_SRV_ONLY;
+ else if (strcmp(argv[0], "-c") == 0)
+ filter |= FILT_COUNT_ONLY;
+ else if (strcmp(argv[0], "-q") == 0)
+ filter |= FILT_QUIET;
+ else if (strcmp(argv[0], "-v") == 0)
+ filter_invert = !filter_invert;
+ else if (strcmp(argv[0], "-gt") == 0)
+ filter |= FILT_GRAPH_TIMERS;
+ else if (strcmp(argv[0], "-pct") == 0)
+ filter |= FILT_PERCENTILE;
+ else if (strcmp(argv[0], "-st") == 0)
+ filter |= FILT_COUNT_STATUS;
+ else if (strcmp(argv[0], "-srv") == 0)
+ filter |= FILT_COUNT_SRV_STATUS;
+ else if (strcmp(argv[0], "-cc") == 0)
+ filter |= FILT_COUNT_COOK_CODES;
+ else if (strcmp(argv[0], "-tc") == 0)
+ filter |= FILT_COUNT_TERM_CODES;
+ else if (strcmp(argv[0], "-tcn") == 0) {
+ if (argc < 2) die("missing option for -tcn");
+ argc--; argv++;
+ filter |= FILT_TERM_CODE_NAME;
+ filter_term_code_name = *argv;
+ }
+ else if (strcmp(argv[0], "-TCN") == 0) {
+ if (argc < 2) die("missing option for -TCN");
+ argc--; argv++;
+ filter |= FILT_TERM_CODE_NAME | FILT_INVERT_TERM_CODE_NAME;
+ filter_term_code_name = *argv;
+ }
+ else if (strcmp(argv[0], "-hs") == 0 || strcmp(argv[0], "-HS") == 0) {
+ char *sep, *str;
+
+ if (argc < 2) die("missing option for -hs/-HS ([min]:[max])");
+ filter |= FILT_HTTP_STATUS;
+ if (argv[0][1] == 'H')
+ filter |= FILT_INVERT_HTTP_STATUS;
+
+ argc--; argv++;
+ str = *argv;
+ sep = strchr(str, ':'); /* [min]:[max] */
+ if (!sep)
+ sep = str; /* make max point to min */
+ else
+ *sep++ = 0;
+ filt_http_status_low = *str ? atol(str) : 0;
+ filt_http_status_high = *sep ? atol(sep) : 65535;
+ }
+ else if (strcmp(argv[0], "-time") == 0) {
+ char *sep, *str;
+
+ if (argc < 2) die("missing option for -time ([min]:[max])");
+ filter2 |= FILT2_TIMESTAMP;
+
+ argc--; argv++;
+ str = *argv;
+ sep = strchr(str, ':'); /* [min]:[max] */
+ filt2_timestamp_low = *str ? atol(str) : 0;
+ if (!sep)
+ filt2_timestamp_high = 0xFFFFFFFF;
+ else
+ filt2_timestamp_high = atol(++sep);
+ }
+ else if (strcmp(argv[0], "-u") == 0)
+ filter |= FILT_COUNT_URL_ONLY;
+ else if (strcmp(argv[0], "-uc") == 0)
+ filter |= FILT_COUNT_URL_COUNT;
+ else if (strcmp(argv[0], "-ue") == 0)
+ filter |= FILT_COUNT_URL_ERR;
+ else if (strcmp(argv[0], "-ua") == 0)
+ filter |= FILT_COUNT_URL_TAVG;
+ else if (strcmp(argv[0], "-ut") == 0)
+ filter |= FILT_COUNT_URL_TTOT;
+ else if (strcmp(argv[0], "-uao") == 0)
+ filter |= FILT_COUNT_URL_TAVGO;
+ else if (strcmp(argv[0], "-uto") == 0)
+ filter |= FILT_COUNT_URL_TTOTO;
+ else if (strcmp(argv[0], "-uba") == 0)
+ filter |= FILT_COUNT_URL_BAVG;
+ else if (strcmp(argv[0], "-ubt") == 0)
+ filter |= FILT_COUNT_URL_BTOT;
+ else if (strcmp(argv[0], "-ic") == 0)
+ filter |= FILT_COUNT_IP_COUNT;
+ else if (strcmp(argv[0], "-o") == 0) {
+ if (output_file)
+ die("Fatal: output file name already specified.\n");
+ if (argc < 2)
+ die("Fatal: missing output file name.\n");
+ output_file = argv[1];
+ }
+ else if (strcmp(argv[0], "-h") == 0 || strcmp(argv[0], "--help") == 0)
+ help();
+ argc--;
+ argv++;
+ }
+
+ if (!filter)
+ die("No action specified.\n");
+
+ if (filter & FILT_ACC_COUNT && !filter_acc_count)
+ filter_acc_count=1;
+
+ if (filter & FILT_ACC_DELAY && !filter_acc_delay)
+ filter_acc_delay = 1;
+
+
+ /* by default, all lines are printed */
+ line_filter = filter_output_line;
+ if (filter & (FILT_ACC_COUNT|FILT_ACC_DELAY))
+ line_filter = filter_accept_holes;
+ else if (filter & (FILT_GRAPH_TIMERS|FILT_PERCENTILE))
+ line_filter = filter_graphs;
+ else if (filter & FILT_COUNT_STATUS)
+ line_filter = filter_count_status;
+ else if (filter & FILT_COUNT_COOK_CODES)
+ line_filter = filter_count_cook_codes;
+ else if (filter & FILT_COUNT_TERM_CODES)
+ line_filter = filter_count_term_codes;
+ else if (filter & FILT_COUNT_SRV_STATUS)
+ line_filter = filter_count_srv_status;
+ else if (filter & FILT_COUNT_URL_ANY)
+ line_filter = filter_count_url;
+ else if (filter & FILT_COUNT_ONLY)
+ line_filter = NULL;
+
+#if defined(POSIX_FADV_SEQUENTIAL)
+ /* around 20% performance improvement is observed on Linux with this
+ * on a cold cache. Surprisingly, WILLNEED is less performant. Don't
+ * use NOREUSE as it flushes the cache and prevents easy data
+ * manipulation on logs!
+ */
+ posix_fadvise(0, 0, 0, POSIX_FADV_SEQUENTIAL);
+#endif
+
+ if (!line_filter && /* FILT_COUNT_ONLY (see above), and no input filter (see below) */
+ !(filter & (FILT_HTTP_ONLY|FILT_TIME_RESP|FILT_ERRORS_ONLY|FILT_HTTP_STATUS|FILT_QUEUE_ONLY|FILT_QUEUE_SRV_ONLY|FILT_TERM_CODE_NAME)) &&
+ !(filter2 & (FILT2_TIMESTAMP))) {
+ /* read the whole file at once first, ignore it if inverted output */
+ if (!filter_invert)
+ while ((lines_max < 0 || lines_out < lines_max) && fgets2(stdin) != NULL)
+ lines_out++;
+
+ goto skip_filters;
+ }
+
+ while ((line = fgets2(stdin)) != NULL) {
+ linenum++;
+ time_field = NULL; accept_field = NULL;
+ source_field = NULL;
+
+ test = 1;
+
+ /* for any line we process, we first ensure that there is a field
+ * looking like the accept date field (beginning with a '[').
+ */
+ if (filter & FILT_COUNT_IP_COUNT) {
+ /* we need the IP first */
+ source_field = field_start(line, SOURCE_FIELD + skip_fields);
+ accept_field = field_start(source_field, ACCEPT_FIELD - SOURCE_FIELD + 1);
+ }
+ else
+ accept_field = field_start(line, ACCEPT_FIELD + skip_fields);
+
+ if (unlikely(*accept_field != '[')) {
+ parse_err++;
+ continue;
+ }
+
+ /* the day of month field is between 01 and 31 */
+ if (accept_field[1] < '0' || accept_field[1] > '3') {
+ parse_err++;
+ continue;
+ }
+
+ if (filter2 & FILT2_TIMESTAMP) {
+ uval = convert_date_to_timestamp(accept_field);
+ test &= (uval >= filt2_timestamp_low && uval <= filt2_timestamp_high);
+ }
+
+ if (filter & FILT_HTTP_ONLY) {
+ /* only report lines with at least 4 timers */
+ if (!time_field) {
+ time_field = field_start(accept_field, TIME_FIELD - ACCEPT_FIELD + 1);
+ if (unlikely(!*time_field)) {
+ truncated_line(linenum, line);
+ continue;
+ }
+ }
+
+ e = field_stop(time_field + 1);
+ /* we have field TIME_FIELD in [time_field]..[e-1] */
+ p = time_field;
+ f = 0;
+ while (!SEP(*p)) {
+ if (++f == 4)
+ break;
+ SKIP_CHAR(p, '/');
+ }
+ test &= (f >= 4);
+ }
+
+ if (filter & FILT_TIME_RESP) {
+ int tps;
+
+ /* only report lines with response times larger than filter_time_resp */
+ if (!time_field) {
+ time_field = field_start(accept_field, TIME_FIELD - ACCEPT_FIELD + 1);
+ if (unlikely(!*time_field)) {
+ truncated_line(linenum, line);
+ continue;
+ }
+ }
+
+ e = field_stop(time_field + 1);
+ /* we have field TIME_FIELD in [time_field]..[e-1], let's check only the response time */
+
+ p = time_field;
+ err = 0;
+ f = 0;
+ while (!SEP(*p)) {
+ tps = str2ic(p);
+ if (tps < 0) {
+ tps = -1;
+ err = 1;
+ }
+ if (++f == 4)
+ break;
+ SKIP_CHAR(p, '/');
+ }
+
+ if (unlikely(f < 4)) {
+ parse_err++;
+ continue;
+ }
+
+ test &= (tps >= filter_time_resp) ^ !!(filter & FILT_INVERT_TIME_RESP);
+ }
+
+ if (filter & (FILT_ERRORS_ONLY | FILT_HTTP_STATUS)) {
+ /* Check both error codes (-1, 5xx) and status code ranges */
+ if (time_field)
+ b = field_start(time_field, STATUS_FIELD - TIME_FIELD + 1);
+ else
+ b = field_start(accept_field, STATUS_FIELD - ACCEPT_FIELD + 1);
+
+ if (unlikely(!*b)) {
+ truncated_line(linenum, line);
+ continue;
+ }
+
+ val = str2ic(b);
+ if (filter & FILT_ERRORS_ONLY)
+ test &= (val < 0 || (val >= 500 && val <= 599)) ^ !!(filter & FILT_INVERT_ERRORS);
+
+ if (filter & FILT_HTTP_STATUS)
+ test &= (val >= filt_http_status_low && val <= filt_http_status_high) ^ !!(filter & FILT_INVERT_HTTP_STATUS);
+ }
+
+ if (filter & (FILT_QUEUE_ONLY|FILT_QUEUE_SRV_ONLY)) {
+ /* Check if the server's queue is non-null */
+ if (time_field)
+ b = field_start(time_field, QUEUE_LEN_FIELD - TIME_FIELD + 1);
+ else
+ b = field_start(accept_field, QUEUE_LEN_FIELD - ACCEPT_FIELD + 1);
+
+ if (unlikely(!*b)) {
+ truncated_line(linenum, line);
+ continue;
+ }
+
+ if (*b == '0') {
+ if (filter & FILT_QUEUE_SRV_ONLY) {
+ test = 0;
+ }
+ else {
+ do {
+ b++;
+ if (*b == '/') {
+ b++;
+ break;
+ }
+ } while (*b);
+ test &= ((unsigned char)(*b - '1') < 9);
+ }
+ }
+ }
+
+ if (filter & FILT_TERM_CODE_NAME) {
+ /* only report corresponding termination code name */
+ if (time_field)
+ b = field_start(time_field, TERM_CODES_FIELD - TIME_FIELD + 1);
+ else
+ b = field_start(accept_field, TERM_CODES_FIELD - ACCEPT_FIELD + 1);
+
+ if (unlikely(!*b)) {
+ truncated_line(linenum, line);
+ continue;
+ }
+
+ test &= (b[0] == filter_term_code_name[0] && b[1] == filter_term_code_name[1]) ^ !!(filter & FILT_INVERT_TERM_CODE_NAME);
+ }
+
+
+ test ^= filter_invert;
+ if (!test)
+ continue;
+
+ /************** here we process inputs *******************/
+
+ if (line_filter) {
+ if (filter & FILT_COUNT_IP_COUNT)
+ filter_count_ip(source_field, accept_field, time_field, &t);
+ else
+ line_filter(accept_field, time_field, &t);
+ }
+ else
+ lines_out++; /* FILT_COUNT_ONLY was used, so we're just counting lines */
+ if (lines_max >= 0 && lines_out >= lines_max)
+ break;
+ }
+
+ skip_filters:
+ /*****************************************************
+ * Here we've finished reading all input. Depending on the
+ * filters, we may still have some analysis to run on the
+ * collected data and to output data in a new format.
+ *************************************************** */
+
+ if (t)
+ free(t);
+
+ if (filter & FILT_COUNT_ONLY) {
+ printf("%d\n", lines_out);
+ exit(0);
+ }
+
+ if (filter & (FILT_ACC_COUNT|FILT_ACC_DELAY)) {
+ /* sort and count all timers. Output will look like this:
+ * <accept_date> <delta_ms from previous one> <nb entries>
+ */
+ n = eb32_first(&timers[0]);
+
+ if (n)
+ last = n->key;
+ while (n) {
+ unsigned int d, h, m, s, ms;
+
+ t = container_of(n, struct timer, node);
+ h = n->key;
+ d = h - last;
+ last = h;
+
+ if (d >= filter_acc_delay && t->count >= filter_acc_count) {
+ ms = h % 1000; h = h / 1000;
+ s = h % 60; h = h / 60;
+ m = h % 60; h = h / 60;
+ printf("%02d:%02d:%02d.%03d %d %d %d\n", h, m, s, ms, last, d, t->count);
+ lines_out++;
+ if (lines_max >= 0 && lines_out >= lines_max)
+ break;
+ }
+ n = eb32_next(n);
+ }
+ }
+ else if (filter & FILT_GRAPH_TIMERS) {
+ /* sort all timers */
+ for (f = 0; f < 5; f++) {
+ struct eb32_node *n;
+ int val;
+
+ val = 0;
+ n = eb32_first(&timers[f]);
+ while (n) {
+ int i;
+ double d;
+
+ t = container_of(n, struct timer, node);
+ last = n->key;
+ val = t->count;
+
+ i = (last < 0) ? -last : last;
+ i = fls_auto(i) - QBITS;
+
+ if (i > 0)
+ d = val / (double)(1 << i);
+ else
+ d = val;
+
+ if (d > 0.0)
+ printf("%d %d %f\n", f, last, d+1.0);
+
+ n = eb32_next(n);
+ }
+ }
+ }
+ else if (filter & FILT_PERCENTILE) {
+ /* report timers by percentile :
+ * <percent> <total> <max_req_time> <max_conn_time> <max_resp_time> <max_data_time>
+ * We don't count errs.
+ */
+ struct eb32_node *n[5];
+ unsigned long cum[5];
+ double step;
+
+ if (!lines_out)
+ goto empty;
+
+ for (f = 1; f < 5; f++) {
+ n[f] = eb32_first(&timers[f]);
+ cum[f] = container_of(n[f], struct timer, node)->count;
+ }
+
+ for (step = 1; step <= 1000;) {
+ unsigned int thres = lines_out * (step / 1000.0);
+
+ printf("%3.1f %d ", step/10.0, thres);
+ for (f = 1; f < 5; f++) {
+ struct eb32_node *next;
+ while (cum[f] < thres) {
+ /* need to find other keys */
+ next = eb32_next(n[f]);
+ if (!next)
+ break;
+ n[f] = next;
+ cum[f] += container_of(next, struct timer, node)->count;
+ }
+
+ /* value still within $step % of total */
+ printf("%d ", n[f]->key);
+ }
+ putchar('\n');
+ if (step >= 100 && step < 900)
+ step += 50; // jump 5% by 5% between those steps.
+ else if (step >= 20 && step < 980)
+ step += 10;
+ else
+ step += 1;
+ }
+ }
+ else if (filter & FILT_COUNT_STATUS) {
+ /* output all statuses in the form of <status> <occurrences> */
+ n = eb32_first(&timers[0]);
+ while (n) {
+ t = container_of(n, struct timer, node);
+ printf("%d %d\n", n->key, t->count);
+ lines_out++;
+ if (lines_max >= 0 && lines_out >= lines_max)
+ break;
+ n = eb32_next(n);
+ }
+ }
+ else if (filter & FILT_COUNT_SRV_STATUS) {
+ struct ebmb_node *srv_node;
+ struct srv_st *srv;
+
+ printf("#srv_name 1xx 2xx 3xx 4xx 5xx other tot_req req_ok pct_ok avg_ct avg_rt\n");
+
+ srv_node = ebmb_first(&timers[0]);
+ while (srv_node) {
+ int tot_rq;
+
+ srv = container_of(srv_node, struct srv_st, node);
+
+ tot_rq = 0;
+ for (f = 0; f <= 5; f++)
+ tot_rq += srv->st_cnt[f];
+
+ printf("%s %d %d %d %d %d %d %d %d %.1f %d %d\n",
+ srv_node->key, srv->st_cnt[1], srv->st_cnt[2],
+ srv->st_cnt[3], srv->st_cnt[4], srv->st_cnt[5], srv->st_cnt[0],
+ tot_rq,
+ srv->nb_ok, (double)srv->nb_ok * 100.0 / (tot_rq?tot_rq:1),
+ (int)(srv->cum_ct / (srv->nb_ct?srv->nb_ct:1)), (int)(srv->cum_rt / (srv->nb_rt?srv->nb_rt:1)));
+ srv_node = ebmb_next(srv_node);
+ lines_out++;
+ if (lines_max >= 0 && lines_out >= lines_max)
+ break;
+ }
+ }
+ else if (filter & (FILT_COUNT_TERM_CODES|FILT_COUNT_COOK_CODES)) {
+ /* output all statuses in the form of <code> <occurrences> */
+ n = eb32_first(&timers[0]);
+ while (n) {
+ t = container_of(n, struct timer, node);
+ printf("%c%c %d\n", (n->key >> 8), (n->key) & 255, t->count);
+ lines_out++;
+ if (lines_max >= 0 && lines_out >= lines_max)
+ break;
+ n = eb32_next(n);
+ }
+ }
+ else if (filter & (FILT_COUNT_URL_ANY|FILT_COUNT_IP_COUNT)) {
+ struct eb_node *node, *next;
+
+ if (!(filter & FILT_COUNT_URL_ONLY)) {
+ /* we have to sort on another criterion. We'll use timers[1] for the
+ * destination tree.
+ */
+
+ timers[1] = EB_ROOT; /* reconfigure to accept duplicates */
+ for (node = eb_first(&timers[0]); node; node = next) {
+ next = eb_next(node);
+ eb_delete(node);
+
+ ustat = container_of(node, struct url_stat, node.url.node);
+
+ if (filter & (FILT_COUNT_URL_COUNT|FILT_COUNT_IP_COUNT))
+ ustat->node.val.key = ustat->nb_req;
+ else if (filter & FILT_COUNT_URL_ERR)
+ ustat->node.val.key = ustat->nb_err;
+ else if (filter & FILT_COUNT_URL_TTOT)
+ ustat->node.val.key = ustat->total_time;
+ else if (filter & FILT_COUNT_URL_TAVG)
+ ustat->node.val.key = ustat->nb_req ? ustat->total_time / ustat->nb_req : 0;
+ else if (filter & FILT_COUNT_URL_TTOTO)
+ ustat->node.val.key = ustat->total_time_ok;
+ else if (filter & FILT_COUNT_URL_TAVGO)
+ ustat->node.val.key = (ustat->nb_req - ustat->nb_err) ? ustat->total_time_ok / (ustat->nb_req - ustat->nb_err) : 0;
+ else if (filter & FILT_COUNT_URL_BAVG)
+ ustat->node.val.key = ustat->nb_req ? ustat->total_bytes_sent / ustat->nb_req : 0;
+ else if (filter & FILT_COUNT_URL_BTOT)
+ ustat->node.val.key = ustat->total_bytes_sent;
+ else
+ ustat->node.val.key = 0;
+
+ eb64_insert(&timers[1], &ustat->node.val);
+ }
+ /* switch trees */
+ timers[0] = timers[1];
+ }
+
+ if (filter & FILT_COUNT_IP_COUNT)
+ printf("#req err ttot tavg oktot okavg bavg btot src\n");
+ else
+ printf("#req err ttot tavg oktot okavg bavg btot url\n");
+
+ /* scan the tree in its reverse sorting order */
+ node = eb_last(&timers[0]);
+ while (node) {
+ ustat = container_of(node, struct url_stat, node.url.node);
+ printf("%d %d %Ld %Ld %Ld %Ld %Ld %Ld %s\n",
+ ustat->nb_req,
+ ustat->nb_err,
+ ustat->total_time,
+ ustat->nb_req ? ustat->total_time / ustat->nb_req : 0,
+ ustat->total_time_ok,
+ (ustat->nb_req - ustat->nb_err) ? ustat->total_time_ok / (ustat->nb_req - ustat->nb_err) : 0,
+ ustat->nb_req ? ustat->total_bytes_sent / ustat->nb_req : 0,
+ ustat->total_bytes_sent,
+ ustat->url);
+
+ node = eb_prev(node);
+ lines_out++;
+ if (lines_max >= 0 && lines_out >= lines_max)
+ break;
+ }
+ }
+
+ empty:
+ if (!(filter & FILT_QUIET))
+ fprintf(stderr, "%d lines in, %d lines out, %d parsing errors\n",
+ linenum, lines_out, parse_err);
+ exit(0);
+}
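The FILT_PERCENTILE branch above walks each sorted timer tree while accumulating per-key counts, and reports the current key once the cumulative count reaches the requested fraction of all output lines. The same walk on a plain sorted array (`percentile_key` is a hypothetical helper; arrays stand in for the eb32 trees):

```c
#include <assert.h>

/* Given keys[] sorted ascending with per-key counts[], return the key at
 * the given percentile fraction (0 < fraction <= 1) of <total> samples,
 * using the same "advance while cumulated < threshold" rule as above. */
static int percentile_key(const int *keys, const int *counts, int n,
                          unsigned long total, double fraction)
{
	unsigned long thres = (unsigned long)(total * fraction);
	unsigned long cum;
	int i = 0;

	cum = counts[0];
	while (cum < thres && i + 1 < n) {
		i++;
		cum += counts[i];
	}
	return keys[i];
}
```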
+
+void filter_output_line(const char *accept_field, const char *time_field, struct timer **tptr)
+{
+ puts(line);
+ lines_out++;
+}
+
+void filter_accept_holes(const char *accept_field, const char *time_field, struct timer **tptr)
+{
+ struct timer *t2;
+ int val;
+
+ val = convert_date(accept_field);
+ if (unlikely(val < 0)) {
+ truncated_line(linenum, line);
+ return;
+ }
+
+ t2 = insert_value(&timers[0], tptr, val);
+ t2->count++;
+ return;
+}
+
+void filter_count_status(const char *accept_field, const char *time_field, struct timer **tptr)
+{
+ struct timer *t2;
+ const char *b;
+ int val;
+
+ if (time_field)
+ b = field_start(time_field, STATUS_FIELD - TIME_FIELD + 1);
+ else
+ b = field_start(accept_field, STATUS_FIELD - ACCEPT_FIELD + 1);
+
+ if (unlikely(!*b)) {
+ truncated_line(linenum, line);
+ return;
+ }
+
+ val = str2ic(b);
+
+ t2 = insert_value(&timers[0], tptr, val);
+ t2->count++;
+}
+
+void filter_count_cook_codes(const char *accept_field, const char *time_field, struct timer **tptr)
+{
+ struct timer *t2;
+ const char *b;
+ int val;
+
+ if (time_field)
+ b = field_start(time_field, TERM_CODES_FIELD - TIME_FIELD + 1);
+ else
+ b = field_start(accept_field, TERM_CODES_FIELD - ACCEPT_FIELD + 1);
+
+ if (unlikely(!*b)) {
+ truncated_line(linenum, line);
+ return;
+ }
+
+ val = 256 * b[2] + b[3];
+
+ t2 = insert_value(&timers[0], tptr, val);
+ t2->count++;
+}
+
+void filter_count_term_codes(const char *accept_field, const char *time_field, struct timer **tptr)
+{
+ struct timer *t2;
+ const char *b;
+ int val;
+
+ if (time_field)
+ b = field_start(time_field, TERM_CODES_FIELD - TIME_FIELD + 1);
+ else
+ b = field_start(accept_field, TERM_CODES_FIELD - ACCEPT_FIELD + 1);
+
+ if (unlikely(!*b)) {
+ truncated_line(linenum, line);
+ return;
+ }
+
+ val = 256 * b[0] + b[1];
+
+ t2 = insert_value(&timers[0], tptr, val);
+ t2->count++;
+}
+
+void filter_count_srv_status(const char *accept_field, const char *time_field, struct timer **tptr)
+{
+ const char *b, *e, *p;
+ int f, err, array[5];
+ struct ebmb_node *srv_node;
+ struct srv_st *srv;
+ int val;
+
+ /* the server field is before the status field, so let's
+ * parse them in the proper order.
+ */
+ b = field_start(accept_field, SERVER_FIELD - ACCEPT_FIELD + 1);
+ if (unlikely(!*b)) {
+ truncated_line(linenum, line);
+ return;
+ }
+
+ e = field_stop(b + 1); /* we have the server name in [b]..[e-1] */
+
+ /* the chance that a server name already exists is extremely high,
+ * so let's perform a normal lookup first.
+ */
+ srv_node = ebst_lookup_len(&timers[0], b, e - b);
+ srv = container_of(srv_node, struct srv_st, node);
+
+ if (!srv_node) {
+ /* server not yet in the tree, let's create it */
+ srv = (void *)calloc(1, sizeof(struct srv_st) + e - b + 1);
+ srv_node = &srv->node;
+ memcpy(&srv_node->key, b, e - b);
+ srv_node->key[e - b] = '\0';
+ ebst_insert(&timers[0], srv_node);
+ }
+
+ /* let's collect the connect and response times */
+ if (!time_field) {
+ time_field = field_start(e, TIME_FIELD - SERVER_FIELD);
+ if (unlikely(!*time_field)) {
+ truncated_line(linenum, line);
+ return;
+ }
+ }
+
+ e = field_stop(time_field + 1);
+ /* we have field TIME_FIELD in [time_field]..[e-1] */
+
+ p = time_field;
+ err = 0;
+ f = 0;
+ while (!SEP(*p)) {
+ array[f] = str2ic(p);
+ if (array[f] < 0) {
+ array[f] = -1;
+ err = 1;
+ }
+ if (++f == 5)
+ break;
+ SKIP_CHAR(p, '/');
+ }
+
+ if (unlikely(f < 5)) {
+ parse_err++;
+ return;
+ }
+
+ /* OK we have our timers in array[2,3] */
+ if (!err)
+ srv->nb_ok++;
+
+ if (array[2] >= 0) {
+ srv->cum_ct += array[2];
+ srv->nb_ct++;
+ }
+
+ if (array[3] >= 0) {
+ srv->cum_rt += array[3];
+ srv->nb_rt++;
+ }
+
+ /* we're interested in the 5 HTTP status classes (1xx ... 5xx), and
+ * the invalid ones which will be reported as 0.
+ */
+ b = field_start(e, STATUS_FIELD - TIME_FIELD);
+ if (unlikely(!*b)) {
+ truncated_line(linenum, line);
+ return;
+ }
+
+ val = 0;
+ if (*b >= '1' && *b <= '5')
+ val = *b - '0';
+
+ srv->st_cnt[val]++;
+}
+
+void filter_count_url(const char *accept_field, const char *time_field, struct timer **tptr)
+{
+ struct url_stat *ustat = NULL;
+ struct ebpt_node *ebpt_old;
+ const char *b, *e;
+ int f, err, array[5];
+ int val;
+
+ /* let's collect the response time */
+ if (!time_field) {
+ time_field = field_start(accept_field, TIME_FIELD - ACCEPT_FIELD + 1); // avg 115 ns per line
+ if (unlikely(!*time_field)) {
+ truncated_line(linenum, line);
+ return;
+ }
+ }
+
+ /* we have the field TIME_FIELD starting at <time_field>. We'll
+ * parse the 5 timers to detect errors, it takes avg 55 ns per line.
+ */
+ e = time_field; err = 0; f = 0;
+ while (!SEP(*e)) {
+ array[f] = str2ic(e);
+ if (array[f] < 0) {
+ array[f] = -1;
+ err = 1;
+ }
+ if (++f == 5)
+ break;
+ SKIP_CHAR(e, '/');
+ }
+ if (f < 5) {
+ parse_err++;
+ return;
+ }
+
+ /* OK we have our timers in array[3], and err is >0 if at
+ * least one -1 was seen. <e> points to the first char of
+ * the last timer. Let's prepare a new node with that.
+ */
+ if (unlikely(!ustat))
+ ustat = calloc(1, sizeof(*ustat));
+
+ ustat->nb_err = err;
+ ustat->nb_req = 1;
+
+ /* use array[4] = total time in case of error */
+ ustat->total_time = (array[3] >= 0) ? array[3] : array[4];
+ ustat->total_time_ok = (array[3] >= 0) ? array[3] : 0;
+
+ e = field_start(e, BYTES_SENT_FIELD - TIME_FIELD + 1);
+ val = str2ic(e);
+ ustat->total_bytes_sent = val;
+
+ /* the line may be truncated because of a bad request or anything like this,
+ * without a method. Also, if it does not begin with a quote, let's skip to
+ * the next field because it's a capture. Let's fall back to the "method" itself
+ * if there's nothing else.
+ */
+ e = field_start(e, METH_FIELD - BYTES_SENT_FIELD + 1);
+ while (*e != '"' && *e) {
+ /* Note: some syslog servers escape quotes ! */
+ if (*e == '\\' && e[1] == '"')
+ break;
+ e = field_start(e, 2);
+ }
+
+ if (unlikely(!*e)) {
+ truncated_line(linenum, line);
+ return;
+ }
+
+ b = field_start(e, URL_FIELD - METH_FIELD + 1); // avg 40 ns per line
+ if (!*b)
+ b = e;
+
+ /* stop at end of field or first ';' or '?', takes avg 64 ns per line */
+ e = b;
+ do {
+ if (*e == ' ' || *e == '?' || *e == ';') {
+ *(char *)e = 0;
+ break;
+ }
+ e++;
+ } while (*e);
+
+ /* now instead of copying the URL for a simple lookup, we'll link
+ * to it from the node we're trying to insert. If it returns a
+ * different value, it was already there. Otherwise we just have
+ * to dynamically realloc an entry using strdup().
+ */
+ ustat->node.url.key = (char *)b;
+ ebpt_old = ebis_insert(&timers[0], &ustat->node.url);
+
+ if (ebpt_old != &ustat->node.url) {
+ struct url_stat *ustat_old;
+ /* node was already there, let's update previous one */
+ ustat_old = container_of(ebpt_old, struct url_stat, node.url);
+ ustat_old->nb_req ++;
+ ustat_old->nb_err += ustat->nb_err;
+ ustat_old->total_time += ustat->total_time;
+ ustat_old->total_time_ok += ustat->total_time_ok;
+ ustat_old->total_bytes_sent += ustat->total_bytes_sent;
+ } else {
+ ustat->url = ustat->node.url.key = strdup(ustat->node.url.key);
+ ustat = NULL; /* node was used */
+ }
+}
+
+void filter_count_ip(const char *source_field, const char *accept_field, const char *time_field, struct timer **tptr)
+{
+ struct url_stat *ustat = NULL;
+ struct ebpt_node *ebpt_old;
+ const char *b, *e;
+ int f, err, array[5];
+ int val;
+
+ /* let's collect the response time */
+ if (!time_field) {
+ time_field = field_start(accept_field, TIME_FIELD - ACCEPT_FIELD + 1); // avg 115 ns per line
+ if (unlikely(!*time_field)) {
+ truncated_line(linenum, line);
+ return;
+ }
+ }
+
+ /* we have the field TIME_FIELD starting at <time_field>. We'll
+ * parse the 5 timers to detect errors, it takes avg 55 ns per line.
+ */
+ e = time_field; err = 0; f = 0;
+ while (!SEP(*e)) {
+ if (f == 0 || f == 4) {
+ array[f] = str2ic(e);
+ if (array[f] < 0) {
+ array[f] = -1;
+ err = 1;
+ }
+ }
+ if (++f == 5)
+ break;
+ SKIP_CHAR(e, '/');
+ }
+ if (f < 5) {
+ parse_err++;
+ return;
+ }
+
+ /* OK we have our timers in array[0], and err is >0 if at
+ * least one -1 was seen. <e> points to the first char of
+ * the last timer. Let's prepare a new node with that.
+ */
+ if (unlikely(!ustat))
+ ustat = calloc(1, sizeof(*ustat));
+
+ ustat->nb_err = err;
+ ustat->nb_req = 1;
+
+ /* use array[4] = total time in case of error */
+ ustat->total_time = (array[0] >= 0) ? array[0] : array[4];
+ ustat->total_time_ok = (array[0] >= 0) ? array[0] : 0;
+
+ e = field_start(e, BYTES_SENT_FIELD - TIME_FIELD + 1);
+ val = str2ic(e);
+ ustat->total_bytes_sent = val;
+
+ /* the source might be IPv4 or IPv6, so we always strip the port by
+ * removing the last colon.
+ */
+ b = source_field;
+ e = field_stop(b + 1);
+ while (e > b && e[-1] != ':')
+ e--;
+ *(char *)(e - 1) = '\0';
+
+ /* now instead of copying the src for a simple lookup, we'll link
+ * to it from the node we're trying to insert. If it returns a
+ * different value, it was already there. Otherwise we just have
+ * to dynamically realloc an entry using strdup(). We're using the
+ * <url> field of the node to store the source address.
+ */
+ ustat->node.url.key = (char *)b;
+ ebpt_old = ebis_insert(&timers[0], &ustat->node.url);
+
+ if (ebpt_old != &ustat->node.url) {
+ struct url_stat *ustat_old;
+ /* node was already there, let's update previous one */
+ ustat_old = container_of(ebpt_old, struct url_stat, node.url);
+ ustat_old->nb_req ++;
+ ustat_old->nb_err += ustat->nb_err;
+ ustat_old->total_time += ustat->total_time;
+ ustat_old->total_time_ok += ustat->total_time_ok;
+ ustat_old->total_bytes_sent += ustat->total_bytes_sent;
+ } else {
+ ustat->url = ustat->node.url.key = strdup(ustat->node.url.key);
+ ustat = NULL; /* node was used */
+ }
+}
+
+void filter_graphs(const char *accept_field, const char *time_field, struct timer **tptr)
+{
+ struct timer *t2;
+ const char *e, *p;
+ int f, err, array[5];
+
+ if (!time_field) {
+ time_field = field_start(accept_field, TIME_FIELD - ACCEPT_FIELD + 1);
+ if (unlikely(!*time_field)) {
+ truncated_line(linenum, line);
+ return;
+ }
+ }
+
+ e = field_stop(time_field + 1);
+ /* we have field TIME_FIELD in [time_field]..[e-1] */
+
+ p = time_field;
+ err = 0;
+ f = 0;
+ while (!SEP(*p)) {
+ array[f] = str2ic(p);
+ if (array[f] < 0) {
+ array[f] = -1;
+ err = 1;
+ }
+ if (++f == 5)
+ break;
+ SKIP_CHAR(p, '/');
+ }
+
+ if (unlikely(f < 5)) {
+ parse_err++;
+ return;
+ }
+
+ /* if we find at least one negative time, we count one error
+ * with a time equal to the total session time. This will
+ * emphasize quantum timing effects associated to known
+ * timeouts. Note that on some buggy machines, it is possible
+ * that the total time is negative, hence the reason to reset
+ * it.
+ */
+
+ if (filter & FILT_GRAPH_TIMERS) {
+ if (err) {
+ if (array[4] < 0)
+ array[4] = -1;
+ t2 = insert_timer(&timers[0], tptr, array[4]); // total time
+ t2->count++;
+ } else {
+ int v;
+
+ t2 = insert_timer(&timers[1], tptr, array[0]); t2->count++; // req
+ t2 = insert_timer(&timers[2], tptr, array[2]); t2->count++; // conn
+ t2 = insert_timer(&timers[3], tptr, array[3]); t2->count++; // resp
+
+ v = array[4] - array[0] - array[1] - array[2] - array[3]; // data time
+ if (v < 0 && !(filter & FILT_QUIET))
+ fprintf(stderr, "ERR: %s (%d %d %d %d %d => %d)\n",
+ line, array[0], array[1], array[2], array[3], array[4], v);
+ t2 = insert_timer(&timers[4], tptr, v); t2->count++;
+ lines_out++;
+ }
+ } else { /* percentile */
+ if (err) {
+ if (array[4] < 0)
+ array[4] = -1;
+ t2 = insert_value(&timers[0], tptr, array[4]); // total time
+ t2->count++;
+ } else {
+ int v;
+
+ t2 = insert_value(&timers[1], tptr, array[0]); t2->count++; // req
+ t2 = insert_value(&timers[2], tptr, array[2]); t2->count++; // conn
+ t2 = insert_value(&timers[3], tptr, array[3]); t2->count++; // resp
+
+ v = array[4] - array[0] - array[1] - array[2] - array[3]; // data time
+ if (v < 0 && !(filter & FILT_QUIET))
+ fprintf(stderr, "ERR: %s (%d %d %d %d %d => %d)\n",
+ line, array[0], array[1], array[2], array[3], array[4], v);
+ t2 = insert_value(&timers[4], tptr, v); t2->count++;
+ lines_out++;
+ }
+ }
+}
+
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+CC = gcc
+OPTIMIZE = -O3
+LDFLAGS = -s
+
+OBJS = ip6range
+
+all: $(OBJS)
+
+%: %.c
+ $(CC) $(LDFLAGS) $(OPTIMIZE) -o $@ $^
+
+clean:
+ rm -f $(OBJS) *.o *.a *~
--- /dev/null
+/*
+ * network range to IP+mask converter
+ *
+ * Copyright 2011-2012 Willy Tarreau <w@1wt.eu>
+ *
+ * This program reads lines starting by two IPv6 addresses and outputs them
+ * with the two addresses replaced by a netmask covering the range between
+ * these IPs (inclusive). When multiple ranges are needed, as many lines are
+ * emitted. The IP addresses may be delimited by spaces, tabs or commas.
+ * Quotes are stripped, and lines beginning with a sharp character ('#') are
+ * ignored. The IP addresses must be in IPv6 textual format.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <arpa/inet.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#define MAXLINE 1024
+
+static inline void in6_bswap(struct in6_addr *a)
+{
+ a->in6_u.u6_addr32[0] = ntohl(a->in6_u.u6_addr32[0]);
+ a->in6_u.u6_addr32[1] = ntohl(a->in6_u.u6_addr32[1]);
+ a->in6_u.u6_addr32[2] = ntohl(a->in6_u.u6_addr32[2]);
+ a->in6_u.u6_addr32[3] = ntohl(a->in6_u.u6_addr32[3]);
+}
+
+/* returns a string version of an IPv6 address in host order */
+static const char *get_ipv6_addr(struct in6_addr *addr)
+{
+ struct in6_addr a;
+ static char out[INET6_ADDRSTRLEN + 1];
+
+ memcpy(&a, addr, sizeof(struct in6_addr));
+ in6_bswap(&a);
+ return inet_ntop(AF_INET6, &a, out, INET6_ADDRSTRLEN + 1);
+}
+
+static const char *get_addr(struct in6_addr *addr)
+{
+ static char out[50];
+ snprintf(out, 50, "%08x:%08x:%08x:%08x",
+ addr->in6_u.u6_addr32[0],
+ addr->in6_u.u6_addr32[1],
+ addr->in6_u.u6_addr32[2],
+ addr->in6_u.u6_addr32[3]);
+ return out;
+}
+
+/* a <= b */
+static inline int a_le_b(struct in6_addr *a, struct in6_addr *b)
+{
+ if (a->in6_u.u6_addr32[0] < b->in6_u.u6_addr32[0]) return 1;
+ if (a->in6_u.u6_addr32[0] > b->in6_u.u6_addr32[0]) return 0;
+ if (a->in6_u.u6_addr32[1] < b->in6_u.u6_addr32[1]) return 1;
+ if (a->in6_u.u6_addr32[1] > b->in6_u.u6_addr32[1]) return 0;
+ if (a->in6_u.u6_addr32[2] < b->in6_u.u6_addr32[2]) return 1;
+ if (a->in6_u.u6_addr32[2] > b->in6_u.u6_addr32[2]) return 0;
+ if (a->in6_u.u6_addr32[3] < b->in6_u.u6_addr32[3]) return 1;
+ if (a->in6_u.u6_addr32[3] > b->in6_u.u6_addr32[3]) return 0;
+ return 1;
+}
+
+/* a == b */
+static inline int a_eq_b(struct in6_addr *a, struct in6_addr *b)
+{
+ if (a->in6_u.u6_addr32[0] != b->in6_u.u6_addr32[0]) return 0;
+ if (a->in6_u.u6_addr32[1] != b->in6_u.u6_addr32[1]) return 0;
+ if (a->in6_u.u6_addr32[2] != b->in6_u.u6_addr32[2]) return 0;
+ if (a->in6_u.u6_addr32[3] != b->in6_u.u6_addr32[3]) return 0;
+ return 1;
+}
+
+/* a > b */
+static inline int a_gt_b(struct in6_addr *a, struct in6_addr *b)
+{
+ if (a->in6_u.u6_addr32[0] > b->in6_u.u6_addr32[0]) return 1;
+ if (a->in6_u.u6_addr32[0] < b->in6_u.u6_addr32[0]) return 0;
+ if (a->in6_u.u6_addr32[1] > b->in6_u.u6_addr32[1]) return 1;
+ if (a->in6_u.u6_addr32[1] < b->in6_u.u6_addr32[1]) return 0;
+ if (a->in6_u.u6_addr32[2] > b->in6_u.u6_addr32[2]) return 1;
+ if (a->in6_u.u6_addr32[2] < b->in6_u.u6_addr32[2]) return 0;
+ if (a->in6_u.u6_addr32[3] > b->in6_u.u6_addr32[3]) return 1;
+ if (a->in6_u.u6_addr32[3] < b->in6_u.u6_addr32[3]) return 0;
+ return 0;
+}
+
+/* ( 1 << m ) - 1 -> r */
+static inline struct in6_addr *hmask(unsigned int b, struct in6_addr *r)
+{
+
+ if (b < 32) {
+		r->in6_u.u6_addr32[3] = (1U << b) - 1;
+ r->in6_u.u6_addr32[2] = 0;
+ r->in6_u.u6_addr32[1] = 0;
+ r->in6_u.u6_addr32[0] = 0;
+ }
+ else if (b < 64) {
+ r->in6_u.u6_addr32[3] = 0xffffffff;
+		r->in6_u.u6_addr32[2] = (1U << (b - 32)) - 1;
+ r->in6_u.u6_addr32[1] = 0;
+ r->in6_u.u6_addr32[0] = 0;
+ }
+ else if (b < 96) {
+ r->in6_u.u6_addr32[3] = 0xffffffff;
+ r->in6_u.u6_addr32[2] = 0xffffffff;
+		r->in6_u.u6_addr32[1] = (1U << (b - 64)) - 1;
+ r->in6_u.u6_addr32[0] = 0;
+ }
+ else if (b < 128) {
+ r->in6_u.u6_addr32[3] = 0xffffffff;
+ r->in6_u.u6_addr32[2] = 0xffffffff;
+ r->in6_u.u6_addr32[1] = 0xffffffff;
+		r->in6_u.u6_addr32[0] = (1U << (b - 96)) - 1;
+ }
+ else {
+ r->in6_u.u6_addr32[3] = 0xffffffff;
+ r->in6_u.u6_addr32[2] = 0xffffffff;
+ r->in6_u.u6_addr32[1] = 0xffffffff;
+ r->in6_u.u6_addr32[0] = 0xffffffff;
+ }
+ return r;
+}
+
+/* 1 << b -> r */
+static inline struct in6_addr *one_ls_b(unsigned int b, struct in6_addr *r)
+{
+ if (b < 32) {
+		r->in6_u.u6_addr32[3] = 1U << b;
+ r->in6_u.u6_addr32[2] = 0;
+ r->in6_u.u6_addr32[1] = 0;
+ r->in6_u.u6_addr32[0] = 0;
+ }
+ else if (b < 64) {
+ r->in6_u.u6_addr32[3] = 0;
+		r->in6_u.u6_addr32[2] = 1U << (b - 32);
+ r->in6_u.u6_addr32[1] = 0;
+ r->in6_u.u6_addr32[0] = 0;
+ }
+ else if (b < 96) {
+ r->in6_u.u6_addr32[3] = 0;
+ r->in6_u.u6_addr32[2] = 0;
+		r->in6_u.u6_addr32[1] = 1U << (b - 64);
+ r->in6_u.u6_addr32[0] = 0;
+ }
+ else if (b < 128) {
+ r->in6_u.u6_addr32[3] = 0;
+ r->in6_u.u6_addr32[2] = 0;
+ r->in6_u.u6_addr32[1] = 0;
+		r->in6_u.u6_addr32[0] = 1U << (b - 96);
+ }
+ else {
+ r->in6_u.u6_addr32[3] = 0;
+ r->in6_u.u6_addr32[2] = 0;
+ r->in6_u.u6_addr32[1] = 0;
+ r->in6_u.u6_addr32[0] = 0;
+ }
+ return r;
+}
+
+/* a + b -> r */
+static inline struct in6_addr *a_plus_b(struct in6_addr *a, struct in6_addr *b, struct in6_addr *r)
+{
+ unsigned long long int c = 0;
+ int i;
+
+ for (i=3; i>=0; i--) {
+ c = (unsigned long long int)a->in6_u.u6_addr32[i] +
+ (unsigned long long int)b->in6_u.u6_addr32[i] + c;
+ r->in6_u.u6_addr32[i] = c;
+ c >>= 32;
+ }
+
+ return r;
+}
+
+/* a - b -> r */
+static inline struct in6_addr *a_minus_b(struct in6_addr *a, struct in6_addr *b, struct in6_addr *r)
+{
+ signed long long int c = 0;
+ signed long long int d;
+ int i;
+
+ /* Check sign. Return 0xff..ff (-1) if the result is less than 0. */
+ if (a_gt_b(b, a)) {
+ r->in6_u.u6_addr32[3] = 0xffffffff;
+ r->in6_u.u6_addr32[2] = 0xffffffff;
+ r->in6_u.u6_addr32[1] = 0xffffffff;
+ r->in6_u.u6_addr32[0] = 0xffffffff;
+ return r;
+ }
+
+	for (i=3; i>=0; i--) {
+		d = (unsigned long long int)b->in6_u.u6_addr32[i] + c;
+		c = (unsigned long long int)a->in6_u.u6_addr32[i];
+		if (c < d) {
+			/* borrow one unit from the next (more significant) word */
+			c += 0x100000000ULL;
+			r->in6_u.u6_addr32[i] = c - d;
+			c = 1;
+		} else {
+			r->in6_u.u6_addr32[i] = c - d;
+			c = 0;
+		}
+	}
+
+ return r;
+}
+
+/* a & b -> r */
+static inline struct in6_addr *a_and_b(struct in6_addr *a, struct in6_addr *b, struct in6_addr *r)
+{
+ r->in6_u.u6_addr32[0] = a->in6_u.u6_addr32[0] & b->in6_u.u6_addr32[0];
+ r->in6_u.u6_addr32[1] = a->in6_u.u6_addr32[1] & b->in6_u.u6_addr32[1];
+ r->in6_u.u6_addr32[2] = a->in6_u.u6_addr32[2] & b->in6_u.u6_addr32[2];
+ r->in6_u.u6_addr32[3] = a->in6_u.u6_addr32[3] & b->in6_u.u6_addr32[3];
+ return r;
+}
+
+/* a != 0 */
+int is_set(struct in6_addr *a)
+{
+ return a->in6_u.u6_addr32[0] ||
+ a->in6_u.u6_addr32[1] ||
+ a->in6_u.u6_addr32[2] ||
+ a->in6_u.u6_addr32[3];
+}
+
+/* 1 */
+static struct in6_addr one = { .in6_u.u6_addr32 = {0, 0, 0, 1} };
+
+/* print all networks present between address <low> and address <high> in
+ * cidr format, followed by <eol>.
+ */
+static void convert_range(struct in6_addr *low, struct in6_addr *high, const char *eol, const char *pfx)
+{
+ int bit;
+ struct in6_addr r0;
+ struct in6_addr r1;
+
+ if (a_eq_b(low, high)) {
+ /* single value */
+ printf("%s%s%s%s\n", pfx?pfx:"", pfx?" ":"", get_ipv6_addr(low), eol);
+ return;
+ }
+ else if (a_gt_b(low, high)) {
+ struct in6_addr *swap = low;
+ low = high;
+ high = swap;
+ }
+
+ if (a_eq_b(low, a_plus_b(high, &one, &r0))) {
+ /* full range */
+ printf("%s%s::/0%s\n", pfx?pfx:"", pfx?" ":"", eol);
+ return;
+ }
+ //printf("low=%08x high=%08x\n", low, high);
+
+ bit = 0;
+ while (bit < 128 && a_le_b(a_plus_b(low, hmask(bit, &r0), &r0), high)) {
+
+ /* enlarge mask */
+ if (is_set(a_and_b(low, one_ls_b(bit, &r0), &r0))) {
+ /* can't aggregate anymore, dump and retry from the same bit */
+ printf("%s%s%s/%d%s\n", pfx?pfx:"", pfx?" ":"", get_ipv6_addr(low), 128-bit, eol);
+ a_plus_b(low, one_ls_b(bit, &r0), low);
+ }
+ else {
+ /* try to enlarge the mask as much as possible first */
+ bit++;
+ //printf(" ++bit=%d\n", bit);
+ }
+ }
+ //printf("stopped 1 at low=%08x, bit=%d\n", low, bit);
+
+ bit = 127;
+ while (bit >= 0 && is_set(a_plus_b(a_minus_b(high, low, &r0), &one, &r0))) {
+
+ /* shrink mask */
+ if (is_set(a_and_b(a_plus_b(a_minus_b(high, low, &r0), &one, &r0), one_ls_b(bit, &r1), &r1))) {
+ /* large bit accepted, dump and go on from the same bit */
+ //printf("max: %08x/%d\n", low, 32-bit);
+ printf("%s%s%s/%d%s\n", pfx?pfx:"", pfx?" ":"", get_ipv6_addr(low), 128-bit, eol);
+ a_plus_b(low, one_ls_b(bit, &r0), low);
+ }
+ else {
+ bit--;
+ //printf(" --bit=%d, low=%08x\n", bit, low);
+ }
+ }
+ //printf("stopped at low=%08x\n", low);
+}
+
+static void usage(const char *argv0)
+{
+ fprintf(stderr,
+ "Usage: %s [<addr> ...] < iplist.csv\n"
+ "\n"
+		"This program reads lines starting by two IPv6 addresses and outputs them\n"
+		"with the two addresses replaced by a netmask covering the range between\n"
+		"these IPs (inclusive). When multiple ranges are needed, as many lines are\n"
+		"emitted. The IP addresses may be delimited by spaces, tabs or commas.\n"
+		"Quotes are stripped, and lines beginning with a sharp character ('#') are\n"
+		"ignored. The IP addresses must be in IPv6 textual format.\n"
+ "\n"
+ "For each optional <addr> specified, only the network it belongs to is returned,\n"
+ "prefixed with the <addr> value.\n"
+ "\n", argv0);
+}
+
+int main(int argc, char **argv)
+{
+ char line[MAXLINE];
+ int l, lnum;
+	char *lb, *le, *hb, *he;
+ struct in6_addr sa, da, ta;
+
+ if (argc > 1 && *argv[1] == '-') {
+ usage(argv[0]);
+ exit(1);
+ }
+
+ lnum = 0;
+ while (fgets(line, sizeof(line), stdin) != NULL) {
+ l = strlen(line);
+ if (l && line[l - 1] == '\n')
+ line[--l] = '\0';
+
+ lnum++;
+ /* look for the first field which must be the low address of a range,
+		 * in IPv6 format. Spaces and commas are considered as
+		 * delimiters, quotes are removed.
+ */
+ for (lb = line; *lb == ' ' || *lb == '\t' || *lb == ',' || *lb == '"'; lb++);
+ if (!*lb || *lb == '#')
+ continue;
+ for (le = lb + 1; *le != ' ' && *le != '\t' && *le != ',' && *le != '"' && *le; le++);
+ if (!*le)
+ continue;
+ /* we have the low address between lb(included) and le(excluded) */
+ *(le++) = 0;
+
+ for (hb = le; *hb == ' ' || *hb == '\t' || *hb == ',' || *hb == '"'; hb++);
+ if (!*hb || *hb == '#')
+ continue;
+ for (he = hb + 1; *he != ' ' && *he != '\t' && *he != ',' && *he != '"' && *he; he++);
+ if (!*he)
+ continue;
+ /* we have the high address between hb(included) and he(excluded) */
+ *(he++) = 0;
+
+ /* we want to remove a possible ending quote and a possible comma,
+ * not more.
+ */
+ while (*he == '"')
+ *(he++) = ' ';
+ while (*he == ',' || *he == ' ' || *he == '\t')
+ *(he++) = ' ';
+
+ /* if the trailing string is not empty, prefix it with a space */
+ if (*(he-1) == ' ')
+ he--;
+
+ if (inet_pton(AF_INET6, lb, &sa) <= 0) {
+ fprintf(stderr, "Failed to parse source address <%s> at line %d, skipping line\n", lb, lnum);
+ continue;
+ }
+
+ if (inet_pton(AF_INET6, hb, &da) <= 0) {
+ fprintf(stderr, "Failed to parse destination address <%s> at line %d, skipping line\n", hb, lnum);
+ continue;
+ }
+
+ in6_bswap(&sa);
+ in6_bswap(&da);
+
+ if (argc > 1) {
+ for (l = 1; l < argc; l++) {
+			if (inet_pton(AF_INET6, argv[l], &ta) <= 0)
+ continue;
+ in6_bswap(&ta);
+ if ((a_le_b(&sa, &ta) && a_le_b(&ta, &da)) || (a_le_b(&da, &ta) && a_le_b(&ta, &sa)))
+ convert_range(&sa, &da, he, argv[l]);
+ }
+ }
+ else {
+ convert_range(&sa, &da, he, NULL);
+ }
+ }
+}
--- /dev/null
+CC = gcc
+OPTIMIZE = -O3
+LDFLAGS = -s
+
+OBJS = iprange
+
+all: $(OBJS)
+
+%: %.c
+ $(CC) $(LDFLAGS) $(OPTIMIZE) -o $@ $^
+
+clean:
+ rm -f $(OBJS) *.o *.a *~
--- /dev/null
+/*
+ * network range to IP+mask converter
+ *
+ * Copyright 2011-2012 Willy Tarreau <w@1wt.eu>
+ *
+ * This program reads lines starting by two IP addresses and outputs them with
+ * the two IP addresses replaced by a netmask covering the range between these
+ * IPs (inclusive). When multiple ranges are needed, as many lines are emitted.
+ * The IP addresses may be delimited by spaces, tabs or commas. Quotes are
+ * stripped, and lines beginning with a sharp character ('#') are ignored. The
+ * IP addresses may be either in the dotted format or represented as a 32-bit
+ * integer value in network byte order.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <arpa/inet.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#define MAXLINE 1024
+
+/* returns a string version of an IPv4 address in host order */
+static const char *get_ipv4_addr(unsigned int addr)
+{
+ struct in_addr a;
+
+ a.s_addr = ntohl(addr);
+ return inet_ntoa(a);
+}
+
+/* print all networks present between address <low> and address <high> in
+ * cidr format, followed by <eol>.
+ */
+static void convert_range(unsigned int low, unsigned int high, const char *eol, const char *pfx)
+{
+ int bit;
+
+ if (low == high) {
+ /* single value */
+ printf("%s%s%s%s\n", pfx?pfx:"", pfx?" ":"", get_ipv4_addr(low), eol);
+ return;
+ }
+ else if (low > high) {
+		unsigned int swap = low;
+ low = high;
+ high = swap;
+ }
+
+ if (low == high + 1) {
+ /* full range */
+ printf("%s%s0.0.0.0/0%s\n", pfx?pfx:"", pfx?" ":"", eol);
+ return;
+ }
+ //printf("low=%08x high=%08x\n", low, high);
+
+ bit = 0;
+	while (bit < 32 && low + (1U << bit) - 1 <= high) {
+		/* enlarge mask */
+		if (low & (1U << bit)) {
+			/* can't aggregate anymore, dump and retry from the same bit */
+			printf("%s%s%s/%d%s\n", pfx?pfx:"", pfx?" ":"", get_ipv4_addr(low), 32-bit, eol);
+			low += (1U << bit);
+		}
+ else {
+ /* try to enlarge the mask as much as possible first */
+ bit++;
+ //printf(" ++bit=%d\n", bit);
+ }
+ }
+ //printf("stopped 1 at low=%08x, bit=%d\n", low, bit);
+
+ bit = 31;
+ while (bit >= 0 && high - low + 1 != 0) {
+ /* shrink mask */
+		if ((high - low + 1) & (1U << bit)) {
+ /* large bit accepted, dump and go on from the same bit */
+ //printf("max: %08x/%d\n", low, 32-bit);
+ printf("%s%s%s/%d%s\n", pfx?pfx:"", pfx?" ":"", get_ipv4_addr(low), 32-bit, eol);
+			low += (1U << bit);
+ }
+ else {
+ bit--;
+ //printf(" --bit=%d, low=%08x\n", bit, low);
+ }
+ }
+ //printf("stopped at low=%08x\n", low);
+}
+
+static void usage(const char *argv0)
+{
+ fprintf(stderr,
+ "Usage: %s [<addr> ...] < iplist.csv\n"
+ "\n"
+ "This program reads lines starting by two IP addresses and outputs them with\n"
+ "the two IP addresses replaced by a netmask covering the range between these\n"
+ "IPs (inclusive). When multiple ranges are needed, as many lines are emitted.\n"
+ "The IP addresses may be delimited by spaces, tabs or commas. Quotes are\n"
+ "stripped, and lines beginning with a sharp character ('#') are ignored. The\n"
+ "IP addresses may be either in the dotted format or represented as a 32-bit\n"
+ "integer value in network byte order.\n"
+ "\n"
+ "For each optional <addr> specified, only the network it belongs to is returned,\n"
+ "prefixed with the <addr> value.\n"
+ "\n", argv0);
+}
+
+int main(int argc, char **argv)
+{
+ char line[MAXLINE];
+ int l, lnum;
+ char *lb, *le, *hb, *he, *err;
+ struct in_addr src_addr, dst_addr;
+ unsigned int sa, da, ta;
+
+ if (argc > 1 && *argv[1] == '-') {
+ usage(argv[0]);
+ exit(1);
+ }
+
+ lnum = 0;
+ while (fgets(line, sizeof(line), stdin) != NULL) {
+ l = strlen(line);
+ if (l && line[l - 1] == '\n')
+ line[--l] = '\0';
+
+ lnum++;
+ /* look for the first field which must be the low address of a range,
+ * in dotted IPv4 format or as an integer. spaces and commas are
+ * considered as delimiters, quotes are removed.
+ */
+ for (lb = line; *lb == ' ' || *lb == '\t' || *lb == ',' || *lb == '"'; lb++);
+ if (!*lb || *lb == '#')
+ continue;
+ for (le = lb + 1; *le != ' ' && *le != '\t' && *le != ',' && *le != '"' && *le; le++);
+ if (!*le)
+ continue;
+ /* we have the low address between lb(included) and le(excluded) */
+ *(le++) = 0;
+
+ for (hb = le; *hb == ' ' || *hb == '\t' || *hb == ',' || *hb == '"'; hb++);
+ if (!*hb || *hb == '#')
+ continue;
+ for (he = hb + 1; *he != ' ' && *he != '\t' && *he != ',' && *he != '"' && *he; he++);
+ if (!*he)
+ continue;
+ /* we have the high address between hb(included) and he(excluded) */
+ *(he++) = 0;
+
+ /* we want to remove a possible ending quote and a possible comma,
+ * not more.
+ */
+ while (*he == '"')
+ *(he++) = ' ';
+ while (*he == ',' || *he == ' ' || *he == '\t')
+ *(he++) = ' ';
+
+ /* if the trailing string is not empty, prefix it with a space */
+ if (*(he-1) == ' ')
+ he--;
+
+ if (inet_pton(AF_INET, lb, &src_addr) <= 0) {
+ /* parsing failed, retry with a plain numeric IP */
+ src_addr.s_addr = ntohl(strtoul(lb, &err, 10));
+ if (err && *err) {
+ fprintf(stderr, "Failed to parse source address <%s> at line %d, skipping line\n", lb, lnum);
+ continue;
+ }
+ }
+
+ if (inet_pton(AF_INET, hb, &dst_addr) <= 0) {
+ /* parsing failed, retry with a plain numeric IP */
+ dst_addr.s_addr = ntohl(strtoul(hb, &err, 10));
+ if (err && *err) {
+ fprintf(stderr, "Failed to parse destination address <%s> at line %d, skipping line\n", hb, lnum);
+ continue;
+ }
+ }
+
+ sa = htonl(src_addr.s_addr);
+ da = htonl(dst_addr.s_addr);
+ if (argc > 1) {
+ for (l = 1; l < argc; l++) {
+ if (inet_pton(AF_INET, argv[l], &dst_addr) <= 0)
+ continue;
+ ta = htonl(dst_addr.s_addr);
+ if ((sa <= ta && ta <= da) || (da <= ta && ta <= sa))
+ convert_range(sa, da, he, argv[l]);
+ }
+ }
+ else {
+ convert_range(sa, da, he, NULL);
+ }
+ }
+}
--- /dev/null
+SNMP support for HAProxy
+Copyright 2007-2008 Krzysztof Piotr Oledzki <ole@ans.pl>
+
+Root OID: 1.3.6.1.4.1.29385.106
+
+Files:
+ - README: this file
+ - haproxy.pl: Net-SNMP embedded perl module
+ - haproxy_backend.xml: Cacti snmp-query definition for backends
+ - haproxy_frontend.xml: Cacti snmp-query definition for frontends
+
+Install:
+ cp haproxy.pl /etc/snmp/
+ grep -q "disablePerl false" /etc/snmp/snmpd.conf || echo "disablePerl false" >> /etc/snmp/snmpd.conf
+ echo "perl do '/etc/snmp/haproxy.pl';" >> /etc/snmp/snmpd.conf
+
+Supported commands:
+ - GET (snmpget, snmpbulkget): quite fast.
+ - GETNEXT (snmpwalk, snmpbulkwalk): slower, since each step requires
+   transferring and parsing a lot of data. Always prefer "get" over
+   "walk" when possible.
+
+Supported OIDs:
+ - 1.3.6.1.4.1.29385.106.1: get a variable from stats
+ Usage: 1.3.6.1.4.1.29385.106.1.$type.$field.$iid.$sid
+
+ - type is one of:
+ 0) frontend
+ 1) backend
+ 2) server
+
+ - field is one of:
+ 0..32) CSV format variable
+ 10001) index
+ 10002) unique name
+
+ - iid is a proxy id
+
+ - sid is a service id: 0 for frontends and backends, >= 1 for servers
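The stats OID can be assembled mechanically from these four components; as a sketch (the host 192.168.0.1 and community "public" below are placeholders, as in the examples further down):

```shell
# Build the stats OID for: server (type=2), weight (field=18),
# proxy id 4300, server id 1001.
BASE="1.3.6.1.4.1.29385.106.1"
type=2 field=18 iid=4300 sid=1001
OID="$BASE.$type.$field.$iid.$sid"
echo "$OID"
# then query it: snmpget -c public -v2c 192.168.0.1 "$OID"
```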
+
+ - 1.3.6.1.4.1.29385.106.2: get a variable from info
+ Usage: 1.3.6.1.4.1.29385.106.2.$req.$varnr
+
+ - req is one of:
+ 0) get variable name
+ 1) get variable value
+
+Examples:
+
+- Get a list of frontends (type: 0) with status (field: 17):
+$ snmpbulkwalk -c public -v2c 192.168.0.1 1.3.6.1.4.1.29385.106.1.0.17
+SNMPv2-SMI::enterprises.29385.106.1.0.17.1.0 = STRING: "OPEN"
+SNMPv2-SMI::enterprises.29385.106.1.0.17.47.0 = STRING: "OPEN"
+
+- Get a list of backends (type: 1) with index (field: 10001):
+$ snmpbulkwalk -c public -v2c 192.168.0.1 1.3.6.1.4.1.29385.106.1.1.10001
+SNMPv2-SMI::enterprises.29385.106.1.1.10001.1.0 = STRING: "1.0"
+SNMPv2-SMI::enterprises.29385.106.1.1.10001.1100.0 = STRING: "1100.0"
+SNMPv2-SMI::enterprises.29385.106.1.1.10001.1101.0 = STRING: "1101.0"
+SNMPv2-SMI::enterprises.29385.106.1.1.10001.1200.0 = STRING: "1200.0"
+SNMPv2-SMI::enterprises.29385.106.1.1.10001.1201.0 = STRING: "1201.0"
+SNMPv2-SMI::enterprises.29385.106.1.1.10001.1300.0 = STRING: "1300.0"
+SNMPv2-SMI::enterprises.29385.106.1.1.10001.1400.0 = STRING: "1400.0"
+SNMPv2-SMI::enterprises.29385.106.1.1.10001.1401.0 = STRING: "1401.0"
+SNMPv2-SMI::enterprises.29385.106.1.1.10001.1500.0 = STRING: "1500.0"
+(...)
+
+- Get a list of servers (type: 2) with unique name (field: 10002):
+$ snmpbulkwalk -c public -v2c 192.168.0.1 1.3.6.1.4.1.29385.106.1.2.10002
+SNMPv2-SMI::enterprises.29385.106.1.2.10002.1100.1001 = STRING: "backend1/s2"
+SNMPv2-SMI::enterprises.29385.106.1.2.10002.1100.1002 = STRING: "backend1/s5"
+SNMPv2-SMI::enterprises.29385.106.1.2.10002.1100.1003 = STRING: "backend1/s6"
+SNMPv2-SMI::enterprises.29385.106.1.2.10002.1100.1012 = STRING: "backend1/s7"
+SNMPv2-SMI::enterprises.29385.106.1.2.10002.1101.1001 = STRING: "backend2/s9"
+SNMPv2-SMI::enterprises.29385.106.1.2.10002.1101.1002 = STRING: "backend2/s10"
+SNMPv2-SMI::enterprises.29385.106.1.2.10002.1101.1003 = STRING: "backend2/s11"
+SNMPv2-SMI::enterprises.29385.106.1.2.10002.1101.1012 = STRING: "backend2/s12"
+SNMPv2-SMI::enterprises.29385.106.1.2.10002.1200.1001 = STRING: "backend3/s8"
+(...)
+
+- Get a list of servers (type: 2) with weight (field: 18) in proxy 4300:
+$ snmpbulkwalk -c public -v2c 192.168.0.1 1.3.6.1.4.1.29385.106.1.2.18.4300
+SNMPv2-SMI::enterprises.29385.106.1.2.18.4300.1001 = STRING: "40"
+SNMPv2-SMI::enterprises.29385.106.1.2.18.4300.1002 = STRING: "25"
+SNMPv2-SMI::enterprises.29385.106.1.2.18.4300.1003 = STRING: "40"
+SNMPv2-SMI::enterprises.29385.106.1.2.18.4300.1012 = STRING: "80"
+
+- Get total sessions count (field: 7) in frontend (type: 0), iid.sid: 47.0 (proxy #47):
+$ snmpget -c public -v2c 192.168.0.1 enterprises.29385.106.1.0.7.47.0
+SNMPv2-SMI::enterprises.29385.106.1.0.7.47.0 = STRING: "1014019"
+
+- Get a list of available variables (req: 0):
+$ snmpbulkwalk -c public -v2c 192.168.0.1 1.3.6.1.4.1.29385.106.2.0
+SNMPv2-SMI::enterprises.29385.106.2.0.0 = STRING: "Name"
+SNMPv2-SMI::enterprises.29385.106.2.0.1 = STRING: "Version"
+SNMPv2-SMI::enterprises.29385.106.2.0.2 = STRING: "Release_date"
+SNMPv2-SMI::enterprises.29385.106.2.0.3 = STRING: "Nbproc"
+SNMPv2-SMI::enterprises.29385.106.2.0.4 = STRING: "Process_num"
+SNMPv2-SMI::enterprises.29385.106.2.0.5 = STRING: "Pid"
+SNMPv2-SMI::enterprises.29385.106.2.0.6 = STRING: "Uptime"
+SNMPv2-SMI::enterprises.29385.106.2.0.7 = STRING: "Uptime_sec"
+SNMPv2-SMI::enterprises.29385.106.2.0.8 = STRING: "Memmax_MB"
+SNMPv2-SMI::enterprises.29385.106.2.0.9 = STRING: "Ulimit-n"
+SNMPv2-SMI::enterprises.29385.106.2.0.10 = STRING: "Maxsock"
+SNMPv2-SMI::enterprises.29385.106.2.0.11 = STRING: "Maxconn"
+SNMPv2-SMI::enterprises.29385.106.2.0.12 = STRING: "CurrConns"
+
+- Get a variable (req: 1), varnr: 7 (Uptime_sec):
+$ snmpget -c public -v2c 192.168.0.1 1.3.6.1.4.1.29385.106.2.1.7
+SNMPv2-SMI::enterprises.29385.106.2.1.7 = STRING: "18761"
+
--- /dev/null
+<cacti>
+ <hash_040013d1dd43e3e5cee941860ea277826c4fe2>
+ <name>HaProxy Backends</name>
+ <description></description>
+ <xml_path><path_cacti>/resource/snmp_queries/haproxy_backend.xml</xml_path>
+ <data_input_id>hash_030013bf566c869ac6443b0c75d1c32b5a350e</data_input_id>
+ <graphs>
+ <hash_1100134d2954fa52f51ed186916f2cf624a8b9>
+ <name>HAProxy Backend Sessions</name>
+ <graph_template_id>hash_000013cdbf9accfcd57d9e0a7c97896313ddee</graph_template_id>
+ <rrd>
+ <item_000>
+ <snmp_field_name>beSTot</snmp_field_name>
+ <data_template_id>hash_010013fa4d4fff334b60e9064e89082173fe34</data_template_id>
+ <data_template_rrd_id>hash_080013230e04055a4228154123e74c6586d435</data_template_rrd_id>
+ </item_000>
+ <item_001>
+ <snmp_field_name>beEResp</snmp_field_name>
+ <data_template_id>hash_010013fa4d4fff334b60e9064e89082173fe34</data_template_id>
+ <data_template_rrd_id>hash_080013088549c8d7e8cdc80f19bae4d78dc296</data_template_rrd_id>
+ </item_001>
+ </rrd>
+ <sv_graph>
+ <hash_12001368ff8a0bfc447cb94d02e0d17cc3e252>
+ <field_name>ResponseErrors</field_name>
+ <sequence>1</sequence>
+ <text>ResponseErrors</text>
+ </hash_12001368ff8a0bfc447cb94d02e0d17cc3e252>
+ <hash_120013c2e81996ac5a70f67fa4a07e95eea035>
+ <field_name>TotalSessions</field_name>
+ <sequence>1</sequence>
+ <text>TotalSessions</text>
+ </hash_120013c2e81996ac5a70f67fa4a07e95eea035>
+ </sv_graph>
+ <sv_data_source>
+ <hash_130013169b7ea71d2aa3a8abaece19de7feeff>
+ <field_name>ResponseErrors</field_name>
+ <data_template_id>hash_010013fa4d4fff334b60e9064e89082173fe34</data_template_id>
+ <sequence>1</sequence>
+ <text>ResponseErrors</text>
+ </hash_130013169b7ea71d2aa3a8abaece19de7feeff>
+ <hash_130013a61ea1bb051f2162ba635c815324678d>
+ <field_name>TotalSessions</field_name>
+ <data_template_id>hash_010013fa4d4fff334b60e9064e89082173fe34</data_template_id>
+ <sequence>1</sequence>
+ <text>TotalSessions</text>
+ </hash_130013a61ea1bb051f2162ba635c815324678d>
+ </sv_data_source>
+ </hash_1100134d2954fa52f51ed186916f2cf624a8b9>
+ <hash_110013abc35ade0aae030d90f817dfd91486f4>
+ <name>HAProxy Backend Traffic</name>
+ <graph_template_id>hash_000013b6d238ff2532fcc19ab498043c7c65c2</graph_template_id>
+ <rrd>
+ <item_000>
+ <snmp_field_name>beBOut</snmp_field_name>
+ <data_template_id>hash_010013a63ddba34026d2c07d73c0ef2ae64b54</data_template_id>
+ <data_template_rrd_id>hash_0800136c0e4debeb9b084231d858faabd82f8f</data_template_rrd_id>
+ </item_000>
+ <item_001>
+ <snmp_field_name>beBIn</snmp_field_name>
+ <data_template_id>hash_010013a63ddba34026d2c07d73c0ef2ae64b54</data_template_id>
+ <data_template_rrd_id>hash_0800132f5283f17a7cde63137189d4d3ea7e4e</data_template_rrd_id>
+ </item_001>
+ </rrd>
+ <sv_graph>
+ <hash_1200133ba4a6c8aacf161f3e2411afd7053b8d>
+ <field_name>BytesIn</field_name>
+ <sequence>1</sequence>
+ <text>BytesIn</text>
+ </hash_1200133ba4a6c8aacf161f3e2411afd7053b8d>
+ <hash_1200130f8f674b52f6ea2e09608b505abfb3a1>
+ <field_name>BytesOut</field_name>
+ <sequence>1</sequence>
+ <text>BytesOut</text>
+ </hash_1200130f8f674b52f6ea2e09608b505abfb3a1>
+ </sv_graph>
+ <sv_data_source>
+ <hash_130013d9fb3064081d77e553c5ce732f15c909>
+ <field_name>BytesIn</field_name>
+ <data_template_id>hash_010013fa4d4fff334b60e9064e89082173fe34</data_template_id>
+ <sequence>1</sequence>
+ <text>BytesIn</text>
+ </hash_130013d9fb3064081d77e553c5ce732f15c909>
+ <hash_1300134fc96e4392a7a86d05fda31c2d5d334c>
+ <field_name>BytesOut</field_name>
+ <data_template_id>hash_010013fa4d4fff334b60e9064e89082173fe34</data_template_id>
+ <sequence>1</sequence>
+ <text>BytesOut</text>
+ </hash_1300134fc96e4392a7a86d05fda31c2d5d334c>
+ <hash_130013a7aad3557880ac197539a1d658f5d5da>
+ <field_name>BytesIn</field_name>
+ <data_template_id>hash_010013a63ddba34026d2c07d73c0ef2ae64b54</data_template_id>
+ <sequence>1</sequence>
+ <text>BytesIn</text>
+ </hash_130013a7aad3557880ac197539a1d658f5d5da>
+ <hash_130013acb469b673f6adbaa21ad5c634c3683f>
+ <field_name>BytesOut</field_name>
+ <data_template_id>hash_010013a63ddba34026d2c07d73c0ef2ae64b54</data_template_id>
+ <sequence>1</sequence>
+ <text>BytesOut</text>
+ </hash_130013acb469b673f6adbaa21ad5c634c3683f>
+ </sv_data_source>
+ </hash_110013abc35ade0aae030d90f817dfd91486f4>
+ </graphs>
+ </hash_040013d1dd43e3e5cee941860ea277826c4fe2>
+ <hash_030013bf566c869ac6443b0c75d1c32b5a350e>
+ <name>Get SNMP Data (Indexed)</name>
+ <type_id>3</type_id>
+ <input_string></input_string>
+ <fields>
+ <hash_070013617cdc8a230615e59f06f361ef6e7728>
+ <name>SNMP IP Address</name>
+ <update_rra></update_rra>
+ <regexp_match></regexp_match>
+ <allow_nulls></allow_nulls>
+ <type_code>hostname</type_code>
+ <input_output>in</input_output>
+ <data_name>management_ip</data_name>
+ </hash_070013617cdc8a230615e59f06f361ef6e7728>
+ <hash_070013acb449d1451e8a2a655c2c99d31142c7>
+ <name>SNMP Community</name>
+ <update_rra></update_rra>
+ <regexp_match></regexp_match>
+ <allow_nulls></allow_nulls>
+ <type_code>snmp_community</type_code>
+ <input_output>in</input_output>
+ <data_name>snmp_community</data_name>
+ </hash_070013acb449d1451e8a2a655c2c99d31142c7>
+ <hash_070013f4facc5e2ca7ebee621f09bc6d9fc792>
+ <name>SNMP Username (v3)</name>
+ <update_rra></update_rra>
+ <regexp_match></regexp_match>
+ <allow_nulls>on</allow_nulls>
+ <type_code>snmp_username</type_code>
+ <input_output>in</input_output>
+ <data_name>snmp_username</data_name>
+ </hash_070013f4facc5e2ca7ebee621f09bc6d9fc792>
+ <hash_0700131cc1493a6781af2c478fa4de971531cf>
+ <name>SNMP Password (v3)</name>
+ <update_rra></update_rra>
+ <regexp_match></regexp_match>
+ <allow_nulls>on</allow_nulls>
+ <type_code>snmp_password</type_code>
+ <input_output>in</input_output>
+ <data_name>snmp_password</data_name>
+ </hash_0700131cc1493a6781af2c478fa4de971531cf>
+ <hash_070013b5c23f246559df38662c255f4aa21d6b>
+ <name>SNMP Version (1, 2, or 3)</name>
+ <update_rra></update_rra>
+ <regexp_match></regexp_match>
+ <allow_nulls></allow_nulls>
+ <type_code>snmp_version</type_code>
+ <input_output>in</input_output>
+ <data_name>snmp_version</data_name>
+ </hash_070013b5c23f246559df38662c255f4aa21d6b>
+ <hash_0700136027a919c7c7731fbe095b6f53ab127b>
+ <name>Index Type</name>
+ <update_rra></update_rra>
+ <regexp_match></regexp_match>
+ <allow_nulls></allow_nulls>
+ <type_code>index_type</type_code>
+ <input_output>in</input_output>
+ <data_name>index_type</data_name>
+ </hash_0700136027a919c7c7731fbe095b6f53ab127b>
+ <hash_070013cbbe5c1ddfb264a6e5d509ce1c78c95f>
+ <name>Index Value</name>
+ <update_rra></update_rra>
+ <regexp_match></regexp_match>
+ <allow_nulls></allow_nulls>
+ <type_code>index_value</type_code>
+ <input_output>in</input_output>
+ <data_name>index_value</data_name>
+ </hash_070013cbbe5c1ddfb264a6e5d509ce1c78c95f>
+ <hash_070013e6deda7be0f391399c5130e7c4a48b28>
+ <name>Output Type ID</name>
+ <update_rra></update_rra>
+ <regexp_match></regexp_match>
+ <allow_nulls></allow_nulls>
+ <type_code>output_type</type_code>
+ <input_output>in</input_output>
+ <data_name>output_type</data_name>
+ </hash_070013e6deda7be0f391399c5130e7c4a48b28>
+ <hash_070013c1f36ee60c3dc98945556d57f26e475b>
+ <name>SNMP Port</name>
+ <update_rra></update_rra>
+ <regexp_match></regexp_match>
+ <allow_nulls></allow_nulls>
+ <type_code>snmp_port</type_code>
+ <input_output>in</input_output>
+ <data_name>snmp_port</data_name>
+ </hash_070013c1f36ee60c3dc98945556d57f26e475b>
+ </fields>
+ </hash_030013bf566c869ac6443b0c75d1c32b5a350e>
+ <hash_000013cdbf9accfcd57d9e0a7c97896313ddee>
+ <name>HAProxy Backend Sessions</name>
+ <graph>
+ <t_title></t_title>
+ <title>|host_description| - HaProxy - |query_bePxName| Backend Sessions</title>
+ <t_image_format_id></t_image_format_id>
+ <image_format_id>1</image_format_id>
+ <t_height></t_height>
+ <height>120</height>
+ <t_width></t_width>
+ <width>500</width>
+ <t_auto_scale></t_auto_scale>
+ <auto_scale>on</auto_scale>
+ <t_auto_scale_opts></t_auto_scale_opts>
+ <auto_scale_opts>2</auto_scale_opts>
+ <t_auto_scale_log></t_auto_scale_log>
+ <auto_scale_log></auto_scale_log>
+ <t_auto_scale_rigid></t_auto_scale_rigid>
+ <auto_scale_rigid></auto_scale_rigid>
+ <t_auto_padding></t_auto_padding>
+ <auto_padding>on</auto_padding>
+ <t_export></t_export>
+ <export>on</export>
+ <t_upper_limit></t_upper_limit>
+ <upper_limit>10000</upper_limit>
+ <t_lower_limit></t_lower_limit>
+ <lower_limit>0</lower_limit>
+ <t_base_value></t_base_value>
+ <base_value>1000</base_value>
+ <t_unit_value></t_unit_value>
+ <unit_value></unit_value>
+ <t_unit_exponent_value></t_unit_exponent_value>
+ <unit_exponent_value></unit_exponent_value>
+ <t_vertical_label></t_vertical_label>
+ <vertical_label></vertical_label>
+ </graph>
+ <items>
+ <hash_1000131ecaf3728447913a30dfa80cdd9cdff4>
+ <task_item_id>hash_080013230e04055a4228154123e74c6586d435</task_item_id>
+ <color_id>0000FF</color_id>
+ <graph_type_id>5</graph_type_id>
+ <consolidation_function_id>1</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Total Sessions:</text_format>
+ <hard_return></hard_return>
+ <sequence>5</sequence>
+ </hash_1000131ecaf3728447913a30dfa80cdd9cdff4>
+ <hash_1000132171a00b34d33f99ef24bcc235fbb6a3>
+ <task_item_id>hash_080013230e04055a4228154123e74c6586d435</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>4</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Current:</text_format>
+ <hard_return></hard_return>
+ <sequence>6</sequence>
+ </hash_1000132171a00b34d33f99ef24bcc235fbb6a3>
+ <hash_1000132129590e72a46480422f85e063d8cf4d>
+ <task_item_id>hash_080013230e04055a4228154123e74c6586d435</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>1</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Average:</text_format>
+ <hard_return></hard_return>
+ <sequence>7</sequence>
+ </hash_1000132129590e72a46480422f85e063d8cf4d>
+ <hash_1000138d11fec869f88ccf2fa3227bcffadfc3>
+ <task_item_id>hash_080013230e04055a4228154123e74c6586d435</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>3</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Maximum:</text_format>
+ <hard_return>on</hard_return>
+ <sequence>8</sequence>
+ </hash_1000138d11fec869f88ccf2fa3227bcffadfc3>
+ <hash_100013783d295131617ad996e4699533a134ea>
+ <task_item_id>hash_080013088549c8d7e8cdc80f19bae4d78dc296</task_item_id>
+ <color_id>EA8F00</color_id>
+ <graph_type_id>5</graph_type_id>
+ <consolidation_function_id>1</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Response Errors:</text_format>
+ <hard_return></hard_return>
+ <sequence>9</sequence>
+ </hash_100013783d295131617ad996e4699533a134ea>
+ <hash_1000139bc04e5072b25ca992ee0b0eec981b95>
+ <task_item_id>hash_080013088549c8d7e8cdc80f19bae4d78dc296</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>4</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Current:</text_format>
+ <hard_return></hard_return>
+ <sequence>10</sequence>
+ </hash_1000139bc04e5072b25ca992ee0b0eec981b95>
+ <hash_1000136333a9334fa0dc0d2f75c031dee1dcc5>
+ <task_item_id>hash_080013088549c8d7e8cdc80f19bae4d78dc296</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>1</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Average:</text_format>
+ <hard_return></hard_return>
+ <sequence>11</sequence>
+ </hash_1000136333a9334fa0dc0d2f75c031dee1dcc5>
+ <hash_10001386e0e18d79915cd21ff123fb830e150e>
+ <task_item_id>hash_080013088549c8d7e8cdc80f19bae4d78dc296</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>3</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Maximum:</text_format>
+ <hard_return>on</hard_return>
+ <sequence>12</sequence>
+ </hash_10001386e0e18d79915cd21ff123fb830e150e>
+ <hash_100013206b0b016daf267ff0a1daa7733ecf25>
+ <task_item_id>0</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>1</graph_type_id>
+ <consolidation_function_id>1</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Graph Last Updated: |date_time|</text_format>
+ <hard_return>on</hard_return>
+ <sequence>13</sequence>
+ </hash_100013206b0b016daf267ff0a1daa7733ecf25>
+ </items>
+ <inputs>
+ <hash_090013871102d568ae1a0d7d79aa4b0d3a6411>
+ <name>Data Source [TotalSessions]</name>
+ <description></description>
+ <column_name>task_item_id</column_name>
+ <items>hash_0000131ecaf3728447913a30dfa80cdd9cdff4|hash_0000132171a00b34d33f99ef24bcc235fbb6a3|hash_0000132129590e72a46480422f85e063d8cf4d|hash_0000138d11fec869f88ccf2fa3227bcffadfc3</items>
+ </hash_090013871102d568ae1a0d7d79aa4b0d3a6411>
+ <hash_090013320fd0edeb30465be51274fa3ecbe168>
+ <name>Data Source [ResponseErrors]</name>
+ <description></description>
+ <column_name>task_item_id</column_name>
+ <items>hash_000013783d295131617ad996e4699533a134ea|hash_0000139bc04e5072b25ca992ee0b0eec981b95|hash_0000136333a9334fa0dc0d2f75c031dee1dcc5|hash_00001386e0e18d79915cd21ff123fb830e150e</items>
+ </hash_090013320fd0edeb30465be51274fa3ecbe168>
+ </inputs>
+ </hash_000013cdbf9accfcd57d9e0a7c97896313ddee>
+ <hash_000013b6d238ff2532fcc19ab498043c7c65c2>
+ <name>HAProxy Backend Traffic</name>
+ <graph>
+ <t_title></t_title>
+ <title>|host_description| - HaProxy |query_bePxName| Backend Traffic</title>
+ <t_image_format_id></t_image_format_id>
+ <image_format_id>1</image_format_id>
+ <t_height></t_height>
+ <height>120</height>
+ <t_width></t_width>
+ <width>500</width>
+ <t_auto_scale></t_auto_scale>
+ <auto_scale>on</auto_scale>
+ <t_auto_scale_opts></t_auto_scale_opts>
+ <auto_scale_opts>2</auto_scale_opts>
+ <t_auto_scale_log></t_auto_scale_log>
+ <auto_scale_log></auto_scale_log>
+ <t_auto_scale_rigid></t_auto_scale_rigid>
+ <auto_scale_rigid></auto_scale_rigid>
+ <t_auto_padding></t_auto_padding>
+ <auto_padding>on</auto_padding>
+ <t_export></t_export>
+ <export>on</export>
+ <t_upper_limit></t_upper_limit>
+ <upper_limit>10000000000</upper_limit>
+ <t_lower_limit></t_lower_limit>
+ <lower_limit>0</lower_limit>
+ <t_base_value></t_base_value>
+ <base_value>1024</base_value>
+ <t_unit_value></t_unit_value>
+ <unit_value></unit_value>
+ <t_unit_exponent_value></t_unit_exponent_value>
+ <unit_exponent_value></unit_exponent_value>
+ <t_vertical_label></t_vertical_label>
+ <vertical_label>bytes</vertical_label>
+ </graph>
+ <items>
+ <hash_100013184e60d8dac2421c2787887fe07f6d25>
+ <task_item_id>hash_0800132f5283f17a7cde63137189d4d3ea7e4e</task_item_id>
+ <color_id>6EA100</color_id>
+ <graph_type_id>5</graph_type_id>
+ <consolidation_function_id>1</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Ingress Traffic:</text_format>
+ <hard_return></hard_return>
+ <sequence>2</sequence>
+ </hash_100013184e60d8dac2421c2787887fe07f6d25>
+ <hash_100013f3889b4094b935798483e489b5f5e16e>
+ <task_item_id>hash_0800132f5283f17a7cde63137189d4d3ea7e4e</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>4</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Current:</text_format>
+ <hard_return></hard_return>
+ <sequence>3</sequence>
+ </hash_100013f3889b4094b935798483e489b5f5e16e>
+ <hash_1000134bbdf263db6461f5d76717c12564c42c>
+ <task_item_id>hash_0800132f5283f17a7cde63137189d4d3ea7e4e</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>1</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Average:</text_format>
+ <hard_return></hard_return>
+ <sequence>4</sequence>
+ </hash_1000134bbdf263db6461f5d76717c12564c42c>
+ <hash_1000131b708578244e36caba0f4dea67230c80>
+ <task_item_id>hash_0800132f5283f17a7cde63137189d4d3ea7e4e</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>3</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Maximum:</text_format>
+ <hard_return>on</hard_return>
+ <sequence>5</sequence>
+ </hash_1000131b708578244e36caba0f4dea67230c80>
+ <hash_1000133e2f02edb1a55bcdd20e925a3849fd37>
+ <task_item_id>hash_0800136c0e4debeb9b084231d858faabd82f8f</task_item_id>
+ <color_id>FF0000</color_id>
+ <graph_type_id>5</graph_type_id>
+ <consolidation_function_id>1</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Egress Traffic:</text_format>
+ <hard_return></hard_return>
+ <sequence>6</sequence>
+ </hash_1000133e2f02edb1a55bcdd20e925a3849fd37>
+ <hash_1000134517c9799c71e03dcd2278681858d70f>
+ <task_item_id>hash_0800136c0e4debeb9b084231d858faabd82f8f</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>4</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Current:</text_format>
+ <hard_return></hard_return>
+ <sequence>7</sequence>
+ </hash_1000134517c9799c71e03dcd2278681858d70f>
+ <hash_1000132edf24a4592c9537d2341ec20c588fc2>
+ <task_item_id>hash_0800136c0e4debeb9b084231d858faabd82f8f</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>1</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Average:</text_format>
+ <hard_return></hard_return>
+ <sequence>8</sequence>
+ </hash_1000132edf24a4592c9537d2341ec20c588fc2>
+ <hash_100013150e680935bfccc75f1f88c7c60030f7>
+ <task_item_id>hash_0800136c0e4debeb9b084231d858faabd82f8f</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>3</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Maximum:</text_format>
+ <hard_return>on</hard_return>
+ <sequence>9</sequence>
+ </hash_100013150e680935bfccc75f1f88c7c60030f7>
+ <hash_1000135dcb7625a1a21d8d94fdf2f97d302a42>
+ <task_item_id>0</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>1</graph_type_id>
+ <consolidation_function_id>1</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Graph Last Updated: |date_time|</text_format>
+ <hard_return>on</hard_return>
+ <sequence>10</sequence>
+ </hash_1000135dcb7625a1a21d8d94fdf2f97d302a42>
+ </items>
+ <inputs>
+ <hash_090013952f2971b58b10f88a55d63a0388a429>
+ <name>Data Source [BytesIn]</name>
+ <description></description>
+ <column_name>task_item_id</column_name>
+ <items>hash_000013184e60d8dac2421c2787887fe07f6d25|hash_000013f3889b4094b935798483e489b5f5e16e|hash_0000134bbdf263db6461f5d76717c12564c42c|hash_0000131b708578244e36caba0f4dea67230c80</items>
+ </hash_090013952f2971b58b10f88a55d63a0388a429>
+ <hash_09001393a65aa111654d6801846a6cb523580b>
+ <name>Data Source [BytesOut]</name>
+ <description></description>
+ <column_name>task_item_id</column_name>
+ <items>hash_0000133e2f02edb1a55bcdd20e925a3849fd37|hash_0000134517c9799c71e03dcd2278681858d70f|hash_0000132edf24a4592c9537d2341ec20c588fc2|hash_000013150e680935bfccc75f1f88c7c60030f7</items>
+ </hash_09001393a65aa111654d6801846a6cb523580b>
+ </inputs>
+ </hash_000013b6d238ff2532fcc19ab498043c7c65c2>
+ <hash_010013fa4d4fff334b60e9064e89082173fe34>
+ <name>HAProxy Backend Session Stats</name>
+ <ds>
+ <t_name></t_name>
+ <name>|host_description| - Haproxy - |query_bePxName| Backend Session Stats</name>
+ <data_input_id>hash_030013bf566c869ac6443b0c75d1c32b5a350e</data_input_id>
+ <t_rra_id></t_rra_id>
+ <t_rrd_step></t_rrd_step>
+ <rrd_step>300</rrd_step>
+ <t_active></t_active>
+ <active>on</active>
+ <rra_items>hash_150013c21df5178e5c955013591239eb0afd46|hash_1500130d9c0af8b8acdc7807943937b3208e29|hash_1500136fc2d038fb42950138b0ce3e9874cc60|hash_150013e36f3adb9f152adfa5dc50fd2b23337e|hash_15001352829408ab566127eede2c74d201c678|hash_150013e73fb797d3ab2a9b97c3ec29e9690910</rra_items>
+ </ds>
+ <items>
+ <hash_080013230e04055a4228154123e74c6586d435>
+ <t_data_source_name></t_data_source_name>
+ <data_source_name>TotalSessions</data_source_name>
+ <t_rrd_minimum></t_rrd_minimum>
+ <rrd_minimum>0</rrd_minimum>
+ <t_rrd_maximum></t_rrd_maximum>
+ <rrd_maximum>10000</rrd_maximum>
+ <t_data_source_type_id></t_data_source_type_id>
+ <data_source_type_id>2</data_source_type_id>
+ <t_rrd_heartbeat></t_rrd_heartbeat>
+ <rrd_heartbeat>600</rrd_heartbeat>
+ <t_data_input_field_id></t_data_input_field_id>
+ <data_input_field_id>0</data_input_field_id>
+ </hash_080013230e04055a4228154123e74c6586d435>
+ <hash_080013088549c8d7e8cdc80f19bae4d78dc296>
+ <t_data_source_name></t_data_source_name>
+ <data_source_name>ResponseErrors</data_source_name>
+ <t_rrd_minimum></t_rrd_minimum>
+ <rrd_minimum>0</rrd_minimum>
+ <t_rrd_maximum></t_rrd_maximum>
+ <rrd_maximum>10000</rrd_maximum>
+ <t_data_source_type_id></t_data_source_type_id>
+ <data_source_type_id>2</data_source_type_id>
+ <t_rrd_heartbeat></t_rrd_heartbeat>
+ <rrd_heartbeat>600</rrd_heartbeat>
+ <t_data_input_field_id></t_data_input_field_id>
+ <data_input_field_id>0</data_input_field_id>
+ </hash_080013088549c8d7e8cdc80f19bae4d78dc296>
+ </items>
+ <data>
+ <item_000>
+ <data_input_field_id>hash_070013c1f36ee60c3dc98945556d57f26e475b</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_000>
+ <item_001>
+ <data_input_field_id>hash_070013e6deda7be0f391399c5130e7c4a48b28</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_001>
+ <item_002>
+ <data_input_field_id>hash_070013cbbe5c1ddfb264a6e5d509ce1c78c95f</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_002>
+ <item_003>
+ <data_input_field_id>hash_0700136027a919c7c7731fbe095b6f53ab127b</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_003>
+ <item_004>
+ <data_input_field_id>hash_070013b5c23f246559df38662c255f4aa21d6b</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_004>
+ <item_005>
+ <data_input_field_id>hash_0700131cc1493a6781af2c478fa4de971531cf</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_005>
+ <item_006>
+ <data_input_field_id>hash_070013f4facc5e2ca7ebee621f09bc6d9fc792</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_006>
+ <item_007>
+ <data_input_field_id>hash_070013acb449d1451e8a2a655c2c99d31142c7</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_007>
+ <item_008>
+ <data_input_field_id>hash_070013617cdc8a230615e59f06f361ef6e7728</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_008>
+ </data>
+ </hash_010013fa4d4fff334b60e9064e89082173fe34>
+ <hash_010013a63ddba34026d2c07d73c0ef2ae64b54>
+ <name>HAProxy Backend Traffic Stats</name>
+ <ds>
+ <t_name></t_name>
+ <name>|host_description| - Haproxy - |query_bePxName| Backend Traffic Stats</name>
+ <data_input_id>hash_030013bf566c869ac6443b0c75d1c32b5a350e</data_input_id>
+ <t_rra_id></t_rra_id>
+ <t_rrd_step></t_rrd_step>
+ <rrd_step>300</rrd_step>
+ <t_active></t_active>
+ <active>on</active>
+ <rra_items>hash_150013c21df5178e5c955013591239eb0afd46|hash_1500130d9c0af8b8acdc7807943937b3208e29|hash_1500136fc2d038fb42950138b0ce3e9874cc60|hash_150013e36f3adb9f152adfa5dc50fd2b23337e|hash_150013a4aa6f4de84eaa00008f88d3f5bd8520|hash_150013e73fb797d3ab2a9b97c3ec29e9690910</rra_items>
+ </ds>
+ <items>
+ <hash_0800136c0e4debeb9b084231d858faabd82f8f>
+ <t_data_source_name></t_data_source_name>
+ <data_source_name>BytesOut</data_source_name>
+ <t_rrd_minimum></t_rrd_minimum>
+ <rrd_minimum>0</rrd_minimum>
+ <t_rrd_maximum></t_rrd_maximum>
+ <rrd_maximum>10000000000</rrd_maximum>
+ <t_data_source_type_id></t_data_source_type_id>
+ <data_source_type_id>2</data_source_type_id>
+ <t_rrd_heartbeat></t_rrd_heartbeat>
+ <rrd_heartbeat>600</rrd_heartbeat>
+ <t_data_input_field_id></t_data_input_field_id>
+ <data_input_field_id>0</data_input_field_id>
+ </hash_0800136c0e4debeb9b084231d858faabd82f8f>
+ <hash_0800132f5283f17a7cde63137189d4d3ea7e4e>
+ <t_data_source_name></t_data_source_name>
+ <data_source_name>BytesIn</data_source_name>
+ <t_rrd_minimum></t_rrd_minimum>
+ <rrd_minimum>0</rrd_minimum>
+ <t_rrd_maximum></t_rrd_maximum>
+ <rrd_maximum>10000000000</rrd_maximum>
+ <t_data_source_type_id></t_data_source_type_id>
+ <data_source_type_id>2</data_source_type_id>
+ <t_rrd_heartbeat></t_rrd_heartbeat>
+ <rrd_heartbeat>600</rrd_heartbeat>
+ <t_data_input_field_id></t_data_input_field_id>
+ <data_input_field_id>0</data_input_field_id>
+ </hash_0800132f5283f17a7cde63137189d4d3ea7e4e>
+ </items>
+ <data>
+ <item_000>
+ <data_input_field_id>hash_070013c1f36ee60c3dc98945556d57f26e475b</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_000>
+ <item_001>
+ <data_input_field_id>hash_070013e6deda7be0f391399c5130e7c4a48b28</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_001>
+ <item_002>
+ <data_input_field_id>hash_070013cbbe5c1ddfb264a6e5d509ce1c78c95f</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_002>
+ <item_003>
+ <data_input_field_id>hash_0700136027a919c7c7731fbe095b6f53ab127b</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_003>
+ <item_004>
+ <data_input_field_id>hash_070013b5c23f246559df38662c255f4aa21d6b</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_004>
+ <item_005>
+ <data_input_field_id>hash_0700131cc1493a6781af2c478fa4de971531cf</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_005>
+ <item_006>
+ <data_input_field_id>hash_070013f4facc5e2ca7ebee621f09bc6d9fc792</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_006>
+ <item_007>
+ <data_input_field_id>hash_070013acb449d1451e8a2a655c2c99d31142c7</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_007>
+ <item_008>
+ <data_input_field_id>hash_070013617cdc8a230615e59f06f361ef6e7728</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_008>
+ </data>
+ </hash_010013a63ddba34026d2c07d73c0ef2ae64b54>
+ <hash_150013c21df5178e5c955013591239eb0afd46>
+ <name>Daily (5 Minute Average)</name>
+ <x_files_factor>0.5</x_files_factor>
+ <steps>1</steps>
+ <rows>600</rows>
+ <timespan>86400</timespan>
+ <cf_items>1|2|3|4</cf_items>
+ </hash_150013c21df5178e5c955013591239eb0afd46>
+ <hash_1500130d9c0af8b8acdc7807943937b3208e29>
+ <name>Weekly (30 Minute Average)</name>
+ <x_files_factor>0.5</x_files_factor>
+ <steps>6</steps>
+ <rows>700</rows>
+ <timespan>604800</timespan>
+ <cf_items>1|2|3|4</cf_items>
+ </hash_1500130d9c0af8b8acdc7807943937b3208e29>
+ <hash_1500136fc2d038fb42950138b0ce3e9874cc60>
+ <name>Monthly (2 Hour Average)</name>
+ <x_files_factor>0.5</x_files_factor>
+ <steps>24</steps>
+ <rows>775</rows>
+ <timespan>2678400</timespan>
+ <cf_items>1|2|3|4</cf_items>
+ </hash_1500136fc2d038fb42950138b0ce3e9874cc60>
+ <hash_150013e36f3adb9f152adfa5dc50fd2b23337e>
+ <name>Yearly (1 Day Average)</name>
+ <x_files_factor>0.5</x_files_factor>
+ <steps>288</steps>
+ <rows>797</rows>
+ <timespan>33053184</timespan>
+ <cf_items>1|2|3|4</cf_items>
+ </hash_150013e36f3adb9f152adfa5dc50fd2b23337e>
+ <hash_1500130028a19ed71b758898eaa55ab1c59694>
+ <name>Three days (5 minutes average)</name>
+ <x_files_factor>0.5</x_files_factor>
+ <steps>6</steps>
+ <rows>700</rows>
+ <timespan>302400</timespan>
+ <cf_items>1|2|3|4</cf_items>
+ </hash_1500130028a19ed71b758898eaa55ab1c59694>
+ <hash_150013e73fb797d3ab2a9b97c3ec29e9690910>
+ <name>Hourly (1 Minute Average)</name>
+ <x_files_factor>0.5</x_files_factor>
+ <steps>1</steps>
+ <rows>500</rows>
+ <timespan>14400</timespan>
+ <cf_items>1|3</cf_items>
+ </hash_150013e73fb797d3ab2a9b97c3ec29e9690910>
+ <hash_060013e9c43831e54eca8069317a2ce8c6f751>
+ <name>Normal</name>
+ <gprint_text>%8.2lf %s</gprint_text>
+ </hash_060013e9c43831e54eca8069317a2ce8c6f751>
+</cacti>
\ No newline at end of file
--- /dev/null
+<cacti>
+ <hash_0400138cb70c1064bd60742726af23828c4b05>
+ <name>HAProxy Frontends</name>
+ <description></description>
+ <xml_path><path_cacti>/resource/snmp_queries/haproxy_frontend.xml</xml_path>
+ <data_input_id>hash_030013bf566c869ac6443b0c75d1c32b5a350e</data_input_id>
+ <graphs>
+ <hash_110013c1c2bca3af0ae4e2ce0de096aa79dba5>
+ <name>HAProxy Frontend Sessions</name>
+ <graph_template_id>hash_00001328b6727aa54dde6bb3f5dde939ae03aa</graph_template_id>
+ <rrd>
+ <item_000>
+ <snmp_field_name>feSTot</snmp_field_name>
+ <data_template_id>hash_0100139f985697a7530256b4e35c95ef03db20</data_template_id>
+ <data_template_rrd_id>hash_080013f9c76e05d0a87b2d32f9a5b014e17aab</data_template_rrd_id>
+ </item_000>
+ <item_001>
+ <snmp_field_name>feEReq</snmp_field_name>
+ <data_template_id>hash_0100139f985697a7530256b4e35c95ef03db20</data_template_id>
+ <data_template_rrd_id>hash_080013c137bec94d7220e65a5b3dfa4049c242</data_template_rrd_id>
+ </item_001>
+ </rrd>
+ <sv_graph>
+ <hash_1200130f0e4ffcd11f807d23794ab805d7901a>
+ <field_name>TotalSessions</field_name>
+ <sequence>1</sequence>
+ <text>TotalSessions</text>
+ </hash_1200130f0e4ffcd11f807d23794ab805d7901a>
+ <hash_1200134fc506db9ce45c0e5cb38a429ad8e077>
+ <field_name>RequestErrors</field_name>
+ <sequence>1</sequence>
+ <text>RequestErrors</text>
+ </hash_1200134fc506db9ce45c0e5cb38a429ad8e077>
+ </sv_graph>
+ <sv_data_source>
+ <hash_1300138a5efc51c95b400c3139b352ce110969>
+ <field_name>RequestErrors</field_name>
+ <data_template_id>hash_0100139f985697a7530256b4e35c95ef03db20</data_template_id>
+ <sequence>1</sequence>
+ <text>RequestErrors</text>
+ </hash_1300138a5efc51c95b400c3139b352ce110969>
+ <hash_130013e374903ab025bc2728f2f9abeb412ac3>
+ <field_name>TotalSessions</field_name>
+ <data_template_id>hash_0100139f985697a7530256b4e35c95ef03db20</data_template_id>
+ <sequence>1</sequence>
+ <text>TotalSessions</text>
+ </hash_130013e374903ab025bc2728f2f9abeb412ac3>
+ </sv_data_source>
+ </hash_110013c1c2bca3af0ae4e2ce0de096aa79dba5>
+ <hash_1100130838495d5d82f25f4a675ee7c56543a5>
+ <name>HAProxy Frontend Traffic</name>
+ <graph_template_id>hash_000013d0fe9e9efc2746de488fdede0419b051</graph_template_id>
+ <rrd>
+ <item_000>
+ <snmp_field_name>feBOut</snmp_field_name>
+ <data_template_id>hash_010013a88327df77ea19e333ddd96096c34751</data_template_id>
+ <data_template_rrd_id>hash_0800137db81cd58fbbbd203af0f55c15c2081a</data_template_rrd_id>
+ </item_000>
+ <item_001>
+ <snmp_field_name>feBIn</snmp_field_name>
+ <data_template_id>hash_010013a88327df77ea19e333ddd96096c34751</data_template_id>
+ <data_template_rrd_id>hash_08001305772980bb6de1f12223d7ec53e323c4</data_template_rrd_id>
+ </item_001>
+ </rrd>
+ <sv_graph>
+ <hash_120013934d1311136bccb4d9ca5a67e240afeb>
+ <field_name>BytesIn</field_name>
+ <sequence>1</sequence>
+ <text>BytesIn</text>
+ </hash_120013934d1311136bccb4d9ca5a67e240afeb>
+ <hash_12001399a6e6fb09b025bc60a214cb00e6d1f0>
+ <field_name>BytesOut</field_name>
+ <sequence>1</sequence>
+ <text>BytesOut</text>
+ </hash_12001399a6e6fb09b025bc60a214cb00e6d1f0>
+ </sv_graph>
+ <sv_data_source>
+ <hash_1300135f35cdaeda1a1169be21e52a85af339e>
+ <field_name>BytesOut</field_name>
+ <data_template_id>hash_0100139f985697a7530256b4e35c95ef03db20</data_template_id>
+ <sequence>1</sequence>
+ <text>BytesOut</text>
+ </hash_1300135f35cdaeda1a1169be21e52a85af339e>
+ <hash_1300136ee916a0c0ce8dad133b9dfcf32e2581>
+ <field_name>BytesIn</field_name>
+ <data_template_id>hash_0100139f985697a7530256b4e35c95ef03db20</data_template_id>
+ <sequence>1</sequence>
+ <text>BytesIn</text>
+ </hash_1300136ee916a0c0ce8dad133b9dfcf32e2581>
+ <hash_13001382c5a3b953f8d1583b168d15beed6e9c>
+ <field_name>BytesOut</field_name>
+ <data_template_id>hash_010013a88327df77ea19e333ddd96096c34751</data_template_id>
+ <sequence>1</sequence>
+ <text>BytesOut</text>
+ </hash_13001382c5a3b953f8d1583b168d15beed6e9c>
+ <hash_1300132c486fa1a5e875179031ea9f5328614b>
+ <field_name>BytesIn</field_name>
+ <data_template_id>hash_010013a88327df77ea19e333ddd96096c34751</data_template_id>
+ <sequence>1</sequence>
+ <text>BytesIn</text>
+ </hash_1300132c486fa1a5e875179031ea9f5328614b>
+ </sv_data_source>
+ </hash_1100130838495d5d82f25f4a675ee7c56543a5>
+ </graphs>
+ </hash_0400138cb70c1064bd60742726af23828c4b05>
+ <hash_030013bf566c869ac6443b0c75d1c32b5a350e>
+ <name>Get SNMP Data (Indexed)</name>
+ <type_id>3</type_id>
+ <input_string></input_string>
+ <fields>
+ <hash_070013617cdc8a230615e59f06f361ef6e7728>
+ <name>SNMP IP Address</name>
+ <update_rra></update_rra>
+ <regexp_match></regexp_match>
+ <allow_nulls></allow_nulls>
+ <type_code>hostname</type_code>
+ <input_output>in</input_output>
+ <data_name>management_ip</data_name>
+ </hash_070013617cdc8a230615e59f06f361ef6e7728>
+ <hash_070013acb449d1451e8a2a655c2c99d31142c7>
+ <name>SNMP Community</name>
+ <update_rra></update_rra>
+ <regexp_match></regexp_match>
+ <allow_nulls></allow_nulls>
+ <type_code>snmp_community</type_code>
+ <input_output>in</input_output>
+ <data_name>snmp_community</data_name>
+ </hash_070013acb449d1451e8a2a655c2c99d31142c7>
+ <hash_070013f4facc5e2ca7ebee621f09bc6d9fc792>
+ <name>SNMP Username (v3)</name>
+ <update_rra></update_rra>
+ <regexp_match></regexp_match>
+ <allow_nulls>on</allow_nulls>
+ <type_code>snmp_username</type_code>
+ <input_output>in</input_output>
+ <data_name>snmp_username</data_name>
+ </hash_070013f4facc5e2ca7ebee621f09bc6d9fc792>
+ <hash_0700131cc1493a6781af2c478fa4de971531cf>
+ <name>SNMP Password (v3)</name>
+ <update_rra></update_rra>
+ <regexp_match></regexp_match>
+ <allow_nulls>on</allow_nulls>
+ <type_code>snmp_password</type_code>
+ <input_output>in</input_output>
+ <data_name>snmp_password</data_name>
+ </hash_0700131cc1493a6781af2c478fa4de971531cf>
+ <hash_070013b5c23f246559df38662c255f4aa21d6b>
+ <name>SNMP Version (1, 2, or 3)</name>
+ <update_rra></update_rra>
+ <regexp_match></regexp_match>
+ <allow_nulls></allow_nulls>
+ <type_code>snmp_version</type_code>
+ <input_output>in</input_output>
+ <data_name>snmp_version</data_name>
+ </hash_070013b5c23f246559df38662c255f4aa21d6b>
+ <hash_0700136027a919c7c7731fbe095b6f53ab127b>
+ <name>Index Type</name>
+ <update_rra></update_rra>
+ <regexp_match></regexp_match>
+ <allow_nulls></allow_nulls>
+ <type_code>index_type</type_code>
+ <input_output>in</input_output>
+ <data_name>index_type</data_name>
+ </hash_0700136027a919c7c7731fbe095b6f53ab127b>
+ <hash_070013cbbe5c1ddfb264a6e5d509ce1c78c95f>
+ <name>Index Value</name>
+ <update_rra></update_rra>
+ <regexp_match></regexp_match>
+ <allow_nulls></allow_nulls>
+ <type_code>index_value</type_code>
+ <input_output>in</input_output>
+ <data_name>index_value</data_name>
+ </hash_070013cbbe5c1ddfb264a6e5d509ce1c78c95f>
+ <hash_070013e6deda7be0f391399c5130e7c4a48b28>
+ <name>Output Type ID</name>
+ <update_rra></update_rra>
+ <regexp_match></regexp_match>
+ <allow_nulls></allow_nulls>
+ <type_code>output_type</type_code>
+ <input_output>in</input_output>
+ <data_name>output_type</data_name>
+ </hash_070013e6deda7be0f391399c5130e7c4a48b28>
+ <hash_070013c1f36ee60c3dc98945556d57f26e475b>
+ <name>SNMP Port</name>
+ <update_rra></update_rra>
+ <regexp_match></regexp_match>
+ <allow_nulls></allow_nulls>
+ <type_code>snmp_port</type_code>
+ <input_output>in</input_output>
+ <data_name>snmp_port</data_name>
+ </hash_070013c1f36ee60c3dc98945556d57f26e475b>
+ </fields>
+ </hash_030013bf566c869ac6443b0c75d1c32b5a350e>
+ <hash_00001328b6727aa54dde6bb3f5dde939ae03aa>
+ <name>HAProxy Frontend Sessions</name>
+ <graph>
+ <t_title></t_title>
+ <title>|host_description| - HaProxy - |query_fePxName| Frontend Sessions</title>
+ <t_image_format_id></t_image_format_id>
+ <image_format_id>1</image_format_id>
+ <t_height></t_height>
+ <height>120</height>
+ <t_width></t_width>
+ <width>500</width>
+ <t_auto_scale></t_auto_scale>
+ <auto_scale>on</auto_scale>
+ <t_auto_scale_opts></t_auto_scale_opts>
+ <auto_scale_opts>2</auto_scale_opts>
+ <t_auto_scale_log></t_auto_scale_log>
+ <auto_scale_log></auto_scale_log>
+ <t_auto_scale_rigid></t_auto_scale_rigid>
+ <auto_scale_rigid></auto_scale_rigid>
+ <t_auto_padding></t_auto_padding>
+ <auto_padding>on</auto_padding>
+ <t_export></t_export>
+ <export>on</export>
+ <t_upper_limit></t_upper_limit>
+ <upper_limit>10000</upper_limit>
+ <t_lower_limit></t_lower_limit>
+ <lower_limit>0</lower_limit>
+ <t_base_value></t_base_value>
+ <base_value>1000</base_value>
+ <t_unit_value></t_unit_value>
+ <unit_value></unit_value>
+ <t_unit_exponent_value></t_unit_exponent_value>
+ <unit_exponent_value></unit_exponent_value>
+ <t_vertical_label></t_vertical_label>
+ <vertical_label></vertical_label>
+ </graph>
+ <items>
+ <hash_100013b1ecfd75df9c17c0ba11acc5e9b7d8f8>
+ <task_item_id>hash_080013f9c76e05d0a87b2d32f9a5b014e17aab</task_item_id>
+ <color_id>0000FF</color_id>
+ <graph_type_id>5</graph_type_id>
+ <consolidation_function_id>1</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Total Sessions:</text_format>
+ <hard_return></hard_return>
+ <sequence>5</sequence>
+ </hash_100013b1ecfd75df9c17c0ba11acc5e9b7d8f8>
+ <hash_100013fa878148199aee5bb2a10b7693318347>
+ <task_item_id>hash_080013f9c76e05d0a87b2d32f9a5b014e17aab</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>4</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Current:</text_format>
+ <hard_return></hard_return>
+ <sequence>6</sequence>
+ </hash_100013fa878148199aee5bb2a10b7693318347>
+ <hash_1000137d834c383afa4863974edc19a337e260>
+ <task_item_id>hash_080013f9c76e05d0a87b2d32f9a5b014e17aab</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>1</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Average:</text_format>
+ <hard_return></hard_return>
+ <sequence>7</sequence>
+ </hash_1000137d834c383afa4863974edc19a337e260>
+ <hash_1000138b0422b293230883462cfbfe32144d47>
+ <task_item_id>hash_080013f9c76e05d0a87b2d32f9a5b014e17aab</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>3</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Maximum:</text_format>
+ <hard_return>on</hard_return>
+ <sequence>8</sequence>
+ </hash_1000138b0422b293230883462cfbfe32144d47>
+ <hash_1000131c87ed4e76c026cd131418d792822944>
+ <task_item_id>hash_080013c137bec94d7220e65a5b3dfa4049c242</task_item_id>
+ <color_id>EA8F00</color_id>
+ <graph_type_id>5</graph_type_id>
+ <consolidation_function_id>1</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Request Errors:</text_format>
+ <hard_return></hard_return>
+ <sequence>9</sequence>
+ </hash_1000131c87ed4e76c026cd131418d792822944>
+ <hash_100013a9993114514cb1abea4b929f984222ea>
+ <task_item_id>hash_080013c137bec94d7220e65a5b3dfa4049c242</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>4</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Current:</text_format>
+ <hard_return></hard_return>
+ <sequence>10</sequence>
+ </hash_100013a9993114514cb1abea4b929f984222ea>
+ <hash_1000131bc67adbaa8b77cd6c73d9622c7eebc1>
+ <task_item_id>hash_080013c137bec94d7220e65a5b3dfa4049c242</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>1</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Average:</text_format>
+ <hard_return></hard_return>
+ <sequence>12</sequence>
+ </hash_1000131bc67adbaa8b77cd6c73d9622c7eebc1>
+ <hash_1000138840d17711368b90a61132ba83e9edb8>
+ <task_item_id>hash_080013c137bec94d7220e65a5b3dfa4049c242</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>3</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Maximum:</text_format>
+ <hard_return>on</hard_return>
+ <sequence>13</sequence>
+ </hash_1000138840d17711368b90a61132ba83e9edb8>
+ <hash_100013e8ddbe92933ba99b2d2ebc8f76a06e2e>
+ <task_item_id>0</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>1</graph_type_id>
+ <consolidation_function_id>1</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Graph Last Updated: |date_time|</text_format>
+ <hard_return>on</hard_return>
+ <sequence>14</sequence>
+ </hash_100013e8ddbe92933ba99b2d2ebc8f76a06e2e>
+ </items>
+ <inputs>
+ <hash_0900134bedc49c15c9557fc95cdbc8850a5cb1>
+ <name>Data Source [TotalSessions]</name>
+ <description></description>
+ <column_name>task_item_id</column_name>
+ <items>hash_000013b1ecfd75df9c17c0ba11acc5e9b7d8f8|hash_000013fa878148199aee5bb2a10b7693318347|hash_0000138b0422b293230883462cfbfe32144d47|hash_0000137d834c383afa4863974edc19a337e260</items>
+ </hash_0900134bedc49c15c9557fc95cdbc8850a5cb1>
+ <hash_090013f3f3dfd39bb035006de08df94415e828>
+ <name>Data Source [RequestErrors]</name>
+ <description></description>
+ <column_name>task_item_id</column_name>
+ <items>hash_0000131c87ed4e76c026cd131418d792822944|hash_000013a9993114514cb1abea4b929f984222ea|hash_0000131bc67adbaa8b77cd6c73d9622c7eebc1|hash_0000138840d17711368b90a61132ba83e9edb8</items>
+ </hash_090013f3f3dfd39bb035006de08df94415e828>
+ </inputs>
+ </hash_00001328b6727aa54dde6bb3f5dde939ae03aa>
+ <hash_000013d0fe9e9efc2746de488fdede0419b051>
+ <name>HAProxy Frontend Traffic</name>
+ <graph>
+ <t_title></t_title>
+ <title>|host_description| - HaProxy |query_fePxName| Frontend Traffic</title>
+ <t_image_format_id></t_image_format_id>
+ <image_format_id>1</image_format_id>
+ <t_height></t_height>
+ <height>120</height>
+ <t_width></t_width>
+ <width>500</width>
+ <t_auto_scale></t_auto_scale>
+ <auto_scale>on</auto_scale>
+ <t_auto_scale_opts></t_auto_scale_opts>
+ <auto_scale_opts>2</auto_scale_opts>
+ <t_auto_scale_log></t_auto_scale_log>
+ <auto_scale_log></auto_scale_log>
+ <t_auto_scale_rigid></t_auto_scale_rigid>
+ <auto_scale_rigid></auto_scale_rigid>
+ <t_auto_padding></t_auto_padding>
+ <auto_padding>on</auto_padding>
+ <t_export></t_export>
+ <export>on</export>
+ <t_upper_limit></t_upper_limit>
+ <upper_limit>10000000000</upper_limit>
+ <t_lower_limit></t_lower_limit>
+ <lower_limit>0</lower_limit>
+ <t_base_value></t_base_value>
+ <base_value>1024</base_value>
+ <t_unit_value></t_unit_value>
+ <unit_value></unit_value>
+ <t_unit_exponent_value></t_unit_exponent_value>
+ <unit_exponent_value></unit_exponent_value>
+ <t_vertical_label></t_vertical_label>
+ <vertical_label>bytes</vertical_label>
+ </graph>
+ <items>
+ <hash_100013d5c13ff711cbd645e9f88697b2c5e61b>
+ <task_item_id>hash_08001305772980bb6de1f12223d7ec53e323c4</task_item_id>
+ <color_id>6EA100</color_id>
+ <graph_type_id>5</graph_type_id>
+ <consolidation_function_id>1</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Ingress Traffic:</text_format>
+ <hard_return></hard_return>
+ <sequence>2</sequence>
+ </hash_100013d5c13ff711cbd645e9f88697b2c5e61b>
+ <hash_10001353cff0cd64c4d70574ef9da42f62c86a>
+ <task_item_id>hash_08001305772980bb6de1f12223d7ec53e323c4</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>4</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Current:</text_format>
+ <hard_return></hard_return>
+ <sequence>3</sequence>
+ </hash_10001353cff0cd64c4d70574ef9da42f62c86a>
+ <hash_1000136788d44f6207ce323ad40ccc8f15d462>
+ <task_item_id>hash_08001305772980bb6de1f12223d7ec53e323c4</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>1</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Average:</text_format>
+ <hard_return></hard_return>
+ <sequence>4</sequence>
+ </hash_1000136788d44f6207ce323ad40ccc8f15d462>
+ <hash_100013d4cb02a8fb7fa37ef1e37d8b78333ea3>
+ <task_item_id>hash_08001305772980bb6de1f12223d7ec53e323c4</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>3</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Maximum:</text_format>
+ <hard_return>on</hard_return>
+ <sequence>5</sequence>
+ </hash_100013d4cb02a8fb7fa37ef1e37d8b78333ea3>
+ <hash_1000137d82a7f3c82c698fe4e9cecc03d680b1>
+ <task_item_id>hash_0800137db81cd58fbbbd203af0f55c15c2081a</task_item_id>
+ <color_id>FF0000</color_id>
+ <graph_type_id>5</graph_type_id>
+ <consolidation_function_id>1</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Egress Traffic:</text_format>
+ <hard_return></hard_return>
+ <sequence>6</sequence>
+ </hash_1000137d82a7f3c82c698fe4e9cecc03d680b1>
+ <hash_100013d2d059378b521327426b451324bbb608>
+ <task_item_id>hash_0800137db81cd58fbbbd203af0f55c15c2081a</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>4</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Current:</text_format>
+ <hard_return></hard_return>
+ <sequence>7</sequence>
+ </hash_100013d2d059378b521327426b451324bbb608>
+ <hash_1000132eef0fae129ef21ad2d73e5e80814a23>
+ <task_item_id>hash_0800137db81cd58fbbbd203af0f55c15c2081a</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>1</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Average:</text_format>
+ <hard_return></hard_return>
+ <sequence>8</sequence>
+ </hash_1000132eef0fae129ef21ad2d73e5e80814a23>
+ <hash_1000138365462951b1f4e6b1a76f20b91be65d>
+ <task_item_id>hash_0800137db81cd58fbbbd203af0f55c15c2081a</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>9</graph_type_id>
+ <consolidation_function_id>3</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Maximum:</text_format>
+ <hard_return>on</hard_return>
+ <sequence>9</sequence>
+ </hash_1000138365462951b1f4e6b1a76f20b91be65d>
+ <hash_100013eefc9c0f83c57d6d6f8d75fbd45965a3>
+ <task_item_id>0</task_item_id>
+ <color_id>0</color_id>
+ <graph_type_id>1</graph_type_id>
+ <consolidation_function_id>1</consolidation_function_id>
+ <cdef_id>0</cdef_id>
+ <value></value>
+ <gprint_id>hash_060013e9c43831e54eca8069317a2ce8c6f751</gprint_id>
+ <text_format>Graph Last Updated: |date_time|</text_format>
+ <hard_return>on</hard_return>
+ <sequence>10</sequence>
+ </hash_100013eefc9c0f83c57d6d6f8d75fbd45965a3>
+ </items>
+ <inputs>
+ <hash_090013384e7b730bb32653e3fbce5ce509977d>
+ <name>Data Source [BytesOut]</name>
+ <description></description>
+ <column_name>task_item_id</column_name>
+ <items>hash_000013d2d059378b521327426b451324bbb608|hash_0000137d82a7f3c82c698fe4e9cecc03d680b1|hash_0000132eef0fae129ef21ad2d73e5e80814a23|hash_0000138365462951b1f4e6b1a76f20b91be65d</items>
+ </hash_090013384e7b730bb32653e3fbce5ce509977d>
+ <hash_090013e5ff60e3069b2d28d905f6affa63250e>
+ <name>Data Source [BytesIn]</name>
+ <description></description>
+ <column_name>task_item_id</column_name>
+ <items>hash_00001353cff0cd64c4d70574ef9da42f62c86a|hash_000013d5c13ff711cbd645e9f88697b2c5e61b|hash_0000136788d44f6207ce323ad40ccc8f15d462|hash_000013d4cb02a8fb7fa37ef1e37d8b78333ea3</items>
+ </hash_090013e5ff60e3069b2d28d905f6affa63250e>
+ </inputs>
+ </hash_000013d0fe9e9efc2746de488fdede0419b051>
+ <hash_0100139f985697a7530256b4e35c95ef03db20>
+ <name>HAProxy Frontend Session Stats</name>
+ <ds>
+ <t_name></t_name>
+ <name>|host_description| - Haproxy - |query_fePxName| Frontend Session Stats</name>
+ <data_input_id>hash_030013bf566c869ac6443b0c75d1c32b5a350e</data_input_id>
+ <t_rra_id></t_rra_id>
+ <t_rrd_step></t_rrd_step>
+ <rrd_step>300</rrd_step>
+ <t_active></t_active>
+ <active>on</active>
+ <rra_items>hash_150013c21df5178e5c955013591239eb0afd46|hash_1500130d9c0af8b8acdc7807943937b3208e29|hash_1500136fc2d038fb42950138b0ce3e9874cc60|hash_150013e36f3adb9f152adfa5dc50fd2b23337e|hash_1500139b529013942d5a6891d05a84d17175e0|hash_150013e73fb797d3ab2a9b97c3ec29e9690910</rra_items>
+ </ds>
+ <items>
+ <hash_080013c137bec94d7220e65a5b3dfa4049c242>
+ <t_data_source_name></t_data_source_name>
+ <data_source_name>RequestErrors</data_source_name>
+ <t_rrd_minimum></t_rrd_minimum>
+ <rrd_minimum>0</rrd_minimum>
+ <t_rrd_maximum></t_rrd_maximum>
+ <rrd_maximum>10000</rrd_maximum>
+ <t_data_source_type_id></t_data_source_type_id>
+ <data_source_type_id>2</data_source_type_id>
+ <t_rrd_heartbeat></t_rrd_heartbeat>
+ <rrd_heartbeat>600</rrd_heartbeat>
+ <t_data_input_field_id></t_data_input_field_id>
+ <data_input_field_id>0</data_input_field_id>
+ </hash_080013c137bec94d7220e65a5b3dfa4049c242>
+ <hash_080013f9c76e05d0a87b2d32f9a5b014e17aab>
+ <t_data_source_name></t_data_source_name>
+ <data_source_name>TotalSessions</data_source_name>
+ <t_rrd_minimum></t_rrd_minimum>
+ <rrd_minimum>0</rrd_minimum>
+ <t_rrd_maximum></t_rrd_maximum>
+ <rrd_maximum>10000</rrd_maximum>
+ <t_data_source_type_id></t_data_source_type_id>
+ <data_source_type_id>2</data_source_type_id>
+ <t_rrd_heartbeat></t_rrd_heartbeat>
+ <rrd_heartbeat>600</rrd_heartbeat>
+ <t_data_input_field_id></t_data_input_field_id>
+ <data_input_field_id>0</data_input_field_id>
+ </hash_080013f9c76e05d0a87b2d32f9a5b014e17aab>
+ </items>
+ <data>
+ <item_000>
+ <data_input_field_id>hash_070013c1f36ee60c3dc98945556d57f26e475b</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_000>
+ <item_001>
+ <data_input_field_id>hash_070013e6deda7be0f391399c5130e7c4a48b28</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_001>
+ <item_002>
+ <data_input_field_id>hash_070013cbbe5c1ddfb264a6e5d509ce1c78c95f</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_002>
+ <item_003>
+ <data_input_field_id>hash_0700136027a919c7c7731fbe095b6f53ab127b</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_003>
+ <item_004>
+ <data_input_field_id>hash_070013b5c23f246559df38662c255f4aa21d6b</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_004>
+ <item_005>
+ <data_input_field_id>hash_0700131cc1493a6781af2c478fa4de971531cf</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_005>
+ <item_006>
+ <data_input_field_id>hash_070013f4facc5e2ca7ebee621f09bc6d9fc792</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_006>
+ <item_007>
+ <data_input_field_id>hash_070013acb449d1451e8a2a655c2c99d31142c7</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_007>
+ <item_008>
+ <data_input_field_id>hash_070013617cdc8a230615e59f06f361ef6e7728</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_008>
+ </data>
+ </hash_0100139f985697a7530256b4e35c95ef03db20>
+ <hash_010013a88327df77ea19e333ddd96096c34751>
+ <name>HAProxy Frontend Traffic Stats</name>
+ <ds>
+ <t_name></t_name>
+ <name>|host_description| - Haproxy - |query_fePxName| Frontend Traffic Stats</name>
+ <data_input_id>hash_030013bf566c869ac6443b0c75d1c32b5a350e</data_input_id>
+ <t_rra_id></t_rra_id>
+ <t_rrd_step></t_rrd_step>
+ <rrd_step>300</rrd_step>
+ <t_active></t_active>
+ <active>on</active>
+ <rra_items>hash_150013c21df5178e5c955013591239eb0afd46|hash_1500130d9c0af8b8acdc7807943937b3208e29|hash_1500136fc2d038fb42950138b0ce3e9874cc60|hash_150013e36f3adb9f152adfa5dc50fd2b23337e|hash_15001369b0abdb84cea4d93762fd5a5d0c2777|hash_150013e73fb797d3ab2a9b97c3ec29e9690910</rra_items>
+ </ds>
+ <items>
+ <hash_0800137db81cd58fbbbd203af0f55c15c2081a>
+ <t_data_source_name></t_data_source_name>
+ <data_source_name>BytesOut</data_source_name>
+ <t_rrd_minimum></t_rrd_minimum>
+ <rrd_minimum>0</rrd_minimum>
+ <t_rrd_maximum></t_rrd_maximum>
+ <rrd_maximum>10000000000</rrd_maximum>
+ <t_data_source_type_id></t_data_source_type_id>
+ <data_source_type_id>2</data_source_type_id>
+ <t_rrd_heartbeat></t_rrd_heartbeat>
+ <rrd_heartbeat>600</rrd_heartbeat>
+ <t_data_input_field_id></t_data_input_field_id>
+ <data_input_field_id>0</data_input_field_id>
+ </hash_0800137db81cd58fbbbd203af0f55c15c2081a>
+ <hash_08001305772980bb6de1f12223d7ec53e323c4>
+ <t_data_source_name></t_data_source_name>
+ <data_source_name>BytesIn</data_source_name>
+ <t_rrd_minimum></t_rrd_minimum>
+ <rrd_minimum>0</rrd_minimum>
+ <t_rrd_maximum></t_rrd_maximum>
+ <rrd_maximum>10000000000</rrd_maximum>
+ <t_data_source_type_id></t_data_source_type_id>
+ <data_source_type_id>2</data_source_type_id>
+ <t_rrd_heartbeat></t_rrd_heartbeat>
+ <rrd_heartbeat>600</rrd_heartbeat>
+ <t_data_input_field_id></t_data_input_field_id>
+ <data_input_field_id>0</data_input_field_id>
+ </hash_08001305772980bb6de1f12223d7ec53e323c4>
+ </items>
+ <data>
+ <item_000>
+ <data_input_field_id>hash_070013c1f36ee60c3dc98945556d57f26e475b</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_000>
+ <item_001>
+ <data_input_field_id>hash_070013e6deda7be0f391399c5130e7c4a48b28</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_001>
+ <item_002>
+ <data_input_field_id>hash_070013cbbe5c1ddfb264a6e5d509ce1c78c95f</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_002>
+ <item_003>
+ <data_input_field_id>hash_0700136027a919c7c7731fbe095b6f53ab127b</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_003>
+ <item_004>
+ <data_input_field_id>hash_070013b5c23f246559df38662c255f4aa21d6b</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_004>
+ <item_005>
+ <data_input_field_id>hash_0700131cc1493a6781af2c478fa4de971531cf</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_005>
+ <item_006>
+ <data_input_field_id>hash_070013f4facc5e2ca7ebee621f09bc6d9fc792</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_006>
+ <item_007>
+ <data_input_field_id>hash_070013acb449d1451e8a2a655c2c99d31142c7</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_007>
+ <item_008>
+ <data_input_field_id>hash_070013617cdc8a230615e59f06f361ef6e7728</data_input_field_id>
+ <t_value></t_value>
+ <value></value>
+ </item_008>
+ </data>
+ </hash_010013a88327df77ea19e333ddd96096c34751>
+ <hash_150013c21df5178e5c955013591239eb0afd46>
+ <name>Daily (5 Minute Average)</name>
+ <x_files_factor>0.5</x_files_factor>
+ <steps>1</steps>
+ <rows>600</rows>
+ <timespan>86400</timespan>
+ <cf_items>1|2|3|4</cf_items>
+ </hash_150013c21df5178e5c955013591239eb0afd46>
+ <hash_1500130d9c0af8b8acdc7807943937b3208e29>
+ <name>Weekly (30 Minute Average)</name>
+ <x_files_factor>0.5</x_files_factor>
+ <steps>6</steps>
+ <rows>700</rows>
+ <timespan>604800</timespan>
+ <cf_items>1|2|3|4</cf_items>
+ </hash_1500130d9c0af8b8acdc7807943937b3208e29>
+ <hash_1500136fc2d038fb42950138b0ce3e9874cc60>
+ <name>Monthly (2 Hour Average)</name>
+ <x_files_factor>0.5</x_files_factor>
+ <steps>24</steps>
+ <rows>775</rows>
+ <timespan>2678400</timespan>
+ <cf_items>1|2|3|4</cf_items>
+ </hash_1500136fc2d038fb42950138b0ce3e9874cc60>
+ <hash_150013e36f3adb9f152adfa5dc50fd2b23337e>
+ <name>Yearly (1 Day Average)</name>
+ <x_files_factor>0.5</x_files_factor>
+ <steps>288</steps>
+ <rows>797</rows>
+ <timespan>33053184</timespan>
+ <cf_items>1|2|3|4</cf_items>
+ </hash_150013e36f3adb9f152adfa5dc50fd2b23337e>
+ <hash_1500136399acb234c65ef56054d5a82b23bc20>
+ <name>Three days (5 minutes average)</name>
+ <x_files_factor>0.5</x_files_factor>
+ <steps>6</steps>
+ <rows>700</rows>
+ <timespan>302400</timespan>
+ <cf_items>1|2|3|4</cf_items>
+ </hash_1500136399acb234c65ef56054d5a82b23bc20>
+ <hash_150013e73fb797d3ab2a9b97c3ec29e9690910>
+ <name>Hourly (1 Minute Average)</name>
+ <x_files_factor>0.5</x_files_factor>
+ <steps>1</steps>
+ <rows>500</rows>
+ <timespan>14400</timespan>
+ <cf_items>1|3</cf_items>
+ </hash_150013e73fb797d3ab2a9b97c3ec29e9690910>
+ <hash_060013e9c43831e54eca8069317a2ce8c6f751>
+ <name>Normal</name>
+ <gprint_text>%8.2lf %s</gprint_text>
+ </hash_060013e9c43831e54eca8069317a2ce8c6f751>
+</cacti>
\ No newline at end of file
--- /dev/null
+#
+# Net-SNMP Perl plugin for HAProxy
+# Version 0.30
+#
+# Copyright 2007-2010 Krzysztof Piotr Oledzki <ole@ans.pl>
+#
+# 1. get a variable from "show stat":
+# 1.3.6.1.4.1.29385.106.1.$type.$field.$iid.$sid
+# type: 0->frontend, 1->backend, 2->server, 3->socket
+#
+# 2. get a variable from "show info":
+# 1.3.6.1.4.1.29385.106.2.$req.$varnr
+#
+# TODO:
+# - implement read timeout
+#
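+# Example queries against the two OID sub-trees above (hypothetical
+# setup: the plugin loaded into a master snmpd, SNMPv2c community
+# "public"; adjust host/community to your environment):
+#
+#   snmpwalk -v2c -c public localhost 1.3.6.1.4.1.29385.106.1
+#   snmpget  -v2c -c public localhost 1.3.6.1.4.1.29385.106.2.1.1
+#
+# The snmpget above requests req=1 (value) varnr=1 (Version), i.e. the
+# HAProxy version string from "show info".
+#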
+
+use NetSNMP::agent (':all');
+use NetSNMP::ASN qw(:all);
+use IO::Socket::UNIX;
+
+use strict;
+
+my $agent = new NetSNMP::agent('Name' => 'Haproxy');
+my $sa = "/var/run/haproxy.stat";
+
+use constant OID_HAPROXY => '1.3.6.1.4.1.29385.106';
+use constant OID_HAPROXY_STATS => OID_HAPROXY . '.1';
+use constant OID_HAPROXY_INFO => OID_HAPROXY . '.2';
+
+my $oid_stat = new NetSNMP::OID(OID_HAPROXY_STATS);
+my $oid_info = new NetSNMP::OID(OID_HAPROXY_INFO);
+
+use constant STATS_PXNAME => 0;
+use constant STATS_SVNAME => 1;
+use constant STATS_IID => 27;
+use constant STATS_SID => 28;
+use constant STATS_TYPE => 32;
+
+use constant FIELD_INDEX => 10001;
+use constant FIELD_NAME => 10002;
+
+my %info_vars = (
+ 0 => 'Name',
+ 1 => 'Version',
+ 2 => 'Release_date',
+ 3 => 'Nbproc',
+ 4 => 'Process_num',
+ 5 => 'Pid',
+ 6 => 'Uptime',
+ 7 => 'Uptime_sec',
+ 8 => 'Memmax_MB',
+ 9 => 'Ulimit-n',
+ 10 => 'Maxsock',
+ 11 => 'Maxconn',
+ 12 => 'Maxpipes',
+ 13 => 'CurrConns',
+ 14 => 'PipesUsed',
+ 15 => 'PipesFree',
+ 16 => 'Tasks',
+ 17 => 'Run_queue',
+ 18 => 'node',
+ 19 => 'description',
+);
+
+sub find_next_stat_id {
+ my($type, $field, $proxyid, $sid) = @_;
+
+ my $obj = 1 << $type;
+
+ my $np = -1;
+ my $nl = -1;
+
+ my $sock = new IO::Socket::UNIX (Peer => $sa, Type => SOCK_STREAM, Timeout => 1);
+ return 0 if !$sock; # not inside a loop; "next" here would warn "Exiting subroutine via next"
+
+ print $sock "show stat -1 $obj -1\n";
+
+ while(<$sock>) {
+ chomp;
+ my @d = split(',');
+
+ last if !$d[$field] && $field != FIELD_INDEX && $field != FIELD_NAME && /^#/;
+ next if /^#/;
+
+ next if $d[STATS_TYPE] != $type;
+
+ next if ($d[STATS_IID] < $proxyid) || ($d[STATS_IID] == $proxyid && $d[STATS_SID] <= $sid);
+
+ if ($np == -1 || $d[STATS_IID] < $np || ($d[STATS_IID] == $np && $d[STATS_SID] < $nl)) {
+ $np = $d[STATS_IID];
+ $nl = $d[STATS_SID];
+ next;
+ }
+ }
+
+ close($sock);
+
+ return 0 if ($np == -1);
+
+ return "$type.$field.$np.$nl";
+}
+
+sub haproxy_stat {
+ my($handler, $registration_info, $request_info, $requests) = @_;
+
+ for(my $request = $requests; $request; $request = $request->next()) {
+ my $oid = $request->getOID();
+
+ $oid =~ s/$oid_stat//;
+ $oid =~ s/^\.//;
+
+ my $mode = $request_info->getMode();
+
+ my($type, $field, $proxyid, $sid, $or) = split('\.', $oid, 5);
+
+ next if $type > 3 || defined($or);
+
+ if ($mode == MODE_GETNEXT) {
+
+ $type = 0 if !$type;
+ $field = 0 if !$field;
+ $proxyid = 0 if !$proxyid;
+ $sid = 0 if !$sid;
+
+ my $nextid = find_next_stat_id($type, $field, $proxyid, $sid);
+ $nextid = find_next_stat_id($type, $field+1, 0, 0) if !$nextid;
+ $nextid = find_next_stat_id($type+1, 0, 0, 0) if !$nextid;
+
+ if ($nextid) {
+ ($type, $field, $proxyid, $sid) = split('\.', $nextid);
+ $request->setOID(sprintf("%s.%s", OID_HAPROXY_STATS, $nextid));
+ $mode = MODE_GET;
+ }
+ }
+
+ if ($mode == MODE_GET) {
+ next if !defined($proxyid) || !defined($type) || !defined($sid) || !defined($field);
+
+ my $obj = 1 << $type;
+
+ my $sock = new IO::Socket::UNIX (Peer => $sa, Type => SOCK_STREAM, Timeout => 1);
+ next if !$sock;
+
+ print $sock "show stat $proxyid $obj $sid\n";
+
+ while(<$sock>) {
+ chomp;
+ my @data = split(',');
+
+ last if !defined($data[$field]) && $field != FIELD_INDEX && $field != FIELD_NAME;
+
+ if ($proxyid) {
+ next if $data[STATS_IID] ne $proxyid;
+ next if $data[STATS_SID] ne $sid;
+ next if $data[STATS_TYPE] ne $type;
+ }
+
+ if ($field == FIELD_INDEX) {
+ $request->setValue(ASN_OCTET_STR,
+ sprintf("%s.%s", $data[STATS_IID],
+ $data[STATS_SID]));
+ } elsif ($field == FIELD_NAME) {
+ $request->setValue(ASN_OCTET_STR,
+ sprintf("%s/%s", $data[STATS_PXNAME],
+ $data[STATS_SVNAME]));
+ } else {
+ $request->setValue(ASN_OCTET_STR, $data[$field]);
+ }
+
+ close($sock);
+ last;
+ }
+
+ close($sock);
+ next;
+ }
+
+ }
+}
+
+sub haproxy_info {
+ my($handler, $registration_info, $request_info, $requests) = @_;
+
+ for(my $request = $requests; $request; $request = $request->next()) {
+ my $oid = $request->getOID();
+
+ $oid =~ s/$oid_info//;
+ $oid =~ s/^\.//;
+
+ my $mode = $request_info->getMode();
+
+ my($req, $nr, $or) = split('\.', $oid, 3);
+
+ next if $req >= 2 || defined($or);
+
+ if ($mode == MODE_GETNEXT) {
+ $req = 0 if !defined($req);
+ $nr = -1 if !defined($nr);
+
+ if (!defined($info_vars{$nr+1})) {
+ $req++;
+ $nr = -1;
+ }
+
+ next if $req >= 2;
+
+ $request->setOID(sprintf("%s.%s.%s", OID_HAPROXY_INFO, $req, ++$nr));
+ $mode = MODE_GET;
+
+ }
+
+ if ($mode == MODE_GET) {
+
+ next if !defined($req) || !defined($nr);
+
+ if ($req == 0) {
+ next if !defined($info_vars{$nr});
+ $request->setValue(ASN_OCTET_STR, $info_vars{$nr});
+ next;
+ }
+
+ if ($req == 1) {
+ next if !defined($info_vars{$nr});
+
+ my $sock = new IO::Socket::UNIX (Peer => $sa, Type => SOCK_STREAM, Timeout => 1);
+ next if !$sock;
+
+ print $sock "show info\n";
+
+ while(<$sock>) {
+ chomp;
+ my ($key, $val) = /(.*):\s*(.*)/;
+
+ next if $info_vars{$nr} ne $key;
+
+ $request->setValue(ASN_OCTET_STR, $val);
+ last;
+ }
+
+ close($sock);
+ }
+ }
+ }
+}
+
+$agent->register('Haproxy stat', OID_HAPROXY_STATS, \&haproxy_stat);
+$agent->register('Haproxy info', OID_HAPROXY_INFO, \&haproxy_info);
+
--- /dev/null
+<interface>
+ <name>Haproxy - backend</name>
+ <oid_index>.1.3.6.1.4.1.29385.106.1.1.10001</oid_index>
+ <fields>
+ <beIID>
+ <name>Proxy ID</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>input</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.1.27</oid>
+ </beIID>
+ <beSID>
+ <name>Service ID</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>input</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.1.28</oid>
+ </beSID>
+ <bePxName>
+ <name>Proxy Name</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>input</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.1.0</oid>
+ </bePxName>
+ <beSvName>
+ <name>Service Name</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>input</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.1.1</oid>
+ </beSvName>
+ <beSTot>
+ <name>Total Sessions</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>output</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.1.7</oid>
+ </beSTot>
+ <beBIn>
+ <name>Bytes In</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>output</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.1.8</oid>
+ </beBIn>
+ <beBOut>
+ <name>Bytes Out</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>output</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.1.9</oid>
+ </beBOut>
+ <beEConn>
+ <name>Connection Errors</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>output</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.1.13</oid>
+ </beEConn>
+ <beEResp>
+ <name>Response Errors</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>output</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.1.14</oid>
+ </beEResp>
+ <beLBTot>
+ <name>LB Total</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>output</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.1.30</oid>
+ </beLBTot>
+ <beDReq>
+ <name>Denied Requests</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>output</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.1.10</oid>
+ </beDReq>
+ </fields>
+</interface>
--- /dev/null
+<interface>
+ <name>Haproxy - frontend</name>
+ <oid_index>.1.3.6.1.4.1.29385.106.1.0.10001</oid_index>
+ <fields>
+ <feIID>
+ <name>Proxy ID</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>input</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.0.27</oid>
+ </feIID>
+ <feSID>
+ <name>Service ID</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>input</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.0.28</oid>
+ </feSID>
+ <fePxName>
+ <name>Proxy Name</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>input</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.0.0</oid>
+ </fePxName>
+ <feSvName>
+ <name>Service Name</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>input</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.0.1</oid>
+ </feSvName>
+ <feSCur>
+ <name>Current Sessions</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>output</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.0.4</oid>
+ </feSCur>
+ <feSMax>
+ <name>Maximum Sessions</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>output</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.0.5</oid>
+ </feSMax>
+ <feSTot>
+ <name>Total Sessions</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>output</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.0.7</oid>
+ </feSTot>
+ <feEReq>
+ <name>Request Errors</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>output</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.0.12</oid>
+ </feEReq>
+ <feBIn>
+ <name>Bytes In</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>output</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.0.8</oid>
+ </feBIn>
+ <feBOut>
+ <name>Bytes Out</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>output</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.0.9</oid>
+ </feBOut>
+ <feDReq>
+ <name>Denied Requests</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>output</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.0.10</oid>
+ </feDReq>
+ </fields>
+</interface>
--- /dev/null
+<interface>
+ <name>HAProxy - socket</name>
+ <oid_index>.1.3.6.1.4.1.29385.106.1.3.10001</oid_index>
+ <fields>
+ <feseID>
+ <name>Unique Index</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>input</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.3.10001</oid>
+ </feseID>
+ <feIID>
+ <name>Proxy ID</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>input</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.3.27</oid>
+ </feIID>
+ <feSID>
+ <name>Service ID</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>input</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.3.28</oid>
+ </feSID>
+ <fePxName>
+ <name>Proxy Name</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>input</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.3.0</oid>
+ </fePxName>
+ <feSvName>
+ <name>Service Name</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>input</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.3.1</oid>
+ </feSvName>
+ <feSCur>
+ <name>Current Sessions</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>output</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.3.4</oid>
+ </feSCur>
+ <feSMax>
+ <name>Maximum Sessions</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>output</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.3.5</oid>
+ </feSMax>
+ <feSTot>
+ <name>Total Sessions</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>output</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.3.7</oid>
+ </feSTot>
+ <feEReq>
+ <name>Request Errors</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>output</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.3.12</oid>
+ </feEReq>
+ <feBIn>
+ <name>Bytes In</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>output</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.3.8</oid>
+ </feBIn>
+ <feBOut>
+ <name>Bytes Out</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>output</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.3.9</oid>
+ </feBOut>
+ <feDReq>
+ <name>Denied Requests</name>
+ <method>get</method>
+ <source>value</source>
+ <direction>output</direction>
+ <oid>.1.3.6.1.4.1.29385.106.1.3.10</oid>
+ </feDReq>
+ </fields>
+</interface>
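The `<fields>` blocks above map Cacti field names to HAProxy SNMP OIDs under the .1.3.6.1.4.1.29385.106 subtree. As an illustration only (not part of the shipped templates), a small Python sketch can extract that name-to-OID mapping from such an interface definition:

```python
import xml.etree.ElementTree as ET

# A trimmed, hypothetical subset of the interface definition above.
SNIPPET = """
<interface>
 <name>HAProxy - socket</name>
 <fields>
  <feSCur>
   <name>Current Sessions</name>
   <direction>output</direction>
   <oid>.1.3.6.1.4.1.29385.106.1.3.4</oid>
  </feSCur>
  <feSMax>
   <name>Maximum Sessions</name>
   <direction>output</direction>
   <oid>.1.3.6.1.4.1.29385.106.1.3.5</oid>
  </feSMax>
 </fields>
</interface>
"""

def field_oids(xml_text):
    """Return {field name: OID} for every field in an interface definition."""
    root = ET.fromstring(xml_text)
    return {f.findtext("name"): f.findtext("oid")
            for f in root.find("fields")}

print(field_oids(SNIPPET))
```

Each key is the human-readable field name and each value the OID Cacti would poll for it.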
--- /dev/null
+This directory includes an SELinux policy for HAProxy. It assumes
+the following file locations:
+
+ /usr/sbin/haproxy -- binary
+ /etc/haproxy/haproxy.cfg -- configuration
+ /var/run/haproxy.pid -- pid file
+ /var/run/haproxy.sock* -- stats socket
+ /var/empty/haproxy -- chroot dir
+
+To build and load it on RHEL5 you'll need the "selinux-policy-devel" package,
+and from within this directory run:
+
+ make -f /usr/share/selinux/devel/Makefile
+ sudo semodule -i haproxy.pp
+ restorecon /usr/sbin/haproxy /etc/haproxy/haproxy.cfg /var/run/haproxy.pid /var/run/haproxy.sock*
+
+
+Feedback to Jan-Frode Myklebust <janfrode@tanso.no> is much appreciated.
--- /dev/null
+# haproxy labeling policy
+# file: haproxy.fc
+/usr/sbin/haproxy -- gen_context(system_u:object_r:haproxy_exec_t, s0)
+/etc/haproxy/haproxy\.cfg -- gen_context(system_u:object_r:haproxy_conf_t, s0)
+/var/run/haproxy\.pid -- gen_context(system_u:object_r:haproxy_var_run_t, s0)
+/var/run/haproxy\.sock(.*) -- gen_context(system_u:object_r:haproxy_var_run_t, s0)
--- /dev/null
+## <summary>selinux policy module for haproxy</summary>
+
--- /dev/null
+policy_module(haproxy,1.0.0)
+
+########################################
+#
+# Declarations
+#
+
+type haproxy_t;
+type haproxy_exec_t;
+type haproxy_port_t;
+init_daemon_domain(haproxy_t, haproxy_exec_t)
+
+type haproxy_var_run_t;
+files_pid_file(haproxy_var_run_t)
+
+type haproxy_conf_t;
+files_config_file(haproxy_conf_t)
+
+########################################
+#
+# Local policy
+#
+
+# Configuration files - read
+allow haproxy_t haproxy_conf_t : dir list_dir_perms;
+allow haproxy_t haproxy_conf_t : file read_file_perms;
+allow haproxy_t haproxy_conf_t : lnk_file read_file_perms;
+
+# PID and socket file - create, read, and write
+files_pid_filetrans(haproxy_t, haproxy_var_run_t, { file sock_file })
+allow haproxy_t haproxy_var_run_t:file manage_file_perms;
+allow haproxy_t haproxy_var_run_t:sock_file { create rename link setattr unlink };
+
+allow haproxy_t self : tcp_socket create_stream_socket_perms;
+allow haproxy_t self: udp_socket create_socket_perms;
+allow haproxy_t self: capability { setgid setuid sys_chroot sys_resource kill };
+allow haproxy_t self: process { setrlimit signal };
+
+
+logging_send_syslog_msg(haproxy_t)
+
+corenet_tcp_bind_all_ports(haproxy_t)
+corenet_tcp_connect_all_ports(haproxy_t)
+corenet_tcp_bind_all_nodes(haproxy_t)
+corenet_tcp_sendrecv_all_ports(haproxy_t)
+corenet_tcp_recvfrom_unlabeled(haproxy_t)
+
+# use shared libraries
+libs_use_ld_so(haproxy_t)
+libs_use_shared_libs(haproxy_t)
+
+# Read /etc/localtime:
+miscfiles_read_localization(haproxy_t)
+# Read /etc/passwd and more.
+files_read_etc_files(haproxy_t)
+
+# RHEL5 specific:
+require {
+ type unlabeled_t;
+ type haproxy_t;
+ class packet send;
+ class packet recv;
+}
+
+allow haproxy_t unlabeled_t:packet { send recv };
+
--- /dev/null
+PREFIX = /usr/local
+SBINDIR = $(PREFIX)/sbin
+
+haproxy.service: haproxy.service.in
+ sed -e 's:@SBINDIR@:'$(strip $(SBINDIR))':' $< > $@
+
+clean:
+ rm -f haproxy.service
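The haproxy.service rule above expands the @SBINDIR@ placeholder with sed. A stand-alone sketch of the same substitution, using a hypothetical one-line template:

```shell
# Create a minimal stand-in for haproxy.service.in (hypothetical template).
printf 'ExecStart=@SBINDIR@/haproxy-systemd-wrapper\n' > haproxy.service.in

# Same substitution the Makefile performs, with SBINDIR=/usr/local/sbin.
SBINDIR=/usr/local/sbin
sed -e 's:@SBINDIR@:'"$SBINDIR"':' haproxy.service.in > haproxy.service
cat haproxy.service
# prints: ExecStart=/usr/local/sbin/haproxy-systemd-wrapper
```

Using ':' as the sed delimiter avoids having to escape the slashes in the path.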
--- /dev/null
+[Unit]
+Description=HAProxy Load Balancer
+After=network.target
+
+[Service]
+ExecStartPre=@SBINDIR@/haproxy -f /etc/haproxy/haproxy.cfg -c -q
+ExecStart=@SBINDIR@/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
+ExecReload=/bin/kill -USR2 $MAINPID
+KillMode=mixed
+Restart=always
+
+[Install]
+WantedBy=multi-user.target
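Local changes to these settings (for example, a different configuration path) are best made through a systemd drop-in rather than by editing the shipped unit. A hypothetical sketch (the path and config file name are examples; drop-in directories require systemd >= 198):

```ini
# /etc/systemd/system/haproxy.service.d/override.conf (hypothetical)
[Service]
# An empty ExecStart= clears the inherited value before redefining it.
ExecStart=
ExecStart=/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/local.cfg -p /run/haproxy.pid
```

Run `systemctl daemon-reload` afterwards so systemd picks up the override.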
--- /dev/null
+#!/bin/sh
+#
+# trace.awk - Fast trace symbol resolver - w@1wt.eu - 2012/05/25
+#
+# Principle: this program reads pointers from a trace file and, if they are
+# not found in its cache, passes them over a pipe to addr2line, which is
+# forked as a coprocess, then stores the result in the cache.
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version
+# 2 of the License, or (at your option) any later version.
+#
+# usage: $0 exec_file < trace.out
+#
+
+if [ $# -lt 1 ]; then
+ echo "Usage: ${0##*/} exec_file < trace.out"
+ echo "Example: ${0##*/} ./haproxy < trace.out"
+ echo "Example: HAPROXY_TRACE=/dev/stdout ./haproxy -f cfg | ${0##*/} ./haproxy"
+ exit 1
+fi
+
+if [ ! -s "$1" ]; then
+ echo "$1 is not a valid executable file"
+ exit 1
+fi
+
+exec awk -v prog="$1" \
+'
+BEGIN {
+ if (cmd == "")
+ cmd=ENVIRON["ADDR2LINE"];
+ if (cmd == "")
+ cmd="addr2line";
+
+ if (prog == "")
+ prog=ENVIRON["PROG"];
+
+ cmd=cmd " -f -e " prog;
+
+ for (i = 1; i < 100; i++) {
+ indents[">",i] = indents[">",i-1] "->"
+ indents[">",i-1] = indents[">",i-1] " "
+ indents["<",i] = indents["<",i-1] " "
+ indents["<",i-1] = indents["<",i-1] " "
+ }
+}
+
+function getptr(ptr)
+{
+ loc=locs[ptr];
+ name=names[ptr];
+ if (loc == "" || name == "") {
+ print ptr |& cmd;
+ cmd |& getline name;
+ cmd |& getline loc;
+ names[ptr]=name
+ locs[ptr]=loc
+ }
+}
+
+{
+ # input format: <timestamp> <level> <caller> <dir> <callee>
+ getptr($3); caller_loc=loc; caller_name=name
+ getptr($5); callee_loc=loc; callee_name=name
+ printf "%s %s %s %s %s [%s:%s] %s [%s:%s]\n",
+ $1, indents[$4,$2], caller_name, $4, callee_name, caller_loc, $3, $4, callee_loc, $5
+}
+'
--- /dev/null
+haproxy (1.6.3-1) unstable; urgency=medium
+
+ [ Apollon Oikonomopoulos ]
+ * haproxy.init: use s-s-d's --pidfile option.
+    Thanks to Louis Bouchard (Closes: #804530)
+
+ [ Vincent Bernat ]
+ * watch: fix d/watch to look for 1.6 version
+ * Imported Upstream version 1.6.3
+
+ -- Vincent Bernat <bernat@debian.org> Thu, 31 Dec 2015 08:10:10 +0100
+
+haproxy (1.6.2-2) unstable; urgency=medium
+
+ * Enable USE_REGPARM on amd64 as well.
+
+ -- Vincent Bernat <bernat@debian.org> Tue, 03 Nov 2015 21:21:30 +0100
+
+haproxy (1.6.2-1) unstable; urgency=medium
+
+ * New upstream release.
+ - BUG/MAJOR: dns: first DNS response packet not matching queried
+ hostname may lead to a loop
+ - BUG/MAJOR: http: don't requeue an idle connection that is already
+ queued
+ * Upload to unstable.
+
+ -- Vincent Bernat <bernat@debian.org> Tue, 03 Nov 2015 13:36:22 +0100
+
+haproxy (1.6.1-2) experimental; urgency=medium
+
+ * Build the Lua manpage in -arch, fixes FTBFS in binary-only builds.
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Thu, 22 Oct 2015 12:19:41 +0300
+
+haproxy (1.6.1-1) experimental; urgency=medium
+
+ [ Vincent Bernat ]
+ * New upstream release.
+ - BUG/MAJOR: ssl: free the generated SSL_CTX if the LRU cache is
+ disabled
+ * Drop 0001-BUILD-install-only-relevant-and-existing-documentati.patch.
+
+ [ Apollon Oikonomopoulos ]
+ * Ship and generate Lua API documentation.
+
+ -- Vincent Bernat <bernat@debian.org> Thu, 22 Oct 2015 10:45:55 +0200
+
+haproxy (1.6.0+ds1-1) experimental; urgency=medium
+
+ * New upstream release!
+ * Add a patch to fix documentation installation:
+ + 0001-BUILD-install-only-relevant-and-existing-documentati.patch
+ * Update HAProxy documentation converter to a more recent version.
+
+ -- Vincent Bernat <bernat@debian.org> Wed, 14 Oct 2015 17:29:19 +0200
+
+haproxy (1.6~dev7-1) experimental; urgency=medium
+
+ * New upstream release.
+
+ -- Vincent Bernat <bernat@debian.org> Tue, 06 Oct 2015 16:01:26 +0200
+
+haproxy (1.6~dev5-1) experimental; urgency=medium
+
+ * New upstream release.
+
+ -- Vincent Bernat <bernat@debian.org> Mon, 14 Sep 2015 15:50:28 +0200
+
+haproxy (1.6~dev4-1) experimental; urgency=medium
+
+ * New upstream release.
+ * Refresh debian/copyright.
+
+ -- Vincent Bernat <bernat@debian.org> Sun, 30 Aug 2015 23:54:10 +0200
+
+haproxy (1.6~dev3-1) experimental; urgency=medium
+
+ * New upstream release.
+ * Enable Lua support.
+
+ -- Vincent Bernat <bernat@debian.org> Sat, 15 Aug 2015 17:51:29 +0200
+
+haproxy (1.5.15-1) unstable; urgency=medium
+
+ * New upstream stable release including the following fix:
+ - BUG/MAJOR: http: don't call http_send_name_header() after an error
+
+ -- Vincent Bernat <bernat@debian.org> Mon, 02 Nov 2015 07:34:19 +0100
+
+haproxy (1.5.14-1) unstable; urgency=high
+
+ * New upstream version. Fix an information leak (CVE-2015-3281):
+ - BUG/MAJOR: buffers: make the buffer_slow_realign() function
+ respect output data.
+ * Add $named as a dependency for init script. Closes: #790638.
+
+ -- Vincent Bernat <bernat@debian.org> Fri, 03 Jul 2015 19:49:02 +0200
+
+haproxy (1.5.13-1) unstable; urgency=medium
+
+ * New upstream stable release including the following fixes:
+ - MAJOR: peers: allow peers section to be used with nbproc > 1
+ - BUG/MAJOR: checks: always check for end of list before proceeding
+ - MEDIUM: ssl: replace standards DH groups with custom ones
+ - BUG/MEDIUM: ssl: fix tune.ssl.default-dh-param value being overwritten
+ - BUG/MEDIUM: cfgparse: segfault when userlist is misused
+ - BUG/MEDIUM: stats: properly initialize the scope before dumping stats
+ - BUG/MEDIUM: http: don't forward client shutdown without NOLINGER
+ except for tunnels
+ - BUG/MEDIUM: checks: do not dereference head of a tcp-check at the end
+ - BUG/MEDIUM: checks: do not dereference a list as a tcpcheck struct
+ - BUG/MEDIUM: peers: apply a random reconnection timeout
+ - BUG/MEDIUM: config: properly compute the default number of processes
+ for a proxy
+
+ -- Vincent Bernat <bernat@debian.org> Sat, 27 Jun 2015 20:52:07 +0200
+
+haproxy (1.5.12-1) unstable; urgency=medium
+
+ * New upstream stable release including the following fixes:
+ - BUG/MAJOR: http: don't read past buffer's end in http_replace_value
+ - BUG/MAJOR: http: prevent risk of reading past end with balance
+ url_param
+ - BUG/MEDIUM: Do not consider an agent check as failed on L7 error
+ - BUG/MEDIUM: patern: some entries are not deleted with case
+ insensitive match
+ - BUG/MEDIUM: buffer: one byte miss in buffer free space check
+      - BUG/MEDIUM: http: the function "(req|res)-replace-value" doesn't
+ respect the HTTP syntax
+ - BUG/MEDIUM: peers: correctly configure the client timeout
+ - BUG/MEDIUM: http: hdr_cnt would not count any header when called
+ without name
+ - BUG/MEDIUM: listener: don't report an error when resuming unbound
+ listeners
+ - BUG/MEDIUM: init: don't limit cpu-map to the first 32 processes only
+ - BUG/MEDIUM: stream-int: always reset si->ops when si->end is
+ nullified
+ - BUG/MEDIUM: http: remove content-length from chunked messages
+ - BUG/MEDIUM: http: do not restrict parsing of transfer-encoding to
+ HTTP/1.1
+ - BUG/MEDIUM: http: incorrect transfer-coding in the request is a bad
+ request
+ - BUG/MEDIUM: http: remove content-length form responses with bad
+ transfer-encoding
+ - BUG/MEDIUM: http: wait for the exact amount of body bytes in
+ wait_for_request_body
+
+ -- Vincent Bernat <bernat@debian.org> Sat, 02 May 2015 16:38:28 +0200
+
+haproxy (1.5.11-2) unstable; urgency=medium
+
+ * Upload to unstable.
+
+ -- Vincent Bernat <bernat@debian.org> Sun, 26 Apr 2015 17:46:58 +0200
+
+haproxy (1.5.11-1) experimental; urgency=medium
+
+ * New upstream stable release including the following fixes:
+ - BUG/MAJOR: log: don't try to emit a log if no logger is set
+ - BUG/MEDIUM: backend: correctly detect the domain when
+ use_domain_only is used
+ - BUG/MEDIUM: Do not set agent health to zero if server is disabled
+ in config
+ - BUG/MEDIUM: Only explicitly report "DOWN (agent)" if the agent health
+ is zero
+ - BUG/MEDIUM: http: fix header removal when previous header ends with
+ pure LF
+ - BUG/MEDIUM: channel: fix possible integer overflow on reserved size
+ computation
+ - BUG/MEDIUM: channel: don't schedule data in transit for leaving until
+ connected
+ - BUG/MEDIUM: http: make http-request set-header compute the string
+ before removal
+ * Upload to experimental.
+
+ -- Vincent Bernat <bernat@debian.org> Sun, 01 Feb 2015 09:22:27 +0100
+
+haproxy (1.5.10-1) experimental; urgency=medium
+
+ * New upstream stable release including the following fixes:
+ - BUG/MAJOR: stream-int: properly check the memory allocation return
+ - BUG/MEDIUM: sample: fix random number upper-bound
+ - BUG/MEDIUM: patterns: previous fix was incomplete
+ - BUG/MEDIUM: payload: ensure that a request channel is available
+ - BUG/MEDIUM: tcp-check: don't rely on random memory contents
+ - BUG/MEDIUM: tcp-checks: disable quick-ack unless next rule is an expect
+ - BUG/MEDIUM: config: do not propagate processes between stopped
+ processes
+ - BUG/MEDIUM: memory: fix freeing logic in pool_gc2()
+ - BUG/MEDIUM: compression: correctly report zlib_mem
+ * Upload to experimental.
+
+ -- Vincent Bernat <bernat@debian.org> Sun, 04 Jan 2015 13:17:56 +0100
+
+haproxy (1.5.9-1) experimental; urgency=medium
+
+ * New upstream stable release including the following fixes:
+ - BUG/MAJOR: sessions: unlink session from list on out
+ of memory
+ - BUG/MEDIUM: pattern: don't load more than once a pattern
+ list.
+ - BUG/MEDIUM: connection: sanitize PPv2 header length before
+ parsing address information
+ - BUG/MAJOR: frontend: initialize capture pointers earlier
+ - BUG/MEDIUM: checks: fix conflicts between agent checks and
+ ssl healthchecks
+ - BUG/MEDIUM: ssl: force a full GC in case of memory shortage
+ - BUG/MEDIUM: ssl: fix bad ssl context init can cause
+ segfault in case of OOM.
+ * Upload to experimental.
+
+ -- Vincent Bernat <bernat@debian.org> Sun, 07 Dec 2014 16:37:36 +0100
+
+haproxy (1.5.8-3) unstable; urgency=medium
+
+ * Remove RC4 from the default cipher string shipped in configuration.
+
+ -- Vincent Bernat <bernat@debian.org> Fri, 27 Feb 2015 11:29:23 +0100
+
+haproxy (1.5.8-2) unstable; urgency=medium
+
+ * Cherry-pick the following patches from 1.5.9 release:
+ - 8a0b93bde77e BUG/MAJOR: sessions: unlink session from list on out
+ of memory
+ - bae03eaad40a BUG/MEDIUM: pattern: don't load more than once a pattern
+ list.
+ - 93637b6e8503 BUG/MEDIUM: connection: sanitize PPv2 header length before
+ parsing address information
+ - 8ba50128832b BUG/MAJOR: frontend: initialize capture pointers earlier
+ - 1f96a87c4e14 BUG/MEDIUM: checks: fix conflicts between agent checks and
+ ssl healthchecks
+ - 9bcc01ae2598 BUG/MEDIUM: ssl: force a full GC in case of memory shortage
+ - 909514970089 BUG/MEDIUM: ssl: fix bad ssl context init can cause
+ segfault in case of OOM.
+ * Cherry-pick the following patches from future 1.5.10 release:
+ - 1e89acb6be9b BUG/MEDIUM: payload: ensure that a request channel is
+ available
+ - bad3c6f1b6d7 BUG/MEDIUM: patterns: previous fix was incomplete
+
+ -- Vincent Bernat <bernat@debian.org> Sun, 07 Dec 2014 11:11:21 +0100
+
+haproxy (1.5.8-1) unstable; urgency=medium
+
+ * New upstream stable release including the following fixes:
+
+ + BUG/MAJOR: buffer: check the space left is enough or not when input
+ data in a buffer is wrapped
+ + BUG/MINOR: ssl: correctly initialize ssl ctx for invalid certificates
+ + BUG/MEDIUM: tcp: don't use SO_ORIGINAL_DST on non-AF_INET sockets
+ + BUG/MEDIUM: regex: fix pcre_study error handling
+ + BUG/MEDIUM: tcp: fix outgoing polling based on proxy protocol
+ + BUG/MINOR: log: fix request flags when keep-alive is enabled
+ + BUG/MAJOR: cli: explicitly call cli_release_handler() upon error
+ + BUG/MEDIUM: http: don't dump debug headers on MSG_ERROR
+ * Also includes the following new features:
+ + MINOR: ssl: add statement to force some ssl options in global.
+ + MINOR: ssl: add fetchs 'ssl_c_der' and 'ssl_f_der' to return DER
+ formatted certs
+ * Disable SSLv3 in the default configuration file.
+
+ -- Vincent Bernat <bernat@debian.org> Fri, 31 Oct 2014 13:48:19 +0100
+
+haproxy (1.5.6-1) unstable; urgency=medium
+
+ * New upstream stable release including the following fixes:
+ + BUG/MEDIUM: systemd: set KillMode to 'mixed'
+ + MINOR: systemd: Check configuration before start
+ + BUG/MEDIUM: config: avoid skipping disabled proxies
+ + BUG/MINOR: config: do not accept more track-sc than configured
+ + BUG/MEDIUM: backend: fix URI hash when a query string is present
+ * Drop systemd patches:
+ + haproxy.service-also-check-on-start.patch
+ + haproxy.service-set-killmode-to-mixed.patch
+ * Refresh other patches.
+
+ -- Vincent Bernat <bernat@debian.org> Mon, 20 Oct 2014 18:10:21 +0200
+
+haproxy (1.5.5-1) unstable; urgency=medium
+
+ [ Vincent Bernat ]
+ * initscript: use start-stop-daemon to reliably terminate all haproxy
+ processes. Also treat stopping a non-running haproxy as success.
+ (Closes: #762608, LP: #1038139)
+
+ [ Apollon Oikonomopoulos ]
+ * New upstream stable release including the following fixes:
+ + DOC: Address issue where documentation is excluded due to a gitignore
+ rule.
+ + MEDIUM: Improve signal handling in systemd wrapper.
+ + BUG/MINOR: config: don't propagate process binding for dynamic
+ use_backend
+ + MINOR: Also accept SIGHUP/SIGTERM in systemd-wrapper
+ + DOC: clearly state that the "show sess" output format is not fixed
+ + MINOR: stats: fix minor typo fix in stats_dump_errors_to_buffer()
+ + DOC: indicate in the doc that track-sc* can wait if data are missing
+ + MEDIUM: http: enable header manipulation for 101 responses
+ + BUG/MEDIUM: config: propagate frontend to backend process binding again.
+ + MEDIUM: config: properly propagate process binding between proxies
+ + MEDIUM: config: make the frontends automatically bind to the listeners'
+ processes
+ + MEDIUM: config: compute the exact bind-process before listener's
+ maxaccept
+ + MEDIUM: config: only warn if stats are attached to multi-process bind
+ directives
+ + MEDIUM: config: report it when tcp-request rules are misplaced
+ + MINOR: config: detect the case where a tcp-request content rule has no
+ inspect-delay
+ + MEDIUM: systemd-wrapper: support multiple executable versions and names
+ + BUG/MEDIUM: remove debugging code from systemd-wrapper
+ + BUG/MEDIUM: http: adjust close mode when switching to backend
+ + BUG/MINOR: config: don't propagate process binding on fatal errors.
+ + BUG/MEDIUM: check: rule-less tcp-check must detect connect failures
+ + BUG/MINOR: tcp-check: report the correct failed step in the status
+ + DOC: indicate that weight zero is reported as DRAIN
+ * Add a new patch (haproxy.service-set-killmode-to-mixed.patch) to fix the
+ systemctl stop action conflicting with the systemd wrapper now catching
+ SIGTERM.
+ * Bump standards to 3.9.6; no changes needed.
+ * haproxy-doc: link to tracker.debian.org instead of packages.qa.debian.org.
+ * d/copyright: move debian/dconv/* paragraph after debian/*, so that it
+ actually matches the files it is supposed to.
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Wed, 08 Oct 2014 12:34:53 +0300
+
+haproxy (1.5.4-1) unstable; urgency=high
+
+ * New upstream version.
+ + Fix a critical bug that, under certain unlikely conditions, allows a
+ client to crash haproxy.
+ * Prefix rsyslog configuration file to ensure to log only to
+ /var/log/haproxy. Thanks to Paul Bourke for the patch.
+
+ -- Vincent Bernat <bernat@debian.org> Tue, 02 Sep 2014 19:14:38 +0200
+
+haproxy (1.5.3-1) unstable; urgency=medium
+
+ * New upstream stable release, fixing the following issues:
+ + Memory corruption when building a proxy protocol v2 header
+ + Memory leak in SSL DHE key exchange
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Fri, 25 Jul 2014 10:41:36 +0300
+
+haproxy (1.5.2-1) unstable; urgency=medium
+
+ * New upstream stable release. Important fixes:
+ + A few sample fetch functions when combined in certain ways would return
+ malformed results, possibly crashing the HAProxy process.
+ + Hash-based load balancing and http-send-name-header would fail for
+ requests which contain a body which starts to be forwarded before the
+ data is used.
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Mon, 14 Jul 2014 00:42:32 +0300
+
+haproxy (1.5.1-1) unstable; urgency=medium
+
+ * New upstream stable release:
+ + Fix a file descriptor leak for clients that disappear before connecting.
+ + Do not staple expired OCSP responses.
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Tue, 24 Jun 2014 12:56:30 +0300
+
+haproxy (1.5.0-1) unstable; urgency=medium
+
+ * New upstream stable series. Notable changes since the 1.4 series:
+ + Native SSL support on both sides with SNI/NPN/ALPN and OCSP stapling.
+ + IPv6 and UNIX sockets are supported everywhere
+ + End-to-end HTTP keep-alive for better support of NTLM and improved
+ efficiency in static farms
+ + HTTP/1.1 response compression (deflate, gzip) to save bandwidth
+ + PROXY protocol versions 1 and 2 on both sides
+ + Data sampling on everything in request or response, including payload
+ + ACLs can use any matching method with any input sample
+ + Maps and dynamic ACLs updatable from the CLI
+ + Stick-tables support counters to track activity on any input sample
+ + Custom format for logs, unique-id, header rewriting, and redirects
+ + Improved health checks (SSL, scripted TCP, check agent, ...)
+ + Much more scalable configuration supports hundreds of thousands of
+ backends and certificates without sweating
+
+ * Upload to unstable, merge all 1.5 work from experimental. Most important
+ packaging changes since 1.4.25-1 include:
+ + systemd support.
+ + A more sane default config file.
+ + Zero-downtime upgrades between 1.5 releases by gracefully reloading
+ HAProxy during upgrades.
+ + HTML documentation shipped in the haproxy-doc package.
+ + kqueue support for kfreebsd.
+
+ * Packaging changes since 1.5~dev26-2:
+ + Drop patches merged upstream:
+ o Fix-reference-location-in-manpage.patch
+ o 0001-BUILD-stats-workaround-stupid-and-bogus-Werror-forma.patch
+ + d/watch: look for stable 1.5 releases
+ + systemd: respect CONFIG and EXTRAOPTS when specified in
+ /etc/default/haproxy.
+ + initscript: test the configuration before start or reload.
+ + initscript: remove the ENABLED flag and logic.
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Fri, 20 Jun 2014 11:05:17 +0300
+
+haproxy (1.5~dev26-2) experimental; urgency=medium
+
+ * initscript: start should not fail when haproxy is already running
+ + Fixes upgrades from post-1.5~dev24-1 installations
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Wed, 04 Jun 2014 13:20:39 +0300
+
+haproxy (1.5~dev26-1) experimental; urgency=medium
+
+ * New upstream development version.
+ + Add a patch to fix compilation with -Werror=format-security
+
+ -- Vincent Bernat <bernat@debian.org> Wed, 28 May 2014 20:32:10 +0200
+
+haproxy (1.5~dev25-1) experimental; urgency=medium
+
+ [ Vincent Bernat ]
+ * New upstream development version.
+ * Rename "contimeout", "clitimeout" and "srvtimeout" in the default
+ configuration file to "timeout connection", "timeout client" and
+ "timeout server".
+
+ [ Apollon Oikonomopoulos ]
+ * Build on kfreebsd using the "freebsd" target; enables kqueue support.
+
+ -- Vincent Bernat <bernat@debian.org> Thu, 15 May 2014 00:20:11 +0200
+
+haproxy (1.5~dev24-2) experimental; urgency=medium
+
+ * New binary package: haproxy-doc
+ + Contains the HTML documentation built using a version of Cyril Bonté's
+ haproxy-dconv (https://github.com/cbonte/haproxy-dconv).
+ + Add Build-Depends-Indep on python and python-mako
+ + haproxy Suggests: haproxy-doc
+ * systemd: check config file for validity on reload.
+ * haproxy.cfg:
+ + Enable the stats socket by default and bind it to
+ /run/haproxy/admin.sock, which is accessible by the haproxy group.
+ /run/haproxy creation is handled by the initscript for sysv-rc and a
+ tmpfiles.d config for systemd.
+ + Set the default locations for CA and server certificates to
+ /etc/ssl/certs and /etc/ssl/private respectively.
+ + Set the default cipher list to be used on listening SSL sockets to
+ enable PFS, preferring ECDHE ciphers by default.
+ * Gracefully reload HAProxy on upgrade instead of performing a full restart.
+ * debian/rules: split build into binary-arch and binary-indep.
+ * Build-depend on debhelper >= 9, set compat to 9.
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Sun, 27 Apr 2014 13:37:17 +0300
+
+haproxy (1.5~dev24-1) experimental; urgency=medium
+
+ * New upstream development version, fixes major regressions introduced in
+ 1.5~dev23:
+
+ + Forwarding of a message body (request or response) would automatically
+ stop after the transfer timeout strikes, and with no error.
+ + Redirects failed to update the msg->next offset after consuming the
+ request, so if they were made with keep-alive enabled and starting with
+ a slash (relative location), then the buffer was shifted by a negative
+ amount of data, causing a crash.
+ + The code to standardize DH parameters caused an important performance
+      regression, so it was temporarily reverted for the time needed to
+ understand the cause and to fix it.
+
+ For a complete release announcement, including other bugfixes and feature
+ enhancements, see http://deb.li/yBVA.
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Sun, 27 Apr 2014 11:09:37 +0300
+
+haproxy (1.5~dev23-1) experimental; urgency=medium
+
+ * New upstream development version; notable changes since 1.5~dev22:
+    + SSL record size optimizations to speed up both small and large
+ transfers.
+ + Dynamic backend name support in use_backend.
+ + Compressed chunked transfer encoding support.
+ + Dynamic ACL manipulation via the CLI.
+ + New "language" converter for extracting language preferences from
+ Accept-Language headers.
+ * Remove halog source and systemd unit files from
+ /usr/share/doc/haproxy/contrib, they are built and shipped in their
+ appropriate locations since 1.5~dev19-2.
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Wed, 23 Apr 2014 11:12:34 +0300
+
+haproxy (1.5~dev22-1) experimental; urgency=medium
+
+ * New upstream development version
+ * watch: use the source page and not the main one
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Mon, 03 Feb 2014 17:45:51 +0200
+
+haproxy (1.5~dev21+20140118-1) experimental; urgency=medium
+
+ * New upstream development snapshot, with the following fixes since
+ 1.5-dev21:
+ + 00b0fb9 BUG/MAJOR: ssl: fix breakage caused by recent fix abf08d9
+ + 410f810 BUG/MEDIUM: map: segmentation fault with the stats's socket
+ command "set map ..."
+ + abf08d9 BUG/MAJOR: connection: fix mismatch between rcv_buf's API and
+ usage
+ + 35249cb BUG/MINOR: pattern: pattern comparison executed twice
+ + c920096 BUG/MINOR: http: don't clear the SI_FL_DONT_WAKE flag between
+ requests
+ + b800623 BUG/MEDIUM: stats: fix HTTP/1.0 breakage introduced in previous
+ patch
+ + 61f7f0a BUG/MINOR: stream-int: do not clear the owner upon unregister
+ + 983eb31 BUG/MINOR: channel: CHN_INFINITE_FORWARD must be unsigned
+ + a3ae932 BUG/MEDIUM: stats: the web interface must check the tracked
+ servers before enabling
+ + e24d963 BUG/MEDIUM: checks: unchecked servers could not be enabled
+ anymore
+ + 7257550 BUG/MINOR: http: always disable compression on HTTP/1.0
+ + 9f708ab BUG/MINOR: checks: successful check completion must not
+ re-enable MAINT servers
+ + ff605db BUG/MEDIUM: backend: do not re-initialize the connection's
+ context upon reuse
+ + ea90063 BUG/MEDIUM: stream-int: fix the keep-alive idle connection
+ handler
+ * Update debian/copyright to reflect the license of ebtree/
+ (closes: #732614)
+ * Synchronize debian/copyright with source
+ * Add Documentation field to the systemd unit file
+
+ -- Apollon Oikonomopoulos <apoikos@debian.org> Mon, 20 Jan 2014 10:07:34 +0200
+
+haproxy (1.5~dev21-1) experimental; urgency=low
+
+ [ Prach Pongpanich ]
+ * Bump Standards-Version to 3.9.5
+
+ [ Thomas Bechtold ]
+ * debian/control: Add haproxy-dbg binary package for debug symbols.
+
+ [ Apollon Oikonomopoulos ]
+ * New upstream development version.
+ * Require syslog to be operational before starting. Closes: #726323.
+
+ -- Vincent Bernat <bernat@debian.org> Tue, 17 Dec 2013 01:38:04 +0700
+
+haproxy (1.5~dev19-2) experimental; urgency=low
+
+ [ Vincent Bernat ]
+ * Really enable systemd support by using dh-systemd helper.
+ * Don't use -L/usr/lib and rely on default search path. Closes: #722777.
+
+ [ Apollon Oikonomopoulos ]
+ * Ship halog.
+
+ -- Vincent Bernat <bernat@debian.org> Thu, 12 Sep 2013 21:58:05 +0200
+
+haproxy (1.5~dev19-1) experimental; urgency=high
+
+ [ Vincent Bernat ]
+ * New upstream version.
+ + CVE-2013-2175: fix a possible crash when using negative header
+ occurrences.
+ + Drop 0002-Fix-typo-in-src-haproxy.patch: applied upstream.
+ * Enable gzip compression feature.
+
+ [ Prach Pongpanich ]
+ * Drop bashism patch. It seems useless to maintain a patch to convert
+ example scripts from /bin/bash to /bin/sh.
+ * Fix reload/restart action of init script (LP: #1187469)
+
+ -- Vincent Bernat <bernat@debian.org> Mon, 17 Jun 2013 22:03:58 +0200
+
+haproxy (1.5~dev18-1) experimental; urgency=low
+
+ [ Apollon Oikonomopoulos ]
+ * New upstream development version
+
+ [ Vincent Bernat ]
+ * Add support for systemd. Currently, /etc/default/haproxy is not used
+ when using systemd.
+
+ -- Vincent Bernat <bernat@debian.org> Sun, 26 May 2013 12:33:00 +0200
+
+haproxy (1.4.25-1) unstable; urgency=medium
+
+ [ Prach Pongpanich ]
+ * New upstream version.
+ * Update watch file to use the source page.
+ * Bump Standards-Version to 3.9.5.
+
+ [ Thomas Bechtold ]
+ * debian/control: Add haproxy-dbg binary package for debug symbols.
+
+ [ Apollon Oikonomopoulos ]
+ * Require syslog to be operational before starting. Closes: #726323.
+ * Document how to bind non-local IPv6 addresses.
+ * Add a reference to configuration.txt.gz to the manpage.
+ * debian/copyright: synchronize with source.
+
+ -- Prach Pongpanich <prachpub@gmail.com> Fri, 28 Mar 2014 09:35:09 +0700
+
+haproxy (1.4.24-2) unstable; urgency=low
+
+ [ Apollon Oikonomopoulos ]
+ * Ship contrib/halog as /usr/bin/halog.
+
+ [ Vincent Bernat ]
+ * Don't use -L/usr/lib and rely on default search path. Closes: #722777.
+
+ -- Vincent Bernat <bernat@debian.org> Sun, 15 Sep 2013 14:36:27 +0200
+
+haproxy (1.4.24-1) unstable; urgency=high
+
+ [ Vincent Bernat ]
+ * New upstream version.
+ + CVE-2013-2175: fix a possible crash when using negative header
+ occurrences.
+
+ [ Prach Pongpanich ]
+ * Drop bashism patch. It seems useless to maintain a patch to convert
+ example scripts from /bin/bash to /bin/sh.
+ * Fix reload/restart action of init script (LP: #1187469).
+
+ -- Vincent Bernat <bernat@debian.org> Mon, 17 Jun 2013 21:56:26 +0200
+
+haproxy (1.4.23-1) unstable; urgency=low
+
+ [ Apollon Oikonomopoulos ]
+ * New upstream version (Closes: #643650, #678953)
+ + This fixes CVE-2012-2942 (Closes: #674447)
+ + This fixes CVE-2013-1912 (Closes: #704611)
+ * Ship vim addon as vim-haproxy (Closes: #702893)
+ * Check for the configuration file after sourcing /etc/default/haproxy
+ (Closes: #641762)
+ * Use /dev/log for logging by default (Closes: #649085)
+
+ [ Vincent Bernat ]
+ * debian/control:
+ + add Vcs-* fields
+ + switch maintenance to Debian HAProxy team. (Closes: #706890)
+ + drop dependency on quilt: 3.0 (quilt) format is in use.
+ * debian/rules:
+ + don't explicitly call dh_installchangelog.
+ + use dh_installdirs to install directories.
+ + use dh_install to install error and configuration files.
+ + switch to `linux2628` Makefile target for Linux.
+ * debian/postrm:
+ + remove haproxy user and group on purge.
+ * Ship a more minimal haproxy.cfg file: no `listen` blocks, only `global`
+ and `defaults` blocks with appropriate configuration to use chroot and
+ logging in the expected way.
+
+ [ Prach Pongpanich ]
+ * debian/copyright:
+ + add missing copyright holders
+ + update years of copyright
+ * debian/rules:
+ + build with -Wl,--as-needed to get rid of unnecessary depends
+ * Remove useless files in debian/haproxy.{docs,examples}
+ * Update debian/watch file, thanks to Bart Martens
+
+ -- Vincent Bernat <bernat@debian.org> Mon, 06 May 2013 20:02:14 +0200
+
+haproxy (1.4.15-1) unstable; urgency=low
+
+ * New upstream release with critical bug fix (Closes: #631351)
+
+ -- Christo Buschek <crito@30loops.net> Thu, 14 Jul 2011 18:17:05 +0200
+
+haproxy (1.4.13-1) unstable; urgency=low
+
+ * New maintainer upload (Closes: #615246)
+ * New upstream release
+ * Standards-version goes 3.9.1 (no change)
+ * Added patch bashism (Closes: #581109)
+ * Added a README.source file.
+
+ -- Christo Buschek <crito@30loops.net> Thu, 11 Mar 2011 12:41:59 +0000
+
+haproxy (1.4.8-1) unstable; urgency=low
+
+ * New upstream release.
+
+ -- Arnaud Cornet <acornet@debian.org> Fri, 18 Jun 2010 00:42:53 +0100
+
+haproxy (1.4.4-1) unstable; urgency=low
+
+ * New upstream release
+ * Add splice and tproxy support
+ * Add regparm optimization on i386
+ * Switch to dpkg-source 3.0 (quilt) format
+
+ -- Arnaud Cornet <acornet@debian.org> Thu, 15 Apr 2010 20:00:34 +0100
+
+haproxy (1.4.2-1) unstable; urgency=low
+
+ * New upstream release
+ * Remove debian/patches/haproxy.1-hyphen.patch gone upstream
+ * Tighten quilt build dep (Closes: #567087)
+ * standards-version goes 3.8.4 (no change)
+ * Add $remote_fs to init.d script required start and stop
+
+ -- Arnaud Cornet <acornet@debian.org> Sat, 27 Mar 2010 15:19:48 +0000
+
+haproxy (1.3.22-1) unstable; urgency=low
+
+ * New upstream bugfix release
+
+ -- Arnaud Cornet <acornet@debian.org> Mon, 19 Oct 2009 22:31:45 +0100
+
+haproxy (1.3.21-1) unstable; urgency=low
+
+ [ Michael Shuler ]
+ * New Upstream Version (Closes: #538992)
+ * Added override for example shell scripts in docs (Closes: #530096)
+ * Added upstream changelog to docs
+ * Added debian/watch
+ * Updated debian/copyright format
+ * Added haproxy.1-hyphen.patch, to fix hyphen in man page
+ * Upgrade Standards-Version to 3.8.3 (no change needed)
+ * Upgrade debian/compat to 7 (no change needed)
+
+ [ Arnaud Cornet ]
+ * New upstream version.
+ * Merge Michael's work, few changelog fixes
+ * Add debian/README.source to point to quilt doc
+ * Depend on debhelper >= 7.0.50~ and use overrides in debian/rules
+
+ -- Arnaud Cornet <acornet@debian.org> Sun, 18 Oct 2009 14:01:29 +0200
+
+haproxy (1.3.18-1) unstable; urgency=low
+
+ * New Upstream Version (Closes: #534583).
+ * Add contrib directory in docs
+
+ -- Arnaud Cornet <acornet@debian.org> Fri, 26 Jun 2009 00:11:01 +0200
+
+haproxy (1.3.15.7-2) unstable; urgency=low
+
+ * Fix build without debian/patches directory (Closes: #515682) using
+ /usr/share/quilt/quilt.make.
+
+ -- Arnaud Cornet <acornet@debian.org> Tue, 17 Feb 2009 08:55:12 +0100
+
+haproxy (1.3.15.7-1) unstable; urgency=low
+
+ * New Upstream Version.
+ * Remove upstream patches:
+ -use_backend-consider-unless.patch
+ -segfault-url_param+check_post.patch
+ -server-timeout.patch
+ -closed-fd-remove.patch
+ -connection-slot-during-retry.patch
+ -srv_dynamic_maxconn.patch
+ -do-not-pause-backends-on-reload.patch
+ -acl-in-default.patch
+ -cookie-capture-check.patch
+ -dead-servers-queue.patch
+
+ -- Arnaud Cornet <acornet@debian.org> Mon, 16 Feb 2009 11:20:21 +0100
+
+haproxy (1.3.15.2-2~lenny1) testing-proposed-updates; urgency=low
+
+ * Rebuild for lenny to circumvent pcre3 shlibs bump.
+
+ -- Arnaud Cornet <acornet@debian.org> Wed, 14 Jan 2009 11:28:36 +0100
+
+haproxy (1.3.15.2-2) unstable; urgency=low
+
+ * Add stable branch bug fixes from upstream (Closes: #510185).
+ - use_backend-consider-unless.patch: consider "unless" in use_backend
+ - segfault-url_param+check_post.patch: fix segfault with url_param +
+ check_post
+ - server-timeout.patch: consider server timeout in all circumstances
+ - closed-fd-remove.patch: drop info about closed file descriptors
+ - connection-slot-during-retry.patch: do not release the connection slot
+ during a retry
+ - srv_dynamic_maxconn.patch: dynamic connection throttling api fix
+ - do-not-pause-backends-on-reload.patch: make reload reliable
+ - acl-in-default.patch: allow acl-related keywords in defaults sections
+ - cookie-capture-check.patch: cookie capture is declared in the frontend
+ but checked on the backend
+ - dead-servers-queue.patch: make dead servers not suck pending connections
+ * Add quilt build-dependency. Use quilt in debian/rules to apply
+ patches.
+
+ -- Arnaud Cornet <acornet@debian.org> Wed, 31 Dec 2008 08:50:21 +0100
+
+haproxy (1.3.15.2-1) unstable; urgency=low
+
+ * New Upstream Version (Closes: #497186).
+
+ -- Arnaud Cornet <acornet@debian.org> Sat, 30 Aug 2008 18:06:31 +0200
+
+haproxy (1.3.15.1-1) unstable; urgency=low
+
+ * New Upstream Version
+ * Upgrade standards version to 3.8.0 (no change needed).
+ * Build with TARGET=linux26 on linux, TARGET=generic on other systems.
+
+ -- Arnaud Cornet <acornet@debian.org> Fri, 20 Jun 2008 00:38:50 +0200
+
+haproxy (1.3.14.5-1) unstable; urgency=low
+
+ * New Upstream Version (Closes: #484221)
+ * Use debhelper 7, drop CDBS.
+
+ -- Arnaud Cornet <acornet@debian.org> Wed, 04 Jun 2008 19:21:56 +0200
+
+haproxy (1.3.14.3-1) unstable; urgency=low
+
+ * New Upstream Version
+ * Add status argument support to init-script to conform to LSB.
+ * Cleanup pidfile after stop in init script. Init script return code fixups.
+
+ -- Arnaud Cornet <acornet@debian.org> Sun, 09 Mar 2008 21:30:29 +0100
+
+haproxy (1.3.14.2-3) unstable; urgency=low
+
+ * Add init script support for nbproc > 1 in configuration. That is,
+ multiple haproxy processes.
+ * Use 'option redispatch' instead of redispatch in debian default
+ config.
+
+ -- Arnaud Cornet <acornet@debian.org> Sun, 03 Feb 2008 18:22:28 +0100
+
+haproxy (1.3.14.2-2) unstable; urgency=low
+
+ * Fix init script's reload function to use -sf instead of -st (to wait for
+ active sessions to finish cleanly). Also support dash. Thanks to
+ Jean-Baptiste Quenot for noticing.
+
+ -- Arnaud Cornet <acornet@debian.org> Thu, 24 Jan 2008 23:47:26 +0100
+
+haproxy (1.3.14.2-1) unstable; urgency=low
+
+ * New Upstream Version
+ * Simplify DEB_MAKE_INVOKE, as upstream now supports us overriding
+ CFLAGS.
+ * Move haproxy to usr/sbin.
+
+ -- Arnaud Cornet <acornet@debian.org> Mon, 21 Jan 2008 22:42:51 +0100
+
+haproxy (1.3.14.1-1) unstable; urgency=low
+
+ * New upstream release.
+ * Drop dfsg list and hash code rewrite (merged upstream).
+ * Add a HAPROXY variable in init script.
+ * Drop makefile patch, fix debian/rules accordingly. Drop build-dependency
+ on quilt.
+ * Manpage now upstream. Ship upstream's and drop ours.
+
+ -- Arnaud Cornet <acornet@debian.org> Tue, 01 Jan 2008 22:50:09 +0100
+
+haproxy (1.3.12.dfsg2-1) unstable; urgency=low
+
+ * New upstream bugfix release.
+ * Use new Homepage tag.
+ * Bump standards-version (no change needed).
+ * Add build-dependency on quilt and add a patch to allow proper CFLAGS
+ passing to make.
+
+ -- Arnaud Cornet <acornet@debian.org> Tue, 25 Dec 2007 21:52:59 +0100
+
+haproxy (1.3.12.dfsg-1) unstable; urgency=low
+
+ * Initial release (Closes: #416397).
+ * The DFSG version removes files with a GPL-incompatible license and adds a
+ re-implementation by me.
+
+ -- Arnaud Cornet <acornet@debian.org> Fri, 17 Aug 2007 09:33:41 +0200
--- /dev/null
+doc/configuration.html
+doc/intro.html
--- /dev/null
+Source: haproxy
+Section: net
+Priority: optional
+Maintainer: Debian HAProxy Maintainers <pkg-haproxy-maintainers@lists.alioth.debian.org>
+Uploaders: Apollon Oikonomopoulos <apoikos@debian.org>,
+ Prach Pongpanich <prach@debian.org>,
+ Vincent Bernat <bernat@debian.org>
+Standards-Version: 3.9.6
+Build-Depends: debhelper (>= 9),
+ libpcre3-dev,
+ libssl-dev,
+ liblua5.3-dev,
+ dh-systemd (>= 1.5),
+ python-sphinx (>= 1.0.7+dfsg)
+Build-Depends-Indep: python, python-mako
+Homepage: http://haproxy.1wt.eu/
+Vcs-Git: git://anonscm.debian.org/pkg-haproxy/haproxy.git
+Vcs-Browser: http://anonscm.debian.org/gitweb/?p=pkg-haproxy/haproxy.git
+
+Package: haproxy
+Architecture: any
+Depends: ${shlibs:Depends}, ${misc:Depends}, adduser
+Suggests: vim-haproxy, haproxy-doc
+Description: fast and reliable load balancing reverse proxy
+ HAProxy is a TCP/HTTP reverse proxy which is particularly suited for high
+ availability environments. It features connection persistence through HTTP
+ cookies, load balancing, and header addition, modification and deletion in
+ both directions. It can block requests and provides an interface for
+ displaying server status.
+
+Package: haproxy-dbg
+Section: debug
+Priority: extra
+Architecture: any
+Depends: ${misc:Depends}, haproxy (= ${binary:Version})
+Description: fast and reliable load balancing reverse proxy (debug symbols)
+ HAProxy is a TCP/HTTP reverse proxy which is particularly suited for high
+ availability environments. It features connection persistence through HTTP
+ cookies, load balancing, and header addition, modification and deletion in
+ both directions. It can block requests and provides an interface for
+ displaying server status.
+ .
+ This package contains the debugging symbols for haproxy.
+
+Package: haproxy-doc
+Section: doc
+Priority: extra
+Architecture: all
+Depends: ${misc:Depends}, libjs-bootstrap (<< 4), libjs-jquery,
+ ${sphinxdoc:Depends}
+Description: fast and reliable load balancing reverse proxy (HTML documentation)
+ HAProxy is a TCP/HTTP reverse proxy which is particularly suited for high
+ availability environments. It features connection persistence through HTTP
+ cookies, load balancing, and header addition, modification and deletion in
+ both directions. It can block requests and provides an interface for
+ displaying server status.
+ .
+ This package contains the HTML documentation for haproxy.
+
+Package: vim-haproxy
+Architecture: all
+Depends: ${misc:Depends}
+Recommends: vim-addon-manager
+Description: syntax highlighting for HAProxy configuration files
+ The vim-haproxy package provides filetype detection and syntax highlighting
+ for HAProxy configuration files.
+ .
+ As per the Debian vim policy, installed addons are not activated
+ automatically, but the "vim-addon-manager" tool can be used for this purpose.
--- /dev/null
+Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
+Upstream-Name: haproxy
+Upstream-Contact: Willy Tarreau <w@1wt.eu>
+Source: http://haproxy.1wt.eu/
+
+Files: *
+Copyright: Copyright 2000-2015 Willy Tarreau <w@1wt.eu>.
+License: GPL-2+
+
+Files: ebtree/*
+ include/*
+ contrib/halog/fgets2.c
+Copyright: Copyright 2000-2013 Willy Tarreau - w@1wt.eu
+License: LGPL-2.1
+
+Files: include/proto/auth.h
+ include/types/checks.h
+ include/types/auth.h
+ src/auth.c
+Copyright: Copyright 2008-2010 Krzysztof Piotr Oledzki <ole@ans.pl>
+License: GPL-2+
+
+Files: include/import/lru.h
+ src/lru.c
+Copyright: Copyright (C) 2015 Willy Tarreau <w@1wt.eu>
+License: Expat
+
+Files: include/import/xxhash.h
+ src/xxhash.c
+Copyright: Copyright (C) 2012-2014, Yann Collet.
+License: BSD-2-clause
+
+Files: include/proto/shctx.h
+ src/shctx.c
+Copyright: Copyright (C) 2011-2012 EXCELIANCE
+License: GPL-2+
+
+Files: include/proto/compression.h
+ include/types/compression.h
+Copyright: Copyright 2012 (C) Exceliance, David Du Colombier <dducolombier@exceliance.fr>
+ William Lallemand <wlallemand@exceliance.fr>
+License: LGPL-2.1
+
+Files: include/proto/peers.h
+ include/proto/ssl_sock.h
+ include/types/peers.h
+ include/types/ssl_sock.h
+Copyright: Copyright (C) 2009-2012 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+License: LGPL-2.1
+
+Files: include/types/dns.h
+Copyright: Copyright (C) 2014 Baptiste Assmann <bedis9@gmail.com>
+License: LGPL-2.1
+
+Files: src/dns.c
+Copyright: Copyright (C) 2014 Baptiste Assmann <bedis9@gmail.com>
+License: GPL-2+
+
+Files: include/types/mailers.h
+ src/mailers.c
+Copyright: Copyright 2015 Horms Solutions Ltd., Simon Horman <horms@verge.net.au>
+ Copyright 2010 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+License: LGPL-2.1
+
+Files: include/proto/sample.h
+ include/proto/stick_table.h
+ include/types/sample.h
+ include/types/stick_table.h
+Copyright: Copyright (C) 2009-2012 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+ Copyright (C) 2010-2013 Willy Tarreau <w@1wt.eu>
+License: LGPL-2.1
+
+Files: include/types/counters.h
+Copyright: Copyright 2008-2009 Krzysztof Piotr Oledzki <ole@ans.pl>
+ Copyright 2011 Willy Tarreau <w@1wt.eu>
+License: LGPL-2.1
+
+Files: include/common/base64.h
+ include/common/uri_auth.h
+ include/proto/signal.h
+ include/types/signal.h
+Copyright: Copyright 2000-2013 Willy Tarreau <w@1wt.eu>
+License: GPL-2+
+
+Files: include/common/rbtree.h
+Copyright: (C) 1999 Andrea Arcangeli <andrea@suse.de>
+License: GPL-2+
+
+Files: src/base64.c
+ src/checks.c
+ src/dumpstats.c
+ src/server.c
+Copyright: Copyright 2000-2012 Willy Tarreau <w@1wt.eu>
+ Copyright 2007-2010 Krzysztof Piotr Oledzki <ole@ans.pl>
+License: GPL-2+
+
+Files: src/compression.c
+Copyright: Copyright 2012 (C) Exceliance, David Du Colombier <dducolombier@exceliance.fr>
+ William Lallemand <wlallemand@exceliance.fr>
+License: GPL-2+
+
+Files: src/haproxy-systemd-wrapper.c
+Copyright: Copyright 2013 Marc-Antoine Perennou <Marc-Antoine@Perennou.com>
+License: GPL-2+
+
+Files: src/rbtree.c
+Copyright: (C) 1999 Andrea Arcangeli <andrea@suse.de>
+ (C) 2002 David Woodhouse <dwmw2@infradead.org>
+License: GPL-2+
+
+Files: src/sample.c
+ src/stick_table.c
+Copyright: Copyright 2009-2010 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+ Copyright (C) 2010-2012 Willy Tarreau <w@1wt.eu>
+License: GPL-2+
+
+Files: src/peers.c
+ src/ssl_sock.c
+Copyright: Copyright (C) 2010-2012 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+License: GPL-2+
+
+Files: contrib/netsnmp-perl/haproxy.pl
+ contrib/base64/base64rev-gen.c
+Copyright: Copyright 2007-2010 Krzysztof Piotr Oledzki <ole@ans.pl>
+License: GPL-2+
+
+Files: examples/stats_haproxy.sh
+Copyright: Copyright 2007 Julien Antony and Matthieu Huguet
+License: GPL-2+
+
+Files: examples/check
+Copyright: 2006-2007 (C) Fabrice Dulaunoy <fabrice@dulaunoy.com>
+License: GPL-2+
+
+Files: tests/test_pools.c
+Copyright: Copyright 2007 Aleksandar Lazic <al-haproxy@none.at>
+License: GPL-2+
+
+Files: debian/*
+Copyright: Copyright (C) 2007-2011, Arnaud Cornet <acornet@debian.org>
+ Copyright (C) 2011, Christo Buschek <crito@30loops.net>
+ Copyright (C) 2013, Prach Pongpanich <prachpub@gmail.com>
+ Copyright (C) 2013-2014, Apollon Oikonomopoulos <apoikos@debian.org>
+ Copyright (C) 2013, Vincent Bernat <bernat@debian.org>
+License: GPL-2
+
+Files: debian/dconv/*
+Copyright: Copyright (C) 2012 Cyril Bonté
+License: Apache-2.0
+
+Files: debian/dconv/js/typeahead.bundle.js
+Copyright: Copyright 2013-2015 Twitter, Inc. and other contributors
+License: Expat
+
+License: GPL-2+
+ This program is free software; you can redistribute it
+ and/or modify it under the terms of the GNU General Public
+ License as published by the Free Software Foundation; either
+ version 2 of the License, or (at your option) any later
+ version.
+ .
+ This program is distributed in the hope that it will be
+ useful, but WITHOUT ANY WARRANTY; without even the implied
+ warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
+ PURPOSE. See the GNU General Public License for more
+ details.
+ .
+ You should have received a copy of the GNU General Public
+ License along with this package; if not, write to the Free
+ Software Foundation, Inc., 51 Franklin St, Fifth Floor,
+ Boston, MA 02110-1301 USA
+ .
+ On Debian systems, the full text of the GNU General Public
+ License version 2 can be found in the file
+ `/usr/share/common-licenses/GPL-2'.
+
+License: LGPL-2.1
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation; either
+ version 2.1 of the License, or (at your option) any later version.
+ .
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+ .
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ .
+ On Debian systems, the complete text of the GNU Lesser General Public License,
+ version 2.1, can be found in /usr/share/common-licenses/LGPL-2.1.
+
+License: GPL-2
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License version 2 as
+ published by the Free Software Foundation.
+ .
+ On Debian systems, the complete text of the GNU General Public License, version
+ 2, can be found in /usr/share/common-licenses/GPL-2.
+
+License: Apache-2.0
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+ .
+ http://www.apache.org/licenses/LICENSE-2.0
+ .
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+ .
+ On Debian systems, the full text of the Apache License version 2.0 can be
+ found in the file `/usr/share/common-licenses/Apache-2.0'.
+
+License: Expat
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+ .
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+ .
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+License: BSD-2-clause
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are
+ met:
+ .
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+ copyright notice, this list of conditions and the following disclaimer
+ in the documentation and/or other materials provided with the
+ distribution.
+ .
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--- /dev/null
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
--- /dev/null
+Copyright 2012 Cyril Bonté
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
--- /dev/null
+# HAProxy Documentation Converter
+
+Made to convert the HAProxy documentation into HTML.
+
+More than HTML, the main goal is to provide easy navigation.
+
+## Documentation
+
+A bot periodically fetches the latest commits for HAProxy 1.4 and 1.5 to produce up-to-date documentation.
+
+Converted documentation is then stored online:
+- HAProxy 1.4 Configuration Manual : [stable](http://cbonte.github.com/haproxy-dconv/configuration-1.4.html) / [snapshot](http://cbonte.github.com/haproxy-dconv/snapshot/configuration-1.4.html)
+- HAProxy 1.5 Configuration Manual : [stable](http://cbonte.github.com/haproxy-dconv/configuration-1.5.html) / [snapshot](http://cbonte.github.com/haproxy-dconv/snapshot/configuration-1.5.html)
+- HAProxy 1.6 Configuration Manual : [stable](http://cbonte.github.com/haproxy-dconv/configuration-1.6.html) / [snapshot](http://cbonte.github.com/haproxy-dconv/snapshot/configuration-1.6.html)
+
+
+## Contribute
+
+The project now lives by itself, as it is sufficiently usable. But I'm sure we can do even better.
+Feel free to report feature requests or to provide patches!
+
--- /dev/null
+/* Global Styles */
+
+body {
+ margin-top: 50px;
+ background: #eee;
+}
+
+a.anchor {
+ display: block; position: relative; top: -50px; visibility: hidden;
+}
+
+/* ------------------------------- */
+
+/* Wrappers */
+
+/* ------------------------------- */
+
+#wrapper {
+ width: 100%;
+}
+
+#page-wrapper {
+ padding: 0 15px 50px;
+ width: 740px;
+ background-color: #fff;
+ margin-left: 250px;
+}
+
+#sidebar {
+ position: fixed;
+ width: 250px;
+ top: 50px;
+ bottom: 0;
+ padding: 15px;
+ background: #f5f5f5;
+ border-right: 1px solid #ccc;
+}
+
+
+/* ------------------------------- */
+
+/* Twitter typeahead.js */
+
+/* ------------------------------- */
+
+.twitter-typeahead {
+ width: 100%;
+}
+.typeahead,
+.tt-query,
+.tt-hint {
+ width: 100%;
+ padding: 8px 12px;
+ border: 2px solid #ccc;
+ -webkit-border-radius: 8px;
+ -moz-border-radius: 8px;
+ border-radius: 8px;
+ outline: none;
+}
+
+.typeahead {
+ background-color: #fff;
+}
+
+.typeahead:focus {
+ border: 2px solid #0097cf;
+}
+
+.tt-query {
+ -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075);
+ -moz-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075);
+ box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075);
+}
+
+.tt-hint {
+ color: #999;
+
+.tt-menu {
+ width: 100%;
+ margin-top: 4px;
+ padding: 8px 0;
+ background-color: #fff;
+ border: 1px solid #ccc;
+ border: 1px solid rgba(0, 0, 0, 0.2);
+ -webkit-border-radius: 8px;
+ -moz-border-radius: 8px;
+ border-radius: 8px;
+ -webkit-box-shadow: 0 5px 10px rgba(0,0,0,.2);
+ -moz-box-shadow: 0 5px 10px rgba(0,0,0,.2);
+ box-shadow: 0 5px 10px rgba(0,0,0,.2);
+}
+
+.tt-suggestion {
+ padding: 3px 8px;
+ line-height: 24px;
+}
+
+.tt-suggestion:hover {
+ cursor: pointer;
+ color: #fff;
+ background-color: #0097cf;
+}
+
+.tt-suggestion.tt-cursor {
+ color: #fff;
+ background-color: #0097cf;
+}
+
+.tt-suggestion p {
+ margin: 0;
+}
+
+#searchKeyword {
+ width: 100%;
+ margin: 0;
+}
+
+#searchKeyword .tt-menu {
+ max-height: 300px;
+ overflow-y: auto;
+}
+
+/* ------------------------------- */
+
+/* Misc */
+
+/* ------------------------------- */
+
+.well-small ul {
+ padding: 0px;
+}
+.table th,
+.table td.pagination-centered {
+ text-align: center;
+}
+
+pre {
+ overflow: visible; /* Workaround for dropdown menus */
+}
+
+pre.text {
+ padding: 0;
+ font-size: 13px;
+ color: #000;
+ background: transparent;
+ border: none;
+ margin-bottom: 18px;
+}
+pre.arguments {
+ font-size: 13px;
+ color: #000;
+ background: transparent;
+}
+
+.comment {
+ color: #888;
+}
+small, .small {
+ color: #888;
+}
+.level1 {
+ font-size: 125%;
+}
+.sublevels {
+ border-left: 1px solid #ccc;
+ padding-left: 10px;
+}
+.tab {
+ padding-left: 20px;
+}
+.keyword {
+ font-family: Menlo, Monaco, "Courier New", monospace;
+ white-space: pre;
+ background: #eee;
+ border-top: 1px solid #fff;
+ border-bottom: 1px solid #ccc;
+}
+
+.label-see-also {
+ background-color: #999;
+}
+.label-disabled {
+ background-color: #ccc;
+}
+h5 {
+ text-decoration: underline;
+}
+
+.example-desc {
+ border-bottom: 1px solid #ccc;
+ margin-bottom: 18px;
+}
+.noheight {
+ min-height: 0 !important;
+}
+.separator {
+ margin-bottom: 18px;
+}
+
+div {
+ word-wrap: break-word;
+}
+
+html, body {
+ width: 100%;
+ min-height: 100%;
+}
+
+.dropdown-menu > li {
+ white-space: nowrap;
+}
+/* TEMPORARY HACKS WHILE PRE TAGS ARE USED
+-------------------------------------------------- */
+
+h5,
+.unpre,
+.example-desc,
+.dropdown-menu {
+ font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
+ white-space: normal;
+}
--- /dev/null
+/*!
+ * typeahead.js 0.11.1
+ * https://github.com/twitter/typeahead.js
+ * Copyright 2013-2015 Twitter, Inc. and other contributors; Licensed MIT
+ */
+
+(function(root, factory) {
+ if (typeof define === "function" && define.amd) {
+ define("bloodhound", [ "jquery" ], function(a0) {
+ return root["Bloodhound"] = factory(a0);
+ });
+ } else if (typeof exports === "object") {
+ module.exports = factory(require("jquery"));
+ } else {
+ root["Bloodhound"] = factory(jQuery);
+ }
+})(this, function($) {
+ var _ = function() {
+ "use strict";
+ return {
+ isMsie: function() {
+ return /(msie|trident)/i.test(navigator.userAgent) ? navigator.userAgent.match(/(msie |rv:)(\d+(.\d+)?)/i)[2] : false;
+ },
+ isBlankString: function(str) {
+ return !str || /^\s*$/.test(str);
+ },
+ escapeRegExChars: function(str) {
+ return str.replace(/[\-\[\]\/\{\}\(\)\*\+\?\.\\\^\$\|]/g, "\\$&");
+ },
+ isString: function(obj) {
+ return typeof obj === "string";
+ },
+ isNumber: function(obj) {
+ return typeof obj === "number";
+ },
+ isArray: $.isArray,
+ isFunction: $.isFunction,
+ isObject: $.isPlainObject,
+ isUndefined: function(obj) {
+ return typeof obj === "undefined";
+ },
+ isElement: function(obj) {
+ return !!(obj && obj.nodeType === 1);
+ },
+ isJQuery: function(obj) {
+ return obj instanceof $;
+ },
+ toStr: function toStr(s) {
+ return _.isUndefined(s) || s === null ? "" : s + "";
+ },
+ bind: $.proxy,
+ each: function(collection, cb) {
+ $.each(collection, reverseArgs);
+ function reverseArgs(index, value) {
+ return cb(value, index);
+ }
+ },
+ map: $.map,
+ filter: $.grep,
+ every: function(obj, test) {
+ var result = true;
+ if (!obj) {
+ return result;
+ }
+ $.each(obj, function(key, val) {
+ if (!(result = test.call(null, val, key, obj))) {
+ return false;
+ }
+ });
+ return !!result;
+ },
+ some: function(obj, test) {
+ var result = false;
+ if (!obj) {
+ return result;
+ }
+ $.each(obj, function(key, val) {
+ if (result = test.call(null, val, key, obj)) {
+ return false;
+ }
+ });
+ return !!result;
+ },
+ mixin: $.extend,
+ identity: function(x) {
+ return x;
+ },
+ clone: function(obj) {
+ return $.extend(true, {}, obj);
+ },
+ getIdGenerator: function() {
+ var counter = 0;
+ return function() {
+ return counter++;
+ };
+ },
+ templatify: function templatify(obj) {
+ return $.isFunction(obj) ? obj : template;
+ function template() {
+ return String(obj);
+ }
+ },
+ defer: function(fn) {
+ setTimeout(fn, 0);
+ },
+ debounce: function(func, wait, immediate) {
+ var timeout, result;
+ return function() {
+ var context = this, args = arguments, later, callNow;
+ later = function() {
+ timeout = null;
+ if (!immediate) {
+ result = func.apply(context, args);
+ }
+ };
+ callNow = immediate && !timeout;
+ clearTimeout(timeout);
+ timeout = setTimeout(later, wait);
+ if (callNow) {
+ result = func.apply(context, args);
+ }
+ return result;
+ };
+ },
+ throttle: function(func, wait) {
+ var context, args, timeout, result, previous, later;
+ previous = 0;
+ later = function() {
+ previous = new Date();
+ timeout = null;
+ result = func.apply(context, args);
+ };
+ return function() {
+ var now = new Date(), remaining = wait - (now - previous);
+ context = this;
+ args = arguments;
+ if (remaining <= 0) {
+ clearTimeout(timeout);
+ timeout = null;
+ previous = now;
+ result = func.apply(context, args);
+ } else if (!timeout) {
+ timeout = setTimeout(later, remaining);
+ }
+ return result;
+ };
+ },
+ stringify: function(val) {
+ return _.isString(val) ? val : JSON.stringify(val);
+ },
+ noop: function() {}
+ };
+ }();
+ var VERSION = "0.11.1";
+ var tokenizers = function() {
+ "use strict";
+ return {
+ nonword: nonword,
+ whitespace: whitespace,
+ obj: {
+ nonword: getObjTokenizer(nonword),
+ whitespace: getObjTokenizer(whitespace)
+ }
+ };
+ function whitespace(str) {
+ str = _.toStr(str);
+ return str ? str.split(/\s+/) : [];
+ }
+ function nonword(str) {
+ str = _.toStr(str);
+ return str ? str.split(/\W+/) : [];
+ }
+ function getObjTokenizer(tokenizer) {
+ return function setKey(keys) {
+ keys = _.isArray(keys) ? keys : [].slice.call(arguments, 0);
+ return function tokenize(o) {
+ var tokens = [];
+ _.each(keys, function(k) {
+ tokens = tokens.concat(tokenizer(_.toStr(o[k])));
+ });
+ return tokens;
+ };
+ };
+ }
+ }();
+ var LruCache = function() {
+ "use strict";
+ function LruCache(maxSize) {
+ this.maxSize = _.isNumber(maxSize) ? maxSize : 100;
+ this.reset();
+ if (this.maxSize <= 0) {
+ this.set = this.get = $.noop;
+ }
+ }
+ _.mixin(LruCache.prototype, {
+ set: function set(key, val) {
+ var tailItem = this.list.tail, node;
+ if (this.size >= this.maxSize) {
+ this.list.remove(tailItem);
+ delete this.hash[tailItem.key];
+ this.size--;
+ }
+ if (node = this.hash[key]) {
+ node.val = val;
+ this.list.moveToFront(node);
+ } else {
+ node = new Node(key, val);
+ this.list.add(node);
+ this.hash[key] = node;
+ this.size++;
+ }
+ },
+ get: function get(key) {
+ var node = this.hash[key];
+ if (node) {
+ this.list.moveToFront(node);
+ return node.val;
+ }
+ },
+ reset: function reset() {
+ this.size = 0;
+ this.hash = {};
+ this.list = new List();
+ }
+ });
+ function List() {
+ this.head = this.tail = null;
+ }
+ _.mixin(List.prototype, {
+ add: function add(node) {
+ if (this.head) {
+ node.next = this.head;
+ this.head.prev = node;
+ }
+ this.head = node;
+ this.tail = this.tail || node;
+ },
+ remove: function remove(node) {
+ node.prev ? node.prev.next = node.next : this.head = node.next;
+ node.next ? node.next.prev = node.prev : this.tail = node.prev;
+ },
+ moveToFront: function(node) {
+ this.remove(node);
+ this.add(node);
+ }
+ });
+ function Node(key, val) {
+ this.key = key;
+ this.val = val;
+ this.prev = this.next = null;
+ }
+ return LruCache;
+ }();
+ var PersistentStorage = function() {
+ "use strict";
+ var LOCAL_STORAGE;
+ try {
+ LOCAL_STORAGE = window.localStorage;
+ LOCAL_STORAGE.setItem("~~~", "!");
+ LOCAL_STORAGE.removeItem("~~~");
+ } catch (err) {
+ LOCAL_STORAGE = null;
+ }
+ function PersistentStorage(namespace, override) {
+ this.prefix = [ "__", namespace, "__" ].join("");
+ this.ttlKey = "__ttl__";
+ this.keyMatcher = new RegExp("^" + _.escapeRegExChars(this.prefix));
+ this.ls = override || LOCAL_STORAGE;
+ !this.ls && this._noop();
+ }
+ _.mixin(PersistentStorage.prototype, {
+ _prefix: function(key) {
+ return this.prefix + key;
+ },
+ _ttlKey: function(key) {
+ return this._prefix(key) + this.ttlKey;
+ },
+ _noop: function() {
+ this.get = this.set = this.remove = this.clear = this.isExpired = _.noop;
+ },
+ _safeSet: function(key, val) {
+ try {
+ this.ls.setItem(key, val);
+ } catch (err) {
+ if (err.name === "QuotaExceededError") {
+ this.clear();
+ this._noop();
+ }
+ }
+ },
+ get: function(key) {
+ if (this.isExpired(key)) {
+ this.remove(key);
+ }
+ return decode(this.ls.getItem(this._prefix(key)));
+ },
+ set: function(key, val, ttl) {
+ if (_.isNumber(ttl)) {
+ this._safeSet(this._ttlKey(key), encode(now() + ttl));
+ } else {
+ this.ls.removeItem(this._ttlKey(key));
+ }
+ return this._safeSet(this._prefix(key), encode(val));
+ },
+ remove: function(key) {
+ this.ls.removeItem(this._ttlKey(key));
+ this.ls.removeItem(this._prefix(key));
+ return this;
+ },
+ clear: function() {
+ var i, keys = gatherMatchingKeys(this.keyMatcher);
+ for (i = keys.length; i--; ) {
+ this.remove(keys[i]);
+ }
+ return this;
+ },
+ isExpired: function(key) {
+ var ttl = decode(this.ls.getItem(this._ttlKey(key)));
+ return _.isNumber(ttl) && now() > ttl ? true : false;
+ }
+ });
+ return PersistentStorage;
+ function now() {
+ return new Date().getTime();
+ }
+ function encode(val) {
+ return JSON.stringify(_.isUndefined(val) ? null : val);
+ }
+ function decode(val) {
+ return $.parseJSON(val);
+ }
+ function gatherMatchingKeys(keyMatcher) {
+ var i, key, keys = [], len = LOCAL_STORAGE.length;
+ for (i = 0; i < len; i++) {
+ if ((key = LOCAL_STORAGE.key(i)).match(keyMatcher)) {
+ keys.push(key.replace(keyMatcher, ""));
+ }
+ }
+ return keys;
+ }
+ }();
+ var Transport = function() {
+ "use strict";
+ var pendingRequestsCount = 0, pendingRequests = {}, maxPendingRequests = 6, sharedCache = new LruCache(10);
+ function Transport(o) {
+ o = o || {};
+ this.cancelled = false;
+ this.lastReq = null;
+ this._send = o.transport;
+ this._get = o.limiter ? o.limiter(this._get) : this._get;
+ this._cache = o.cache === false ? new LruCache(0) : sharedCache;
+ }
+ Transport.setMaxPendingRequests = function setMaxPendingRequests(num) {
+ maxPendingRequests = num;
+ };
+ Transport.resetCache = function resetCache() {
+ sharedCache.reset();
+ };
+ _.mixin(Transport.prototype, {
+ _fingerprint: function fingerprint(o) {
+ o = o || {};
+ return o.url + o.type + $.param(o.data || {});
+ },
+ _get: function(o, cb) {
+ var that = this, fingerprint, jqXhr;
+ fingerprint = this._fingerprint(o);
+ if (this.cancelled || fingerprint !== this.lastReq) {
+ return;
+ }
+ if (jqXhr = pendingRequests[fingerprint]) {
+ jqXhr.done(done).fail(fail);
+ } else if (pendingRequestsCount < maxPendingRequests) {
+ pendingRequestsCount++;
+ pendingRequests[fingerprint] = this._send(o).done(done).fail(fail).always(always);
+ } else {
+ this.onDeckRequestArgs = [].slice.call(arguments, 0);
+ }
+ function done(resp) {
+ cb(null, resp);
+ that._cache.set(fingerprint, resp);
+ }
+ function fail() {
+ cb(true);
+ }
+ function always() {
+ pendingRequestsCount--;
+ delete pendingRequests[fingerprint];
+ if (that.onDeckRequestArgs) {
+ that._get.apply(that, that.onDeckRequestArgs);
+ that.onDeckRequestArgs = null;
+ }
+ }
+ },
+ get: function(o, cb) {
+ var resp, fingerprint;
+ cb = cb || $.noop;
+ o = _.isString(o) ? {
+ url: o
+ } : o || {};
+ fingerprint = this._fingerprint(o);
+ this.cancelled = false;
+ this.lastReq = fingerprint;
+ if (resp = this._cache.get(fingerprint)) {
+ cb(null, resp);
+ } else {
+ this._get(o, cb);
+ }
+ },
+ cancel: function() {
+ this.cancelled = true;
+ }
+ });
+ return Transport;
+ }();
+ var SearchIndex = window.SearchIndex = function() {
+ "use strict";
+ var CHILDREN = "c", IDS = "i";
+ function SearchIndex(o) {
+ o = o || {};
+ if (!o.datumTokenizer || !o.queryTokenizer) {
+ $.error("datumTokenizer and queryTokenizer are both required");
+ }
+ this.identify = o.identify || _.stringify;
+ this.datumTokenizer = o.datumTokenizer;
+ this.queryTokenizer = o.queryTokenizer;
+ this.reset();
+ }
+ _.mixin(SearchIndex.prototype, {
+ bootstrap: function bootstrap(o) {
+ this.datums = o.datums;
+ this.trie = o.trie;
+ },
+ add: function(data) {
+ var that = this;
+ data = _.isArray(data) ? data : [ data ];
+ _.each(data, function(datum) {
+ var id, tokens;
+ that.datums[id = that.identify(datum)] = datum;
+ tokens = normalizeTokens(that.datumTokenizer(datum));
+ _.each(tokens, function(token) {
+ var node, chars, ch;
+ node = that.trie;
+ chars = token.split("");
+ while (ch = chars.shift()) {
+ node = node[CHILDREN][ch] || (node[CHILDREN][ch] = newNode());
+ node[IDS].push(id);
+ }
+ });
+ });
+ },
+ get: function get(ids) {
+ var that = this;
+ return _.map(ids, function(id) {
+ return that.datums[id];
+ });
+ },
+ search: function search(query) {
+ var that = this, tokens, matches;
+ tokens = normalizeTokens(this.queryTokenizer(query));
+ _.each(tokens, function(token) {
+ var node, chars, ch, ids;
+ if (matches && matches.length === 0) {
+ return false;
+ }
+ node = that.trie;
+ chars = token.split("");
+ while (node && (ch = chars.shift())) {
+ node = node[CHILDREN][ch];
+ }
+ if (node && chars.length === 0) {
+ ids = node[IDS].slice(0);
+ matches = matches ? getIntersection(matches, ids) : ids;
+ } else {
+ matches = [];
+ return false;
+ }
+ });
+ return matches ? _.map(unique(matches), function(id) {
+ return that.datums[id];
+ }) : [];
+ },
+ all: function all() {
+ var values = [];
+ for (var key in this.datums) {
+ values.push(this.datums[key]);
+ }
+ return values;
+ },
+ reset: function reset() {
+ this.datums = {};
+ this.trie = newNode();
+ },
+ serialize: function serialize() {
+ return {
+ datums: this.datums,
+ trie: this.trie
+ };
+ }
+ });
+ return SearchIndex;
+ function normalizeTokens(tokens) {
+ tokens = _.filter(tokens, function(token) {
+ return !!token;
+ });
+ tokens = _.map(tokens, function(token) {
+ return token.toLowerCase();
+ });
+ return tokens;
+ }
+ function newNode() {
+ var node = {};
+ node[IDS] = [];
+ node[CHILDREN] = {};
+ return node;
+ }
+ function unique(array) {
+ var seen = {}, uniques = [];
+ for (var i = 0, len = array.length; i < len; i++) {
+ if (!seen[array[i]]) {
+ seen[array[i]] = true;
+ uniques.push(array[i]);
+ }
+ }
+ return uniques;
+ }
+ function getIntersection(arrayA, arrayB) {
+ var ai = 0, bi = 0, intersection = [];
+ arrayA = arrayA.sort();
+ arrayB = arrayB.sort();
+ var lenArrayA = arrayA.length, lenArrayB = arrayB.length;
+ while (ai < lenArrayA && bi < lenArrayB) {
+ if (arrayA[ai] < arrayB[bi]) {
+ ai++;
+ } else if (arrayA[ai] > arrayB[bi]) {
+ bi++;
+ } else {
+ intersection.push(arrayA[ai]);
+ ai++;
+ bi++;
+ }
+ }
+ return intersection;
+ }
+ }();
+ var Prefetch = function() {
+ "use strict";
+ var keys;
+ keys = {
+ data: "data",
+ protocol: "protocol",
+ thumbprint: "thumbprint"
+ };
+ function Prefetch(o) {
+ this.url = o.url;
+ this.ttl = o.ttl;
+ this.cache = o.cache;
+ this.prepare = o.prepare;
+ this.transform = o.transform;
+ this.transport = o.transport;
+ this.thumbprint = o.thumbprint;
+ this.storage = new PersistentStorage(o.cacheKey);
+ }
+ _.mixin(Prefetch.prototype, {
+ _settings: function settings() {
+ return {
+ url: this.url,
+ type: "GET",
+ dataType: "json"
+ };
+ },
+ store: function store(data) {
+ if (!this.cache) {
+ return;
+ }
+ this.storage.set(keys.data, data, this.ttl);
+ this.storage.set(keys.protocol, location.protocol, this.ttl);
+ this.storage.set(keys.thumbprint, this.thumbprint, this.ttl);
+ },
+ fromCache: function fromCache() {
+ var stored = {}, isExpired;
+ if (!this.cache) {
+ return null;
+ }
+ stored.data = this.storage.get(keys.data);
+ stored.protocol = this.storage.get(keys.protocol);
+ stored.thumbprint = this.storage.get(keys.thumbprint);
+ isExpired = stored.thumbprint !== this.thumbprint || stored.protocol !== location.protocol;
+ return stored.data && !isExpired ? stored.data : null;
+ },
+ fromNetwork: function(cb) {
+ var that = this, settings;
+ if (!cb) {
+ return;
+ }
+ settings = this.prepare(this._settings());
+ this.transport(settings).fail(onError).done(onResponse);
+ function onError() {
+ cb(true);
+ }
+ function onResponse(resp) {
+ cb(null, that.transform(resp));
+ }
+ },
+ clear: function clear() {
+ this.storage.clear();
+ return this;
+ }
+ });
+ return Prefetch;
+ }();
+ var Remote = function() {
+ "use strict";
+ function Remote(o) {
+ this.url = o.url;
+ this.prepare = o.prepare;
+ this.transform = o.transform;
+ this.transport = new Transport({
+ cache: o.cache,
+ limiter: o.limiter,
+ transport: o.transport
+ });
+ }
+ _.mixin(Remote.prototype, {
+ _settings: function settings() {
+ return {
+ url: this.url,
+ type: "GET",
+ dataType: "json"
+ };
+ },
+ get: function get(query, cb) {
+ var that = this, settings;
+ if (!cb) {
+ return;
+ }
+ query = query || "";
+ settings = this.prepare(query, this._settings());
+ return this.transport.get(settings, onResponse);
+ function onResponse(err, resp) {
+ err ? cb([]) : cb(that.transform(resp));
+ }
+ },
+ cancelLastRequest: function cancelLastRequest() {
+ this.transport.cancel();
+ }
+ });
+ return Remote;
+ }();
+ var oParser = function() {
+ "use strict";
+ return function parse(o) {
+ var defaults, sorter;
+ defaults = {
+ initialize: true,
+ identify: _.stringify,
+ datumTokenizer: null,
+ queryTokenizer: null,
+ sufficient: 5,
+ sorter: null,
+ local: [],
+ prefetch: null,
+ remote: null
+ };
+ o = _.mixin(defaults, o || {});
+ !o.datumTokenizer && $.error("datumTokenizer is required");
+ !o.queryTokenizer && $.error("queryTokenizer is required");
+ sorter = o.sorter;
+ o.sorter = sorter ? function(x) {
+ return x.sort(sorter);
+ } : _.identity;
+ o.local = _.isFunction(o.local) ? o.local() : o.local;
+ o.prefetch = parsePrefetch(o.prefetch);
+ o.remote = parseRemote(o.remote);
+ return o;
+ };
+ function parsePrefetch(o) {
+ var defaults;
+ if (!o) {
+ return null;
+ }
+ defaults = {
+ url: null,
+ ttl: 24 * 60 * 60 * 1e3,
+ cache: true,
+ cacheKey: null,
+ thumbprint: "",
+ prepare: _.identity,
+ transform: _.identity,
+ transport: null
+ };
+ o = _.isString(o) ? {
+ url: o
+ } : o;
+ o = _.mixin(defaults, o);
+ !o.url && $.error("prefetch requires url to be set");
+ o.transform = o.filter || o.transform;
+ o.cacheKey = o.cacheKey || o.url;
+ o.thumbprint = VERSION + o.thumbprint;
+ o.transport = o.transport ? callbackToDeferred(o.transport) : $.ajax;
+ return o;
+ }
+ function parseRemote(o) {
+ var defaults;
+ if (!o) {
+ return;
+ }
+ defaults = {
+ url: null,
+ cache: true,
+ prepare: null,
+ replace: null,
+ wildcard: null,
+ limiter: null,
+ rateLimitBy: "debounce",
+ rateLimitWait: 300,
+ transform: _.identity,
+ transport: null
+ };
+ o = _.isString(o) ? {
+ url: o
+ } : o;
+ o = _.mixin(defaults, o);
+ !o.url && $.error("remote requires url to be set");
+ o.transform = o.filter || o.transform;
+ o.prepare = toRemotePrepare(o);
+ o.limiter = toLimiter(o);
+ o.transport = o.transport ? callbackToDeferred(o.transport) : $.ajax;
+ delete o.replace;
+ delete o.wildcard;
+ delete o.rateLimitBy;
+ delete o.rateLimitWait;
+ return o;
+ }
+ function toRemotePrepare(o) {
+ var prepare, replace, wildcard;
+ prepare = o.prepare;
+ replace = o.replace;
+ wildcard = o.wildcard;
+ if (prepare) {
+ return prepare;
+ }
+ if (replace) {
+ prepare = prepareByReplace;
+ } else if (o.wildcard) {
+ prepare = prepareByWildcard;
+ } else {
+ prepare = idenityPrepare;
+ }
+ return prepare;
+ function prepareByReplace(query, settings) {
+ settings.url = replace(settings.url, query);
+ return settings;
+ }
+ function prepareByWildcard(query, settings) {
+ settings.url = settings.url.replace(wildcard, encodeURIComponent(query));
+ return settings;
+ }
+ function idenityPrepare(query, settings) {
+ return settings;
+ }
+ }
+ function toLimiter(o) {
+ var limiter, method, wait;
+ limiter = o.limiter;
+ method = o.rateLimitBy;
+ wait = o.rateLimitWait;
+ if (!limiter) {
+ limiter = /^throttle$/i.test(method) ? throttle(wait) : debounce(wait);
+ }
+ return limiter;
+ function debounce(wait) {
+ return function debounce(fn) {
+ return _.debounce(fn, wait);
+ };
+ }
+ function throttle(wait) {
+ return function throttle(fn) {
+ return _.throttle(fn, wait);
+ };
+ }
+ }
+ function callbackToDeferred(fn) {
+ return function wrapper(o) {
+ var deferred = $.Deferred();
+ fn(o, onSuccess, onError);
+ return deferred;
+ function onSuccess(resp) {
+ _.defer(function() {
+ deferred.resolve(resp);
+ });
+ }
+ function onError(err) {
+ _.defer(function() {
+ deferred.reject(err);
+ });
+ }
+ };
+ }
+ }();
+ var Bloodhound = function() {
+ "use strict";
+ var old;
+ old = window && window.Bloodhound;
+ function Bloodhound(o) {
+ o = oParser(o);
+ this.sorter = o.sorter;
+ this.identify = o.identify;
+ this.sufficient = o.sufficient;
+ this.local = o.local;
+ this.remote = o.remote ? new Remote(o.remote) : null;
+ this.prefetch = o.prefetch ? new Prefetch(o.prefetch) : null;
+ this.index = new SearchIndex({
+ identify: this.identify,
+ datumTokenizer: o.datumTokenizer,
+ queryTokenizer: o.queryTokenizer
+ });
+ o.initialize !== false && this.initialize();
+ }
+ Bloodhound.noConflict = function noConflict() {
+ window && (window.Bloodhound = old);
+ return Bloodhound;
+ };
+ Bloodhound.tokenizers = tokenizers;
+ _.mixin(Bloodhound.prototype, {
+ __ttAdapter: function ttAdapter() {
+ var that = this;
+ return this.remote ? withAsync : withoutAsync;
+ function withAsync(query, sync, async) {
+ return that.search(query, sync, async);
+ }
+ function withoutAsync(query, sync) {
+ return that.search(query, sync);
+ }
+ },
+ _loadPrefetch: function loadPrefetch() {
+ var that = this, deferred, serialized;
+ deferred = $.Deferred();
+ if (!this.prefetch) {
+ deferred.resolve();
+ } else if (serialized = this.prefetch.fromCache()) {
+ this.index.bootstrap(serialized);
+ deferred.resolve();
+ } else {
+ this.prefetch.fromNetwork(done);
+ }
+ return deferred.promise();
+ function done(err, data) {
+ if (err) {
+ return deferred.reject();
+ }
+ that.add(data);
+ that.prefetch.store(that.index.serialize());
+ deferred.resolve();
+ }
+ },
+ _initialize: function initialize() {
+ var that = this, deferred;
+ this.clear();
+ (this.initPromise = this._loadPrefetch()).done(addLocalToIndex);
+ return this.initPromise;
+ function addLocalToIndex() {
+ that.add(that.local);
+ }
+ },
+ initialize: function initialize(force) {
+ return !this.initPromise || force ? this._initialize() : this.initPromise;
+ },
+ add: function add(data) {
+ this.index.add(data);
+ return this;
+ },
+ get: function get(ids) {
+ ids = _.isArray(ids) ? ids : [].slice.call(arguments);
+ return this.index.get(ids);
+ },
+ search: function search(query, sync, async) {
+ var that = this, local;
+ local = this.sorter(this.index.search(query));
+ sync(this.remote ? local.slice() : local);
+ if (this.remote && local.length < this.sufficient) {
+ this.remote.get(query, processRemote);
+ } else if (this.remote) {
+ this.remote.cancelLastRequest();
+ }
+ return this;
+ function processRemote(remote) {
+ var nonDuplicates = [];
+ _.each(remote, function(r) {
+ !_.some(local, function(l) {
+ return that.identify(r) === that.identify(l);
+ }) && nonDuplicates.push(r);
+ });
+ async && async(nonDuplicates);
+ }
+ },
+ all: function all() {
+ return this.index.all();
+ },
+ clear: function clear() {
+ this.index.reset();
+ return this;
+ },
+ clearPrefetchCache: function clearPrefetchCache() {
+ this.prefetch && this.prefetch.clear();
+ return this;
+ },
+ clearRemoteCache: function clearRemoteCache() {
+ Transport.resetCache();
+ return this;
+ },
+ ttAdapter: function ttAdapter() {
+ return this.__ttAdapter();
+ }
+ });
+ return Bloodhound;
+ }();
+ return Bloodhound;
+});
+
+(function(root, factory) {
+ if (typeof define === "function" && define.amd) {
+ define("typeahead.js", [ "jquery" ], function(a0) {
+ return factory(a0);
+ });
+ } else if (typeof exports === "object") {
+ module.exports = factory(require("jquery"));
+ } else {
+ factory(jQuery);
+ }
+})(this, function($) {
+ var _ = function() {
+ "use strict";
+ return {
+ isMsie: function() {
+ return /(msie|trident)/i.test(navigator.userAgent) ? navigator.userAgent.match(/(msie |rv:)(\d+(.\d+)?)/i)[2] : false;
+ },
+ isBlankString: function(str) {
+ return !str || /^\s*$/.test(str);
+ },
+ escapeRegExChars: function(str) {
+ return str.replace(/[\-\[\]\/\{\}\(\)\*\+\?\.\\\^\$\|]/g, "\\$&");
+ },
+ isString: function(obj) {
+ return typeof obj === "string";
+ },
+ isNumber: function(obj) {
+ return typeof obj === "number";
+ },
+ isArray: $.isArray,
+ isFunction: $.isFunction,
+ isObject: $.isPlainObject,
+ isUndefined: function(obj) {
+ return typeof obj === "undefined";
+ },
+ isElement: function(obj) {
+ return !!(obj && obj.nodeType === 1);
+ },
+ isJQuery: function(obj) {
+ return obj instanceof $;
+ },
+ toStr: function toStr(s) {
+ return _.isUndefined(s) || s === null ? "" : s + "";
+ },
+ bind: $.proxy,
+ each: function(collection, cb) {
+ $.each(collection, reverseArgs);
+ function reverseArgs(index, value) {
+ return cb(value, index);
+ }
+ },
+ map: $.map,
+ filter: $.grep,
+ every: function(obj, test) {
+ var result = true;
+ if (!obj) {
+ return result;
+ }
+ $.each(obj, function(key, val) {
+ if (!(result = test.call(null, val, key, obj))) {
+ return false;
+ }
+ });
+ return !!result;
+ },
+ some: function(obj, test) {
+ var result = false;
+ if (!obj) {
+ return result;
+ }
+ $.each(obj, function(key, val) {
+ if (result = test.call(null, val, key, obj)) {
+ return false;
+ }
+ });
+ return !!result;
+ },
+ mixin: $.extend,
+ identity: function(x) {
+ return x;
+ },
+ clone: function(obj) {
+ return $.extend(true, {}, obj);
+ },
+ getIdGenerator: function() {
+ var counter = 0;
+ return function() {
+ return counter++;
+ };
+ },
+ templatify: function templatify(obj) {
+ return $.isFunction(obj) ? obj : template;
+ function template() {
+ return String(obj);
+ }
+ },
+ defer: function(fn) {
+ setTimeout(fn, 0);
+ },
+ debounce: function(func, wait, immediate) {
+ var timeout, result;
+ return function() {
+ var context = this, args = arguments, later, callNow;
+ later = function() {
+ timeout = null;
+ if (!immediate) {
+ result = func.apply(context, args);
+ }
+ };
+ callNow = immediate && !timeout;
+ clearTimeout(timeout);
+ timeout = setTimeout(later, wait);
+ if (callNow) {
+ result = func.apply(context, args);
+ }
+ return result;
+ };
+ },
+ throttle: function(func, wait) {
+ var context, args, timeout, result, previous, later;
+ previous = 0;
+ later = function() {
+ previous = new Date();
+ timeout = null;
+ result = func.apply(context, args);
+ };
+ return function() {
+ var now = new Date(), remaining = wait - (now - previous);
+ context = this;
+ args = arguments;
+ if (remaining <= 0) {
+ clearTimeout(timeout);
+ timeout = null;
+ previous = now;
+ result = func.apply(context, args);
+ } else if (!timeout) {
+ timeout = setTimeout(later, remaining);
+ }
+ return result;
+ };
+ },
+ stringify: function(val) {
+ return _.isString(val) ? val : JSON.stringify(val);
+ },
+ noop: function() {}
+ };
+ }();
+ var WWW = function() {
+ "use strict";
+ var defaultClassNames = {
+ wrapper: "twitter-typeahead",
+ input: "tt-input",
+ hint: "tt-hint",
+ menu: "tt-menu",
+ dataset: "tt-dataset",
+ suggestion: "tt-suggestion",
+ selectable: "tt-selectable",
+ empty: "tt-empty",
+ open: "tt-open",
+ cursor: "tt-cursor",
+ highlight: "tt-highlight"
+ };
+ return build;
+ function build(o) {
+ var www, classes;
+ classes = _.mixin({}, defaultClassNames, o);
+ www = {
+ css: buildCss(),
+ classes: classes,
+ html: buildHtml(classes),
+ selectors: buildSelectors(classes)
+ };
+ return {
+ css: www.css,
+ html: www.html,
+ classes: www.classes,
+ selectors: www.selectors,
+ mixin: function(o) {
+ _.mixin(o, www);
+ }
+ };
+ }
+ function buildHtml(c) {
+ return {
+ wrapper: '<span class="' + c.wrapper + '"></span>',
+ menu: '<div class="' + c.menu + '"></div>'
+ };
+ }
+ function buildSelectors(classes) {
+ var selectors = {};
+ _.each(classes, function(v, k) {
+ selectors[k] = "." + v;
+ });
+ return selectors;
+ }
+ function buildCss() {
+ var css = {
+ wrapper: {
+ position: "relative",
+ display: "inline-block"
+ },
+ hint: {
+ position: "absolute",
+ top: "0",
+ left: "0",
+ borderColor: "transparent",
+ boxShadow: "none",
+ opacity: "1"
+ },
+ input: {
+ position: "relative",
+ verticalAlign: "top",
+ backgroundColor: "transparent"
+ },
+ inputWithNoHint: {
+ position: "relative",
+ verticalAlign: "top"
+ },
+ menu: {
+ position: "absolute",
+ top: "100%",
+ left: "0",
+ zIndex: "100",
+ display: "none"
+ },
+ ltr: {
+ left: "0",
+ right: "auto"
+ },
+ rtl: {
+ left: "auto",
+ right: "0"
+ }
+ };
+ if (_.isMsie()) {
+ _.mixin(css.input, {
+ backgroundImage: "url(data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7)"
+ });
+ }
+ return css;
+ }
+ }();
+ var EventBus = function() {
+ "use strict";
+ var namespace, deprecationMap;
+ namespace = "typeahead:";
+ deprecationMap = {
+ render: "rendered",
+ cursorchange: "cursorchanged",
+ select: "selected",
+ autocomplete: "autocompleted"
+ };
+ function EventBus(o) {
+ if (!o || !o.el) {
+ $.error("EventBus initialized without el");
+ }
+ this.$el = $(o.el);
+ }
+ _.mixin(EventBus.prototype, {
+ _trigger: function(type, args) {
+ var $e;
+ $e = $.Event(namespace + type);
+ (args = args || []).unshift($e);
+ this.$el.trigger.apply(this.$el, args);
+ return $e;
+ },
+ before: function(type) {
+ var args, $e;
+ args = [].slice.call(arguments, 1);
+ $e = this._trigger("before" + type, args);
+ return $e.isDefaultPrevented();
+ },
+ trigger: function(type) {
+ var deprecatedType;
+ this._trigger(type, [].slice.call(arguments, 1));
+ if (deprecatedType = deprecationMap[type]) {
+ this._trigger(deprecatedType, [].slice.call(arguments, 1));
+ }
+ }
+ });
+ return EventBus;
+ }();
+ var EventEmitter = function() {
+ "use strict";
+ var splitter = /\s+/, nextTick = getNextTick();
+ return {
+ onSync: onSync,
+ onAsync: onAsync,
+ off: off,
+ trigger: trigger
+ };
+ function on(method, types, cb, context) {
+ var type;
+ if (!cb) {
+ return this;
+ }
+ types = types.split(splitter);
+ cb = context ? bindContext(cb, context) : cb;
+ this._callbacks = this._callbacks || {};
+ while (type = types.shift()) {
+ this._callbacks[type] = this._callbacks[type] || {
+ sync: [],
+ async: []
+ };
+ this._callbacks[type][method].push(cb);
+ }
+ return this;
+ }
+ function onAsync(types, cb, context) {
+ return on.call(this, "async", types, cb, context);
+ }
+ function onSync(types, cb, context) {
+ return on.call(this, "sync", types, cb, context);
+ }
+ function off(types) {
+ var type;
+ if (!this._callbacks) {
+ return this;
+ }
+ types = types.split(splitter);
+ while (type = types.shift()) {
+ delete this._callbacks[type];
+ }
+ return this;
+ }
+ function trigger(types) {
+ var type, callbacks, args, syncFlush, asyncFlush;
+ if (!this._callbacks) {
+ return this;
+ }
+ types = types.split(splitter);
+ args = [].slice.call(arguments, 1);
+ while ((type = types.shift()) && (callbacks = this._callbacks[type])) {
+ syncFlush = getFlush(callbacks.sync, this, [ type ].concat(args));
+ asyncFlush = getFlush(callbacks.async, this, [ type ].concat(args));
+ syncFlush() && nextTick(asyncFlush);
+ }
+ return this;
+ }
+ function getFlush(callbacks, context, args) {
+ return flush;
+ function flush() {
+ var cancelled;
+ for (var i = 0, len = callbacks.length; !cancelled && i < len; i += 1) {
+ cancelled = callbacks[i].apply(context, args) === false;
+ }
+ return !cancelled;
+ }
+ }
+ function getNextTick() {
+ var nextTickFn;
+ if (window.setImmediate) {
+ nextTickFn = function nextTickSetImmediate(fn) {
+ setImmediate(function() {
+ fn();
+ });
+ };
+ } else {
+ nextTickFn = function nextTickSetTimeout(fn) {
+ setTimeout(function() {
+ fn();
+ }, 0);
+ };
+ }
+ return nextTickFn;
+ }
+ function bindContext(fn, context) {
+ return fn.bind ? fn.bind(context) : function() {
+ fn.apply(context, [].slice.call(arguments, 0));
+ };
+ }
+ }();
+ var highlight = function(doc) {
+ "use strict";
+ var defaults = {
+ node: null,
+ pattern: null,
+ tagName: "strong",
+ className: null,
+ wordsOnly: false,
+ caseSensitive: false
+ };
+ return function highlight(o) {
+ var regex;
+ o = _.mixin({}, defaults, o);
+ if (!o.node || !o.pattern) {
+ return;
+ }
+ o.pattern = _.isArray(o.pattern) ? o.pattern : [ o.pattern ];
+ regex = getRegex(o.pattern, o.caseSensitive, o.wordsOnly);
+ traverse(o.node, highlightTextNode);
+ function highlightTextNode(textNode) {
+ var match, patternNode, wrapperNode;
+ if (match = regex.exec(textNode.data)) {
+ wrapperNode = doc.createElement(o.tagName);
+ o.className && (wrapperNode.className = o.className);
+ patternNode = textNode.splitText(match.index);
+ patternNode.splitText(match[0].length);
+ wrapperNode.appendChild(patternNode.cloneNode(true));
+ textNode.parentNode.replaceChild(wrapperNode, patternNode);
+ }
+ return !!match;
+ }
+ function traverse(el, highlightTextNode) {
+ var childNode, TEXT_NODE_TYPE = 3;
+ for (var i = 0; i < el.childNodes.length; i++) {
+ childNode = el.childNodes[i];
+ if (childNode.nodeType === TEXT_NODE_TYPE) {
+ i += highlightTextNode(childNode) ? 1 : 0;
+ } else {
+ traverse(childNode, highlightTextNode);
+ }
+ }
+ }
+ };
+ function getRegex(patterns, caseSensitive, wordsOnly) {
+ var escapedPatterns = [], regexStr;
+ for (var i = 0, len = patterns.length; i < len; i++) {
+ escapedPatterns.push(_.escapeRegExChars(patterns[i]));
+ }
+ regexStr = wordsOnly ? "\\b(" + escapedPatterns.join("|") + ")\\b" : "(" + escapedPatterns.join("|") + ")";
+ return caseSensitive ? new RegExp(regexStr) : new RegExp(regexStr, "i");
+ }
+ }(window.document);
+ var Input = function() {
+ "use strict";
+ var specialKeyCodeMap;
+ specialKeyCodeMap = {
+ 9: "tab",
+ 27: "esc",
+ 37: "left",
+ 39: "right",
+ 13: "enter",
+ 38: "up",
+ 40: "down"
+ };
+ function Input(o, www) {
+ o = o || {};
+ if (!o.input) {
+ $.error("input is missing");
+ }
+ www.mixin(this);
+ this.$hint = $(o.hint);
+ this.$input = $(o.input);
+ this.query = this.$input.val();
+ this.queryWhenFocused = this.hasFocus() ? this.query : null;
+ this.$overflowHelper = buildOverflowHelper(this.$input);
+ this._checkLanguageDirection();
+ if (this.$hint.length === 0) {
+ this.setHint = this.getHint = this.clearHint = this.clearHintIfInvalid = _.noop;
+ }
+ }
+ Input.normalizeQuery = function(str) {
+ return _.toStr(str).replace(/^\s*/g, "").replace(/\s{2,}/g, " ");
+ };
+ _.mixin(Input.prototype, EventEmitter, {
+ _onBlur: function onBlur() {
+ this.resetInputValue();
+ this.trigger("blurred");
+ },
+ _onFocus: function onFocus() {
+ this.queryWhenFocused = this.query;
+ this.trigger("focused");
+ },
+ _onKeydown: function onKeydown($e) {
+ var keyName = specialKeyCodeMap[$e.which || $e.keyCode];
+ this._managePreventDefault(keyName, $e);
+ if (keyName && this._shouldTrigger(keyName, $e)) {
+ this.trigger(keyName + "Keyed", $e);
+ }
+ },
+ _onInput: function onInput() {
+ this._setQuery(this.getInputValue());
+ this.clearHintIfInvalid();
+ this._checkLanguageDirection();
+ },
+ _managePreventDefault: function managePreventDefault(keyName, $e) {
+ var preventDefault;
+ switch (keyName) {
+ case "up":
+ case "down":
+ preventDefault = !withModifier($e);
+ break;
+
+ default:
+ preventDefault = false;
+ }
+ preventDefault && $e.preventDefault();
+ },
+ _shouldTrigger: function shouldTrigger(keyName, $e) {
+ var trigger;
+ switch (keyName) {
+ case "tab":
+ trigger = !withModifier($e);
+ break;
+
+ default:
+ trigger = true;
+ }
+ return trigger;
+ },
+ _checkLanguageDirection: function checkLanguageDirection() {
+ var dir = (this.$input.css("direction") || "ltr").toLowerCase();
+ if (this.dir !== dir) {
+ this.dir = dir;
+ this.$hint.attr("dir", dir);
+ this.trigger("langDirChanged", dir);
+ }
+ },
+ _setQuery: function setQuery(val, silent) {
+ var areEquivalent, hasDifferentWhitespace;
+ areEquivalent = areQueriesEquivalent(val, this.query);
+ hasDifferentWhitespace = areEquivalent ? this.query.length !== val.length : false;
+ this.query = val;
+ if (!silent && !areEquivalent) {
+ this.trigger("queryChanged", this.query);
+ } else if (!silent && hasDifferentWhitespace) {
+ this.trigger("whitespaceChanged", this.query);
+ }
+ },
+ bind: function() {
+ var that = this, onBlur, onFocus, onKeydown, onInput;
+ onBlur = _.bind(this._onBlur, this);
+ onFocus = _.bind(this._onFocus, this);
+ onKeydown = _.bind(this._onKeydown, this);
+ onInput = _.bind(this._onInput, this);
+ this.$input.on("blur.tt", onBlur).on("focus.tt", onFocus).on("keydown.tt", onKeydown);
+ if (!_.isMsie() || _.isMsie() > 9) {
+ this.$input.on("input.tt", onInput);
+ } else {
+ this.$input.on("keydown.tt keypress.tt cut.tt paste.tt", function($e) {
+ if (specialKeyCodeMap[$e.which || $e.keyCode]) {
+ return;
+ }
+ _.defer(_.bind(that._onInput, that, $e));
+ });
+ }
+ return this;
+ },
+ focus: function focus() {
+ this.$input.focus();
+ },
+ blur: function blur() {
+ this.$input.blur();
+ },
+ getLangDir: function getLangDir() {
+ return this.dir;
+ },
+ getQuery: function getQuery() {
+ return this.query || "";
+ },
+ setQuery: function setQuery(val, silent) {
+ this.setInputValue(val);
+ this._setQuery(val, silent);
+ },
+ hasQueryChangedSinceLastFocus: function hasQueryChangedSinceLastFocus() {
+ return this.query !== this.queryWhenFocused;
+ },
+ getInputValue: function getInputValue() {
+ return this.$input.val();
+ },
+ setInputValue: function setInputValue(value) {
+ this.$input.val(value);
+ this.clearHintIfInvalid();
+ this._checkLanguageDirection();
+ },
+ resetInputValue: function resetInputValue() {
+ this.setInputValue(this.query);
+ },
+ getHint: function getHint() {
+ return this.$hint.val();
+ },
+ setHint: function setHint(value) {
+ this.$hint.val(value);
+ },
+ clearHint: function clearHint() {
+ this.setHint("");
+ },
+ clearHintIfInvalid: function clearHintIfInvalid() {
+ var val, hint, valIsPrefixOfHint, isValid;
+ val = this.getInputValue();
+ hint = this.getHint();
+ valIsPrefixOfHint = val !== hint && hint.indexOf(val) === 0;
+ isValid = val !== "" && valIsPrefixOfHint && !this.hasOverflow();
+ !isValid && this.clearHint();
+ },
+ hasFocus: function hasFocus() {
+ return this.$input.is(":focus");
+ },
+ hasOverflow: function hasOverflow() {
+ var constraint = this.$input.width() - 2;
+ this.$overflowHelper.text(this.getInputValue());
+ return this.$overflowHelper.width() >= constraint;
+ },
+ isCursorAtEnd: function() {
+ var valueLength, selectionStart, range;
+ valueLength = this.$input.val().length;
+ selectionStart = this.$input[0].selectionStart;
+ if (_.isNumber(selectionStart)) {
+ return selectionStart === valueLength;
+ } else if (document.selection) {
+ range = document.selection.createRange();
+ range.moveStart("character", -valueLength);
+ return valueLength === range.text.length;
+ }
+ return true;
+ },
+ destroy: function destroy() {
+ this.$hint.off(".tt");
+ this.$input.off(".tt");
+ this.$overflowHelper.remove();
+ this.$hint = this.$input = this.$overflowHelper = $("<div>");
+ }
+ });
+ return Input;
+ function buildOverflowHelper($input) {
+ return $('<pre aria-hidden="true"></pre>').css({
+ position: "absolute",
+ visibility: "hidden",
+ whiteSpace: "pre",
+ fontFamily: $input.css("font-family"),
+ fontSize: $input.css("font-size"),
+ fontStyle: $input.css("font-style"),
+ fontVariant: $input.css("font-variant"),
+ fontWeight: $input.css("font-weight"),
+ wordSpacing: $input.css("word-spacing"),
+ letterSpacing: $input.css("letter-spacing"),
+ textIndent: $input.css("text-indent"),
+ textRendering: $input.css("text-rendering"),
+ textTransform: $input.css("text-transform")
+ }).insertAfter($input);
+ }
+ function areQueriesEquivalent(a, b) {
+ return Input.normalizeQuery(a) === Input.normalizeQuery(b);
+ }
+ function withModifier($e) {
+ return $e.altKey || $e.ctrlKey || $e.metaKey || $e.shiftKey;
+ }
+ }();
+ var Dataset = function() {
+ "use strict";
+ var keys, nameGenerator;
+ keys = {
+ val: "tt-selectable-display",
+ obj: "tt-selectable-object"
+ };
+ nameGenerator = _.getIdGenerator();
+ function Dataset(o, www) {
+ o = o || {};
+ o.templates = o.templates || {};
+ o.templates.notFound = o.templates.notFound || o.templates.empty;
+ if (!o.source) {
+ $.error("missing source");
+ }
+ if (!o.node) {
+ $.error("missing node");
+ }
+ if (o.name && !isValidName(o.name)) {
+ $.error("invalid dataset name: " + o.name);
+ }
+ www.mixin(this);
+ this.highlight = !!o.highlight;
+ this.name = o.name || nameGenerator();
+ this.limit = o.limit || 5;
+ this.displayFn = getDisplayFn(o.display || o.displayKey);
+ this.templates = getTemplates(o.templates, this.displayFn);
+ this.source = o.source.__ttAdapter ? o.source.__ttAdapter() : o.source;
+ this.async = _.isUndefined(o.async) ? this.source.length > 2 : !!o.async;
+ this._resetLastSuggestion();
+ this.$el = $(o.node).addClass(this.classes.dataset).addClass(this.classes.dataset + "-" + this.name);
+ }
+ Dataset.extractData = function extractData(el) {
+ var $el = $(el);
+ if ($el.data(keys.obj)) {
+ return {
+ val: $el.data(keys.val) || "",
+ obj: $el.data(keys.obj) || null
+ };
+ }
+ return null;
+ };
+ _.mixin(Dataset.prototype, EventEmitter, {
+ _overwrite: function overwrite(query, suggestions) {
+ suggestions = suggestions || [];
+ if (suggestions.length) {
+ this._renderSuggestions(query, suggestions);
+ } else if (this.async && this.templates.pending) {
+ this._renderPending(query);
+ } else if (!this.async && this.templates.notFound) {
+ this._renderNotFound(query);
+ } else {
+ this._empty();
+ }
+ this.trigger("rendered", this.name, suggestions, false);
+ },
+ _append: function append(query, suggestions) {
+ suggestions = suggestions || [];
+ if (suggestions.length && this.$lastSuggestion.length) {
+ this._appendSuggestions(query, suggestions);
+ } else if (suggestions.length) {
+ this._renderSuggestions(query, suggestions);
+ } else if (!this.$lastSuggestion.length && this.templates.notFound) {
+ this._renderNotFound(query);
+ }
+ this.trigger("rendered", this.name, suggestions, true);
+ },
+ _renderSuggestions: function renderSuggestions(query, suggestions) {
+ var $fragment;
+ $fragment = this._getSuggestionsFragment(query, suggestions);
+ this.$lastSuggestion = $fragment.children().last();
+ this.$el.html($fragment).prepend(this._getHeader(query, suggestions)).append(this._getFooter(query, suggestions));
+ },
+ _appendSuggestions: function appendSuggestions(query, suggestions) {
+ var $fragment, $lastSuggestion;
+ $fragment = this._getSuggestionsFragment(query, suggestions);
+ $lastSuggestion = $fragment.children().last();
+ this.$lastSuggestion.after($fragment);
+ this.$lastSuggestion = $lastSuggestion;
+ },
+ _renderPending: function renderPending(query) {
+ var template = this.templates.pending;
+ this._resetLastSuggestion();
+ template && this.$el.html(template({
+ query: query,
+ dataset: this.name
+ }));
+ },
+ _renderNotFound: function renderNotFound(query) {
+ var template = this.templates.notFound;
+ this._resetLastSuggestion();
+ template && this.$el.html(template({
+ query: query,
+ dataset: this.name
+ }));
+ },
+ _empty: function empty() {
+ this.$el.empty();
+ this._resetLastSuggestion();
+ },
+ _getSuggestionsFragment: function getSuggestionsFragment(query, suggestions) {
+ var that = this, fragment;
+ fragment = document.createDocumentFragment();
+ _.each(suggestions, function getSuggestionNode(suggestion) {
+ var $el, context;
+ context = that._injectQuery(query, suggestion);
+ $el = $(that.templates.suggestion(context)).data(keys.obj, suggestion).data(keys.val, that.displayFn(suggestion)).addClass(that.classes.suggestion + " " + that.classes.selectable);
+ fragment.appendChild($el[0]);
+ });
+ this.highlight && highlight({
+ className: this.classes.highlight,
+ node: fragment,
+ pattern: query
+ });
+ return $(fragment);
+ },
+ _getFooter: function getFooter(query, suggestions) {
+ return this.templates.footer ? this.templates.footer({
+ query: query,
+ suggestions: suggestions,
+ dataset: this.name
+ }) : null;
+ },
+ _getHeader: function getHeader(query, suggestions) {
+ return this.templates.header ? this.templates.header({
+ query: query,
+ suggestions: suggestions,
+ dataset: this.name
+ }) : null;
+ },
+ _resetLastSuggestion: function resetLastSuggestion() {
+ this.$lastSuggestion = $();
+ },
+ _injectQuery: function injectQuery(query, obj) {
+ return _.isObject(obj) ? _.mixin({
+ _query: query
+ }, obj) : obj;
+ },
+ update: function update(query) {
+ var that = this, canceled = false, syncCalled = false, rendered = 0;
+ this.cancel();
+ this.cancel = function cancel() {
+ canceled = true;
+ that.cancel = $.noop;
+ that.async && that.trigger("asyncCanceled", query);
+ };
+ this.source(query, sync, async);
+ !syncCalled && sync([]);
+ function sync(suggestions) {
+ if (syncCalled) {
+ return;
+ }
+ syncCalled = true;
+ suggestions = (suggestions || []).slice(0, that.limit);
+ rendered = suggestions.length;
+ that._overwrite(query, suggestions);
+ if (rendered < that.limit && that.async) {
+ that.trigger("asyncRequested", query);
+ }
+ }
+ function async(suggestions) {
+ suggestions = suggestions || [];
+ if (!canceled && rendered < that.limit) {
+ that.cancel = $.noop;
+ that._append(query, suggestions.slice(0, that.limit - rendered));
+ rendered += suggestions.length;
+ that.async && that.trigger("asyncReceived", query);
+ }
+ }
+ },
+ cancel: $.noop,
+ clear: function clear() {
+ this._empty();
+ this.cancel();
+ this.trigger("cleared");
+ },
+ isEmpty: function isEmpty() {
+ return this.$el.is(":empty");
+ },
+ destroy: function destroy() {
+ this.$el = $("<div>");
+ }
+ });
+ return Dataset;
+ function getDisplayFn(display) {
+ display = display || _.stringify;
+ return _.isFunction(display) ? display : displayFn;
+ function displayFn(obj) {
+ return obj[display];
+ }
+ }
+ function getTemplates(templates, displayFn) {
+ return {
+ notFound: templates.notFound && _.templatify(templates.notFound),
+ pending: templates.pending && _.templatify(templates.pending),
+ header: templates.header && _.templatify(templates.header),
+ footer: templates.footer && _.templatify(templates.footer),
+ suggestion: templates.suggestion || suggestionTemplate
+ };
+ function suggestionTemplate(context) {
+ return $("<div>").text(displayFn(context));
+ }
+ }
+ function isValidName(str) {
+ return /^[_a-zA-Z0-9-]+$/.test(str);
+ }
+ }();
+ var Menu = function() {
+ "use strict";
+ function Menu(o, www) {
+ var that = this;
+ o = o || {};
+ if (!o.node) {
+ $.error("node is required");
+ }
+ www.mixin(this);
+ this.$node = $(o.node);
+ this.query = null;
+ this.datasets = _.map(o.datasets, initializeDataset);
+ function initializeDataset(oDataset) {
+ var node = that.$node.find(oDataset.node).first();
+ oDataset.node = node.length ? node : $("<div>").appendTo(that.$node);
+ return new Dataset(oDataset, www);
+ }
+ }
+ _.mixin(Menu.prototype, EventEmitter, {
+ _onSelectableClick: function onSelectableClick($e) {
+ this.trigger("selectableClicked", $($e.currentTarget));
+ },
+ _onRendered: function onRendered(type, dataset, suggestions, async) {
+ this.$node.toggleClass(this.classes.empty, this._allDatasetsEmpty());
+ this.trigger("datasetRendered", dataset, suggestions, async);
+ },
+ _onCleared: function onCleared() {
+ this.$node.toggleClass(this.classes.empty, this._allDatasetsEmpty());
+ this.trigger("datasetCleared");
+ },
+ _propagate: function propagate() {
+ this.trigger.apply(this, arguments);
+ },
+ _allDatasetsEmpty: function allDatasetsEmpty() {
+ return _.every(this.datasets, isDatasetEmpty);
+ function isDatasetEmpty(dataset) {
+ return dataset.isEmpty();
+ }
+ },
+ _getSelectables: function getSelectables() {
+ return this.$node.find(this.selectors.selectable);
+ },
+ _removeCursor: function _removeCursor() {
+ var $selectable = this.getActiveSelectable();
+ $selectable && $selectable.removeClass(this.classes.cursor);
+ },
+ _ensureVisible: function ensureVisible($el) {
+ var elTop, elBottom, nodeScrollTop, nodeHeight;
+ elTop = $el.position().top;
+ elBottom = elTop + $el.outerHeight(true);
+ nodeScrollTop = this.$node.scrollTop();
+ nodeHeight = this.$node.height() + parseInt(this.$node.css("paddingTop"), 10) + parseInt(this.$node.css("paddingBottom"), 10);
+ if (elTop < 0) {
+ this.$node.scrollTop(nodeScrollTop + elTop);
+ } else if (nodeHeight < elBottom) {
+ this.$node.scrollTop(nodeScrollTop + (elBottom - nodeHeight));
+ }
+ },
+ bind: function() {
+ var that = this, onSelectableClick;
+ onSelectableClick = _.bind(this._onSelectableClick, this);
+ this.$node.on("click.tt", this.selectors.selectable, onSelectableClick);
+ _.each(this.datasets, function(dataset) {
+ dataset.onSync("asyncRequested", that._propagate, that).onSync("asyncCanceled", that._propagate, that).onSync("asyncReceived", that._propagate, that).onSync("rendered", that._onRendered, that).onSync("cleared", that._onCleared, that);
+ });
+ return this;
+ },
+ isOpen: function isOpen() {
+ return this.$node.hasClass(this.classes.open);
+ },
+ open: function open() {
+ this.$node.addClass(this.classes.open);
+ },
+ close: function close() {
+ this.$node.removeClass(this.classes.open);
+ this._removeCursor();
+ },
+ setLanguageDirection: function setLanguageDirection(dir) {
+ this.$node.attr("dir", dir);
+ },
+ selectableRelativeToCursor: function selectableRelativeToCursor(delta) {
+ var $selectables, $oldCursor, oldIndex, newIndex;
+ $oldCursor = this.getActiveSelectable();
+ $selectables = this._getSelectables();
+ oldIndex = $oldCursor ? $selectables.index($oldCursor) : -1;
+ newIndex = oldIndex + delta;
+ newIndex = (newIndex + 1) % ($selectables.length + 1) - 1;
+ newIndex = newIndex < -1 ? $selectables.length - 1 : newIndex;
+ return newIndex === -1 ? null : $selectables.eq(newIndex);
+ },
+ setCursor: function setCursor($selectable) {
+ this._removeCursor();
+ if ($selectable = $selectable && $selectable.first()) {
+ $selectable.addClass(this.classes.cursor);
+ this._ensureVisible($selectable);
+ }
+ },
+ getSelectableData: function getSelectableData($el) {
+ return $el && $el.length ? Dataset.extractData($el) : null;
+ },
+ getActiveSelectable: function getActiveSelectable() {
+ var $selectable = this._getSelectables().filter(this.selectors.cursor).first();
+ return $selectable.length ? $selectable : null;
+ },
+ getTopSelectable: function getTopSelectable() {
+ var $selectable = this._getSelectables().first();
+ return $selectable.length ? $selectable : null;
+ },
+ update: function update(query) {
+ var isValidUpdate = query !== this.query;
+ if (isValidUpdate) {
+ this.query = query;
+ _.each(this.datasets, updateDataset);
+ }
+ return isValidUpdate;
+ function updateDataset(dataset) {
+ dataset.update(query);
+ }
+ },
+ empty: function empty() {
+ _.each(this.datasets, clearDataset);
+ this.query = null;
+ this.$node.addClass(this.classes.empty);
+ function clearDataset(dataset) {
+ dataset.clear();
+ }
+ },
+ destroy: function destroy() {
+ this.$node.off(".tt");
+ this.$node = $("<div>");
+ _.each(this.datasets, destroyDataset);
+ function destroyDataset(dataset) {
+ dataset.destroy();
+ }
+ }
+ });
+ return Menu;
+ }();
+ var DefaultMenu = function() {
+ "use strict";
+ var s = Menu.prototype;
+ function DefaultMenu() {
+ Menu.apply(this, [].slice.call(arguments, 0));
+ }
+ _.mixin(DefaultMenu.prototype, Menu.prototype, {
+ open: function open() {
+ !this._allDatasetsEmpty() && this._show();
+ return s.open.apply(this, [].slice.call(arguments, 0));
+ },
+ close: function close() {
+ this._hide();
+ return s.close.apply(this, [].slice.call(arguments, 0));
+ },
+ _onRendered: function onRendered() {
+ if (this._allDatasetsEmpty()) {
+ this._hide();
+ } else {
+ this.isOpen() && this._show();
+ }
+ return s._onRendered.apply(this, [].slice.call(arguments, 0));
+ },
+ _onCleared: function onCleared() {
+ if (this._allDatasetsEmpty()) {
+ this._hide();
+ } else {
+ this.isOpen() && this._show();
+ }
+ return s._onCleared.apply(this, [].slice.call(arguments, 0));
+ },
+ setLanguageDirection: function setLanguageDirection(dir) {
+ this.$node.css(dir === "ltr" ? this.css.ltr : this.css.rtl);
+ return s.setLanguageDirection.apply(this, [].slice.call(arguments, 0));
+ },
+ _hide: function hide() {
+ this.$node.hide();
+ },
+ _show: function show() {
+ this.$node.css("display", "block");
+ }
+ });
+ return DefaultMenu;
+ }();
+ var Typeahead = function() {
+ "use strict";
+ function Typeahead(o, www) {
+ var onFocused, onBlurred, onEnterKeyed, onTabKeyed, onEscKeyed, onUpKeyed, onDownKeyed, onLeftKeyed, onRightKeyed, onQueryChanged, onWhitespaceChanged;
+ o = o || {};
+ if (!o.input) {
+ $.error("missing input");
+ }
+ if (!o.menu) {
+ $.error("missing menu");
+ }
+ if (!o.eventBus) {
+ $.error("missing event bus");
+ }
+ www.mixin(this);
+ this.eventBus = o.eventBus;
+ this.minLength = _.isNumber(o.minLength) ? o.minLength : 1;
+ this.input = o.input;
+ this.menu = o.menu;
+ this.enabled = true;
+ this.active = false;
+ this.input.hasFocus() && this.activate();
+ this.dir = this.input.getLangDir();
+ this._hacks();
+ this.menu.bind().onSync("selectableClicked", this._onSelectableClicked, this).onSync("asyncRequested", this._onAsyncRequested, this).onSync("asyncCanceled", this._onAsyncCanceled, this).onSync("asyncReceived", this._onAsyncReceived, this).onSync("datasetRendered", this._onDatasetRendered, this).onSync("datasetCleared", this._onDatasetCleared, this);
+ onFocused = c(this, "activate", "open", "_onFocused");
+ onBlurred = c(this, "deactivate", "_onBlurred");
+ onEnterKeyed = c(this, "isActive", "isOpen", "_onEnterKeyed");
+ onTabKeyed = c(this, "isActive", "isOpen", "_onTabKeyed");
+ onEscKeyed = c(this, "isActive", "_onEscKeyed");
+ onUpKeyed = c(this, "isActive", "open", "_onUpKeyed");
+ onDownKeyed = c(this, "isActive", "open", "_onDownKeyed");
+ onLeftKeyed = c(this, "isActive", "isOpen", "_onLeftKeyed");
+ onRightKeyed = c(this, "isActive", "isOpen", "_onRightKeyed");
+ onQueryChanged = c(this, "_openIfActive", "_onQueryChanged");
+ onWhitespaceChanged = c(this, "_openIfActive", "_onWhitespaceChanged");
+ this.input.bind().onSync("focused", onFocused, this).onSync("blurred", onBlurred, this).onSync("enterKeyed", onEnterKeyed, this).onSync("tabKeyed", onTabKeyed, this).onSync("escKeyed", onEscKeyed, this).onSync("upKeyed", onUpKeyed, this).onSync("downKeyed", onDownKeyed, this).onSync("leftKeyed", onLeftKeyed, this).onSync("rightKeyed", onRightKeyed, this).onSync("queryChanged", onQueryChanged, this).onSync("whitespaceChanged", onWhitespaceChanged, this).onSync("langDirChanged", this._onLangDirChanged, this);
+ }
+ _.mixin(Typeahead.prototype, {
+ _hacks: function hacks() {
+ var $input, $menu;
+ $input = this.input.$input || $("<div>");
+ $menu = this.menu.$node || $("<div>");
+ $input.on("blur.tt", function($e) {
+ var active, isActive, hasActive;
+ active = document.activeElement;
+ isActive = $menu.is(active);
+ hasActive = $menu.has(active).length > 0;
+ if (_.isMsie() && (isActive || hasActive)) {
+ $e.preventDefault();
+ $e.stopImmediatePropagation();
+ _.defer(function() {
+ $input.focus();
+ });
+ }
+ });
+ $menu.on("mousedown.tt", function($e) {
+ $e.preventDefault();
+ });
+ },
+ _onSelectableClicked: function onSelectableClicked(type, $el) {
+ this.select($el);
+ },
+ _onDatasetCleared: function onDatasetCleared() {
+ this._updateHint();
+ },
+ _onDatasetRendered: function onDatasetRendered(type, dataset, suggestions, async) {
+ this._updateHint();
+ this.eventBus.trigger("render", suggestions, async, dataset);
+ },
+ _onAsyncRequested: function onAsyncRequested(type, dataset, query) {
+ this.eventBus.trigger("asyncrequest", query, dataset);
+ },
+ _onAsyncCanceled: function onAsyncCanceled(type, dataset, query) {
+ this.eventBus.trigger("asynccancel", query, dataset);
+ },
+ _onAsyncReceived: function onAsyncReceived(type, dataset, query) {
+ this.eventBus.trigger("asyncreceive", query, dataset);
+ },
+ _onFocused: function onFocused() {
+ this._minLengthMet() && this.menu.update(this.input.getQuery());
+ },
+ _onBlurred: function onBlurred() {
+ if (this.input.hasQueryChangedSinceLastFocus()) {
+ this.eventBus.trigger("change", this.input.getQuery());
+ }
+ },
+ _onEnterKeyed: function onEnterKeyed(type, $e) {
+ var $selectable;
+ if ($selectable = this.menu.getActiveSelectable()) {
+ this.select($selectable) && $e.preventDefault();
+ }
+ },
+ _onTabKeyed: function onTabKeyed(type, $e) {
+ var $selectable;
+ if ($selectable = this.menu.getActiveSelectable()) {
+ this.select($selectable) && $e.preventDefault();
+ } else if ($selectable = this.menu.getTopSelectable()) {
+ this.autocomplete($selectable) && $e.preventDefault();
+ }
+ },
+ _onEscKeyed: function onEscKeyed() {
+ this.close();
+ },
+ _onUpKeyed: function onUpKeyed() {
+ this.moveCursor(-1);
+ },
+ _onDownKeyed: function onDownKeyed() {
+ this.moveCursor(+1);
+ },
+ _onLeftKeyed: function onLeftKeyed() {
+ if (this.dir === "rtl" && this.input.isCursorAtEnd()) {
+ this.autocomplete(this.menu.getTopSelectable());
+ }
+ },
+ _onRightKeyed: function onRightKeyed() {
+ if (this.dir === "ltr" && this.input.isCursorAtEnd()) {
+ this.autocomplete(this.menu.getTopSelectable());
+ }
+ },
+ _onQueryChanged: function onQueryChanged(e, query) {
+ this._minLengthMet(query) ? this.menu.update(query) : this.menu.empty();
+ },
+ _onWhitespaceChanged: function onWhitespaceChanged() {
+ this._updateHint();
+ },
+ _onLangDirChanged: function onLangDirChanged(e, dir) {
+ if (this.dir !== dir) {
+ this.dir = dir;
+ this.menu.setLanguageDirection(dir);
+ }
+ },
+ _openIfActive: function openIfActive() {
+ this.isActive() && this.open();
+ },
+ _minLengthMet: function minLengthMet(query) {
+ query = _.isString(query) ? query : this.input.getQuery() || "";
+ return query.length >= this.minLength;
+ },
+ _updateHint: function updateHint() {
+ var $selectable, data, val, query, escapedQuery, frontMatchRegEx, match;
+ $selectable = this.menu.getTopSelectable();
+ data = this.menu.getSelectableData($selectable);
+ val = this.input.getInputValue();
+ if (data && !_.isBlankString(val) && !this.input.hasOverflow()) {
+ query = Input.normalizeQuery(val);
+ escapedQuery = _.escapeRegExChars(query);
+ frontMatchRegEx = new RegExp("^(?:" + escapedQuery + ")(.+$)", "i");
+ match = frontMatchRegEx.exec(data.val);
+ match && this.input.setHint(val + match[1]);
+ } else {
+ this.input.clearHint();
+ }
+ },
+ isEnabled: function isEnabled() {
+ return this.enabled;
+ },
+ enable: function enable() {
+ this.enabled = true;
+ },
+ disable: function disable() {
+ this.enabled = false;
+ },
+ isActive: function isActive() {
+ return this.active;
+ },
+ activate: function activate() {
+ if (this.isActive()) {
+ return true;
+ } else if (!this.isEnabled() || this.eventBus.before("active")) {
+ return false;
+ } else {
+ this.active = true;
+ this.eventBus.trigger("active");
+ return true;
+ }
+ },
+ deactivate: function deactivate() {
+ if (!this.isActive()) {
+ return true;
+ } else if (this.eventBus.before("idle")) {
+ return false;
+ } else {
+ this.active = false;
+ this.close();
+ this.eventBus.trigger("idle");
+ return true;
+ }
+ },
+ isOpen: function isOpen() {
+ return this.menu.isOpen();
+ },
+ open: function open() {
+ if (!this.isOpen() && !this.eventBus.before("open")) {
+ this.menu.open();
+ this._updateHint();
+ this.eventBus.trigger("open");
+ }
+ return this.isOpen();
+ },
+ close: function close() {
+ if (this.isOpen() && !this.eventBus.before("close")) {
+ this.menu.close();
+ this.input.clearHint();
+ this.input.resetInputValue();
+ this.eventBus.trigger("close");
+ }
+ return !this.isOpen();
+ },
+ setVal: function setVal(val) {
+ this.input.setQuery(_.toStr(val));
+ },
+ getVal: function getVal() {
+ return this.input.getQuery();
+ },
+ select: function select($selectable) {
+ var data = this.menu.getSelectableData($selectable);
+ if (data && !this.eventBus.before("select", data.obj)) {
+ this.input.setQuery(data.val, true);
+ this.eventBus.trigger("select", data.obj);
+ this.close();
+ return true;
+ }
+ return false;
+ },
+ autocomplete: function autocomplete($selectable) {
+ var query, data, isValid;
+ query = this.input.getQuery();
+ data = this.menu.getSelectableData($selectable);
+ isValid = data && query !== data.val;
+ if (isValid && !this.eventBus.before("autocomplete", data.obj)) {
+ this.input.setQuery(data.val);
+ this.eventBus.trigger("autocomplete", data.obj);
+ return true;
+ }
+ return false;
+ },
+ moveCursor: function moveCursor(delta) {
+ var query, $candidate, data, payload, cancelMove;
+ query = this.input.getQuery();
+ $candidate = this.menu.selectableRelativeToCursor(delta);
+ data = this.menu.getSelectableData($candidate);
+ payload = data ? data.obj : null;
+ cancelMove = this._minLengthMet() && this.menu.update(query);
+ if (!cancelMove && !this.eventBus.before("cursorchange", payload)) {
+ this.menu.setCursor($candidate);
+ if (data) {
+ this.input.setInputValue(data.val);
+ } else {
+ this.input.resetInputValue();
+ this._updateHint();
+ }
+ this.eventBus.trigger("cursorchange", payload);
+ return true;
+ }
+ return false;
+ },
+ destroy: function destroy() {
+ this.input.destroy();
+ this.menu.destroy();
+ }
+ });
+ return Typeahead;
+ function c(ctx) {
+ var methods = [].slice.call(arguments, 1);
+ return function() {
+ var args = [].slice.call(arguments);
+ _.each(methods, function(method) {
+ return ctx[method].apply(ctx, args);
+ });
+ };
+ }
+ }();
+ (function() {
+ "use strict";
+ var old, keys, methods;
+ old = $.fn.typeahead;
+ keys = {
+ www: "tt-www",
+ attrs: "tt-attrs",
+ typeahead: "tt-typeahead"
+ };
+ methods = {
+ initialize: function initialize(o, datasets) {
+ var www;
+ datasets = _.isArray(datasets) ? datasets : [].slice.call(arguments, 1);
+ o = o || {};
+ www = WWW(o.classNames);
+ return this.each(attach);
+ function attach() {
+ var $input, $wrapper, $hint, $menu, defaultHint, defaultMenu, eventBus, input, menu, typeahead, MenuConstructor;
+ _.each(datasets, function(d) {
+ d.highlight = !!o.highlight;
+ });
+ $input = $(this);
+ $wrapper = $(www.html.wrapper);
+ $hint = $elOrNull(o.hint);
+ $menu = $elOrNull(o.menu);
+ defaultHint = o.hint !== false && !$hint;
+ defaultMenu = o.menu !== false && !$menu;
+ defaultHint && ($hint = buildHintFromInput($input, www));
+ defaultMenu && ($menu = $(www.html.menu).css(www.css.menu));
+ $hint && $hint.val("");
+ $input = prepInput($input, www);
+ if (defaultHint || defaultMenu) {
+ $wrapper.css(www.css.wrapper);
+ $input.css(defaultHint ? www.css.input : www.css.inputWithNoHint);
+ $input.wrap($wrapper).parent().prepend(defaultHint ? $hint : null).append(defaultMenu ? $menu : null);
+ }
+ MenuConstructor = defaultMenu ? DefaultMenu : Menu;
+ eventBus = new EventBus({
+ el: $input
+ });
+ input = new Input({
+ hint: $hint,
+ input: $input
+ }, www);
+ menu = new MenuConstructor({
+ node: $menu,
+ datasets: datasets
+ }, www);
+ typeahead = new Typeahead({
+ input: input,
+ menu: menu,
+ eventBus: eventBus,
+ minLength: o.minLength
+ }, www);
+ $input.data(keys.www, www);
+ $input.data(keys.typeahead, typeahead);
+ }
+ },
+ isEnabled: function isEnabled() {
+ var enabled;
+ ttEach(this.first(), function(t) {
+ enabled = t.isEnabled();
+ });
+ return enabled;
+ },
+ enable: function enable() {
+ ttEach(this, function(t) {
+ t.enable();
+ });
+ return this;
+ },
+ disable: function disable() {
+ ttEach(this, function(t) {
+ t.disable();
+ });
+ return this;
+ },
+ isActive: function isActive() {
+ var active;
+ ttEach(this.first(), function(t) {
+ active = t.isActive();
+ });
+ return active;
+ },
+ activate: function activate() {
+ ttEach(this, function(t) {
+ t.activate();
+ });
+ return this;
+ },
+ deactivate: function deactivate() {
+ ttEach(this, function(t) {
+ t.deactivate();
+ });
+ return this;
+ },
+ isOpen: function isOpen() {
+ var open;
+ ttEach(this.first(), function(t) {
+ open = t.isOpen();
+ });
+ return open;
+ },
+ open: function open() {
+ ttEach(this, function(t) {
+ t.open();
+ });
+ return this;
+ },
+ close: function close() {
+ ttEach(this, function(t) {
+ t.close();
+ });
+ return this;
+ },
+ select: function select(el) {
+ var success = false, $el = $(el);
+ ttEach(this.first(), function(t) {
+ success = t.select($el);
+ });
+ return success;
+ },
+ autocomplete: function autocomplete(el) {
+ var success = false, $el = $(el);
+ ttEach(this.first(), function(t) {
+ success = t.autocomplete($el);
+ });
+ return success;
+ },
+        moveCursor: function moveCursor(delta) {
+ var success = false;
+ ttEach(this.first(), function(t) {
+ success = t.moveCursor(delta);
+ });
+ return success;
+ },
+ val: function val(newVal) {
+ var query;
+ if (!arguments.length) {
+ ttEach(this.first(), function(t) {
+ query = t.getVal();
+ });
+ return query;
+ } else {
+ ttEach(this, function(t) {
+ t.setVal(newVal);
+ });
+ return this;
+ }
+ },
+ destroy: function destroy() {
+ ttEach(this, function(typeahead, $input) {
+ revert($input);
+ typeahead.destroy();
+ });
+ return this;
+ }
+ };
+ $.fn.typeahead = function(method) {
+ if (methods[method]) {
+ return methods[method].apply(this, [].slice.call(arguments, 1));
+ } else {
+ return methods.initialize.apply(this, arguments);
+ }
+ };
+ $.fn.typeahead.noConflict = function noConflict() {
+ $.fn.typeahead = old;
+ return this;
+ };
+ function ttEach($els, fn) {
+ $els.each(function() {
+ var $input = $(this), typeahead;
+ (typeahead = $input.data(keys.typeahead)) && fn(typeahead, $input);
+ });
+ }
+ function buildHintFromInput($input, www) {
+ return $input.clone().addClass(www.classes.hint).removeData().css(www.css.hint).css(getBackgroundStyles($input)).prop("readonly", true).removeAttr("id name placeholder required").attr({
+ autocomplete: "off",
+ spellcheck: "false",
+ tabindex: -1
+ });
+ }
+ function prepInput($input, www) {
+ $input.data(keys.attrs, {
+ dir: $input.attr("dir"),
+ autocomplete: $input.attr("autocomplete"),
+ spellcheck: $input.attr("spellcheck"),
+ style: $input.attr("style")
+ });
+ $input.addClass(www.classes.input).attr({
+ autocomplete: "off",
+ spellcheck: false
+ });
+ try {
+ !$input.attr("dir") && $input.attr("dir", "auto");
+ } catch (e) {}
+ return $input;
+ }
+ function getBackgroundStyles($el) {
+ return {
+ backgroundAttachment: $el.css("background-attachment"),
+ backgroundClip: $el.css("background-clip"),
+ backgroundColor: $el.css("background-color"),
+ backgroundImage: $el.css("background-image"),
+ backgroundOrigin: $el.css("background-origin"),
+ backgroundPosition: $el.css("background-position"),
+ backgroundRepeat: $el.css("background-repeat"),
+ backgroundSize: $el.css("background-size")
+ };
+ }
+ function revert($input) {
+ var www, $wrapper;
+ www = $input.data(keys.www);
+ $wrapper = $input.parent().filter(www.selectors.wrapper);
+ _.each($input.data(keys.attrs), function(val, key) {
+ _.isUndefined(val) ? $input.removeAttr(key) : $input.attr(key, val);
+ });
+            $input.removeData(keys.typeahead).removeData(keys.www).removeData(keys.attrs).removeClass(www.classes.input);
+ if ($wrapper.length) {
+ $input.detach().insertAfter($wrapper);
+ $wrapper.remove();
+ }
+ }
+ function $elOrNull(obj) {
+ var isValid, $el;
+ isValid = _.isJQuery(obj) || _.isElement(obj);
+ $el = isValid ? $(obj).first() : [];
+ return $el.length ? $el : null;
+ }
+ })();
+});
\ No newline at end of file
--- /dev/null
+__all__ = [
+ 'arguments',
+ 'example',
+ 'keyword',
+ 'seealso',
+ 'table',
+ 'underline'
+]
+
+
+class Parser:
+ def __init__(self, pctxt):
+ self.pctxt = pctxt
+
+ def parse(self, line):
+ return line
+
+class PContext:
+ def __init__(self, templates = None):
+ self.set_content_list([])
+ self.templates = templates
+
+ def set_content(self, content):
+ self.set_content_list(content.split("\n"))
+
+ def set_content_list(self, content):
+ self.lines = content
+ self.nblines = len(self.lines)
+ self.i = 0
+ self.stop = False
+
+ def get_lines(self):
+ return self.lines
+
+ def eat_lines(self):
+ count = 0
+ while self.has_more_lines() and self.lines[self.i].strip():
+ count += 1
+ self.next()
+ return count
+
+ def eat_empty_lines(self):
+ count = 0
+ while self.has_more_lines() and not self.lines[self.i].strip():
+ count += 1
+ self.next()
+ return count
+
+ def next(self, count=1):
+ self.i += count
+
+ def has_more_lines(self, offset=0):
+ return self.i + offset < self.nblines
+
+ def get_line(self, offset=0):
+ return self.lines[self.i + offset].rstrip()
+
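PContext above is a small line cursor over the document being converted; its line-eating behaviour can be sketched standalone (an illustrative reimplementation, not the module itself):

```python
class LineCursor:
    # Minimal sketch of PContext's cursor behaviour.
    def __init__(self, content):
        self.lines = content.split("\n")
        self.i = 0

    def has_more_lines(self, offset=0):
        return self.i + offset < len(self.lines)

    def eat_empty_lines(self):
        # Advance past blank lines, returning how many were skipped.
        count = 0
        while self.has_more_lines() and not self.lines[self.i].strip():
            count += 1
            self.i += 1
        return count

cur = LineCursor("first\n\n\nsecond")
cur.i = 1                      # cursor sits on the first blank line
skipped = cur.eat_empty_lines()
# skipped == 2, cursor now points at "second"
```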
+
+# Get the indentation of a line
+def get_indent(line):
+ indent = 0
+ length = len(line)
+ while indent < length and line[indent] == ' ':
+ indent += 1
+ return indent
+
+
+# Remove unneeded indentation
+def remove_indent(lines):
+    # Detect the minimum indentation in the list
+    min_indent = -1
+    for line in lines:
+        if not line.strip():
+            continue
+        indent = get_indent(line)
+        if min_indent < 0 or indent < min_indent:
+            min_indent = indent
+    # Realign the list content to remove the minimum indentation
+    if min_indent > 0:
+        for index, line in enumerate(lines):
+            lines[index] = line[min_indent:]
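The two indentation helpers above can be exercised in isolation; a standalone sketch of the same logic (reimplemented here for illustration):

```python
def get_indent(line):
    # Count leading spaces, as the helper above does.
    return len(line) - len(line.lstrip(' '))

def remove_indent(lines):
    # Shift every line left by the smallest indentation found on a
    # non-blank line, mutating the list in place like the helper above.
    min_indent = min((get_indent(l) for l in lines if l.strip()), default=0)
    if min_indent > 0:
        for i, l in enumerate(lines):
            lines[i] = l[min_indent:]

block = ["    foo", "      bar", "", "    baz"]
remove_indent(block)
# block is now ["foo", "  bar", "", "baz"]
```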
--- /dev/null
+import sys
+import re
+import parser
+
+'''
+TODO: Allow inner data parsing (this will allow parsing the examples provided in an arguments block)
+'''
+class Parser(parser.Parser):
+ def __init__(self, pctxt):
+ parser.Parser.__init__(self, pctxt)
+ #template = pctxt.templates.get_template("parser/arguments.tpl")
+ #self.replace = template.render().strip()
+
+ def parse(self, line):
+ #return re.sub(r'(Arguments *:)', self.replace, line)
+ pctxt = self.pctxt
+
+ result = re.search(r'(Arguments? *:)', line)
+ if result:
+ label = result.group(0)
+ content = []
+
+ desc_indent = False
+ desc = re.sub(r'.*Arguments? *:', '', line).strip()
+
+ indent = parser.get_indent(line)
+
+ pctxt.next()
+ pctxt.eat_empty_lines()
+
+ arglines = []
+ if desc != "none":
+ add_empty_lines = 0
+ while pctxt.has_more_lines() and (parser.get_indent(pctxt.get_line()) > indent):
+ for j in xrange(0, add_empty_lines):
+ arglines.append("")
+ arglines.append(pctxt.get_line())
+ pctxt.next()
+ add_empty_lines = pctxt.eat_empty_lines()
+ '''
+ print line
+
+ if parser.get_indent(line) == arg_indent:
+ argument = re.sub(r' *([^ ]+).*', r'\1', line)
+ if argument:
+ #content.append("<b>%s</b>" % argument)
+ arg_desc = [line.replace(argument, " " * len(self.unescape(argument)), 1)]
+ #arg_desc = re.sub(r'( *)([^ ]+)(.*)', r'\1<b>\2</b>\3', line)
+ arg_desc_indent = parser.get_indent(arg_desc[0])
+ arg_desc[0] = arg_desc[0][arg_indent:]
+ pctxt.next()
+ add_empty_lines = 0
+                    while pctxt.has_more_lines() and parser.get_indent(pctxt.get_line()) >= arg_indent:
+ for i in xrange(0, add_empty_lines):
+ arg_desc.append("")
+ arg_desc.append(pctxt.get_line()[arg_indent:])
+ pctxt.next()
+ add_empty_lines = pctxt.eat_empty_lines()
+                # TODO : reduce space at the beginning
+ content.append({
+ 'name': argument,
+ 'desc': arg_desc
+ })
+ '''
+
+ if arglines:
+ new_arglines = []
+ #content = self.parse_args(arglines)
+ parser.remove_indent(arglines)
+ '''
+ pctxt2 = parser.PContext(pctxt.templates)
+ pctxt2.set_content_list(arglines)
+ while pctxt2.has_more_lines():
+ new_arglines.append(parser.example.Parser(pctxt2).parse(pctxt2.get_line()))
+ pctxt2.next()
+ arglines = new_arglines
+ '''
+
+ pctxt.stop = True
+
+ template = pctxt.templates.get_template("parser/arguments.tpl")
+ return template.render(
+ pctxt=pctxt,
+ label=label,
+ desc=desc,
+ content=arglines
+ #content=content
+ )
+ return line
+
+ return line
+
+'''
+ def parse_args(self, data):
+ args = []
+
+ pctxt = parser.PContext()
+ pctxt.set_content_list(data)
+
+ while pctxt.has_more_lines():
+ line = pctxt.get_line()
+ arg_indent = parser.get_indent(line)
+ argument = re.sub(r' *([^ ]+).*', r'\1', line)
+ if True or argument:
+ arg_desc = []
+ trailing_desc = line.replace(argument, " " * len(self.unescape(argument)), 1)[arg_indent:]
+ if trailing_desc.strip():
+ arg_desc.append(trailing_desc)
+ pctxt.next()
+ add_empty_lines = 0
+ while pctxt.has_more_lines() and parser.get_indent(pctxt.get_line()) > arg_indent:
+ for i in xrange(0, add_empty_lines):
+ arg_desc.append("")
+ arg_desc.append(pctxt.get_line()[arg_indent:])
+ pctxt.next()
+ add_empty_lines = pctxt.eat_empty_lines()
+
+ parser.remove_indent(arg_desc)
+
+ args.append({
+ 'name': argument,
+ 'desc': arg_desc
+ })
+ return args
+
+ def unescape(self, s):
+        s = s.replace("&lt;", "<")
+        s = s.replace("&gt;", ">")
+        # this has to be last:
+        s = s.replace("&amp;", "&")
+ return s
+'''
--- /dev/null
+import re
+import parser
+
+# Detect examples blocks
+class Parser(parser.Parser):
+ def __init__(self, pctxt):
+ parser.Parser.__init__(self, pctxt)
+ template = pctxt.templates.get_template("parser/example/comment.tpl")
+ self.comment = template.render(pctxt=pctxt).strip()
+
+
+ def parse(self, line):
+ pctxt = self.pctxt
+
+ result = re.search(r'^ *(Examples? *:)(.*)', line)
+ if result:
+ label = result.group(1)
+
+ desc_indent = False
+ desc = result.group(2).strip()
+
+ # Some examples have a description
+ if desc:
+ desc_indent = len(line) - len(desc)
+
+ indent = parser.get_indent(line)
+
+ if desc:
+ # And some description are on multiple lines
+ while pctxt.get_line(1) and parser.get_indent(pctxt.get_line(1)) == desc_indent:
+ desc += " " + pctxt.get_line(1).strip()
+ pctxt.next()
+
+ pctxt.next()
+ add_empty_line = pctxt.eat_empty_lines()
+
+ content = []
+
+ if parser.get_indent(pctxt.get_line()) > indent:
+ if desc:
+ desc = desc[0].upper() + desc[1:]
+ add_empty_line = 0
+ while pctxt.has_more_lines() and ((not pctxt.get_line()) or (parser.get_indent(pctxt.get_line()) > indent)):
+ if pctxt.get_line():
+ for j in xrange(0, add_empty_line):
+ content.append("")
+
+ content.append(re.sub(r'(#.*)$', self.comment, pctxt.get_line()))
+ add_empty_line = 0
+ else:
+ add_empty_line += 1
+ pctxt.next()
+ elif parser.get_indent(pctxt.get_line()) == indent:
+ # Simple example that can't have empty lines
+ if add_empty_line and desc:
+ # This means that the example was on the same line as the 'Example' tag
+ # and was not a description
+ content.append(" " * indent + desc)
+ desc = False
+ else:
+ while pctxt.has_more_lines() and (parser.get_indent(pctxt.get_line()) >= indent):
+ content.append(pctxt.get_line())
+ pctxt.next()
+ pctxt.eat_empty_lines() # Skip empty remaining lines
+
+ pctxt.stop = True
+
+ parser.remove_indent(content)
+
+ template = pctxt.templates.get_template("parser/example.tpl")
+ return template.render(
+ pctxt=pctxt,
+ label=label,
+ desc=desc,
+ content=content
+ )
+ return line
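The detector above keys off an "Example :" label with an optional trailing description; the label/description split can be checked standalone with the same regular expression:

```python
import re

# Same label pattern as the example parser above.
line = "  Example : listen http_proxy 0.0.0.0:80"
result = re.search(r'^ *(Examples? *:)(.*)', line)
label = result.group(1)
desc = result.group(2).strip()
# label == "Example :", desc == "listen http_proxy 0.0.0.0:80"
```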
--- /dev/null
+import re
+import parser
+from urllib import quote
+
+class Parser(parser.Parser):
+ def __init__(self, pctxt):
+ parser.Parser.__init__(self, pctxt)
+ self.keywordPattern = re.compile(r'^(%s%s)(%s)' % (
+ '([a-z][a-z0-9\-\+_\.]*[a-z0-9\-\+_)])', # keyword
+ '( [a-z0-9\-_]+)*', # subkeywords
+ '(\([^ ]*\))?', # arg (ex: (<backend>), (<frontend>/<backend>), (<offset1>,<length>[,<offset2>]) ...
+ ))
+
+ def parse(self, line):
+ pctxt = self.pctxt
+ keywords = pctxt.keywords
+ keywordsCount = pctxt.keywordsCount
+ chapters = pctxt.chapters
+
+ res = ""
+
+ if line != "" and not re.match(r'^ ', line):
+ parsed = self.keywordPattern.match(line)
+ if parsed != None:
+ keyword = parsed.group(1)
+ arg = parsed.group(4)
+ parameters = line[len(keyword) + len(arg):]
+ if (parameters != "" and not re.match("^ +((<|\[|\{|/).*|(: [a-z +]+))?(\(deprecated\))?$", parameters)):
+ # Dirty hack
+                    # - parameters should only start with one of the characters "<", "[", "{", "/"
+                    # - or a colon (":") followed by alpha keywords identifying sample fetches (optionally separated by the character "+")
+ # - or the string "(deprecated)" at the end
+ keyword = False
+ else:
+ splitKeyword = keyword.split(" ")
+
+ parameters = arg + parameters
+ else:
+ keyword = False
+
+ if keyword and (len(splitKeyword) <= 5):
+ toplevel = pctxt.details["toplevel"]
+ for j in xrange(0, len(splitKeyword)):
+ subKeyword = " ".join(splitKeyword[0:j + 1])
+ if subKeyword != "no":
+ if not subKeyword in keywords:
+ keywords[subKeyword] = set()
+ keywords[subKeyword].add(pctxt.details["chapter"])
+ res += '<a class="anchor" name="%s"></a>' % subKeyword
+ res += '<a class="anchor" name="%s-%s"></a>' % (toplevel, subKeyword)
+ res += '<a class="anchor" name="%s-%s"></a>' % (pctxt.details["chapter"], subKeyword)
+ res += '<a class="anchor" name="%s (%s)"></a>' % (subKeyword, chapters[toplevel]['title'])
+ res += '<a class="anchor" name="%s (%s)"></a>' % (subKeyword, chapters[pctxt.details["chapter"]]['title'])
+
+ deprecated = parameters.find("(deprecated)")
+ if deprecated != -1:
+ prefix = ""
+ suffix = ""
+ parameters = parameters.replace("(deprecated)", '<span class="label label-warning">(deprecated)</span>')
+ else:
+ prefix = ""
+ suffix = ""
+
+ nextline = pctxt.get_line(1)
+
+ while nextline.startswith(" "):
+ # Found parameters on the next line
+ parameters += "\n" + nextline
+ pctxt.next()
+ if pctxt.has_more_lines(1):
+ nextline = pctxt.get_line(1)
+ else:
+ nextline = ""
+
+
+ parameters = self.colorize(parameters)
+ res += '<div class="keyword">%s<b><a class="anchor" name="%s"></a><a href="#%s">%s</a></b>%s%s</div>' % (prefix, keyword, quote("%s-%s" % (pctxt.details["chapter"], keyword)), keyword, parameters, suffix)
+ pctxt.next()
+ pctxt.stop = True
+ elif line.startswith("/*"):
+ # Skip comments in the documentation
+ while not pctxt.get_line().endswith("*/"):
+ pctxt.next()
+ pctxt.next()
+ else:
+ # This is probably not a keyword but a text, ignore it
+ res += line
+ else:
+ res += line
+
+ return res
+
+ # Used to colorize keywords parameters
+ # TODO : use CSS styling
+ def colorize(self, text):
+ colorized = ""
+ tags = [
+ [ "[" , "]" , "#008" ],
+ [ "{" , "}" , "#800" ],
+ [ "<", ">", "#080" ],
+ ]
+ heap = []
+ pos = 0
+ while pos < len(text):
+ substring = text[pos:]
+ found = False
+ for tag in tags:
+ if substring.startswith(tag[0]):
+ # Opening tag
+ heap.append(tag)
+ colorized += '<span style="color: %s">%s' % (tag[2], substring[0:len(tag[0])])
+ pos += len(tag[0])
+ found = True
+ break
+ elif substring.startswith(tag[1]):
+ # Closing tag
+
+ # pop opening tags until the corresponding one is found
+ openingTag = False
+ while heap and openingTag != tag:
+ openingTag = heap.pop()
+ if openingTag != tag:
+ colorized += '</span>'
+ # all intermediate tags are now closed, we can display the tag
+ colorized += substring[0:len(tag[1])]
+                    # and then close it if it was previously opened
+ if openingTag == tag:
+ colorized += '</span>'
+ pos += len(tag[1])
+ found = True
+ break
+ if not found:
+ colorized += substring[0]
+ pos += 1
+ # close all unterminated tags
+ while heap:
+ tag = heap.pop()
+ colorized += '</span>'
+
+ return colorized
+
+
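The keyword pattern built in __init__ above captures the keyword (group 1) and its optional parenthesised argument (group 4); reassembling it standalone shows both cases (the sample lines here are illustrative):

```python
import re

# Same pattern as the keyword parser above, reassembled for a standalone check.
keywordPattern = re.compile(r'^(%s%s)(%s)' % (
    r'([a-z][a-z0-9\-\+_\.]*[a-z0-9\-\+_)])',  # keyword
    r'( [a-z0-9\-_]+)*',                       # subkeywords
    r'(\([^ ]*\))?',                           # optional argument
))

# A keyword followed by a subkeyword: the argument group is empty.
m1 = keywordPattern.match("timeout connect <timeout>")
# A sample fetch with a parenthesised argument.
m2 = keywordPattern.match("capture.req.hdr(<idx>) : string")
```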
--- /dev/null
+import re
+import parser
+
+class Parser(parser.Parser):
+ def parse(self, line):
+ pctxt = self.pctxt
+
+ result = re.search(r'(See also *:)', line)
+ if result:
+ label = result.group(0)
+
+ desc = re.sub(r'.*See also *:', '', line).strip()
+
+ indent = parser.get_indent(line)
+
+ # Some descriptions are on multiple lines
+ while pctxt.has_more_lines(1) and parser.get_indent(pctxt.get_line(1)) >= indent:
+ desc += " " + pctxt.get_line(1).strip()
+ pctxt.next()
+
+ pctxt.eat_empty_lines()
+ pctxt.next()
+ pctxt.stop = True
+
+ template = pctxt.templates.get_template("parser/seealso.tpl")
+ return template.render(
+ pctxt=pctxt,
+ label=label,
+ desc=desc,
+ )
+
+ return line
--- /dev/null
+import re
+import sys
+import parser
+
+class Parser(parser.Parser):
+ def __init__(self, pctxt):
+ parser.Parser.__init__(self, pctxt)
+ self.table1Pattern = re.compile(r'^ *(-+\+)+-+')
+ self.table2Pattern = re.compile(r'^ *\+(-+\+)+')
+
+ def parse(self, line):
+ global document, keywords, keywordsCount, chapters, keyword_conflicts
+
+ pctxt = self.pctxt
+
+ if pctxt.context['headers']['subtitle'] != 'Configuration Manual':
+ # Quick exit
+ return line
+ elif pctxt.details['chapter'] == "4":
+ # BUG: the matrix in chapter 4. Proxies is not well displayed, we skip this chapter
+ return line
+
+ if pctxt.has_more_lines(1):
+ nextline = pctxt.get_line(1)
+ else:
+ nextline = ""
+
+ if self.table1Pattern.match(nextline):
+ # activate table rendering only for the Configuration Manual
+ lineSeparator = nextline
+ nbColumns = nextline.count("+") + 1
+ extraColumns = 0
+ print >> sys.stderr, "Entering table mode (%d columns)" % nbColumns
+ table = []
+ if line.find("|") != -1:
+ row = []
+ while pctxt.has_more_lines():
+ line = pctxt.get_line()
+ if pctxt.has_more_lines(1):
+ nextline = pctxt.get_line(1)
+ else:
+ nextline = ""
+ if line == lineSeparator:
+ # New row
+ table.append(row)
+ row = []
+ if nextline.find("|") == -1:
+ break # End of table
+ else:
+ # Data
+ columns = line.split("|")
+ for j in xrange(0, len(columns)):
+ try:
+ if row[j]:
+ row[j] += "<br />"
+ row[j] += columns[j].strip()
+                            except IndexError:
+ row.append(columns[j].strip())
+ pctxt.next()
+ else:
+ row = []
+ headers = nextline
+ while pctxt.has_more_lines():
+ line = pctxt.get_line()
+ if pctxt.has_more_lines(1):
+ nextline = pctxt.get_line(1)
+ else:
+ nextline = ""
+
+ if nextline == "":
+ if row: table.append(row)
+ break # End of table
+
+ if (line != lineSeparator) and (line[0] != "-"):
+ start = 0
+
+ if row and not line.startswith(" "):
+ # Row is complete, parse a new one
+ table.append(row)
+ row = []
+
+ tmprow = []
+ while start != -1:
+ end = headers.find("+", start)
+ if end == -1:
+ end = len(headers)
+
+ realend = end
+ if realend == len(headers):
+ realend = len(line)
+ else:
+ while realend < len(line) and line[realend] != " ":
+ realend += 1
+ end += 1
+
+ tmprow.append(line[start:realend])
+
+ start = end + 1
+ if start >= len(headers):
+ start = -1
+ for j in xrange(0, nbColumns):
+ try:
+ row[j] += tmprow[j].strip()
+                            except IndexError:
+ row.append(tmprow[j].strip())
+
+ deprecated = row[0].endswith("(deprecated)")
+ if deprecated:
+ row[0] = row[0][: -len("(deprecated)")].rstrip()
+
+ nooption = row[1].startswith("(*)")
+ if nooption:
+ row[1] = row[1][len("(*)"):].strip()
+
+ if deprecated or nooption:
+ extraColumns = 1
+ extra = ""
+ if deprecated:
+ extra += '<span class="label label-warning">(deprecated)</span>'
+ if nooption:
+ extra += '<span>(*)</span>'
+ row.append(extra)
+
+ pctxt.next()
+ print >> sys.stderr, "Leaving table mode"
+ pctxt.next() # skip useless next line
+ pctxt.stop = True
+
+ return self.renderTable(table, nbColumns, pctxt.details["toplevel"])
+ # elif self.table2Pattern.match(line):
+ # return self.parse_table_format2()
+ elif line.find("May be used in sections") != -1:
+ nextline = pctxt.get_line(1)
+ rows = []
+ headers = line.split(":")
+ rows.append(headers[1].split("|"))
+ rows.append(nextline.split("|"))
+ table = {
+ "rows": rows,
+ "title": headers[0]
+ }
+ pctxt.next(2) # skip this previous table
+ pctxt.stop = True
+
+ return self.renderTable(table)
+
+ return line
+
+
+ def parse_table_format2(self):
+ pctxt = self.pctxt
+
+ linesep = pctxt.get_line()
+ rows = []
+
+ pctxt.next()
+ maxcols = 0
+ while pctxt.get_line().strip().startswith("|"):
+ row = pctxt.get_line().strip()[1:-1].split("|")
+ rows.append(row)
+ maxcols = max(maxcols, len(row))
+ pctxt.next()
+ if pctxt.get_line() == linesep:
+ # TODO : find a way to define a special style for next row
+ pctxt.next()
+ pctxt.stop = True
+
+ return self.renderTable(rows, maxcols)
+
+ # Render tables detected by the conversion parser
+ def renderTable(self, table, maxColumns = 0, toplevel = None):
+ pctxt = self.pctxt
+ template = pctxt.templates.get_template("parser/table.tpl")
+
+ res = ""
+
+ title = None
+ if isinstance(table, dict):
+ title = table["title"]
+ table = table["rows"]
+
+ if not maxColumns:
+ maxColumns = len(table[0])
+
+ rows = []
+
+ mode = "th"
+ headerLine = ""
+ hasKeywords = False
+ i = 0
+ for row in table:
+ line = ""
+
+ if i == 0:
+ row_template = pctxt.templates.get_template("parser/table/header.tpl")
+ else:
+ row_template = pctxt.templates.get_template("parser/table/row.tpl")
+
+ if i > 1 and (i - 1) % 20 == 0 and len(table) > 50:
+ # Repeat headers periodically for long tables
+ rows.append(headerLine)
+
+ j = 0
+ cols = []
+ for column in row:
+ if j >= maxColumns:
+ break
+
+ tplcol = {}
+
+ data = column.strip()
+ keyword = column
+ if j == 0 and i == 0 and keyword == 'keyword':
+ hasKeywords = True
+ if j == 0 and i != 0 and hasKeywords:
+ if keyword.startswith("[no] "):
+ keyword = keyword[len("[no] "):]
+ tplcol['toplevel'] = toplevel
+ tplcol['keyword'] = keyword
+ tplcol['extra'] = []
+ if j == 0 and len(row) > maxColumns:
+ for k in xrange(maxColumns, len(row)):
+ tplcol['extra'].append(row[k])
+ tplcol['data'] = data
+ cols.append(tplcol)
+ j += 1
+ mode = "td"
+
+ line = row_template.render(
+ pctxt=pctxt,
+ columns=cols
+ ).strip()
+ if i == 0:
+ headerLine = line
+
+ rows.append(line)
+
+ i += 1
+
+ return template.render(
+ pctxt=pctxt,
+ title=title,
+ rows=rows,
+ )
--- /dev/null
+import parser
+
+class Parser(parser.Parser):
+ # Detect underlines
+ def parse(self, line):
+ pctxt = self.pctxt
+ if pctxt.has_more_lines(1):
+ nextline = pctxt.get_line(1)
+ if (len(line) > 0) and (len(nextline) > 0) and (nextline[0] == '-') and ("-" * len(line) == nextline):
+ template = pctxt.templates.get_template("parser/underline.tpl")
+ line = template.render(pctxt=pctxt, data=line).strip()
+ pctxt.next(2)
+ pctxt.eat_empty_lines()
+ pctxt.stop = True
+
+ return line
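The check above treats a line as underlined only when the next line is a dash run of exactly the same length; the predicate is equivalent to this standalone sketch:

```python
def is_underlined(line, nextline):
    # A heading: non-empty text followed by a dash run of equal length.
    return bool(line) and bool(nextline) \
        and nextline[0] == '-' and "-" * len(line) == nextline

ok = is_underlined("Summary", "-------")       # True
too_short = is_underlined("Summary", "----")   # False
```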
--- /dev/null
+<div class="separator">
+<span class="label label-info">${label}</span>\
+% if desc:
+ ${desc}
+% endif
+% if content:
+<pre class="prettyprint arguments">${"\n".join(content)}</pre>
+% endif
+</div>
--- /dev/null
+<div class="separator">
+<span class="label label-success">${label}</span>
+<pre class="prettyprint">
+% if desc:
+<div class="example-desc">${desc}</div>\
+% endif
+<code>\
+% for line in content:
+${line}
+% endfor
+</code></pre>
+</div>
\ No newline at end of file
--- /dev/null
+<span class="comment">\1</span>
\ No newline at end of file
--- /dev/null
+<div class="page-header"><b>${label}</b> ${desc}</div>
--- /dev/null
+% if title:
+<div><p>${title} :</p>\
+% endif
+<table class="table table-bordered" border="0" cellspacing="0" cellpadding="0">
+% for row in rows:
+${row}
+% endfor
+</table>\
+% if title:
+</div>
+% endif
\ No newline at end of file
--- /dev/null
+<thead><tr>\
+% for col in columns:
+<% data = col['data'] %>\
+<th>${data}</th>\
+% endfor
+</tr></thead>
--- /dev/null
+<% from urllib import quote %>
+<% base = pctxt.context['base'] %>
+<tr>\
+% for col in columns:
+<% data = col['data'] %>\
+<%
+ if data in ['yes']:
+ style = "class=\"alert-success pagination-centered\""
+ data = 'yes<br /><img src="%scss/check.png" alt="yes" title="yes" />' % base
+ elif data in ['no']:
+ style = "class=\"alert-error pagination-centered\""
+ data = 'no<br /><img src="%scss/cross.png" alt="no" title="no" />' % base
+ elif data in ['X']:
+ style = "class=\"pagination-centered\""
+ data = '<img src="%scss/check.png" alt="X" title="yes" />' % base
+ elif data in ['-']:
+ style = "class=\"pagination-centered\""
+        data = '&nbsp;'
+ elif data in ['*']:
+ style = "class=\"pagination-centered\""
+ else:
+ style = None
+%>\
+<td ${style}>\
+% if "keyword" in col:
+<a href="#${quote("%s-%s" % (col['toplevel'], col['keyword']))}">\
+% for extra in col['extra']:
+<span class="pull-right">${extra}</span>\
+% endfor
+${data}</a>\
+% else:
+${data}\
+% endif
+</td>\
+% endfor
+</tr>
--- /dev/null
+<h5>${data}</h5>
--- /dev/null
+<a class="anchor" id="summary" name="summary"></a>
+<div class="page-header">
+ <h1 id="chapter-summary" data-target="summary">Summary</h1>
+</div>
+<div class="row">
+ <div class="col-md-6">
+ <% previousLevel = None %>
+ % for k in chapterIndexes:
+ <% chapter = chapters[k] %>
+ % if chapter['title']:
+ <%
+ if chapter['level'] == 1:
+ otag = "<b>"
+ etag = "</b>"
+ else:
+ otag = etag = ""
+ %>
+ % if chapter['chapter'] == '7':
+ ## Quick and dirty hack to split the summary in 2 columns
+ ## TODO : implement a generic way split the summary
+ </div><div class="col-md-6">
+ <% previousLevel = None %>
+ % endif
+ % if otag and previousLevel:
+ <br />
+ % endif
+ <div class="row">
+ <div class="col-md-2 pagination-right noheight">${otag}<small>${chapter['chapter']}.</small>${etag}</div>
+ <div class="col-md-10 noheight">
+ % for tab in range(1, chapter['level']):
+ <div class="tab">
+ % endfor
+ <a href="#${chapter['chapter']}">${otag}${chapter['title']}${etag}</a>
+ % for tab in range(1, chapter['level']):
+ </div>
+ % endfor
+ </div>
+ </div>
+ <% previousLevel = chapter['level'] %>
+ % endif
+ % endfor
+ </div>
+</div>
--- /dev/null
+<!DOCTYPE html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8" />
+ <title>${headers['title']} ${headers['version']} - ${headers['subtitle']}</title>
+ <link href="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.1.1/css/bootstrap.min.css" rel="stylesheet" />
+ <link href="${base}css/page.css?${version}" rel="stylesheet" />
+ </head>
+ <body>
+ <nav class="navbar navbar-default navbar-fixed-top" role="navigation">
+ <div class="navbar-header">
+ <button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#menu">
+ <span class="sr-only">Toggle navigation</span>
+ <span class="icon-bar"></span>
+ <span class="icon-bar"></span>
+ <span class="icon-bar"></span>
+ </button>
+ <a class="navbar-brand" href="${base}index.html">${headers['title']} <small>${headers['subtitle']}</small></a>
+ </div>
+ <!-- /.navbar-header -->
+
+ <!-- Collect the nav links, forms, and other content for toggling -->
+ <div class="collapse navbar-collapse" id="menu">
+ <ul class="nav navbar-nav">
+ <li><a href="http://www.haproxy.org/">HAProxy home page</a></li>
+ <li class="dropdown">
+ <a href="#" class="dropdown-toggle" data-toggle="dropdown">Versions <b class="caret"></b></a>
+ <ul class="dropdown-menu">
+ ## TODO : provide a structure to dynamically generate per version links
+ <li class="dropdown-header">HAProxy 1.4</li>
+ <li><a href="${base}configuration-1.4.html">Configuration Manual <small>(stable)</small></a></li>
+ <li><a href="${base}snapshot/configuration-1.4.html">Configuration Manual <small>(snapshot)</small></a></li>
+ <li><a href="http://git.1wt.eu/git/haproxy-1.4.git/">GIT Repository</a></li>
+ <li><a href="http://www.haproxy.org/git/?p=haproxy-1.4.git">Browse repository</a></li>
+ <li><a href="http://www.haproxy.org/download/1.4/">Browse directory</a></li>
+ <li class="divider"></li>
+ <li class="dropdown-header">HAProxy 1.5</li>
+ <li><a href="${base}configuration-1.5.html">Configuration Manual <small>(stable)</small></a></li>
+ <li><a href="${base}snapshot/configuration-1.5.html">Configuration Manual <small>(snapshot)</small></a></li>
+ <li><a href="http://git.1wt.eu/git/haproxy-1.5.git/">GIT Repository</a></li>
+ <li><a href="http://www.haproxy.org/git/?p=haproxy-1.5.git">Browse repository</a></li>
+ <li><a href="http://www.haproxy.org/download/1.5/">Browse directory</a></li>
+ <li class="divider"></li>
+ <li class="dropdown-header">HAProxy 1.6</li>
+ <li><a href="${base}configuration-1.6.html">Configuration Manual <small>(stable)</small></a></li>
+ <li><a href="${base}snapshot/configuration-1.6.html">Configuration Manual <small>(snapshot)</small></a></li>
+ <li><a href="${base}intro-1.6.html">Starter Guide <small>(stable)</small></a></li>
+ <li><a href="${base}snapshot/intro-1.6.html">Starter Guide <small>(snapshot)</small></a></li>
+ <li><a href="http://git.1wt.eu/git/haproxy.git/">GIT Repository</a></li>
+ <li><a href="http://www.haproxy.org/git/?p=haproxy.git">Browse repository</a></li>
+ <li><a href="http://www.haproxy.org/download/1.6/">Browse directory</a></li>
+ </ul>
+ </li>
+ </ul>
+ </div>
+ </nav>
+ <!-- /.navbar-static-side -->
+
+ <div id="wrapper">
+
+ <div id="sidebar">
+ <form onsubmit="search(this.keyword.value); return false" role="form">
+ <div id="searchKeyword" class="form-group">
+ <input type="text" class="form-control typeahead" id="keyword" name="keyword" placeholder="Search..." autocomplete="off">
+ </div>
+ </form>
+ <p>
+ Keyboard navigation: <span id="keyboardNavStatus"></span>
+ </p>
+ <p>
+ When enabled, you can use <strong>left</strong> and <strong>right</strong> arrow keys to navigate between chapters.<br>
+ The feature is automatically disabled when the search field is focused.
+ </p>
+ <p class="text-right">
+ <small>Converted with <a href="https://github.com/cbonte/haproxy-dconv">haproxy-dconv</a> v<b>${version}</b> on <b>${date}</b></small>
+ </p>
+ </div>
+ <!-- /.sidebar -->
+
+ <div id="page-wrapper">
+ <div class="row">
+ <div class="col-lg-12">
+ <div class="text-center">
+ <h1>${headers['title']}</h1>
+ <h2>${headers['subtitle']}</h2>
+ <p><strong>${headers['version']}</strong></p>
+ <p>
+ <a href="http://www.haproxy.org/" title="HAProxy Home Page"><img src="${base}img/logo-med.png" /></a><br>
+ ${headers['author']}<br>
+ ${headers['date']}
+ </p>
+ </div>
+
+ ${document}
+ <br>
+ <hr>
+ <div class="text-right">
+ ${headers['title']} ${headers['version'].replace("version ", "")} – ${headers['subtitle']}<br>
+ <small>${headers['date']}, ${headers['author']}</small>
+ </div>
+ </div>
+ <!-- /.col-lg-12 -->
+ </div>
+ <!-- /.row -->
+ <div style="position: fixed; z-index: 1000; bottom: 0; left: 0; right: 0; padding: 10px">
+ <ul class="pager" style="margin: 0">
+ <li class="previous"><a id="previous" href="#"></a></li>
+ <li class="next"><a id="next" href="#"></a></li>
+ </ul>
+ </div>
+ </div>
+ <!-- /#page-wrapper -->
+
+ </div>
+ <!-- /#wrapper -->
+
+ <script src="//cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.1.1/js/bootstrap.min.js"></script>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/typeahead.js/0.11.1/typeahead.bundle.min.js"></script>
+ <script>
+ /* Keyword search */
+ var searchFocus = false
+ var keywords = [
+ "${'",\n\t\t\t\t"'.join(keywords)}"
+ ]
+
+ function updateKeyboardNavStatus() {
+ var status = searchFocus ? '<span class="label label-disabled">Disabled</span>' : '<span class="label label-success">Enabled</span>'
+ $('#keyboardNavStatus').html(status)
+ }
+
+ function search(keyword) {
+ if (keyword && !!~$.inArray(keyword, keywords)) {
+ window.location.hash = keyword
+ }
+ }
+ // constructs the suggestion engine
+ var kwbh = new Bloodhound({
+ datumTokenizer: Bloodhound.tokenizers.obj.whitespace('value'),
+ queryTokenizer: Bloodhound.tokenizers.whitespace,
+ local: $.map(keywords, function(keyword) { return { value: keyword }; })
+ });
+ kwbh.initialize()
+
+ $('#searchKeyword .typeahead').typeahead({
+ hint: true,
+ highlight: true,
+ minLength: 1,
+ autoselect: true
+ },
+ {
+ name: 'keywords',
+ displayKey: 'value',
+ limit: keywords.length,
+ source: kwbh.ttAdapter()
+ }).focus(function() {
+ searchFocus = true
+ updateKeyboardNavStatus()
+ }).blur(function() {
+ searchFocus = false
+ updateKeyboardNavStatus()
+ }).bind('typeahead:selected', function ($e, datum) {
+ search(datum.value)
+ })
+
+ /* EXPERIMENTAL - Previous/Next navigation */
+ var headings = $(":header")
+ var previousTarget = false
+ var nextTarget = false
+ var $previous = $('#previous')
+ var $next = $('#next')
+ function refreshNavigation() {
+ var previous = false
+ var next = false
+ $.each(headings, function(item, value) {
+ var el = $(value)
+
+ // TODO : avoid target recalculation on each refresh
+ var target = el.attr('data-target')
+ if (! target) return true
+
+ var target_el = $('#' + target.replace(/\./, "\\."))
+ if (! target_el.attr('id')) return true
+
+ if (target_el.offset().top < $(window).scrollTop()) {
+ previous = el
+ }
+ if (target_el.offset().top - 1 > $(window).scrollTop()) {
+ next = el
+ }
+ if (next) return false
+ })
+
+ previousTarget = previous ? previous.attr('data-target') : 'top'
+ $previous.html(
+ previous && previousTarget ?
+ '<span class="glyphicon glyphicon-arrow-left"></span> ' + previous.text() :
+ '<span class="glyphicon glyphicon-arrow-up"></span> Top'
+ ).attr('href', '#' + previousTarget)
+
+ nextTarget = next ? next.attr('data-target') : 'bottom'
+ $next.html(
+ next && nextTarget ?
+ next.text() + ' <span class="glyphicon glyphicon-arrow-right"></span>' :
+ 'Bottom <span class="glyphicon glyphicon-arrow-down"></span>'
+ ).attr('href', '#' + nextTarget)
+ }
+
+ $(window).scroll(function () {
+ refreshNavigation()
+ });
+ $(document).ready(function() {
+ refreshNavigation()
+ updateKeyboardNavStatus()
+ });
+
+ /* EXPERIMENTAL - Enable keyboard navigation */
+ $(document).keydown(function(e){
+ if (searchFocus) return
+
+ switch(e.which) {
+ case 37: // left
+ window.location.hash = previousTarget ? previousTarget : 'top'
+ break
+
+ case 39: // right
+ window.location.hash = nextTarget ? nextTarget : 'bottom'
+ break
+
+ default: return // exit this handler for other keys
+ }
+ e.preventDefault()
+ })
+ </script>
+ ${footer}
+ <a class="anchor" name="bottom"></a>
+ </body>
+</html>
--- /dev/null
+#!/bin/bash
+
+PROJECT_HOME=$(dirname "$(readlink -f "$0")")
+cd "$PROJECT_HOME" || exit 1
+
+WORK_DIR=$PROJECT_HOME/work
+
+function on_exit()
+{
+ echo "-- END $(date)"
+}
+
+function init()
+{
+ trap on_exit EXIT
+
+ echo
+ echo "-- START $(date)"
+ echo "PROJECT_HOME = $PROJECT_HOME"
+
+ echo "Preparing work directories..."
+ mkdir -p $WORK_DIR || exit 1
+ mkdir -p $WORK_DIR/haproxy || exit 1
+ mkdir -p $WORK_DIR/haproxy-dconv || exit 1
+
+ UPDATED=0
+ PUSH=0
+
+}
+
+# Needed as "git -C" is only available since git 1.8.5
+function git-C()
+{
+ _gitpath=$1
+ shift
+ echo "git --git-dir=$_gitpath/.git --work-tree=$_gitpath $*" >&2
+ git --git-dir=$_gitpath/.git --work-tree=$_gitpath "$@"
+}
+
+function fetch_haproxy_dconv()
+{
+ echo "Fetching latest haproxy-dconv public version..."
+ if [ ! -e $WORK_DIR/haproxy-dconv/master ];
+ then
+ git clone -v git://github.com/cbonte/haproxy-dconv.git $WORK_DIR/haproxy-dconv/master || exit 1
+ fi
+ GIT="git-C $WORK_DIR/haproxy-dconv/master"
+
+ OLD_MD5="$($GIT log -1 | md5sum) $($GIT describe --tags)"
+ $GIT checkout master && $GIT pull -v
+ version=$($GIT describe --tags)
+ version=${version%-g*}
+ NEW_MD5="$($GIT log -1 | md5sum) $($GIT describe --tags)"
+ if [ "$OLD_MD5" != "$NEW_MD5" ];
+ then
+ UPDATED=1
+ fi
+
+ echo "Fetching latest haproxy-dconv public pages version..."
+ if [ ! -e $WORK_DIR/haproxy-dconv/gh-pages ];
+ then
+ cp -a $WORK_DIR/haproxy-dconv/master $WORK_DIR/haproxy-dconv/gh-pages || exit 1
+ fi
+ GIT="git-C $WORK_DIR/haproxy-dconv/gh-pages"
+
+ $GIT checkout gh-pages && $GIT pull -v
+}
+
+function fetch_haproxy()
+{
+ url=$1
+ path=$2
+
+ echo "Fetching HAProxy repository from $url..."
+ if [ ! -e $path ];
+ then
+ git clone -v $url $path || exit 1
+ fi
+ GIT="git-C $path"
+
+ $GIT checkout master && $GIT pull -v
+}
+
+function _generate_file()
+{
+ infile=$1
+ destfile=$2
+ git_version=$3
+ state=$4
+
+ $GIT checkout $git_version
+
+ if [ -e $gitpath/doc/$infile ];
+ then
+
+ git_version_simple=${git_version%-g*}
+ doc_version=$(tail -n1 $destfile 2>/dev/null | grep " git:" | sed 's/.* git:\([^ ]*\).*/\1/')
+ if [ $UPDATED -eq 1 -o "$git_version" != "$doc_version" ];
+ then
+ HTAG="VERSION-$(basename $gitpath | sed 's/[.]/\\&/g')"
+ if [ "$state" == "snapshot" ];
+ then
+ base=".."
+ HTAG="$HTAG-SNAPSHOT"
+ else
+ base="."
+ fi
+
+
+ $WORK_DIR/haproxy-dconv/master/haproxy-dconv.py -i $gitpath/doc/$infile -o $destfile --base=$base &&
+ echo "<!-- git:$git_version -->" >> $destfile &&
+ sed -i "s/\(<\!-- $HTAG -->\)\(.*\)\(<\!-- \/$HTAG -->\)/\1${git_version_simple}\3/" $docroot/index.html
+
+ else
+ echo "Already up to date."
+ fi
+
+ if [ "$doc_version" != "" -a "$git_version" != "$doc_version" ];
+ then
+ changelog=$($GIT log --oneline $doc_version..$git_version $gitpath/doc/$infile)
+ else
+ changelog=""
+ fi
+
+ GITDOC="git-C $docroot"
+ if [ "$($GITDOC status -s $destfile)" != "" ];
+ then
+ $GITDOC add $destfile &&
+ $GITDOC commit -m "Updating HAProxy $state $infile ${git_version_simple} generated by haproxy-dconv $version" -m "$changelog" $destfile $docroot/index.html &&
+ PUSH=1
+ fi
+ fi
+}
+
+function generate_docs()
+{
+ url=$1
+ gitpath=$2
+ docroot=$3
+ infile=$4
+ outfile=$5
+
+ fetch_haproxy $url $gitpath
+
+ GIT="git-C $gitpath"
+
+ $GIT checkout master
+ git_version=$($GIT describe --tags --match 'v*')
+ git_version_stable=${git_version%-*-g*}
+
+ echo "Generating snapshot version $git_version..."
+ _generate_file $infile $docroot/snapshot/$outfile $git_version snapshot
+
+ echo "Generating stable version $git_version_stable..."
+ _generate_file $infile $docroot/$outfile $git_version_stable stable
+}
+
+function push()
+{
+ docroot=$1
+ GITDOC="git-C $docroot"
+
+ if [ $PUSH -eq 1 ];
+ then
+ $GITDOC push origin gh-pages
+ fi
+
+}
+
+
+init
+fetch_haproxy_dconv
+generate_docs http://git.1wt.eu/git/haproxy-1.4.git/ $WORK_DIR/haproxy/1.4 $WORK_DIR/haproxy-dconv/gh-pages configuration.txt configuration-1.4.html
+generate_docs http://git.1wt.eu/git/haproxy-1.5.git/ $WORK_DIR/haproxy/1.5 $WORK_DIR/haproxy-dconv/gh-pages configuration.txt configuration-1.5.html
+generate_docs http://git.1wt.eu/git/haproxy.git/ $WORK_DIR/haproxy/1.6 $WORK_DIR/haproxy-dconv/gh-pages configuration.txt configuration-1.6.html
+generate_docs http://git.1wt.eu/git/haproxy.git/ $WORK_DIR/haproxy/1.6 $WORK_DIR/haproxy-dconv/gh-pages intro.txt intro-1.6.html
+push $WORK_DIR/haproxy-dconv/gh-pages
--- /dev/null
+[DEFAULT]
+pristine-tar = True
+upstream-branch = upstream-1.6
+debian-branch = master
--- /dev/null
+.TH HALOG "1" "July 2013" "halog" "User Commands"
+.SH NAME
+halog \- HAProxy log statistics reporter
+.SH SYNOPSIS
+.B halog
+[\fI\-h|\-\-help\fR]
+.br
+.B halog
+[\fIoptions\fR] <LOGFILE
+.SH DESCRIPTION
+.B halog
+reads HAProxy log data from stdin and extracts and displays lines matching
+user-specified criteria.
+.SH OPTIONS
+.SS Input filters \fR(several filters may be combined)
+.TP
+\fB\-H\fR
+Only match lines containing HTTP logs (ignore TCP)
+.TP
+\fB\-E\fR
+Only match lines without any error (no 5xx status)
+.TP
+\fB\-e\fR
+Only match lines with errors (status 5xx or negative)
+.TP
+\fB\-rt\fR|\fB\-RT\fR <time>
+Only match response times larger|smaller than <time>
+.TP
+\fB\-Q\fR|\fB\-QS\fR
+Only match queued requests (any queue|server queue)
+.TP
+\fB\-tcn\fR|\fB\-TCN\fR <code>
+Only match requests with/without termination code <code>
+.TP
+\fB\-hs\fR|\fB\-HS\fR <[min][:][max]>
+Only match requests with HTTP status codes within/not within min..max. Either
+bound may be omitted. If no ':' is specified, the exact code is matched.
+.SS
+Modifiers
+.TP
+\fB\-v\fR
+Invert the input filtering condition
+.TP
+\fB\-q\fR
+Don't report errors/warnings
+.TP
+\fB\-m\fR <lines>
+Limit output to the first <lines> lines
+.SS
+Output filters \fR\- only one may be used at a time
+.TP
+\fB\-c\fR
+Only report the number of lines that would have been printed
+.TP
+\fB\-pct\fR
+Output connect and response times percentiles
+.TP
+\fB\-st\fR
+Output number of requests per HTTP status code
+.TP
+\fB\-cc\fR
+Output number of requests per cookie code (2 chars)
+.TP
+\fB\-tc\fR
+Output number of requests per termination code (2 chars)
+.TP
+\fB\-srv\fR
+Output statistics per server (time, requests, errors)
+.TP
+\fB\-u\fR*
+Output statistics per URL (time, requests, errors)
+.br
+Additional characters indicate the output sorting key:
+.RS
+.TP
+\fB\-u\fR
+URL
+.TP
+\fB\-uc\fR
+Request count
+.TP
+\fB\-ue\fR
+Error count
+.TP
+\fB\-ua\fR
+Average response time
+.TP
+\fB\-ut\fR
+Average total time
+.TP
+\fB\-uao\fR, \fB\-uto\fR
+Average times computed on valid ('OK') requests
+.TP
+\fB\-uba\fR
+Average bytes returned
+.TP
+\fB\-ubt\fR
+Total bytes returned
+.RE
+.SH "SEE ALSO"
+.BR haproxy (1)
+.SH AUTHOR
+.PP
+\fBhalog\fR was written by Willy Tarreau <w@1wt.eu> and is part of \fBhaproxy\fR(1).
+.PP
+This manual page was written by Apollon Oikonomopoulos <apoikos@gmail.com> for the Debian project (but may
+be used by others).
+
--- /dev/null
+Syslog support
+--------------
+Upstream recommends using syslog over UDP to log from HAProxy processes, as
+this allows seamless logging from chrooted processes without access to
+/dev/log. However, many syslog implementations do not enable UDP syslog by
+default.
+
+The default HAProxy configuration in Debian uses /dev/log for logging and
+ships an rsyslog snippet that creates /dev/log in HAProxy's chroot and logs all
+HAProxy messages to /var/log/haproxy.log. To take advantage of this, you must
+restart rsyslog after installing this package. For other syslog daemons you
+will have to enable UDP logging manually, or create /dev/log under HAProxy's
+chroot:
+a. For sysklogd, add SYSLOG="-a /var/lib/haproxy/dev/log" to
+ /etc/default/syslog.
+b. For inetutils-syslogd, add SYSLOGD_OPTS="-a /var/lib/haproxy/dev/log" to
+ /etc/default/inetutils-syslogd.
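For reference, the rsyslog snippet described above is along these lines (a sketch only; the file actually shipped by the package may differ in detail):

```
# /etc/rsyslog.d/49-haproxy.conf (sketch)
# Create an additional syslog socket inside HAProxy's chroot, so that
# chrooted HAProxy processes can reach syslog without using UDP.
$AddUnixListenSocket /var/lib/haproxy/dev/log

# Send HAProxy messages to a dedicated log file and stop processing them
if $programname startswith 'haproxy' then /var/log/haproxy.log
&~
```

After dropping such a snippet in place, restart rsyslog so that it creates the socket inside the chroot.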
--- /dev/null
+global
+ log /dev/log local0
+ log /dev/log local1 notice
+ chroot /var/lib/haproxy
+ stats socket /run/haproxy/admin.sock mode 660 level admin
+ stats timeout 30s
+ user haproxy
+ group haproxy
+ daemon
+
+ # Default SSL material locations
+ ca-base /etc/ssl/certs
+ crt-base /etc/ssl/private
+
+ # Default ciphers to use on SSL-enabled listening sockets.
+ # For more information, see ciphers(1SSL). This list is from:
+ # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
+ ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
+ ssl-default-bind-options no-sslv3
+
+defaults
+ log global
+ mode http
+ option httplog
+ option dontlognull
+ timeout connect 5000
+ timeout client 50000
+ timeout server 50000
+ errorfile 400 /etc/haproxy/errors/400.http
+ errorfile 403 /etc/haproxy/errors/403.http
+ errorfile 408 /etc/haproxy/errors/408.http
+ errorfile 500 /etc/haproxy/errors/500.http
+ errorfile 502 /etc/haproxy/errors/502.http
+ errorfile 503 /etc/haproxy/errors/503.http
+ errorfile 504 /etc/haproxy/errors/504.http
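The shipped file intentionally defines only the global and defaults sections; a minimal proxy built on top of them might look like the following (names and addresses are illustrative, not part of the package default):

```
# Illustrative only -- not part of the shipped configuration.
frontend http-in
	bind *:80
	default_backend servers

backend servers
	server app1 127.0.0.1:8080 check
```

Any such frontend/backend inherits the logging, mode, timeouts and errorfiles from the defaults section above.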
--- /dev/null
+# Defaults file for HAProxy
+#
+# This is sourced by both the initscript and the systemd unit file, so do not
+# treat it as a shell script fragment.
+
+# Change the config file location if needed
+#CONFIG="/etc/haproxy/haproxy.cfg"
+
+# Add extra flags here, see haproxy(1) for a few options
+#EXTRAOPTS="-de -m 16"
--- /dev/null
+etc/haproxy
+etc/haproxy/errors
+var/lib/haproxy
+var/lib/haproxy/dev
--- /dev/null
+doc/architecture.txt
+doc/configuration.txt
+contrib
+README
--- /dev/null
+examples/*.cfg
--- /dev/null
+#!/bin/sh
+### BEGIN INIT INFO
+# Provides: haproxy
+# Required-Start: $local_fs $network $remote_fs $syslog $named
+# Required-Stop: $local_fs $remote_fs $syslog $named
+# Default-Start: 2 3 4 5
+# Default-Stop: 0 1 6
+# Short-Description: fast and reliable load balancing reverse proxy
+# Description: This file should be used to start and stop haproxy.
+### END INIT INFO
+
+# Author: Arnaud Cornet <acornet@debian.org>
+
+PATH=/sbin:/usr/sbin:/bin:/usr/bin
+PIDFILE=/var/run/haproxy.pid
+CONFIG=/etc/haproxy/haproxy.cfg
+HAPROXY=/usr/sbin/haproxy
+RUNDIR=/run/haproxy
+EXTRAOPTS=
+
+test -x $HAPROXY || exit 0
+
+if [ -e /etc/default/haproxy ]; then
+ . /etc/default/haproxy
+fi
+
+test -f "$CONFIG" || exit 0
+
+[ -f /etc/default/rcS ] && . /etc/default/rcS
+. /lib/lsb/init-functions
+
+
+check_haproxy_config()
+{
+ $HAPROXY -c -f "$CONFIG" >/dev/null
+ if [ $? -eq 1 ]; then
+ log_end_msg 1
+ exit 1
+ fi
+}
+
+haproxy_start()
+{
+ [ -d "$RUNDIR" ] || mkdir "$RUNDIR"
+ chown haproxy:haproxy "$RUNDIR"
+ chmod 2775 "$RUNDIR"
+
+ check_haproxy_config
+
+ start-stop-daemon --quiet --oknodo --start --pidfile "$PIDFILE" \
+ --exec $HAPROXY -- -f "$CONFIG" -D -p "$PIDFILE" \
+ $EXTRAOPTS || return 2
+ return 0
+}
+
+haproxy_stop()
+{
+ if [ ! -f $PIDFILE ] ; then
+ # This is a success according to LSB
+ return 0
+ fi
+
+ ret=0
+ tmppid="$(mktemp)"
+
+ # HAProxy's pidfile may contain multiple PIDs, if nbproc > 1, so loop
+ # over each PID. Note that start-stop-daemon has a --pid option, but it
+ # was introduced in dpkg 1.17.6, post wheezy, so we use a temporary
+ # pidfile instead to ease backports.
+ for pid in $(cat $PIDFILE); do
+ echo "$pid" > "$tmppid"
+ start-stop-daemon --quiet --oknodo --stop \
+ --retry 5 --pidfile "$tmppid" --exec $HAPROXY || ret=$?
+ done
+
+ rm -f "$tmppid"
+ [ $ret -eq 0 ] && rm -f $PIDFILE
+
+ return $ret
+}
+
+haproxy_reload()
+{
+ check_haproxy_config
+
+ $HAPROXY -f "$CONFIG" -p $PIDFILE -D $EXTRAOPTS -sf $(cat $PIDFILE) \
+ || return 2
+ return 0
+}
+
+haproxy_status()
+{
+ if [ ! -f $PIDFILE ] ; then
+ # program not running
+ return 3
+ fi
+
+ for pid in $(cat $PIDFILE) ; do
+ if ! ps --no-headers p "$pid" | grep haproxy > /dev/null ; then
+ # program running, bogus pidfile
+ return 1
+ fi
+ done
+
+ return 0
+}
+
+
+case "$1" in
+start)
+ log_daemon_msg "Starting haproxy" "haproxy"
+ haproxy_start
+ ret=$?
+ case "$ret" in
+ 0)
+ log_end_msg 0
+ ;;
+ 1)
+ log_end_msg 1
+ echo "pid file '$PIDFILE' found, haproxy not started."
+ ;;
+ 2)
+ log_end_msg 1
+ ;;
+ esac
+ exit $ret
+ ;;
+stop)
+ log_daemon_msg "Stopping haproxy" "haproxy"
+ haproxy_stop
+ ret=$?
+ case "$ret" in
+ 0|1)
+ log_end_msg 0
+ ;;
+ 2)
+ log_end_msg 1
+ ;;
+ esac
+ exit $ret
+ ;;
+reload|force-reload)
+ log_daemon_msg "Reloading haproxy" "haproxy"
+ haproxy_reload
+ ret=$?
+ case "$ret" in
+ 0|1)
+ log_end_msg 0
+ ;;
+ 2)
+ log_end_msg 1
+ ;;
+ esac
+ exit $ret
+ ;;
+restart)
+ log_daemon_msg "Restarting haproxy" "haproxy"
+ haproxy_stop
+ haproxy_start
+ ret=$?
+ case "$ret" in
+ 0)
+ log_end_msg 0
+ ;;
+ 1)
+ log_end_msg 1
+ ;;
+ 2)
+ log_end_msg 1
+ ;;
+ esac
+ exit $ret
+ ;;
+status)
+ haproxy_status
+ ret=$?
+ case "$ret" in
+ 0)
+ echo "haproxy is running."
+ ;;
+ 1)
+ echo "haproxy dead, but $PIDFILE exists."
+ ;;
+ *)
+ echo "haproxy not running."
+ ;;
+ esac
+ exit $ret
+ ;;
+*)
+ echo "Usage: /etc/init.d/haproxy {start|stop|reload|restart|status}"
+ exit 2
+ ;;
+esac
+
+:
--- /dev/null
+debian/haproxy.cfg etc/haproxy
+examples/errorfiles/*.http etc/haproxy/errors
+contrib/systemd/haproxy.service lib/systemd/system
+contrib/halog/halog usr/bin
--- /dev/null
+haproxy binary: binary-without-manpage usr/sbin/haproxy-systemd-wrapper
--- /dev/null
+mv_conffile /etc/rsyslog.d/haproxy.conf /etc/rsyslog.d/49-haproxy.conf 1.5.3-2~
--- /dev/null
+doc/haproxy.1
+doc/lua-api/_build/man/haproxy-lua.1
+debian/halog.1
--- /dev/null
+#!/bin/sh
+
+set -e
+
+adduser --system --disabled-password --disabled-login --home /var/lib/haproxy \
+ --no-create-home --quiet --force-badname --group haproxy
+
+#DEBHELPER#
+
+if [ -n "$2" ] && dpkg --compare-versions "$2" gt "1.5~dev24-2~"; then
+ # Reload already running instances. Since 1.5~dev24-2 we do not stop
+ # haproxy in prerm during upgrades.
+ invoke-rc.d haproxy reload || true
+fi
+
+exit 0
--- /dev/null
+#!/bin/sh
+
+set -e
+
+#DEBHELPER#
+
+case "$1" in
+ purge)
+ deluser --system haproxy || true
+ delgroup --system haproxy || true
+ ;;
+ *)
+ ;;
+esac
+
+exit 0
--- /dev/null
+d /run/haproxy 2775 haproxy haproxy -
--- /dev/null
+" detect HAProxy configuration
+au BufRead,BufNewFile haproxy*.cfg set filetype=haproxy
--- /dev/null
+/var/log/haproxy.log {
+ daily
+ rotate 52
+ missingok
+ notifempty
+ compress
+ delaycompress
+ postrotate
+ invoke-rc.d rsyslog rotate >/dev/null 2>&1 || true
+ endscript
+}
--- /dev/null
+From ca3fa95fbb1cc4060dcdd785cd76b1fa82c13b4a Mon Sep 17 00:00:00 2001
+From: Sergii Golovatiuk <sgolovatiuk@mirantis.com>
+Date: Tue, 24 May 2016 13:54:12 +0000
+Subject: [PATCH] Adding "include" configuration statement to haproxy.
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+This patch is based on original work done by Brane F. Gračnar:
+http://marc.info/?l=haproxy&m=129235503410444
+
+Original patch was modified according to upstream changes in 1.6.*
+---
+ include/common/cfgparse.h | 6 +-
+ src/cfgparse.c | 159 +++++++++++++++++++++++++++++++++++++++++++++-
+ src/haproxy.c | 2 +-
+ 3 files changed, 162 insertions(+), 5 deletions(-)
+
+diff --git a/include/common/cfgparse.h b/include/common/cfgparse.h
+index d785327..b521302 100644
+--- a/include/common/cfgparse.h
++++ b/include/common/cfgparse.h
+@@ -36,6 +36,10 @@
+ #define CFG_USERLIST 3
+ #define CFG_PEERS 4
+
++
++/* maximum include recursion level */
++#define INCLUDE_RECURSION_LEVEL_MAX 10
++
+ struct cfg_keyword {
+ int section; /* section type for this keyword */
+ const char *kw; /* the keyword itself */
+@@ -65,7 +69,7 @@ extern int cfg_maxconn;
+
+ int cfg_parse_global(const char *file, int linenum, char **args, int inv);
+ int cfg_parse_listen(const char *file, int linenum, char **args, int inv);
+-int readcfgfile(const char *file);
++int readcfgfile(const char *file, int recdepth);
+ void cfg_register_keywords(struct cfg_kw_list *kwl);
+ void cfg_unregister_keywords(struct cfg_kw_list *kwl);
+ void init_default_instance();
+diff --git a/src/cfgparse.c b/src/cfgparse.c
+index 97f4243..99a19e5 100644
+--- a/src/cfgparse.c
++++ b/src/cfgparse.c
+@@ -32,6 +32,8 @@
+ #include <sys/stat.h>
+ #include <fcntl.h>
+ #include <unistd.h>
++#include <glob.h>
++#include <libgen.h>
+
+ #include <common/cfgparse.h>
+ #include <common/chunk.h>
+@@ -6844,6 +6846,149 @@ out:
+ return err_code;
+ }
+
++/**
++ * This function takes glob(3) pattern and tries to resolve
++ * that pattern to files and tries to include them.
++ *
++ * See readcfgfile() for return values.
++ */
++int cfgfile_include (char *pattern, char *dir, int recdepth) {
++
++ int err_code = 0;
++
++ if (pattern == NULL) {
++ Alert("Config file include pattern == NULL; This should never happen.\n");
++ err_code |= ERR_ABORT;
++ goto out;
++ }
++ if (recdepth >= INCLUDE_RECURSION_LEVEL_MAX) {
++ Alert(
++ "Refusing to include filename pattern: '%s': too deep recursion level: %d.\n",
++ pattern,
++ recdepth
++ );
++ err_code|= ERR_ABORT;
++ goto out;
++ }
++
++ /** don't waste time with empty strings */
++ if (strlen(pattern) < 1) return 0;
++
++ /** we want to support relative to include file glob patterns */
++ int buf_len = 3;
++ if (dir != NULL)
++ buf_len += strlen(dir);
++ buf_len += strlen(pattern);
++ char *real_pattern = malloc(buf_len);
++ if (real_pattern == NULL) {
++ Alert("Error allocating memory for glob pattern: %s\n", strerror(errno));
++ err_code |= ERR_ABORT;
++ goto out;
++ }
++ memset(real_pattern, '\0', buf_len);
++ if (dir != NULL && pattern[0] != '/') {
++ strcat(real_pattern, dir);
++ strcat(real_pattern, "/");
++ }
++ strcat(real_pattern, pattern);
++
++ /* file inclusion result */
++ int result = 0;
++
++ /** glob the pattern */
++ glob_t res;
++ int rv = glob(
++ real_pattern,
++ (GLOB_NOESCAPE | GLOB_BRACE | GLOB_TILDE),
++ NULL,
++ &res
++ );
++ /* check for glob(3) injuries */
++ switch (rv) {
++ case GLOB_NOMATCH:
++ /* nothing was found */
++ break;
++
++ case GLOB_ABORTED:
++ Alert("Error globbing pattern '%s': read error.\n", real_pattern);
++ result = ERR_ABORT;
++ break;
++
++ case GLOB_NOSPACE:
++ Alert("Error globbing pattern '%s': out of memory.\n", real_pattern);
++ result = ERR_ABORT;
++ break;
++
++ default:
++ ;
++ int i = 0;
++ for (i = 0; i < res.gl_pathc; i++) {
++ char *file = res.gl_pathv[i];
++
++ /* parse configuration fragment */
++ int r = readcfgfile(file, recdepth);
++
++ /* check for injuries */
++ if (r != 0) {
++ result = r;
++ goto outta_cfgfile_include;
++ }
++ }
++ }
++
++outta_cfgfile_include:
++
++ /** free glob result. */
++ globfree(&res);
++ free(real_pattern);
++
++ return result;
++
++out:
++ return err_code;
++}
++
++int
++cfg_parse_include(const char *file, int linenum, char **args, int recdepth) {
++
++ int err_code = 0;
++
++ if (strcmp(args[0], "include") == 0) {
++ if (args[1] == NULL || strlen(args[1]) < 1) {
++ Alert("parsing [%s:%d]: include statement requires file glob pattern.\n",
++ file, linenum);
++ err_code |= ERR_ABORT;
++ goto out;
++ }
++ /**
++ * compute file's dirname - this is necessary because
++ * dirname(3) returns shared buffer address
++ */
++ int buf_len = strlen(file) + 1;
++ char *file_dir = malloc(buf_len);
++ if (file_dir == NULL) {
++ Alert("Unable to allocate memory for config file dirname.");
++ err_code |= ERR_ABORT;
++ goto out;
++ }
++ memset(file_dir, '\0', buf_len);
++ strcpy(file_dir, file);
++ strcpy(file_dir, dirname(file_dir));
++
++ /* include pattern */
++ int r = cfgfile_include(args[1], file_dir, (recdepth + 1));
++ //int r = cfgfile_include(args[1], file_dir, 1);
++ free(file_dir);
++ /* check for injuries */
++ if (r != 0) {
++ err_code |= r;
++ goto out;
++ }
++ }
++out:
++ return err_code;
++}
++
+ /*
+ * This function reads and parses the configuration file given in the argument.
+ * Returns the error code, 0 if OK, or any combination of :
+@@ -6854,7 +6999,7 @@ out:
+ * Only the two first ones can stop processing, the two others are just
+ * indicators.
+ */
+-int readcfgfile(const char *file)
++int readcfgfile(const char *file, int recdepth)
+ {
+ char *thisline;
+ int linesize = LINESIZE;
+@@ -6878,13 +7023,16 @@ int readcfgfile(const char *file)
+ !cfg_register_section("global", cfg_parse_global) ||
+ !cfg_register_section("userlist", cfg_parse_users) ||
+ !cfg_register_section("peers", cfg_parse_peers) ||
++ !cfg_register_section("include", cfg_parse_include) ||
+ !cfg_register_section("mailers", cfg_parse_mailers) ||
+ !cfg_register_section("namespace_list", cfg_parse_netns) ||
+ !cfg_register_section("resolvers", cfg_parse_resolvers))
+ return -1;
+
+- if ((f=fopen(file,"r")) == NULL)
++ if ((f=fopen(file,"r")) == NULL) {
++ Alert("Error opening configuration file %s: %s\n", file, strerror(errno));
+ return -1;
++ }
+
+ next_line:
+ while (fgets(thisline + readbytes, linesize - readbytes, f) != NULL) {
+@@ -7168,7 +7316,12 @@ next_line:
+
+ /* else it's a section keyword */
+ if (cs)
+- err_code |= cs->section_parser(file, linenum, args, kwm);
++ if (strcmp("include", cs->section_name) == 0) {
++ err_code |= cs->section_parser(file, linenum, args, recdepth);
++ }
++ else {
++ err_code |= cs->section_parser(file, linenum, args, kwm);
++ }
+ else {
+ Alert("parsing [%s:%d]: unknown keyword '%s' out of section.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+diff --git a/src/haproxy.c b/src/haproxy.c
+index 4299328..63a9bfd 100644
+--- a/src/haproxy.c
++++ b/src/haproxy.c
+@@ -770,7 +770,7 @@ void init(int argc, char **argv)
+ list_for_each_entry(wl, &cfg_cfgfiles, list) {
+ int ret;
+
+- ret = readcfgfile(wl->s);
++ ret = readcfgfile(wl->s, 0);
+ if (ret == -1) {
+ Alert("Could not open configuration file %s : %s\n",
+ wl->s, strerror(errno));
+--
+2.7.4
+
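With the patch above applied, the new statement could be used from the main configuration file, for example (paths illustrative; per the patch, glob patterns are resolved relative to the including file and recursion is capped by INCLUDE_RECURSION_LEVEL_MAX):

```
# /etc/haproxy/haproxy.cfg (sketch; requires the include patch above)
global
	daemon

# Pull in per-site fragments; the glob is resolved relative to this file
include conf.d/*.cfg
```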
--- /dev/null
+From: Apollon Oikonomopoulos <apoikos@debian.org>
+Date: Wed, 29 Apr 2015 13:51:49 +0300
+Subject: [PATCH] dconv: debianize
+
+ - Use Debian bootstrap and jquery packages
+ - Add Debian-related resources to the template
+ - Use the package's version instead of HAProxy's git version
+ - Strip the conversion date from the output to ensure reproducible
+ build.
+
+diff --git a/debian/dconv/haproxy-dconv.py b/debian/dconv/haproxy-dconv.py
+index fe2b96dce325..702eefac6a3b 100755
+--- a/debian/dconv/haproxy-dconv.py
++++ b/debian/dconv/haproxy-dconv.py
+@@ -44,12 +44,11 @@ VERSION = ""
+ HAPROXY_GIT_VERSION = False
+
+ def main():
+- global VERSION, HAPROXY_GIT_VERSION
++ global HAPROXY_GIT_VERSION
+
+ usage="Usage: %prog --infile <infile> --outfile <outfile>"
+
+ optparser = OptionParser(description='Generate HTML Document from HAProxy configuation.txt',
+- version=VERSION,
+ usage=usage)
+ optparser.add_option('--infile', '-i', help='Input file mostly the configuration.txt')
+ optparser.add_option('--outfile','-o', help='Output file')
+@@ -65,11 +64,7 @@ def main():
+
+ os.chdir(os.path.dirname(__file__))
+
+- VERSION = get_git_version()
+- if not VERSION:
+- sys.exit(1)
+-
+- HAPROXY_GIT_VERSION = get_haproxy_git_version(os.path.dirname(option.infile))
++ HAPROXY_GIT_VERSION = get_haproxy_debian_version(os.path.dirname(option.infile))
+
+ convert(option.infile, option.outfile, option.base)
+
+@@ -114,6 +109,15 @@ def get_haproxy_git_version(path):
+ version = re.sub(r'-g.*', '', version)
+ return version
+
++def get_haproxy_debian_version(path):
++ try:
++ version = subprocess.check_output(["dpkg-parsechangelog", "-Sversion"],
++ cwd=os.path.join(path, ".."))
++ except subprocess.CalledProcessError:
++ return False
++
++ return version.strip()
++
+ def getTitleDetails(string):
+ array = string.split(".")
+
+@@ -506,7 +510,6 @@ def convert(infile, outfile, base=''):
+ keywords = keywords,
+ keywordsCount = keywordsCount,
+ keyword_conflicts = keyword_conflicts,
+- version = VERSION,
+ date = datetime.datetime.now().strftime("%Y/%m/%d"),
+ )
+ except TopLevelLookupException:
+@@ -524,7 +527,6 @@ def convert(infile, outfile, base=''):
+ keywords = keywords,
+ keywordsCount = keywordsCount,
+ keyword_conflicts = keyword_conflicts,
+- version = VERSION,
+ date = datetime.datetime.now().strftime("%Y/%m/%d"),
+ footer = footer
+ )
+diff --git a/debian/dconv/templates/template.html b/debian/dconv/templates/template.html
+index c72b3558c2dd..9aefa16dd82d 100644
+--- a/debian/dconv/templates/template.html
++++ b/debian/dconv/templates/template.html
+@@ -3,8 +3,8 @@
+ <head>
+ <meta charset="utf-8" />
+ <title>${headers['title']} ${headers['version']} - ${headers['subtitle']}</title>
+- <link href="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.1.1/css/bootstrap.min.css" rel="stylesheet" />
+- <link href="${base}css/page.css?${version}" rel="stylesheet" />
++ <link href="${base}css/bootstrap.min.css" rel="stylesheet" />
++ <link href="${base}css/page.css" rel="stylesheet" />
+ </head>
+ <body>
+ <nav class="navbar navbar-default navbar-fixed-top" role="navigation">
+@@ -15,7 +15,7 @@
+ <span class="icon-bar"></span>
+ <span class="icon-bar"></span>
+ </button>
+- <a class="navbar-brand" href="${base}index.html">${headers['title']} <small>${headers['subtitle']}</small></a>
++ <a class="navbar-brand" href="${base}configuration.html">${headers['title']}</a>
+ </div>
+ <!-- /.navbar-header -->
+
+@@ -24,31 +24,16 @@
+ <ul class="nav navbar-nav">
+ <li><a href="http://www.haproxy.org/">HAProxy home page</a></li>
+ <li class="dropdown">
+- <a href="#" class="dropdown-toggle" data-toggle="dropdown">Versions <b class="caret"></b></a>
++ <a href="#" class="dropdown-toggle" data-toggle="dropdown">Debian resources <b class="caret"></b></a>
+ <ul class="dropdown-menu">
+ ## TODO : provide a structure to dynamically generate per version links
+- <li class="dropdown-header">HAProxy 1.4</li>
+- <li><a href="${base}configuration-1.4.html">Configuration Manual <small>(stable)</small></a></li>
+- <li><a href="${base}snapshot/configuration-1.4.html">Configuration Manual <small>(snapshot)</small></a></li>
+- <li><a href="http://git.1wt.eu/git/haproxy-1.4.git/">GIT Repository</a></li>
+- <li><a href="http://www.haproxy.org/git/?p=haproxy-1.4.git">Browse repository</a></li>
+- <li><a href="http://www.haproxy.org/download/1.4/">Browse directory</a></li>
+- <li class="divider"></li>
+- <li class="dropdown-header">HAProxy 1.5</li>
+- <li><a href="${base}configuration-1.5.html">Configuration Manual <small>(stable)</small></a></li>
+- <li><a href="${base}snapshot/configuration-1.5.html">Configuration Manual <small>(snapshot)</small></a></li>
+- <li><a href="http://git.1wt.eu/git/haproxy-1.5.git/">GIT Repository</a></li>
+- <li><a href="http://www.haproxy.org/git/?p=haproxy-1.5.git">Browse repository</a></li>
+- <li><a href="http://www.haproxy.org/download/1.5/">Browse directory</a></li>
+- <li class="divider"></li>
+- <li class="dropdown-header">HAProxy 1.6</li>
+- <li><a href="${base}configuration-1.6.html">Configuration Manual <small>(stable)</small></a></li>
+- <li><a href="${base}snapshot/configuration-1.6.html">Configuration Manual <small>(snapshot)</small></a></li>
+- <li><a href="${base}intro-1.6.html">Starter Guide <small>(stable)</small></a></li>
+- <li><a href="${base}snapshot/intro-1.6.html">Starter Guide <small>(snapshot)</small></a></li>
+- <li><a href="http://git.1wt.eu/git/haproxy.git/">GIT Repository</a></li>
+- <li><a href="http://www.haproxy.org/git/?p=haproxy.git">Browse repository</a></li>
+- <li><a href="http://www.haproxy.org/download/1.6/">Browse directory</a></li>
++ <li><a href="https://bugs.debian.org/src:haproxy">Bug Tracking System</a></li>
++ <li><a href="https://packages.debian.org/haproxy">Package page</a></li>
++ <li><a href="http://tracker.debian.org/pkg/haproxy">Package Tracking System</a></li>
++ <li class="divider"></li>
++ <li><a href="${base}intro.html">Starter Guide</a></li>
++ <li><a href="${base}configuration.html">Configuration Manual</a></li>
++ <li><a href="http://anonscm.debian.org/gitweb/?p=pkg-haproxy/haproxy.git">Package Git Repository</a></li>
+ </ul>
+ </li>
+ </ul>
+@@ -72,7 +57,7 @@
+ The feature is automatically disabled when the search field is focused.
+ </p>
+ <p class="text-right">
+- <small>Converted with <a href="https://github.com/cbonte/haproxy-dconv">haproxy-dconv</a> v<b>${version}</b> on <b>${date}</b></small>
++ <small>Converted with <a href="https://github.com/cbonte/haproxy-dconv">haproxy-dconv</a></small>
+ </p>
+ </div>
+ <!-- /.sidebar -->
+@@ -83,7 +68,7 @@
+ <div class="text-center">
+ <h1>${headers['title']}</h1>
+ <h2>${headers['subtitle']}</h2>
+- <p><strong>${headers['version']}</strong></p>
++ <p><strong>${headers['version']} (Debian)</strong></p>
+ <p>
+ <a href="http://www.haproxy.org/" title="HAProxy Home Page"><img src="${base}img/logo-med.png" /></a><br>
+ ${headers['author']}<br>
+@@ -114,9 +99,9 @@
+ </div>
+ <!-- /#wrapper -->
+
+- <script src="//cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
+- <script src="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.1.1/js/bootstrap.min.js"></script>
+- <script src="//cdnjs.cloudflare.com/ajax/libs/typeahead.js/0.11.1/typeahead.bundle.min.js"></script>
++ <script src="${base}js/jquery.min.js"></script>
++ <script src="${base}js/bootstrap.min.js"></script>
++ <script src="${base}js/typeahead.bundle.js"></script>
+ <script>
+ /* Keyword search */
+ var searchFocus = false
--- /dev/null
+Subject: Add documentation field to the systemd unit
+Author: Apollon Oikonomopoulos <apoikos@gmail.com>
+
+Forwarded: no
+Last-Update: 2014-01-03
+--- a/contrib/systemd/haproxy.service.in
++++ b/contrib/systemd/haproxy.service.in
+@@ -1,5 +1,7 @@
+ [Unit]
+ Description=HAProxy Load Balancer
++Documentation=man:haproxy(1)
++Documentation=file:/usr/share/doc/haproxy/configuration.txt.gz
+ After=network.target syslog.service
+ Wants=syslog.service
+
--- /dev/null
+Author: Apollon Oikonomopoulos
+Description: Check the configuration before reloading HAProxy
+ While HAProxy will survive a reload with an invalid configuration, explicitly
+ checking the config file for validity will make "systemctl reload" return an
+ error and let the user know something went wrong.
+
+Forwarded: no
+Last-Update: 2014-04-27
+Index: haproxy/contrib/systemd/haproxy.service.in
+===================================================================
+--- haproxy.orig/contrib/systemd/haproxy.service.in
++++ haproxy/contrib/systemd/haproxy.service.in
+@@ -8,6 +8,7 @@ Wants=syslog.service
+ [Service]
+ ExecStartPre=@SBINDIR@/haproxy -f /etc/haproxy/haproxy.cfg -c -q
+ ExecStart=@SBINDIR@/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
++ExecReload=@SBINDIR@/haproxy -c -f /etc/haproxy/haproxy.cfg
+ ExecReload=/bin/kill -USR2 $MAINPID
+ KillMode=mixed
+ Restart=always
--- /dev/null
+Subject: start after the syslog service using systemd
+Author: Apollon Oikonomopoulos <apoikos@gmail.com>
+
+Forwarded: no
+Last-Update: 2013-10-15
+Index: haproxy/contrib/systemd/haproxy.service.in
+===================================================================
+--- haproxy.orig/contrib/systemd/haproxy.service.in
++++ haproxy/contrib/systemd/haproxy.service.in
+@@ -1,6 +1,7 @@
+ [Unit]
+ Description=HAProxy Load Balancer
+-After=network.target
++After=network.target syslog.service
++Wants=syslog.service
+
+ [Service]
+ ExecStartPre=@SBINDIR@/haproxy -f /etc/haproxy/haproxy.cfg -c -q
--- /dev/null
+Author: Apollon Oikonomopoulos <apoikos@debian.org>
+Description: Use the variables from /etc/default/haproxy
+ This will allow seamless upgrades from the sysvinit system while respecting
+ any changes the users may have made. It will also make local configuration
+ easier than overriding the systemd unit file.
+
+Last-Update: 2014-06-20
+Forwarded: not-needed
+Index: haproxy/contrib/systemd/haproxy.service.in
+===================================================================
+--- haproxy.orig/contrib/systemd/haproxy.service.in
++++ haproxy/contrib/systemd/haproxy.service.in
+@@ -6,9 +6,11 @@ After=network.target syslog.service
+ Wants=syslog.service
+
+ [Service]
+-ExecStartPre=@SBINDIR@/haproxy -f /etc/haproxy/haproxy.cfg -c -q
+-ExecStart=@SBINDIR@/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
+-ExecReload=@SBINDIR@/haproxy -c -f /etc/haproxy/haproxy.cfg
++Environment=CONFIG=/etc/haproxy/haproxy.cfg
++EnvironmentFile=-/etc/default/haproxy
++ExecStartPre=@SBINDIR@/haproxy -f ${CONFIG} -c -q
++ExecStart=@SBINDIR@/haproxy-systemd-wrapper -f ${CONFIG} -p /run/haproxy.pid $EXTRAOPTS
++ExecReload=@SBINDIR@/haproxy -c -f ${CONFIG}
+ ExecReload=/bin/kill -USR2 $MAINPID
+ KillMode=mixed
+ Restart=always
--- /dev/null
+0002-Use-dpkg-buildflags-to-build-halog.patch
+haproxy.service-start-after-syslog.patch
+haproxy.service-add-documentation.patch
+haproxy.service-check-config-before-reload.patch
+haproxy.service-use-environment-variables.patch
+MIRA0001-Adding-include-configuration-statement-to-haproxy.patch
--- /dev/null
+# Create an additional socket in haproxy's chroot in order to allow logging via
+# /dev/log to chroot'ed HAProxy processes
+$AddUnixListenSocket /var/lib/haproxy/dev/log
+
+# Send HAProxy messages to a dedicated logfile
+if $programname startswith 'haproxy' then /var/log/haproxy.log
+&~
--- /dev/null
+#!/usr/bin/make -f
+
+export DEB_LDFLAGS_MAINT_APPEND = -Wl,--as-needed
+
+MAKEARGS=DESTDIR=debian/haproxy \
+ PREFIX=/usr \
+ IGNOREGIT=true \
+ MANDIR=/usr/share/man \
+ DOCDIR=/usr/share/doc/haproxy \
+ USE_PCRE=1 PCREDIR= \
+ USE_OPENSSL=1 \
+ USE_ZLIB=1 \
+ USE_LUA=1 \
+ LUA_INC=/usr/include/lua5.3
+
+OS_TYPE = $(shell dpkg-architecture -qDEB_HOST_ARCH_OS)
+
+ifeq ($(OS_TYPE),linux)
+ MAKEARGS+= TARGET=linux2628
+else ifeq ($(OS_TYPE),kfreebsd)
+ MAKEARGS+= TARGET=freebsd
+else
+ MAKEARGS+= TARGET=generic
+endif
+
+ifneq ($(filter amd64 i386, $(shell dpkg-architecture -qDEB_HOST_ARCH_CPU)),)
+ MAKEARGS+= USE_REGPARM=1
+endif
+
+MAKEARGS += CFLAGS="$(shell dpkg-buildflags --get CFLAGS) $(shell dpkg-buildflags --get CPPFLAGS)"
+MAKEARGS += LDFLAGS="$(shell dpkg-buildflags --get LDFLAGS)"
+
+%:
+ dh $@ --with systemd,sphinxdoc
+
+override_dh_auto_configure:
+
+override_dh_auto_build-arch:
+ make $(MAKEARGS)
+ make -C contrib/systemd $(MAKEARGS)
+ dh_auto_build -Dcontrib/halog
+ $(MAKE) -C doc/lua-api man
+
+override_dh_auto_build-indep:
+ # Build the HTML documentation, after patching dconv
+ patch -p1 < $(CURDIR)/debian/patches/debianize-dconv.patch
+ python -B $(CURDIR)/debian/dconv/haproxy-dconv.py \
+ -i $(CURDIR)/doc/configuration.txt \
+ -o $(CURDIR)/doc/configuration.html
+ python -B $(CURDIR)/debian/dconv/haproxy-dconv.py \
+ -i $(CURDIR)/doc/intro.txt \
+ -o $(CURDIR)/doc/intro.html
+ patch -p1 -R < $(CURDIR)/debian/patches/debianize-dconv.patch
+ $(MAKE) -C doc/lua-api html
+
+override_dh_auto_clean:
+ make -C contrib/systemd clean
+ $(MAKE) -C doc/lua-api clean
+ dh_auto_clean
+ dh_auto_clean -Dcontrib/halog
+
+override_dh_auto_install-arch:
+ make $(MAKEARGS) install
+ install -m 0644 -D debian/rsyslog.conf debian/haproxy/etc/rsyslog.d/49-haproxy.conf
+ install -m 0644 -D debian/logrotate.conf debian/haproxy/etc/logrotate.d/haproxy
+
+override_dh_auto_install-indep:
+
+override_dh_installdocs:
+ dh_installdocs -Xsystemd/ -Xhalog/
+
+override_dh_installexamples:
+ dh_installexamples -X build.cfg
+
+override_dh_installinit:
+ dh_installinit --no-restart-on-upgrade
+
+override_dh_strip:
+ dh_strip --dbg-package=haproxy-dbg
--- /dev/null
+3.0 (quilt)
--- /dev/null
+debian/dconv/css/check.png
+debian/dconv/css/cross.png
+debian/dconv/img/logo-med.png
--- /dev/null
+debian/vim-haproxy.yaml /usr/share/vim/registry
+debian/haproxy.vim /usr/share/vim/addons/ftdetect
+examples/haproxy.vim /usr/share/vim/addons/syntax
--- /dev/null
+addon: haproxy
+description: "Syntax highlighting for HAProxy"
+files:
+ - syntax/haproxy.vim
+ - ftdetect/haproxy.vim
--- /dev/null
+version=3
+opts="uversionmangle=s/-(dev\d+)/~$1/" http://haproxy.1wt.eu/download/1.6/src/ haproxy-(1\.6\.\d+)\.(?:tgz|tbz2|tar\.(?:gz|bz2|xz))
--- /dev/null
+#FIG 3.2 Produced by xfig version 3.2.5-alpha5
+Portrait
+Center
+Metric
+A4
+100.00
+Single
+-2
+1200 2
+6 2430 1080 2700 2250
+1 2 0 1 0 11 52 -1 20 0.000 1 0.0000 2587 1687 113 563 2474 1687 2700 1687
+4 1 0 50 -1 16 8 1.5708 4 120 840 2610 1710 tcp-req inspect\001
+-6
+6 5805 1080 6255 2250
+1 2 0 1 0 29 52 -1 20 0.000 1 0.0000 6052 1687 203 563 5849 1687 6255 1687
+4 1 0 50 -1 16 8 1.5708 4 90 300 6030 1710 HTTP\001
+4 1 0 50 -1 16 8 1.5708 4 120 615 6165 1710 processing\001
+-6
+6 1575 3375 1800 4500
+2 2 0 1 0 29 52 -1 20 0.000 0 0 -1 0 0 5
+ 1575 3375 1800 3375 1800 4500 1575 4500 1575 3375
+4 1 0 50 -1 16 8 1.5708 4 120 735 1710 3960 http-resp out\001
+-6
+6 2025 3375 2250 4500
+2 2 0 1 0 29 52 -1 20 0.000 0 0 -1 0 0 5
+ 2025 3375 2250 3375 2250 4500 2025 4500 2025 3375
+4 1 0 50 -1 16 8 1.5708 4 120 735 2160 3960 http-resp out\001
+-6
+6 810 3600 1080 4230
+4 1 0 50 -1 16 8 1.5708 4 105 555 900 3915 Response\001
+4 1 0 50 -1 16 8 1.5708 4 105 450 1065 3915 to client\001
+-6
+6 720 1350 1035 2070
+4 1 0 50 -1 16 8 1.5708 4 120 540 855 1710 Requests \001
+4 1 0 50 -1 16 8 1.5708 4 105 645 1020 1710 from clients\001
+-6
+6 7695 1350 8010 1980
+4 1 0 50 -1 16 8 1.5708 4 120 510 7830 1665 Requests\001
+4 1 0 50 -1 16 8 1.5708 4 105 555 7995 1665 to servers\001
+-6
+6 7785 3600 8055 4230
+4 1 0 50 -1 16 8 1.5708 4 105 555 7875 3915 Response\001
+4 1 0 50 -1 16 8 1.5708 4 105 630 8055 3915 from server\001
+-6
+1 2 0 1 0 11 52 -1 20 0.000 1 0.0000 1687 1687 113 563 1574 1687 1800 1687
+1 2 0 1 0 11 52 -1 20 0.000 1 0.0000 7087 3937 113 563 6974 3937 7200 3937
+1 2 0 1 0 29 52 -1 20 0.000 1 0.0000 4072 3937 203 563 3869 3937 4275 3937
+1 2 0 1 0 29 52 -1 20 0.000 1 0.0000 2903 3937 203 563 2700 3937 3106 3937
+2 3 0 1 0 6 54 -1 20 0.000 0 0 -1 0 0 9
+ 1485 900 1485 2475 4140 2475 4140 1035 6390 1035 6390 2340
+ 6840 2340 6840 900 1485 900
+2 3 0 1 0 2 54 -1 20 0.000 0 0 -1 0 0 9
+ 4365 1035 4365 2475 7290 2475 7290 900 6840 900 6840 2340
+ 5715 2340 5715 1035 4365 1035
+2 2 0 1 0 29 52 -1 20 0.000 0 0 -1 0 0 5
+ 4950 1125 5175 1125 5175 2250 4950 2250 4950 1125
+2 2 0 1 0 29 52 -1 20 0.000 0 0 -1 0 0 5
+ 5400 1125 5625 1125 5625 2250 5400 2250 5400 1125
+2 2 0 1 0 11 52 -1 20 0.000 0 0 -1 0 0 5
+ 2025 1125 2250 1125 2250 2250 2025 2250 2025 1125
+2 2 0 1 0 11 52 -1 20 0.000 0 0 -1 0 0 5
+ 2925 1125 3150 1125 3150 2250 2925 2250 2925 1125
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 1125 1710 1575 1710
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 1125 1935 1575 1755
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 1125 1485 1575 1665
+2 2 0 1 0 29 52 -1 20 0.000 0 0 -1 0 0 5
+ 3825 1125 4050 1125 4050 2250 3825 2250 3825 1125
+2 2 0 1 0 6 50 -1 20 0.000 0 0 -1 0 0 5
+ 1575 450 2025 450 2025 540 1575 540 1575 450
+2 2 0 1 0 2 50 -1 20 0.000 0 0 -1 0 0 5
+ 1575 675 2025 675 2025 765 1575 765 1575 675
+2 2 0 1 0 11 50 -1 20 0.000 0 0 -1 0 0 5
+ 3150 450 3600 450 3600 540 3150 540 3150 450
+2 2 0 1 0 29 50 -1 20 0.000 0 0 -1 0 0 5
+ 3150 675 3600 675 3600 765 3150 765 3150 675
+2 2 0 1 0 29 52 -1 20 0.000 0 0 -1 0 0 5
+ 6525 1125 6750 1125 6750 2250 6525 2250 6525 1125
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 7200 1665 7650 1665
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 7200 1620 7650 1530
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 7200 1710 7650 1800
+2 2 0 1 0 29 52 -1 20 0.000 0 0 -1 0 0 5
+ 6975 1125 7200 1125 7200 2250 6975 2250 6975 1125
+2 2 0 1 0 29 52 -1 20 0.000 0 0 -1 0 0 5
+ 3375 1125 3600 1125 3600 2250 3375 2250 3375 1125
+2 2 0 1 0 11 52 -1 20 0.000 0 0 -1 0 0 5
+ 4500 1125 4725 1125 4725 2250 4500 2250 4500 1125
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 30.00 60.00
+ 1800 1665 2025 1665
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 30.00 60.00
+ 2250 1665 2475 1665
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 30.00 60.00
+ 2700 1665 2925 1665
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 30.00 60.00
+ 3150 1665 3375 1665
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 30.00 60.00
+ 3600 1665 3825 1665
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 30.00 60.00
+ 4725 1665 4950 1665
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 30.00 60.00
+ 5175 1665 5400 1665
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 30.00 60.00
+ 5625 1665 5850 1665
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 30.00 60.00
+ 6750 1665 6975 1665
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 30.00 60.00
+ 6255 1665 6525 1665
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 30.00 60.00
+ 4050 1665 4500 1665
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 30.00 60.00
+ 4050 1620 4500 1530
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 30.00 60.00
+ 4050 1710 4500 1800
+2 2 0 1 0 11 52 -1 20 0.000 0 0 -1 0 0 5
+ 6525 3375 6750 3375 6750 4500 6525 4500 6525 3375
+2 2 0 1 0 29 52 -1 20 0.000 0 0 -1 0 0 5
+ 6075 3375 6300 3375 6300 4500 6075 4500 6075 3375
+2 3 0 1 0 2 54 -1 20 0.000 0 0 -1 0 0 9
+ 7290 3150 7290 4725 5985 4725 5985 3285 2385 3285 2385 4590
+ 1935 4590 1935 3150 7290 3150
+2 3 0 1 0 6 54 -1 20 0.000 0 0 -1 0 0 9
+ 1935 3150 1485 3150 1485 4725 5985 4725 5985 3285 5085 3285
+ 5085 4590 1935 4590 1935 3150
+2 2 0 1 0 11 52 -1 20 0.000 0 0 -1 0 0 5
+ 5625 3375 5850 3375 5850 4500 5625 4500 5625 3375
+2 2 0 1 0 29 52 -1 20 0.000 0 0 -1 0 0 5
+ 5175 3375 5400 3375 5400 4500 5175 4500 5175 3375
+2 1 0 1 0 0 54 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 7650 3915 7200 3915
+2 1 0 1 0 0 54 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 1575 3915 1125 3915
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 30.00 60.00
+ 6975 3915 6750 3915
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 30.00 60.00
+ 6525 3915 6300 3915
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 30.00 60.00
+ 6075 3915 5850 3915
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 30.00 60.00
+ 5625 3915 5400 3915
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 30.00 60.00
+ 2025 3915 1800 3915
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 30.00 60.00
+ 5175 3915 4275 3915
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 30.00 60.00
+ 3870 3915 3105 3915
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 30.00 60.00
+ 2700 3915 2250 3915
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 3
+ 1 1 1.00 30.00 60.00
+ 3465 2250 3465 2880 2970 3465
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 4
+ 1 1 1.00 30.00 60.00
+ 5040 2250 5040 2655 3600 2880 3015 3510
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 4
+ 1 1 1.00 30.00 60.00
+ 6075 2250 6075 2565 3645 2925 3060 3555
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 4
+ 1 1 1.00 30.00 60.00
+ 6615 2250 6615 2610 3690 2970 3060 3645
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 4
+ 1 1 1.00 30.00 60.00
+ 7065 2250 7065 2655 3735 3015 3060 3690
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 4
+ 1 1 1.00 30.00 60.00
+ 5265 3375 5265 2970 3825 3105 3105 3780
+2 1 0 1 0 0 50 -1 -1 0.000 0 0 -1 1 0 4
+ 1 1 1.00 30.00 60.00
+ 6165 3375 6165 2835 3780 3060 3105 3735
+4 1 0 50 -1 16 8 1.5708 4 120 630 2160 1710 tcp-request\001
+4 1 0 50 -1 16 8 1.5708 4 120 870 3060 1710 tcp-req content\001
+4 1 0 50 -1 16 8 1.5708 4 120 600 5085 1710 http-req in\001
+4 1 0 50 -1 16 8 1.5708 4 105 690 3960 1710 use-backend\001
+4 1 0 50 -1 16 8 1.5708 4 75 570 5535 1710 use-server\001
+4 1 0 50 -1 16 8 1.5708 4 120 360 1710 1710 accept\001
+4 0 0 50 -1 18 6 0.0000 4 90 435 2115 540 frontend\001
+4 0 0 50 -1 18 6 0.0000 4 90 405 2115 765 backend\001
+4 0 0 50 -1 18 6 0.0000 4 105 150 3735 540 tcp\001
+4 0 0 50 -1 18 6 0.0000 4 105 450 3735 765 http only\001
+4 2 0 50 -1 18 6 0.0000 4 90 435 4050 2430 frontend\001
+4 0 0 50 -1 18 6 0.0000 4 90 405 4455 2430 backend\001
+4 1 0 50 -1 16 8 1.5708 4 120 675 6660 1710 http-req out\001
+4 1 0 50 -1 16 8 1.5708 4 120 675 7110 1710 http-req out\001
+4 1 0 50 -1 16 8 1.5708 4 120 600 3510 1710 http-req in\001
+4 1 0 50 -1 16 8 1.5708 4 120 870 4635 1710 tcp-req content\001
+4 1 0 50 -1 16 8 1.5708 4 120 660 6210 3960 http-resp in\001
+4 1 0 50 -1 16 8 1.5708 4 120 930 6660 3960 tcp-resp content\001
+4 1 0 50 -1 16 8 1.5708 4 120 900 7110 3960 tcp-resp inspect\001
+4 1 0 50 -1 16 8 1.5708 4 120 930 5760 3960 tcp-resp content\001
+4 1 0 50 -1 16 8 1.5708 4 120 660 5310 3960 http-resp in\001
+4 0 0 50 -1 18 6 0.0000 4 90 405 6075 4680 backend\001
+4 1 0 50 -1 16 8 1.5708 4 90 300 4050 3960 HTTP\001
+4 1 0 50 -1 16 8 1.5708 4 120 615 4185 3960 processing\001
+4 1 0 50 -1 16 8 1.5708 4 90 300 2835 3915 Error\001
+4 1 0 50 -1 16 8 1.5708 4 120 615 2970 3915 processing\001
+4 2 0 50 -1 18 6 0.0000 4 90 435 5895 4680 frontend\001
--- /dev/null
+ -------------------
+ HAProxy
+ Architecture Guide
+ -------------------
+ version 1.1.34
+ willy tarreau
+ 2006/01/29
+
+
+This document provides real-world examples with working configurations.
+Please note that, except where stated otherwise, global configuration
+parameters such as logging, chrooting, limits and time-outs are not
+described here.
+
+===================================================
+1. Simple HTTP load-balancing with cookie insertion
+===================================================
+
+A web application often saturates the front-end server with high CPU load,
+due to the scripting language involved. It also relies on a back-end database
+which is not heavily loaded. User contexts are stored on the server itself,
+and not in the database, so simply adding another server with plain IP/TCP
+load-balancing would not work.
+
+ +-------+
+ |clients| clients and/or reverse-proxy
+ +---+---+
+ |
+ -+-----+--------+----
+ | _|_db
+ +--+--+ (___)
+ | web | (___)
+ +-----+ (___)
+ 192.168.1.1 192.168.1.2
+
+
+Replacing the web server with a bigger SMP system would cost much more than
+adding low-cost pizza boxes. The solution is to buy N cheap boxes and install
+the application on them. Install haproxy on the old one; it will spread the
+load across the new boxes.
+
+ 192.168.1.1 192.168.1.11-192.168.1.14 192.168.1.2
+ -------+-----------+-----+-----+-----+--------+----
+ | | | | | _|_db
+ +--+--+ +-+-+ +-+-+ +-+-+ +-+-+ (___)
+ | LB1 | | A | | B | | C | | D | (___)
+ +-----+ +---+ +---+ +---+ +---+ (___)
+ haproxy 4 cheap web servers
+
+
+Config on haproxy (LB1) :
+-------------------------
+
+ listen webfarm 192.168.1.1:80
+ mode http
+ balance roundrobin
+ cookie SERVERID insert indirect
+ option httpchk HEAD /index.html HTTP/1.0
+ server webA 192.168.1.11:80 cookie A check
+ server webB 192.168.1.12:80 cookie B check
+ server webC 192.168.1.13:80 cookie C check
+ server webD 192.168.1.14:80 cookie D check
+
+
+Description :
+-------------
+ - LB1 will receive the clients' requests.
+ - if a request does not contain a cookie, it will be forwarded to a valid
+ server
+ - in return, a cookie "SERVERID" will be inserted in the response holding the
+ server name (eg: "A").
+ - when the client comes back with the cookie "SERVERID=A", LB1 will know
+ that the request must be forwarded to server A. The cookie will be removed
+ so that the server does not see it.
+ - if server "webA" dies, the requests will be sent to another valid server
+ and a cookie will be reassigned.
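The routing decision described above can be sketched in Python. This is a hypothetical illustration of the "cookie SERVERID insert indirect" behaviour, not HAProxy's actual code; the server names and addresses are the ones from the example config.

```python
import itertools

SERVERS = {"A": "192.168.1.11:80", "B": "192.168.1.12:80",
           "C": "192.168.1.13:80", "D": "192.168.1.14:80"}
_rr = itertools.cycle(SERVERS)  # round-robin over server names

def route(request_cookies, healthy):
    """Mimic 'cookie SERVERID insert indirect' for one request.

    Returns (server_name, cookie_to_set): cookie_to_set is the SERVERID
    value to insert in the response, or None when the client already
    presented a valid one ('indirect': the proxy consumes the cookie).
    Assumes at least one healthy server.
    """
    name = request_cookies.get("SERVERID")
    if name in healthy:
        return name, None
    for name in _rr:  # no usable cookie: pick the next healthy server
        if name in healthy:
            return name, name
```

A first cookie-less request gets a server and a cookie assigned; later requests carrying a valid cookie stick to that server, and a dead server triggers reassignment.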
+
+
+Flows :
+-------
+
+(client) (haproxy) (server A)
+ >-- GET /URI1 HTTP/1.0 ------------> |
+ ( no cookie, haproxy forwards in load-balancing mode. )
+ | >-- GET /URI1 HTTP/1.0 ---------->
+ | <-- HTTP/1.0 200 OK -------------<
+ ( the proxy now adds the server cookie in return )
+ <-- HTTP/1.0 200 OK ---------------< |
+ Set-Cookie: SERVERID=A |
+ >-- GET /URI2 HTTP/1.0 ------------> |
+ Cookie: SERVERID=A |
+ ( the proxy sees the cookie. it forwards to server A and deletes it )
+ | >-- GET /URI2 HTTP/1.0 ---------->
+ | <-- HTTP/1.0 200 OK -------------<
+ ( the proxy does not add the cookie in return because the client knows it )
+ <-- HTTP/1.0 200 OK ---------------< |
+ >-- GET /URI3 HTTP/1.0 ------------> |
+ Cookie: SERVERID=A |
+ ( ... )
+
+
+Limits :
+--------
+ - if clients use keep-alive (HTTP/1.1), only the first response will have
+ a cookie inserted, and only the first request of each session will be
+ analyzed. This does not cause trouble in insertion mode because the cookie
+ is put immediately in the first response, and the session is maintained to
+ the same server for all subsequent requests in the same session. However,
+ the cookie will not be removed from the requests forwarded to the servers,
+ so the server must not be sensitive to unknown cookies. If this causes
+ trouble, you can disable keep-alive by adding the following option :
+
+ option httpclose
+
+ - if for some reason the clients cannot learn more than one cookie (eg: the
+ clients are home-made applications or gateways), and the application
+ already produces a cookie, you can use the "prefix" mode (see below).
+
+ - LB1 becomes a single point of failure. If LB1 dies, nothing works anymore.
+ => you can back it up using keepalived (see below)
+
+ - if the application needs to log the original client's IP, use the
+ "forwardfor" option which will add an "X-Forwarded-For" header with the
+ original client's IP address. You must also use "httpclose" to ensure
+ that every request is rewritten, and not only the first one of each
+ session :
+
+ option httpclose
+ option forwardfor
+
+ - if the application needs to log the original destination IP, use the
+ "originalto" option which will add an "X-Original-To" header with the
+ original destination IP address. You must also use "httpclose" to ensure
+ that every request is rewritten, and not only the first one of each
+ session :
+
+ option httpclose
+ option originalto
+
+ The web server will have to be configured to use this header instead.
+ For example, with Apache you can use LogFormat for this :
+
+ LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b " combined
+ CustomLog /var/log/httpd/access_log combined
+
+Hints :
+-------
+Sometimes on the internet, you will find a few percent of clients who
+disable cookies in their browser. Obviously they have trouble everywhere
+on the web, but you can still help them access your site by using the
+"source" balancing algorithm instead of "roundrobin". It ensures that a
+given IP address always reaches the same server as long as the number of
+servers remains unchanged. Never use this behind a proxy or in a small
+network, because the distribution will be unfair. However, in large
+internal networks and on the internet, it works quite well. Clients with
+a dynamic address will not be affected as long as they accept the cookie,
+because the cookie always has precedence over load balancing :
+
+ listen webfarm 192.168.1.1:80
+ mode http
+ balance source
+ cookie SERVERID insert indirect
+ option httpchk HEAD /index.html HTTP/1.0
+ server webA 192.168.1.11:80 cookie A check
+ server webB 192.168.1.12:80 cookie B check
+ server webC 192.168.1.13:80 cookie C check
+ server webD 192.168.1.14:80 cookie D check
+
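The "source" property described above can be sketched in Python. This is a hypothetical illustration (HAProxy uses its own hash function, not CRC32): the same source address always maps to the same index while the server list keeps its size.

```python
import zlib

def pick_by_source(client_ip, servers):
    """Deterministically map a source address to one server, as
    'balance source' does: the same IP always reaches the same server
    as long as the number of servers is unchanged. CRC32 stands in
    for HAProxy's actual hash, purely for illustration."""
    return servers[zlib.crc32(client_ip.encode()) % len(servers)]
```

Note why the distribution is unfair behind a proxy: all clients share one source address, so they all hash to the same server.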
+
+==================================================================
+2. HTTP load-balancing with cookie prefixing and high availability
+==================================================================
+
+Now you don't want to add more cookies, but rather use existing ones. The
+application already generates a "JSESSIONID" cookie which is enough to track
+sessions, so we'll prefix this cookie with the server name when we see it.
+Since the load-balancer becomes critical, it will be backed up with a second
+one in VRRP mode using keepalived under Linux.
+
+Download the latest version of keepalived from its web site and install it
+on each load-balancer, LB1 and LB2 :
+
+ http://www.keepalived.org/
+
+You then have a shared IP between the two load-balancers (we will still use the
+original IP). It is active only on one of them at any moment. To allow the
+proxy to bind to the shared IP on Linux 2.4, you must enable it in /proc :
+
+# echo 1 >/proc/sys/net/ipv4/ip_nonlocal_bind
+
+
+ shared IP=192.168.1.1
+ 192.168.1.3 192.168.1.4 192.168.1.11-192.168.1.14 192.168.1.2
+ -------+------------+-----------+-----+-----+-----+--------+----
+ | | | | | | _|_db
+ +--+--+ +--+--+ +-+-+ +-+-+ +-+-+ +-+-+ (___)
+ | LB1 | | LB2 | | A | | B | | C | | D | (___)
+ +-----+ +-----+ +---+ +---+ +---+ +---+ (___)
+ haproxy haproxy 4 cheap web servers
+ keepalived keepalived
+
+
+Config on both proxies (LB1 and LB2) :
+--------------------------------------
+
+ listen webfarm 192.168.1.1:80
+ mode http
+ balance roundrobin
+ cookie JSESSIONID prefix
+ option httpclose
+ option forwardfor
+ option httpchk HEAD /index.html HTTP/1.0
+ server webA 192.168.1.11:80 cookie A check
+ server webB 192.168.1.12:80 cookie B check
+ server webC 192.168.1.13:80 cookie C check
+ server webD 192.168.1.14:80 cookie D check
+
+
+Notes: the proxy will modify EVERY cookie sent by the client and the server,
+so it is important that it can access ALL cookies in ALL requests of each
+session. This implies disabling keep-alive (HTTP/1.1), hence the "httpclose"
+option. You can remove this option only if you know for sure that the
+clients will never use keep-alive (eg: Apache 1.3 in reverse-proxy mode).
+
+
+Configuration for keepalived on LB1/LB2 :
+-----------------------------------------
+
+ vrrp_script chk_haproxy { # Requires keepalived-1.1.13
+ script "killall -0 haproxy" # cheaper than pidof
+ interval 2 # check every 2 seconds
+ weight 2 # add 2 points of prio if OK
+ }
+
+ vrrp_instance VI_1 {
+ interface eth0
+ state MASTER
+ virtual_router_id 51
+ priority 101 # 101 on master, 100 on backup
+ virtual_ipaddress {
+ 192.168.1.1
+ }
+ track_script {
+ chk_haproxy
+ }
+ }
+
+
+Description :
+-------------
+ - LB1 is VRRP master (keepalived), LB2 is backup. Both monitor the haproxy
+ process, and lower their prio if it fails, leading to a failover to the
+ other node.
+ - LB1 will receive the clients' requests on IP 192.168.1.1.
+ - both load-balancers send their checks from their native IP.
+ - if a request does not contain a cookie, it will be forwarded to a valid
+ server
+ - in return, if a JSESSIONID cookie is seen, the server name will be
+ prefixed into it, followed by a delimiter ('~')
+ - when the client comes back with the cookie "JSESSIONID=A~xxx", LB1 will
+ know that it must forward the request to server A. The server name will
+ then be stripped from the cookie before it is sent to the server.
+ - if server "webA" dies, the requests will be sent to another valid server
+ and a cookie will be reassigned.
+
+
+Flows :
+-------
+
+(client) (haproxy) (server A)
+ >-- GET /URI1 HTTP/1.0 ------------> |
+ ( no cookie, haproxy forwards in load-balancing mode. )
+ | >-- GET /URI1 HTTP/1.0 ---------->
+ | X-Forwarded-For: 10.1.2.3
+ | <-- HTTP/1.0 200 OK -------------<
+ ( no cookie, nothing changed )
+ <-- HTTP/1.0 200 OK ---------------< |
+ >-- GET /URI2 HTTP/1.0 ------------> |
+ ( no cookie, haproxy forwards in lb mode, possibly to another server. )
+ | >-- GET /URI2 HTTP/1.0 ---------->
+ | X-Forwarded-For: 10.1.2.3
+ | <-- HTTP/1.0 200 OK -------------<
+ | Set-Cookie: JSESSIONID=123
+ ( the cookie is identified, it will be prefixed with the server name )
+ <-- HTTP/1.0 200 OK ---------------< |
+ Set-Cookie: JSESSIONID=A~123 |
+ >-- GET /URI3 HTTP/1.0 ------------> |
+ Cookie: JSESSIONID=A~123 |
+ ( the proxy sees the cookie, removes the server name and forwards
+ to server A which sees the same cookie as it previously sent )
+ | >-- GET /URI3 HTTP/1.0 ---------->
+ | Cookie: JSESSIONID=123
+ | X-Forwarded-For: 10.1.2.3
+ | <-- HTTP/1.0 200 OK -------------<
+ ( no cookie, nothing changed )
+ <-- HTTP/1.0 200 OK ---------------< |
+ ( ... )
+
+Hints :
+-------
+Sometimes, there will be some powerful servers in the farm, and some smaller
+ones. In this situation, it may be desirable to tell haproxy to respect the
+difference in performance. Let's consider that WebA and WebB are two old
+P3-1.2 GHz machines while WebC and WebD are shiny new Opteron-2.6 GHz ones.
+If your application scales with CPU, you may assume a very rough 2.6/1.2
+performance ratio between the servers. You can inform haproxy of this using
+the "weight" keyword, with values between 1 and 256. It will then spread the
+load as smoothly as possible while respecting those ratios :
+
+ server webA 192.168.1.11:80 cookie A weight 12 check
+ server webB 192.168.1.12:80 cookie B weight 12 check
+ server webC 192.168.1.13:80 cookie C weight 26 check
+ server webD 192.168.1.14:80 cookie D weight 26 check
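With such weights, the resulting distribution can be sketched in Python. This is a naive proportional scheduler, not HAProxy's actual smooth weighted round-robin, but the long-run ratios come out the same:

```python
def weighted_counts(servers, rounds):
    """Distribute 'rounds' requests proportionally to each server's
    weight by always picking the server with the lowest served/weight
    ratio. HAProxy's real scheduler interleaves more smoothly, yet
    yields the same proportions."""
    counts = {name: 0 for name, _ in servers}
    for _ in range(rounds):
        # next pick: the server currently furthest below its fair share
        name, weight = min(servers, key=lambda s: counts[s[0]] / s[1])
        counts[name] += 1
    return counts
```

Over 76 requests (the sum of the weights 12+12+26+26), each server receives a share matching its weight, i.e. the Opterons get roughly twice the traffic of the P3s.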
+
+
+========================================================
+2.1 Variations involving external layer 4 load-balancers
+========================================================
+
+Instead of using a VRRP-based active/backup solution for the proxies,
+they can also be load-balanced by a layer 4 load-balancer (eg: Alteon)
+which will also check that the services run fine on both proxies :
+
+ | VIP=192.168.1.1
+ +----+----+
+ | Alteon |
+ +----+----+
+ |
+ 192.168.1.3 | 192.168.1.4 192.168.1.11-192.168.1.14 192.168.1.2
+ -------+-----+------+-----------+-----+-----+-----+--------+----
+ | | | | | | _|_db
+ +--+--+ +--+--+ +-+-+ +-+-+ +-+-+ +-+-+ (___)
+ | LB1 | | LB2 | | A | | B | | C | | D | (___)
+ +-----+ +-----+ +---+ +---+ +---+ +---+ (___)
+ haproxy haproxy 4 cheap web servers
+
+
+Config on both proxies (LB1 and LB2) :
+--------------------------------------
+
+ listen webfarm 0.0.0.0:80
+ mode http
+ balance roundrobin
+ cookie JSESSIONID prefix
+ option httpclose
+ option forwardfor
+ option httplog
+ option dontlognull
+ option httpchk HEAD /index.html HTTP/1.0
+ server webA 192.168.1.11:80 cookie A check
+ server webB 192.168.1.12:80 cookie B check
+ server webC 192.168.1.13:80 cookie C check
+ server webD 192.168.1.14:80 cookie D check
+
+The "dontlognull" option is used to prevent the proxy from logging the health
+checks from the Alteon. If a session exchanges no data, then it will not be
+logged.
+
+Config on the Alteon :
+----------------------
+
+ /c/slb/real 11
+ ena
+ name "LB1"
+ rip 192.168.1.3
+ /c/slb/real 12
+ ena
+ name "LB2"
+ rip 192.168.1.4
+ /c/slb/group 10
+ name "LB1-2"
+ metric roundrobin
+ health tcp
+ add 11
+ add 12
+ /c/slb/virt 10
+ ena
+ vip 192.168.1.1
+ /c/slb/virt 10/service http
+ group 10
+
+
+Note: the health-check on the Alteon is set to "tcp" to prevent the proxy from
+forwarding the connections. It can also be set to "http", but then the proxy
+must specify a "monitor-net" with the Alteons' addresses, so that the Alteon
+can really check that the proxies can talk HTTP without forwarding the
+connections to the end servers. See the next section for an example of how to
+use monitor-net.
+
+
+============================================================
+2.2 Generic TCP relaying and external layer 4 load-balancers
+============================================================
+
+Sometimes it's useful to be able to relay generic TCP protocols (SMTP, TSE,
+VNC, etc.), for example to interconnect private networks. The problem comes
+when you use external load-balancers which need to send periodic health-checks
+to the proxies, because these health-checks get forwarded to the end servers.
+The solution is to specify a network dedicated to the monitoring systems,
+whose connections will neither be forwarded nor logged, using the
+"monitor-net" keyword. Note: this feature requires haproxy version 1.1.32 or
+1.2.6 and above.
+
+
+ | VIP=172.16.1.1 |
+ +----+----+ +----+----+
+ | Alteon1 | | Alteon2 |
+ +----+----+ +----+----+
+ 192.168.1.252 | GW=192.168.1.254 | 192.168.1.253
+ | |
+ ------+---+------------+--+-----------------> TSE farm : 192.168.1.10
+ 192.168.1.1 | | 192.168.1.2
+ +--+--+ +--+--+
+ | LB1 | | LB2 |
+ +-----+ +-----+
+ haproxy haproxy
+
+
+Config on both proxies (LB1 and LB2) :
+--------------------------------------
+
+ listen tse-proxy
+ bind :3389,:1494,:5900 # TSE, ICA and VNC at once.
+ mode tcp
+ balance roundrobin
+ server tse-farm 192.168.1.10
+ monitor-net 192.168.1.252/31
+
+The "monitor-net" option instructs the proxies that any connection coming from
+192.168.1.252 or 192.168.1.253 will not be logged nor forwarded and will be
+closed immediately. The Alteon load-balancers will then see the proxies alive
+without perturbating the service.
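For reference, a /31 contains exactly two addresses, which is why 192.168.1.252/31 matches both Alteons and nothing else. The membership test can be sketched in shell (the helper names are illustrative; haproxy does this check internally):

```shell
#!/bin/sh
# Convert a dotted-quad address to a 32-bit integer.
ip_to_int() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# True if the given source address falls inside 192.168.1.252/31,
# i.e. would be treated as pure monitoring traffic.
in_monitor_net() {
    net=$(ip_to_int 192.168.1.252)
    mask=$((0xFFFFFFFE))          # /31 netmask
    src=$(ip_to_int "$1")
    [ $(( src & mask )) -eq $(( net & mask )) ]
}

for ip in 192.168.1.252 192.168.1.253 192.168.1.254; do
    if in_monitor_net "$ip"; then
        echo "$ip: monitored (not logged, not forwarded)"
    else
        echo "$ip: regular client traffic"
    fi
done
```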
+
+Config on the Alteon :
+----------------------
+
+ /c/l3/if 1
+ ena
+ addr 192.168.1.252
+ mask 255.255.255.0
+ /c/slb/real 11
+ ena
+ name "LB1"
+ rip 192.168.1.1
+ /c/slb/real 12
+ ena
+ name "LB2"
+ rip 192.168.1.2
+ /c/slb/group 10
+ name "LB1-2"
+ metric roundrobin
+ health tcp
+ add 11
+ add 12
+ /c/slb/virt 10
+ ena
+ vip 172.16.1.1
+ /c/slb/virt 10/service 1494
+ group 10
+ /c/slb/virt 10/service 3389
+ group 10
+ /c/slb/virt 10/service 5900
+ group 10
+
+
+Special handling of SSL :
+-------------------------
+Sometimes, you want to send health-checks to remote systems, even in TCP mode,
+in order to be able to failover to a backup server in case the first one is
+dead. Of course, you can simply enable TCP health-checks, but it sometimes
+happens that intermediate firewalls between the proxies and the remote servers
+acknowledge the TCP connection themselves, showing an always-up server. Since
+this is generally encountered on long-distance communications, which often
+involve SSL, an SSL health-check has been implemented to work around this
+issue. It sends SSL Hello messages to the remote server, which in turn replies
+with SSL Hello messages. Setting it up is very easy :
+
+ listen tcp-syslog-proxy
+ bind :1514 # listen to TCP syslog traffic on this port (SSL)
+ mode tcp
+ balance roundrobin
+ option ssl-hello-chk
+ server syslog-prod-site 192.168.1.10 check
+ server syslog-back-site 192.168.2.10 check backup
+
+
+=========================================================
+3. Simple HTTP/HTTPS load-balancing with cookie insertion
+=========================================================
+
+This is the same context as in example 1 above, but the web
+server uses HTTPS.
+
+ +-------+
+ |clients| clients
+ +---+---+
+ |
+ -+-----+--------+----
+ | _|_db
+ +--+--+ (___)
+ | SSL | (___)
+ | web | (___)
+ +-----+
+ 192.168.1.1 192.168.1.2
+
+
+Since haproxy does not handle SSL, this part will have to be extracted from the
+servers (freeing even more resources) and installed on the load-balancer
+itself. Install haproxy and apache+mod_ssl on the old box which will spread the
+load between the new boxes. Apache will work as an SSL reverse-proxy-cache. If
+the application is correctly developed, it might even lower its load. However,
+since there is now a cache between the clients and haproxy, some security
+measures must be taken to ensure that inserted cookies will not be cached.
+
+
+ 192.168.1.1 192.168.1.11-192.168.1.14 192.168.1.2
+ -------+-----------+-----+-----+-----+--------+----
+ | | | | | _|_db
+ +--+--+ +-+-+ +-+-+ +-+-+ +-+-+ (___)
+ | LB1 | | A | | B | | C | | D | (___)
+ +-----+ +---+ +---+ +---+ +---+ (___)
+ apache 4 cheap web servers
+ mod_ssl
+ haproxy
+
+
+Config on haproxy (LB1) :
+-------------------------
+
+ listen 127.0.0.1:8000
+ mode http
+ balance roundrobin
+ cookie SERVERID insert indirect nocache
+ option httpchk HEAD /index.html HTTP/1.0
+ server webA 192.168.1.11:80 cookie A check
+ server webB 192.168.1.12:80 cookie B check
+ server webC 192.168.1.13:80 cookie C check
+ server webD 192.168.1.14:80 cookie D check
+
+
+Description :
+-------------
+ - apache on LB1 will receive client requests on port 443
+ - it forwards them to haproxy bound to 127.0.0.1:8000
+ - if a request does not contain a cookie, it will be forwarded to a valid
+ server
+ - in return, a cookie "SERVERID" will be inserted in the response holding the
+ server name (eg: "A"), and a "Cache-control: private" header will be added
+ so that apache does not cache any page containing such a cookie.
+ - when the client comes back with the cookie "SERVERID=A", LB1 will know that
+ the request must be forwarded to server A. The cookie will be removed so
+ that the server does not see it.
+ - if server "webA" dies, the requests will be sent to another valid server
+ and a cookie will be reassigned.
+
+Notes :
+-------
+ - if the cookie works in "prefix" mode, there is no need to add the "nocache"
+ option because it is an application cookie which will be modified, and the
+ application flags will be preserved.
+ - if apache 1.3 is used as a front-end before haproxy, it always disables
+ HTTP keep-alive on the back-end, so there is no need for the "httpclose"
+ option on haproxy.
+ - configure apache to set the X-Forwarded-For header itself, and do not do
+ it on haproxy if you need the application to know about the client's IP.
+
+
+Flows :
+-------
+
+(apache) (haproxy) (server A)
+ >-- GET /URI1 HTTP/1.0 ------------> |
+ ( no cookie, haproxy forwards in load-balancing mode. )
+ | >-- GET /URI1 HTTP/1.0 ---------->
+ | <-- HTTP/1.0 200 OK -------------<
+ ( the proxy now adds the server cookie in return )
+ <-- HTTP/1.0 200 OK ---------------< |
+ Set-Cookie: SERVERID=A |
+ Cache-Control: private |
+ >-- GET /URI2 HTTP/1.0 ------------> |
+ Cookie: SERVERID=A |
+ ( the proxy sees the cookie. it forwards to server A and deletes it )
+ | >-- GET /URI2 HTTP/1.0 ---------->
+ | <-- HTTP/1.0 200 OK -------------<
+ ( the proxy does not add the cookie in return because the client knows it )
+ <-- HTTP/1.0 200 OK ---------------< |
+ >-- GET /URI3 HTTP/1.0 ------------> |
+ Cookie: SERVERID=A |
+ ( ... )
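The cookie decision in the flow above can be mimicked in a few lines of shell (purely illustrative; haproxy does all of this internally as part of "cookie SERVERID insert indirect"):

```shell
#!/bin/sh
# Mimic the routing decision: if the request carries a SERVERID
# cookie, forward to that server and strip the cookie ("indirect");
# otherwise fall back to load-balancing.
route_request() {
    cookie_line=$1
    case "$cookie_line" in
        *SERVERID=*)
            server=${cookie_line##*SERVERID=}
            server=${server%%;*}
            echo "forward to server $server, cookie stripped"
            ;;
        *)
            echo "no cookie: pick a server by round-robin"
            ;;
    esac
}

route_request "Cookie: SERVERID=A"
route_request ""
```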
+
+
+
+========================================
+3.1. Alternate solution using Stunnel
+========================================
+
+When only SSL is required and cache is not needed, stunnel is a cheaper
+solution than Apache+mod_ssl. By default, stunnel does not process HTTP and
+does not add any X-Forwarded-For header, but there is a patch on the official
+haproxy site to provide this feature to recent stunnel versions.
+
+This time, stunnel will only process HTTPS and not HTTP. This means that
+haproxy will get all HTTP traffic, so haproxy will have to add the
+X-Forwarded-For header for HTTP traffic, but not for HTTPS traffic since
+stunnel will already have done it. We will use the "except" keyword to tell
+haproxy that connections from the local host already have a valid header.
+
+
+ 192.168.1.1 192.168.1.11-192.168.1.14 192.168.1.2
+ -------+-----------+-----+-----+-----+--------+----
+ | | | | | _|_db
+ +--+--+ +-+-+ +-+-+ +-+-+ +-+-+ (___)
+ | LB1 | | A | | B | | C | | D | (___)
+ +-----+ +---+ +---+ +---+ +---+ (___)
+ stunnel 4 cheap web servers
+ haproxy
+
+
+Config on stunnel (LB1) :
+-------------------------
+
+ cert=/etc/stunnel/stunnel.pem
+ setuid=stunnel
+ setgid=proxy
+
+ socket=l:TCP_NODELAY=1
+ socket=r:TCP_NODELAY=1
+
+ [https]
+ accept=192.168.1.1:443
+ connect=192.168.1.1:80
+ xforwardedfor=yes
+
+
+Config on haproxy (LB1) :
+-------------------------
+
+ listen 192.168.1.1:80
+ mode http
+ balance roundrobin
+ option forwardfor except 192.168.1.1
+ cookie SERVERID insert indirect nocache
+ option httpchk HEAD /index.html HTTP/1.0
+ server webA 192.168.1.11:80 cookie A check
+ server webB 192.168.1.12:80 cookie B check
+ server webC 192.168.1.13:80 cookie C check
+ server webD 192.168.1.14:80 cookie D check
+
+Description :
+-------------
+ - stunnel on LB1 will receive client requests on port 443
+ - it forwards them to haproxy bound to port 80
+ - haproxy will receive HTTP client requests on port 80 and decrypted SSL
+ requests from Stunnel on the same port.
+ - stunnel will add the X-Forwarded-For header
+ - haproxy will add the X-Forwarded-For header for everyone except the local
+ address (stunnel).
+
+
+========================================
+4. Soft-stop for application maintenance
+========================================
+
+When an application is spread across several servers, the time needed to
+update all instances increases, so the application appears erratic for a
+longer period.
+
+haproxy offers several solutions for this. Although it cannot be reconfigured
+without being stopped, nor does it offer any external command, there are other
+working solutions.
+
+
+=========================================
+4.1 Soft-stop using a file on the servers
+=========================================
+
+This trick is quite common and very simple: put a file on the server which
+will be checked by the proxy. When you want to stop the server, first remove
+this file. The proxy will then see the server as failed and will not send it
+any new sessions, only the old ones if the "persist" option is used. Wait a
+bit, then stop the server once it no longer receives any connections.
+
+
+ listen 192.168.1.1:80
+ mode http
+ balance roundrobin
+ cookie SERVERID insert indirect
+ option httpchk HEAD /running HTTP/1.0
+ server webA 192.168.1.11:80 cookie A check inter 2000 rise 2 fall 2
+ server webB 192.168.1.12:80 cookie B check inter 2000 rise 2 fall 2
+ server webC 192.168.1.13:80 cookie C check inter 2000 rise 2 fall 2
+ server webD 192.168.1.14:80 cookie D check inter 2000 rise 2 fall 2
+ option persist
+ redispatch
+ contimeout 5000
+
+
+Description :
+-------------
+ - every 2 seconds, haproxy will try to access the file "/running" on the
+ servers, and declare the server as down after 2 attempts (4 seconds).
+ - only the servers which respond with a 200 or 3XX response will be used.
+ - if a request does not contain a cookie, it will be forwarded to a valid
+ server
+ - if a request contains a cookie for a failed server, haproxy will insist
+ on trying to reach the server anyway, to let the user finish what he was
+ doing. ("persist" option)
+ - if the server is totally stopped, the connection will fail and the proxy
+ will rebalance the client to another server ("redispatch")
+
+Usage on the web servers :
+--------------------------
+- to start the server :
+ # /etc/init.d/httpd start
+ # touch /home/httpd/www/running
+
+- to soft-stop the server :
+ # rm -f /home/httpd/www/running
+
+- to completely stop the server :
+ # /etc/init.d/httpd stop
+
+Limits
+------
+If the server is totally powered down, the proxy will still try to reach it
+for those clients who still have a cookie referencing it, and the connection
+attempt will only expire after 5 seconds ("contimeout"); only then will the
+client be redispatched to another server. So this mode is only useful for
+software updates where the server will suddenly refuse the connection because
+the process is stopped. The problem is the same if the server suddenly
+crashes: all of its users will be noticeably disturbed.
+
+
+==================================
+4.2 Soft-stop using backup servers
+==================================
+
+A better solution which covers every situation is to use backup servers.
+Version 1.1.30 fixed a bug which prevented a backup server from sharing
+the same cookie as a standard server.
+
+
+ listen 192.168.1.1:80
+ mode http
+ balance roundrobin
+ redispatch
+ cookie SERVERID insert indirect
+ option httpchk HEAD / HTTP/1.0
+ server webA 192.168.1.11:80 cookie A check port 81 inter 2000
+ server webB 192.168.1.12:80 cookie B check port 81 inter 2000
+ server webC 192.168.1.13:80 cookie C check port 81 inter 2000
+ server webD 192.168.1.14:80 cookie D check port 81 inter 2000
+
+ server bkpA 192.168.1.11:80 cookie A check port 80 inter 2000 backup
+ server bkpB 192.168.1.12:80 cookie B check port 80 inter 2000 backup
+ server bkpC 192.168.1.13:80 cookie C check port 80 inter 2000 backup
+ server bkpD 192.168.1.14:80 cookie D check port 80 inter 2000 backup
+
+Description
+-----------
+Four servers webA..D are checked on their port 81 every 2 seconds. The same
+servers named bkpA..D are checked on port 80, and share the exact same
+cookies. Those servers will only be used when no other server is available
+for the same cookie.
+
+When the web servers are started, only the backup servers are seen as
+available. On the web servers, you need to redirect port 81 to local
+port 80, either with a local proxy (eg: a simple haproxy tcp instance),
+or with iptables (linux) or pf (openbsd). This is because we want the
+real web server to reply on this port, and not a fake one. Eg, with
+iptables :
+
+ # /etc/init.d/httpd start
+ # iptables -t nat -A PREROUTING -p tcp --dport 81 -j REDIRECT --to-port 80
+
+A few seconds later, the standard server is seen up and haproxy starts to send
+it new requests on its real port 80 (only new users with no cookie, of course).
+
+If a server completely crashes (even if it does not respond at the IP level),
+both the standard and backup servers will fail, so clients associated to this
+server will be redispatched to other live servers and will lose their sessions.
+
+Now if you want to enter a server into maintenance, simply stop it from
+responding on port 81 so that its standard instance will be seen as failed,
+but the backup will still work. Users will not notice anything since the
+service is still operational :
+
+ # iptables -t nat -D PREROUTING -p tcp --dport 81 -j REDIRECT --to-port 80
+
+The health checks on port 81 for this server will quickly fail, and the
+standard server will be seen as failed. No new session will be sent to this
+server, and existing clients with a valid cookie will still reach it because
+the backup server will still be up.
+
+Now wait as long as you want for the old users to stop using the service, and
+once you see that the server does not receive any traffic, simply stop it :
+
+ # /etc/init.d/httpd stop
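Judging when the server has really drained can be scripted. The sketch below counts established sessions from `ss`/`netstat`-style output; the sample lines are fabricated for illustration, and the commented polling loop assumes ss(8) is available on the server:

```shell
#!/bin/sh
# Count ESTAB lines from "ss -tan"-style output fed on stdin.
count_established() {
    grep -c '^ESTAB' || true
}

# On a real server one would poll until the count reaches zero, e.g.:
#   while [ "$(ss -tan 'sport = :80' | count_established)" -gt 0 ]; do
#       sleep 5
#   done
#   /etc/init.d/httpd stop

# Demonstration on fabricated output (two established sessions):
sample='ESTAB 0 0 192.168.1.11:80 10.0.0.5:51234
ESTAB 0 0 192.168.1.11:80 10.0.0.6:51240
TIME-WAIT 0 0 192.168.1.11:80 10.0.0.7:51300'
printf '%s\n' "$sample" | count_established
```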
+
+The associated backup server will in turn fail, and if any client still tries
+to access this particular server, he will be redispatched to any other valid
+server because of the "redispatch" option.
+
+This method has an advantage : you never touch the proxy when doing server
+maintenance. The people managing the servers can make them disappear smoothly.
+
+
+4.2.1 Variations for operating systems without any firewall software
+--------------------------------------------------------------------
+
+The downside is that you need a redirection solution on the server just for
+the health-checks. If the server OS does not support any firewall software,
+this redirection can also be handled by a simple haproxy in tcp mode :
+
+ global
+ daemon
+ quiet
+ pidfile /var/run/haproxy-checks.pid
+ listen 0.0.0.0:81
+ mode tcp
+ dispatch 127.0.0.1:80
+ contimeout 1000
+ clitimeout 10000
+ srvtimeout 10000
+
+To start the web service :
+
+ # /etc/init.d/httpd start
+ # haproxy -f /etc/haproxy/haproxy-checks.cfg
+
+To soft-stop the service :
+
+ # kill $(</var/run/haproxy-checks.pid)
+
+Port 81 will stop responding and the load-balancer will notice the failure.
+
+
+4.2.2 Centralizing the server management
+----------------------------------------
+
+If one finds it preferable to manage the servers from the load-balancer
+itself, the port redirector can be installed there instead. See the
+example with iptables below.
+
+Make the servers appear as operational :
+ # iptables -t nat -A OUTPUT -d 192.168.1.11 -p tcp --dport 81 -j DNAT --to-dest :80
+ # iptables -t nat -A OUTPUT -d 192.168.1.12 -p tcp --dport 81 -j DNAT --to-dest :80
+ # iptables -t nat -A OUTPUT -d 192.168.1.13 -p tcp --dport 81 -j DNAT --to-dest :80
+ # iptables -t nat -A OUTPUT -d 192.168.1.14 -p tcp --dport 81 -j DNAT --to-dest :80
+
+Soft stop one server :
+ # iptables -t nat -D OUTPUT -d 192.168.1.12 -p tcp --dport 81 -j DNAT --to-dest :80
+
+Another solution is to use the "COMAFILE" patch provided by Alexander Lazic,
+which is available for download here :
+
+ http://w.ods.org/tools/haproxy/contrib/
+
+
+4.2.3 Notes :
+-------------
+ - Never, ever, start a fake service on port 81 for the health-checks, because
+ a real web service failure will not be detected as long as the fake service
+ runs. You must really forward the check port to the real application.
+
+ - health-checks will be sent twice as often, once for each standard server,
+ and once for each backup server. All this will be multiplied by the
+ number of processes if you use multi-process mode. You will have to ensure
+ that all the checks sent to the server do not overload it.
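The resulting check load is easy to estimate: each backend server receives one probe per "inter" interval from its standard entry and one from its backup entry, all multiplied by the number of processes. A sketch with the values used in the configs above (nbproc assumed to be 1):

```shell
#!/bin/sh
# Rough health-check load per backend server.
inter_ms=2000   # "inter 2000" as in the configs above
entries=2       # one standard + one backup server line
nbproc=1        # single-process mode assumed
checks_per_minute=$(( 60000 / inter_ms * entries * nbproc ))
echo "about $checks_per_minute checks per minute per server"
```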
+
+=======================
+4.3 Hot reconfiguration
+=======================
+
+There are two types of haproxy users :
+ - those who can never do anything in production outside of maintenance
+ periods ;
+ - those who can do anything at any time provided that the consequences are
+ limited.
+
+The first ones have no problem stopping the server to change the configuration
+because they have maintenance periods during which they can break anything. So
+they will even prefer doing a clean stop/start sequence to ensure everything
+will work fine upon the next reload. Since such users have represented the
+majority of haproxy deployments, there has been little effort to improve this.
+
+However, the second category is a bit different. They like to be able to fix an
+error in a configuration file without anyone noticing. This can sometimes also
+be the case for the first category because humans are not failsafe.
+
+For this reason, a new hot reconfiguration mechanism has been introduced in
+version 1.1.34. Its usage is very simple and works even in chrooted
+environments with lowered privileges. The principle is very simple : upon
+reception of a SIGTTOU signal, the proxy will stop listening to all the ports.
+This will release the ports so that a new instance can be started. Existing
+connections will not be broken at all. If the new instance fails to start,
+then sending a SIGTTIN signal back to the original processes will restore
+the listening ports. This is possible without any special privileges because
+the sockets will not have been closed, so the bind() is still valid. Otherwise,
+if the new process starts successfully, then sending a SIGUSR1 signal to the
+old one ensures that it will exit as soon as its last session ends.
+
+A hot reconfiguration script would look like this :
+
+ # save previous state
+ mv /etc/haproxy/config /etc/haproxy/config.old
+ mv /var/run/haproxy.pid /var/run/haproxy.pid.old
+
+ mv /etc/haproxy/config.new /etc/haproxy/config
+ kill -TTOU $(cat /var/run/haproxy.pid.old)
+ if haproxy -p /var/run/haproxy.pid -f /etc/haproxy/config; then
+ echo "New instance successfully loaded, stopping previous one."
+ kill -USR1 $(cat /var/run/haproxy.pid.old)
+ rm -f /var/run/haproxy.pid.old
+ exit 0
+ else
+ echo "New instance failed to start, resuming previous one."
+ kill -TTIN $(cat /var/run/haproxy.pid.old)
+ rm -f /var/run/haproxy.pid
+ mv /var/run/haproxy.pid.old /var/run/haproxy.pid
+ mv /etc/haproxy/config /etc/haproxy/config.new
+ mv /etc/haproxy/config.old /etc/haproxy/config
+ exit 1
+ fi
+
+After this, you can still force old connections to end by sending
+a SIGTERM to the old process if it still exists :
+
+ kill $(cat /var/run/haproxy.pid.old)
+ rm -f /var/run/haproxy.pid.old
+
+Be careful with this as in multi-process mode, some pids might already
+have been reallocated to completely different processes.
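To guard against that, one can verify the command name behind each pid before signalling it. A defensive sketch (the helper name is ours, and it assumes a ps(1) supporting "-o comm="):

```shell
#!/bin/sh
# Send SIGTERM only if the pid still belongs to an haproxy process,
# so a recycled pid pointing at an unrelated process is left alone.
kill_if_haproxy() {
    if [ "$(ps -o comm= -p "$1" 2>/dev/null)" = "haproxy" ]; then
        kill "$1"
    else
        echo "pid $1 is no longer haproxy, skipping"
    fi
}

# Typical use against the saved pid file:
#   for pid in $(cat /var/run/haproxy.pid.old); do
#       kill_if_haproxy "$pid"
#   done
```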
+
+
+==================================================
+5. Multi-site load-balancing with local preference
+==================================================
+
+5.1 Description of the problem
+==============================
+
+Consider a world-wide company with sites on several continents. There are two
+production sites SITE1 and SITE2 which host identical applications. There are
+many offices around the world. For speed and communication cost reasons, each
+office uses the nearest site by default, but can switch to the backup site in
+the event of a site or application failure. There are also users on the
+production sites, who use their local site by default but can switch to the
+other site in case of a local application failure.
+
+The main constraints are :
+
+ - application persistence : although the application is the same on both
+ sites, there is no session synchronisation between the sites. A failure
+ of one server or one site can cause a user to switch to another server
+ or site, but when the server or site comes back, the user must not switch
+ again.
+
+ - communication costs : inter-site communication should be reduced to the
+ minimum. Specifically, in case of a local application failure, every
+ office should be able to switch to the other site without continuing to
+ use the default site.
+
+5.2 Solution
+============
+ - Each production site will have two haproxy load-balancers in front of its
+ application servers to balance the load across them and provide local HA.
+ We will call them "S1L1" and "S1L2" on site 1, and "S2L1" and "S2L2" on
+ site 2. These proxies will extend the application's JSESSIONID cookie to
+ put the server name as a prefix.
+
+ - Each production site will have one front-end haproxy director to provide
+ the service to local users and to remote offices. It will load-balance
+ across the two local load-balancers, and will use the other site's
+ load-balancers as backup servers. It will insert the local site identifier
+ in a SITE cookie for the local load-balancers, and the remote site
+ identifier for the remote load-balancers. These front-end directors will
+ be called "SD1" and "SD2" for "Site Director".
+
+ - Each office will have one haproxy near the border gateway which will direct
+ local users to their preferred site by default, or to the backup site in
+ the event of a previous failure. It will also analyze the SITE cookie, and
+ direct the users to the site referenced in the cookie. Thus, the preferred
+ site will be declared as a normal server, and the backup site will be
+ declared as a backup server only, which will only be used when the primary
+ site is unreachable, or when the primary site's director has forwarded
+ traffic to the second site. These proxies will be called "OP1".."OPXX"
+ for "Office Proxy #XX".
+
+
+5.3 Network diagram
+===================
+
+Note : offices 1 and 2 are on the same continent as site 1, while
+ office 3 is on the same continent as site 2. Each production
+ site can reach the second one either through the WAN or through
+ a dedicated link.
+
+
+ Office1 Office2 Office3
+ users users users
+192.168 # # # 192.168 # # # # # #
+.1.0/24 | | | .2.0/24 | | | 192.168.3.0/24 | | |
+ --+----+-+-+- --+----+-+-+- ---+----+-+-+-
+ | | .1 | | .1 | | .1
+ | +-+-+ | +-+-+ | +-+-+
+ | |OP1| | |OP2| | |OP3| ...
+ ,-:-. +---+ ,-:-. +---+ ,-:-. +---+
+ ( X ) ( X ) ( X )
+ `-:-' `-:-' ,---. `-:-'
+ --+---------------+------+----~~~( X )~~~~-------+---------+-
+ | `---' |
+ | |
+ +---+ ,-:-. +---+ ,-:-.
+ |SD1| ( X ) |SD2| ( X )
+ ( SITE 1 ) +-+-+ `-:-' ( SITE 2 ) +-+-+ `-:-'
+ |.1 | |.1 |
+ 10.1.1.0/24 | | ,---. 10.2.1.0/24 | |
+ -+-+-+-+-+-+-+-----+-+--( X )------+-+-+-+-+-+-+-----+-+--
+ | | | | | | | `---' | | | | | | |
+ ...# # # # # |.11 |.12 ...# # # # # |.11 |.12
+ Site 1 +-+--+ +-+--+ Site 2 +-+--+ +-+--+
+ Local |S1L1| |S1L2| Local |S2L1| |S2L2|
+ users +-+--+ +--+-+ users +-+--+ +--+-+
+ | | | |
+ 10.1.2.0/24 -+-+-+--+--++-- 10.2.2.0/24 -+-+-+--+--++--
+ |.1 |.4 |.1 |.4
+ +-+-+ +-+-+ +-+-+ +-+-+
+ |W11| ~~~ |W14| |W21| ~~~ |W24|
+ +---+ +---+ +---+ +---+
+ 4 application servers 4 application servers
+ on site 1 on site 2
+
+
+
+5.4 Description
+===============
+
+5.4.1 Local users
+-----------------
+ - Office 1 users connect to OP1 = 192.168.1.1
+ - Office 2 users connect to OP2 = 192.168.2.1
+ - Office 3 users connect to OP3 = 192.168.3.1
+ - Site 1 users connect to SD1 = 10.1.1.1
+ - Site 2 users connect to SD2 = 10.2.1.1
+
+5.4.2 Office proxies
+--------------------
+ - Office 1 connects to site 1 by default and uses site 2 as a backup.
+ - Office 2 connects to site 1 by default and uses site 2 as a backup.
+ - Office 3 connects to site 2 by default and uses site 1 as a backup.
+
+The offices check the local site's SD proxy every 30 seconds, and the
+remote one every 60 seconds.
+
+
+Configuration for Office Proxy OP1
+----------------------------------
+
+ listen 192.168.1.1:80
+ mode http
+ balance roundrobin
+ redispatch
+ cookie SITE
+ option httpchk HEAD / HTTP/1.0
+ server SD1 10.1.1.1:80 cookie SITE1 check inter 30000
+ server SD2 10.2.1.1:80 cookie SITE2 check inter 60000 backup
+
+
+Configuration for Office Proxy OP2
+----------------------------------
+
+ listen 192.168.2.1:80
+ mode http
+ balance roundrobin
+ redispatch
+ cookie SITE
+ option httpchk HEAD / HTTP/1.0
+ server SD1 10.1.1.1:80 cookie SITE1 check inter 30000
+ server SD2 10.2.1.1:80 cookie SITE2 check inter 60000 backup
+
+
+Configuration for Office Proxy OP3
+----------------------------------
+
+ listen 192.168.3.1:80
+ mode http
+ balance roundrobin
+ redispatch
+ cookie SITE
+ option httpchk HEAD / HTTP/1.0
+ server SD2 10.2.1.1:80 cookie SITE2 check inter 30000
+ server SD1 10.1.1.1:80 cookie SITE1 check inter 60000 backup
+
+
+5.4.3 Site directors ( SD1 and SD2 )
+------------------------------------
+The site directors forward traffic to the local load-balancers, and set a
+cookie to identify the site. If no local load-balancer is available, or if
+the local application servers are all down, they will redirect traffic to the
+remote site, and report this in the SITE cookie. In order not to uselessly
+load each site's WAN link, each SD will check the other site at a lower
+rate. The site directors will also insert their client's address so that
+the application server knows which local user or remote site accesses it.
+
+The SITE cookie which is set by these directors will also be understood
+by the office proxies. This is important because if SD1 decides to forward
+traffic to site 2, it will write "SITE2" in the "SITE" cookie, and on next
+request, the office proxy will automatically and directly talk to SITE2 if
+it can reach it. If it cannot, it will still send the traffic to SITE1
+where SD1 will in turn try to reach SITE2.
+
+The load-balancer checks are performed on port 81. As we'll see below,
+the load-balancers provide a health monitoring port 81 which reroutes to
+port 80 but which allows them to tell the SD that they are going down soon
+and that the SD must not use them anymore.
+
+
+Configuration for SD1
+---------------------
+
+ listen 10.1.1.1:80
+ mode http
+ balance roundrobin
+ redispatch
+ cookie SITE insert indirect
+ option httpchk HEAD / HTTP/1.0
+ option forwardfor
+ server S1L1 10.1.1.11:80 cookie SITE1 check port 81 inter 4000
+ server S1L2 10.1.1.12:80 cookie SITE1 check port 81 inter 4000
+ server S2L1 10.2.1.11:80 cookie SITE2 check port 81 inter 8000 backup
+ server S2L2 10.2.1.12:80 cookie SITE2 check port 81 inter 8000 backup
+
+Configuration for SD2
+---------------------
+
+ listen 10.2.1.1:80
+ mode http
+ balance roundrobin
+ redispatch
+ cookie SITE insert indirect
+ option httpchk HEAD / HTTP/1.0
+ option forwardfor
+ server S2L1 10.2.1.11:80 cookie SITE2 check port 81 inter 4000
+ server S2L2 10.2.1.12:80 cookie SITE2 check port 81 inter 4000
+ server S1L1 10.1.1.11:80 cookie SITE1 check port 81 inter 8000 backup
+ server S1L2 10.1.1.12:80 cookie SITE1 check port 81 inter 8000 backup
+
+
+5.4.4 Local load-balancers S1L1, S1L2, S2L1, S2L2
+-------------------------------------------------
+Please first note that because SD1 and SD2 use the same cookie for both
+servers on the same site, the second load-balancer of each site will only
+receive load-balanced requests; as soon as the SITE cookie is set, only
+the first LB will receive the requests because it will be the first one
+to match the cookie.
+
+The load-balancers will spread the load across 4 local web servers, and
+use the JSESSIONID provided by the application to provide server persistence
+using the new 'prefix' method. Soft-stop will also be implemented as described
+in section 4 above. Moreover, these proxies will provide their own maintenance
+soft-stop. Port 80 will be used for application traffic, while port 81 will
+only be used for health-checks and locally rerouted to port 80. A grace time
+will be specified to service on port 80, but not on port 81. This way, a soft
+kill (kill -USR1) on the proxy will only kill the health-check forwarder so
+that the site director knows it must not use this load-balancer anymore. But
+the service will still work for 20 seconds and as long as there are established
+sessions.
+
+These proxies will also be the only ones to disable HTTP keep-alive in the
+chain, because it is enough to do it at one place, and it's necessary to do
+it with 'prefix' cookies.
+
+Configuration for S1L1/S1L2
+---------------------------
+
+ listen 10.1.1.11:80 # 10.1.1.12:80 for S1L2
+ grace 20000 # don't kill us until 20 seconds have elapsed
+ mode http
+ balance roundrobin
+ cookie JSESSIONID prefix
+ option httpclose
+ option forwardfor
+ option httpchk HEAD / HTTP/1.0
+ server W11 10.1.2.1:80 cookie W11 check port 81 inter 2000
+ server W12 10.1.2.2:80 cookie W12 check port 81 inter 2000
+ server W13 10.1.2.3:80 cookie W13 check port 81 inter 2000
+ server W14 10.1.2.4:80 cookie W14 check port 81 inter 2000
+
+ server B11 10.1.2.1:80 cookie W11 check port 80 inter 4000 backup
+ server B12 10.1.2.2:80 cookie W12 check port 80 inter 4000 backup
+ server B13 10.1.2.3:80 cookie W13 check port 80 inter 4000 backup
+ server B14 10.1.2.4:80 cookie W14 check port 80 inter 4000 backup
+
+ listen 10.1.1.11:81 # 10.1.1.12:81 for S1L2
+ mode tcp
+ dispatch 10.1.1.11:80 # 10.1.1.12:80 for S1L2
+
+
+Configuration for S2L1/S2L2
+---------------------------
+
+ listen 10.2.1.11:80 # 10.2.1.12:80 for S2L2
+ grace 20000 # don't kill us until 20 seconds have elapsed
+ mode http
+ balance roundrobin
+ cookie JSESSIONID prefix
+ option httpclose
+ option forwardfor
+ option httpchk HEAD / HTTP/1.0
+ server W21 10.2.2.1:80 cookie W21 check port 81 inter 2000
+ server W22 10.2.2.2:80 cookie W22 check port 81 inter 2000
+ server W23 10.2.2.3:80 cookie W23 check port 81 inter 2000
+ server W24 10.2.2.4:80 cookie W24 check port 81 inter 2000
+
+ server B21 10.2.2.1:80 cookie W21 check port 80 inter 4000 backup
+ server B22 10.2.2.2:80 cookie W22 check port 80 inter 4000 backup
+ server B23 10.2.2.3:80 cookie W23 check port 80 inter 4000 backup
+ server B24 10.2.2.4:80 cookie W24 check port 80 inter 4000 backup
+
+ listen 10.2.1.11:81 # 10.2.1.12:81 for S2L2
+ mode tcp
+ dispatch 10.2.1.11:80 # 10.2.1.12:80 for S2L2
+
+
+5.5 Comments
+------------
+Since each site director sets a cookie identifying the site, remote office
+users will have their office proxies direct them to the right site and stick
+to this site as long as the user still uses the application and the site is
+available. Users on production sites will be directed to the right site by the
+site directors depending on the SITE cookie.
+
+If the WAN link dies on a production site, the remote office users will not
+see their site anymore, so their office proxies will redirect the traffic to
+the second site.
+If there are dedicated inter-site links as on the diagram above, the second
+SD will see the cookie and still be able to reach the original site. For
+example :
+
+Office 1 user sends the following to OP1 :
+ GET / HTTP/1.0
+ Cookie: SITE=SITE1; JSESSIONID=W14~123;
+
+OP1 cannot reach site 1 because its external router is dead. So the SD1 server
+is seen as dead, and OP1 will then forward the request to SD2 on site 2,
+regardless of the SITE cookie.
+
+SD2 on site 2 receives a SITE cookie containing "SITE1". Fortunately, it
+can reach Site 1's load balancers S1L1 and S1L2. So it forwards the request
+to S1L1 (the first one with the same cookie).
+
+S1L1 (on site 1) finds "W14" in the JSESSIONID cookie, so it can forward the
+request to the right server, and the user session will continue to work. Once
+Site 1's WAN link comes back, OP1 will see SD1 again, and will not route
+through site 2 anymore.
+
+However, when a new user in Office 1 connects to the application during a
+site 1 failure, the request does not contain any cookie. Since OP1 does not
+see SD1 because of the network failure, it will direct the request to SD2 on
+site 2, which will by default direct the traffic to the local load balancers,
+S2L1 and S2L2. So only existing users will load the inter-site link, not the
+new ones.
+
+
+===================
+6. Source balancing
+===================
+
+Sometimes it may prove useful to access servers from a pool of IP addresses
+instead of only one or two. Some equipment (NAT firewalls, load balancers)
+is sensitive to the source address, and often needs many sources to
+distribute the load evenly amongst its internal hash buckets.
+
+To do this, simply declare the same server several times, each with a
+different source address. Example :
+
+ listen 0.0.0.0:80
+ mode tcp
+ balance roundrobin
+ server from1to1 10.1.1.1:80 source 10.1.2.1
+ server from2to1 10.1.1.1:80 source 10.1.2.2
+ server from3to1 10.1.1.1:80 source 10.1.2.3
+ server from4to1 10.1.1.1:80 source 10.1.2.4
+ server from5to1 10.1.1.1:80 source 10.1.2.5
+ server from6to1 10.1.1.1:80 source 10.1.2.6
+ server from7to1 10.1.1.1:80 source 10.1.2.7
+ server from8to1 10.1.1.1:80 source 10.1.2.8
+
+
+=============================================
+7. Managing high loads on application servers
+=============================================
+
+One of the roles often expected from a load balancer is to mitigate the load on
+the servers during traffic peaks. More and more often, we see heavy frameworks
+used to deliver flexible and evolving web designs, at the cost of high loads
+on the servers, or very low concurrency. Sometimes, response times are also
+rather high. People developing web sites relying on such frameworks very often
+look for a load balancer which is able to distribute the load as evenly as
+possible and which will be gentle with the servers.
+
+There is a powerful feature in haproxy which achieves exactly this : request
+queueing combined with a per-server concurrent connection limit.
+
+Let's say you have an application server which supports at most 20 concurrent
+requests. You have 3 servers, so you can accept up to 60 concurrent HTTP
+connections, which often means 30 concurrent users in case of keep-alive (2
+persistent connections per user).
+
+Even if you disable keep-alive, if the server takes a long time to respond,
+you still have a high risk of multiple users clicking at the same time and
+having their requests unserved because of server saturation. To work around
+the problem, you increase the concurrent connection limit on the servers,
+but their performance stalls under higher loads.
+
+The solution is to limit the number of connections between the clients and the
+servers. You set haproxy to limit the number of connections on a per-server
+basis, and you let all the users you want connect to it. It will then fill all
+the servers up to the configured connection limit, and will put the remaining
+connections in a queue, waiting for a connection to be released on a server.
+
+This ensures five essential principles :
+
+ - all clients can be served whatever their number without crashing the
+ servers, the only impact is that the response time can be delayed.
+
+ - the servers can be used at full throttle without the risk of stalling,
+ and fine tuning can lead to optimal performance.
+
+ - response times can be reduced by making the servers work below the
+ congestion point, effectively leading to shorter response times even
+ under moderate loads.
+
+ - no domino effect when a server goes down or starts up. Requests will be
+ queued more or less, always respecting server limits.
+
+ - it's easy to achieve high performance even on memory-limited hardware.
+ Indeed, heavy frameworks often consume huge amounts of RAM and not always
+ all the CPU available. In case of wrong sizing, reducing the number of
+ concurrent connections will protect against memory shortages while still
+ ensuring optimal CPU usage.
+
+
+Example :
+---------
+
+Haproxy is installed in front of an application server farm. It will limit
+the concurrent connections to 4 per server (one thread per CPU), thus ensuring
+very fast response times.
+
+
+ 192.168.1.1 192.168.1.11-192.168.1.13 192.168.1.2
+ -------+-------------+-----+-----+------------+----
+ | | | | _|_db
+ +--+--+ +-+-+ +-+-+ +-+-+ (___)
+ | LB1 | | A | | B | | C | (___)
+ +-----+ +---+ +---+ +---+ (___)
+ haproxy 3 application servers
+ with heavy frameworks
+
+
+Config on haproxy (LB1) :
+-------------------------
+
+ listen appfarm 192.168.1.1:80
+ mode http
+ maxconn 10000
+ option httpclose
+ option forwardfor
+ balance roundrobin
+ cookie SERVERID insert indirect
+ option httpchk HEAD /index.html HTTP/1.0
+ server railsA 192.168.1.11:80 cookie A maxconn 4 check
+ server railsB 192.168.1.12:80 cookie B maxconn 4 check
+ server railsC 192.168.1.13:80 cookie C maxconn 4 check
+ contimeout 60000
+
+
+Description :
+-------------
+The proxy listens on IP 192.168.1.1, port 80, and expects HTTP requests. It
+can accept up to 10000 concurrent connections on this socket. It follows the
+roundrobin algorithm to assign servers to connections as long as servers are
+not saturated.
+
+It allows up to 4 concurrent connections per server, and will queue the
+requests above this value. The "contimeout" parameter is used to set the
+maximum time a connection may take to establish on a server, but here it
+is also used to set the maximum time a connection may stay unserved in the
+queue (1 minute here).
+
+If the servers can each process 4 requests in 10 ms on average, then at 3000
+connections, response times will be delayed by at most :
+
+ 3000 / 3 servers / 4 conns * 10 ms = 2.5 seconds
+
+Which is not that dramatic considering the huge number of users for such a low
+number of servers.
+
+When connection queues fill up and application servers are starving, response
+times will grow and users might abort by clicking on the "Stop" button. It is
+very undesirable to send aborted requests to servers, because they will eat
+CPU cycles for nothing.
+
+An option has been added to handle this specific case : "option abortonclose".
+By specifying it, you tell haproxy that if an input channel is closed on the
+client side AND the request is still waiting in the queue, then it is highly
+likely that the user has stopped, so we remove the request from the queue
+before it gets served.
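+
+For instance, reusing the appfarm example above (trimmed to the relevant
+lines), enabling it is a single extra line in the proxy section :
+
+ listen appfarm 192.168.1.1:80
+ mode http
+ maxconn 10000
+ option abortonclose
+ balance roundrobin
+ server railsA 192.168.1.11:80 cookie A maxconn 4 check
+ contimeout 60000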
+
+
+Managing unfair response times
+------------------------------
+
+Sometimes, the application server will be very slow for some requests (eg:
+login page) and faster for other requests. This may cause excessive queueing
+of requests expected to be fast when all threads on the server are blocked on
+a request to the database. The only solution then is to increase the number
+of concurrent connections, so that the server can handle a large average
+number of slow connections with threads left to handle faster connections.
+
+But as we have seen, increasing the number of connections on the servers can
+be detrimental to performance (eg: Apache processes fighting for the accept()
+lock). To improve this situation, the "minconn" parameter has been introduced.
+When it is set, the maximum connection concurrency on the server will be bound
+by this value, and the limit will increase with the number of clients waiting
+in queue, till the clients connected to haproxy reach the proxy's maxconn, in
+which case the connections per server will reach the server's maxconn. It means
+that during low-to-medium loads, the minconn will be applied, and during surges
+the maxconn will be applied. It ensures both optimal response times under
+normal loads, and availability under very high loads.
+
+Example :
+---------
+
+ listen appfarm 192.168.1.1:80
+ mode http
+ maxconn 10000
+ option httpclose
+ option abortonclose
+ option forwardfor
+ balance roundrobin
+ # The servers will get 4 concurrent connections under low
+ # loads, and 12 when there will be 10000 clients.
+ server railsA 192.168.1.11:80 minconn 4 maxconn 12 check
+ server railsB 192.168.1.12:80 minconn 4 maxconn 12 check
+ server railsC 192.168.1.13:80 minconn 4 maxconn 12 check
+ contimeout 60000
+
+
--- /dev/null
+2011/04/20 - List of keep-alive / close options with associated behaviours.
+
+PK="http-pretend-keepalive", HC="httpclose", SC="http-server-close",
+FC="forceclose".
+
+0 = option not set
+1 = option is set
+* = option doesn't matter
+
+Options can be split between frontend and backend, so some of them might have
+a meaning only when a frontend is combined with a backend. Some forms
+are not the normal ones and provide a behaviour compatible with another normal
+form. Those are considered alternate forms and are marked "(alt)".
+
+FC SC HC PK Behaviour
+ 0 0 0 * tunnel mode
+ 0 0 1 0 passive close, only set headers then tunnel
+ 0 0 1 1 forced close with keep-alive announce (alt)
+ 0 1 0 0 server close
+ 0 1 0 1 server close with keep-alive announce
+ 0 1 1 0 forced close (alt)
+ 0 1 1 1 forced close with keep-alive announce (alt)
+ 1 * * 0 forced close
+ 1 * * 1 forced close with keep-alive announce
+
+At this point this results in 4 distinct effective modes for a request being
+processed :
+ - tunnel mode : Connection header is left untouched and body is ignored
+ - passive close : Connection header is changed and body is ignored
+ - server close : Connection header set, body scanned, client-side keep-alive
+ is made possible regardless of server-side capabilities
+ - forced close : Connection header set, body scanned, connection closed.
+
+The "close" modes may be combined with a fake keep-alive announce to the server
+in order to work around buggy servers that disable chunked encoding and
+content-length announces when the client does not ask for keep-alive.
+
+Note: "http-pretend-keepalive" alone has no effect. However, if it is set in a
+ backend while a frontend is in "httpclose" mode, then the combination of
+ both will result in a forced close with keep-alive announces for requests
+ passing through both.
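+
+As an illustrative sketch (the section and server names here are made up),
+this combination maps to a split frontend/backend configuration :
+
+ frontend fe_web
+ bind :80
+ mode http
+ option httpclose
+ default_backend be_app
+
+ backend be_app
+ mode http
+ option http-pretend-keepalive
+ server app1 192.168.1.11:80 check
+
+Requests passing through this pair are closed on the client side while the
+server still sees a keep-alive announce.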
+
+It is also worth noting that "option httpclose" alone has become useless since
+1.4, because "option forceclose" does the right thing, while the former only
+pretends to do the right thing. Both options might get merged in the future.
+
--- /dev/null
+2015/09/21 - HAProxy coding style - Willy Tarreau <w@1wt.eu>
+------------------------------------------------------------
+
+A number of contributors are often troubled by coding style issues; they
+don't always know if they're doing it right, especially since the coding style
+has evolved over the years. What is explained here is not necessarily what is
+applied in the code, but new code should conform to this style as much as
+possible. Coding style fixes happen when code is replaced. It is useless to
+send patches that only fix coding style: they will be rejected, unless they
+belong to a patch series which needs these fixes prior to the code changes.
+Also, please avoid fixing coding style in the same patches as functional
+changes, as that makes code review harder.
+
+A good way to quickly validate your patch before submitting it is to pass it
+through the Linux kernel's checkpatch.pl utility which can be downloaded here :
+
+ http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/plain/scripts/checkpatch.pl
+
+Running it with the following options relaxes its checks to accommodate the
+extra degree of freedom that is tolerated in HAProxy's coding style compared to
+the stricter style used in the kernel :
+
+ checkpatch.pl -q --max-line-length=160 --no-tree --no-signoff \
+ --ignore=LEADING_SPACE,CODE_INDENT,DEEP_INDENTATION \
+ --ignore=ELSE_AFTER_BRACE < patch
+
+You can take its output as hints instead of strict rules, but in general its
+output will be accurate and it may even spot some real bugs.
+
+When modifying a file, you must accept the terms of the license of this file
+which is recalled at the top of the file, or is explained in the LICENSE file,
+or if not stated, defaults to LGPL version 2.1 or later for files in the
+'include' directory, and GPL version 2 or later for all other files.
+
+When adding a new file, you must add a copyright banner at the top of the
+file with your real name, e-mail address and a reminder of the license.
+Contributions under incompatible licenses or too restrictive licenses might
+get rejected. If in doubt, please apply the principle above for existing files.
+
+All code examples below will intentionally be prefixed with " | " to mark
+where the code aligns with the first column, and tabs in this document will be
+represented as a series of 8 spaces so that it displays the same everywhere.
+
+
+1) Indentation and alignment
+----------------------------
+
+1.1) Indentation
+----------------
+
+Indentation and alignment are two completely different things that people often
+get wrong. Indentation is used to mark a sub-level in the code. A sub-level
+means that a block is executed in the context of another block (eg: a function
+or a condition) :
+
+ | main(int argc, char **argv)
+ | {
+ | int i;
+ |
+ | if (argc < 2)
+ | exit(1);
+ | }
+
+In the example above, the code belongs to the main() function and the exit()
+call belongs to the if statement. Indentation is made with tabs (\t, ASCII 9),
+which allows any developer to configure their preferred editor to use their
+own tab size and to still get the text properly indented. Exactly one tab is
+used per sub-level. Tabs may only appear at the beginning of a line or after
+another tab. It is illegal to put a tab after some text, as it mangles displays
+in a different manner for different users (particularly when used to align
+comments or values after a #define). If you're tempted to put a tab after some
+text, then you're doing it wrong and you need alignment instead (see below).
+
+Note that there are places where the code was not properly indented in the
+past. In order to view it correctly, you may have to set your tab size to 8
+characters.
+
+
+1.2) Alignment
+--------------
+
+Alignment is used to continue a line in a way that makes related things easier
+to group together. By definition, alignment is character-based, so it uses
+spaces. Tabs would not work because a tab does not represent the same number
+of characters on all displays. For instance, the arguments in a function
+declaration may be broken into multiple lines using alignment spaces :
+
+ | int http_header_match2(const char *hdr, const char *end,
+ | const char *name, int len)
+ | {
+ | ...
+ | }
+
+In this example, the "const char *name" part is aligned with the first
+character of the group it belongs to (list of function arguments). Placing it
+here makes it obvious that it's one of the function's arguments. Multiple lines
+are easy to handle this way. This is very common with long conditions too :
+
+ | if ((len < eol - sol) &&
+ | (sol[len] == ':') &&
+ | (strncasecmp(sol, name, len) == 0)) {
+ | ctx->del = len;
+ | }
+
+If we take again the example above marking tabs with "[-Tabs-]" and spaces
+with "#", we get this :
+
+ | [-Tabs-]if ((len < eol - sol) &&
+ | [-Tabs-]####(sol[len] == ':') &&
+ | [-Tabs-]####(strncasecmp(sol, name, len) == 0)) {
+ | [-Tabs-][-Tabs-]ctx->del = len;
+ | [-Tabs-]}
+
+It is worth noting that some editors tend to confuse indentation and
+alignment. Emacs is notorious for this brokenness, and is responsible for
+almost all of the alignment mess. The reason is that Emacs only counts spaces,
+tries to fill as many as possible with tabs and completes with spaces. Once
+you know it, you just have to be careful, as alignment is not used much, so
+generally it is just a matter of replacing the last tab with 8 spaces when
+this happens.
+
+Indentation should be used everywhere there is a block or an opening brace.
+If two consecutive closing braces appear in the same column, the innermost
+block was not properly indented.
+
+Right :
+
+ | main(int argc, char **argv)
+ | {
+ | if (argc > 1) {
+ | printf("Hello\n");
+ | }
+ | exit(0);
+ | }
+
+Wrong :
+
+ | main(int argc, char **argv)
+ | {
+ | if (argc > 1) {
+ | printf("Hello\n");
+ | }
+ | exit(0);
+ | }
+
+A special case applies to switch/case statements. Due to my editor's settings,
+I've become used to aligning "case" with "switch", and find it somewhat logical
+since each of the "case" statements opens a sublevel belonging to the "switch"
+statement. But indenting "case" after "switch" is accepted too. However in any
+case, whatever follows the "case" statement must be indented, whether or not it
+contains braces :
+
+ | switch (*arg) {
+ | case 'A': {
+ | int i;
+ | for (i = 0; i < 10; i++)
+ | printf("Please stop pressing 'A'!\n");
+ | break;
+ | }
+ | case 'B':
+ | printf("You pressed 'B'\n");
+ | break;
+ | case 'C':
+ | case 'D':
+ | printf("You pressed 'C' or 'D'\n");
+ | break;
+ | default:
+ | printf("I don't know what you pressed\n");
+ | }
+
+
+2) Braces
+---------
+
+Braces are used to delimit multiple-instruction blocks. In general it is
+preferred to avoid braces around single-instruction blocks as it reduces the
+number of lines :
+
+Right :
+
+ | if (argc >= 2)
+ | exit(0);
+
+Wrong :
+
+ | if (argc >= 2) {
+ | exit(0);
+ | }
+
+But this is not that strict; it really depends on the context. It happens from
+time to time that single-instruction blocks are enclosed within braces because
+it makes the code more symmetrical, or more readable. Example :
+
+ | if (argc < 2) {
+ | printf("Missing argument\n");
+ | exit(1);
+ | } else {
+ | exit(0);
+ | }
+
+Braces are always needed to declare a function. A function's opening brace must
+be placed at the beginning of the next line :
+
+Right :
+
+ | int main(int argc, char **argv)
+ | {
+ | exit(0);
+ | }
+
+Wrong :
+
+ | int main(int argc, char **argv) {
+ | exit(0);
+ | }
+
+Note that a large portion of the code still does not conform to this rule, as
+it took years to get all authors to adopt this more common standard, which is
+now preferred since it avoids visual confusion when function declarations are
+broken on multiple lines :
+
+Right :
+
+ | int foo(const char *hdr, const char *end,
+ | const char *name, const char *err,
+ | int len)
+ | {
+ | int i;
+
+Wrong :
+
+ | int foo(const char *hdr, const char *end,
+ | const char *name, const char *err,
+ | int len) {
+ | int i;
+
+Braces should always be used where there might be an ambiguity with the code
+later. The most common example is the stacked "if" statement where an "else"
+may be added later at the wrong place breaking the code, but it also happens
+with comments or long arguments in function calls. In general, if a block is
+more than one line long, it should use braces.
+
+Dangerous code waiting for a victim :
+
+ | if (argc < 2)
+ | /* ret must not be negative here */
+ | if (ret < 0)
+ | return -1;
+
+Wrong change :
+
+ | if (argc < 2)
+ | /* ret must not be negative here */
+ | if (ret < 0)
+ | return -1;
+ | else
+ | return 0;
+
+It will do this instead of what your eye seems to tell you :
+
+ | if (argc < 2)
+ | /* ret must not be negative here */
+ | if (ret < 0)
+ | return -1;
+ | else
+ | return 0;
+
+Right :
+
+ | if (argc < 2) {
+ | /* ret must not be negative here */
+ | if (ret < 0)
+ | return -1;
+ | }
+ | else
+ | return 0;
+
+Similarly dangerous example :
+
+ | if (ret < 0)
+ | /* ret must not be negative here */
+ | complain();
+ | init();
+
+Wrong change to silence the annoying message :
+
+ | if (ret < 0)
+ | /* ret must not be negative here */
+ | //complain();
+ | init();
+
+... which in fact means :
+
+ | if (ret < 0)
+ | init();
+
+
+3) Breaking lines
+-----------------
+
+There is no strict rule for line breaking. Some files try to stick to the 80
+column limit, but given that various people use various tab sizes, it does not
+make much sense. Also, code is sometimes easier to read with less lines, as it
+represents less surface on the screen (since each new line adds its tabs and
+spaces). The rule is to stick to the average line length of other lines. If you
+are working in a file which fits in 80 columns, try to keep this goal in mind.
+If you're in a function with 120-chars lines, there is no reason to add many
+short lines, so you can make longer lines.
+
+In general, opening a new block should lead to a new line. Similarly, multiple
+instructions should be avoided on the same line. But some constructs make it
+more readable when those are perfectly aligned :
+
+A copy-paste bug in the following construct will be easier to spot :
+
+ | if (omult % idiv == 0) { omult /= idiv; idiv = 1; }
+ | if (idiv % omult == 0) { idiv /= omult; omult = 1; }
+ | if (imult % odiv == 0) { imult /= odiv; odiv = 1; }
+ | if (odiv % imult == 0) { odiv /= imult; imult = 1; }
+
+than in this one :
+
+ | if (omult % idiv == 0) {
+ | omult /= idiv;
+ | idiv = 1;
+ | }
+ | if (idiv % omult == 0) {
+ | idiv /= omult;
+ | omult = 1;
+ | }
+ | if (imult % odiv == 0) {
+ | imult /= odiv;
+ | odiv = 1;
+ | }
+ | if (odiv % imult == 0) {
+ | odiv /= imult;
+ | imult = 1;
+ | }
+
+What is important is not to mix styles. For instance there is nothing wrong
+with having many one-line "case" statements as long as most of them are as
+short as the ones below :
+
+ | switch (*arg) {
+ | case 'A': ret = 1; break;
+ | case 'B': ret = 2; break;
+ | case 'C': ret = 4; break;
+ | case 'D': ret = 8; break;
+ | default : ret = 0; break;
+ | }
+
+Otherwise, prefer to have the "case" statement on its own line as in the
+example in section 1.2 about alignment. In any case, avoid stacking multiple
+control statements on the same line, so that it is never necessary to add two
+tab levels at once :
+
+Right :
+
+ | switch (*arg) {
+ | case 'A':
+ | if (ret < 0)
+ | ret = 1;
+ | break;
+ | default : ret = 0; break;
+ | }
+
+Wrong :
+
+ | switch (*arg) {
+ | case 'A': if (ret < 0)
+ | ret = 1;
+ | break;
+ | default : ret = 0; break;
+ | }
+
+Right :
+
+ | if (argc < 2)
+ | if (ret < 0)
+ | return -1;
+
+or Right :
+
+ | if (argc < 2)
+ | if (ret < 0) return -1;
+
+but Wrong :
+
+ | if (argc < 2) if (ret < 0) return -1;
+
+
+When complex conditions or expressions are broken into multiple lines, please
+do ensure that alignment is perfectly appropriate, and group all main operators
+on the same side (which you're free to choose as long as it does not change for
+every block). Putting binary operators on the right side is preferred as it
+does not interfere with alignment, but various people have their preferences.
+
+Right :
+
+ | if ((txn->flags & TX_NOT_FIRST) &&
+ | ((req->flags & BF_FULL) ||
+ | req->r < req->lr ||
+ | req->r > req->data + req->size - global.tune.maxrewrite)) {
+ | return 0;
+ | }
+
+Right :
+
+ | if ((txn->flags & TX_NOT_FIRST)
+ | && ((req->flags & BF_FULL)
+ | || req->r < req->lr
+ | || req->r > req->data + req->size - global.tune.maxrewrite)) {
+ | return 0;
+ | }
+
+Wrong :
+
+ | if ((txn->flags & TX_NOT_FIRST) &&
+ | ((req->flags & BF_FULL) ||
+ | req->r < req->lr
+ | || req->r > req->data + req->size - global.tune.maxrewrite)) {
+ | return 0;
+ | }
+
+If it makes the result more readable, parentheses may even be closed on their
+own line in order to align with the opening one. Note that this should
+normally not be needed because such code would be too complex to be dug into.
+
+The "else" statement may either be merged with the closing "if" brace or lie
+on its own line. The latter is preferred but it adds one extra line to each
+control block, which is annoying in short ones. However, if the "else" is
+followed by an "if", then it should really be on its own line and the rest of
+the if/else blocks must follow the same style.
+
+Right :
+
+ | if (a < b) {
+ | return a;
+ | }
+ | else {
+ | return b;
+ | }
+
+Right :
+
+ | if (a < b) {
+ | return a;
+ | } else {
+ | return b;
+ | }
+
+Right :
+
+ | if (a < b) {
+ | return a;
+ | }
+ | else if (a != b) {
+ | return b;
+ | }
+ | else {
+ | return 0;
+ | }
+
+Wrong :
+
+ | if (a < b) {
+ | return a;
+ | } else if (a != b) {
+ | return b;
+ | } else {
+ | return 0;
+ | }
+
+Wrong :
+
+ | if (a < b) {
+ | return a;
+ | }
+ | else if (a != b) {
+ | return b;
+ | } else {
+ | return 0;
+ | }
+
+
+4) Spacing
+----------
+
+Correctly spacing code is very important. When you have to spot a bug at 3am,
+you need it to be clear. When you expect other people to review your code, you
+want it to be clear and don't want them to get nervous when trying to find what
+you did.
+
+Always place spaces around all binary or ternary operators, commas, as well as
+after semi-colons and opening braces if the line continues :
+
+Right :
+
+ | int ret = 0;
+ | /* if (x >> 4) { x >>= 4; ret += 4; } */
+ | ret += (x >> 4) ? (x >>= 4, 4) : 0;
+ | val = ret + ((0xFFFFAA50U >> (x << 1)) & 3) + 1;
+
+Wrong :
+
+ | int ret=0;
+ | /* if (x>>4) {x>>=4;ret+=4;} */
+ | ret+=(x>>4)?(x>>=4,4):0;
+ | val=ret+((0xFFFFAA50U>>(x<<1))&3)+1;
+
+Never place spaces after unary operators (&, *, -, !, ~, ++, --) nor after
+casts, as they might be confused with their binary counterparts, nor before
+commas or semicolons :
+
+Right :
+
+ | bit = !!(~len++ ^ -(unsigned char)*x);
+
+Wrong :
+
+ | bit = ! ! (~len++ ^ - (unsigned char) * x) ;
+
+Note that "sizeof" is a unary operator which is sometimes considered a
+language keyword, but it is in no case a function. It does not require
+parentheses, so it is sometimes followed by spaces and sometimes not when
+there are no parentheses. Most people do not really care as long as what
+is written is unambiguous.
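+
+For instance, both of the following forms are commonly found and equally
+unambiguous :
+
+ | len = sizeof(struct sockaddr_in);
+ | len = sizeof addr;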
+
+Braces opening a block must be preceded by one space unless the brace is
+placed on the first column :
+
+Right :
+
+ | if (argc < 2) {
+ | }
+
+Wrong :
+
+ | if (argc < 2){
+ | }
+
+Do not add unneeded spaces inside parentheses, they just make the code less
+readable.
+
+Right :
+
+ | if (x < 4 && (!y || !z))
+ | break;
+
+Wrong :
+
+ | if ( x < 4 && ( !y || !z ) )
+ | break;
+
+Language keywords must all be followed by a space. This is true for control
+statements (do, for, while, if, else, return, switch, case), and for types
+(int, char, unsigned). As an exception, the last type in a cast does not take
+a space before the closing parenthesis. The "default" statement in a "switch"
+construct is generally just followed by the colon. However the colon after a
+"case" or "default" statement must be followed by a space.
+
+Right :
+
+ | if (nbargs < 2) {
+ | printf("Missing arg at %c\n", *(char *)ptr);
+ | for (i = 0; i < 10; i++) beep();
+ | return 0;
+ | }
+ | switch (*arg) {
+
+Wrong :
+
+ | if(nbargs < 2){
+ | printf("Missing arg at %c\n", *(char*)ptr);
+ | for(i = 0; i < 10; i++)beep();
+ | return 0;
+ | }
+ | switch(*arg) {
+
+Function calls are different: the opening parenthesis is always coupled to the
+function name without any space, but spaces are still needed after commas :
+
+Right :
+
+ | if (!init(argc, argv))
+ | exit(1);
+
+Wrong :
+
+ | if (!init (argc,argv))
+ | exit(1);
+
+
+5) Excess or lack of parentheses
+--------------------------------
+
+Sometimes there are too many parentheses in some formulas, sometimes there
+are too few. There are a few rules of thumb for this. The first one is to
+respect the compiler's advice. If it emits a warning and asks for more
+parentheses to avoid confusion, follow the advice at least to silence the
+warning. For instance, the code below is quite ambiguous due to its
+alignment :
+
+ | if (var1 < 2 || var2 < 2 &&
+ | var3 != var4) {
+ | /* fail */
+ | return -3;
+ | }
+
+Note that this code does :
+
+ | if (var1 < 2 || (var2 < 2 && var3 != var4)) {
+ | /* fail */
+ | return -3;
+ | }
+
+But maybe the author meant :
+
+ | if ((var1 < 2 || var2 < 2) && var3 != var4) {
+ | /* fail */
+ | return -3;
+ | }
+
+A second reason to add parentheses is that people don't always know operator
+precedence well. Most often they have no issue with operators of the same
+category (eg: booleans, integers, bit manipulation, assignment) but once these
+operators are mixed, they cause all sorts of issues. In this case, it is
+wise to use parentheses to avoid errors. One common error concerns the bit
+shift operators, because they're used to replace multiplications and divisions
+but don't have the same precedence :
+
+The expression :
+
+ | x = y * 16 + 5;
+
+becomes :
+
+ | x = y << 4 + 5;
+
+which is wrong because it is equivalent to :
+
+ | x = y << (4 + 5);
+
+while the following was desired instead :
+
+ | x = (y << 4) + 5;
+
+It is generally fine to write boolean expressions based on comparisons without
+any parentheses. But on top of that, integer expressions and assignments
+should then be protected. For instance, there is an error in the expression
+below which should be safely rewritten :
+
+Wrong :
+
+ | if (var1 > 2 && var1 < 10 ||
+ | var1 > 2 + 256 && var2 < 10 + 256 ||
+ | var1 > 2 + 1 << 16 && var2 < 10 + 2 << 16)
+ | return 1;
+
+Right (may remove a few parentheses depending on taste) :
+
+ | if ((var1 > 2 && var1 < 10) ||
+ | (var1 > (2 + 256) && var2 < (10 + 256)) ||
+ | (var1 > (2 + (1 << 16)) && var2 < (10 + (1 << 16))))
+ | return 1;
+
+The "return" statement is not a function, so it takes no argument. It is a
+control statement which is followed by the expression to be returned. It does
+not need to be followed by parentheses :
+
+Wrong :
+
+ | int ret0()
+ | {
+ | return(0);
+ | }
+
+Right :
+
+ | int ret0()
+ | {
+ | return 0;
+ | }
+
+Parentheses are also found in type casts. Type casting should be avoided as
+much as possible, especially when it concerns pointer types. Casting a pointer
+disables the compiler's type checking and is the best way to get caught doing
+wrong things with data that is not the size you expect. If you need to
+manipulate multiple data types, you can use a union instead. If the union is
+really not convenient and casts are easier, then try to isolate them as much
+as possible, for instance when initializing function arguments or in another
+function. Not proceeding this way causes huge risks of using the wrong pointer
+without any notification, which is especially true during copy-pastes.
+
+Wrong :
+
+ | void *check_private_data(void *arg1, void *arg2)
+ | {
+ | char *area;
+ |
+ | if (*(int *)arg1 > 1000)
+ | return NULL;
+ | if (memcmp((const char *)arg2, "send(", 5) != 0)
+ | return NULL;
+ | area = malloc(*(int *)arg1);
+ | if (!area)
+ | return NULL;
+ | memcpy(area, (const char *)arg2 + 5, *(int *)arg1);
+ | return area;
+ | }
+
+Right :
+
+ | void *check_private_data(void *arg1, void *arg2)
+ | {
+ | char *area;
+ | int len = *(int *)arg1;
+ | const char *msg = arg2;
+ |
+ | if (len > 1000)
+ | return NULL;
+ | if (memcmp(msg, "send(", 5) != 0)
+ | return NULL;
+ | area = malloc(len);
+ | if (!area)
+ | return NULL;
+ | memcpy(area, msg + 5, len);
+ | return area;
+ | }
+
+
+6) Ambiguous comparisons with zero or NULL
+------------------------------------------
+
+In C, '0' has no type, or it has the type of the variable it is assigned to.
+Comparing a variable or a return value with zero means comparing with the
+representation of zero for this variable's type. For a boolean, zero is false.
+For a pointer, zero is NULL. Very often, to make things shorter, it is fine to
+use the '!' unary operator to compare with zero, as it is shorter and easier
+to remember and understand than a plain '0'. Since the '!' operator is read
+"not", it helps read code faster when what follows it makes sense as a
+boolean, and it is often much more appropriate than a comparison with zero
+which makes an equal sign appear at an undesirable place. For instance :
+
+ | if (!isdigit(*c) && !isspace(*c))
+ | break;
+
+is easier to understand than :
+
+ | if (isdigit(*c) == 0 && isspace(*c) == 0)
+ | break;
+
+For a char this "not" operator can be read as "no remaining char", and the
+absence of comparison to zero implies existence of the tested entity, hence the
+simple strcpy() implementation below which automatically stops once the last
+zero is copied :
+
+ | void my_strcpy(char *d, const char *s)
+ | {
+ | while ((*d++ = *s++));
+ | }
+
+Note the double parenthesis in order to avoid the compiler telling us it looks
+like an equality test.
+
+For a string or more generally any pointer, this test may be understood as an
+existence test or a validity test, as the only pointer which will fail to
+validate equality is the NULL pointer :
+
+ | area = malloc(1000);
+ | if (!area)
+ | return -1;
+
+However sometimes it can fool the reader. For instance, strcmp() precisely is
+one of such functions whose return value can make one think the opposite due to
+its name which may be understood as "if strings compare...". Thus it is strongly
+recommended to perform an explicit comparison with zero in such a case, and it
+makes sense considering that the comparison operator is the same one that
+would be used to compare the strings directly (note that the current config
+parser lacks a lot in this regard) :
+
+ strcmp(a, b) == 0 <=> a == b
+ strcmp(a, b) != 0 <=> a != b
+ strcmp(a, b) < 0 <=> a < b
+ strcmp(a, b) > 0 <=> a > b
+
+Avoid this :
+
+ | if (strcmp(arg, "test"))
+ | printf("this is not a test\n");
+ |
+ | if (!strcmp(arg, "test"))
+ | printf("this is a test\n");
+
+Prefer this :
+
+ | if (strcmp(arg, "test") != 0)
+ | printf("this is not a test\n");
+ |
+ | if (strcmp(arg, "test") == 0)
+ | printf("this is a test\n");
+
+
+7) System call returns
+----------------------
+
+This is not directly a matter of coding style but rather one of bad habits. It is
+important to check for the correct value upon return of syscalls. The proper
+return code indicating an error is described in its man page. There is no
+reason to consider wider ranges than what is indicated. For instance, it is
+common to see such a thing :
+
+ | if ((fd = open(file, O_RDONLY)) < 0)
+ | return -1;
+
+This is wrong. The man page says that -1 is returned if an error occurred. It
+does not suggest that any other negative value will be an error. It is possible
+that a few such issues have been left in existing code. They are bugs for which
+fixes are accepted, even though they're currently harmless since open() is not
+known for returning negative values at the moment.
+
+
+8) Declaring new types, names and values
+----------------------------------------
+
+Please refrain from using "typedef" to declare new types, they only obfuscate
+the code. The reader never knows whether he's manipulating a scalar type or a
+struct. For instance it is not obvious why the following code fails to build :
+
+ | int delay_expired(timer_t exp, timer_us_t now)
+ | {
+ | return now >= exp;
+ | }
+
+With the types declared in another file this way :
+
+ | typedef unsigned int timer_t;
+ | typedef struct timeval timer_us_t;
+
+This cannot work because we're comparing a scalar with a struct, which does
+not make sense. Without a typedef, the function would have been written this
+way without any ambiguity and would not have failed :
+
+ | int delay_expired(unsigned int exp, struct timeval *now)
+ | {
+ | return now->tv_sec >= exp;
+ | }
+
+Declaring special values may be done using enums. Enums are a way to define
+structured integer values which are related to each other. They are perfectly
+suited for state machines. While the first element is always assigned the zero
+value, not everybody knows that, especially people working with multiple
+languages all the day. For this reason it is recommended to explicitly force
+the first value even if it's zero. The last element should be followed by a
+comma if it is planned that new elements might later be added, this will make
+later patches shorter. Conversely, if the last element is placed in order to
+get the number of possible values, it must not be followed by a comma and must
+be preceded by a comment :
+
+ | enum {
+ | first = 0,
+ | second,
+ | third,
+ | fourth,
+ | };
+
+
+ | enum {
+ | first = 0,
+ | second,
+ | third,
+ | fourth,
+ | /* nbvalues must always be placed last */
+ | nbvalues
+ | };
+
+Structure names should be short enough not to mangle function declarations,
+and explicit enough to avoid confusion (which is the most important thing).
+
+Wrong :
+
+ | struct request_args { /* arguments on the query string */
+ | char *name;
+ | char *value;
+ | struct misc_args *next;
+ | };
+
+Right :
+
+ | struct qs_args { /* arguments on the query string */
+ | char *name;
+ | char *value;
+ | struct qs_args *next;
+ | };
+
+
+When declaring new functions or structures, please do not use CamelCase, which
+is a style where upper and lower case are mixed in a single word. It causes a
+lot of confusion when words are composed from acronyms, because it's hard to
+stick to a rule. For instance, a function designed to generate an ISN (initial
+sequence number) for a TCP/IP connection could be called :
+
+ - generateTcpipIsn()
+ - generateTcpIpIsn()
+ - generateTcpIpISN()
+ - generateTCPIPISN()
+ etc...
+
+None is right, none is wrong, these are just preferences which might change
+along the code. Instead, please use an underscore to separate words. Lowercase
+is preferred for the words, but if acronyms are upcased it's not dramatic. The
+real advantage of this method is that it creates unambiguous levels even for
+short names.
+
+Valid examples :
+
+ - generate_tcpip_isn()
+ - generate_tcp_ip_isn()
+ - generate_TCPIP_ISN()
+ - generate_TCP_IP_ISN()
+
+Another example is easy to understand when 3 arguments are involved in naming
+the function :
+
+Wrong (naming conflict) :
+
+ | /* returns A + B * C */
+ | int mulABC(int a, int b, int c)
+ | {
+ | return a + b * c;
+ | }
+ |
+ | /* returns (A + B) * C */
+ | int mulABC(int a, int b, int c)
+ | {
+ | return (a + b) * c;
+ | }
+
+Right (unambiguous naming) :
+
+ | /* returns A + B * C */
+ | int mul_a_bc(int a, int b, int c)
+ | {
+ | return a + b * c;
+ | }
+ |
+ | /* returns (A + B) * C */
+ | int mul_ab_c(int a, int b, int c)
+ | {
+ | return (a + b) * c;
+ | }
+
+Whenever you manipulate pointers, try to declare them as "const", as it will
+save you from many accidental misuses and will only cause warnings to be
+emitted when there is a real risk. In the examples below, it is possible to
+call my_strcpy() with a const string only in the first declaration. Note that
+people who ignore "const" are often the ones who cast a lot and who complain
+about segfaults when using strtok()!
+
+Right :
+
+ | void my_strcpy(char *d, const char *s)
+ | {
+ | while ((*d++ = *s++));
+ | }
+ |
+ | void say_hello(char *dest)
+ | {
+ | my_strcpy(dest, "hello\n");
+ | }
+
+Wrong :
+
+ | void my_strcpy(char *d, char *s)
+ | {
+ | while ((*d++ = *s++));
+ | }
+ |
+ | void say_hello(char *dest)
+ | {
+ | my_strcpy(dest, "hello\n");
+ | }
+
+
+9) Getting macros right
+-----------------------
+
+It is very common for macros to do the wrong thing when used in a way their
+author did not have in mind. For this reason, macros must always be named with
+uppercase letters only. This is the only way to catch the developer's eye when
+using them, so that he double-checks whether he's taking risks or not. First,
+macros must never ever be terminated by a semi-colon, or they will close the
+wrong block once in a while. For instance, the following will cause a build
+error before the "else" due to the double semi-colon :
+
+Wrong :
+
+ | #define WARN printf("warning\n");
+ | ...
+ | if (a < 0)
+ | WARN;
+ | else
+ | a--;
+
+Right :
+
+ | #define WARN printf("warning\n")
+
+If multiple instructions are needed, then use a do { } while (0) block, which
+is the only construct which respects *exactly* the semantics of a single
+instruction :
+
+ | #define WARN do { printf("warning\n"); log("warning\n"); } while (0)
+ | ...
+ |
+ | if (a < 0)
+ | WARN;
+ | else
+ | a--;
+
+Second, do not put unprotected control statements in macros, they will
+definitely cause bugs :
+
+Wrong :
+
+ | #define WARN if (verbose) printf("warning\n")
+ | ...
+ | if (a < 0)
+ | WARN;
+ | else
+ | a--;
+
+Which is equivalent to the undesired form below :
+
+ | if (a < 0)
+ | if (verbose)
+ | printf("warning\n");
+ | else
+ | a--;
+
+Right way to do it :
+
+ | #define WARN do { if (verbose) printf("warning\n"); } while (0)
+ | ...
+ | if (a < 0)
+ | WARN;
+ | else
+ | a--;
+
+Which is equivalent to :
+
+ | if (a < 0)
+ | do { if (verbose) printf("warning\n"); } while (0);
+ | else
+ | a--;
+
+Macro parameters must always be surrounded by parenthesis, and must never be
+duplicated in the same macro unless explicitly stated. Also, macros must not be
+defined with operators without surrounding parenthesis. The MIN/MAX macros are
+a pretty common example of multiple misuses, but this happens as early as when
+using bit masks. Most often, in case of any doubt, try to use inline functions
+instead.
+
+Wrong :
+
+ | #define MIN(a, b) a < b ? a : b
+ |
+ | /* returns 2 * min(a,b) + 1 */
+ | int double_min_p1(int a, int b)
+ | {
+ | return 2 * MIN(a, b) + 1;
+ | }
+
+What this will do :
+
+ | int double_min_p1(int a, int b)
+ | {
+ | return 2 * a < b ? a : b + 1;
+ | }
+
+Which is equivalent to :
+
+ | int double_min_p1(int a, int b)
+ | {
+ | return (2 * a) < b ? a : (b + 1);
+ | }
+
+The first thing to fix is to surround the macro definition with parenthesis to
+avoid this mistake :
+
+ | #define MIN(a, b) (a < b ? a : b)
+
+But this is still not enough, as can be seen in this example :
+
+ | /* compares either a or b with c */
+ | int min_ab_c(int a, int b, int c)
+ | {
+ | return MIN(a ? a : b, c);
+ | }
+
+Which is equivalent to :
+
+ | int min_ab_c(int a, int b, int c)
+ | {
+ | return (a ? a : b < c ? a ? a : b : c);
+ | }
+
+Which in turn means a totally different thing due to precedence :
+
+ | int min_ab_c(int a, int b, int c)
+ | {
+ | return (a ? a : ((b < c) ? (a ? a : b) : c));
+ | }
+
+This can be fixed by surrounding *each* argument in the macro with parenthesis:
+
+ | #define MIN(a, b) ((a) < (b) ? (a) : (b))
+
+But this is still not enough, as can be seen in this example :
+
+ | int min_ap1_b(int a, int b)
+ | {
+ | return MIN(++a, b);
+ | }
+
+Which is equivalent to :
+
+ | int min_ap1_b(int a, int b)
+ | {
+ | return ((++a) < (b) ? (++a) : (b));
+ | }
+
+Again, this is wrong because "a" is incremented twice if below b. The only way
+to fix this is to use a compound statement and to assign each argument exactly
+once to a local variable of the same type :
+
+ | #define MIN(a, b) ({ typeof(a) __a = (a); typeof(b) __b = (b); \
+ | ((__a) < (__b) ? (__a) : (__b)); \
+ | })
+
+At this point, using static inline functions is much cleaner if a single type
+is to be used :
+
+ | static inline int min(int a, int b)
+ | {
+ | return a < b ? a : b;
+ | }
+
+
+10) Includes
+------------
+
+Includes are as much as possible listed in alphabetically ordered groups :
+ - the libc-standard includes (those without any path component)
+ - the includes more or less system-specific (sys/*, netinet/*, ...)
+ - includes from the local "common" subdirectory
+ - includes from the local "types" subdirectory
+ - includes from the local "proto" subdirectory
+
+Each section is just visually delimited from the other ones using an empty
+line. The two first ones above may be merged into a single section depending on
+developer's preference. Please do not copy-paste include statements from other
+files. Having too many includes significantly increases build time and makes it
+hard to find which ones are needed later. Just include what you need and if
+possible in alphabetical order so that when something is missing, it becomes
+obvious where to look for it and where to add it.
+
+All files should include <common/config.h> because this is where build options
+are prepared.
+
+Header files are split in two directories ("types" and "proto") depending on
+what they provide. Types, structures, enums and #defines must go into the
+"types" directory. Function prototypes and inlined functions must go into the
+"proto" directory. This split is because of inlined functions which
+cross-reference types from other files, which cause a chicken-and-egg problem
+if the functions and types are declared at the same place.
+
+All headers which do not depend on anything currently go to the "common"
+subdirectory, but could equally well be placed into the "proto" directory. It
+is possible that one day the "common" directory will disappear.
+
+Include files must be protected against multiple inclusion using the common
+#ifndef/#define/#endif trick with a tag derived from the include file and its
+location.
+
+
+11) Comments
+------------
+
+Comments are preferably of the standard 'C' form using /* */. The C++ form "//"
+is tolerated for very short comments (eg: a word or two) but should be avoided
+as much as possible. Multi-line comments are made with each intermediate line
+starting with a star aligned with the first one, as in this example :
+
+ | /*
+ | * This is a multi-line
+ | * comment.
+ | */
+
+If multiple code lines need a short comment, try to align them so that you can
+have multi-line sentences. This is rarely needed, only for really complex
+constructs.
+
+Do not tell what you're doing in comments, but explain why you're doing it if
+it seems not to be obvious. Also *do* indicate at the top of functions what they
+accept and what they don't accept. For instance, strcpy() only accepts output
+buffers at least as large as the input buffer, and does not support any NULL
+pointer. There is nothing wrong with that if the caller knows it.
+
+Wrong use of comments :
+
+ | int flsnz8(unsigned int x)
+ | {
+ | int ret = 0; /* initialize ret */
+ | if (x >> 4) { x >>= 4; ret += 4; } /* add 4 to ret if needed */
+ | return ret + ((0xFFFFAA50U >> (x << 1)) & 3) + 1; /* add ??? */
+ | }
+ | ...
+ | bit = ~len + (skip << 3) + 9; /* update bit */
+
+Right use of comments :
+
+ | /* This function returns the position of the highest bit set in the lowest
+ | * byte of <x>, between 0 and 7. It only works if <x> is non-null. It uses
+ | * a 32-bit value as a lookup table to return one of 4 values for the
+ | * highest 16 possible 4-bit values.
+ | */
+ | int flsnz8(unsigned int x)
+ | {
+ | int ret = 0;
+ | if (x >> 4) { x >>= 4; ret += 4; }
+ | return ret + ((0xFFFFAA50U >> (x << 1)) & 3) + 1;
+ | }
+ | ...
+ | bit = ~len + (skip << 3) + 9; /* (skip << 3) + (8 - len), saves 1 cycle */
+
+
+12) Use of assembly
+-------------------
+
+There are many projects where use of assembly code is not welcome. There is no
+problem with use of assembly in haproxy, provided that :
+
+ a) an alternate C-form is provided for architectures not covered
+ b) the code is small enough and well commented enough to be maintained
+
+It is important to take care of various incompatibilities between compiler
+versions, for instance regarding output and clobbered registers. There is
+plenty of documentation on the subject online. Anyway if you are
+fiddling with assembly, you probably know that already.
+
+Example :
+ | /* gcc does not know when it can safely divide 64 bits by 32 bits. Use this
+ | * function when you know for sure that the result fits in 32 bits, because
+ | * it is optimal on x86 and on 64bit processors.
+ | */
+ | static inline unsigned int div64_32(unsigned long long o1, unsigned int o2)
+ | {
+ | unsigned int result;
+ | #ifdef __i386__
+ | asm("divl %2"
+ | : "=a" (result)
+ | : "A"(o1), "rm"(o2));
+ | #else
+ | result = o1 / o2;
+ | #endif
+ | return result;
+ | }
+
--- /dev/null
+ ----------------------
+ HAProxy
+ Configuration Manual
+ ----------------------
+ version 1.6
+                             Willy Tarreau
+ 2015/12/21
+
+
+This document covers the configuration language as implemented in the version
+specified above. It does not provide any hint, example or advice. For such
+documentation, please refer to the Reference Manual or the Architecture Manual.
+The summary below is meant to help you search sections by name and navigate
+through the document.
+
+Note to documentation contributors :
+ This document is formatted with 80 columns per line, with even number of
+ spaces for indentation and without tabs. Please follow these rules strictly
+ so that it remains easily printable everywhere. If a line needs to be
+ printed verbatim and does not fit, please end each line with a backslash
+ ('\') and continue on next line, indented by two characters. It is also
+ sometimes useful to prefix all output lines (logs, console outs) with 3
+ closing angle brackets ('>>>') in order to help get the difference between
+ inputs and outputs when it can become ambiguous. If you add sections,
+ please update the summary below for easier searching.
+
+
+Summary
+-------
+
+1. Quick reminder about HTTP
+1.1. The HTTP transaction model
+1.2. HTTP request
+1.2.1. The Request line
+1.2.2. The request headers
+1.3. HTTP response
+1.3.1. The Response line
+1.3.2. The response headers
+
+2. Configuring HAProxy
+2.1. Configuration file format
+2.2. Quoting and escaping
+2.3. Environment variables
+2.4. Time format
+2.5. Examples
+
+3. Global parameters
+3.1. Process management and security
+3.2. Performance tuning
+3.3. Debugging
+3.4. Userlists
+3.5. Peers
+3.6. Mailers
+
+4. Proxies
+4.1. Proxy keywords matrix
+4.2. Alphabetically sorted keywords reference
+
+5. Bind and Server options
+5.1. Bind options
+5.2. Server and default-server options
+5.3. Server DNS resolution
+5.3.1. Global overview
+5.3.2. The resolvers section
+
+6. HTTP header manipulation
+
+7. Using ACLs and fetching samples
+7.1. ACL basics
+7.1.1. Matching booleans
+7.1.2. Matching integers
+7.1.3. Matching strings
+7.1.4. Matching regular expressions (regexes)
+7.1.5. Matching arbitrary data blocks
+7.1.6. Matching IPv4 and IPv6 addresses
+7.2. Using ACLs to form conditions
+7.3. Fetching samples
+7.3.1. Converters
+7.3.2. Fetching samples from internal states
+7.3.3. Fetching samples at Layer 4
+7.3.4. Fetching samples at Layer 5
+7.3.5. Fetching samples from buffer contents (Layer 6)
+7.3.6. Fetching HTTP samples (Layer 7)
+7.4. Pre-defined ACLs
+
+8. Logging
+8.1. Log levels
+8.2. Log formats
+8.2.1. Default log format
+8.2.2. TCP log format
+8.2.3. HTTP log format
+8.2.4. Custom log format
+8.2.5. Error log format
+8.3. Advanced logging options
+8.3.1. Disabling logging of external tests
+8.3.2. Logging before waiting for the session to terminate
+8.3.3. Raising log level upon errors
+8.3.4. Disabling logging of successful connections
+8.4. Timing events
+8.5. Session state at disconnection
+8.6. Non-printable characters
+8.7. Capturing HTTP cookies
+8.8. Capturing HTTP headers
+8.9. Examples of logs
+
+
+1. Quick reminder about HTTP
+----------------------------
+
+When haproxy is running in HTTP mode, both the request and the response are
+fully analyzed and indexed, thus it becomes possible to build matching criteria
+on almost anything found in the contents.
+
+However, it is important to understand how HTTP requests and responses are
+formed, and how HAProxy decomposes them. It will then become easier to write
+correct rules and to debug existing configurations.
+
+
+1.1. The HTTP transaction model
+-------------------------------
+
+The HTTP protocol is transaction-driven. This means that each request will lead
+to one and only one response. Traditionally, a TCP connection is established
+from the client to the server, a request is sent by the client on the
+connection, the server responds and the connection is closed. A new request
+will involve a new connection :
+
+ [CON1] [REQ1] ... [RESP1] [CLO1] [CON2] [REQ2] ... [RESP2] [CLO2] ...
+
+In this mode, called the "HTTP close" mode, there are as many connection
+establishments as there are HTTP transactions. Since the connection is closed
+by the server after the response, the client does not need to know the content
+length.
+
+Due to the transactional nature of the protocol, it was possible to improve it
+to avoid closing a connection between two subsequent transactions. In this mode
+however, it is mandatory that the server indicates the content length for each
+response so that the client does not wait indefinitely. For this, a special
+header is used: "Content-length". This mode is called the "keep-alive" mode :
+
+ [CON] [REQ1] ... [RESP1] [REQ2] ... [RESP2] [CLO] ...
+
+Its advantages are a reduced latency between transactions, and less processing
+power required on the server side. It is generally better than the close mode,
+but not always because the clients often limit their concurrent connections to
+a smaller value.
+
+A last improvement in the communications is the pipelining mode. It still uses
+keep-alive, but the client does not wait for the first response to send the
+second request. This is useful for fetching a large number of images composing
+page :
+
+ [CON] [REQ1] [REQ2] ... [RESP1] [RESP2] [CLO] ...
+
+This can obviously have a tremendous benefit on performance because the network
+latency is eliminated between subsequent requests. Many HTTP agents do not
+correctly support pipelining since there is no way to associate a response with
+the corresponding request in HTTP. For this reason, it is mandatory for the
+server to reply in the exact same order as the requests were received.
+
+By default HAProxy operates in keep-alive mode with regards to persistent
+connections: for each connection it processes each request and response, and
+leaves the connection idle on both sides between the end of a response and the
+start of a new request.
+
+HAProxy supports 5 connection modes :
+ - keep alive : all requests and responses are processed (default)
+ - tunnel : only the first request and response are processed,
+ everything else is forwarded with no analysis.
+ - passive close : tunnel with "Connection: close" added in both directions.
+ - server close : the server-facing connection is closed after the response.
+ - forced close : the connection is actively closed after end of response.
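+As a sketch, these modes are selected with "option" keywords in the proxy
+sections (see section 4.2 for the authoritative keyword reference) :

```
defaults
    mode http
    option http-keep-alive       # keep alive (the default)
    # option http-tunnel         # tunnel
    # option httpclose           # passive close
    # option http-server-close   # server close
    # option forceclose          # forced close
```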
+
+
+1.2. HTTP request
+-----------------
+
+First, let's consider this HTTP request :
+
+ Line Contents
+ number
+ 1 GET /serv/login.php?lang=en&profile=2 HTTP/1.1
+ 2 Host: www.mydomain.com
+ 3 User-agent: my small browser
+ 4 Accept: image/jpeg, image/gif
+ 5 Accept: image/png
+
+
+1.2.1. The Request line
+-----------------------
+
+Line 1 is the "request line". It is always composed of 3 fields :
+
+ - a METHOD : GET
+ - a URI : /serv/login.php?lang=en&profile=2
+ - a version tag : HTTP/1.1
+
+All of them are delimited by what the standard calls LWS (linear white spaces),
+which are commonly spaces, but can also be tabs or line feeds/carriage returns
+followed by spaces/tabs. The method itself cannot contain any colon (':') and
+is limited to alphabetic letters. All those various combinations make it
+desirable that HAProxy performs the splitting itself rather than leaving it to
+the user to write a complex or inaccurate regular expression.
+
+The URI itself can have several forms :
+
+ - A "relative URI" :
+
+ /serv/login.php?lang=en&profile=2
+
+ It is a complete URL without the host part. This is generally what is
+ received by servers, reverse proxies and transparent proxies.
+
+ - An "absolute URI", also called a "URL" :
+
+ http://192.168.0.12:8080/serv/login.php?lang=en&profile=2
+
+ It is composed of a "scheme" (the protocol name followed by '://'), a host
+ name or address, optionally a colon (':') followed by a port number, then
+ a relative URI beginning at the first slash ('/') after the address part.
+ This is generally what proxies receive, but a server supporting HTTP/1.1
+ must accept this form too.
+
+ - a star ('*') : this form is only accepted in association with the OPTIONS
+    method and is not relayable. It is used to inquire about a next hop's
+ capabilities.
+
+ - an address:port combination : 192.168.0.12:80
+ This is used with the CONNECT method, which is used to establish TCP
+ tunnels through HTTP proxies, generally for HTTPS, but sometimes for
+ other protocols too.
+
+In a relative URI, two sub-parts are identified. The part before the question
+mark is called the "path". It is typically the relative path to static objects
+on the server. The part after the question mark is called the "query string".
+It is mostly used with GET requests sent to dynamic scripts and is very
+specific to the language, framework or application in use.
+
+
+1.2.2. The request headers
+--------------------------
+
+The headers start at the second line. They are composed of a name at the
+beginning of the line, immediately followed by a colon (':'). Traditionally,
+an LWS is added after the colon but that's not required. Then come the values.
+Multiple identical headers may be folded into one single line, delimiting the
+values with commas, provided that their order is respected. This is commonly
+encountered in the "Cookie:" field. A header may span over multiple lines if
+the subsequent lines begin with an LWS. In the example in 1.2, lines 4 and 5
+define a total of 3 values for the "Accept:" header.
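+Folded onto a single line, lines 4 and 5 of the example in 1.2 would thus be
+equivalent to :

```
Accept: image/jpeg, image/gif, image/png
```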
+
+Contrary to a common misconception, header names are not case-sensitive, and
+neither are their values when they refer to other header names (such as the
+"Connection:" header).
+
+The end of the headers is indicated by the first empty line. People often say
+that it's a double line feed, which is not exact, even if a double line feed
+is one valid form of empty line.
+
+Fortunately, HAProxy takes care of all these complex combinations when indexing
+headers, checking values and counting them, so there is no reason to worry
+about the way they could be written, but it is important not to accuse an
+application of being buggy if it does unusual, valid things.
+
+Important note:
+ As suggested by RFC2616, HAProxy normalizes headers by replacing line breaks
+ in the middle of headers by LWS in order to join multi-line headers. This
+ is necessary for proper analysis and helps less capable HTTP parsers to work
+ correctly and not to be fooled by such complex constructs.
+
+
+1.3. HTTP response
+------------------
+
+An HTTP response looks very much like an HTTP request. Both are called HTTP
+messages. Let's consider this HTTP response :
+
+ Line Contents
+ number
+ 1 HTTP/1.1 200 OK
+ 2 Content-length: 350
+ 3 Content-Type: text/html
+
+As a special case, HTTP supports so-called "Informational responses" as status
+codes 1xx. These messages are special in that they don't convey any part of the
+response, they're just used as sort of a signaling message to ask a client to
+continue to post its request for instance. In the case of a status 100 response
+the requested information will be carried by the next non-100 response message
+following the informational one. This implies that multiple responses may be
+sent to a single request, and that this only works when keep-alive is enabled
+(1xx messages are HTTP/1.1 only). HAProxy handles these messages and is able to
+correctly forward and skip them, and only process the next non-100 response. As
+such, these messages are neither logged nor transformed, unless explicitly
+stated otherwise. Status 101 messages indicate that the protocol is changing
+over the same connection and that haproxy must switch to tunnel mode, just as
+if a CONNECT had occurred. Then the Upgrade header would contain additional
+information about the type of protocol the connection is switching to.
+
+
+1.3.1. The Response line
+------------------------
+
+Line 1 is the "response line". It is always composed of 3 fields :
+
+ - a version tag : HTTP/1.1
+ - a status code : 200
+ - a reason : OK
+
+The status code is always 3-digit. The first digit indicates a general status :
+ - 1xx = informational message to be skipped (eg: 100, 101)
+ - 2xx = OK, content is following (eg: 200, 206)
+ - 3xx = OK, no content following (eg: 302, 304)
+ - 4xx = error caused by the client (eg: 401, 403, 404)
+ - 5xx = error caused by the server (eg: 500, 502, 503)
+
+Please refer to RFC2616 for the detailed meaning of all such codes. The
+"reason" field is just a hint, but is not parsed by clients. Anything can be
+found there, but it's a common practice to respect the well-established
+messages. It can be composed of one or multiple words, such as "OK", "Found",
+or "Authentication Required".
+
+Haproxy may emit the following status codes by itself :
+
+ Code When / reason
+ 200 access to stats page, and when replying to monitoring requests
+ 301 when performing a redirection, depending on the configured code
+ 302 when performing a redirection, depending on the configured code
+ 303 when performing a redirection, depending on the configured code
+ 307 when performing a redirection, depending on the configured code
+ 308 when performing a redirection, depending on the configured code
+ 400 for an invalid or too large request
+ 401 when an authentication is required to perform the action (when
+ accessing the stats page)
+ 403 when a request is forbidden by a "block" ACL or "reqdeny" filter
+ 408 when the request timeout strikes before the request is complete
+ 500 when haproxy encounters an unrecoverable internal error, such as a
+ memory allocation failure, which should never happen
+ 502 when the server returns an empty, invalid or incomplete response, or
+ when an "rspdeny" filter blocks the response.
+ 503 when no server was available to handle the request, or in response to
+ monitoring requests which match the "monitor fail" condition
+ 504 when the response timeout strikes before the server responds
+
+The error 4xx and 5xx codes above may be customized (see "errorloc" in section
+4.2).
+
+
+1.3.2. The response headers
+---------------------------
+
+Response headers work exactly like request headers, and as such, HAProxy uses
+the same parsing function for both. Please refer to paragraph 1.2.2 for more
+details.
+
+
+2. Configuring HAProxy
+----------------------
+
+2.1. Configuration file format
+------------------------------
+
+HAProxy's configuration process involves 3 major sources of parameters :
+
+ - the arguments from the command-line, which always take precedence
+ - the "global" section, which sets process-wide parameters
+  - the proxies sections which can take the form of "defaults", "listen",
+ "frontend" and "backend".
+
+The configuration file syntax consists of lines beginning with a keyword
+referenced in this manual, optionally followed by one or several parameters
+delimited by spaces.
+
+
+2.2. Quoting and escaping
+-------------------------
+
+HAProxy's configuration introduces a quoting and escaping system similar to
+many programming languages. The configuration file supports 3 types: escaping
+with a backslash, weak quoting with double quotes, and strong quoting with
+single quotes.
+
+If spaces have to be entered in strings, then they must be escaped by preceding
+them with a backslash ('\') or by quoting them. Backslashes themselves must be
+escaped by doubling them or by strong quoting.
+
+Escaping is achieved by preceding a special character by a backslash ('\'):
+
+ \ to mark a space and differentiate it from a delimiter
+ \# to mark a hash and differentiate it from a comment
+ \\ to use a backslash
+ \' to use a single quote and differentiate it from strong quoting
+ \" to use a double quote and differentiate it from weak quoting
+
+Weak quoting is achieved by using double quotes (""). Weak quoting prevents
+the interpretation of:
+
+ space as a parameter separator
+ ' single quote as a strong quoting delimiter
+ # hash as a comment start
+
+Weak quoting permits the interpretation of environment variables. To use a
+literal dollar sign within a double-quoted string, escape it with a backslash
+("\$"); this escape does not work outside weak quoting.
+
+Weak quoting does not prevent the interpretation of escaping and special
+characters.
+
+Strong quoting is achieved by using single quotes (''). Inside single quotes,
+nothing is interpreted; this is the most convenient way to quote regular
+expressions.
+
+Quoted and escaped strings are replaced in memory by their interpreted
+equivalent, which allows concatenation.
+
+ Example:
+ # these are equivalent:
+ log-format %{+Q}o\ %t\ %s\ %{-Q}r
+ log-format "%{+Q}o %t %s %{-Q}r"
+ log-format '%{+Q}o %t %s %{-Q}r'
+ log-format "%{+Q}o %t"' %s %{-Q}r'
+ log-format "%{+Q}o %t"' %s'\ %{-Q}r
+
+ # these are equivalent:
+ reqrep "^([^\ :]*)\ /static/(.*)" \1\ /\2
+ reqrep "^([^ :]*)\ /static/(.*)" '\1 /\2'
+ reqrep "^([^ :]*)\ /static/(.*)" "\1 /\2"
+ reqrep "^([^ :]*)\ /static/(.*)" "\1\ /\2"
+
+
+2.3. Environment variables
+--------------------------
+
+HAProxy's configuration supports environment variables. Those variables are
+interpreted only within double quotes. Variables are expanded during the
+configuration parsing. Variable names must be preceded by a dollar ("$") and
+optionally enclosed with braces ("{}"), similarly to what is done in the
+Bourne shell. Variable names may contain alphanumeric characters and the
+underscore ("_") but should not start with a digit.
+
+ Example:
+
+ bind "fd@${FD_APP1}"
+
+ log "${LOCAL_SYSLOG}:514" local0 notice # send to local server
+
+ user "$HAPROXY_USER"
+
+
+2.4. Time format
+----------------
+
+Some parameters involve values representing time, such as timeouts. These
+values are generally expressed in milliseconds (unless explicitly stated
+otherwise) but may be expressed in any other unit by suffixing the unit to the
+numeric value. It is important to consider this because it will not be repeated
+for every keyword. Supported units are :
+
+ - us : microseconds. 1 microsecond = 1/1000000 second
+ - ms : milliseconds. 1 millisecond = 1/1000 second. This is the default.
+ - s : seconds. 1s = 1000ms
+ - m : minutes. 1m = 60s = 60000ms
+ - h : hours. 1h = 60m = 3600s = 3600000ms
+ - d : days. 1d = 24h = 1440m = 86400s = 86400000ms
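+
+ For instance, since milliseconds are the default unit, the following
+ "timeout client" lines are all equivalent:
+
+ timeout client 60000
+ timeout client 60000ms
+ timeout client 60s
+ timeout client 1m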
+
+
+2.5. Examples
+-------------
+
+ # Simple configuration for an HTTP proxy listening on port 80 on all
+ # interfaces and forwarding requests to a single backend "servers" with a
+ # single server "server1" listening on 127.0.0.1:8000
+ global
+ daemon
+ maxconn 256
+
+ defaults
+ mode http
+ timeout connect 5000ms
+ timeout client 50000ms
+ timeout server 50000ms
+
+ frontend http-in
+ bind *:80
+ default_backend servers
+
+ backend servers
+ server server1 127.0.0.1:8000 maxconn 32
+
+
+ # The same configuration defined with a single listen block. Shorter but
+ # less expressive, especially in HTTP mode.
+ global
+ daemon
+ maxconn 256
+
+ defaults
+ mode http
+ timeout connect 5000ms
+ timeout client 50000ms
+ timeout server 50000ms
+
+ listen http-in
+ bind *:80
+ server server1 127.0.0.1:8000 maxconn 32
+
+
+Assuming haproxy is in $PATH, test these configurations in a shell with:
+
+ $ sudo haproxy -f configuration.conf -c
+
+
+3. Global parameters
+--------------------
+
+Parameters in the "global" section are process-wide and often OS-specific. They
+are generally set once and for all and do not need to be changed once correct.
+Some of them have command-line equivalents.
+
+The following keywords are supported in the "global" section :
+
+ * Process management and security
+ - ca-base
+ - chroot
+ - crt-base
+ - cpu-map
+ - daemon
+ - description
+ - deviceatlas-json-file
+ - deviceatlas-log-level
+ - deviceatlas-separator
+ - deviceatlas-properties-cookie
+ - external-check
+ - gid
+ - group
+ - log
+ - log-tag
+ - log-send-hostname
+ - lua-load
+ - nbproc
+ - node
+ - pidfile
+ - uid
+ - ulimit-n
+ - user
+ - stats
+ - ssl-default-bind-ciphers
+ - ssl-default-bind-options
+ - ssl-default-server-ciphers
+ - ssl-default-server-options
+ - ssl-dh-param-file
+ - ssl-server-verify
+ - unix-bind
+ - 51degrees-data-file
+ - 51degrees-property-name-list
+ - 51degrees-property-separator
+ - 51degrees-cache-size
+
+ * Performance tuning
+ - max-spread-checks
+ - maxconn
+ - maxconnrate
+ - maxcomprate
+ - maxcompcpuusage
+ - maxpipes
+ - maxsessrate
+ - maxsslconn
+ - maxsslrate
+ - maxzlibmem
+ - noepoll
+ - nokqueue
+ - nopoll
+ - nosplice
+ - nogetaddrinfo
+ - spread-checks
+ - server-state-base
+ - server-state-file
+ - tune.buffers.limit
+ - tune.buffers.reserve
+ - tune.bufsize
+ - tune.chksize
+ - tune.comp.maxlevel
+ - tune.http.cookielen
+ - tune.http.maxhdr
+ - tune.idletimer
+ - tune.lua.forced-yield
+ - tune.lua.maxmem
+ - tune.lua.session-timeout
+ - tune.lua.task-timeout
+ - tune.lua.service-timeout
+ - tune.maxaccept
+ - tune.maxpollevents
+ - tune.maxrewrite
+ - tune.pattern.cache-size
+ - tune.pipesize
+ - tune.rcvbuf.client
+ - tune.rcvbuf.server
+ - tune.sndbuf.client
+ - tune.sndbuf.server
+ - tune.ssl.cachesize
+ - tune.ssl.lifetime
+ - tune.ssl.force-private-cache
+ - tune.ssl.maxrecord
+ - tune.ssl.default-dh-param
+ - tune.ssl.ssl-ctx-cache-size
+ - tune.vars.global-max-size
+ - tune.vars.reqres-max-size
+ - tune.vars.sess-max-size
+ - tune.vars.txn-max-size
+ - tune.zlib.memlevel
+ - tune.zlib.windowsize
+
+ * Debugging
+ - debug
+ - quiet
+
+
+3.1. Process management and security
+------------------------------------
+
+ca-base <dir>
+ Assigns a default directory to fetch SSL CA certificates and CRLs from when a
+ relative path is used with "ca-file" or "crl-file" directives. Absolute
+ locations specified in "ca-file" and "crl-file" prevail and ignore "ca-base".
+
+chroot <jail dir>
+ Changes current directory to <jail dir> and performs a chroot() there before
+ dropping privileges. This increases the security level in case an unknown
+ vulnerability would be exploited, since it would make it very hard for the
+ attacker to exploit the system. This only works when the process is started
+ with superuser privileges. It is important to ensure that <jail dir> is both
+ empty and unwritable to anyone.
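+
+ A minimal sketch of a chrooted setup (the jail directory and the user and
+ group names are illustrative):
+
+ global
+ chroot /var/lib/haproxy
+ user haproxy
+ group haproxy
+ daemon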
+
+cpu-map <"all"|"odd"|"even"|process_num> <cpu-set>...
+ On Linux 2.6 and above, it is possible to bind a process to a specific CPU
+ set. This means that the process will never run on other CPUs. The "cpu-map"
+ directive specifies CPU sets for process sets. The first argument is the
+ process number to bind. This process must have a number between 1 and 32 or
+ 64, depending on the machine's word size, and any process IDs above nbproc
+ are ignored. It is possible to specify all processes at once using "all",
+ only odd numbers using "odd" or even numbers using "even", just like with the
+ "bind-process" directive. The second and subsequent arguments are CPU sets.
+ Each CPU set is either a unique number between 0 and 31 or 63 or a range with
+ two such numbers delimited by a dash ('-'). Multiple CPU numbers or ranges
+ may be specified, and the processes will be allowed to bind to all of them.
+ Obviously, multiple "cpu-map" directives may be specified. Each "cpu-map"
+ directive will replace the previous ones when they overlap.
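+
+ For instance, on a machine with four CPUs one could write (the CPU and
+ process numbers are illustrative):
+
+ cpu-map odd 0-1 # odd-numbered processes on CPUs 0 and 1
+ cpu-map even 2-3 # even-numbered processes on CPUs 2 and 3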
+
+crt-base <dir>
+ Assigns a default directory to fetch SSL certificates from when a relative
+ path is used with "crtfile" directives. Absolute locations specified after
+ "crtfile" prevail and ignore "crt-base".
+
+daemon
+ Makes the process fork into background. This is the recommended mode of
+ operation. It is equivalent to the command line "-D" argument. It can be
+ disabled by the command line "-db" argument.
+
+deviceatlas-json-file <path>
+ Sets the path of the DeviceAtlas JSON data file to be loaded by the API.
+ The file must contain valid JSON data and be accessible by the HAProxy
+ process.
+
+deviceatlas-log-level <value>
+ Sets the level of information returned by the API. This directive is
+ optional and defaults to 0.
+
+deviceatlas-separator <char>
+ Sets the character separator for the API properties results. This directive
+ is optional and defaults to '|'.
+
+deviceatlas-properties-cookie <name>
+ Sets the name of the client cookie used for detection when the DeviceAtlas
+ Client-side component was used during the request. This directive is optional
+ and defaults to DAPROPS.
+
+external-check
+ Allows the use of an external agent to perform health checks.
+ This is disabled by default as a security precaution.
+ See "option external-check".
+
+gid <number>
+ Changes the process' group ID to <number>. It is recommended that the group
+ ID is dedicated to HAProxy or to a small set of similar daemons. HAProxy must
+ be started with a user belonging to this group, or with superuser privileges.
+ Note that if haproxy is started from a user having supplementary groups, it
+ will only be able to drop these groups if started with superuser privileges.
+ See also "group" and "uid".
+
+group <group name>
+ Similar to "gid" but uses the GID of group name <group name> from /etc/group.
+ See also "gid" and "user".
+
+log <address> [len <length>] [format <format>] <facility> [max level [min level]]
+ Adds a global syslog server. Up to two global servers can be defined. They
+ will receive logs for startups and exits, as well as all logs from proxies
+ configured with "log global".
+
+ <address> can be one of:
+
+ - An IPv4 address optionally followed by a colon and a UDP port. If
+ no port is specified, 514 is used by default (the standard syslog
+ port).
+
+ - An IPv6 address followed by a colon and optionally a UDP port. If
+ no port is specified, 514 is used by default (the standard syslog
+ port).
+
+ - A filesystem path to a UNIX domain socket, keeping in mind
+ considerations for chroot (be sure the path is accessible inside
+ the chroot) and uid/gid (be sure the path is appropriately
+ writeable).
+
+ You may want to reference some environment variables in the address
+ parameter, see section 2.3 about environment variables.
+
+ <length> is an optional maximum line length. Log lines larger than this value
+ will be truncated before being sent. The reason is that syslog
+ servers act differently on log line length. All servers support the
+ default value of 1024, but some servers simply drop larger lines
+ while others do log them. If a server supports long lines, it may
+ make sense to set this value here in order to avoid truncating long
+ lines. Similarly, if a server drops long lines, it is preferable to
+ truncate them before sending them. Accepted values are 80 to 65535
+ inclusive. The default value of 1024 is generally fine for all
+ standard usages. Some specific cases of long captures or
+ JSON-formatted logs may require larger values.
+
+ <format> is the log format used when generating syslog messages. It may be
+ one of the following :
+
+ rfc3164 The RFC3164 syslog message format. This is the default.
+ (https://tools.ietf.org/html/rfc3164)
+
+ rfc5424 The RFC5424 syslog message format.
+ (https://tools.ietf.org/html/rfc5424)
+
+ <facility> must be one of the 24 standard syslog facilities :
+
+ kern user mail daemon auth syslog lpr news
+ uucp cron auth2 ftp ntp audit alert cron2
+ local0 local1 local2 local3 local4 local5 local6 local7
+
+ An optional level can be specified to filter outgoing messages. By default,
+ all messages are sent. If a maximum level is specified, only messages with a
+ severity at least as important as this level will be sent. An optional minimum
+ level can be specified. If it is set, logs emitted with a more severe level
+ than this one will be capped to this level. This is used to avoid sending
+ "emerg" messages on all terminals on some default syslog configurations.
+ Eight levels are known :
+
+ emerg alert crit err warning notice info debug
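+
+ Example (the addresses are illustrative):
+
+ log 127.0.0.1:514 local0 notice # UDP, "notice" and above only
+ log /dev/log len 4096 format rfc5424 local1 info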
+
+log-send-hostname [<string>]
+ Sets the hostname field in the syslog header. If the optional "string"
+ parameter is set, the header is set to its contents, otherwise the system's
+ hostname is used. Generally used when logs are not relayed through an
+ intermediate syslog server, or simply to customize the hostname printed in
+ the logs.
+
+log-tag <string>
+ Sets the tag field in the syslog header to this string. It defaults to the
+ program name as launched from the command line, which usually is "haproxy".
+ Sometimes it can be useful to differentiate between multiple processes
+ running on the same host. See also the per-proxy "log-tag" directive.
+
+lua-load <file>
+ This global directive loads and executes a Lua file. This directive can be
+ used multiple times.
+
+nbproc <number>
+ Creates <number> processes when going daemon. This requires the "daemon"
+ mode. By default, only one process is created, which is the recommended mode
+ of operation. For systems limited to small sets of file descriptors per
+ process, it may be needed to fork multiple daemons. USING MULTIPLE PROCESSES
+ IS HARDER TO DEBUG AND IS REALLY DISCOURAGED. See also "daemon".
+
+pidfile <pidfile>
+ Writes pids of all daemons into file <pidfile>. This option is equivalent to
+ the "-p" command line argument. The file must be accessible to the user
+ starting the process. See also "daemon".
+
+stats bind-process [ all | odd | even | <number 1-64>[-<number 1-64>] ] ...
+ Limits the stats socket to a certain set of process numbers. By default the
+ stats socket is bound to all processes, causing a warning to be emitted when
+ nbproc is greater than 1 because there is no way to select the target process
+ when connecting. However, by using this setting, it becomes possible to pin
+ the stats socket to a specific set of processes, typically the first one. The
+ warning will automatically be disabled when this setting is used, regardless
+ of the number of processes used. The maximum process ID depends on the
+ machine's word size (32 or 64). A better option is to use the "process"
+ setting of the "stats socket" line to force the process on each line.
+
+server-state-base <directory>
+ Specifies the directory prefix to be prepended in front of all server state
+ file names which do not start with a '/'. See also "server-state-file",
+ "load-server-state-from-file" and "server-state-file-name".
+
+server-state-file <file>
+ Specifies the path to the file containing the state of servers. If the path
+ starts with a slash ('/'), it is considered absolute, otherwise it is
+ considered relative to the directory specified using "server-state-base" (if
+ set) or to the current directory. Before reloading HAProxy, it is possible
+ to save the servers' current state using the stats command "show servers
+ state". The output of this command must be written to the file pointed to by
+ <file>. When
+ starting up, before handling traffic, HAProxy will read, load and apply state
+ for each server found in the file and available in its current running
+ configuration. See also "server-state-base" and "show servers state",
+ "load-server-state-from-file" and "server-state-file-name"
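+
+ A sketch of the typical save-and-reload workflow (the socket and file paths
+ are illustrative, and assume a "stats socket" is configured):
+
+ global
+ server-state-file /var/lib/haproxy/server-state
+ defaults
+ load-server-state-from-file global
+
+ # before reloading, dump the current state:
+ $ echo "show servers state" | socat /run/haproxy/admin.sock stdio \
+ > /var/lib/haproxy/server-state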
+
+ssl-default-bind-ciphers <ciphers>
+ This setting is only available when support for OpenSSL was built in. It sets
+ the default string describing the list of cipher algorithms ("cipher suite")
+ that are negotiated during the SSL/TLS handshake for all "bind" lines which
+ do not explicitly define theirs. The format of the string is defined in
+ "man 1 ciphers" from OpenSSL man pages, and can be for instance a string such
+ as "AES:ALL:!aNULL:!eNULL:+RC4:@STRENGTH" (without quotes). Please check the
+ "bind" keyword for more information.
+
+ssl-default-bind-options [<option>]...
+ This setting is only available when support for OpenSSL was built in. It sets
+ default ssl-options to force on all "bind" lines. Please check the "bind"
+ keyword to see available options.
+
+ Example:
+ global
+ ssl-default-bind-options no-sslv3 no-tls-tickets
+
+ssl-default-server-ciphers <ciphers>
+ This setting is only available when support for OpenSSL was built in. It
+ sets the default string describing the list of cipher algorithms that are
+ negotiated during the SSL/TLS handshake with the server, for all "server"
+ lines which do not explicitly define theirs. The format of the string is
+ defined in "man 1 ciphers". Please check the "server" keyword for more
+ information.
+
+ssl-default-server-options [<option>]...
+ This setting is only available when support for OpenSSL was built in. It sets
+ default ssl-options to force on all "server" lines. Please check the "server"
+ keyword to see available options.
+
+ssl-dh-param-file <file>
+ This setting is only available when support for OpenSSL was built in. It sets
+ the default DH parameters that are used during the SSL/TLS handshake when
+ ephemeral Diffie-Hellman (DHE) key exchange is used, for all "bind" lines
+ which do not explicitly define theirs. It will be overridden by custom DH
+ parameters found in a bind certificate file if any. If custom DH parameters
+ are not specified either by using ssl-dh-param-file or by setting them
+ directly in the certificate file, pre-generated DH parameters of the size
+ specified by tune.ssl.default-dh-param will be used. Custom parameters are
+ known to be more secure and therefore their use is recommended.
+ Custom DH parameters may be generated by using the OpenSSL command
+ "openssl dhparam <size>", where size should be at least 2048, as 1024-bit DH
+ parameters should not be considered secure anymore.
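+
+ For example, custom parameters may be generated and referenced as follows
+ (the file path is illustrative):
+
+ $ openssl dhparam -out /etc/haproxy/dhparams.pem 2048
+
+ global
+ ssl-dh-param-file /etc/haproxy/dhparams.pem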
+
+ssl-server-verify [none|required]
+ Sets the default behavior for SSL certificate verification on the server
+ side. If set to 'none', server certificates are not verified. The default is
+ 'required', unless forced using the command-line option '-dV'.
+
+stats socket [<address:port>|<path>] [param*]
+ Binds a UNIX socket to <path> or a TCPv4/v6 address to <address:port>.
+ Connections to this socket will return various statistics outputs and even
+ allow some commands to be issued to change some runtime settings. Please
+ consult section 9.2 "Unix Socket commands" of the Management Guide for more
+ details.
+
+ All parameters supported by "bind" lines are supported, for instance to
+ restrict access to some users or their access rights. Please consult
+ section 5.1 for more information.
+
+stats timeout <timeout, in milliseconds>
+ The default timeout on the stats socket is set to 10 seconds. It is possible
+ to change this value with "stats timeout". The value must be passed in
+ milliseconds, or be suffixed by a time unit among { us, ms, s, m, h, d }.
+
+stats maxconn <connections>
+ By default, the stats socket is limited to 10 concurrent connections. It is
+ possible to change this value with "stats maxconn".
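+
+ The three "stats" settings above are commonly combined in the global section
+ (the socket path and limits are illustrative):
+
+ global
+ stats socket /run/haproxy/admin.sock mode 660 level admin
+ stats timeout 30s
+ stats maxconn 10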
+
+uid <number>
+ Changes the process' user ID to <number>. It is recommended that the user ID
+ is dedicated to HAProxy or to a small set of similar daemons. HAProxy must
+ be started with superuser privileges in order to be able to switch to another
+ one. See also "gid" and "user".
+
+ulimit-n <number>
+ Sets the maximum number of per-process file-descriptors to <number>. By
+ default, it is automatically computed, so it is recommended not to use this
+ option.
+
+unix-bind [ prefix <prefix> ] [ mode <mode> ] [ user <user> ] [ uid <uid> ]
+ [ group <group> ] [ gid <gid> ]
+
+ Fixes common settings to UNIX listening sockets declared in "bind" statements.
+ This is mainly used to simplify declaration of those UNIX sockets and reduce
+ the risk of errors, since those settings are most commonly required but are
+ also process-specific. The <prefix> setting can be used to force all socket
+ paths to be relative to that directory. This might be needed to access another
+ component's chroot. Note that those paths are resolved before haproxy chroots
+ itself, so they are absolute. The <mode>, <user>, <uid>, <group> and <gid>
+ all have the same meaning as their homonyms used by the "bind" statement. If
+ both are specified, the "bind" statement has priority, meaning that the
+ "unix-bind" settings may be seen as process-wide default settings.
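+
+ Example (the prefix and ownership values are illustrative):
+
+ global
+ unix-bind prefix /var/run/haproxy mode 770 user haproxy group haproxy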
+
+user <user name>
+ Similar to "uid" but uses the UID of user name <user name> from /etc/passwd.
+ See also "uid" and "group".
+
+node <name>
+ Only letters, digits, hyphen and underscore are allowed, like in DNS names.
+
+ This statement is useful in HA configurations where two or more processes or
+ servers share the same IP address. By setting a different node-name on all
+ nodes, it becomes easy to immediately spot what server is handling the
+ traffic.
+
+description <text>
+ Adds a text that describes the instance.
+
+ Please note that certain characters (e.g. '#') must be escaped, and that this
+ text is inserted into an HTML page, so the "<" and ">" characters should be
+ avoided.
+
+51degrees-data-file <file path>
+ The path of the 51Degrees data file to provide device detection services. The
+ file should be unzipped and accessible by HAProxy with relevant permissions.
+
+ Please note that this option is only available when haproxy has been
+ compiled with USE_51DEGREES.
+
+51degrees-property-name-list [<string>]
+ A list of 51Degrees property names to be loaded from the dataset. A full list
+ of names is available on the 51Degrees website:
+ https://51degrees.com/resources/property-dictionary
+
+ Please note that this option is only available when haproxy has been
+ compiled with USE_51DEGREES.
+
+51degrees-property-separator <char>
+ A char that will be appended to every property value in a response header
+ containing 51Degrees results. If not set, it defaults to ','.
+
+ Please note that this option is only available when haproxy has been
+ compiled with USE_51DEGREES.
+
+51degrees-cache-size <number>
+ Sets the size of the 51Degrees converter cache to <number> entries. This
+ is an LRU cache which remembers previous device detections and their results.
+ By default, this cache is disabled.
+
+ Please note that this option is only available when haproxy has been
+ compiled with USE_51DEGREES.
+
+
+3.2. Performance tuning
+-----------------------
+
+max-spread-checks <delay in milliseconds>
+ By default, haproxy tries to spread the start of health checks across the
+ smallest health check interval of all the servers in a farm. The principle is
+ to avoid hammering services running on the same server. But when using large
+ check intervals (10 seconds or more), the last servers in the farm take some
+ time before starting to be tested, which can be a problem. This parameter is
+ used to enforce an upper bound on delay between the first and the last check,
+ even if the servers' check intervals are larger. When servers run with
+ shorter intervals, their intervals will be respected though.
+
+maxconn <number>
+ Sets the maximum per-process number of concurrent connections to <number>. It
+ is equivalent to the command-line argument "-n". Proxies will stop accepting
+ connections when this limit is reached. The "ulimit-n" parameter is
+ automatically adjusted according to this value. See also "ulimit-n". Note:
+ the "select" poller cannot reliably use more than 1024 file descriptors on
+ some platforms. If your platform only supports select and reports "select
+ FAILED" on startup, you need to reduce maxconn until it works (slightly
+ below 500 in general). If this value is not set, it will default to the value
+ set in DEFAULT_MAXCONN at build time (reported in haproxy -vv) if no memory
+ limit is enforced, or will be computed based on the memory limit, the buffer
+ size, memory allocated to compression, SSL cache size, and use or not of SSL
+ and the associated maxsslconn (which can also be automatic).
+
+maxconnrate <number>
+ Sets the maximum per-process number of connections per second to <number>.
+ Proxies will stop accepting connections when this limit is reached. It can be
+ used to limit the global capacity regardless of each frontend capacity. It is
+ important to note that this can only be used as a service protection measure,
+ as there will not necessarily be a fair share between frontends when the
+ limit is reached, so it's a good idea to also limit each frontend to some
+ value close to its expected share. Also, lowering tune.maxaccept can improve
+ fairness.
+
+maxcomprate <number>
+ Sets the maximum per-process input compression rate to <number> kilobytes
+ per second. For each session, if the maximum is reached, the compression
+ level will be decreased during the session. If the maximum is reached at the
+ beginning of a session, the session will not compress at all. If the maximum
+ is not reached, the compression level will be increased up to
+ tune.comp.maxlevel. A value of zero means there is no limit, this is the
+ default value.
+
+maxcompcpuusage <number>
+ Sets the maximum CPU usage HAProxy can reach before stopping the compression
+ for new requests or decreasing the compression level of current requests.
+ It works like 'maxcomprate' but measures CPU usage instead of incoming data
+ bandwidth. The value is expressed in percent of the CPU used by haproxy. In
+ case of multiple processes (nbproc > 1), each process manages its individual
+ usage. A value of 100 disables the limit. The default value is 100. Setting
+ a lower value will prevent the compression work from slowing the whole
+ process down and from introducing high latencies.
+
+maxpipes <number>
+ Sets the maximum per-process number of pipes to <number>. Currently, pipes
+ are only used by kernel-based tcp splicing. Since a pipe contains two file
+ descriptors, the "ulimit-n" value will be increased accordingly. The default
+ value is maxconn/4, which seems to be more than enough for most heavy usages.
+ The splice code dynamically allocates and releases pipes, and can fall back
+ to standard copy, so setting this value too low may only impact performance.
+
+maxsessrate <number>
+ Sets the maximum per-process number of sessions per second to <number>.
+ Proxies will stop accepting connections when this limit is reached. It can be
+ used to limit the global capacity regardless of each frontend capacity. It is
+ important to note that this can only be used as a service protection measure,
+ as there will not necessarily be a fair share between frontends when the
+ limit is reached, so it's a good idea to also limit each frontend to some
+ value close to its expected share. Also, lowering tune.maxaccept can improve
+ fairness.
+
+maxsslconn <number>
+ Sets the maximum per-process number of concurrent SSL connections to
+ <number>. By default there is no SSL-specific limit, which means that the
+ global maxconn setting will apply to all connections. Setting this limit
+ avoids having openssl use too much memory and crash when malloc returns NULL
+ (since it unfortunately does not reliably check for such conditions). Note
+ that the limit applies both to incoming and outgoing connections, so one
+ connection which is deciphered then ciphered accounts for 2 SSL connections.
+ If this value is not set, but a memory limit is enforced, this value will be
+ automatically computed based on the memory limit, maxconn, the buffer size,
+ memory allocated to compression, SSL cache size, and use of SSL in either
+ frontends, backends or both. If neither maxconn nor maxsslconn are specified
+ when there is a memory limit, haproxy will automatically adjust these values
+ so that 100% of the connections can be made over SSL with no risk, and will
+ consider the sides where it is enabled (frontend, backend, both).
+
+maxsslrate <number>
+ Sets the maximum per-process number of SSL sessions per second to <number>.
+ SSL listeners will stop accepting connections when this limit is reached. It
+ can be used to limit the global SSL CPU usage regardless of each frontend
+ capacity. It is important to note that this can only be used as a service
+ protection measure, as there will not necessarily be a fair share between
+ frontends when the limit is reached, so it's a good idea to also limit each
+ frontend to some value close to its expected share. It is also important to
+ note that the sessions are accounted before they enter the SSL stack and not
+ after, which also protects the stack against bad handshakes. Also, lowering
+ tune.maxaccept can improve fairness.
+
+maxzlibmem <number>
+ Sets the maximum amount of RAM in megabytes per process usable by the zlib.
+ When the maximum amount is reached, future sessions will not compress as long
+ as RAM is unavailable. When set to 0, there is no limit.
+ The default value is 0. The value is available in bytes on the UNIX socket
+ with "show info" on the line "MaxZlibMemUsage", the memory used by zlib is
+ "ZlibMemUsage" in bytes.
+
+noepoll
+ Disables the use of the "epoll" event polling system on Linux. It is
+ equivalent to the command-line argument "-de". The next polling system
+ used will generally be "poll". See also "nopoll".
+
+nokqueue
+ Disables the use of the "kqueue" event polling system on BSD. It is
+ equivalent to the command-line argument "-dk". The next polling system
+ used will generally be "poll". See also "nopoll".
+
+nopoll
+ Disables the use of the "poll" event polling system. It is equivalent to the
+ command-line argument "-dp". The next polling system used will be "select".
+ It should never be needed to disable "poll" since it's available on all
+ platforms supported by HAProxy. See also "nokqueue" and "noepoll".
+
+nosplice
+ Disables the use of kernel tcp splicing between sockets on Linux. It is
+ equivalent to the command line argument "-dS". Data will then be copied
+ using conventional and more portable recv/send calls. Kernel tcp splicing is
+ limited to some very recent instances of kernel 2.6. Most versions between
+ 2.6.25 and 2.6.28 are buggy and will forward corrupted data, so they must not
+ be used. This option makes it easier to globally disable kernel splicing in
+ case of doubt. See also "option splice-auto", "option splice-request" and
+ "option splice-response".
+
+nogetaddrinfo
+ Disables the use of getaddrinfo(3) for name resolving. It is equivalent to
+ the command line argument "-dG". The deprecated gethostbyname(3) will be
+ used instead.
+
+spread-checks <0..50, in percent>
+ Sometimes it is desirable to avoid sending agent and health checks to
+ servers at exact intervals, for instance when many logical servers are
+ located on the same physical server. With the help of this parameter, it
+ becomes possible to add some randomness in the check interval between 0
+ and +/- 50%. A value between 2 and 5 seems to show good results. The
+ default value remains at 0.
+
+tune.buffers.limit <number>
+ Sets a hard limit on the number of buffers which may be allocated per process.
+ The default value is zero which means unlimited. The minimum non-zero value
+ will always be greater than "tune.buffers.reserve" and should ideally always
+ be about twice as large. Forcing this value can be particularly useful to
+ limit the amount of memory a process may take, while retaining a sane
+ behaviour. When this limit is reached, sessions which need a buffer wait for
+ another one to be released by another session. Since buffers are dynamically
+ allocated and released, the waiting time is very short and not perceptible
+ provided that limits remain reasonable. In fact sometimes reducing the limit
+ may even increase performance by increasing the CPU cache's efficiency. Tests
+ have shown good results on average HTTP traffic with a limit to 1/10 of the
+ expected global maxconn setting, which also significantly reduces memory
+ usage. The memory savings come from the fact that a number of connections
+ will not allocate 2*tune.bufsize. It is best not to touch this value unless
+ advised to do so by an haproxy core developer.
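+
+ As a sketch of the 1/10 guideline above (the values are illustrative only,
+ not recommendations):
+
+     global
+         maxconn 20000
+         # limit buffers to 1/10 of the expected global maxconn
+         tune.buffers.limit 2000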
+
+tune.buffers.reserve <number>
+ Sets the number of buffers which are pre-allocated and reserved for use only
+ during memory shortage conditions resulting in failed memory allocations. The
+ minimum value is 2 and is also the default. There is no reason a user would
+ want to change this value, it's mostly aimed at haproxy core developers.
+
+tune.bufsize <number>
+ Sets the buffer size to this size (in bytes). Lower values allow more
+ sessions to coexist in the same amount of RAM, and higher values allow some
+ applications with very large cookies to work. The default value is 16384 and
+ can be changed at build time. It is strongly recommended not to change this
+ from the default value, as very low values will break some services such as
+ statistics, and values larger than default size will increase memory usage,
+ possibly causing the system to run out of memory. At least the global maxconn
+ parameter should be decreased by the same factor as this one is increased.
+ If an HTTP request is larger than (tune.bufsize - tune.maxrewrite), haproxy
+ will return an HTTP 400 (Bad Request) error. Similarly, if an HTTP response
+ is larger than this size, haproxy will return an HTTP 502 (Bad Gateway) error.
+
+tune.chksize <number>
+ Sets the check buffer size to this size (in bytes). Higher values may help
+ find string or regex patterns in very large pages, though doing so may imply
+ more memory and CPU usage. The default value is 16384 and can be changed at
+ build time. It is not recommended to change this value, but to use better
+ checks whenever possible.
+
+tune.comp.maxlevel <number>
+ Sets the maximum compression level. The compression level affects CPU usage
+ during compression. Each session using compression initializes the
+ compression algorithm with this value. The default value is 1.
+
+tune.http.cookielen <number>
+ Sets the maximum length of captured cookies. This is the maximum value that
+ the "capture cookie xxx len yyy" will be allowed to take, and any upper value
+ will automatically be truncated to this one. It is important not to set too
+ high a value because all cookie captures still allocate this size whatever
+ their configured value (they share a same pool). This value is per request or
+ per response, so the memory allocated is twice this value per connection.
+ When not specified, the limit is set to 63 characters. It is recommended not
+ to change this value.
+
+tune.http.maxhdr <number>
+ Sets the maximum number of headers in a request. When a request comes with a
+ number of headers greater than this value (including the first line), it is
+ rejected with a "400 Bad Request" status code. Similarly, too large responses
+ are blocked with "502 Bad Gateway". The default value is 101, which is enough
+ for all usages, considering that the widely deployed Apache server uses the
+ same limit. It can be useful to push this limit further to temporarily allow
+ a buggy application to work until it gets fixed. Keep in mind that each new
+ header consumes 32 bits of memory for each session, so don't push this limit
+ too high.
+
+tune.idletimer <timeout>
+ Sets the duration after which haproxy will consider that an empty buffer is
+ probably associated with an idle stream. This is used to optimally adjust
+ some packet sizes while forwarding large and small data alternatively. The
+ decision to use splice() or to send large buffers in SSL is modulated by this
+ parameter. The value is in milliseconds between 0 and 65535. A value of zero
+ means that haproxy will not try to detect idle streams. The default is 1000,
+ which seems to correctly detect end user pauses (eg: read a page before
+ clicking). There should be no reason for changing this value. Please check
+ tune.ssl.maxrecord below.
+
+tune.lua.forced-yield <number>
+ This directive forces the Lua engine to execute a yield after every <number>
+ instructions executed. This permits interrupting a long script and allows the
+ HAProxy scheduler to process other tasks like accepting connections or
+ forwarding traffic. The default value is 10000 instructions. If HAProxy often
+ executes some Lua code but more reactivity is required, this value can be
+ lowered. If the Lua code is quite long and its result is absolutely required
+ to process the data, the <number> can be increased.
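+
+ For example, to make a loaded script (the file name below is hypothetical)
+ yield more often in favour of reactivity:
+
+     global
+         lua-load /etc/haproxy/script.lua
+         tune.lua.forced-yield 1000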
+
+tune.lua.maxmem
+ Sets the maximum amount of RAM in megabytes per process usable by Lua. By
+ default it is zero which means unlimited. It is important to set a limit to
+ ensure that a bug in a script will not result in the system running out of
+ memory.
+
+tune.lua.session-timeout <timeout>
+ This is the execution timeout for Lua sessions. It is useful for preventing
+ infinite loops or spending too much time in Lua. This timeout counts only
+ the pure Lua runtime. If the Lua code does a sleep, the sleep is not taken
+ into account. The default timeout is 4s.
+
+tune.lua.task-timeout <timeout>
+ The purpose is the same as "tune.lua.session-timeout", but this timeout is
+ dedicated to tasks. By default, this timeout isn't set because a task may
+ remain alive for the whole lifetime of HAProxy, for example a task used to
+ check servers.
+
+tune.lua.service-timeout <timeout>
+ This is the execution timeout for Lua services. It is useful for preventing
+ infinite loops or spending too much time in Lua. This timeout counts only
+ the pure Lua runtime. If the Lua code does a sleep, the sleep is not taken
+ into account. The default timeout is 4s.
+
+tune.maxaccept <number>
+ Sets the maximum number of consecutive connections a process may accept in a
+ row before switching to other work. In single process mode, higher numbers
+ give better performance at high connection rates. However in multi-process
+ modes, keeping a bit of fairness between processes generally is better to
+ increase performance. This value applies individually to each listener, so
+ that the number of processes a listener is bound to is taken into account.
+ This value defaults to 64. In multi-process mode, it is divided by twice
+ the number of processes the listener is bound to. Setting this value to -1
+ completely disables the limitation. It should normally not be needed to tweak
+ this value.
+
+tune.maxpollevents <number>
+ Sets the maximum amount of events that can be processed at once in a call to
+ the polling system. The default value is adapted to the operating system. It
+ has been noticed that reducing it below 200 tends to slightly decrease
+ latency at the expense of network bandwidth, and increasing it above 200
+ tends to trade latency for slightly increased bandwidth.
+
+tune.maxrewrite <number>
+ Sets the reserved buffer space to this size in bytes. The reserved space is
+ used for header rewriting or appending. The first reads on sockets will never
+ fill more than bufsize-maxrewrite. Historically it has defaulted to half of
+ bufsize, though that does not make much sense since there are rarely large
+ numbers of headers to add. Setting it too high prevents processing of large
+ requests or responses. Setting it too low prevents addition of new headers
+ to already large requests or to POST requests. It is generally wise to set it
+ to about 1024. It is automatically readjusted to half of bufsize if it is
+ larger than that. This means you don't have to worry about it when changing
+ bufsize.
+
+tune.pattern.cache-size <number>
+ Sets the size of the pattern lookup cache to <number> entries. This is an LRU
+ cache which reminds previous lookups and their results. It is used by ACLs
+ and maps on slow pattern lookups, namely the ones using the "sub", "reg",
+ "dir", "dom", "end", "bin" match methods as well as the case-insensitive
+ strings. It applies to pattern expressions which means that it will be able
+ to memorize the result of a lookup among all the patterns specified on a
+ configuration line (including all those loaded from files). It automatically
+ invalidates entries which are updated using HTTP actions or on the CLI. The
+ default cache size is set to 10000 entries, which limits its footprint to
+ about 5 MB on 32-bit systems and 8 MB on 64-bit systems. There is a very low
+ risk of collision in this cache, which is in the order of the size of the
+ cache divided by 2^64. Typically, at 10000 requests per second with the
+ default cache size of 10000 entries, there's 1% chance that a brute force
+ attack could cause a single collision after 60 years, or 0.1% after 6 years.
+ This is considered much lower than the risk of a memory corruption caused by
+ aging components. If this is not acceptable, the cache can be disabled by
+ setting this parameter to 0.
+
+tune.pipesize <number>
+ Sets the kernel pipe buffer size to this size (in bytes). By default, pipes
+ are created at the system's default size. But sometimes when using TCP
+ splicing, it can improve performance to increase pipe sizes, especially if it is
+ suspected that pipes are not filled and that many calls to splice() are
+ performed. This has an impact on the kernel's memory footprint, so this must
+ not be changed if impacts are not understood.
+
+tune.rcvbuf.client <number>
+tune.rcvbuf.server <number>
+ Forces the kernel socket receive buffer size on the client or the server side
+ to the specified value in bytes. This value applies to all TCP/HTTP frontends
+ and backends. It should normally never be set, and the default size (0) lets
+ the kernel autotune this value depending on the amount of available memory.
+ However it can sometimes help to set it to very low values (eg: 4096) in
+ order to save kernel memory by preventing it from buffering too large amounts
+ of received data. Lower values will significantly increase CPU usage though.
+
+tune.sndbuf.client <number>
+tune.sndbuf.server <number>
+ Forces the kernel socket send buffer size on the client or the server side to
+ the specified value in bytes. This value applies to all TCP/HTTP frontends
+ and backends. It should normally never be set, and the default size (0) lets
+ the kernel autotune this value depending on the amount of available memory.
+ However it can sometimes help to set it to very low values (eg: 4096) in
+ order to save kernel memory by preventing it from buffering too large amounts
+ of received data. Lower values will significantly increase CPU usage though.
+ Another use case is to prevent write timeouts with extremely slow clients due
+ to the kernel waiting for a large part of the buffer to be read before
+ notifying haproxy again.
+
+tune.ssl.cachesize <number>
+ Sets the size of the global SSL session cache, in a number of blocks. A block
+ is large enough to contain an encoded session without peer certificate.
+ An encoded session with peer certificate is stored in multiple blocks
+ depending on the size of the peer certificate. A block uses approximately
+ 200 bytes of memory. The default value may be forced at build time, otherwise
+ defaults to 20000. When the cache is full, the most idle entries are purged
+ and reassigned. Higher values reduce the occurrence of such a purge, hence
+ the number of CPU-intensive SSL handshakes by ensuring that all users keep
+ their session as long as possible. All entries are pre-allocated upon startup
+ and are shared between all processes if "nbproc" is greater than 1. Setting
+ this value to 0 disables the SSL session cache.
+
+tune.ssl.force-private-cache
+ This boolean disables SSL session cache sharing between all processes. It
+ should normally not be used since it will force many renegotiations due to
+ clients hitting a random process. But it may be required on some operating
+ systems where none of the SSL cache synchronization method may be used. In
+ this case, adding a first layer of hash-based load balancing before the SSL
+ layer might limit the impact of the lack of session sharing.
+
+tune.ssl.lifetime <timeout>
+ Sets how long a cached SSL session may remain valid. This time is expressed
+ in seconds and defaults to 300 (5 min). It is important to understand that it
+ does not guarantee that sessions will last that long, because if the cache is
+ full, the longest idle sessions will be purged despite their configured
+ lifetime. The real usefulness of this setting is to prevent sessions from
+ being used for too long.
+
+tune.ssl.maxrecord <number>
+ Sets the maximum amount of bytes passed to SSL_write() at a time. Default
+ value 0 means there is no limit. Over SSL/TLS, the client can decipher the
+ data only once it has received a full record. With large records, it means
+ that clients might have to download up to 16kB of data before starting to
+ process them. Limiting the value can improve page load times on browsers
+ located over high latency or low bandwidth networks. It is suggested to find
+ optimal values which fit into 1 or 2 TCP segments (generally 1448 bytes over
+ Ethernet with TCP timestamps enabled, or 1460 when timestamps are disabled),
+ keeping in mind that SSL/TLS add some overhead. Typical values of 1419 and
+ 2859 gave good results during tests. Use "strace -e trace=write" to find the
+ best value. Haproxy will automatically switch to this setting after an idle
+ stream has been detected (see tune.idletimer above).
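+
+ For example, using one of the values which gave good results during the
+ tests mentioned above:
+
+     global
+         tune.ssl.maxrecord 1419   # roughly one TCP segment over Ethernet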
+
+tune.ssl.default-dh-param <number>
+ Sets the maximum size of the Diffie-Hellman parameters used for generating
+ the ephemeral/temporary Diffie-Hellman key in case of DHE key exchange. The
+ final size will try to match the size of the server's RSA (or DSA) key (e.g,
+ a 2048 bits temporary DH key for a 2048 bits RSA key), but will not exceed
+ this maximum value. The default value is 1024. Only 1024 or higher values are
+ allowed. Higher values will increase the CPU load, and values greater than
+ 1024 bits are not supported by Java 7 and earlier clients. This value is not
+ used if static Diffie-Hellman parameters are supplied either directly
+ in the certificate file or by using the ssl-dh-param-file parameter.
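+
+ For example, to allow DH parameters up to 2048 bits to match a 2048-bit RSA
+ key (keeping in mind the Java 7 client limitation described above):
+
+     global
+         tune.ssl.default-dh-param 2048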
+
+tune.ssl.ssl-ctx-cache-size <number>
+ Sets the size of the cache used to store generated certificates to <number>
+ entries. This is an LRU cache. Because generating an SSL certificate
+ dynamically is expensive, they are cached. The default cache size is set to
+ 1000 entries.
+
+tune.vars.global-max-size <size>
+tune.vars.reqres-max-size <size>
+tune.vars.sess-max-size <size>
+tune.vars.txn-max-size <size>
+ These four tunables help manage the amount of memory used by the variables
+ system. "global" limits the memory for the whole system, "sess" limits the
+ memory per session, "txn" limits the memory per transaction and "reqres"
+ limits the memory for each request or response processing. During accounting,
+ "sess" embeds "txn" and "txn" embeds "reqres".
+
+ For example, suppose that "tune.vars.sess-max-size" is set to 100,
+ "tune.vars.txn-max-size" is set to 100 and "tune.vars.reqres-max-size" is
+ also set to 100. If we create a variable "txn.var" that contains 100 bytes,
+ we cannot create any more variables in the other contexts.
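+
+ As an illustrative sketch (the sizes below are hypothetical, not
+ recommendations), these limits are set in the "global" section:
+
+     global
+         tune.vars.global-max-size 1048576
+         tune.vars.sess-max-size 200
+         tune.vars.txn-max-size 100
+         tune.vars.reqres-max-size 100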
+
+tune.zlib.memlevel <number>
+ Sets the memLevel parameter in zlib initialization for each session. It
+ defines how much memory should be allocated for the internal compression
+ state. A value of 1 uses minimum memory but is slow and reduces compression
+ ratio, a value of 9 uses maximum memory for optimal speed. Can be a value
+ between 1 and 9. The default value is 8.
+
+tune.zlib.windowsize <number>
+ Sets the window size (the size of the history buffer) as a parameter of the
+ zlib initialization for each session. Larger values of this parameter result
+ in better compression at the expense of memory usage. Can be a value between
+ 8 and 15. The default value is 15.
+
+3.3. Debugging
+--------------
+
+debug
+ Enables debug mode which dumps to stdout all exchanges, and disables forking
+ into background. It is the equivalent of the command-line argument "-d". It
+ should never be used in a production configuration since it may prevent full
+ system startup.
+
+quiet
+ Do not display any message during startup. It is equivalent to the command-
+ line argument "-q".
+
+
+3.4. Userlists
+--------------
+It is possible to control access to frontend/backend/listen sections or to
+http stats by allowing only authenticated and authorized users. To do this,
+it is required to create at least one userlist and to define users.
+
+userlist <listname>
+ Creates new userlist with name <listname>. Many independent userlists can be
+ used to store authentication & authorization data for independent customers.
+
+group <groupname> [users <user>,<user>,(...)]
+ Adds group <groupname> to the current userlist. It is also possible to
+ attach users to this group by using a comma separated list of names
+ preceded by the "users" keyword.
+
+user <username> [password|insecure-password <password>]
+ [groups <group>,<group>,(...)]
+ Adds user <username> to the current userlist. Both secure (encrypted) and
+ insecure (unencrypted) passwords can be used. Encrypted passwords are
+ evaluated using the crypt(3) function, so depending on the system's
+ capabilities, different algorithms are supported. For example, modern
+ Glibc-based Linux systems support MD5, SHA-256, SHA-512 and of course the
+ classic DES-based method of encrypting passwords.
+
+
+ Example:
+ userlist L1
+ group G1 users tiger,scott
+ group G2 users xdb,scott
+
+ user tiger password $6$k6y3o.eP$JlKBx9za9667qe4(...)xHSwRv6J.C0/D7cV91
+ user scott insecure-password elgato
+ user xdb insecure-password hello
+
+ userlist L2
+ group G1
+ group G2
+
+ user tiger password $6$k6y3o.eP$JlKBx(...)xHSwRv6J.C0/D7cV91 groups G1
+ user scott insecure-password elgato groups G1,G2
+ user xdb insecure-password hello groups G2
+
+ Please note that both lists are functionally identical.
+
+
+3.5. Peers
+----------
+It is possible to propagate entries of any data-types in stick-tables between
+several haproxy instances over TCP connections in a multi-master fashion. Each
+instance pushes its local updates and insertions to remote peers. The pushed
+values overwrite remote ones without aggregation. Interrupted exchanges are
+automatically detected and recovered from the last known point.
+In addition, during a soft restart, the old process connects to the new one
+using such a TCP connection to push all its entries before the new process
+tries to connect to other peers. That ensures very fast replication during a
+reload; it typically takes a fraction of a second even for large tables.
+Note that Server IDs are used to identify servers remotely, so it is important
+that configurations look similar or at least that the same IDs are forced on
+each server on all participants.
+
+peers <peersect>
+ Creates a new peer list with name <peersect>. It is an independent section,
+ which is referenced by one or more stick-tables.
+
+disabled
+ Disables a peers section. It disables both listening and any synchronization
+ related to this section. This is provided to disable synchronization of stick
+ tables without having to comment out all "peers" references.
+
+enable
+ This re-enables a peers section which was previously disabled.
+
+peer <peername> <ip>:<port>
+ Defines a peer inside a peers section.
+ If <peername> is set to the local peer name (by default hostname, or forced
+ using "-L" command line option), haproxy will listen for incoming remote peer
+ connection on <ip>:<port>. Otherwise, <ip>:<port> defines where to connect to
+ join the remote peer, and <peername> is used at the protocol level to
+ identify and validate the remote peer on the server side.
+
+ During a soft restart, local peer <ip>:<port> is used by the old instance to
+ connect to the new one and initiate a complete replication (teaching process).
+
+ It is strongly recommended to have the exact same peers declaration on all
+ peers and to only rely on the "-L" command line argument to change the local
+ peer name. This makes it easier to maintain coherent configuration files
+ across all peers.
+
+ You may want to reference some environment variables in the address
+ parameter, see section 2.3 about environment variables.
+
+ Example:
+ peers mypeers
+ peer haproxy1 192.168.0.1:1024
+ peer haproxy2 192.168.0.2:1024
+ peer haproxy3 10.2.0.1:1024
+
+ backend mybackend
+ mode tcp
+ balance roundrobin
+ stick-table type ip size 20k peers mypeers
+ stick on src
+
+ server srv1 192.168.0.30:80
+ server srv2 192.168.0.31:80
+
+
+3.6. Mailers
+------------
+It is possible to send email alerts when the state of servers changes.
+If configured, email alerts are sent to each mailer listed in a mailers
+section. Email is sent to mailers using SMTP.
+
+mailers <mailersect>
+ Creates a new mailer list with the name <mailersect>. It is an
+ independent section which is referenced by one or more proxies.
+
+mailer <mailername> <ip>:<port>
+ Defines a mailer inside a mailers section.
+
+ Example:
+ mailers mymailers
+ mailer smtp1 192.168.0.1:587
+ mailer smtp2 192.168.0.2:587
+
+ backend mybackend
+ mode tcp
+ balance roundrobin
+
+ email-alert mailers mymailers
+ email-alert from test1@horms.org
+ email-alert to test2@horms.org
+
+ server srv1 192.168.0.30:80
+ server srv2 192.168.0.31:80
+
+
+4. Proxies
+----------
+
+Proxy configuration can be located in a set of sections :
+ - defaults [<name>]
+ - frontend <name>
+ - backend <name>
+ - listen <name>
+
+A "defaults" section sets default parameters for all other sections following
+its declaration. Those default parameters are reset by the next "defaults"
+section. See below for the list of parameters which can be set in a "defaults"
+section. The name is optional but its use is encouraged for better readability.
+
+A "frontend" section describes a set of listening sockets accepting client
+connections.
+
+A "backend" section describes a set of servers to which the proxy will connect
+to forward incoming connections.
+
+A "listen" section defines a complete proxy with its frontend and backend
+parts combined in one section. It is generally useful for TCP-only traffic.
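+
+ As a minimal sketch combining these section types (all names, addresses and
+ timeout values are illustrative):
+
+     defaults
+         mode http
+         timeout connect 5s
+         timeout client 30s
+         timeout server 30s
+
+     frontend fe_web
+         bind :80
+         default_backend be_web
+
+     backend be_web
+         server srv1 192.168.0.30:80
+
+     listen mysql
+         mode tcp
+         bind :3306
+         server db1 192.168.0.40:3306 check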
+
+All proxy names must be formed from upper and lower case letters, digits,
+'-' (dash), '_' (underscore), '.' (dot) and ':' (colon). Proxy names are
+case-sensitive, which means that "www" and "WWW" are two different proxies.
+
+Historically, all proxy names could overlap, it just caused troubles in the
+logs. Since the introduction of content switching, it is mandatory that two
+proxies with overlapping capabilities (frontend/backend) have different names.
+However, it is still permitted that a frontend and a backend share the same
+name, as this configuration seems to be commonly encountered.
+
+Right now, two major proxy modes are supported : "tcp", also known as layer 4,
+and "http", also known as layer 7. In layer 4 mode, HAProxy simply forwards
+bidirectional traffic between two sides. In layer 7 mode, HAProxy analyzes the
+protocol, and can interact with it by allowing, blocking, switching, adding,
+modifying, or removing arbitrary contents in requests or responses, based on
+arbitrary criteria.
+
+In HTTP mode, the processing applied to requests and responses flowing over
+a connection depends on the combination of the frontend's HTTP options and
+the backend's. HAProxy supports 5 connection modes :
+
+ - KAL : keep alive ("option http-keep-alive") which is the default mode : all
+ requests and responses are processed, and connections remain open but idle
+ between responses and new requests.
+
+ - TUN: tunnel ("option http-tunnel") : this was the default mode for versions
+ 1.0 to 1.5-dev21 : only the first request and response are processed, and
+ everything else is forwarded with no analysis at all. This mode should not
+ be used as it creates lots of trouble with logging and HTTP processing.
+
+ - PCL: passive close ("option httpclose") : exactly the same as tunnel mode,
+ but with "Connection: close" appended in both directions to try to make
+ both ends close after the first request/response exchange.
+
+ - SCL: server close ("option http-server-close") : the server-facing
+ connection is closed after the end of the response is received, but the
+ client-facing connection remains open.
+
+ - FCL: forced close ("option forceclose") : the connection is actively closed
+ after the end of the response.
+
+The effective mode that will be applied to a connection passing through a
+frontend and a backend can be determined by both proxy modes according to the
+following matrix, but in short, the modes are symmetric, keep-alive is the
+weakest option and force close is the strongest.
+
+ Backend mode
+
+ | KAL | TUN | PCL | SCL | FCL
+ ----+-----+-----+-----+-----+----
+ KAL | KAL | TUN | PCL | SCL | FCL
+ ----+-----+-----+-----+-----+----
+ TUN | TUN | TUN | PCL | SCL | FCL
+ Frontend ----+-----+-----+-----+-----+----
+ mode PCL | PCL | PCL | PCL | FCL | FCL
+ ----+-----+-----+-----+-----+----
+ SCL | SCL | SCL | FCL | SCL | FCL
+ ----+-----+-----+-----+-----+----
+ FCL | FCL | FCL | FCL | FCL | FCL
+
+
+
+4.1. Proxy keywords matrix
+--------------------------
+
+The following list of keywords is supported. Most of them may only be used in a
+limited set of section types. Some of them are marked as "deprecated" because
+they are inherited from an old syntax which may be confusing or functionally
+limited, and there are new recommended keywords to replace them. Keywords
+marked with "(*)" can be optionally inverted using the "no" prefix, eg. "no
+option contstats". This makes sense when the option has been enabled by default
+and must be disabled for a specific instance. Such options may also be prefixed
+with "default" in order to restore default settings regardless of what has been
+specified in a previous "defaults" section.
+
+
+ keyword defaults frontend listen backend
+------------------------------------+----------+----------+---------+---------
+acl - X X X
+appsession - - - -
+backlog X X X -
+balance X - X X
+bind - X X -
+bind-process X X X X
+block - X X X
+capture cookie - X X -
+capture request header - X X -
+capture response header - X X -
+clitimeout (deprecated) X X X -
+compression X X X X
+contimeout (deprecated) X - X X
+cookie X - X X
+declare capture - X X -
+default-server X - X X
+default_backend X X X -
+description - X X X
+disabled X X X X
+dispatch - - X X
+email-alert from X X X X
+email-alert level X X X X
+email-alert mailers X X X X
+email-alert myhostname X X X X
+email-alert to X X X X
+enabled X X X X
+errorfile X X X X
+errorloc X X X X
+errorloc302 X X X X
+-- keyword -------------------------- defaults - frontend - listen -- backend -
+errorloc303 X X X X
+force-persist - X X X
+fullconn X - X X
+grace X X X X
+hash-type X - X X
+http-check disable-on-404 X - X X
+http-check expect - - X X
+http-check send-state X - X X
+http-request - X X X
+http-response - X X X
+http-reuse X - X X
+http-send-name-header - - X X
+id - X X X
+ignore-persist - X X X
+load-server-state-from-file X - X X
+log (*) X X X X
+log-format X X X -
+log-format-sd X X X -
+log-tag X X X X
+max-keep-alive-queue X - X X
+maxconn X X X -
+mode X X X X
+monitor fail - X X -
+monitor-net X X X -
+monitor-uri X X X -
+option abortonclose (*) X - X X
+option accept-invalid-http-request (*) X X X -
+option accept-invalid-http-response (*) X - X X
+option allbackups (*) X - X X
+option checkcache (*) X - X X
+option clitcpka (*) X X X -
+option contstats (*) X X X -
+option dontlog-normal (*) X X X -
+option dontlognull (*) X X X -
+option forceclose (*) X X X X
+-- keyword -------------------------- defaults - frontend - listen -- backend -
+option forwardfor X X X X
+option http-buffer-request (*) X X X X
+option http-ignore-probes (*) X X X -
+option http-keep-alive (*) X X X X
+option http-no-delay (*) X X X X
+option http-pretend-keepalive (*) X X X X
+option http-server-close (*) X X X X
+option http-tunnel (*) X X X X
+option http-use-proxy-header (*) X X X -
+option httpchk X - X X
+option httpclose (*) X X X X
+option httplog X X X X
+option http_proxy (*) X X X X
+option independent-streams (*) X X X X
+option ldap-check X - X X
+option external-check X - X X
+option log-health-checks (*) X - X X
+option log-separate-errors (*) X X X -
+option logasap (*) X X X -
+option mysql-check X - X X
+option nolinger (*) X X X X
+option originalto X X X X
+option persist (*) X - X X
+option pgsql-check X - X X
+option prefer-last-server (*) X - X X
+option redispatch (*) X - X X
+option redis-check X - X X
+option smtpchk X - X X
+option socket-stats (*) X X X -
+option splice-auto (*) X X X X
+option splice-request (*) X X X X
+option splice-response (*) X X X X
+option srvtcpka (*) X - X X
+option ssl-hello-chk X - X X
+-- keyword -------------------------- defaults - frontend - listen -- backend -
+option tcp-check X - X X
+option tcp-smart-accept (*) X X X -
+option tcp-smart-connect (*) X - X X
+option tcpka X X X X
+option tcplog X X X X
+option transparent (*) X - X X
+external-check command X - X X
+external-check path X - X X
+persist rdp-cookie X - X X
+rate-limit sessions X X X -
+redirect - X X X
+redisp (deprecated) X - X X
+redispatch (deprecated) X - X X
+reqadd - X X X
+reqallow - X X X
+reqdel - X X X
+reqdeny - X X X
+reqiallow - X X X
+reqidel - X X X
+reqideny - X X X
+reqipass - X X X
+reqirep - X X X
+reqitarpit - X X X
+reqpass - X X X
+reqrep - X X X
+-- keyword -------------------------- defaults - frontend - listen -- backend -
+reqtarpit - X X X
+retries X - X X
+rspadd - X X X
+rspdel - X X X
+rspdeny - X X X
+rspidel - X X X
+rspideny - X X X
+rspirep - X X X
+rsprep - X X X
+server - - X X
+server-state-file-name X - X X
+source X - X X
+srvtimeout (deprecated) X - X X
+stats admin - X X X
+stats auth X X X X
+stats enable X X X X
+stats hide-version X X X X
+stats http-request - X X X
+stats realm X X X X
+stats refresh X X X X
+stats scope X X X X
+stats show-desc X X X X
+stats show-legends X X X X
+stats show-node X X X X
+stats uri X X X X
+-- keyword -------------------------- defaults - frontend - listen -- backend -
+stick match - - X X
+stick on - - X X
+stick store-request - - X X
+stick store-response - - X X
+stick-table - - X X
+tcp-check connect - - X X
+tcp-check expect - - X X
+tcp-check send - - X X
+tcp-check send-binary - - X X
+tcp-request connection - X X -
+tcp-request content - X X X
+tcp-request inspect-delay - X X X
+tcp-response content - - X X
+tcp-response inspect-delay - - X X
+timeout check X - X X
+timeout client X X X -
+timeout client-fin X X X -
+timeout clitimeout (deprecated) X X X -
+timeout connect X - X X
+timeout contimeout (deprecated) X - X X
+timeout http-keep-alive X X X X
+timeout http-request X X X X
+timeout queue X - X X
+timeout server X - X X
+timeout server-fin X - X X
+timeout srvtimeout (deprecated) X - X X
+timeout tarpit X X X X
+timeout tunnel X - X X
+transparent (deprecated) X - X X
+unique-id-format X X X -
+unique-id-header X X X -
+use_backend - X X -
+use-server - - X X
+------------------------------------+----------+----------+---------+---------
+ keyword defaults frontend listen backend
+
+
+4.2. Alphabetically sorted keywords reference
+---------------------------------------------
+
+This section provides a description of each keyword and its usage.
+
+
+acl <aclname> <criterion> [flags] [operator] <value> ...
+ Declare or complete an access list.
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
+ Example:
+ acl invalid_src src 0.0.0.0/7 224.0.0.0/3
+ acl invalid_src src_port 0:1023
+ acl local_dst hdr(host) -i localhost
+
+ See section 7 about ACL usage.
+
+
+appsession <cookie> len <length> timeout <holdtime>
+ [request-learn] [prefix] [mode <path-parameters|query-string>]
+ Define session stickiness on an existing application cookie.
+ May be used in sections : defaults | frontend | listen | backend
+ no | no | yes | yes
+ Arguments :
+ <cookie> this is the name of the cookie used by the application and which
+ HAProxy will have to learn for each new session.
+
+ <length> this is the max number of characters that will be memorized and
+ checked in each cookie value.
+
+ <holdtime> this is the time after which the cookie will be removed from
+ memory if unused. If no unit is specified, this time is in
+ milliseconds.
+
+ request-learn
+ If this option is specified, then haproxy will be able to learn
+ the cookie found in the request in case the server does not
+ specify any in response. This is typically what happens with
+ PHPSESSID cookies, or when haproxy's session expires before
+ the application's session and the correct server is selected.
+ It is recommended to specify this option to improve reliability.
+
+ prefix When this option is specified, haproxy will match on the cookie
+ prefix (or URL parameter prefix). The appsession value is the
+ data following this prefix.
+
+ Example :
+ appsession ASPSESSIONID len 64 timeout 3h prefix
+
+ This will match the cookie ASPSESSIONIDXXXX=XXXXX,
+ the appsession value will be XXXX=XXXXX.
+
+ mode This option allows changing the URL parser mode.
+ 2 modes are currently supported :
+ - path-parameters :
+ The parser looks for the appsession in the path parameters
+ part (each parameter is separated by a semi-colon), which is
+ convenient for JSESSIONID for example.
+ This is the default mode if the option is not set.
+ - query-string :
+ In this mode, the parser will look for the appsession in the
+ query string.
+
+ As of version 1.6, appsession was removed. It is more flexible and more
+ convenient to use stick-tables instead: they support multi-master
+ replication and data conservation across reloads, which appsession did not.
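+
+ A minimal replacement sketch using stick-tables (assuming the application
+ cookie is named JSESSIONID; the table name and sizes are illustrative
+ only) :
+
+      backend app
+          stick-table type string len 64 size 100k expire 3h
+          stick store-response res.cook(JSESSIONID)
+          stick on req.cook(JSESSIONID)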
+
+ See also : "cookie", "capture cookie", "balance", "stick", "stick-table",
+ "ignore-persist", "nbproc" and "bind-process".
+
+
+backlog <conns>
+ Give hints to the system about the desired approximate listen backlog size
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments :
+ <conns> is the number of pending connections. Depending on the operating
+ system, it may represent the number of already acknowledged
+ connections, of non-acknowledged ones, or both.
+
+ In order to protect against SYN flood attacks, one solution is to increase
+ the system's SYN backlog size. Depending on the system, sometimes it is just
+ tunable via a system parameter, sometimes it is not adjustable at all, and
+ sometimes the system relies on hints given by the application at the time of
+ the listen() syscall. By default, HAProxy passes the frontend's maxconn value
+ to the listen() syscall. On systems which can make use of this value, it can
+ sometimes be useful to be able to specify a different value, hence this
+ backlog parameter.
+
+ On Linux 2.4, the parameter is ignored by the system. On Linux 2.6, it is
+ used as a hint and the system accepts up to the smallest power of two
+ greater than or equal to it, and never more than some limits (usually 32768).
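+
+ Example (a minimal sketch allowing the kernel to queue more pending
+ connections than the frontend will serve concurrently; values are
+ illustrative) :
+      frontend fe_web
+          bind :80
+          maxconn 10000
+          backlog 20000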
+
+ See also : "maxconn" and the target operating system's tuning guide.
+
+
+balance <algorithm> [ <arguments> ]
+balance url_param <param> [check_post]
+ Define the load balancing algorithm to be used in a backend.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments :
+ <algorithm> is the algorithm used to select a server when doing load
+ balancing. This only applies when no persistence information
+ is available, or when a connection is redispatched to another
+ server. <algorithm> may be one of the following :
+
+ roundrobin Each server is used in turns, according to their weights.
+ This is the smoothest and fairest algorithm when the server's
+ processing time remains equally distributed. This algorithm
+ is dynamic, which means that server weights may be adjusted
+ on the fly for slow starts for instance. It is limited by
+ design to 4095 active servers per backend. Note that in some
+ large farms, when a server comes back up after having been down
+ for a very short time, it may sometimes take a few hundred
+ requests for it to be re-integrated into the farm and start
+ receiving traffic. This is normal, though very rare. It is
+ mentioned here in case you happen to observe it, so that
+ you don't worry.
+
+ static-rr Each server is used in turns, according to their weights.
+ This algorithm is similar to roundrobin, except that it is
+ static, which means that changing a server's weight on the
+ fly will have no effect. On the other hand, it has no design
+ limitation on the number of servers, and when a server goes
+ up, it is always immediately reintroduced into the farm, once
+ the full map is recomputed. It also uses slightly less CPU to
+ run (around -1%).
+
+ leastconn The server with the lowest number of connections receives the
+ connection. Round-robin is performed within groups of servers
+ of the same load to ensure that all servers will be used. Use
+ of this algorithm is recommended where very long sessions are
+ expected, such as LDAP, SQL, TSE, etc... but is not very well
+ suited for protocols using short sessions such as HTTP. This
+ algorithm is dynamic, which means that server weights may be
+ adjusted on the fly for slow starts for instance.
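+
+ Example (a sketch of a long-session TCP pool; addresses are
+ placeholders) :
+      backend ldap_pool
+          mode tcp
+          balance leastconn
+          server ldap1 10.0.0.11:389 check
+          server ldap2 10.0.0.12:389 check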
+
+ first The first server with available connection slots receives the
+ connection. The servers are chosen from the lowest numeric
+ identifier to the highest (see server parameter "id"), which
+ defaults to the server's position in the farm. Once a server
+ reaches its maxconn value, the next server is used. It does
+ not make sense to use this algorithm without setting maxconn.
+ The purpose of this algorithm is to always use the smallest
+ number of servers so that extra servers can be powered off
+ during non-intensive hours. This algorithm ignores the server
+ weight, and brings more benefit to long sessions such as RDP
+ or IMAP than HTTP, though it can be useful there too. In
+ order to use this algorithm efficiently, it is recommended
+ that a cloud controller regularly checks server usage to turn
+ them off when unused, and regularly checks backend queue to
+ turn new servers on when the queue inflates. Alternatively,
+ using "http-check send-state" may inform servers on the load.
+
+ source The source IP address is hashed and divided by the total
+ weight of the running servers to designate which server will
+ receive the request. This ensures that the same client IP
+ address will always reach the same server as long as no
+ server goes down or up. If the hash result changes due to the
+ number of running servers changing, many clients will be
+ directed to a different server. This algorithm is generally
+ used in TCP mode where no cookie may be inserted. It may also
+ be used on the Internet to provide a best-effort stickiness
+ to clients which refuse session cookies. This algorithm is
+ static by default, which means that changing a server's
+ weight on the fly will have no effect, but this can be
+ changed using "hash-type".
+
+ uri This algorithm hashes either the left part of the URI (before
+ the question mark) or the whole URI (if the "whole" parameter
+ is present) and divides the hash value by the total weight of
+ the running servers. The result designates which server will
+ receive the request. This ensures that the same URI will
+ always be directed to the same server as long as no server
+ goes up or down. This is used with proxy caches and
+ anti-virus proxies in order to maximize the cache hit rate.
+ Note that this algorithm may only be used in an HTTP backend.
+ This algorithm is static by default, which means that
+ changing a server's weight on the fly will have no effect,
+ but this can be changed using "hash-type".
+
+ This algorithm supports two optional parameters "len" and
+ "depth", both followed by a positive integer number. These
+ options may be helpful when it is needed to balance servers
+ based on the beginning of the URI only. The "len" parameter
+ indicates that the algorithm should only consider that many
+ characters at the beginning of the URI to compute the hash.
+ Note that having "len" set to 1 rarely makes sense since most
+ URIs start with a leading "/".
+
+ The "depth" parameter indicates the maximum directory depth
+ to be used to compute the hash. One level is counted for each
+ slash in the request. If both parameters are specified, the
+ evaluation stops when either is reached.
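+
+ Example (a sketch for a cache farm, hashing at most the first
+ three path levels of each URI) :
+      backend cache_pool
+          balance uri depth 3
+          hash-type consistent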
+
+ url_param The URL parameter specified in argument will be looked up in
+ the query string of each HTTP GET request.
+
+ If the modifier "check_post" is used, then an HTTP POST
+ request entity will be searched for the parameter argument,
+ when it is not found in a query string after a question mark
+ ('?') in the URL. The message body will only start to be
+ analyzed once either the advertised amount of data has been
+ received or the request buffer is full. In the unlikely event
+ that chunked encoding is used, only the first chunk is
+ scanned. Parameter values split across a chunk boundary may
+ be balanced randomly, if at all. This keyword used to support
+ an optional <max_wait> parameter which is now ignored.
+
+ If the parameter is found followed by an equal sign ('=') and
+ a value, then the value is hashed and divided by the total
+ weight of the running servers. The result designates which
+ server will receive the request.
+
+ This is used to track user identifiers in requests and ensure
+ that a same user ID will always be sent to the same server as
+ long as no server goes up or down. If no value is found or if
+ the parameter is not found, then a round robin algorithm is
+ applied. Note that this algorithm may only be used in an HTTP
+ backend. This algorithm is static by default, which means
+ that changing a server's weight on the fly will have no
+ effect, but this can be changed using "hash-type".
+
+ hdr(<name>) The HTTP header <name> will be looked up in each HTTP
+ request. Just as with the equivalent ACL 'hdr()' function,
+ the header name in parenthesis is not case sensitive. If the
+ header is absent or if it does not contain any value, the
+ roundrobin algorithm is applied instead.
+
+ An optional 'use_domain_only' parameter is available, for
+ reducing the hash algorithm to the main domain part with some
+ specific headers such as 'Host'. For instance, in the Host
+ value "haproxy.1wt.eu", only "1wt" will be considered.
+
+ This algorithm is static by default, which means that
+ changing a server's weight on the fly will have no effect,
+ but this can be changed using "hash-type".
+
+ rdp-cookie
+ rdp-cookie(<name>)
+ The RDP cookie <name> (or "mstshash" if omitted) will be
+ looked up and hashed for each incoming TCP request. Just as
+ with the equivalent ACL 'req_rdp_cookie()' function, the name
+ is not case-sensitive. This mechanism is useful as a degraded
+ persistence mode, as it makes it possible to always send the
+ same user (or the same session ID) to the same server. If the
+ cookie is not found, the normal roundrobin algorithm is
+ used instead.
+
+ Note that for this to work, the frontend must ensure that an
+ RDP cookie is already present in the request buffer. For this
+ you must use 'tcp-request content accept' rule combined with
+ a 'req_rdp_cookie_cnt' ACL.
+
+ This algorithm is static by default, which means that
+ changing a server's weight on the fly will have no effect,
+ but this can be changed using "hash-type".
+
+ See also the rdp_cookie pattern fetch function.
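+
+ Example (a sketch following the requirements above; addresses are
+ placeholders, and RDP_COOKIE is a predefined ACL) :
+      listen tse-farm
+          bind :3389
+          mode tcp
+          tcp-request inspect-delay 5s
+          tcp-request content accept if RDP_COOKIE
+          balance rdp-cookie
+          server srv1 10.0.0.1:3389
+          server srv2 10.0.0.2:3389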
+
+ <arguments> is an optional list of arguments which may be needed by some
+ algorithms. Right now, only "url_param" and "uri" support an
+ optional argument.
+
+ The load balancing algorithm of a backend is set to roundrobin when no other
+ algorithm, mode nor option have been set. The algorithm may only be set once
+ for each backend.
+
+ Examples :
+ balance roundrobin
+ balance url_param userid
+ balance url_param session_id check_post 64
+ balance hdr(User-Agent)
+ balance hdr(host)
+ balance hdr(Host) use_domain_only
+
+ Note: the following caveats and limitations on using the "check_post"
+ extension with "url_param" must be considered :
+
+ - all POST requests are eligible for consideration, because there is no way
+ to determine if the parameters will be found in the body or entity which
+ may contain binary data. Therefore another method may be required to
+ restrict consideration of POST requests that have no URL parameters in
+ the body. (see acl reqideny http_end)
+
+ - using a <max_wait> value larger than the request buffer size does not
+ make sense and is useless. The buffer size is set at build time, and
+ defaults to 16 kB.
+
+ - Content-Encoding is not supported; the parameter search will probably
+ fail, and load balancing will fall back to Round Robin.
+
+ - Expect: 100-continue is not supported, load balancing will fall back to
+ Round Robin.
+
+ - Transfer-Encoding (RFC2616 3.6.1) is only supported in the first chunk.
+ If the entire parameter value is not present in the first chunk, the
+ selection of server is undefined (actually, defined by how little
+ actually appeared in the first chunk).
+
+ - This feature does not support generation of a 100, 411 or 501 response.
+
+ - In some cases, requesting "check_post" MAY attempt to scan the entire
+ contents of a message body. Scanning normally terminates when linear
+ white space or control characters are found, indicating the end of what
+ might be a URL parameter list. This is probably not a concern with SGML
+ type message bodies.
+
+ See also : "dispatch", "cookie", "transparent", "hash-type" and "http_proxy".
+
+
+bind [<address>]:<port_range> [, ...] [param*]
+bind /<path> [, ...] [param*]
+ Define one or several listening addresses and/or ports in a frontend.
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | no
+ Arguments :
+ <address> is optional and can be a host name, an IPv4 address, an IPv6
+ address, or '*'. It designates the address the frontend will
+ listen on. If unset, all IPv4 addresses of the system will be
+ listened on. The same will apply for '*' or the system's
+ special address "0.0.0.0". The IPv6 equivalent is '::'.
+ Optionally, an address family prefix may be used before the
+ address to force the family regardless of the address format,
+ which can be useful to specify a path to a unix socket with
+ no slash ('/'). Currently supported prefixes are :
+ - 'ipv4@' -> address is always IPv4
+ - 'ipv6@' -> address is always IPv6
+ - 'unix@' -> address is a path to a local unix socket
+ - 'abns@' -> address is in abstract namespace (Linux only).
+ Note: since abstract sockets are not "rebindable", they
+ do not cope well with multi-process mode during
+ soft-restart, so it is better to avoid them if
+ nbproc is greater than 1. The effect is that if the
+ new process fails to start, only one of the old ones
+ will be able to rebind to the socket.
+ - 'fd@<n>' -> use file descriptor <n> inherited from the
+ parent. The fd must be bound and may or may not already
+ be listening.
+ You may want to reference some environment variables in the
+ address parameter, see section 2.3 about environment
+ variables.
+
+ <port_range> is either a unique TCP port, or a port range for which the
+ proxy will accept connections for the IP address specified
+ above. The port is mandatory for TCP listeners. Note that in
+ the case of an IPv6 address, the port is always the number
+ after the last colon (':'). A range can either be :
+ - a numerical port (ex: '80')
+ - a dash-delimited ports range explicitly stating the lower
+ and upper bounds (ex: '2000-2100') which are included in
+ the range.
+
+ Particular care must be taken with port ranges, because
+ every <address:port> couple consumes one socket (= a file
+ descriptor), so it's easy to consume lots of descriptors
+ with a simple range, and to run out of sockets. Also, each
+ <address:port> couple must be used only once among all
+ instances running on a same system. Please note that binding
+ to ports lower than 1024 generally requires particular
+ privileges to start the program, which are independent of
+ the 'uid' parameter.
+
+ <path> is a UNIX socket path beginning with a slash ('/'). This is an
+ alternative to the TCP listening port. Haproxy will then
+ receive UNIX connections on the socket located at this place.
+ The path must begin with a slash and by default is absolute.
+ It can be relative to the prefix defined by "unix-bind" in
+ the global section. Note that the total length of the prefix
+ followed by the socket path cannot exceed some system limits
+ for UNIX sockets, which commonly are set to 107 characters.
+
+ <param*> is a list of parameters common to all sockets declared on the
+ same line. These numerous parameters depend on OS and build
+ options and have a complete section dedicated to them. Please
+ refer to section 5 for more details.
+
+ It is possible to specify a list of address:port combinations delimited by
+ commas. The frontend will then listen on all of these addresses. There is no
+ fixed limit to the number of addresses and ports which can be listened on in
+ a frontend, nor is there a limit to the number of "bind" statements
+ in a frontend.
+
+ Example :
+ listen http_proxy
+ bind :80,:443
+ bind 10.0.0.1:10080,10.0.0.1:10443
+ bind /var/run/ssl-frontend.sock user root mode 600 accept-proxy
+
+ listen http_https_proxy
+ bind :80
+ bind :443 ssl crt /etc/haproxy/site.pem
+
+ listen http_https_proxy_explicit
+ bind ipv6@:80
+ bind ipv4@public_ssl:443 ssl crt /etc/haproxy/site.pem
+ bind unix@ssl-frontend.sock user root mode 600 accept-proxy
+
+ listen external_bind_app1
+ bind "fd@${FD_APP1}"
+
+ Note: regarding Linux's abstract namespace sockets, HAProxy uses the whole
+ sun_path length for the address length. Some other programs
+ such as socat use the string length only by default. Pass the option
+ ",unix-tightsocklen=0" to any abstract socket definition in socat to
+ make it compatible with HAProxy's.
+
+ See also : "source", "option forwardfor", "unix-bind" and the PROXY protocol
+ documentation, and section 5 about bind options.
+
+
+bind-process [ all | odd | even | <number 1-64>[-<number 1-64>] ] ...
+ Limit visibility of an instance to a certain set of processes numbers.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments :
+ all All processes will see this instance. This is the default. It
+ may be used to override a default value.
+
+ odd This instance will be enabled on processes 1,3,5,...63. This
+ option may be combined with other numbers.
+
+ even This instance will be enabled on processes 2,4,6,...64. This
+ option may be combined with other numbers. Do not use it
+ with less than 2 processes otherwise some instances might be
+ missing from all processes.
+
+ number The instance will be enabled on this process number or range,
+ whose values must all be between 1 and 32 or 64 depending on
+ the machine's word size. If a proxy is bound to process
+ numbers greater than the configured global.nbproc, it will
+ either be forced to process #1 if a single process was
+ specified, or to all processes otherwise.
+
+ This keyword limits binding of certain instances to certain processes. This
+ is useful in order not to have too many processes listening to the same
+ ports. For instance, on a dual-core machine, it might make sense to set
+ 'nbproc 2' in the global section, then distribute the listeners among 'odd'
+ and 'even' instances.
+
+ At the moment, it is not possible to reference more than 32 or 64 processes
+ using this keyword, but this should be more than enough for most setups.
+ Please note that 'all' really means all processes regardless of the machine's
+ word size, and is not limited to the first 32 or 64.
+
+ Each "bind" line may further be limited to a subset of the proxy's processes,
+ please consult the "process" bind keyword in section 5.1.
+
+ When a frontend has no explicit "bind-process" line, it tries to bind to all
+ the processes referenced by its "bind" lines. That means that frontends can
+ easily adapt to their listeners' processes.
+
+ If some backends are referenced by frontends bound to other processes, the
+ backend automatically inherits the frontend's processes.
+
+ Example :
+ listen app_ip1
+ bind 10.0.0.1:80
+ bind-process odd
+
+ listen app_ip2
+ bind 10.0.0.2:80
+ bind-process even
+
+ listen management
+ bind 10.0.0.3:80
+ bind-process 1 2 3 4
+
+ listen management
+ bind 10.0.0.4:80
+ bind-process 1-4
+
+ See also : "nbproc" in global section, and "process" in section 5.1.
+
+
+block { if | unless } <condition>
+ Block a layer 7 request if/unless a condition is matched
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
+
+ The HTTP request will be blocked very early in the layer 7 processing
+ if/unless <condition> is matched. A 403 error will be returned if the request
+ is blocked. The condition has to reference ACLs (see section 7). This is
+ typically used to deny access to certain sensitive resources if some
+ conditions are met or not met. There is no fixed limit to the number of
+ "block" statements per instance.
+
+ Example:
+ acl invalid_src src 0.0.0.0/7 224.0.0.0/3
+ acl invalid_src src_port 0:1023
+ acl local_dst hdr(host) -i localhost
+ block if invalid_src || local_dst
+
+ See section 7 about ACL usage.
+
+
+capture cookie <name> len <length>
+ Capture and log a cookie in the request and in the response.
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | no
+ Arguments :
+ <name> is the beginning of the name of the cookie to capture. In order
+ to match the exact name, simply suffix the name with an equal
+ sign ('='). The full name will appear in the logs, which is
+ useful with application servers which adjust both the cookie name
+ and value (eg: ASPSESSIONXXXXX).
+
+ <length> is the maximum number of characters to report in the logs, which
+ include the cookie name, the equal sign and the value, all in the
+ standard "name=value" form. The string will be truncated on the
+ right if it exceeds <length>.
+
+ Only the first cookie is captured. Both the "cookie" request headers and the
+ "set-cookie" response headers are monitored. This is particularly useful to
+ check for application bugs causing session crossing or stealing between
+ users, because generally the user's cookies can only change on a login page.
+
+ When the cookie was not presented by the client, the associated log column
+ will report "-". When a request does not cause a cookie to be assigned by the
+ server, a "-" is reported in the response column.
+
+ The capture is performed in the frontend only because it is necessary that
+ the log format does not change for a given frontend depending on the
+ backends. This may change in the future. Note that there can be only one
+ "capture cookie" statement in a frontend. The maximum capture length is set
+ by the global "tune.http.cookielen" setting and defaults to 63 characters. It
+ is not possible to specify a capture in a "defaults" section.
+
+ Example:
+ capture cookie ASPSESSION len 32
+
+ See also : "capture request header", "capture response header" as well as
+ section 8 about logging.
+
+
+capture request header <name> len <length>
+ Capture and log the last occurrence of the specified request header.
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | no
+ Arguments :
+ <name> is the name of the header to capture. The header names are not
+ case-sensitive, but it is a common practice to write them as they
+ appear in the requests, with the first letter of each word in
+ upper case. The header name will not appear in the logs, only the
+ value is reported, but the position in the logs is respected.
+
+ <length> is the maximum number of characters to extract from the value and
+ report in the logs. The string will be truncated on the right if
+ it exceeds <length>.
+
+ The complete value of the last occurrence of the header is captured. The
+ value will be added to the logs between braces ('{}'). If multiple headers
+ are captured, they will be delimited by a vertical bar ('|') and will appear
+ in the same order they were declared in the configuration. Non-existent
+ headers will be logged just as an empty string. Common uses for request
+ header captures include the "Host" field in virtual hosting environments, the
+ "Content-length" when uploads are supported, "User-agent" to quickly
+ differentiate between real users and robots, and "X-Forwarded-For" in proxied
+ environments to find where the request came from.
+
+ Note that when capturing headers such as "User-agent", some spaces may be
+ logged, making the log analysis more difficult. Thus be careful about what
+ you log if you know your log parser is not smart enough to rely on the
+ braces.
+
+ There is no limit to the number of captured request headers nor to their
+ length, though it is wise to keep them low to limit memory usage per session.
+ In order to keep log format consistent for a same frontend, header captures
+ can only be declared in a frontend. It is not possible to specify a capture
+ in a "defaults" section.
+
+ Example:
+ capture request header Host len 15
+ capture request header X-Forwarded-For len 15
+ capture request header Referer len 15
+
+ See also : "capture cookie", "capture response header" as well as section 8
+ about logging.
+
+
+capture response header <name> len <length>
+ Capture and log the last occurrence of the specified response header.
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | no
+ Arguments :
+ <name> is the name of the header to capture. The header names are not
+ case-sensitive, but it is a common practice to write them as they
+ appear in the response, with the first letter of each word in
+ upper case. The header name will not appear in the logs, only the
+ value is reported, but the position in the logs is respected.
+
+ <length> is the maximum number of characters to extract from the value and
+ report in the logs. The string will be truncated on the right if
+ it exceeds <length>.
+
+ The complete value of the last occurrence of the header is captured. The
+ result will be added to the logs between braces ('{}') after the captured
+ request headers. If multiple headers are captured, they will be delimited by
+ a vertical bar ('|') and will appear in the same order they were declared in
+ the configuration. Non-existent headers will be logged just as an empty
+ string. Common uses for response header captures include the "Content-length"
+ header which indicates how many bytes are expected to be returned, the
+ "Location" header to track redirections.
+
+ There is no limit to the number of captured response headers nor to their
+ length, though it is wise to keep them low to limit memory usage per session.
+ In order to keep log format consistent for a same frontend, header captures
+ can only be declared in a frontend. It is not possible to specify a capture
+ in a "defaults" section.
+
+ Example:
+ capture response header Content-length len 9
+ capture response header Location len 15
+
+ See also : "capture cookie", "capture request header" as well as section 8
+ about logging.
+
+
+clitimeout <timeout> (deprecated)
+ Set the maximum inactivity time on the client side.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments :
+ <timeout> is the timeout value, specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
+
+ The inactivity timeout applies when the client is expected to acknowledge or
+ send data. In HTTP mode, this timeout is particularly important to consider
+ during the first phase, when the client sends the request, and during the
+ response while it is reading data sent by the server. The value is specified
+ in milliseconds by default, but can be in any other unit if the number is
+ suffixed by the unit, as specified at the top of this document. In TCP mode
+ (and to a lesser extent, in HTTP mode), it is highly recommended that the
+ client timeout remains equal to the server timeout in order to avoid complex
+ situations to debug. It is a good practice to cover one or several TCP packet
+ losses by specifying timeouts that are slightly above multiples of 3 seconds
+ (eg: 4 or 5 seconds).
+
+ This parameter is specific to frontends, but can be specified once for all in
+ "defaults" sections. This is in fact one of the easiest solutions not to
+ forget about it. An unspecified timeout results in an infinite timeout, which
+ is not recommended. Such a usage is accepted and works but reports a warning
+ during startup because it may result in an accumulation of expired sessions in
+ the system if the system's timeouts are not configured either.
+
+ This parameter is provided for compatibility but is currently deprecated.
+ Please use "timeout client" instead.
+
+ See also : "timeout client", "timeout http-request", "timeout server", and
+ "srvtimeout".
+
+compression algo <algorithm> ...
+compression type <mime type> ...
+compression offload
+ Enable HTTP compression.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments :
+ algo is followed by the list of supported compression algorithms.
+ type is followed by the list of MIME types that will be compressed.
+ offload makes haproxy work as a compression offloader only (see notes).
+
+ The currently supported algorithms are :
+ identity this is mostly for debugging, and it was useful for developing
+ the compression feature. Identity does not apply any change on
+ data.
+
+ gzip applies gzip compression. This setting is only available when
+ support for zlib or libslz was built in.
+
+ deflate same as "gzip", but with deflate algorithm and zlib format.
+ Note that this algorithm has ambiguous support on many
+ browsers and no support at all from recent ones. It is
+ strongly recommended not to use it for anything else than
+ experimentation. This setting is only available when support
+ for zlib or libslz was built in.
+
+ raw-deflate same as "deflate" without the zlib wrapper, and used as an
+ alternative when the browser wants "deflate". All major
+ browsers understand it and despite violating the standards,
+ it is known to work better than "deflate", at least on MSIE
+ and some versions of Safari. Do not use it in conjunction
+ with "deflate", use either one or the other since both react
+ to the same Accept-Encoding token. This setting is only
+ available when support for zlib or libslz was built in.
+
+ Compression will be activated depending on the Accept-Encoding request
+ header. The "identity" algorithm does not take this header into account.
+ If the backend servers support HTTP compression, these directives
+ will be no-ops: haproxy will see the compressed response and will not
+ compress again. If the backend servers do not support HTTP compression and
+ there is an Accept-Encoding header in the request, haproxy will compress the
+ matching response.
+
+ The "offload" setting makes haproxy remove the Accept-Encoding header to
+ prevent backend servers from compressing responses. It is strongly
+ recommended not to do this because this means that all the compression work
+ will be done on the single point where haproxy is located. However in some
+ deployment scenarios, haproxy may be installed in front of a buggy gateway
+ with broken HTTP compression implementation which can't be turned off.
+ In that case haproxy can be used to prevent that gateway from emitting
+ invalid payloads. In this case, simply removing the header in the
+ configuration does not work because it applies before the header is parsed,
+ so that prevents haproxy from compressing. The "offload" setting should
+ then be used for such scenarios. Note: for now, the "offload" setting is
+ ignored when set in a defaults section.
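+
+  A minimal sketch of such an offloading setup (backend and server names and
+  the address below are only illustrative) :
+
+      backend legacy_gw
+          compression algo gzip
+          compression type text/html text/plain
+          compression offload
+          server gw1 192.0.2.10:80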
+
+ Compression is disabled when:
+ * the request does not advertise a supported compression algorithm in the
+ "Accept-Encoding" header
+ * the response message is not HTTP/1.1
+ * HTTP status code is not 200
+ * response header "Transfer-Encoding" contains "chunked" (Temporary
+ Workaround)
+ * response contain neither a "Content-Length" header nor a
+ "Transfer-Encoding" whose last value is "chunked"
+ * response contains a "Content-Type" header whose first value starts with
+ "multipart"
+ * the response contains the "no-transform" value in the "Cache-control"
+ header
+ * User-Agent matches "Mozilla/4" unless it is MSIE 6 with XP SP2, or MSIE 7
+ and later
+ * The response contains a "Content-Encoding" header, indicating that the
+ response is already compressed (see compression offload)
+
+ Note: The compression does not rewrite Etag headers, and does not emit the
+ Warning header.
+
+ Examples :
+ compression algo gzip
+ compression type text/html text/plain
+
+contimeout <timeout> (deprecated)
+ Set the maximum time to wait for a connection attempt to a server to succeed.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments :
+    <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
+
+ If the server is located on the same LAN as haproxy, the connection should be
+ immediate (less than a few milliseconds). Anyway, it is a good practice to
+ cover one or several TCP packet losses by specifying timeouts that are
+ slightly above multiples of 3 seconds (eg: 4 or 5 seconds). By default, the
+ connect timeout also presets the queue timeout to the same value if this one
+ has not been specified. Historically, the contimeout was also used to set the
+ tarpit timeout in a listen section, which is not possible in a pure frontend.
+
+ This parameter is specific to backends, but can be specified once for all in
+ "defaults" sections. This is in fact one of the easiest solutions not to
+ forget about it. An unspecified timeout results in an infinite timeout, which
+ is not recommended. Such a usage is accepted and works but reports a warning
+  during startup because it may result in accumulation of failed sessions in
+ the system if the system's timeouts are not configured either.
+
+ This parameter is provided for backwards compatibility but is currently
+ deprecated. Please use "timeout connect", "timeout queue" or "timeout tarpit"
+ instead.
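+
+  As a hint, an equivalent configuration using the non-deprecated keywords
+  could look like this (the timeout values are only illustrative) :
+
+      defaults
+          timeout connect 5s
+          timeout queue   30s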
+
+  See also : "timeout connect", "timeout queue", "timeout tarpit",
+             "timeout server", and "srvtimeout".
+
+
+cookie <name> [ rewrite | insert | prefix ] [ indirect ] [ nocache ]
+ [ postonly ] [ preserve ] [ httponly ] [ secure ]
+ [ domain <domain> ]* [ maxidle <idle> ] [ maxlife <life> ]
+ Enable cookie-based persistence in a backend.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments :
+ <name> is the name of the cookie which will be monitored, modified or
+ inserted in order to bring persistence. This cookie is sent to
+ the client via a "Set-Cookie" header in the response, and is
+ brought back by the client in a "Cookie" header in all requests.
+ Special care should be taken to choose a name which does not
+ conflict with any likely application cookie. Also, if the same
+ backends are subject to be used by the same clients (eg:
+ HTTP/HTTPS), care should be taken to use different cookie names
+ between all backends if persistence between them is not desired.
+
+ rewrite This keyword indicates that the cookie will be provided by the
+ server and that haproxy will have to modify its value to set the
+ server's identifier in it. This mode is handy when the management
+ of complex combinations of "Set-cookie" and "Cache-control"
+ headers is left to the application. The application can then
+ decide whether or not it is appropriate to emit a persistence
+ cookie. Since all responses should be monitored, this mode only
+ works in HTTP close mode. Unless the application behaviour is
+ very complex and/or broken, it is advised not to start with this
+ mode for new deployments. This keyword is incompatible with
+ "insert" and "prefix".
+
+ insert This keyword indicates that the persistence cookie will have to
+              be inserted by haproxy in server responses if the client did not
+              already have a cookie that would have permitted it to access this
+              server. When used without the "preserve" option, if the server
+              emits a cookie with the same name, it will be removed before
+ processing. For this reason, this mode can be used to upgrade
+ existing configurations running in the "rewrite" mode. The cookie
+ will only be a session cookie and will not be stored on the
+ client's disk. By default, unless the "indirect" option is added,
+ the server will see the cookies emitted by the client. Due to
+ caching effects, it is generally wise to add the "nocache" or
+ "postonly" keywords (see below). The "insert" keyword is not
+ compatible with "rewrite" and "prefix".
+
+ prefix This keyword indicates that instead of relying on a dedicated
+ cookie for the persistence, an existing one will be completed.
+ This may be needed in some specific environments where the client
+ does not support more than one single cookie and the application
+ already needs it. In this case, whenever the server sets a cookie
+ named <name>, it will be prefixed with the server's identifier
+ and a delimiter. The prefix will be removed from all client
+ requests so that the server still finds the cookie it emitted.
+ Since all requests and responses are subject to being modified,
+ this mode requires the HTTP close mode. The "prefix" keyword is
+ not compatible with "rewrite" and "insert". Note: it is highly
+ recommended not to use "indirect" with "prefix", otherwise server
+ cookie updates would not be sent to clients.
+
+ indirect When this option is specified, no cookie will be emitted to a
+ client which already has a valid one for the server which has
+ processed the request. If the server sets such a cookie itself,
+ it will be removed, unless the "preserve" option is also set. In
+ "insert" mode, this will additionally remove cookies from the
+ requests transmitted to the server, making the persistence
+ mechanism totally transparent from an application point of view.
+ Note: it is highly recommended not to use "indirect" with
+ "prefix", otherwise server cookie updates would not be sent to
+ clients.
+
+ nocache This option is recommended in conjunction with the insert mode
+ when there is a cache between the client and HAProxy, as it
+ ensures that a cacheable response will be tagged non-cacheable if
+ a cookie needs to be inserted. This is important because if all
+ persistence cookies are added on a cacheable home page for
+              instance, then all customers will fetch the page from an
+ outer cache and will all share the same persistence cookie,
+ leading to one server receiving much more traffic than others.
+ See also the "insert" and "postonly" options.
+
+ postonly This option ensures that cookie insertion will only be performed
+ on responses to POST requests. It is an alternative to the
+ "nocache" option, because POST responses are not cacheable, so
+ this ensures that the persistence cookie will never get cached.
+ Since most sites do not need any sort of persistence before the
+ first POST which generally is a login request, this is a very
+              efficient method to optimize caching without risking finding a
+ persistence cookie in the cache.
+ See also the "insert" and "nocache" options.
+
+ preserve This option may only be used with "insert" and/or "indirect". It
+ allows the server to emit the persistence cookie itself. In this
+ case, if a cookie is found in the response, haproxy will leave it
+ untouched. This is useful in order to end persistence after a
+ logout request for instance. For this, the server just has to
+ emit a cookie with an invalid value (eg: empty) or with a date in
+ the past. By combining this mechanism with the "disable-on-404"
+ check option, it is possible to perform a completely graceful
+ shutdown because users will definitely leave the server after
+ they logout.
+
+ httponly This option tells haproxy to add an "HttpOnly" cookie attribute
+ when a cookie is inserted. This attribute is used so that a
+ user agent doesn't share the cookie with non-HTTP components.
+ Please check RFC6265 for more information on this attribute.
+
+ secure This option tells haproxy to add a "Secure" cookie attribute when
+ a cookie is inserted. This attribute is used so that a user agent
+ never emits this cookie over non-secure channels, which means
+ that a cookie learned with this flag will be presented only over
+ SSL/TLS connections. Please check RFC6265 for more information on
+ this attribute.
+
+ domain This option allows to specify the domain at which a cookie is
+ inserted. It requires exactly one parameter: a valid domain
+ name. If the domain begins with a dot, the browser is allowed to
+ use it for any host ending with that name. It is also possible to
+ specify several domain names by invoking this option multiple
+ times. Some browsers might have small limits on the number of
+ domains, so be careful when doing that. For the record, sending
+ 10 domains to MSIE 6 or Firefox 2 works as expected.
+
+ maxidle This option allows inserted cookies to be ignored after some idle
+ time. It only works with insert-mode cookies. When a cookie is
+ sent to the client, the date this cookie was emitted is sent too.
+ Upon further presentations of this cookie, if the date is older
+ than the delay indicated by the parameter (in seconds), it will
+ be ignored. Otherwise, it will be refreshed if needed when the
+ response is sent to the client. This is particularly useful to
+ prevent users who never close their browsers from remaining for
+ too long on the same server (eg: after a farm size change). When
+ this option is set and a cookie has no date, it is always
+ accepted, but gets refreshed in the response. This maintains the
+ ability for admins to access their sites. Cookies that have a
+ date in the future further than 24 hours are ignored. Doing so
+ lets admins fix timezone issues without risking kicking users off
+ the site.
+
+ maxlife This option allows inserted cookies to be ignored after some life
+ time, whether they're in use or not. It only works with insert
+ mode cookies. When a cookie is first sent to the client, the date
+ this cookie was emitted is sent too. Upon further presentations
+ of this cookie, if the date is older than the delay indicated by
+ the parameter (in seconds), it will be ignored. If the cookie in
+ the request has no date, it is accepted and a date will be set.
+ Cookies that have a date in the future further than 24 hours are
+ ignored. Doing so lets admins fix timezone issues without risking
+ kicking users off the site. Contrary to maxidle, this value is
+ not refreshed, only the first visit date counts. Both maxidle and
+              maxlife may be used at the same time. This is particularly useful
+ prevent users who never close their browsers from remaining for
+ too long on the same server (eg: after a farm size change). This
+ is stronger than the maxidle method in that it forces a
+ redispatch after some absolute delay.
+
+ There can be only one persistence cookie per HTTP backend, and it can be
+ declared in a defaults section. The value of the cookie will be the value
+ indicated after the "cookie" keyword in a "server" statement. If no cookie
+ is declared for a given server, the cookie is not set.
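+
+  For illustration, note that the cookie values come from the "server" lines
+  (the backend, server names and addresses below are only examples) :
+
+      backend app
+          cookie SRV insert indirect nocache
+          server s1 192.0.2.11:80 cookie s1
+          server s2 192.0.2.12:80 cookie s2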
+
+ Examples :
+ cookie JSESSIONID prefix
+ cookie SRV insert indirect nocache
+ cookie SRV insert postonly indirect
+ cookie SRV insert indirect nocache maxidle 30m maxlife 8h
+
+ See also : "balance source", "capture cookie", "server" and "ignore-persist".
+
+
+declare capture [ request | response ] len <length>
+ Declares a capture slot.
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | no
+ Arguments:
+ <length> is the length allowed for the capture.
+
+ This declaration is only available in the frontend or listen section, but the
+ reserved slot can be used in the backends. The "request" keyword allocates a
+ capture slot for use in the request, and "response" allocates a capture slot
+ for use in the response.
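+
+  As an illustration, a declared slot may then be fed by an "http-request
+  capture" rule referencing its id. Slot ids are assigned in declaration
+  order starting at 0, so "id 0" below refers to the slot declared above it :
+
+      frontend fe
+          declare capture request len 32
+          http-request capture req.hdr(Host) id 0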
+
+ See also: "capture-req", "capture-res" (sample converters),
+ "capture.req.hdr", "capture.res.hdr" (sample fetches),
+ "http-request capture" and "http-response capture".
+
+
+default-server [param*]
+ Change default options for a server in a backend
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments:
+ <param*> is a list of parameters for this server. The "default-server"
+ keyword accepts an important number of options and has a complete
+ section dedicated to it. Please refer to section 5 for more
+ details.
+
+ Example :
+ default-server inter 1000 weight 13
+
+ See also: "server" and section 5 about server options
+
+
+default_backend <backend>
+ Specify the backend to use when no "use_backend" rule has been matched.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments :
+ <backend> is the name of the backend to use.
+
+ When doing content-switching between frontend and backends using the
+ "use_backend" keyword, it is often useful to indicate which backend will be
+ used when no rule has matched. It generally is the dynamic backend which
+ will catch all undetermined requests.
+
+ Example :
+
+ use_backend dynamic if url_dyn
+ use_backend static if url_css url_img extension_img
+ default_backend dynamic
+
+ See also : "use_backend"
+
+
+description <string>
+ Describe a listen, frontend or backend.
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
+ Arguments : string
+
+  Allows a sentence to be added to describe the related object in the HAProxy
+  HTML stats page. The description will be printed on the right of the object
+  name it describes. Spaces in the <string> argument do not need to be
+  escaped with backslashes.
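+
+  A short free-form sentence is enough, for example :
+
+      frontend www
+          description Public HTTP entry point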
+
+
+disabled
+ Disable a proxy, frontend or backend.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments : none
+
+ The "disabled" keyword is used to disable an instance, mainly in order to
+ liberate a listening port or to temporarily disable a service. The instance
+ will still be created and its configuration will be checked, but it will be
+ created in the "stopped" state and will appear as such in the statistics. It
+ will not receive any traffic nor will it send any health-checks or logs. It
+ is possible to disable many instances at once by adding the "disabled"
+ keyword in a "defaults" section.
+
+ See also : "enabled"
+
+
+dispatch <address>:<port>
+ Set a default server address
+ May be used in sections : defaults | frontend | listen | backend
+ no | no | yes | yes
+ Arguments :
+
+ <address> is the IPv4 address of the default server. Alternatively, a
+ resolvable hostname is supported, but this name will be resolved
+ during start-up.
+
+    <port>    is a mandatory port specification. All connections will be sent
+ to this port, and it is not permitted to use port offsets as is
+ possible with normal servers.
+
+ The "dispatch" keyword designates a default server for use when no other
+ server can take the connection. In the past it was used to forward non
+ persistent connections to an auxiliary load balancer. Due to its simple
+  syntax, it has also been used for simple TCP relays. For clarity, it is
+  recommended not to use it, and to use the "server" directive instead.
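+
+  Example (the listener and the server address below are only illustrative) :
+
+      listen relay
+          bind :8000
+          dispatch 192.0.2.5:8000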
+
+ See also : "server"
+
+
+enabled
+ Enable a proxy, frontend or backend.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments : none
+
+ The "enabled" keyword is used to explicitly enable an instance, when the
+ defaults has been set to "disabled". This is very rarely used.
+
+ See also : "disabled"
+
+
+errorfile <code> <file>
+  Return a file's contents instead of errors generated by HAProxy
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments :
+ <code> is the HTTP status code. Currently, HAProxy is capable of
+ generating codes 200, 400, 403, 405, 408, 429, 500, 502, 503, and
+ 504.
+
+ <file> designates a file containing the full HTTP response. It is
+ recommended to follow the common practice of appending ".http" to
+ the filename so that people do not confuse the response with HTML
+ error pages, and to use absolute paths, since files are read
+ before any chroot is performed.
+
+ It is important to understand that this keyword is not meant to rewrite
+ errors returned by the server, but errors detected and returned by HAProxy.
+ This is why the list of supported errors is limited to a small set.
+
+ Code 200 is emitted in response to requests matching a "monitor-uri" rule.
+
+ The files are returned verbatim on the TCP socket. This allows any trick such
+ as redirections to another URL or site, as well as tricks to clean cookies,
+ force enable or disable caching, etc... The package provides default error
+ files returning the same contents as default errors.
+
+ The files should not exceed the configured buffer size (BUFSIZE), which
+ generally is 8 or 16 kB, otherwise they will be truncated. It is also wise
+ not to put any reference to local contents (eg: images) in order to avoid
+ loops between the client and HAProxy when all servers are down, causing an
+ error to be returned instead of an image. For better HTTP compliance, it is
+ recommended that all header lines end with CR-LF and not LF alone.
+
+ The files are read at the same time as the configuration and kept in memory.
+ For this reason, the errors continue to be returned even when the process is
+ chrooted, and no file change is considered while the process is running. A
+ simple method for developing those files consists in associating them to the
+ 403 status code and interrogating a blocked URL.
+
+ See also : "errorloc", "errorloc302", "errorloc303"
+
+ Example :
+ errorfile 400 /etc/haproxy/errorfiles/400badreq.http
+ errorfile 408 /dev/null # workaround Chrome pre-connect bug
+ errorfile 403 /etc/haproxy/errorfiles/403forbid.http
+ errorfile 503 /etc/haproxy/errorfiles/503sorry.http
+
+
+errorloc <code> <url>
+errorloc302 <code> <url>
+ Return an HTTP redirection to a URL instead of errors generated by HAProxy
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments :
+ <code> is the HTTP status code. Currently, HAProxy is capable of
+ generating codes 200, 400, 403, 408, 500, 502, 503, and 504.
+
+    <url>     is the exact contents of the "Location" header. It may contain
+ either a relative URI to an error page hosted on the same site,
+ or an absolute URI designating an error page on another site.
+ Special care should be given to relative URIs to avoid redirect
+ loops if the URI itself may generate the same error (eg: 500).
+
+ It is important to understand that this keyword is not meant to rewrite
+ errors returned by the server, but errors detected and returned by HAProxy.
+ This is why the list of supported errors is limited to a small set.
+
+ Code 200 is emitted in response to requests matching a "monitor-uri" rule.
+
+  Note that both keywords return the HTTP 302 status code, which tells the
+  client to fetch the designated URL using the same HTTP method. This can be
+  quite problematic in case of non-GET methods such as POST, because the URL
+  sent to the client might not be allowed for something other than GET. To
+  work around this problem, please use "errorloc303" which sends the HTTP 303
+  status code, indicating to the client that the URL must be fetched with a
+  GET request.
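+
+  Example (the error pages below are only illustrative) :
+
+      errorloc    503 /sorry.html
+      errorloc303 503 http://errors.example.com/503.html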
+
+ See also : "errorfile", "errorloc303"
+
+
+errorloc303 <code> <url>
+ Return an HTTP redirection to a URL instead of errors generated by HAProxy
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments :
+ <code> is the HTTP status code. Currently, HAProxy is capable of
+ generating codes 400, 403, 408, 500, 502, 503, and 504.
+
+    <url>     is the exact contents of the "Location" header. It may contain
+ either a relative URI to an error page hosted on the same site,
+ or an absolute URI designating an error page on another site.
+ Special care should be given to relative URIs to avoid redirect
+ loops if the URI itself may generate the same error (eg: 500).
+
+ It is important to understand that this keyword is not meant to rewrite
+ errors returned by the server, but errors detected and returned by HAProxy.
+ This is why the list of supported errors is limited to a small set.
+
+ Code 200 is emitted in response to requests matching a "monitor-uri" rule.
+
+  Note that this keyword returns the HTTP 303 status code, which tells the
+  client to fetch the designated URL using a GET request. This
+ solves the usual problems associated with "errorloc" and the 302 code. It is
+ possible that some very old browsers designed before HTTP/1.1 do not support
+ it, but no such problem has been reported till now.
+
+ See also : "errorfile", "errorloc", "errorloc302"
+
+
+email-alert from <emailaddr>
+ Declare the from email address to be used in both the envelope and header
+ of email alerts. This is the address that email alerts are sent from.
+ May be used in sections: defaults | frontend | listen | backend
+ yes | yes | yes | yes
+
+ Arguments :
+
+ <emailaddr> is the from email address to use when sending email alerts
+
+  Sending email alerts is enabled for the proxy only when "email-alert
+  mailers" and "email-alert to" are also set.
+
+ See also : "email-alert level", "email-alert mailers",
+ "email-alert myhostname", "email-alert to", section 3.6 about
+ mailers.
+
+
+email-alert level <level>
+ Declare the maximum log level of messages for which email alerts will be
+ sent. This acts as a filter on the sending of email alerts.
+ May be used in sections: defaults | frontend | listen | backend
+ yes | yes | yes | yes
+
+ Arguments :
+
+ <level> One of the 8 syslog levels:
+ emerg alert crit err warning notice info debug
+ The above syslog levels are ordered from lowest to highest.
+
+  The default level is "alert".
+
+  Sending email alerts is enabled for the proxy only when "email-alert from",
+  "email-alert mailers" and "email-alert to" are also set.
+
+ Alerts are sent when :
+
+ * An un-paused server is marked as down and <level> is alert or lower
+ * A paused server is marked as down and <level> is notice or lower
+ * A server is marked as up or enters the drain state and <level>
+ is notice or lower
+ * "option log-health-checks" is enabled, <level> is info or lower,
+ and a health check status update occurs
+
+ See also : "email-alert from", "email-alert mailers",
+ "email-alert myhostname", "email-alert to",
+ section 3.6 about mailers.
+
+
+email-alert mailers <mailersect>
+ Declare the mailers to be used when sending email alerts
+ May be used in sections: defaults | frontend | listen | backend
+ yes | yes | yes | yes
+
+ Arguments :
+
+ <mailersect> is the name of the mailers section to send email alerts.
+
+  Sending email alerts is enabled for the proxy only when "email-alert from"
+  and "email-alert to" are also set.
+
+ See also : "email-alert from", "email-alert level", "email-alert myhostname",
+ "email-alert to", section 3.6 about mailers.
+
+
+email-alert myhostname <hostname>
+  Declare the hostname to be used when communicating with mailers.
+ May be used in sections: defaults | frontend | listen | backend
+ yes | yes | yes | yes
+
+ Arguments :
+
+ <hostname> is the hostname to use when communicating with mailers
+
+  By default the system's hostname is used.
+
+  Sending email alerts is enabled for the proxy only when "email-alert from",
+  "email-alert mailers" and "email-alert to" are also set.
+
+ See also : "email-alert from", "email-alert level", "email-alert mailers",
+ "email-alert to", section 3.6 about mailers.
+
+
+email-alert to <emailaddr>
+  Declare both the recipient address in the envelope and the To: address in
+  the header of email alerts. This is the address that email alerts are sent
+  to.
+ May be used in sections: defaults | frontend | listen | backend
+ yes | yes | yes | yes
+
+ Arguments :
+
+ <emailaddr> is the to email address to use when sending email alerts
+
+  Sending email alerts is enabled for the proxy only when "email-alert
+  mailers" and "email-alert from" are also set.
+
+ See also : "email-alert from", "email-alert level", "email-alert mailers",
+ "email-alert myhostname", section 3.6 about mailers.
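+
+  Putting the email-alert directives together (the mailers section name,
+  addresses and server below are only examples) :
+
+      mailers mymailers
+          mailer smtp1 192.0.2.25:587
+
+      backend mybackend
+          email-alert mailers mymailers
+          email-alert from haproxy@example.com
+          email-alert to   admin@example.com
+          email-alert level notice
+          server srv1 192.0.2.30:80 check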
+
+
+force-persist { if | unless } <condition>
+ Declare a condition to force persistence on down servers
+ May be used in sections: defaults | frontend | listen | backend
+ no | yes | yes | yes
+
+  By default, requests are not dispatched to down servers. It is possible to
+  force this using "option persist", but it is unconditional and redispatches
+  to a valid server if "option redispatch" is set. That leaves very few
+  possibilities to force some requests to reach a server which is
+  artificially marked down for maintenance operations.
+
+ The "force-persist" statement allows one to declare various ACL-based
+ conditions which, when met, will cause a request to ignore the down status of
+ a server and still try to connect to it. That makes it possible to start a
+  server which still replies an error to health checks, and run a specially
+ configured browser to test the service. Among the handy methods, one could
+ use a specific source IP address, or a specific cookie. The cookie also has
+ the advantage that it can easily be added/removed on the browser from a test
+ page. Once the service is validated, it is then possible to open the service
+ to the world by returning a valid response to health checks.
+
+ The forced persistence is enabled when an "if" condition is met, or unless an
+ "unless" condition is met. The final redispatch is always disabled when this
+ is used.
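+
+  Example restricting forced persistence to testers coming from a given
+  network (the network below is only illustrative) :
+
+      force-persist if { src 192.0.2.0/24 }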
+
+ See also : "option redispatch", "ignore-persist", "persist",
+ and section 7 about ACL usage.
+
+
+fullconn <conns>
+ Specify at what backend load the servers will reach their maxconn
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments :
+ <conns> is the number of connections on the backend which will make the
+ servers use the maximal number of connections.
+
+ When a server has a "maxconn" parameter specified, it means that its number
+ of concurrent connections will never go higher. Additionally, if it has a
+ "minconn" parameter, it indicates a dynamic limit following the backend's
+ load. The server will then always accept at least <minconn> connections,
+ never more than <maxconn>, and the limit will be on the ramp between both
+ values when the backend has less than <conns> concurrent connections. This
+ makes it possible to limit the load on the servers during normal loads, but
+ push it further for important loads without overloading the servers during
+ exceptional loads.
+
+ Since it's hard to get this value right, haproxy automatically sets it to
+ 10% of the sum of the maxconns of all frontends that may branch to this
+ backend (based on "use_backend" and "default_backend" rules). That way it's
+  safe to leave it unset. However, "use_backend" rules involving dynamic
+  names are not counted since there is no way to know whether they could
+  match or not.
+
+ Example :
+ # The servers will accept between 100 and 1000 concurrent connections each
+ # and the maximum of 1000 will be reached when the backend reaches 10000
+ # connections.
+ backend dynamic
+ fullconn 10000
+ server srv1 dyn1:80 minconn 100 maxconn 1000
+ server srv2 dyn2:80 minconn 100 maxconn 1000
+
+ See also : "maxconn", "server"
+
+
+grace <time>
+ Maintain a proxy operational for some time after a soft stop
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments :
+ <time> is the time (by default in milliseconds) for which the instance
+ will remain operational with the frontend sockets still listening
+ when a soft-stop is received via the SIGUSR1 signal.
+
+ This may be used to ensure that the services disappear in a certain order.
+ This was designed so that frontends which are dedicated to monitoring by an
+ external equipment fail immediately while other ones remain up for the time
+ needed by the equipment to detect the failure.
+
+ Note that currently, there is very little benefit in using this parameter,
+ and it may in fact complicate the soft-reconfiguration process more than
+ simplify it.
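+
+  If used nonetheless, a sketch of such an ordered shutdown could look like
+  this (ports and times are only illustrative) :
+
+      frontend monitor
+          bind :8888
+          monitor-uri /up
+          grace 0
+
+      frontend www
+          bind :80
+          grace 10s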
+
+
+hash-type <method> <function> <modifier>
+ Specify a method to use for mapping hashes to servers
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments :
+ <method> is the method used to select a server from the hash computed by
+ the <function> :
+
+ map-based the hash table is a static array containing all alive servers.
+ The hashes will be very smooth, will consider weights, but
+ will be static in that weight changes while a server is up
+ will be ignored. This means that there will be no slow start.
+ Also, since a server is selected by its position in the array,
+ most mappings are changed when the server count changes. This
+ means that when a server goes up or down, or when a server is
+ added to a farm, most connections will be redistributed to
+ different servers. This can be inconvenient with caches for
+ instance.
+
+ consistent the hash table is a tree filled with many occurrences of each
+ server. The hash key is looked up in the tree and the closest
+ server is chosen. This hash is dynamic, it supports changing
+ weights while the servers are up, so it is compatible with the
+ slow start feature. It has the advantage that when a server
+ goes up or down, only its associations are moved. When a
+                 server is added to the farm, only a small part of the mappings
+ are redistributed, making it an ideal method for caches.
+ However, due to its principle, the distribution will never be
+ very smooth and it may sometimes be necessary to adjust a
+ server's weight or its ID to get a more balanced distribution.
+ In order to get the same distribution on multiple load
+ balancers, it is important that all servers have the exact
+ same IDs. Note: consistent hash uses sdbm and avalanche if no
+ hash function is specified.
+
+ <function> is the hash function to be used :
+
+ sdbm this function was created initially for sdbm (a public-domain
+ reimplementation of ndbm) database library. It was found to do
+ well in scrambling bits, causing better distribution of the keys
+ and fewer splits. It also happens to be a good general hashing
+ function with good distribution, unless the total server weight
+ is a multiple of 64, in which case applying the avalanche
+ modifier may help.
+
+ djb2 this function was first proposed by Dan Bernstein many years ago
+ on comp.lang.c. Studies have shown that for certain workloads this
+ function provides a better distribution than sdbm. It generally
+ works well with text-based inputs though it can perform extremely
+ poorly with numeric-only input or when the total server weight is
+ a multiple of 33, unless the avalanche modifier is also used.
+
+ wt6 this function was designed for haproxy while testing other
+ functions in the past. It is not as smooth as the other ones, but
+ is much less sensitive to the input data set or to the number of
+ servers. It can make sense as an alternative to sdbm+avalanche or
+ djb2+avalanche for consistent hashing or when hashing on numeric
+ data such as a source IP address or a visitor identifier in a URL
+ parameter.
+
+ crc32 this is the most common CRC32 implementation as used in Ethernet,
+ gzip, PNG, etc. It is slower than the other ones but may provide
+ a better distribution or less predictable results especially when
+ used on strings.
+
+ <modifier> indicates an optional method applied after hashing the key :
+
+ avalanche This directive indicates that the result from the hash
+ function above should not be used in its raw form but that
+ a 4-byte full avalanche hash must be applied first. The
+ purpose of this step is to mix the resulting bits from the
+ previous hash in order to avoid any undesired effect when
+ the input contains some limited values or when the number of
+ servers is a multiple of one of the hash's components (64
+ for SDBM, 33 for DJB2). Enabling avalanche tends to make the
+ result less predictable, but it's also not as smooth as when
+ using the original function. Some testing might be needed
+ with some workloads. This hash is one of the many proposed
+ by Bob Jenkins.
+
+ The default hash type is "map-based" and is recommended for most usages. The
+ default function is "sdbm", the selection of a function should be based on
+ the range of the values being hashed.
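+
+  For instance, a cache farm could be set up with consistent hashing on the
+  URI as follows (backend and server names are illustrative) :
+
+        backend bk_cache
+            balance uri
+            hash-type consistent sdbm avalanche
+            server cache1 192.168.0.10:80 check
+            server cache2 192.168.0.11:80 check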
+
+ See also : "balance", "server"
+
+
+http-check disable-on-404
+ Enable a maintenance mode upon HTTP/404 response to health-checks
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments : none
+
+ When this option is set, a server which returns an HTTP code 404 will be
+ excluded from further load-balancing, but will still receive persistent
+ connections. This provides a very convenient method for Web administrators
+ to perform a graceful shutdown of their servers. It is also important to note
+ that a server which is detected as failed while it was in this mode will not
+ generate an alert, just a notice. If the server responds 2xx or 3xx again, it
+ will immediately be reinserted into the farm. The status on the stats page
+ reports "NOLB" for a server in this mode. It is important to note that this
+ option only works in conjunction with the "httpchk" option. If this option
+ is used with "http-check expect", then it has precedence over it so that 404
+ responses will still be considered as soft-stop.
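+
+  For example, a graceful-shutdown setup could look like this (the check URI
+  and server names are illustrative) :
+
+        backend bk_web
+            option httpchk GET /health
+            http-check disable-on-404
+            server web1 10.0.0.1:80 check
+            server web2 10.0.0.2:80 check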
+
+ See also : "option httpchk", "http-check expect"
+
+
+http-check expect [!] <match> <pattern>
+ Make HTTP health checks consider response contents or specific status codes
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments :
+ <match> is a keyword indicating how to look for a specific pattern in the
+ response. The keyword may be one of "status", "rstatus",
+ "string", or "rstring". The keyword may be preceded by an
+ exclamation mark ("!") to negate the match. Spaces are allowed
+ between the exclamation mark and the keyword. See below for more
+ details on the supported keywords.
+
+ <pattern> is the pattern to look for. It may be a string or a regular
+ expression. If the pattern contains spaces, they must be escaped
+ with the usual backslash ('\').
+
+ By default, "option httpchk" considers that response statuses 2xx and 3xx
+ are valid, and that others are invalid. When "http-check expect" is used,
+ it defines what is considered valid or invalid. Only one "http-check"
+ statement is supported in a backend. If a server fails to respond or times
+ out, the check obviously fails. The available matches are :
+
+ status <string> : test the exact string match for the HTTP status code.
+ A health check response will be considered valid if the
+ response's status code is exactly this string. If the
+ "status" keyword is prefixed with "!", then the response
+ will be considered invalid if the status code matches.
+
+ rstatus <regex> : test a regular expression for the HTTP status code.
+ A health check response will be considered valid if the
+ response's status code matches the expression. If the
+ "rstatus" keyword is prefixed with "!", then the response
+ will be considered invalid if the status code matches.
+ This is mostly used to check for multiple codes.
+
+ string <string> : test the exact string match in the HTTP response body.
+ A health check response will be considered valid if the
+ response's body contains this exact string. If the
+ "string" keyword is prefixed with "!", then the response
+ will be considered invalid if the body contains this
+ string. This can be used to look for a mandatory word at
+ the end of a dynamic page, or to detect a failure when a
+ specific error appears on the check page (eg: a stack
+ trace).
+
+ rstring <regex> : test a regular expression on the HTTP response body.
+ A health check response will be considered valid if the
+ response's body matches this expression. If the "rstring"
+ keyword is prefixed with "!", then the response will be
+ considered invalid if the body matches the expression.
+ This can be used to look for a mandatory word at the end
+ of a dynamic page, or to detect a failure when a specific
+ error appears on the check page (eg: a stack trace).
+
+ It is important to note that the responses will be limited to a certain size
+ defined by the global "tune.chksize" option, which defaults to 16384 bytes.
+ Thus, too large responses may not contain the mandatory pattern when using
+ "string" or "rstring". If a large response is absolutely required, it is
+ possible to change the default max size by setting the global variable.
+ However, it is worth keeping in mind that parsing very large responses can
+ waste some CPU cycles, especially when regular expressions are used, and that
+ it is always better to focus the checks on smaller resources.
+
+ Also "http-check expect" doesn't support HTTP keep-alive. Keep in mind that it
+ will automatically append a "Connection: close" header, meaning that this
+ header should not be present in the request provided by "option httpchk".
+
+ Last, if "http-check expect" is combined with "http-check disable-on-404",
+ then this last one has precedence when the server responds with 404.
+
+ Examples :
+ # only accept status 200 as valid
+ http-check expect status 200
+
+ # consider SQL errors as errors
+ http-check expect ! string SQL\ Error
+
+ # consider status 5xx only as errors
+ http-check expect ! rstatus ^5
+
+ # check that we have a correct hexadecimal tag before /html
+ http-check expect rstring <!--tag:[0-9a-f]*</html>
+
+ See also : "option httpchk", "http-check disable-on-404"
+
+
+http-check send-state
+ Enable emission of a state header with HTTP health checks
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments : none
+
+ When this option is set, haproxy will systematically send a special header
+ "X-Haproxy-Server-State" with a list of parameters indicating to each server
+ how they are seen by haproxy. This can be used for instance when a server is
+ manipulated without access to haproxy and the operator needs to know whether
+ haproxy still sees it up or not, or if the server is the last one in a farm.
+
+ The header is composed of fields delimited by semi-colons, the first of which
+ is a word ("UP", "DOWN", "NOLB"), possibly followed by the number of valid
+ checks out of the total number required before a transition, just as shown
+ in the stats interface. The next fields are in the form
+ "<variable>=<value>", indicating in
+ no specific order some values available in the stats interface :
+ - a variable "address", containing the address of the backend server.
+ This corresponds to the <address> field in the server declaration. For
+ unix domain sockets, it will read "unix".
+
+ - a variable "port", containing the port of the backend server. This
+ corresponds to the <port> field in the server declaration. For unix
+ domain sockets, it will read "unix".
+
+ - a variable "name", containing the name of the backend followed by a slash
+ ("/") then the name of the server. This can be used when a server is
+ checked in multiple backends.
+
+ - a variable "node" containing the name of the haproxy node, as set in the
+ global "node" variable, otherwise the system's hostname if unspecified.
+
+ - a variable "weight" indicating the weight of the server, a slash ("/")
+ and the total weight of the farm (just counting usable servers). This
+ helps to know if other servers are available to handle the load when this
+ one fails.
+
+ - a variable "scur" indicating the current number of concurrent connections
+ on the server, followed by a slash ("/") then the total number of
+ connections on all servers of the same backend.
+
+ - a variable "qcur" indicating the current number of requests in the
+ server's queue.
+
+ Example of a header received by the application server :
+ >>> X-Haproxy-Server-State: UP 2/3; name=bck/srv2; node=lb1; weight=1/2; \
+ scur=13/22; qcur=0
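+
+  A minimal configuration emitting this header could look like the following
+  (server names and addresses are illustrative) :
+
+        backend bk_app
+            option httpchk
+            http-check send-state
+            server srv1 10.0.0.1:80 check
+            server srv2 10.0.0.2:80 check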
+
+ See also : "option httpchk", "http-check disable-on-404"
+
+http-request { allow | deny | tarpit | auth [realm <realm>] | redirect <rule> |
+ add-header <name> <fmt> | set-header <name> <fmt> |
+ capture <sample> [ len <length> | id <id> ] |
+ del-header <name> | set-nice <nice> | set-log-level <level> |
+ replace-header <name> <match-regex> <replace-fmt> |
+ replace-value <name> <match-regex> <replace-fmt> |
+ set-method <fmt> | set-path <fmt> | set-query <fmt> |
+ set-uri <fmt> | set-tos <tos> | set-mark <mark> |
+ add-acl(<file name>) <key fmt> |
+ del-acl(<file name>) <key fmt> |
+ del-map(<file name>) <key fmt> |
+ set-map(<file name>) <key fmt> <value fmt> |
+ set-var(<var name>) <expr> |
+ { track-sc0 | track-sc1 | track-sc2 } <key> [table <table>] |
+ sc-inc-gpc0(<sc-id>) |
+ sc-set-gpt0(<sc-id>) <int> |
+ set-src <expr> |
+ silent-drop
+ }
+ [ { if | unless } <condition> ]
+ Access control for Layer 7 requests
+
+ May be used in sections: defaults | frontend | listen | backend
+ no | yes | yes | yes
+
+ The http-request statement defines a set of rules which apply to layer 7
+ processing. The rules are evaluated in their declaration order when they are
+ met in a frontend, listen or backend section. Any rule may optionally be
+ followed by an ACL-based condition, in which case it will only be evaluated
+ if the condition is true.
+
+ The first keyword is the rule's action. Currently supported actions include :
+ - "allow" : this stops the evaluation of the rules and lets the request
+ pass the check. No further "http-request" rules are evaluated.
+
+ - "deny" : this stops the evaluation of the rules and immediately rejects
+ the request and emits an HTTP 403 error. No further "http-request" rules
+ are evaluated.
+
+ - "tarpit" : this stops the evaluation of the rules and immediately blocks
+ the request without responding for a delay specified by "timeout tarpit"
+ or "timeout connect" if the former is not set. After that delay, if the
+ client is still connected, an HTTP error 500 is returned so that the
+ client does not suspect it has been tarpitted. Logs will report the flags
+ "PT". The goal of the tarpit rule is to slow down robots during an attack
+ when they're limited on the number of concurrent requests. It can be very
+ efficient against very dumb robots, and will significantly reduce the
+ load on firewalls compared to a "deny" rule. But when facing "correctly"
+ developed robots, it can make things worse by forcing haproxy and the
+ front firewall to support insane number of concurrent connections. See
+ also the "silent-drop" action below.
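+
+      For instance, known abusers could be tarpitted for 15 seconds (the ACL
+      file path is illustrative) :
+
+        timeout tarpit 15s
+        acl abuser src -f /etc/haproxy/abusers.lst
+        http-request tarpit if abuser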
+
+ - "auth" : this stops the evaluation of the rules and immediately responds
+ with an HTTP 401 or 407 error code to invite the user to present a valid
+ user name and password. No further "http-request" rules are evaluated. An
+ optional "realm" parameter is supported, it sets the authentication realm
+ that is returned with the response (typically the application's name).
+
+ - "redirect" : this performs an HTTP redirection based on a redirect rule.
+ This is exactly the same as the "redirect" statement except that it
+ inserts a redirect rule which can be processed in the middle of other
+ "http-request" rules and that these rules use the "log-format" strings.
+ See the "redirect" keyword for the rule's syntax.
+
+ - "add-header" appends an HTTP header field whose name is specified in
+ <name> and whose value is defined by <fmt> which follows the log-format
+ rules (see Custom Log Format in section 8.2.4). This is particularly
+ useful to pass connection-specific information to the server (eg: the
+ client's SSL certificate), or to combine several headers into one. This
+ rule is not final, so it is possible to add other similar rules. Note
+ that header addition is performed immediately, so one rule might reuse
+ the resulting header from a previous rule.
+
+ - "set-header" does the same as "add-header" except that the header name
+ is first removed if it existed. This is useful when passing security
+ information to the server, where the header must not be manipulated by
+ external users. Note that the new value is computed before the removal so
+ it is possible to concatenate a value to an existing header.
+
+ - "del-header" removes all HTTP header fields whose name is specified in
+ <name>.
+
+ - "replace-header" matches the regular expression in all occurrences of
+ header field <name> according to <match-regex>, and replaces them with
+ the <replace-fmt> argument. Format characters are allowed in replace-fmt
+ and work like in <fmt> arguments in "add-header". The match is only
+ case-sensitive. It is important to understand that this action only
+ considers whole header lines, regardless of the number of values they
+ may contain. This usage is suited to headers naturally containing commas
+ in their value, such as If-Modified-Since and so on.
+
+ Example:
+
+ http-request replace-header Cookie foo=([^;]*);(.*) foo=\1;ip=%bi;\2
+
+ applied to:
+
+ Cookie: foo=foobar; expires=Tue, 14-Jun-2016 01:40:45 GMT;
+
+ outputs:
+
+ Cookie: foo=foobar;ip=192.168.1.20; expires=Tue, 14-Jun-2016 01:40:45 GMT;
+
+ assuming the backend IP is 192.168.1.20.
+
+ - "replace-value" works like "replace-header" except that it matches the
+ regex against every comma-delimited value of the header field <name>
+ instead of the entire header. This is suited for all headers which are
+ allowed to carry more than one value. An example could be the Accept
+ header.
+
+ Example:
+
+ http-request replace-value X-Forwarded-For ^192\.168\.(.*)$ 172.16.\1
+
+ applied to:
+
+ X-Forwarded-For: 192.168.10.1, 192.168.13.24, 10.0.0.37
+
+ outputs:
+
+ X-Forwarded-For: 172.16.10.1, 172.16.13.24, 10.0.0.37
+
+ - "set-method" rewrites the request method with the result of the
+ evaluation of format string <fmt>. There should be very few valid reasons
+ for having to do so as this is more likely to break something than to fix
+ it.
+
+ - "set-path" rewrites the request path with the result of the evaluation of
+ format string <fmt>. The query string, if any, is left intact. If a
+ scheme and authority is found before the path, they are left intact as
+ well. If the request doesn't have a path ("*"), this one is replaced with
+ the format. This can be used to prepend a directory component in front of
+ a path for example. See also "set-query" and "set-uri".
+
+ Example :
+ # prepend the host name before the path
+ http-request set-path /%[hdr(host)]%[path]
+
+ - "set-query" rewrites the request's query string which appears after the
+ first question mark ("?") with the result of the evaluation of format
+ string <fmt>. The part prior to the question mark is left intact. If the
+ request doesn't contain a question mark and the new value is not empty,
+ then one is added at the end of the URI, followed by the new value. If
+ a question mark was present, it will never be removed even if the value
+ is empty. This can be used to add or remove parameters from the query
+ string. See also "set-path" and "set-uri".
+
+ Example :
+ # replace "%3D" with "=" in the query string
+ http-request set-query %[query,regsub(%3D,=,g)]
+
+ - "set-uri" rewrites the request URI with the result of the evaluation of
+ format string <fmt>. The scheme, authority, path and query string are all
+ replaced at once. This can be used to rewrite hosts in front of proxies,
+ or to perform complex modifications to the URI such as moving parts
+ between the path and the query string. See also "set-path" and
+ "set-query".
+
+ - "set-nice" sets the "nice" factor of the current request being processed.
+ It only has effect against the other requests being processed at the same
+ time. The default value is 0, unless altered by the "nice" setting on the
+ "bind" line. The accepted range is -1024..1024. The higher the value, the
+ nicer the request will be. Lower values will make the request more
+ important than other ones. This can be useful to improve the speed of
+ some requests, or lower the priority of non-important requests. Using
+ this setting without prior experimentation can cause some major slowdown.
+
+ - "set-log-level" is used to change the log level of the current request
+ when a certain condition is met. Valid levels are the 8 syslog levels
+ (see the "log" keyword) plus the special level "silent" which disables
+ logging for this request. This rule is not final so the last matching
+ rule wins. This rule can be useful to disable health checks coming from
+ other equipment.
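+
+      For example, requests from a monitoring host could be kept out of the
+      logs (the address is illustrative) :
+
+        acl monitor src 192.168.0.251
+        http-request set-log-level silent if monitor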
+
+ - "set-tos" is used to set the TOS or DSCP field value of packets sent to
+ the client to the value passed in <tos> on platforms which support this.
+ This value represents the whole 8 bits of the IP TOS field, and can be
+ expressed both in decimal or hexadecimal format (prefixed by "0x"). Note
+ that only the 6 higher bits are used in DSCP or TOS, and the two lower
+ bits are always 0. This can be used to adjust some routing behaviour on
+ border routers based on some information from the request. See RFC 2474,
+ 2597, 3260 and 4594 for more information.
+
+ - "set-mark" is used to set the Netfilter MARK on all packets sent to the
+ client to the value passed in <mark> on platforms which support it. This
+ value is an unsigned 32 bit value which can be matched by netfilter and
+ by the routing table. It can be expressed both in decimal or hexadecimal
+ format (prefixed by "0x"). This can be useful to force certain packets to
+ take a different route (for example a cheaper network path for bulk
+ downloads). This works on Linux kernels 2.6.32 and above and requires
+ admin privileges.
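+
+      As an illustration, bulk downloads could be marked so that a routing
+      rule sends them over a cheaper path (mark value and path are examples) :
+
+        acl bulk path_beg /downloads
+        http-request set-mark 0x2 if bulk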
+
+ - "add-acl" is used to add a new entry into an ACL. The ACL must be loaded
+ from a file (even a dummy empty file). The file name of the ACL to be
+ updated is passed between parentheses. It takes one argument: <key fmt>,
+ which follows log-format rules, to collect content of the new entry. It
+ performs a lookup in the ACL before insertion, to avoid duplicated (or
+ more) values. This lookup is done by a linear search and can be expensive
+ with large lists! It is the equivalent of the "add acl" command from the
+ stats socket, but can be triggered by an HTTP request.
+
+ - "del-acl" is used to delete an entry from an ACL. The ACL must be loaded
+ from a file (even a dummy empty file). The file name of the ACL to be
+ updated is passed between parentheses. It takes one argument: <key fmt>,
+ which follows log-format rules, to collect content of the entry to delete.
+ It is the equivalent of the "del acl" command from the stats socket, but
+ can be triggered by an HTTP request.
+
+ - "del-map" is used to delete an entry from a MAP. The MAP must be loaded
+ from a file (even a dummy empty file). The file name of the MAP to be
+ updated is passed between parentheses. It takes one argument: <key fmt>,
+ which follows log-format rules, to collect content of the entry to delete.
+ It is the equivalent of the "del map"
+ command from the stats socket, but can be triggered by an HTTP request.
+
+ - "set-map" is used to add a new entry into a MAP. The MAP must be loaded
+ from a file (even a dummy empty file). The file name of the MAP to be
+ updated is passed between parentheses. It takes 2 arguments: <key fmt>,
+ which follows log-format rules, used to collect MAP key, and <value fmt>,
+ which follows log-format rules, used to collect content for the new entry.
+ It performs a lookup in the MAP before insertion, to avoid duplicated (or
+ more) values. This lookup is done by a linear search and can be expensive
+ with large lists! It is the equivalent of the "set map" command from the
+ stats socket, but can be triggered by an HTTP request.
+
+ - capture <sample> [ len <length> | id <id> ] :
+ captures sample expression <sample> from the request buffer, and converts
+ it to a string of at most <len> characters. The resulting string is
+ stored into the next request "capture" slot, so it will possibly appear
+ next to some captured HTTP headers. It will then automatically appear in
+ the logs, and it will be possible to extract it using sample fetch rules
+ to feed it into headers or anything. The length should be limited given
+ that this size will be allocated for each capture during the whole
+ session life. Please check section 7.3 (Fetching samples) and "capture
+ request header" for more information.
+
+ If the keyword "id" is used instead of "len", the action tries to store
+ the captured string in a previously declared capture slot. This is useful
+ to run captures in backends. The slot id can be declared by a previous
+ directive "http-request capture" or with the "declare capture" keyword.
+ If the slot <id> doesn't exist, then HAProxy fails parsing the
+ configuration to prevent unexpected behavior at run time.
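+
+      For example, the Host header could be captured into the logs (the
+      header choice and length are illustrative) :
+
+        http-request capture req.hdr(Host) len 32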
+
+ - { track-sc0 | track-sc1 | track-sc2 } <key> [table <table>] :
+ enables tracking of sticky counters from current request. These rules
+ do not stop evaluation and do not change default action. Three sets of
+ counters may be simultaneously tracked by the same connection. The first
+ "track-sc0" rule executed enables tracking of the counters of the
+ specified table as the first set. The first "track-sc1" rule executed
+ enables tracking of the counters of the specified table as the second
+ set. The first "track-sc2" rule executed enables tracking of the
+ counters of the specified table as the third set. It is a recommended
+ practice to use the first set of counters for the per-frontend counters
+ and the second set for the per-backend ones. But this is just a
+ guideline, all may be used everywhere.
+
+ These actions take one or two arguments :
+ <key> is mandatory, and is a sample expression rule as described
+ in section 7.3. It describes what elements of the incoming
+ request or connection will be analysed, extracted, combined,
+ and used to select which table entry to update the counters.
+
+ <table> is an optional table to be used instead of the default one,
+ which is the stick-table declared in the current proxy. All
+ the counters for the matches and updates for the key will
+ then be performed in that table until the session ends.
+
+ Once a "track-sc*" rule is executed, the key is looked up in the table
+ and if it is not found, an entry is allocated for it. Then a pointer to
+ that entry is kept during all the session's life, and this entry's
+ counters are updated as often as possible, every time the session's
+ counters are updated, and also systematically when the session ends.
+ Counters are only updated for events that happen after the tracking has
+ been started. As an exception, connection counters and request counters
+ are systematically updated so that they reflect useful information.
+
+ If the entry tracks concurrent connection counters, one connection is
+ counted for as long as the entry is tracked, and the entry will not
+ expire during that time. Tracking counters also provides a performance
+ advantage over just checking the keys, because only one table lookup is
+ performed for all ACL checks that make use of it.
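+
+      As an illustration, per-source HTTP request rates could be tracked and
+      abusive sources denied (table parameters and threshold are examples) :
+
+        stick-table type ip size 100k expire 30s store http_req_rate(10s)
+        http-request track-sc0 src
+        http-request deny if { sc0_http_req_rate gt 100 }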
+
+ - sc-set-gpt0(<sc-id>) <int> :
+ This action sets the GPT0 tag according to the sticky counter designated
+ by <sc-id> and the value of <int>. The expected result is a boolean. If
+ an error occurs, this action silently fails and the actions evaluation
+ continues.
+
+ - sc-inc-gpc0(<sc-id>):
+ This action increments the GPC0 counter associated with the sticky
+ counter designated by <sc-id>. If an error occurs, this action silently
+ fails and the actions evaluation continues.
+
+ - set-var(<var-name>) <expr> :
+ Is used to set the contents of a variable. The variable is declared
+ inline.
+
+ <var-name> The name of the variable starts by an indication about its
+ scope. The allowed scopes are:
+ "sess" : the variable is shared with all the session,
+ "txn" : the variable is shared with all the transaction
+ (request and response)
+ "req" : the variable is shared only during the request
+ processing
+ "res" : the variable is shared only during the response
+ processing.
+ This prefix is followed by a name. The separator is a '.'.
+ The name may only contain characters 'a-z', 'A-Z', '0-9',
+ and '_'.
+
+ <expr> Is a standard HAProxy expression formed by a sample-fetch
+ followed by some converters.
+
+ Example:
+
+ http-request set-var(req.my_var) req.fhdr(user-agent),lower
+
+ - set-src <expr> :
+ Is used to set the source IP address to the value of the specified
+ expression. This is useful when a proxy in front of HAProxy rewrites
+ the source IP, but provides the correct IP in an HTTP header; or when
+ you want to mask the source IP for privacy.
+
+ <expr> Is a standard HAProxy expression formed by a sample-fetch
+ followed by some converters.
+
+ Example:
+
+ http-request set-src hdr(x-forwarded-for)
+ http-request set-src src,ipmask(24)
+
+ When set-src is successful, the source port is set to 0.
+
+ - "silent-drop" : this stops the evaluation of the rules and makes the
+ client-facing connection suddenly disappear using a system-dependent way
+ that tries to prevent the client from being notified. The effect is then
+ that the client still sees an established connection while there's none
+ on HAProxy. The purpose is to achieve a comparable effect to "tarpit"
+ except that it doesn't use any local resource at all on the machine
+ running HAProxy. It can resist much higher loads than "tarpit", and slow
+ down stronger attackers. It is important to understand the impact of using
+ this mechanism. All stateful equipment placed between the client and
+ HAProxy (firewalls, proxies, load balancers) will also keep the
+ established connection for a long time and may suffer from this action.
+ On modern Linux systems running with enough privileges, the TCP_REPAIR
+ socket option is used to block the emission of a TCP reset. On other
+ systems, the socket's TTL is reduced to 1 so that the TCP reset doesn't
+ pass the first router, though it's still delivered to local networks. Do
+ not use it unless you fully understand how it works.
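+
+      For example, sources listed in a blacklist file could be dropped
+      without any response (the file path is illustrative) :
+
+        acl blacklisted src -f /etc/haproxy/blacklist.lst
+        http-request silent-drop if blacklisted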
+
+ There is no limit to the number of http-request statements per instance.
+
+ It is important to know that http-request rules are processed very early in
+ the HTTP processing, just after "block" rules and before "reqdel" or "reqrep"
+ or "reqadd" rules. That way, headers added by "add-header"/"set-header" are
+ visible by almost all further ACL rules.
+
+ Using "reqadd"/"reqdel"/"reqrep" to manipulate request headers is discouraged
+ in newer versions (>= 1.5). But if you need to use regular expressions to
+ delete headers, you can still use "reqdel". Also please use
+ "http-request deny/allow/tarpit" instead of "reqdeny"/"reqpass"/"reqtarpit".
+
+ Example:
+ acl nagios src 192.168.129.3
+ acl local_net src 192.168.0.0/16
+ acl auth_ok http_auth(L1)
+
+ http-request allow if nagios
+ http-request allow if local_net auth_ok
+ http-request auth realm Gimme if local_net auth_ok
+ http-request deny
+
+ Example:
+ acl auth_ok http_auth_group(L1) G1
+ http-request auth unless auth_ok
+
+ Example:
+ http-request set-header X-Haproxy-Current-Date %T
+ http-request set-header X-SSL %[ssl_fc]
+ http-request set-header X-SSL-Session_ID %[ssl_fc_session_id,hex]
+ http-request set-header X-SSL-Client-Verify %[ssl_c_verify]
+ http-request set-header X-SSL-Client-DN %{+Q}[ssl_c_s_dn]
+ http-request set-header X-SSL-Client-CN %{+Q}[ssl_c_s_dn(cn)]
+ http-request set-header X-SSL-Issuer %{+Q}[ssl_c_i_dn]
+ http-request set-header X-SSL-Client-NotBefore %{+Q}[ssl_c_notbefore]
+ http-request set-header X-SSL-Client-NotAfter %{+Q}[ssl_c_notafter]
+
+ Example:
+ acl key req.hdr(X-Add-Acl-Key) -m found
+ acl add path /addacl
+ acl del path /delacl
+
+ acl myhost hdr(Host) -f myhost.lst
+
+ http-request add-acl(myhost.lst) %[req.hdr(X-Add-Acl-Key)] if key add
+ http-request del-acl(myhost.lst) %[req.hdr(X-Add-Acl-Key)] if key del
+
+ Example:
+ acl value req.hdr(X-Value) -m found
+ acl setmap path /setmap
+ acl delmap path /delmap
+
+ use_backend bk_appli if { hdr(Host),map_str(map.lst) -m found }
+
+ http-request set-map(map.lst) %[src] %[req.hdr(X-Value)] if setmap value
+ http-request del-map(map.lst) %[src] if delmap
+
+ See also : "stats http-request", section 3.4 about userlists and section 7
+ about ACL usage.
+
+http-response { allow | deny | add-header <name> <fmt> | set-nice <nice> |
+ capture <sample> id <id> | redirect <rule> |
+ set-header <name> <fmt> | del-header <name> |
+ replace-header <name> <regex-match> <replace-fmt> |
+ replace-value <name> <regex-match> <replace-fmt> |
+ set-status <status> |
+ set-log-level <level> | set-mark <mark> | set-tos <tos> |
+ add-acl(<file name>) <key fmt> |
+ del-acl(<file name>) <key fmt> |
+ del-map(<file name>) <key fmt> |
+ set-map(<file name>) <key fmt> <value fmt> |
+ set-var(<var-name>) <expr> |
+ sc-inc-gpc0(<sc-id>) |
+ sc-set-gpt0(<sc-id>) <int> |
+ silent-drop |
+ }
+ [ { if | unless } <condition> ]
+ Access control for Layer 7 responses
+
+ May be used in sections: defaults | frontend | listen | backend
+ no | yes | yes | yes
+
+ The http-response statement defines a set of rules which apply to layer 7
+ processing. The rules are evaluated in their declaration order when they are
+ met in a frontend, listen or backend section. Any rule may optionally be
+ followed by an ACL-based condition, in which case it will only be evaluated
+ if the condition is true. Since these rules apply on responses, the backend
+ rules are applied first, followed by the frontend's rules.
+
+ The first keyword is the rule's action. Currently supported actions include :
+ - "allow" : this stops the evaluation of the rules and lets the response
+ pass the check. No further "http-response" rules are evaluated for the
+ current section.
+
+ - "deny" : this stops the evaluation of the rules and immediately rejects
+ the response and emits an HTTP 502 error. No further "http-response"
+ rules are evaluated.
+
+ - "add-header" appends an HTTP header field whose name is specified in
+ <name> and whose value is defined by <fmt> which follows the log-format
+ rules (see Custom Log Format in section 8.2.4). This may be used to send
+ a cookie to a client for example, or to pass some internal information.
+ This rule is not final, so it is possible to add other similar rules.
+ Note that header addition is performed immediately, so one rule might
+ reuse the resulting header from a previous rule.
+
+ - "set-header" does the same as "add-header" except that the header name
+ is first removed if it existed. This is useful when passing security
+ information to the server, where the header must not be manipulated by
+ external users.
+
+ - "del-header" removes all HTTP header fields whose name is specified in
+ <name>.
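+
+ A minimal sketch combining both actions (the header names here are
+ illustrative, not mandated by HAProxy):
+
+ http-response set-header X-Frame-Options DENY
+ http-response del-header Server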
+
+ - "replace-header" matches the regular expression in all occurrences of
+ header field <name> according to <regex-match>, and replaces them with
+ the <replace-fmt> argument. Format characters are allowed in replace-fmt
+ and work like in <fmt> arguments in "add-header". The match is
+ case-sensitive. It is important to understand that this action only
+ considers whole header lines, regardless of the number of values they
+ may contain. This usage is suited to headers naturally containing commas
+ in their value, such as Set-Cookie, Expires and so on.
+
+ Example:
+
+ http-response replace-header Set-Cookie (C=[^;]*);(.*) \1;ip=%bi;\2
+
+ applied to:
+
+ Set-Cookie: C=1; expires=Tue, 14-Jun-2016 01:40:45 GMT
+
+ outputs:
+
+ Set-Cookie: C=1;ip=192.168.1.20; expires=Tue, 14-Jun-2016 01:40:45 GMT
+
+ assuming the backend IP is 192.168.1.20.
+
+ - "replace-value" works like "replace-header" except that it matches the
+ regex against every comma-delimited value of the header field <name>
+ instead of the entire header. This is suited for all headers which are
+ allowed to carry more than one value. An example could be the Accept
+ header.
+
+ Example:
+
+ http-response replace-value Cache-control ^public$ private
+
+ applied to:
+
+ Cache-Control: max-age=3600, public
+
+ outputs:
+
+ Cache-Control: max-age=3600, private
+
+ - "set-status" replaces the response status code with <status> which must
+ be an integer between 100 and 999. Note that the reason is automatically
+ adapted to the new code.
+
+ Example:
+
+ # return "431 Request Header Fields Too Large"
+ http-response set-status 431
+
+ - "set-nice" sets the "nice" factor of the current request being processed.
+ It only has effect against the other requests being processed at the same
+ time. The default value is 0, unless altered by the "nice" setting on the
+ "bind" line. The accepted range is -1024..1024. The higher the value, the
+ nicer the request will be. Lower values will make the request more
+ important than other ones. This can be useful to improve the speed of
+ some requests, or lower the priority of non-important requests. Using
+ this setting without prior experimentation can cause some major slowdown.
+
+ - "set-log-level" is used to change the log level of the current request
+ when a certain condition is met. Valid levels are the 8 syslog levels
+ (see the "log" keyword) plus the special level "silent" which disables
+ logging for this request. This rule is not final so the last matching
+ rule wins. This rule can be useful to disable logging of health checks
+ coming from other equipment.
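+
+ For instance, a sketch disabling logging for probes from a hypothetical
+ monitoring network:
+
+ acl from_monitor src 10.0.0.0/8
+ http-response set-log-level silent if from_monitor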
+
+ - "set-tos" is used to set the TOS or DSCP field value of packets sent to
+ the client to the value passed in <tos> on platforms which support this.
+ This value represents the whole 8 bits of the IP TOS field, and can be
+ expressed in either decimal or hexadecimal format (prefixed by "0x"). Note
+ that only the 6 higher bits are used in DSCP or TOS, and the two lower
+ bits are always 0. This can be used to adjust some routing behaviour on
+ border routers based on some information from the request. See RFC 2474,
+ 2597, 3260 and 4594 for more information.
+
+ - "set-mark" is used to set the Netfilter MARK on all packets sent to the
+ client to the value passed in <mark> on platforms which support it. This
+ value is an unsigned 32 bit value which can be matched by netfilter and
+ by the routing table. It can be expressed in either decimal or hexadecimal
+ format (prefixed by "0x"). This can be useful to force certain packets to
+ take a different route (for example a cheaper network path for bulk
+ downloads). This works on Linux kernels 2.6.32 and above and requires
+ admin privileges.
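+
+ As an illustration, a sketch marking bulk download responses so that a
+ routing rule can steer them (the URI prefix and mark value are
+ hypothetical):
+
+ http-response set-mark 0x2 if { capture.req.uri -m beg /downloads }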
+
+ - "add-acl" is used to add a new entry into an ACL. The ACL must be loaded
+ from a file (even a dummy empty file). The file name of the ACL to be
+ updated is passed between parentheses. It takes one argument: <key fmt>,
+ which follows log-format rules, to collect content of the new entry. It
+ performs a lookup in the ACL before insertion, to avoid duplicate
+ values. This lookup is done by a linear search and can be expensive
+ with large lists! It is the equivalent of the "add acl" command from the
+ stats socket, but can be triggered by an HTTP response.
+
+ - "del-acl" is used to delete an entry from an ACL. The ACL must be loaded
+ from a file (even a dummy empty file). The file name of the ACL to be
+ updated is passed between parentheses. It takes one argument: <key fmt>,
+ which follows log-format rules, to collect content of the entry to delete.
+ It is the equivalent of the "del acl" command from the stats socket, but
+ can be triggered by an HTTP response.
+
+ - "del-map" is used to delete an entry from a MAP. The MAP must be loaded
+ from a file (even a dummy empty file). The file name of the MAP to be
+ updated is passed between parentheses. It takes one argument: <key fmt>,
+ which follows log-format rules, to collect content of the entry to delete.
+ It is the equivalent of the "del map" command from the stats socket, but
+ can be triggered by an HTTP response.
+
+ - "set-map" is used to add a new entry into a MAP. The MAP must be loaded
+ from a file (even a dummy empty file). The file name of the MAP to be
+ updated is passed between parentheses. It takes 2 arguments: <key fmt>,
+ which follows log-format rules, used to collect MAP key, and <value fmt>,
+ which follows log-format rules, used to collect content for the new entry.
+ It performs a lookup in the MAP before insertion, to avoid duplicate
+ values. This lookup is done by a linear search and can be expensive
+ with large lists! It is the equivalent of the "set map" command from the
+ stats socket, but can be triggered by an HTTP response.
+
+ - capture <sample> id <id> :
+ captures sample expression <sample> from the response buffer, and converts
+ it to a string. The resulting string is stored into the next request
+ "capture" slot, so it will possibly appear next to some captured HTTP
+ headers. It will then automatically appear in the logs, and it will be
+ possible to extract it using sample fetch rules to feed it into headers or
+ anything. Please check section 7.3 (Fetching samples) and "capture
+ response header" for more information.
+
+ The keyword "id" is the id of the capture slot which is used for storing
+ the string. The capture slot must be defined in an associated frontend.
+ This is useful to run captures in backends. The slot id can be declared by
+ a previous directive "http-response capture" or with the "declare capture"
+ keyword.
+ If the slot <id> doesn't exist, then HAProxy fails parsing the
+ configuration to prevent unexpected behavior at run time.
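+
+ A sketch declaring a capture slot in the frontend and filling it from the
+ backend (slot id 0 and the header name are illustrative):
+
+ frontend fe_main
+ declare capture response len 64
+
+ backend bk_app
+ http-response capture res.hdr(Content-Type) id 0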
+
+ - "redirect" : this performs an HTTP redirection based on a redirect rule.
+ This supports a format string similarly to "http-request redirect" rules,
+ with the exception that only the "location" type of redirect is possible
+ on the response. See the "redirect" keyword for the rule's syntax. When
+ a redirect rule is applied during a response, connections to the server
+ are closed so that no data can be forwarded from the server to the client.
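+
+ For example, a sketch redirecting clients to a maintenance page when the
+ server answers 503 (the URL is illustrative):
+
+ http-response redirect location /maintenance.html if { status 503 }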
+
+ - set-var(<var-name>) <expr> :
+ Is used to set the contents of a variable. The variable is declared
+ inline.
+
+ <var-name> The name of the variable starts by an indication about its
+ scope. The allowed scopes are:
+ "sess" : the variable is shared with the whole session,
+ "txn" : the variable is shared with the whole transaction
+ (request and response)
+ "req" : the variable is shared only during the request
+ processing
+ "res" : the variable is shared only during the response
+ processing.
+ This prefix is followed by a name. The separator is a '.'.
+ The name may only contain characters 'a-z', 'A-Z', '0-9',
+ and '_'.
+
+ <expr> Is a standard HAProxy expression formed by a sample-fetch
+ followed by some converters.
+
+ Example:
+
+ http-response set-var(sess.last_redir) res.hdr(location)
+
+ - sc-set-gpt0(<sc-id>) <int> :
+ This action sets the GPT0 tag according to the sticky counter designated
+ by <sc-id> and the value of <int>. The expected result is a boolean. If
+ an error occurs, this action silently fails and the actions evaluation
+ continues.
+
+ - sc-inc-gpc0(<sc-id>):
+ This action increments the GPC0 counter according with the sticky counter
+ designated by <sc-id>. If an error occurs, this action silently fails and
+ the actions evaluation continues.
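+
+ A sketch using both actions on sticky counter 0 to record server errors
+ (the tracking rule is assumed to be set up elsewhere):
+
+ http-response sc-inc-gpc0(0) if { status 500 }
+ http-response sc-set-gpt0(0) 1 if { status 500 }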
+
+ - "silent-drop" : this stops the evaluation of the rules and makes the
+ client-facing connection suddenly disappear using a system-dependent way
+ that tries to prevent the client from being notified. The effect is that
+ the client still sees an established connection while there's none
+ on HAProxy. The purpose is to achieve a comparable effect to "tarpit"
+ except that it doesn't use any local resource at all on the machine
+ running HAProxy. It can resist much higher loads than "tarpit", and slow
+ down stronger attackers. It is important to understand the impact of using
+ this mechanism. All stateful equipment placed between the client and
+ HAProxy (firewalls, proxies, load balancers) will also keep the
+ established connection for a long time and may suffer from this action.
+ On modern Linux systems running with enough privileges, the TCP_REPAIR
+ socket option is used to block the emission of a TCP reset. On other
+ systems, the socket's TTL is reduced to 1 so that the TCP reset doesn't
+ pass the first router, though it's still delivered to local networks. Do
+ not use it unless you fully understand how it works.
+
+ There is no limit to the number of http-response statements per instance.
+
+ It is important to know that http-response rules are processed very early in
+ the HTTP processing, before "rspdel", "rsprep" or "rspadd" rules. That way,
+ headers added by "add-header"/"set-header" are visible to almost all further
+ ACL rules.
+
+ Using "rspadd"/"rspdel"/"rsprep" to manipulate response headers is discouraged
+ in newer versions (>= 1.5). But if you need to use regular expressions to
+ delete headers, you can still use "rspdel". Also please use
+ "http-response deny" instead of "rspdeny".
+
+ Example:
+ acl key_acl res.hdr(X-Acl-Key) -m found
+
+ acl myhost hdr(Host) -f myhost.lst
+
+ http-response add-acl(myhost.lst) %[res.hdr(X-Acl-Key)] if key_acl
+ http-response del-acl(myhost.lst) %[res.hdr(X-Acl-Key)] if key_acl
+
+ Example:
+ acl value res.hdr(X-Value) -m found
+
+ use_backend bk_appli if { hdr(Host),map_str(map.lst) -m found }
+
+ http-response set-map(map.lst) %[src] %[res.hdr(X-Value)] if value
+ http-response del-map(map.lst) %[src] if ! value
+
+ See also : "http-request", section 3.4 about userlists and section 7 about
+ ACL usage.
+
+
+http-reuse { never | safe | aggressive | always }
+ Declare how idle HTTP connections may be shared between requests
+
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
+
+ By default, a connection established between haproxy and the backend server
+ belongs to the session that initiated it. The downside is that between the
+ response and the next request, the connection remains idle and is not used.
+ In many cases for performance reasons it is desirable to make it possible to
+ reuse these idle connections to serve other requests from different sessions.
+ This directive allows tuning this behaviour.
+
+ The argument indicates the desired connection reuse strategy :
+
+ - "never" : idle connections are never shared between sessions. This is
+ the default choice. It may be enforced to cancel a different
+ strategy inherited from a defaults section or for
+ troubleshooting. For example, if an old bogus application
+ considers that multiple requests over the same connection come
+ from the same client and it is not possible to fix the
+ application, it may be desirable to disable connection sharing
+ in a single backend. An example of such an application could
+ be an old haproxy using cookie insertion in tunnel mode and
+ not checking any request past the first one.
+
+ - "safe" : this is the recommended strategy. The first request of a
+ session is always sent over its own connection, and only
+ subsequent requests may be dispatched over other existing
+ connections. This ensures that in case the server closes the
+ connection when the request is being sent, the browser can
+ decide to silently retry it. Since it is exactly equivalent to
+ regular keep-alive, there should be no side effects.
+
+ - "aggressive" : this mode may be useful in webservices environments where
+ all servers are not necessarily known and where it would be
+ appreciable to deliver most first requests over existing
+ connections. In this case, first requests are only delivered
+ over existing connections that have been reused at least once,
+ proving that the server correctly supports connection reuse.
+ It should only be used when it's sure that the client can
+ retry a failed request once in a while and where the benefit
+ of aggressive connection reuse significantly outweighs the
+ downsides of rare connection failures.
+
+ - "always" : this mode is only recommended when the path to the server is
+ known for never breaking existing connections quickly after
+ releasing them. It allows the first request of a session to be
+ sent to an existing connection. This can provide a significant
+ performance increase over the "safe" strategy when the backend
+ is a cache farm, since such components tend to show a
+ consistent behaviour and will benefit from the connection
+ sharing. It is recommended that the "http-keep-alive" timeout
+ remains low in this mode so that no dead connections remain
+ usable. In most cases, this will lead to the same performance
+ gains as "aggressive" but with more risks. It should only be
+ used when it improves the situation over "aggressive".
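+
+ A minimal sketch enabling the recommended strategy on a backend:
+
+ backend bk_app
+ http-reuse safe
+ server s1 192.0.2.10:80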
+
+ When HTTP connection sharing is enabled, great care is taken to respect the
+ connection properties and compatibilities. Specifically :
+ - connections made with "usesrc" followed by a client-dependent value
+ ("client", "clientip", "hdr_ip") are marked private and never shared ;
+
+ - connections sent to a server with a TLS SNI extension are marked private
+ and are never shared ;
+
+ - connections receiving a status code 401 or 407 expect some authentication
+ to be sent in return. Due to certain bogus authentication schemes (such
+ as NTLM) relying on the connection, these connections are marked private
+ and are never shared ;
+
+ No connection pool is involved: once a session dies, the last idle connection
+ it was attached to is deleted at the same time. This ensures that connections
+ cannot remain alive after all sessions are closed.
+
+ Note: connection reuse improves the accuracy of the "server maxconn" setting,
+ because almost no new connection will be established while idle connections
+ remain available. This is particularly true with the "always" strategy.
+
+ See also : "option http-keep-alive", "server maxconn"
+
+
+http-send-name-header [<header>]
+ Add the server name to a request. Use the header string given by <header>
+
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
+
+ Arguments :
+
+ <header> The header string to use to send the server name
+
+ The "http-send-name-header" statement causes the name of the target
+ server to be added to the headers of an HTTP request. The name
+ is added with the header string provided.
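+
+ For example, a sketch exposing the chosen server's name to the application
+ in a hypothetical "X-Target-Server" header:
+
+ backend bk_app
+ http-send-name-header X-Target-Server
+ server s1 192.0.2.10:80
+ server s2 192.0.2.11:80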
+
+ See also : "server"
+
+id <value>
+ Set a persistent ID to a proxy.
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
+ Arguments : none
+
+ Set a persistent ID for the proxy. This ID must be unique and positive.
+ An unused ID will automatically be assigned if unset. The first assigned
+ value will be 1. This ID is currently only returned in statistics.
+
+
+ignore-persist { if | unless } <condition>
+ Declare a condition to ignore persistence
+ May be used in sections: defaults | frontend | listen | backend
+ no | yes | yes | yes
+
+ By default, when cookie persistence is enabled, every request containing
+ the cookie is unconditionally persistent (assuming the target server is up
+ and running).
+
+ The "ignore-persist" statement allows one to declare various ACL-based
+ conditions which, when met, will cause a request to ignore persistence.
+ This is sometimes useful to load balance requests for static files, which
+ often don't require persistence. This can also be used to fully disable
+ persistence for a specific User-Agent (for example, some web crawler bots).
+
+ The persistence is ignored when an "if" condition is met, or unless an
+ "unless" condition is met.
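+
+ For example, a sketch skipping persistence for static content and a crawler
+ (the ACL patterns are illustrative):
+
+ acl url_static path_beg /static /images
+ acl is_bot hdr_sub(User-Agent) -i googlebot
+ ignore-persist if url_static or is_bot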
+
+ See also : "force-persist", "cookie", and section 7 about ACL usage.
+
+load-server-state-from-file { global | local | none }
+ Allow seamless reload of HAProxy
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
+
+ This directive points HAProxy to a file where server state from the previous
+ running process has been saved. That way, when starting up, before handling
+ traffic, the new process can apply old states to servers exactly as if no
+ reload occurred. The purpose of the "load-server-state-from-file" directive
+ is to tell haproxy which file to use. For now, only two modes are supported:
+ either prevent loading state entirely, or load states from a file containing
+ all backends and servers. The state file can be generated by running the
+ command "show servers state" over the stats socket and redirecting output.
+
+ The format of the file is versioned and is very specific. To understand it,
+ please read the documentation of the "show servers state" command (chapter
+ 9.2 of the Management Guide).
+
+ Arguments:
+ global load the content of the file pointed by the global directive
+ named "server-state-file".
+
+ local load the content of the file pointed by the directive
+ "server-state-file-name" if set. If not set, then the backend
+ name is used as a file name.
+
+ none don't load any state for this backend
+
+ Notes:
+ - server's IP address is not updated unless DNS resolution is enabled on
+ the server. It means that if a server IP address has been changed using
+ the stat socket, this information won't be re-applied after reloading.
+
+ - server's weight is applied from previous running process unless it has
+ changed between previous and new configuration files.
+
+ Example 1:
+
+ Minimal configuration:
+
+ global
+ stats socket /tmp/socket
+ server-state-file /tmp/server_state
+
+ defaults
+ load-server-state-from-file global
+
+ backend bk
+ server s1 127.0.0.1:22 check weight 11
+ server s2 127.0.0.1:22 check weight 12
+
+ Then one can run :
+
+ socat /tmp/socket - <<< "show servers state" > /tmp/server_state
+
+ Content of the file /tmp/server_state would be like this:
+
+ 1
+ # <field names skipped for the doc example>
+ 1 bk 1 s1 127.0.0.1 2 0 11 11 4 6 3 4 6 0 0
+ 1 bk 2 s2 127.0.0.1 2 0 12 12 4 6 3 4 6 0 0
+
+ Example 2:
+
+ Minimal configuration:
+
+ global
+ stats socket /tmp/socket
+ server-state-base /etc/haproxy/states
+
+ defaults
+ load-server-state-from-file local
+
+ backend bk
+ server s1 127.0.0.1:22 check weight 11
+ server s2 127.0.0.1:22 check weight 12
+
+ Then one can run :
+
+ socat /tmp/socket - <<< "show servers state bk" > /etc/haproxy/states/bk
+
+ Content of the file /etc/haproxy/states/bk would be like this:
+
+ 1
+ # <field names skipped for the doc example>
+ 1 bk 1 s1 127.0.0.1 2 0 11 11 4 6 3 4 6 0 0
+ 1 bk 2 s2 127.0.0.1 2 0 12 12 4 6 3 4 6 0 0
+
+ See also: "server-state-file", "server-state-file-name", and
+ "show servers state"
+
+
+log global
+log <address> [len <length>] <facility> [<level> [<minlevel>]]
+no log
+ Enable per-instance logging of events and traffic.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+
+ Prefix :
+ no should be used when the logger list must be flushed. For example,
+ if you don't want to inherit from the default logger list. This
+ prefix does not allow arguments.
+
+ Arguments :
+ global should be used when the instance's logging parameters are the
+ same as the global ones. This is the most common usage. "global"
+ replaces <address>, <facility> and <level> with those of the log
+ entries found in the "global" section. Only one "log global"
+ statement may be used per instance, and this form takes no other
+ parameter.
+
+ <address> indicates where to send the logs. It takes the same format as
+ for the "global" section's logs, and can be one of :
+
+ - An IPv4 address optionally followed by a colon (':') and a UDP
+ port. If no port is specified, 514 is used by default (the
+ standard syslog port).
+
+ - An IPv6 address followed by a colon (':') and optionally a UDP
+ port. If no port is specified, 514 is used by default (the
+ standard syslog port).
+
+ - A filesystem path to a UNIX domain socket, keeping in mind
+ considerations for chroot (be sure the path is accessible
+ inside the chroot) and uid/gid (be sure the path is
+ appropriately writeable).
+
+ You may want to reference some environment variables in the
+ address parameter, see section 2.3 about environment variables.
+
+ <length> is an optional maximum line length. Log lines larger than this
+ value will be truncated before being sent. The reason is that
+ syslog servers act differently on log line length. All servers
+ support the default value of 1024, but some servers simply drop
+ larger lines while others do log them. If a server supports long
+ lines, it may make sense to set this value here in order to avoid
+ truncating long lines. Similarly, if a server drops long lines,
+ it is preferable to truncate them before sending them. Accepted
+ values are 80 to 65535 inclusive. The default value of 1024 is
+ generally fine for all standard usages. Some specific cases of
+ long captures or JSON-formatted logs may require larger values.
+
+ <facility> must be one of the 24 standard syslog facilities :
+
+ kern user mail daemon auth syslog lpr news
+ uucp cron auth2 ftp ntp audit alert cron2
+ local0 local1 local2 local3 local4 local5 local6 local7
+
+ <level> is optional and can be specified to filter outgoing messages. By
+ default, all messages are sent. If a level is specified, only
+ messages with a severity at least as important as this level
+ will be sent. An optional minimum level can be specified. If it
+ is set, logs emitted with a more severe level than this one will
+ be capped to this level. This is used to avoid sending "emerg"
+ messages on all terminals on some default syslog configurations.
+ Eight levels are known :
+
+ emerg alert crit err warning notice info debug
+
+ It is important to keep in mind that it is the frontend which decides what to
+ log from a connection, and that in case of content switching, the log entries
+ from the backend will be ignored. Connections are logged at level "info".
+
+ However, backend log declarations define how and where server status changes
+ will be logged. Level "notice" will be used to indicate a server going up,
+ "warning" will be used for termination signals and definitive service
+ termination, and "alert" will be used when a server goes down.
+
+ Note : According to RFC3164, messages are truncated to 1024 bytes before
+ being emitted.
+
+ Example :
+ log global
+ log 127.0.0.1:514 local0 notice # only send important events
+ log 127.0.0.1:514 local0 notice notice # same but limit output level
+ log "${LOCAL_SYSLOG}:514" local0 notice # send to local server
+
+
+log-format <string>
+ Specifies the log format string to use for traffic logs
+ May be used in sections: defaults | frontend | listen | backend
+ yes | yes | yes | no
+
+ This directive specifies the log format string that will be used for all logs
+ resulting from traffic passing through the frontend using this line. If the
+ directive is used in a defaults section, all subsequent frontends will use
+ the same log format. Please see section 8.2.4 which covers the log format
+ string in depth.
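+
+ A short sketch using standard log-format variables (spaces must be escaped
+ with a backslash inside the format string):
+
+ frontend fe_main
+ log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %ST\ %B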
+
+log-format-sd <string>
+ Specifies the RFC5424 structured-data log format string
+ May be used in sections: defaults | frontend | listen | backend
+ yes | yes | yes | no
+
+ This directive specifies the RFC5424 structured-data log format string that
+ will be used for all logs resulting from traffic passing through the frontend
+ using this line. If the directive is used in a defaults section, all
+ subsequent frontends will use the same log format. Please see section 8.2.4
+ which covers the log format string in depth.
+
+ See https://tools.ietf.org/html/rfc5424#section-6.3 for more information
+ about the RFC5424 structured-data part.
+
+ Note : This log format string will be used only for loggers that have set
+ log format to "rfc5424".
+
+ Example :
+ log-format-sd [exampleSDID@1234\ bytes=\"%B\"\ status=\"%ST\"]
+
+
+log-tag <string>
+ Specifies the log tag to use for all outgoing logs
+ May be used in sections: defaults | frontend | listen | backend
+ yes | yes | yes | yes
+
+ Sets the tag field in the syslog header to this string. It defaults to the
+ log-tag set in the global section, otherwise the program name as launched
+ from the command line, which usually is "haproxy". Sometimes it can be useful
+ to differentiate between multiple processes running on the same host, or to
+ differentiate customer instances running in the same process. In the backend,
+ logs about servers up/down will use this tag. As a hint, it can be convenient
+ to set a log-tag related to a hosted customer in a defaults section, then to
+ put all the frontends and backends for that customer after it, and to start
+ another customer in a new defaults section. See also the global "log-tag"
+ directive.
+
+max-keep-alive-queue <value>
+ Set the maximum server queue size for maintaining keep-alive connections
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
+
+ HTTP keep-alive tries to reuse the same server connection whenever possible,
+ but sometimes it can be counter-productive, for example if a server has a lot
+ of connections while other ones are idle. This is especially true for static
+ servers.
+
+ The purpose of this setting is to set a threshold on the number of queued
+ connections at which haproxy stops trying to reuse the same server and prefers
+ to find another one. The default value, -1, means there is no limit. A value
+ of zero means that keep-alive requests will never be queued. For very close
+ servers which can be reached with a low latency and which are not sensitive to
+ breaking keep-alive, a low value is recommended (eg: local static server can
+ use a value of 10 or less). For remote servers suffering from a high latency,
+ higher values might be needed to cover for the latency and/or the cost of
+ picking a different server.
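+
+ For instance, a sketch for a farm of nearby static servers:
+
+ backend bk_static
+ max-keep-alive-queue 10
+ server s1 192.0.2.20:80 maxconn 100
+ server s2 192.0.2.21:80 maxconn 100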
+
+ Note that this has no impact on requests which are maintained to the same
+ server consecutively to a 401 response. They will still go to the same server
+ even if they have to be queued.
+
+ See also : "option http-server-close", "option prefer-last-server", server
+ "maxconn" and cookie persistence.
+
+
+maxconn <conns>
+ Fix the maximum number of concurrent connections on a frontend
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments :
+ <conns> is the maximum number of concurrent connections the frontend will
+ accept to serve. Excess connections will be queued by the system
+ in the socket's listen queue and will be served once a connection
+ closes.
+
+ If the system supports it, it can be useful on big sites to raise this limit
+ very high so that haproxy manages connection queues, instead of leaving the
+ clients with unanswered connection attempts. This value should not exceed the
+ global maxconn. Also, keep in mind that a connection contains two buffers
+ of 8kB each, as well as some other data resulting in about 17 kB of RAM being
+ consumed per established connection. That means that a medium system equipped
+ with 1GB of RAM can withstand around 40000-50000 concurrent connections if
+ properly tuned.
+
+ Also, when <conns> is set to large values, it is possible that the servers
+ are not sized to accept such loads, and for this reason it is generally wise
+ to assign them some reasonable connection limits.
+
+ By default, this value is set to 2000.
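+
+ A minimal sketch raising the frontend limit (the value is illustrative and
+ must remain below the global maxconn):
+
+ frontend fe_main
+ bind :80
+ maxconn 10000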
+
+ See also : "server", global section's "maxconn", "fullconn"
+
+
+mode { tcp|http|health }
+ Set the running mode or protocol of the instance
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments :
+ tcp The instance will work in pure TCP mode. A full-duplex connection
+ will be established between clients and servers, and no layer 7
+ examination will be performed. This is the default mode. It
+ should be used for SSL, SSH, SMTP, ...
+
+ http The instance will work in HTTP mode. The client request will be
+ analyzed in depth before connecting to any server. Any request
+ which is not RFC-compliant will be rejected. Layer 7 filtering,
+ processing and switching will be possible. This is the mode which
+ brings HAProxy most of its value.
+
+ health The instance will work in "health" mode. It will just reply "OK"
+ to incoming connections and close the connection. Alternatively,
+ if the "httpchk" option is set, "HTTP/1.0 200 OK" will be sent
+ instead. Nothing will be logged in either case. This mode is used
+ to reply to external components health checks. This mode is
+ deprecated and should not be used anymore as it is possible to do
+ the same and even better by combining TCP or HTTP modes with the
+ "monitor" keyword.
+
+ When doing content switching, it is mandatory that the frontend and the
+ backend are in the same mode (generally HTTP), otherwise the configuration
+ will be refused.
+
+ Example :
+ defaults http_instances
+ mode http
+
+ See also : "monitor", "monitor-net"
+
+
+monitor fail { if | unless } <condition>
+ Add a condition to report a failure to a monitor HTTP request.
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | no
+ Arguments :
+ if <cond> the monitor request will fail if the condition is satisfied,
+ and will succeed otherwise. The condition should describe a
+ combined test which must induce a failure if all conditions
+ are met, for instance a low number of servers both in a
+ backend and its backup.
+
+ unless <cond> the monitor request will succeed only if the condition is
+ satisfied, and will fail otherwise. Such a condition may be
+ based on a test on the presence of a minimum number of active
+ servers in a list of backends.
+
+ This statement adds a condition which can force the response to a monitor
+ request to report a failure. By default, when an external component queries
+ the URI dedicated to monitoring, a 200 response is returned. When one of the
+ conditions above is met, haproxy will return 503 instead of 200. This is
+ very useful to report a site failure to an external component which may base
+ routing advertisements between multiple sites on the availability reported by
+ haproxy. In this case, one would rely on an ACL involving the "nbsrv"
+ criterion. Note that "monitor fail" only works in HTTP mode. Both status
+ messages may be tweaked using "errorfile" or "errorloc" if needed.
+
+ Example:
+ frontend www
+ mode http
+ acl site_dead nbsrv(dynamic) lt 2
+ acl site_dead nbsrv(static) lt 2
+ monitor-uri /site_alive
+ monitor fail if site_dead
+
+ See also : "monitor-net", "monitor-uri", "errorfile", "errorloc"
+
+
+monitor-net <source>
+ Declare a source network which is limited to monitor requests
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments :
+ <source> is the source IPv4 address or network whose requests will only
+ receive monitor responses. It can be either an IPv4 address, a
+ host name, or an address followed by a slash ('/') followed by a
+ mask.
+
+ In TCP mode, any connection coming from a source matching <source> will cause
+ the connection to be immediately closed without any log. This allows another
+ equipment to probe the port and verify that it is still listening, without
+ forwarding the connection to a remote server.
+
+ In HTTP mode, a connection coming from a source matching <source> will be
+ accepted, the following response will be sent without waiting for a request,
+ then the connection will be closed : "HTTP/1.0 200 OK". This is normally
+ enough for any front-end HTTP probe to detect that the service is UP and
+ running without forwarding the request to a backend server. Note that this
+ response is sent in raw format, without any transformation. This is important
+ as it means that it will not be SSL-encrypted on SSL listeners.
+
+ Monitor requests are processed very early, just after tcp-request connection
+ ACLs which are the only ones able to block them. These connections are short
+ lived and never wait for any data from the client. They cannot be logged, and
+ it is the intended purpose. They are only used to report HAProxy's health to
+ an upper component, nothing more. Please note that "monitor fail" rules do
+ not apply to connections intercepted by "monitor-net".
+
+ Last, please note that only one "monitor-net" statement can be specified in
+ a frontend. If more than one is found, only the last one will be considered.
+
+ Example :
+ # addresses .252 and .253 are just probing us.
+ frontend www
+ monitor-net 192.168.0.252/31
+
+ See also : "monitor fail", "monitor-uri"
+
+
+monitor-uri <uri>
+ Intercept a URI used by external components' monitor requests
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments :
+ <uri> is the exact URI which we want to intercept to return HAProxy's
+ health status instead of forwarding the request.
+
+ When an HTTP request referencing <uri> is received on a frontend,
+ HAProxy will neither forward it nor log it, but instead will return either
+ "HTTP/1.0 200 OK" or "HTTP/1.0 503 Service unavailable", depending on failure
+ conditions defined with "monitor fail". This is normally enough for any
+ front-end HTTP probe to detect that the service is UP and running without
+ forwarding the request to a backend server. Note that the HTTP method, the
+ version and all headers are ignored, but the request must at least be valid
+ at the HTTP level. This keyword may only be used with an HTTP-mode frontend.
+
+ Monitor requests are processed very early. It is not possible to block nor
+ divert them using ACLs. They cannot be logged either, and it is the intended
+ purpose. They are only used to report HAProxy's health to an upper component,
+ nothing more. However, it is possible to add any number of conditions using
+ "monitor fail" and ACLs so that the result can be adjusted to whatever check
+ can be imagined (most often the number of available servers in a backend).
+
+ Example :
+ # Use /haproxy_test to report haproxy's status
+ frontend www
+ mode http
+ monitor-uri /haproxy_test
+
+ See also : "monitor fail", "monitor-net"
+
+
+option abortonclose
+no option abortonclose
+ Enable or disable early dropping of aborted requests pending in queues.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments : none
+
+ In presence of very high loads, the servers will take some time to respond.
+ The per-instance connection queue will inflate, and the response time will
+ increase in proportion to the size of the queue times the average per-session
+ response time. When clients wait for more than a few seconds, they will
+ often hit the "STOP" button on their browser, leaving a useless request in
+ the queue and slowing down other users, as well as the servers, because the
+ request will eventually be served, then aborted at the first error
+ encountered while delivering the response.
+
+ As there is no way to distinguish between a full STOP and a simple output
+ close on the client side, HTTP agents should be conservative and consider
+ that the client might only have closed its output channel while waiting for
+ the response. However, this introduces risks of congestion when lots of users
+ do the same, and is completely useless nowadays because probably no client at
+ all will close the session while waiting for the response. Some HTTP agents
+ support this behaviour (Squid, Apache, HAProxy), and others do not (TUX, most
+ hardware-based load balancers). So the probability for a closed input channel
+ to represent a user hitting the "STOP" button is close to 100%, and the risk
+ of being the single component to break rare but valid traffic is extremely
+ low, which adds to the temptation to abort a session early while it is still
+ not served, so as not to pollute the servers.
+
+ In HAProxy, the user can choose the desired behaviour using the option
+ "abortonclose". By default (without the option) the behaviour is HTTP
+ compliant and aborted requests will be served. But when the option is
+ specified, a session with an incoming channel closed will be aborted while
+ it is still possible, either pending in the queue for a connection slot, or
+ during the connection establishment if the server has not yet acknowledged
+ the connection request. This considerably reduces the queue size and the load
+ on saturated servers when users are tempted to click on STOP, which in turn
+ reduces the response time for other users.
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
+
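+ Example (a sketch; the server address and limits below are arbitrary) :
+     backend dynamic
+         mode http
+         option abortonclose
+         timeout queue 10s
+         server srv1 192.168.0.1:80 check maxconn 50
+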
+ See also : "timeout queue" and server's "maxconn" and "maxqueue" parameters
+
+
+option accept-invalid-http-request
+no option accept-invalid-http-request
+ Enable or disable relaxing of HTTP request parsing
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments : none
+
+ By default, HAProxy complies with RFC7230 in terms of message parsing. This
+ means that invalid characters in header names are not permitted and cause an
+ error to be returned to the client. This is the desired behaviour as such
+ forbidden characters are essentially used to build attacks exploiting server
+ weaknesses, and bypass security filtering. Sometimes, a buggy browser or
+ server will emit invalid header names for whatever reason (configuration,
+ implementation) and the issue will not be immediately fixed. In such a case,
+ it is possible to relax HAProxy's header name parser to accept any character
+ even if that does not make sense, by specifying this option. Similarly, the
+ list of characters allowed to appear in a URI is well defined by RFC3986, and
+ chars 0-31, 32 (space), 34 ('"'), 60 ('<'), 62 ('>'), 92 ('\'), 94 ('^'), 96
+ ('`'), 123 ('{'), 124 ('|'), 125 ('}'), 127 (delete) and anything above are
+ not allowed at all. HAProxy always blocks a number of them (0..32, 127). The
+ remaining ones are blocked by default unless this option is enabled. This
+ option also relaxes the test on the HTTP version, it allows HTTP/0.9 requests
+ to pass through (no version specified) and multiple digits for both the major
+ and the minor version.
+
+ This option should never be enabled by default as it hides application bugs
+ and opens security breaches. It should only be deployed after a problem has
+ been confirmed.
+
+ When this option is enabled, erroneous header names will still be accepted in
+ requests, but the complete request will be captured in order to permit later
+ analysis using the "show errors" request on the UNIX stats socket. Similarly,
+ requests containing invalid chars in the URI part will be logged. Doing this
+ also helps confirming that the issue has been solved.
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
+
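+ Example (illustrative; enable only on the frontend facing the buggy agent) :
+     frontend legacy_clients
+         mode http
+         option accept-invalid-http-request
+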
+ See also : "option accept-invalid-http-response" and "show errors" on the
+ stats socket.
+
+
+option accept-invalid-http-response
+no option accept-invalid-http-response
+ Enable or disable relaxing of HTTP response parsing
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments : none
+
+ By default, HAProxy complies with RFC7230 in terms of message parsing. This
+ means that invalid characters in header names are not permitted and cause an
+ error to be returned to the client. This is the desired behaviour as such
+ forbidden characters are essentially used to build attacks exploiting server
+ weaknesses, and bypass security filtering. Sometimes, a buggy browser or
+ server will emit invalid header names for whatever reason (configuration,
+ implementation) and the issue will not be immediately fixed. In such a case,
+ it is possible to relax HAProxy's header name parser to accept any character
+ even if that does not make sense, by specifying this option. This option also
+ relaxes the test on the HTTP version format, it allows multiple digits for
+ both the major and the minor version.
+
+ This option should never be enabled by default as it hides application bugs
+ and opens security breaches. It should only be deployed after a problem has
+ been confirmed.
+
+ When this option is enabled, erroneous header names will still be accepted in
+ responses, but the complete response will be captured in order to permit
+ later analysis using the "show errors" request on the UNIX stats socket.
+ Doing this also helps confirming that the issue has been solved.
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
+
+ See also : "option accept-invalid-http-request" and "show errors" on the
+ stats socket.
+
+
+option allbackups
+no option allbackups
+ Use either all backup servers at a time or only the first one
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments : none
+
+ By default, the first operational backup server gets all traffic when normal
+ servers are all down. Sometimes, it may be preferred to use multiple backups
+ at once, because one will not be enough. When "option allbackups" is enabled,
+ the load balancing will be performed among all backup servers when all normal
+ ones are unavailable. The same load balancing algorithm will be used and the
+ servers' weights will be respected. Thus, there will not be any priority
+ order between the backup servers anymore.
+
+ This option is mostly used with static server farms dedicated to return a
+ "sorry" page when an application is completely offline.
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
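+
+ Example (a sketch; server names and addresses are placeholders) :
+     backend www
+         balance roundrobin
+         option allbackups
+         server srv1 192.168.0.1:80 check
+         server bkp1 192.168.0.11:80 check backup
+         server bkp2 192.168.0.12:80 check backup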
+
+
+option checkcache
+no option checkcache
+ Analyze all server responses and block responses with cacheable cookies
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments : none
+
+ Some high-level frameworks set application cookies everywhere and do not
+ always let enough control to the developer to manage how the responses should
+ be cached. When a session cookie is returned on a cacheable object, there is a
+ high risk of session crossing or stealing between users traversing the same
+ caches. In some situations, it is better to block the response than to let
+ some sensitive session information go in the wild.
+
+ The option "checkcache" enables deep inspection of all server responses for
+ strict compliance with HTTP specification in terms of cacheability. It
+ carefully checks "Cache-control", "Pragma" and "Set-cookie" headers in server
+ response to check if there's a risk of caching a cookie on a client-side
+ proxy. When this option is enabled, the only responses which can be delivered
+ to the client are :
+ - all those without "Set-Cookie" header ;
+ - all those with a return code other than 200, 203, 206, 300, 301, 410,
+ provided that the server has not set a "Cache-control: public" header ;
+ - all those that come from a POST request, provided that the server has not
+ set a 'Cache-Control: public' header ;
+ - those with a 'Pragma: no-cache' header
+ - those with a 'Cache-control: private' header
+ - those with a 'Cache-control: no-store' header
+ - those with a 'Cache-control: max-age=0' header
+ - those with a 'Cache-control: s-maxage=0' header
+ - those with a 'Cache-control: no-cache' header
+ - those with a 'Cache-control: no-cache="set-cookie"' header
+ - those with a 'Cache-control: no-cache="set-cookie,' header
+ (allowing other fields after set-cookie)
+
+ If a response doesn't respect these requirements, then it will be blocked
+ just as if it was from an "rspdeny" filter, with an "HTTP 502 bad gateway".
+ The session state shows "PH--" meaning that the proxy blocked the response
+ during headers processing. Additionally, an alert will be sent in the logs so
+ that admins are informed that there's something to be fixed.
+
+ Due to the high impact on the application, the application should be tested
+ in depth with the option enabled before going to production. It is also a
+ good practice to always activate it during tests, even if it is not used in
+ production, as it will report potentially dangerous application behaviours.
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
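+
+ Example (illustrative; test the application thoroughly before production) :
+     backend app
+         mode http
+         option checkcache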
+
+
+option clitcpka
+no option clitcpka
+ Enable or disable the sending of TCP keepalive packets on the client side
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments : none
+
+ When there is a firewall or any session-aware component between a client and
+ a server, and when the protocol involves very long sessions with long idle
+ periods (eg: remote desktops), there is a risk that one of the intermediate
+ components decides to expire a session which has remained idle for too long.
+
+ Enabling socket-level TCP keep-alives makes the system regularly send packets
+ to the other end of the connection, leaving it active. The delay between
+ keep-alive probes is controlled by the system only and depends both on the
+ operating system and its tuning parameters.
+
+ It is important to understand that keep-alive packets are neither emitted nor
+ received at the application level. It is only the network stacks which see
+ them. For this reason, even if one side of the proxy already uses keep-alives
+ to maintain its connection alive, those keep-alive packets will not be
+ forwarded to the other side of the proxy.
+
+ Please note that this has nothing to do with HTTP keep-alive.
+
+ Using option "clitcpka" enables the emission of TCP keep-alive probes on the
+ client side of a connection, which should help when session expirations are
+ noticed between HAProxy and a client.
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
+
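+ Example (a sketch for long idle sessions such as remote desktops; the port
+ and timeout are illustrative) :
+     listen rdp
+         bind :3389
+         mode tcp
+         option clitcpka
+         timeout client 8h
+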
+ See also : "option srvtcpka", "option tcpka"
+
+
+option contstats
+ Enable continuous traffic statistics updates
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments : none
+
+ By default, counters used for statistics calculation are incremented
+ only when a session finishes. It works quite well when serving small
+ objects, but with big ones (for example large images or archives) or
+ with A/V streaming, a graph generated from haproxy counters looks like
+ a hedgehog. With this option enabled, counters are incremented continuously,
+ over the whole duration of a session. Since this recounting touches a hot
+ path directly, it is not enabled by default, as it has a small performance
+ impact (~0.5%).
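+
+ Example (illustrative; useful for streaming or large-object frontends) :
+     frontend media
+         mode http
+         option contstats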
+
+
+option dontlog-normal
+no option dontlog-normal
+ Enable or disable logging of normal, successful connections
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments : none
+
+ There are large sites dealing with several thousand connections per second
+ and for which logging is a major pain. Some of them are even forced to turn
+ logs off and cannot debug production issues. Setting this option ensures that
+ normal connections, those which experience no error, no timeout, no retry nor
+ redispatch, will not be logged. This leaves disk space for anomalies. In HTTP
+ mode, the response status code is checked and return codes 5xx will still be
+ logged.
+
+ It is strongly discouraged to use this option as most of the time, the key to
+ complex issues is in the normal logs which will not be logged here. If you
+ need to separate logs, see the "log-separate-errors" option instead.
+
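+ Example (illustrative; only anomalous connections and 5xx responses will be
+ logged) :
+     frontend www
+         mode http
+         option dontlog-normal
+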
+ See also : "log", "dontlognull", "log-separate-errors" and section 8 about
+ logging.
+
+
+option dontlognull
+no option dontlognull
+ Enable or disable logging of null connections
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments : none
+
+ In certain environments, there are components which will regularly connect to
+ various systems to ensure that they are still alive. It can be the case from
+ another load balancer as well as from monitoring systems. By default, even a
+ simple port probe or scan will produce a log. If those connections pollute
+ the logs too much, it is possible to enable option "dontlognull" to indicate
+ that a connection on which no data has been transferred will not be logged,
+ which typically corresponds to those probes. Note that errors will still be
+ returned to the client and accounted for in the stats. If this is not what is
+ desired, option http-ignore-probes can be used instead.
+
+ It is generally recommended not to use this option in uncontrolled
+ environments (eg: internet), otherwise scans and other malicious activities
+ would not be logged.
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
+
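+ Example (a sketch for a frontend probed by an external load balancer) :
+     frontend www
+         mode http
+         option dontlognull
+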
+ See also : "log", "http-ignore-probes", "monitor-net", "monitor-uri", and
+ section 8 about logging.
+
+
+option forceclose
+no option forceclose
+ Enable or disable active connection closing after response is transferred.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments : none
+
+ Some HTTP servers do not necessarily close the connections when they receive
+ the "Connection: close" set by "option httpclose", and if the client does not
+ close either, then the connection remains open till the timeout expires. This
+ causes a high number of simultaneous connections on the servers and shows
+ high global session times in the logs.
+
+ When this happens, it is possible to use "option forceclose". It will
+ actively close the outgoing server channel as soon as the server has finished
+ responding, and release some resources earlier than with "option httpclose".
+
+ This option may also be combined with "option http-pretend-keepalive", which
+ will disable sending of the "Connection: close" header, but will still cause
+ the connection to be closed once the whole response is received.
+
+ This option disables and replaces any previous "option httpclose", "option
+ http-server-close", "option http-keep-alive", or "option http-tunnel".
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
+
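+ Example (illustrative) :
+     backend legacy_app
+         mode http
+         option forceclose
+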
+ See also : "option httpclose" and "option http-pretend-keepalive"
+
+
+option forwardfor [ except <network> ] [ header <name> ] [ if-none ]
+ Enable insertion of the X-Forwarded-For header to requests sent to servers
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments :
+ <network> is an optional argument used to disable this option for sources
+ matching <network>
+ <name> an optional argument to specify a different "X-Forwarded-For"
+ header name.
+
+ Since HAProxy works in reverse-proxy mode, the servers see its IP address as
+ their client address. This is sometimes annoying when the client's IP address
+ is expected in server logs. To solve this problem, the well-known HTTP header
+ "X-Forwarded-For" may be added by HAProxy to all requests sent to the server.
+ This header contains a value representing the client's IP address. Since this
+ header is always appended at the end of the existing header list, the server
+ must be configured to always use the last occurrence of this header only. See
+ the server's manual to find how to enable use of this standard header. Note
+ that only the last occurrence of the header must be used, since it is really
+ possible that the client has already brought one.
+
+ The keyword "header" may be used to supply a different header name to replace
+ the default "X-Forwarded-For". This can be useful where you might already
+ have a "X-Forwarded-For" header from a different application (eg: stunnel),
+ and you need to preserve it. It also helps if your backend server doesn't use
+ the "X-Forwarded-For" header and requires a different one (eg: Zeus Web
+ Servers require "X-Cluster-Client-IP").
+
+ Sometimes, the same HAProxy instance may be shared between a direct client
+ access and a reverse-proxy access (for instance when an SSL reverse-proxy is
+ used to decrypt HTTPS traffic). It is possible to disable the addition of the
+ header for a known source address or network by adding the "except" keyword
+ followed by the network address. In this case, any source IP matching the
+ network will not cause an addition of this header. Most common uses are with
+ private networks or 127.0.0.1.
+
+ Alternatively, the keyword "if-none" states that the header will only be
+ added if it is not present. This should only be used in perfectly trusted
+ environment, as this might cause a security issue if headers reaching haproxy
+ are under the control of the end-user.
+
+ This option may be specified either in the frontend or in the backend. If at
+ least one of them uses it, the header will be added. Note that the backend's
+ setting of the header subargument takes precedence over the frontend's if
+ both are defined. In the case of the "if-none" argument, if at least one of
+ the frontend or the backend does not specify it, it wants the addition to be
+ mandatory, so it wins.
+
+ Examples :
+ # Public HTTP address also used by stunnel on the same machine
+ frontend www
+ mode http
+ option forwardfor except 127.0.0.1 # stunnel already adds the header
+
+ # Those servers want the IP Address in X-Client
+ backend www
+ mode http
+ option forwardfor header X-Client
+
+ See also : "option httpclose", "option http-server-close",
+ "option forceclose", "option http-keep-alive"
+
+
+option http-buffer-request
+no option http-buffer-request
+ Enable or disable waiting for whole HTTP request body before proceeding
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments : none
+
+ It is sometimes desirable to wait for the body of an HTTP request before
+ taking a decision. This is what is being done by "balance url_param" for
+ example. The first use case is to buffer requests from slow clients before
+ connecting to the server. Another use case consists in taking the routing
+ decision based on the request body's contents. This option placed in a
+ frontend or backend forces the HTTP processing to wait until either the whole
+ body is received, or the request buffer is full, or the first chunk is
+ complete in case of chunked encoding. It can have undesired side effects with
+ some applications abusing HTTP by expecting unbuffered transmissions between
+ the frontend and the backend, so this should definitely not be used by
+ default.
+
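+ Example (a sketch combining this option with "balance url_param"; the
+ parameter name is illustrative) :
+     backend app
+         mode http
+         option http-buffer-request
+         balance url_param session_id check_post
+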
+ See also : "option http-no-delay", "timeout http-request"
+
+
+option http-ignore-probes
+no option http-ignore-probes
+ Enable or disable logging of null connections and request timeouts
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments : none
+
+ Recently some browsers started to implement a "pre-connect" feature
+ consisting in speculatively connecting to some recently visited web sites
+ just in case the user would like to visit them. This results in many
+ connections being established to web sites, which end up in 408 Request
+ Timeout if the timeout strikes first, or 400 Bad Request when the browser
+ decides to close them first. These pollute the logs and feed the error
+ counters. There was already "option dontlognull" but it's insufficient in
+ this case. Instead, this option does the following things :
+ - prevent any 400/408 message from being sent to the client if nothing
+ was received over a connection before it was closed ;
+ - prevent any log from being emitted in this situation ;
+ - prevent any error counter from being incremented
+
+ That way the empty connection is silently ignored. Note that it is better
+ not to use this unless it is clear that it is needed, because it will hide
+ real problems. The most common reason for not receiving a request and seeing
+ a 408 is due to an MTU inconsistency between the client and an intermediary
+ element such as a VPN, which blocks too large packets. These issues are
+ generally seen with POST requests as well as GET with large cookies. The logs
+ are often the only way to detect them.
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
+
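+ Example (illustrative; silences browser pre-connect probes) :
+     frontend www
+         mode http
+         option http-ignore-probes
+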
+ See also : "log", "dontlognull", "errorfile", and section 8 about logging.
+
+
+option http-keep-alive
+no option http-keep-alive
+ Enable or disable HTTP keep-alive from client to server
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments : none
+
+ By default HAProxy operates in keep-alive mode with regards to persistent
+ connections: for each connection it processes each request and response, and
+ leaves the connection idle on both sides between the end of a response and the
+ start of a new request. This mode may be changed by several options such as
+ "option http-server-close", "option forceclose", "option httpclose" or
+ "option http-tunnel". This option allows to set back the keep-alive mode,
+ which can be useful when another mode was used in a defaults section.
+
+ Setting "option http-keep-alive" enables HTTP keep-alive mode on the client-
+ and server- sides. This provides the lowest latency on the client side (slow
+ network) and the fastest session reuse on the server side at the expense
+ of maintaining idle connections to the servers. In general, it is possible
+ with this option to achieve approximately twice the request rate that the
+ "http-server-close" option achieves on small objects. There are mainly two
+ situations where this option may be useful :
+
+ - when the server is non-HTTP compliant and authenticates the connection
+ instead of requests (eg: NTLM authentication)
+
+ - when the cost of establishing the connection to the server is significant
+ compared to the cost of retrieving the associated object from the server.
+
+ This last case can happen when the server is a fast static server or cache.
+ In this case, the server will need to be properly tuned to support high enough
+ connection counts because connections will last until the client sends another
+ request.
+
+ If the client request has to go to another backend or another server due to
+ content switching or the load balancing algorithm, the idle connection will
+ immediately be closed and a new one re-opened. Option "prefer-last-server" is
+ available to try to optimize server selection so that if the server currently
+ attached to an idle connection is usable, it will be used.
+
+ In general it is preferred to use "option http-server-close" with application
+ servers, and some static servers might benefit from "option http-keep-alive".
+
+ At the moment, logs will not indicate whether requests came from the same
+ session or not. The accept date reported in the logs corresponds to the end
+ of the previous request, and the request time corresponds to the time spent
+ waiting for a new request. The keep-alive request time is still bound to the
+ timeout defined by "timeout http-keep-alive" or "timeout http-request" if
+ not set.
+
+ This option disables and replaces any previous "option httpclose", "option
+ http-server-close", "option forceclose" or "option http-tunnel". When backend
+ and frontend options differ, all of these 4 options have precedence over
+ "option http-keep-alive".
+
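+ Example (a sketch re-enabling keep-alive on a static backend after a
+ stricter mode was set in defaults; the address is a placeholder) :
+     defaults
+         mode http
+         option http-server-close
+
+     backend static
+         option http-keep-alive
+         option prefer-last-server
+         server s1 192.168.0.1:80 check
+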
+ See also : "option forceclose", "option http-server-close",
+ "option prefer-last-server", "option http-pretend-keepalive",
+ "option httpclose", and "1.1. The HTTP transaction model".
+
+
+option http-no-delay
+no option http-no-delay
+ Instruct the system to favor low interactive delays over performance in HTTP
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments : none
+
+ In HTTP, each payload is unidirectional and has no notion of interactivity.
+ Any agent is expected to queue data somewhat for a reasonably low delay.
+ There are some very rare server-to-server applications that abuse the HTTP
+ protocol and expect the payload phase to be highly interactive, with many
+ interleaved data chunks in both directions within a single request. This is
+ absolutely not supported by the HTTP specification and will not work across
+ most proxies or servers. When such applications attempt to do this through
+ haproxy, it works but they will experience high delays due to the network
+ optimizations which favor performance by instructing the system to wait for
+ enough data to be available in order to only send full packets. Typical
+ delays are around 200 ms per round trip. Note that this only happens with
+ abnormal uses. Normal uses such as CONNECT requests or WebSockets are not
+ affected.
+
+ When "option http-no-delay" is present in either the frontend or the backend
+ used by a connection, all such optimizations will be disabled in order to
+ make the exchanges as fast as possible. Of course this offers no guarantee on
+ the functionality, as it may break at any other place. But if it works via
+ HAProxy, it will work as fast as possible. This option should never be used
+ by default, and should never be used at all unless such a buggy application
+ is discovered. The impact of using this option is an increase of bandwidth
+ usage and CPU usage, which may significantly lower performance in high
+ latency environments.
+
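+ Example (illustrative; only for a confirmed interactive-payload application) :
+     backend chatty_app
+         mode http
+         option http-no-delay
+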
+ See also : "option http-buffer-request"
+
+
+option http-pretend-keepalive
+no option http-pretend-keepalive
+ Define whether haproxy will announce keepalive to the server or not
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments : none
+
+ When running with "option http-server-close" or "option forceclose", haproxy
+ adds a "Connection: close" header to the request forwarded to the server.
+ Unfortunately, when some servers see this header, they automatically refrain
+ from using the chunked encoding for responses of unknown length, while this
+ is totally unrelated. The immediate effect is that this prevents haproxy from
+ maintaining the client connection alive. A second effect is that a client or
+ a cache could receive an incomplete response without being aware of it, and
+ consider the response complete.
+
+ By setting "option http-pretend-keepalive", haproxy will make the server
+ believe it will keep the connection alive. The server will then not fall back
+ to the abnormal, undesired behavior described above. When haproxy gets the
+ whole response, it
+ will close the connection with the server just as it would do with the
+ "forceclose" option. That way the client gets a normal response and the
+ connection is correctly closed on the server side.
+
+ It is recommended not to enable this option by default, because most servers
+ will more efficiently close the connection themselves after the last packet,
+ and release their buffers slightly earlier. Also, the added packet on the
+ network could slightly reduce the overall peak performance. However it is
+ worth noting that when this option is enabled, haproxy will have slightly
+ less work to do. So if haproxy is the bottleneck on the whole architecture,
+ enabling this option might save a few CPU cycles.
+
+ This option may be set both in a frontend and in a backend. It is enabled if
+ at least one of the frontend or backend holding a connection has it enabled.
+ This option may be combined with "option httpclose", which will cause
+ keepalive to be announced to the server and close to be announced to the
+ client. This practice is discouraged though.
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
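+
+ For example, a hypothetical backend working around such servers could combine
+ it with "option http-server-close" (names and addresses below are
+ illustrative) :
+
+        backend bk_chunky
+            mode http
+            option http-server-close
+            option http-pretend-keepalive
+            server srv1 192.168.0.11:80 check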
+
+ See also : "option forceclose", "option http-server-close", and
+ "option http-keep-alive"
+
+
+option http-server-close
+no option http-server-close
+ Enable or disable HTTP connection closing on the server side
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments : none
+
+ By default HAProxy operates in keep-alive mode with regards to persistent
+ connections: for each connection it processes each request and response, and
+ leaves the connection idle on both sides between the end of a response and
+ the start of a new request. This mode may be changed by several options such
+ as "option http-server-close", "option forceclose", "option httpclose" or
+ "option http-tunnel". Setting "option http-server-close" enables HTTP
+ connection-close mode on the server side while keeping the ability to support
+ HTTP keep-alive and pipelining on the client side. This provides the lowest
+ latency on the client side (slow network) and the fastest session reuse on
+ the server side to save server resources, similarly to "option forceclose".
+ It also permits non-keepalive capable servers to be served in keep-alive mode
+ to the clients if they conform to the requirements of RFC2616. Please note
+ that some servers do not always conform to those requirements when they see
+ "Connection: close" in the request. The effect will be that keep-alive will
+ never be used. A workaround consists in enabling "option
+ http-pretend-keepalive".
+
+ At the moment, logs will not indicate whether requests came from the same
+ session or not. The accept date reported in the logs corresponds to the end
+ of the previous request, and the request time corresponds to the time spent
+ waiting for a new request. The keep-alive request time is still bound to the
+ timeout defined by "timeout http-keep-alive" or "timeout http-request" if
+ not set.
+
+ This option may be set both in a frontend and in a backend. It is enabled if
+ at least one of the frontend or backend holding a connection has it enabled.
+ It disables and replaces any previous "option httpclose", "option forceclose",
+ "option http-tunnel" or "option http-keep-alive". Please check section 4
+ ("Proxies") to see how this option combines with others when frontend and
+ backend options differ.
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
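+
+ For example, a hypothetical listener keeping client-side keep-alive while
+ closing server-side connections (names and addresses below are
+ illustrative) :
+
+        listen web
+            bind :80
+            mode http
+            option http-server-close
+            server srv1 192.168.0.11:80 check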
+
+ See also : "option forceclose", "option http-pretend-keepalive",
+ "option httpclose", "option http-keep-alive", and
+ "1.1. The HTTP transaction model".
+
+
+option http-tunnel
+no option http-tunnel
+ Disable or enable HTTP connection processing after first transaction
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments : none
+
+ By default HAProxy operates in keep-alive mode with regards to persistent
+ connections: for each connection it processes each request and response, and
+ leaves the connection idle on both sides between the end of a response and
+ the start of a new request. This mode may be changed by several options such
+ as "option http-server-close", "option forceclose", "option httpclose" or
+ "option http-tunnel".
+
+ Option "http-tunnel" disables any HTTP processing past the first request and
+ the first response. This is the mode which was used by default in versions
+ 1.0 to 1.5-dev21. It is the mode with the lowest processing overhead, which
+ is normally not needed anymore unless in very specific cases such as when
+ using an in-house protocol that looks like HTTP but is not compatible, or
+ just to log one request per client in order to reduce log size. Note that
+ everything which works at the HTTP level, including header parsing/addition,
+ cookie processing or content switching will only work for the first request
+ and will be ignored after the first response.
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
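+
+ For example, a hypothetical frontend passing an HTTP-like in-house protocol
+ could use it as follows (names and addresses below are illustrative) :
+
+        frontend fe_legacy
+            bind :8080
+            mode http
+            option http-tunnel
+            default_backend bk_legacy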
+
+ See also : "option forceclose", "option http-server-close",
+ "option httpclose", "option http-keep-alive", and
+ "1.1. The HTTP transaction model".
+
+
+option http-use-proxy-header
+no option http-use-proxy-header
+ Make use of non-standard Proxy-Connection header instead of Connection
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments : none
+
+ While RFC2616 explicitly states that HTTP/1.1 agents must use the
+ Connection header to indicate their wish of persistent or non-persistent
+ connections, both browsers and proxies ignore this header for proxied
+ connections and make use of the undocumented, non-standard Proxy-Connection
+ header instead. The issue begins when trying to put a load balancer between
+ browsers and such proxies, because there will be a difference between what
+ haproxy understands and what the client and the proxy agree on.
+
+ By setting this option in a frontend, haproxy can automatically switch to use
+ that non-standard header if it sees proxied requests. A proxied request is
+ defined here as one where the URI begins with neither a '/' nor a '*'. The
+ choice of header only affects requests passing through proxies making use of
+ one of the "httpclose", "forceclose" and "http-server-close" options. Note
+ that this option can only be specified in a frontend and will affect the
+ request along its whole life.
+
+ Also, when this option is set, a request which requires authentication will
+ automatically switch to use proxy authentication headers if it is itself a
+ proxied request. That makes it possible to check or enforce authentication in
+ front of an existing proxy.
+
+ This option should normally never be used, except in front of a proxy.
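+
+ For example, a hypothetical frontend placed in front of an existing proxy
+ farm (names and addresses below are illustrative) :
+
+        frontend fe_proxies
+            bind :3128
+            mode http
+            option http-use-proxy-header
+            default_backend bk_squid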
+
+ See also : "option httpclose", "option forceclose" and "option
+ http-server-close".
+
+
+option httpchk
+option httpchk <uri>
+option httpchk <method> <uri>
+option httpchk <method> <uri> <version>
+ Enable HTTP protocol to check on the servers health
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments :
+ <method> is the optional HTTP method used with the requests. When not set,
+ the "OPTIONS" method is used, as it generally requires low server
+ processing and is easy to filter out from the logs. Any method
+ may be used, though it is not recommended to invent non-standard
+ ones.
+
+ <uri> is the URI referenced in the HTTP requests. It defaults to " / "
+ which is accessible by default on almost any server, but may be
+ changed to any other URI. Query strings are permitted.
+
+ <version> is the optional HTTP version string. It defaults to "HTTP/1.0"
+ but some servers might behave incorrectly in HTTP 1.0, so turning
+ it to HTTP/1.1 may sometimes help. Note that the Host field is
+ mandatory in HTTP/1.1, and as a trick, it is possible to pass it
+ after "\r\n" following the version string.
+
+ By default, server health checks only consist in trying to establish a TCP
+ connection. When "option httpchk" is specified, a complete HTTP request is
+ sent once the TCP connection is established, and responses 2xx and 3xx are
+ considered valid, while all other ones indicate a server failure, including
+ the lack of any response.
+
+ The port and interval are specified in the server configuration.
+
+ This option does not necessarily require an HTTP backend, it also works with
+ plain TCP backends. This is particularly useful to check simple scripts bound
+ to some dedicated ports using the inetd daemon.
+
+ Examples :
+ # Relay HTTPS traffic to Apache instance and check service availability
+ # using HTTP request "OPTIONS * HTTP/1.1" on port 80.
+ backend https_relay
+ mode tcp
+ option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
+ server apache1 192.168.1.1:443 check port 80
+
+ See also : "option ssl-hello-chk", "option smtpchk", "option mysql-check",
+ "option pgsql-check", "http-check" and the "check", "port" and
+ "inter" server options.
+
+
+option httpclose
+no option httpclose
+ Enable or disable passive HTTP connection closing
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments : none
+
+ By default HAProxy operates in keep-alive mode with regards to persistent
+ connections: for each connection it processes each request and response, and
+ leaves the connection idle on both sides between the end of a response and
+ the start of a new request. This mode may be changed by several options such
+ as "option http-server-close", "option forceclose", "option httpclose" or
+ "option http-tunnel".
+
+ If "option httpclose" is set, HAProxy will work in HTTP tunnel mode and check
+ if a "Connection: close" header is already set in each direction, and will
+ add one if missing. Each end should react to this by actively closing the TCP
+ connection after each transfer, thus resulting in a switch to the HTTP close
+ mode. Any "Connection" header different from "close" will also be removed.
+ Note that this option is deprecated since what it does is very cheap but not
+ reliable. Using "option http-server-close" or "option forceclose" is strongly
+ recommended instead.
+
+ It seldom happens that some servers incorrectly ignore this header and do not
+ close the connection even though they reply "Connection: close". For this
+ reason, they are not compatible with older HTTP 1.0 browsers. If this happens
+ it is possible to use the "option forceclose" which actively closes the
+ request connection once the server responds. Option "forceclose" also
+ releases the server connection earlier because it does not have to wait for
+ the client to acknowledge it.
+
+ This option may be set both in a frontend and in a backend. It is enabled if
+ at least one of the frontend or backend holding a connection has it enabled.
+ It disables and replaces any previous "option http-server-close",
+ "option forceclose", "option http-keep-alive" or "option http-tunnel". Please
+ check section 4 ("Proxies") to see how this option combines with others when
+ frontend and backend options differ.
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
+
+ See also : "option forceclose", "option http-server-close" and
+ "1.1. The HTTP transaction model".
+
+
+option httplog [ clf ]
+ Enable logging of HTTP request, session state and timers
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments :
+ clf if the "clf" argument is added, then the output format will be
+ the CLF format instead of HAProxy's default HTTP format. You can
+ use this when you need to feed HAProxy's logs through a specific
+ log analyser which only support the CLF format and which is not
+ extensible.
+
+ By default, the log output format is very poor, as it only contains the
+ source and destination addresses, and the instance name. By specifying
+ "option httplog", each log line turns into a much richer format including,
+ but not limited to, the HTTP request, the connection timers, the session
+ status, the connections numbers, the captured headers and cookies, the
+ frontend, backend and server name, and of course the source address and
+ ports.
+
+ This option may be set either in the frontend or the backend.
+
+ Specifying only "option httplog" will automatically clear the 'clf' mode
+ if it was set by default.
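+
+ For example, a hypothetical frontend logging in CLF format (addresses below
+ are illustrative) :
+
+        frontend fe_web
+            bind :80
+            mode http
+            log 192.168.2.200 local0
+            option httplog clf
+            default_backend bk_web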
+
+ See also : section 8 about logging.
+
+
+option http_proxy
+no option http_proxy
+ Enable or disable plain HTTP proxy mode
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments : none
+
+ It sometimes happens that people need a pure HTTP proxy which understands
+ basic proxy requests without caching nor any fancy feature. In this case,
+ it may be worth setting up an HAProxy instance with the "option http_proxy"
+ set. In this mode, no server is declared, and the connection is forwarded to
+ the IP address and port found in the URL after the "http://" scheme.
+
+ No host address resolution is performed, so this only works when pure IP
+ addresses are passed. Since this option's usage perimeter is rather limited,
+ it will probably be used only by experts who know exactly what they need.
+ Last, if the clients are likely to send keep-alive requests, "option
+ httpclose" will need to be added to ensure that all requests are correctly
+ analyzed.
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
+
+ Example :
+ # this backend understands HTTP proxy requests and forwards them directly.
+ backend direct_forward
+ option httpclose
+ option http_proxy
+
+ See also : "option httpclose"
+
+
+option independent-streams
+no option independent-streams
+ Enable or disable independent timeout processing for both directions
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments : none
+
+ By default, when data is sent over a socket, both the write timeout and the
+ read timeout for that socket are refreshed, because we consider that there is
+ activity on that socket, and we have no other means of guessing if we should
+ receive data or not.
+
+ While this default behaviour is desirable for almost all applications, there
+ exists a situation where it is desirable to disable it, and only refresh the
+ read timeout if there are incoming data. This happens on sessions with large
+ timeouts and low amounts of exchanged data such as telnet sessions. If the
+ server suddenly disappears, the output data accumulates in the system's
+ socket buffers, both timeouts are correctly refreshed, and there is no way
+ to know the server does not receive them, so we don't timeout. However, when
+ the underlying protocol always echoes sent data, it would be enough by itself
+ to detect the issue using the read timeout. Note that this problem does not
+ happen with more verbose protocols because data won't accumulate long in the
+ socket buffers.
+
+ When this option is set on the frontend, it will disable read timeout updates
+ on data sent to the client. There probably is little use of this case. When
+ the option is set on the backend, it will disable read timeout updates on
+ data sent to the server. Doing so will typically break large HTTP posts from
+ slow lines, so use it with caution.
+
+ Note: older versions used to call this setting "option independant-streams"
+ with a spelling mistake. This misspelled name is still supported but
+ deprecated.
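+
+ For example, a hypothetical listener for long-lived telnet-like sessions
+ (names, addresses and timeouts below are illustrative) :
+
+        listen telnet_relay
+            mode tcp
+            bind :2323
+            timeout client 4h
+            timeout server 4h
+            option independent-streams
+            server term1 192.168.0.20:23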
+
+ See also : "timeout client", "timeout server" and "timeout tunnel"
+
+
+option ldap-check
+ Use LDAPv3 health checks for server testing
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments : none
+
+ It is possible to test that the server correctly talks LDAPv3 instead of just
+ testing that it accepts the TCP connection. When this option is set, an
+ LDAPv3 anonymous simple bind message is sent to the server, and the response
+ is analyzed to find an LDAPv3 bind response message.
+
+ The server is considered valid only when the LDAP response contains success
+ resultCode (http://tools.ietf.org/html/rfc4511#section-4.1.9).
+
+ Logging of bind requests is server dependent; see your server's
+ documentation for how to configure it.
+
+ Example :
+ option ldap-check
+
+ See also : "option httpchk"
+
+
+option external-check
+ Use external processes for server health checks
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+
+ It is possible to test the health of a server using an external command.
+ This is achieved by running the executable set using "external-check
+ command".
+
+ Requires the "external-check" global to be set.
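+
+ For example, a hypothetical setup running an external script for each check
+ (the script path and names below are illustrative; "external-check command"
+ is documented separately) :
+
+        global
+            external-check
+
+        backend bk_app
+            option external-check
+            external-check command /usr/local/bin/check_app.sh
+            server app1 192.168.0.10:80 check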
+
+ See also : "external-check", "external-check command", "external-check path"
+
+
+option log-health-checks
+no option log-health-checks
+ Enable or disable logging of health checks status updates
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments : none
+
+ By default, failed health checks are logged if the server is UP and
+ successful health checks are logged if the server is DOWN, so the amount of
+ additional information is limited.
+
+ When this option is enabled, any change of the health check status or to
+ the server's health will be logged, so that it becomes possible to know
+ that a server was failing occasional checks before crashing, or exactly when
+ it failed to respond a valid HTTP status, then when the port started to
+ reject connections, then when the server stopped responding at all.
+
+ Note that status changes not caused by health checks (eg: enable/disable on
+ the CLI) are intentionally not logged by this option.
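+
+ For example, a hypothetical backend logging every health check transition
+ (names and addresses below are illustrative) :
+
+        backend bk_app
+            option httpchk
+            option log-health-checks
+            server app1 192.168.0.10:80 check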
+
+ See also: "option httpchk", "option ldap-check", "option mysql-check",
+ "option pgsql-check", "option redis-check", "option smtpchk",
+ "option tcp-check", "log" and section 8 about logging.
+
+
+option log-separate-errors
+no option log-separate-errors
+ Change log level for non-completely successful connections
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments : none
+
+ Sometimes looking for errors in logs is not easy. This option makes haproxy
+ raise the level of logs containing potentially interesting information such
+ as errors, timeouts, retries, redispatches, or HTTP status codes 5xx. The
+ level changes from "info" to "err". This makes it possible to log them
+ separately to a different file with most syslog daemons. Be careful not to
+ remove them from the original file, otherwise you would lose ordering which
+ provides very important information.
+
+ Using this option, large sites dealing with several thousand connections per
+ second may log normal traffic to a rotating buffer and only archive smaller
+ error logs.
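+
+ For example, a hypothetical frontend raising the level of error logs so that
+ the syslog daemon can route them to a dedicated file (addresses below are
+ illustrative) :
+
+        frontend fe_web
+            bind :80
+            mode http
+            log 192.168.2.200 local0
+            option httplog
+            option log-separate-errors
+            default_backend bk_web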
+
+ See also : "log", "dontlognull", "dontlog-normal" and section 8 about
+ logging.
+
+
+option logasap
+no option logasap
+ Enable or disable early logging of HTTP requests
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments : none
+
+ By default, HTTP requests are logged upon termination so that the total
+ transfer time and the number of bytes appear in the logs. When large objects
+ are being transferred, it may take a while before the request appears in the
+ logs. Using "option logasap", the request gets logged as soon as the server
+ sends the complete headers. The only missing information in the logs will be
+ the total number of bytes which will indicate everything except the amount
+ of data transferred, and the total time which will not take the transfer
+ time into account. In such a situation, it's a good practice to capture the
+ "Content-Length" response header so that the logs at least indicate how many
+ bytes are expected to be transferred.
+
+ Examples :
+ listen http_proxy 0.0.0.0:80
+ mode http
+ option httplog
+ option logasap
+ log 192.168.2.200 local3
+
+ >>> Feb 6 12:14:14 localhost \
+ haproxy[14389]: 10.0.1.2:33317 [06/Feb/2009:12:14:14.655] http-in \
+ static/srv1 9/10/7/14/+30 200 +243 - - ---- 3/1/1/1/0 1/0 \
+ "GET /image.iso HTTP/1.0"
+
+ See also : "option httplog", "capture response header", and section 8 about
+ logging.
+
+
+option mysql-check [ user <username> [ post-41 ] ]
+ Use MySQL health checks for server testing
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments :
+ <username> This is the username which will be used when connecting to MySQL
+ server.
+ post-41 Send post v4.1 client compatible checks
+
+ If you specify a username, the check consists of sending two MySQL packets:
+ one Client Authentication packet and one QUIT packet, to correctly close the
+ MySQL session. HAProxy then parses the MySQL Handshake Initialization packet
+ and/or Error packet. It is a basic but useful test which does not produce
+ errors nor aborted connections on the server. However, it requires adding an
+ authorization in the MySQL table, like this :
+
+ USE mysql;
+ INSERT INTO user (Host,User) values ('<ip_of_haproxy>','<username>');
+ FLUSH PRIVILEGES;
+
+ If you don't specify a username (this is deprecated and not recommended), the
+ check only consists in parsing the MySQL Handshake Initialization packet or
+ Error packet; nothing is sent in this mode. It was reported that this can
+ generate a lockout if checks are too frequent and/or if there is not enough
+ traffic. In fact, in this case you need to check the MySQL
+ "max_connect_errors" value: if a connection is established successfully
+ within fewer than MySQL "max_connect_errors" attempts after a previous
+ connection was interrupted, the error count for the host is cleared to zero.
+ If HAProxy's server gets blocked, the "FLUSH HOSTS" statement is the only
+ way to unblock it.
+
+ Remember that this does not check database presence nor database consistency.
+ To do this, you can use an external check with xinetd for example.
+
+ The check requires MySQL >=3.22; for older versions, please use a TCP check.
+
+ Most often, an incoming MySQL server needs to see the client's IP address for
+ various purposes, including IP privilege matching and connection logging.
+ When possible, it is often wise to masquerade the client's IP address when
+ connecting to the server using the "usesrc" argument of the "source" keyword,
+ which requires the transparent proxy feature to be compiled in, and the MySQL
+ server to route the client via the machine hosting haproxy.
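+
+ For example, a hypothetical backend checking MySQL servers with a dedicated
+ user (names and addresses below are illustrative) :
+
+        backend bk_mysql
+            mode tcp
+            option mysql-check user haproxy_check post-41
+            server db1 192.168.1.10:3306 check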
+
+ See also: "option httpchk"
+
+
+option nolinger
+no option nolinger
+ Enable or disable immediate session resource cleaning after close
+ May be used in sections: defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments : none
+
+ When clients or servers abort connections in a dirty way (eg: they are
+ physically disconnected), the session timeout triggers and the session is
+ closed. But it will remain in FIN_WAIT1 state for some time in the system,
+ using some resources and possibly limiting the ability to establish newer
+ connections.
+
+ When this happens, it is possible to activate "option nolinger" which forces
+ the system to immediately remove any socket's pending data on close. Thus,
+ the session is instantly purged from the system's tables. This usually has
+ side effects such as increased number of TCP resets due to old retransmits
+ getting immediately rejected. Some firewalls may sometimes complain about
+ this too.
+
+ For this reason, it is not recommended to use this option when not absolutely
+ needed. You know that you need it when you have thousands of FIN_WAIT1
+ sessions on your system (TIME_WAIT ones do not count).
+
+ This option may be used both on frontends and backends, depending on the side
+ where it is required. Use it on the frontend for clients, and on the backend
+ for servers.
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
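+
+ For example, a hypothetical frontend applying it on the client side (names
+ and addresses below are illustrative) :
+
+        frontend fe_web
+            bind :80
+            mode http
+            option nolinger
+            default_backend bk_web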
+
+
+option originalto [ except <network> ] [ header <name> ]
+ Enable insertion of the X-Original-To header to requests sent to servers
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments :
+ <network> is an optional argument used to disable this option for sources
+ matching <network>
+ <name> an optional argument to specify a different "X-Original-To"
+ header name.
+
+ Since HAProxy can work in transparent mode, every request from a client can
+ be redirected to the proxy and HAProxy itself can proxy every request to a
+ complex SQUID environment and the destination host from SO_ORIGINAL_DST will
+ be lost. This is annoying when you want access rules based on destination ip
+ addresses. To solve this problem, a new HTTP header "X-Original-To" may be
+ added by HAProxy to all requests sent to the server. This header contains a
+ value representing the original destination IP address. Since this header is
+ always appended at the end of the existing header list, the server must be
+ configured to use only the last occurrence of it. Indeed, only the last
+ occurrence of the header can be trusted, since it is really possible that
+ the client has already brought one.
+
+ The keyword "header" may be used to supply a different header name to replace
+ the default "X-Original-To". This can be useful where you might already
+ have an "X-Original-To" header from a different application and need to
+ preserve it, or when your backend server doesn't use the "X-Original-To"
+ header and requires a different one.
+
+ Sometimes, a same HAProxy instance may be shared between a direct client
+ access and a reverse-proxy access (for instance when an SSL reverse-proxy is
+ used to decrypt HTTPS traffic). It is possible to disable the addition of the
+ header for a known source address or network by adding the "except" keyword
+ followed by the network address. In this case, any source IP matching the
+ network will not cause an addition of this header. Most common uses are with
+ private networks or 127.0.0.1.
+
+ This option may be specified either in the frontend or in the backend. If at
+ least one of them uses it, the header will be added. Note that the backend's
+ setting of the header subargument takes precedence over the frontend's if
+ both are defined.
+
+ Examples :
+ # Original Destination address
+ frontend www
+ mode http
+ option originalto except 127.0.0.1
+
+ # Those servers want the IP Address in X-Client-Dst
+ backend www
+ mode http
+ option originalto header X-Client-Dst
+
+ See also : "option httpclose", "option http-server-close",
+ "option forceclose"
+
+
+option persist
+no option persist
+ Enable or disable forced persistence on down servers
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments : none
+
+ When an HTTP request reaches a backend with a cookie which references a dead
+ server, by default it is redispatched to another server. It is possible to
+ force the request to be sent to the dead server first using "option persist"
+ if absolutely needed. A common use case is when servers are under extreme
+ load and spend their time flapping. In this case, the users would still be
+ directed to the server they opened the session on, in the hope they would be
+ correctly served. It is recommended to use "option redispatch" in conjunction
+ with this option so that in the event it would not be possible to connect to
+ the server at all (server definitely dead), the client would finally be
+ redirected to another valid server.
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
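+
+ For example, a hypothetical backend combining it with "option redispatch"
+ (names and addresses below are illustrative) :
+
+        backend bk_app
+            mode http
+            cookie SRV insert indirect
+            option persist
+            option redispatch
+            server app1 192.168.0.10:80 cookie a1 check
+            server app2 192.168.0.11:80 cookie a2 check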
+
+ See also : "option redispatch", "retries", "force-persist"
+
+
+option pgsql-check [ user <username> ]
+ Use PostgreSQL health checks for server testing
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments :
+ <username> This is the username which will be used when connecting to
+ PostgreSQL server.
+
+ The check sends a PostgreSQL StartupMessage and waits for either an
+ Authentication request or an ErrorResponse message. It is a basic but useful
+ test which does not produce errors nor aborted connections on the server.
+ This check is identical to the "mysql-check".
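+
+ For example, a hypothetical backend checking PostgreSQL servers (names and
+ addresses below are illustrative) :
+
+        backend bk_pgsql
+            mode tcp
+            option pgsql-check user haproxy_check
+            server pg1 192.168.1.20:5432 check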
+
+ See also: "option httpchk"
+
+
+option prefer-last-server
+no option prefer-last-server
+ Allow multiple load balanced requests to remain on the same server
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments : none
+
+ When the load balancing algorithm in use is not deterministic, and a previous
+ request was sent to a server to which haproxy still holds a connection, it is
+ sometimes desirable that subsequent requests on a same session go to the same
+ server as much as possible. Note that this is different from persistence, as
+ we only indicate a preference which haproxy tries to apply without any form
+ of warranty. The real use is for keep-alive connections sent to servers. When
+ this option is used, haproxy will try to reuse the same connection that is
+ attached to the server instead of rebalancing to another server, causing a
+ close of the connection. This can make sense for static file servers. It does
+ not make much sense to use this in combination with hashing algorithms. Note,
+ haproxy already automatically tries to stick to a server which sends a 401 or
+ to a proxy which sends a 407 (authentication required). This is mandatory for
+ use with the broken NTLM authentication challenge, and significantly helps in
+ troubleshooting some faulty applications. Option prefer-last-server might be
+ desirable in these environments as well, to avoid redistributing the traffic
+ after every other response.
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
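+
+ For example, a hypothetical backend keeping NTLM-authenticated clients on
+ the same server (names and addresses below are illustrative) :
+
+        backend bk_ntlm
+            mode http
+            option http-keep-alive
+            option prefer-last-server
+            balance roundrobin
+            server iis1 192.168.0.30:80 check
+            server iis2 192.168.0.31:80 check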
+
+ See also: "option http-keep-alive"
+
+
+option redispatch
+option redispatch <interval>
+no option redispatch
+ Enable or disable session redistribution in case of connection failure
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments :
+ <interval> The optional integer value that controls how often redispatches
+ occur when retrying connections. A positive value P indicates a
+ redispatch is desired on every Pth retry, and a negative value
+ N indicates a redispatch is desired on the Nth retry prior to
+ the last retry. For example, the default of -1 preserves the
+ historical behaviour of redispatching on the last retry, a
+ positive value of 1 would indicate a redispatch on every retry,
+ and a positive value of 3 would indicate a redispatch on every
+ third retry. You can disable redispatches with a value of 0.
+
+ In HTTP mode, if a server designated by a cookie is down, clients may
+ definitely stick to it because they cannot flush the cookie, so they will not
+ be able to access the service anymore.
+
+ Specifying "option redispatch" will allow the proxy to break their
+ persistence and redistribute them to a working server.
+
+ It also allows retrying connections to another server in case of multiple
+ connection failures. Of course, this requires having "retries" set to a
+ nonzero value.
+
+ This form is the preferred form, which replaces both the "redispatch" and
+ "redisp" keywords.
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
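+
+ For illustration, a backend could allow a redispatch on every third retry
+ (server names and addresses below are illustrative) :
+
+ Example :
+ backend dynamic
+ option redispatch 3
+ retries 6
+ server app1 192.0.2.1:80 check
+ server app2 192.0.2.2:80 check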
+
+ See also : "redispatch", "retries", "force-persist"
+
+
+option redis-check
+ Use redis health checks for server testing
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments : none
+
+ It is possible to test that the server correctly talks REDIS protocol instead
+ of just testing that it accepts the TCP connection. When this option is set,
+ a PING redis command is sent to the server, and the response is analyzed to
+ find the "+PONG" response message.
+
+ Example :
+ option redis-check
+
+ See also : "option httpchk"
+
+
+option smtpchk
+option smtpchk <hello> <domain>
+ Use SMTP health checks for server testing
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments :
+ <hello> is an optional argument. It is the "hello" command to use. It can
+ be either "HELO" (for SMTP) or "EHLO" (for ESMTP). All other
+ values will be turned into the default command ("HELO").
+
+ <domain> is the domain name to present to the server. It may only be
+ specified (and is mandatory) if the hello command has been
+ specified. By default, "localhost" is used.
+
+ When "option smtpchk" is set, the health checks will consist of TCP
+ connections followed by an SMTP command. By default, this command is
+ "HELO localhost". The server's return code is analyzed and only return codes
+ starting with a "2" will be considered as valid. All other responses,
+ including a lack of response, will constitute an error and will indicate a
+ dead server.
+
+ This test is meant to be used with SMTP servers or relays. Depending on the
+ request, it is possible that some servers do not log each connection attempt,
+ so you may want to experiment to improve the behaviour. Using telnet on port
+ 25 is often easier than adjusting the configuration.
+
+ Most often, an incoming SMTP server needs to see the client's IP address for
+ various purposes, including spam filtering, anti-spoofing and logging. When
+ possible, it is often wise to masquerade the client's IP address when
+ connecting to the server using the "usesrc" argument of the "source" keyword,
+ which requires the transparent proxy feature to be compiled in.
+
+ Example :
+ option smtpchk HELO mydomain.org
+
+ See also : "option httpchk", "source"
+
+
+option socket-stats
+no option socket-stats
+
+ Enable or disable collecting & providing separate statistics for each socket.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+
+ Arguments : none
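+
+ When a frontend has several "bind" lines, this option makes separate
+ statistics available for each listening socket in addition to the aggregated
+ frontend statistics, e.g. (names and ports below are illustrative) :
+
+ Example :
+ frontend web
+ bind :80
+ bind :8080
+ option socket-stats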
+
+
+option splice-auto
+no option splice-auto
+ Enable or disable automatic kernel acceleration on sockets in both directions
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments : none
+
+ When this option is enabled either on a frontend or on a backend, haproxy
+ will automatically evaluate the opportunity to use kernel tcp splicing to
+ forward data between the client and the server, in either direction. Haproxy
+ uses heuristics to estimate if kernel splicing might improve performance or
+ not. Both directions are handled independently. Note that the heuristics used
+ are intentionally conservative in order to limit excessive use of splicing. This
+ option requires splicing to be enabled at compile time, and may be globally
+ disabled with the global option "nosplice". Since splice uses pipes, using it
+ requires that there are enough spare pipes.
+
+ Important note: kernel-based TCP splicing is a Linux-specific feature which
+ first appeared in kernel 2.6.25. It offers kernel-based acceleration to
+ transfer data between sockets without copying the data to user-space, thus
+ providing noticeable performance gains and saving CPU cycles. Since many
+ early implementations are buggy, corrupt data and/or are inefficient, this
+ feature is not enabled by default, and it should be used with extreme care.
+ While it is not possible to detect the correctness of an implementation,
+ 2.6.29 is the first version offering a properly working implementation. In
+ case of doubt, splicing may be globally disabled using the global "nosplice"
+ keyword.
+
+ Example :
+ option splice-auto
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
+
+ See also : "option splice-request", "option splice-response", and global
+ options "nosplice" and "maxpipes"
+
+
+option splice-request
+no option splice-request
+ Enable or disable automatic kernel acceleration on sockets for requests
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments : none
+
+ When this option is enabled either on a frontend or on a backend, haproxy
+ will use kernel tcp splicing whenever possible to forward data going from
+ the client to the server. It might still use the recv/send scheme if there
+ are no spare pipes left. This option requires splicing to be enabled at
+ compile time, and may be globally disabled with the global option "nosplice".
+ Since splice uses pipes, using it requires that there are enough spare pipes.
+
+ Important note: see "option splice-auto" for usage limitations.
+
+ Example :
+ option splice-request
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
+
+ See also : "option splice-auto", "option splice-response", and global options
+ "nosplice" and "maxpipes"
+
+
+option splice-response
+no option splice-response
+ Enable or disable automatic kernel acceleration on sockets for responses
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments : none
+
+ When this option is enabled either on a frontend or on a backend, haproxy
+ will use kernel tcp splicing whenever possible to forward data going from
+ the server to the client. It might still use the recv/send scheme if there
+ are no spare pipes left. This option requires splicing to be enabled at
+ compile time, and may be globally disabled with the global option "nosplice".
+ Since splice uses pipes, using it requires that there are enough spare pipes.
+
+ Important note: see "option splice-auto" for usage limitations.
+
+ Example :
+ option splice-response
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
+
+ See also : "option splice-auto", "option splice-request", and global options
+ "nosplice" and "maxpipes"
+
+
+option srvtcpka
+no option srvtcpka
+ Enable or disable the sending of TCP keepalive packets on the server side
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments : none
+
+ When there is a firewall or any session-aware component between a client and
+ a server, and when the protocol involves very long sessions with long idle
+ periods (eg: remote desktops), there is a risk that one of the intermediate
+ components decides to expire a session which has remained idle for too long.
+
+ Enabling socket-level TCP keep-alives makes the system regularly send packets
+ to the other end of the connection, leaving it active. The delay between
+ keep-alive probes is controlled by the system only and depends both on the
+ operating system and its tuning parameters.
+
+ It is important to understand that keep-alive packets are neither emitted nor
+ received at the application level. Only the network stacks see
+ them. For this reason, even if one side of the proxy already uses keep-alives
+ to maintain its connection alive, those keep-alive packets will not be
+ forwarded to the other side of the proxy.
+
+ Please note that this has nothing to do with HTTP keep-alive.
+
+ Using option "srvtcpka" enables the emission of TCP keep-alive probes on the
+ server side of a connection, which should help when session expirations are
+ noticed between HAProxy and a server.
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
+
+ See also : "option clitcpka", "option tcpka"
+
+
+option ssl-hello-chk
+ Use SSLv3 client hello health checks for server testing
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments : none
+
+ When some SSL-based protocols are relayed in TCP mode through HAProxy, it is
+ possible to test that the server correctly talks SSL instead of just testing
+ that it accepts the TCP connection. When "option ssl-hello-chk" is set, pure
+ SSLv3 client hello messages are sent once the connection is established to
+ the server, and the response is analyzed to find an SSL server hello message.
+ The server is considered valid only when the response contains this server
+ hello message.
+
+ All servers tested so far correctly reply to SSLv3 client hello messages,
+ and most servers tested do not even log requests containing only hello
+ messages, which is appreciable.
+
+ Note that this check works even when SSL support was not built into haproxy
+ because it forges the SSL message. When SSL support is available, it is best
+ to use native SSL health checks instead of this one.
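+
+ For illustration, checking SSL-based servers relayed in TCP mode might look
+ like this (names and addresses below are illustrative) :
+
+ Example :
+ backend ldaps-farm
+ mode tcp
+ option ssl-hello-chk
+ server ldaps1 192.0.2.10:636 check
+ server ldaps2 192.0.2.11:636 check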
+
+ See also: "option httpchk", "check-ssl"
+
+
+option tcp-check
+ Perform health checks using tcp-check send/expect sequences
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
+
+ This health check method is intended to be combined with "tcp-check" command
+ lists in order to support send/expect types of health check sequences.
+
+ TCP checks currently support 4 modes of operations :
+ - no "tcp-check" directive : the health check only consists of a connection
+ attempt, which remains the default mode.
+
+ - only "tcp-check send" or "tcp-check send-binary" is mentioned : this is
+ used to send a string along with the connection opening. With some
+ protocols, it helps to send a "QUIT" message for example, which prevents
+ the server from logging a connection error for each health check. The
+ check result will still be based only on the ability to open the
+ connection.
+
+ - only "tcp-check expect" is mentioned : this is used to test a banner.
+ The connection is opened and haproxy waits for the server to present some
+ contents which must validate some rules. The check result will be based
+ on the matching between the contents and the rules. This is suited for
+ POP, IMAP, SMTP, FTP, SSH, TELNET.
+
+ - both "tcp-check send" and "tcp-check expect" are mentioned : this is
+ used to test a hello-type protocol. Haproxy sends a message, the server
+ responds and its response is analysed. The check result will be based on
+ the matching between the response contents and the rules. This is often
+ suited for protocols which require a binding or a request/response model.
+ LDAP, MySQL, Redis and SSL are examples of such protocols, though they
+ already all have their dedicated checks with a deeper understanding of
+ the respective protocols.
+ In this mode, many questions may be sent and many answers may be
+ analysed.
+
+ A fifth mode can be used to insert comments in different steps of the
+ script.
+
+ For each tcp-check rule you create, you can add a "comment" directive,
+ followed by a string. This string will be reported in the log and stderr
+ in debug mode. It is useful to make user-friendly error reporting.
+ The "comment" is of course optional.
+
+
+ Examples :
+ # perform a POP check (analyse only server's banner)
+ option tcp-check
+ tcp-check expect string +OK\ POP3\ ready comment POP\ protocol
+
+ # perform an IMAP check (analyse only server's banner)
+ option tcp-check
+ tcp-check expect string *\ OK\ IMAP4\ ready comment IMAP\ protocol
+
+ # look for the redis master server, after checking that it speaks the
+ # redis protocol correctly and then exits properly.
+ # (send a command then analyse the response 3 times)
+ option tcp-check
+ tcp-check comment PING\ phase
+ tcp-check send PING\r\n
+ tcp-check expect string +PONG
+ tcp-check comment role\ check
+ tcp-check send info\ replication\r\n
+ tcp-check expect string role:master
+ tcp-check comment QUIT\ phase
+ tcp-check send QUIT\r\n
+ tcp-check expect string +OK
+
+ # forge an HTTP request, then analyse the response
+ # (send many headers before analyzing)
+ option tcp-check
+ tcp-check comment forge\ and\ send\ HTTP\ request
+ tcp-check send HEAD\ /\ HTTP/1.1\r\n
+ tcp-check send Host:\ www.mydomain.com\r\n
+ tcp-check send User-Agent:\ HAProxy\ tcpcheck\r\n
+ tcp-check send \r\n
+ tcp-check expect rstring HTTP/1\..\ (2..|3..) comment check\ HTTP\ response
+
+
+ See also : "tcp-check expect", "tcp-check send"
+
+
+option tcp-smart-accept
+no option tcp-smart-accept
+ Enable or disable the saving of one ACK packet during the accept sequence
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments : none
+
+ When an HTTP connection request comes in, the system acknowledges it on
+ behalf of HAProxy, then the client immediately sends its request, and the
+ system acknowledges it too while it is notifying HAProxy about the new
+ connection. HAProxy then reads the request and responds. This means that we
+ have one TCP ACK sent by the system for nothing, because the request could
+ very well be acknowledged by HAProxy when it sends its response.
+
+ For this reason, in HTTP mode, HAProxy automatically asks the system to avoid
+ sending this useless ACK on platforms which support it (currently at least
+ Linux). This must not cause any problem, because the system will send the ACK
+ anyway after 40 ms if the response takes longer than expected.
+
+ During complex network debugging sessions, it may be desirable to disable
+ this optimization because delayed ACKs can make troubleshooting more complex
+ when trying to identify where packets are delayed. It is then possible to
+ fall back to normal behaviour by specifying "no option tcp-smart-accept".
+
+ It is also possible to force it for non-HTTP proxies by simply specifying
+ "option tcp-smart-accept". For instance, it can make sense with some services
+ such as SMTP where the server speaks first.
+
+ It is recommended to avoid forcing this option in a defaults section. In case
+ of doubt, consider setting it back to automatic values by prepending the
+ "default" keyword before it, or disabling it using the "no" keyword.
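+
+ For illustration, forcing the option for an SMTP proxy where the server
+ speaks first (names and addresses below are illustrative) :
+
+ Example :
+ listen smtp-relay
+ mode tcp
+ bind :25
+ option tcp-smart-accept
+ server smtp1 192.0.2.20:25 check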
+
+ See also : "option tcp-smart-connect"
+
+
+option tcp-smart-connect
+no option tcp-smart-connect
+ Enable or disable the saving of one ACK packet during the connect sequence
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments : none
+
+ On certain systems (at least Linux), HAProxy can ask the kernel not to
+ immediately send an empty ACK upon a connection request, but to directly
+ send the buffer request instead. This saves one packet on the network and
+ thus boosts performance. It can also be useful for some servers, because they
+ immediately get the request along with the incoming connection.
+
+ This feature is enabled when "option tcp-smart-connect" is set in a backend.
+ It is not enabled by default because it makes network troubleshooting more
+ complex.
+
+ It only makes sense to enable it with protocols where the client speaks first
+ such as HTTP. In other situations, if there is no data to send in place of
+ the ACK, a normal ACK is sent.
+
+ If this option has been enabled in a "defaults" section, it can be disabled
+ in a specific instance by prepending the "no" keyword before it.
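+
+ For illustration, enabling it in an HTTP backend (names and addresses below
+ are illustrative) :
+
+ Example :
+ backend web-servers
+ option tcp-smart-connect
+ server web1 192.0.2.30:80 check
+ server web2 192.0.2.31:80 check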
+
+ See also : "option tcp-smart-accept"
+
+
+option tcpka
+ Enable or disable the sending of TCP keepalive packets on both sides
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments : none
+
+ When there is a firewall or any session-aware component between a client and
+ a server, and when the protocol involves very long sessions with long idle
+ periods (eg: remote desktops), there is a risk that one of the intermediate
+ components decides to expire a session which has remained idle for too long.
+
+ Enabling socket-level TCP keep-alives makes the system regularly send packets
+ to the other end of the connection, leaving it active. The delay between
+ keep-alive probes is controlled by the system only and depends both on the
+ operating system and its tuning parameters.
+
+ It is important to understand that keep-alive packets are neither emitted nor
+ received at the application level. Only the network stacks see
+ them. For this reason, even if one side of the proxy already uses keep-alives
+ to maintain its connection alive, those keep-alive packets will not be
+ forwarded to the other side of the proxy.
+
+ Please note that this has nothing to do with HTTP keep-alive.
+
+ Using option "tcpka" enables the emission of TCP keep-alive probes on both
+ the client and server sides of a connection. Note that this is meaningful
+ only in "defaults" or "listen" sections. If this option is used in a
+ frontend, only the client side will get keep-alives, and if this option is
+ used in a backend, only the server side will get keep-alives. For this
+ reason, it is strongly recommended to explicitly use "option clitcpka" and
+ "option srvtcpka" when the configuration is split between frontends and
+ backends.
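+
+ For illustration, an equivalent split configuration would use the dedicated
+ options on each side (names and addresses below are illustrative) :
+
+ Example :
+ frontend fe_rdp
+ mode tcp
+ bind :3389
+ option clitcpka
+ default_backend be_rdp
+
+ backend be_rdp
+ mode tcp
+ option srvtcpka
+ server ts1 192.0.2.40:3389 check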
+
+ See also : "option clitcpka", "option srvtcpka"
+
+
+option tcplog
+ Enable advanced logging of TCP connections with session state and timers
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments : none
+
+ By default, the log output format is very poor, as it only contains the
+ source and destination addresses, and the instance name. By specifying
+ "option tcplog", each log line turns into a much richer format including, but
+ not limited to, the connection timers, the session status, the connection
+ counts, the frontend, backend and server name, and of course the source
+ address and ports. This option is useful for pure TCP proxies in order to
+ find which of the client or server disconnects or times out. For normal HTTP
+ proxies, it's better to use "option httplog" which is even more complete.
+
+ This option may be set either in the frontend or the backend.
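+
+ For illustration, a pure TCP frontend with rich logging (names and ports
+ below are illustrative) :
+
+ Example :
+ frontend fe_tcp
+ mode tcp
+ bind :5000
+ option tcplog
+ log global
+ default_backend be_tcp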
+
+ See also : "option httplog", and section 8 about logging.
+
+
+option transparent
+no option transparent
+ Enable client-side transparent proxying
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments : none
+
+ This option was introduced in order to provide layer 7 persistence to layer 3
+ load balancers. The idea is to use the OS's ability to redirect an incoming
+ connection for a remote address to a local process (here HAProxy), and let
+ this process know what address was initially requested. When this option is
+ used, sessions without cookies will be forwarded to the original destination
+ IP address of the incoming request (which should match that of another
+ equipment), while requests with cookies will still be forwarded to the
+ appropriate server.
+
+ Note that contrary to a common belief, this option does NOT make HAProxy
+ present the client's IP to the server when establishing the connection.
+
+ See also: the "usesrc" argument of the "source" keyword, and the
+ "transparent" option of the "bind" keyword.
+
+
+external-check command <command>
+ Executable to run when performing an external-check
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+
+ Arguments :
+ <command> is the external command to run
+
+ The arguments passed to the command are:
+
+ <proxy_address> <proxy_port> <server_address> <server_port>
+
+ The <proxy_address> and <proxy_port> are derived from the first listener
+ that is either IPv4, IPv6 or a UNIX socket. In the case of a UNIX socket
+ listener, the <proxy_address> will be the path of the socket and the
+ <proxy_port> will be the string "NOT_USED". In a backend section, it's not
+ possible to determine a listener, and both <proxy_address> and <proxy_port>
+ will have the string value "NOT_USED".
+
+ Some values are also provided through environment variables.
+
+ Environment variables :
+ HAPROXY_PROXY_ADDR The first bind address if available (or empty if not
+ applicable, for example in a "backend" section).
+
+ HAPROXY_PROXY_ID The backend id.
+
+ HAPROXY_PROXY_NAME The backend name.
+
+ HAPROXY_PROXY_PORT The first bind port if available (or empty if not
+ applicable, for example in a "backend" section or
+ for a UNIX socket).
+
+ HAPROXY_SERVER_ADDR The server address.
+
+ HAPROXY_SERVER_CURCONN The current number of connections on the server.
+
+ HAPROXY_SERVER_ID The server id.
+
+ HAPROXY_SERVER_MAXCONN The server max connections.
+
+ HAPROXY_SERVER_NAME The server name.
+
+ HAPROXY_SERVER_PORT The server port if available (or empty for a UNIX
+ socket).
+
+ PATH The PATH environment variable used when executing
+ the command may be set using "external-check path".
+
+ If the command executes and exits with a zero status, then the check is
+ considered to have passed; otherwise the check is considered to have
+ failed.
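+
+ In practice this directive is combined with "option external-check"; the
+ script path and names below are illustrative :
+
+ Example :
+ backend app
+ option external-check
+ external-check command /usr/local/bin/check_app.sh
+ server app1 192.0.2.50:8080 check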
+
+ Example :
+ external-check command /bin/true
+
+ See also : "external-check", "option external-check", "external-check path"
+
+
+external-check path <path>
+ The value of the PATH environment variable used when running an external-check
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+
+ Arguments :
+ <path> is the PATH value to use when executing the external command
+
+ The default path is "".
+
+ Example :
+ external-check path "/usr/bin:/bin"
+
+ See also : "external-check", "option external-check",
+ "external-check command"
+
+
+persist rdp-cookie
+persist rdp-cookie(<name>)
+ Enable RDP cookie-based persistence
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments :
+ <name> is the optional name of the RDP cookie to check. If omitted, the
+ default cookie name "msts" will be used. There currently is no
+ valid reason to change this name.
+
+ This statement enables persistence based on an RDP cookie. The RDP cookie
+ contains all information required to find the server in the list of known
+ servers. So when this option is set in the backend, the request is analysed
+ and if an RDP cookie is found, it is decoded. If it matches a known server
+ which is still UP (or if "option persist" is set), then the connection is
+ forwarded to this server.
+
+ Note that this only makes sense in a TCP backend, but for this to work, the
+ frontend must have waited long enough to ensure that an RDP cookie is present
+ in the request buffer. This is the same requirement as with the "rdp-cookie"
+ load-balancing method. Thus it is highly recommended to put all statements in
+ a single "listen" section.
+
+ Also, it is important to understand that the terminal server will emit this
+ RDP cookie only if it is configured for "token redirection mode", which means
+ that the "IP address redirection" option is disabled.
+
+ Example :
+ listen tse-farm
+ bind :3389
+ # wait up to 5s for an RDP cookie in the request
+ tcp-request inspect-delay 5s
+ tcp-request content accept if RDP_COOKIE
+ # apply RDP cookie persistence
+ persist rdp-cookie
+ # if server is unknown, let's balance on the same cookie.
+ # alternatively, "balance leastconn" may be useful too.
+ balance rdp-cookie
+ server srv1 1.1.1.1:3389
+ server srv2 1.1.1.2:3389
+
+ See also : "balance rdp-cookie", "tcp-request", the "req_rdp_cookie" ACL and
+ the rdp_cookie pattern fetch function.
+
+
+rate-limit sessions <rate>
+ Set a limit on the number of new sessions accepted per second on a frontend
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments :
+ <rate> The <rate> parameter is an integer designating the maximum number
+ of new sessions per second to accept on the frontend.
+
+ When the frontend reaches the specified number of new sessions per second, it
+ stops accepting new connections until the rate drops below the limit again.
+ During this time, the pending sessions will be kept in the socket's backlog
+ (in system buffers) and haproxy will not even be aware that sessions are
+ pending. When applying a very low limit on a highly loaded service, it may make
+ sense to increase the socket's backlog using the "backlog" keyword.
+
+ This feature is particularly efficient at blocking connection-based attacks
+ or service abuse on fragile servers. Since the session rate is measured every
+ millisecond, it is extremely accurate. Also, the limit applies immediately,
+ no delay is needed at all to detect the threshold.
+
+ Example : limit the connection rate on SMTP to 10 per second max
+ listen smtp
+ mode tcp
+ bind :25
+ rate-limit sessions 10
+ server smtp1 127.0.0.1:1025
+
+ Note : when the maximum rate is reached, the frontend's status is not changed
+ but its sockets appear as "WAITING" in the statistics if the
+ "socket-stats" option is enabled.
+
+ See also : the "backlog" keyword and the "fe_sess_rate" ACL criterion.
+
+
+redirect location <loc> [code <code>] <option> [{if | unless} <condition>]
+redirect prefix <pfx> [code <code>] <option> [{if | unless} <condition>]
+redirect scheme <sch> [code <code>] <option> [{if | unless} <condition>]
+ Return an HTTP redirection if/unless a condition is matched
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
+
+ If/unless the condition is matched, the HTTP request will lead to a redirect
+ response. If no condition is specified, the redirect applies unconditionally.
+
+ Arguments :
+ <loc> With "redirect location", the exact value in <loc> is placed into
+ the HTTP "Location" header. When used in an "http-request" rule,
+ <loc> value follows the log-format rules and can include some
+ dynamic values (see Custom Log Format in section 8.2.4).
+
+ <pfx> With "redirect prefix", the "Location" header is built from the
+ concatenation of <pfx> and the complete URI path, including the
+ query string, unless the "drop-query" option is specified (see
+ below). As a special case, if <pfx> equals exactly "/", then
+ nothing is inserted before the original URI. It allows one to
+ redirect to the same URL (for instance, to insert a cookie). When
+ used in an "http-request" rule, <pfx> value follows the log-format
+ rules and can include some dynamic values (see Custom Log Format
+ in section 8.2.4).
+
+ <sch> With "redirect scheme", then the "Location" header is built by
+ concatenating <sch> with "://" then the first occurrence of the
+ "Host" header, and then the URI path, including the query string
+ unless the "drop-query" option is specified (see below). If no
+ path is found or if the path is "*", then "/" is used instead. If
+ no "Host" header is found, then an empty host component will be
+ returned, which most recent browsers interpret as redirecting to
+ the same host. This directive is mostly used to redirect HTTP to
+ HTTPS. When used in an "http-request" rule, <sch> value follows
+ the log-format rules and can include some dynamic values (see
+ Custom Log Format in section 8.2.4).
+
+ <code> The code is optional. It indicates which type of HTTP redirection
+ is desired. Only codes 301, 302, 303, 307 and 308 are supported,
+ with 302 used by default if no code is specified. 301 means
+ "Moved permanently", and a browser may cache the Location. 302
+ means "Moved temporarily" and means that the browser should not
+ cache the redirection. 303 is equivalent to 302 except that the
+ browser will fetch the location with a GET method. 307 is just
+ like 302 but makes it clear that the same method must be reused.
+ Likewise, 308 replaces 301 if the same method must be used.
+
+ <option> There are several options which can be specified to adjust the
+ expected behaviour of a redirection :
+
+ - "drop-query"
+ When this keyword is used in a prefix-based redirection, then the
+ location will be set without any possible query-string, which is useful
+ for directing users to a non-secure page for instance. It has no effect
+ with a location-type redirect.
+
+ - "append-slash"
+ This keyword may be used in conjunction with "drop-query" to redirect
+ users who use a URL not ending with a '/' to the same one with the '/'.
+ It can be useful to ensure that search engines will only see one URL.
+ For this, a return code 301 is preferred.
+
+ - "set-cookie NAME[=value]"
+ A "Set-Cookie" header will be added with NAME (and optionally "=value")
+ to the response. This is sometimes used to indicate that a user has
+ been seen, for instance to protect against some types of DoS. No other
+ cookie option is added, so the cookie will be a session cookie. Note
+ that for a browser, a sole cookie name without an equal sign is
+ different from a cookie with an equal sign.
+
+ - "clear-cookie NAME[=]"
+ A "Set-Cookie" header will be added with NAME (and optionally "="), but
+ with the "Max-Age" attribute set to zero. This will tell the browser to
+ delete this cookie. It is useful for instance on logout pages. It is
+ important to note that clearing the cookie "NAME" will not remove a
+ cookie set with "NAME=value". You have to clear the cookie "NAME=" for
+ that, because the browser makes the difference.
+
+ Example: move the login URL only to HTTPS.
+ acl clear dst_port 80
+ acl secure dst_port 8080
+ acl login_page url_beg /login
+ acl logout url_beg /logout
+ acl uid_given url_reg /login?userid=[^&]+
+ acl cookie_set hdr_sub(cookie) SEEN=1
+
+ redirect prefix https://mysite.com set-cookie SEEN=1 if !cookie_set
+ redirect prefix https://mysite.com if login_page !secure
+ redirect prefix http://mysite.com drop-query if login_page !uid_given
+ redirect location http://mysite.com/ if !login_page secure
+ redirect location / clear-cookie USERID= if logout
+
+ Example: send redirects for request for articles without a '/'.
+ acl missing_slash path_reg ^/article/[^/]*$
+ redirect code 301 prefix / drop-query append-slash if missing_slash
+
+ Example: redirect all HTTP traffic to HTTPS when SSL is handled by haproxy.
+ redirect scheme https if !{ ssl_fc }
+
+ Example: append 'www.' prefix in front of all hosts not having it
+ http-request redirect code 301 location www.%[hdr(host)]%[req.uri] \
+ unless { hdr_beg(host) -i www }
+
+ See section 7 about ACL usage.
+
+
+redisp (deprecated)
+redispatch (deprecated)
+ Enable or disable session redistribution in case of connection failure
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments : none
+
+ In HTTP mode, if a server designated by a cookie is down, clients may
+ definitely stick to it because they cannot flush the cookie, so they will not
+ be able to access the service anymore.
+
+ Specifying "redispatch" will allow the proxy to break their persistence and
+ redistribute them to a working server.
+
+ It also allows retrying the last connection to another server in case of
+ multiple connection failures. Of course, this requires having "retries" set
+ to a nonzero value.
+
+ This form is deprecated, do not use it in any new configuration, use the new
+ "option redispatch" instead.
+
+ See also : "option redispatch"
+
+
+reqadd <string> [{if | unless} <cond>]
+ Add a header at the end of the HTTP request
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
+ Arguments :
+ <string> is the complete line to be added. Any space or known delimiter
+ must be escaped using a backslash ('\'). Please refer to section
+ 6 about HTTP header manipulation for more information.
+
+ <cond> is an optional matching condition built from ACLs. It makes it
+ possible to ignore this rule when other conditions are not met.
+
+ A new line consisting of <string> followed by a line feed will be added after
+ the last header of an HTTP request.
+
+ Header transformations only apply to traffic which passes through HAProxy,
+ and not to traffic generated by HAProxy, such as health-checks or error
+ responses.
+
+ Example : add "X-Proto: SSL" to requests coming via port 81
+ acl is-ssl dst_port 81
+ reqadd X-Proto:\ SSL if is-ssl
+
+ See also: "rspadd", "http-request", section 6 about HTTP header manipulation,
+ and section 7 about ACLs.
+
+
+reqallow <search> [{if | unless} <cond>]
+reqiallow <search> [{if | unless} <cond>] (ignore case)
+ Definitely allow an HTTP request if a line matches a regular expression
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
+ Arguments :
+ <search> is the regular expression applied to HTTP headers and to the
+ request line. This is an extended regular expression. Parenthesis
+ grouping is supported and no preliminary backslash is required.
+ Any space or known delimiter must be escaped using a backslash
+ ('\'). The pattern applies to a full line at a time. The
+ "reqallow" keyword strictly matches case while "reqiallow"
+ ignores case.
+
+ <cond> is an optional matching condition built from ACLs. It makes it
+ possible to ignore this rule when other conditions are not met.
+
+ A request containing any line which matches extended regular expression
+ <search> will mark the request as allowed, even if any later test would
+ result in a deny. The test applies both to the request line and to request
+ headers. Keep in mind that URLs in request line are case-sensitive while
+ header names are not.
+
+ It is easier, faster and more powerful to use ACLs to write access policies.
+ Reqdeny, reqallow and reqpass should be avoided in new designs.
+
+ Example :
+ # allow www.* but refuse *.local
+ reqiallow ^Host:\ www\.
+ reqideny ^Host:\ .*\.local
+
+ See also: "reqdeny", "block", "http-request", section 6 about HTTP header
+ manipulation, and section 7 about ACLs.
+
+
+reqdel <search> [{if | unless} <cond>]
+reqidel <search> [{if | unless} <cond>] (ignore case)
+ Delete all headers matching a regular expression in an HTTP request
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
+ Arguments :
+ <search> is the regular expression applied to HTTP headers and to the
+ request line. This is an extended regular expression. Parenthesis
+ grouping is supported and no preliminary backslash is required.
+ Any space or known delimiter must be escaped using a backslash
+ ('\'). The pattern applies to a full line at a time. The "reqdel"
+ keyword strictly matches case while "reqidel" ignores case.
+
+ <cond> is an optional matching condition built from ACLs. It makes it
+ possible to ignore this rule when other conditions are not met.
+
+ Any header line matching extended regular expression <search> in the request
+ will be completely deleted. Most common use of this is to remove unwanted
+ and/or dangerous headers or cookies from a request before passing it to the
+ next servers.
+
+ Header transformations only apply to traffic which passes through HAProxy,
+ and not to traffic generated by HAProxy, such as health-checks or error
+ responses. Keep in mind that header names are not case-sensitive.
+
+ Example :
+ # remove X-Forwarded-For header and SERVER cookie
+ reqidel ^X-Forwarded-For:.*
+ reqidel ^Cookie:.*SERVER=
+
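+ The deletion rule above can be sketched outside HAProxy. The following
+ Python snippet is purely illustrative (not HAProxy code): it mimics what the
+ case-insensitive "reqidel" variants do, dropping every header line whose
+ full line matches one of the patterns.

```python
import re

# Patterns mirroring the example above; re.IGNORECASE plays the role of
# the "i" in reqidel.
patterns = [re.compile(r"^X-Forwarded-For:.*", re.IGNORECASE),
            re.compile(r"^Cookie:.*SERVER=", re.IGNORECASE)]

request_headers = [
    "Host: www.example.com",
    "x-forwarded-for: 10.0.0.1",
    "Cookie: SERVER=srv2; lang=en",
    "Accept: */*",
]

# Keep only the headers that match none of the deletion patterns.
kept = [h for h in request_headers
        if not any(p.match(h) for p in patterns)]
print(kept)
```

+ Running this keeps only the "Host" and "Accept" lines, showing that the
+ lowercase "x-forwarded-for" header is removed despite the case difference.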
+ See also: "reqadd", "reqrep", "rspdel", "http-request", section 6 about
+ HTTP header manipulation, and section 7 about ACLs.
+
+
+reqdeny <search> [{if | unless} <cond>]
+reqideny <search> [{if | unless} <cond>] (ignore case)
+ Deny an HTTP request if a line matches a regular expression
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
+ Arguments :
+ <search> is the regular expression applied to HTTP headers and to the
+ request line. This is an extended regular expression. Parenthesis
+ grouping is supported and no preliminary backslash is required.
+ Any space or known delimiter must be escaped using a backslash
+ ('\'). The pattern applies to a full line at a time. The
+ "reqdeny" keyword strictly matches case while "reqideny" ignores
+ case.
+
+ <cond> is an optional matching condition built from ACLs. It makes it
+ possible to ignore this rule when other conditions are not met.
+
+ A request containing any line which matches extended regular expression
+ <search> will mark the request as denied, even if any later test would
+ result in an allow. The test applies both to the request line and to request
+ headers. Keep in mind that URLs in request line are case-sensitive while
+ header names are not.
+
+ A denied request will generate an "HTTP 403 forbidden" response once the
+ complete request has been parsed. This is consistent with what is practiced
+ using ACLs.
+
+ It is easier, faster and more powerful to use ACLs to write access policies.
+ Reqdeny, reqallow and reqpass should be avoided in new designs.
+
+ Example :
+ # refuse *.local, then allow www.*
+ reqideny ^Host:\ .*\.local
+ reqiallow ^Host:\ www\.
+
+ See also: "reqallow", "rspdeny", "block", "http-request", section 6 about
+ HTTP header manipulation, and section 7 about ACLs.
+
+
+reqpass <search> [{if | unless} <cond>]
+reqipass <search> [{if | unless} <cond>] (ignore case)
+ Ignore any HTTP request line matching a regular expression in next rules
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
+ Arguments :
+ <search> is the regular expression applied to HTTP headers and to the
+ request line. This is an extended regular expression. Parenthesis
+ grouping is supported and no preliminary backslash is required.
+ Any space or known delimiter must be escaped using a backslash
+ ('\'). The pattern applies to a full line at a time. The
+ "reqpass" keyword strictly matches case while "reqipass" ignores
+ case.
+
+ <cond> is an optional matching condition built from ACLs. It makes it
+ possible to ignore this rule when other conditions are not met.
+
+ A request containing any line which matches extended regular expression
+ <search> will skip next rules, without assigning any deny or allow verdict.
+ The test applies both to the request line and to request headers. Keep in
+ mind that URLs in request line are case-sensitive while header names are not.
+
+ It is easier, faster and more powerful to use ACLs to write access policies.
+ Reqdeny, reqallow and reqpass should be avoided in new designs.
+
+ Example :
+ # refuse *.local, then allow www.*, but ignore "www.private.local"
+ reqipass ^Host:\ www\.private\.local
+ reqideny ^Host:\ .*\.local
+ reqiallow ^Host:\ www\.
+
+ See also: "reqallow", "reqdeny", "block", "http-request", section 6 about
+ HTTP header manipulation, and section 7 about ACLs.
+
+
+reqrep <search> <string> [{if | unless} <cond>]
+reqirep <search> <string> [{if | unless} <cond>] (ignore case)
+ Replace a regular expression with a string in an HTTP request line
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
+ Arguments :
+ <search> is the regular expression applied to HTTP headers and to the
+ request line. This is an extended regular expression. Parenthesis
+ grouping is supported and no preliminary backslash is required.
+ Any space or known delimiter must be escaped using a backslash
+ ('\'). The pattern applies to a full line at a time. The "reqrep"
+ keyword strictly matches case while "reqirep" ignores case.
+
+ <string> is the complete line to be added. Any space or known delimiter
+ must be escaped using a backslash ('\'). References to matched
+ pattern groups are possible using the common \N form, with N
+ being a single digit between 0 and 9. Please refer to section
+ 6 about HTTP header manipulation for more information.
+
+ <cond> is an optional matching condition built from ACLs. It makes it
+ possible to ignore this rule when other conditions are not met.
+
+ Any line matching extended regular expression <search> in the request (both
+ the request line and header lines) will be completely replaced with <string>.
+ Most common use of this is to rewrite URLs or domain names in "Host" headers.
+
+ Header transformations only apply to traffic which passes through HAProxy,
+ and not to traffic generated by HAProxy, such as health-checks or error
+ responses. Note that for increased readability, it is suggested to add enough
+ spaces between the search pattern and the replacement string. Keep in mind
+ that URLs in request line are case-sensitive while header names are not.
+
+ Example :
+ # replace "/static/" with "/" at the beginning of any request path.
+ reqrep ^([^\ :]*)\ /static/(.*) \1\ /\2
+ # replace "www.mydomain.com" with "www" in the host name.
+ reqirep ^Host:\ www\.mydomain\.com Host:\ www
+
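+ The \N group references used above can be sketched outside HAProxy. This
+ Python snippet is illustrative only (not HAProxy code): it applies the same
+ extended-regex rewrite as the first example to a request line, with \1 and
+ \2 becoming group references in re.sub().

```python
import re

# The reqrep example captures the method in group 1 and the rest of the
# path in group 2, then drops the "/static" prefix.
request_line = "GET /static/css/site.css HTTP/1.1"

rewritten = re.sub(r"^([^ :]*) /static/(.*)", r"\1 /\2", request_line)
print(rewritten)  # GET /css/site.css HTTP/1.1
```

+ Note that HAProxy requires spaces in the pattern to be escaped with a
+ backslash, whereas Python regexes do not.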
+ See also: "reqadd", "reqdel", "rsprep", "tune.bufsize", "http-request",
+ section 6 about HTTP header manipulation, and section 7 about ACLs.
+
+
+reqtarpit <search> [{if | unless} <cond>]
+reqitarpit <search> [{if | unless} <cond>] (ignore case)
+ Tarpit an HTTP request containing a line matching a regular expression
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
+ Arguments :
+ <search> is the regular expression applied to HTTP headers and to the
+ request line. This is an extended regular expression. Parenthesis
+ grouping is supported and no preliminary backslash is required.
+ Any space or known delimiter must be escaped using a backslash
+ ('\'). The pattern applies to a full line at a time. The
+ "reqtarpit" keyword strictly matches case while "reqitarpit"
+ ignores case.
+
+ <cond> is an optional matching condition built from ACLs. It makes it
+ possible to ignore this rule when other conditions are not met.
+
+ A request containing any line which matches extended regular expression
+ <search> will be tarpitted, which means that it will connect to nowhere, will
+ be kept open for a pre-defined time, then will return an HTTP error 500 so
+ that the attacker does not suspect it has been tarpitted. The status 500 will
+ be reported in the logs, but the completion flags will indicate "PT". The
+ delay is defined by "timeout tarpit", or "timeout connect" if the former is
+ not set.
+
+ The goal of the tarpit is to slow down robots attacking servers with
+ identifiable requests. Many robots limit their outgoing number of connections
+ and stay connected waiting for a reply which can take several minutes to
+ come. Depending on the environment and attack, it may be particularly
+ efficient at reducing the load on the network and firewalls.
+
+ Examples :
+ # ignore user-agents reporting any flavour of "Mozilla" or "MSIE", but
+ # block all others.
+ reqipass ^User-Agent:\ .*(Mozilla|MSIE)
+ reqitarpit ^User-Agent:
+
+ # block bad guys
+ acl badguys src 10.1.0.3 172.16.13.20/28
+ reqitarpit . if badguys
+
+ See also: "reqallow", "reqdeny", "reqpass", "http-request", section 6
+ about HTTP header manipulation, and section 7 about ACLs.
+
+
+retries <value>
+ Set the number of retries to perform on a server after a connection failure
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments :
+ <value> is the number of times a connection attempt should be retried on
+ a server when a connection either is refused or times out. The
+ default value is 3.
+
+ It is important to understand that this value applies to the number of
+ connection attempts, not full requests. When a connection has effectively
+ been established to a server, there will be no more retry.
+
+ In order to avoid immediate reconnections to a server which is restarting,
+ a turn-around timer of min("timeout connect", one second) is applied before
+ a retry occurs.
+
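+ The turn-around delay described above can be written as a one-line formula.
+ The sketch below is illustrative (the function name is hypothetical, not an
+ HAProxy internal), assuming timeouts expressed in milliseconds:

```python
# Delay applied before a retry: the connect timeout, capped at one second.
def turnaround_delay_ms(timeout_connect_ms):
    return min(timeout_connect_ms, 1000)

print(turnaround_delay_ms(5000))  # capped at 1000 ms (one second)
print(turnaround_delay_ms(300))   # 300 ms, the shorter connect timeout wins
```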
+ When "option redispatch" is set, the last retry may be performed on another
+ server even if a cookie references a different server.
+
+ See also : "option redispatch"
+
+
+rspadd <string> [{if | unless} <cond>]
+ Add a header at the end of the HTTP response
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
+ Arguments :
+ <string> is the complete line to be added. Any space or known delimiter
+ must be escaped using a backslash ('\'). Please refer to section
+ 6 about HTTP header manipulation for more information.
+
+ <cond> is an optional matching condition built from ACLs. It makes it
+ possible to ignore this rule when other conditions are not met.
+
+ A new line consisting of <string> followed by a line feed will be added after
+ the last header of an HTTP response.
+
+ Header transformations only apply to traffic which passes through HAProxy,
+ and not to traffic generated by HAProxy, such as health-checks or error
+ responses.
+
+ See also: "rspdel", "reqadd", "http-response", section 6 about HTTP header
+ manipulation, and section 7 about ACLs.
+
+
+rspdel <search> [{if | unless} <cond>]
+rspidel <search> [{if | unless} <cond>] (ignore case)
+ Delete all headers matching a regular expression in an HTTP response
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
+ Arguments :
+ <search> is the regular expression applied to HTTP headers and to the
+ response line. This is an extended regular expression, so
+ parenthesis grouping is supported and no preliminary backslash
+ is required. Any space or known delimiter must be escaped using
+ a backslash ('\'). The pattern applies to a full line at a time.
+ The "rspdel" keyword strictly matches case while "rspidel"
+ ignores case.
+
+ <cond> is an optional matching condition built from ACLs. It makes it
+ possible to ignore this rule when other conditions are not met.
+
+ Any header line matching extended regular expression <search> in the response
+ will be completely deleted. Most common use of this is to remove unwanted
+ and/or sensitive headers or cookies from a response before passing it to the
+ client.
+
+ Header transformations only apply to traffic which passes through HAProxy,
+ and not to traffic generated by HAProxy, such as health-checks or error
+ responses. Keep in mind that header names are not case-sensitive.
+
+ Example :
+ # remove the Server header from responses
+ rspidel ^Server:.*
+
+ See also: "rspadd", "rsprep", "reqdel", "http-response", section 6 about
+ HTTP header manipulation, and section 7 about ACLs.
+
+
+rspdeny <search> [{if | unless} <cond>]
+rspideny <search> [{if | unless} <cond>] (ignore case)
+ Block an HTTP response if a line matches a regular expression
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
+ Arguments :
+ <search> is the regular expression applied to HTTP headers and to the
+ response line. This is an extended regular expression, so
+ parenthesis grouping is supported and no preliminary backslash
+ is required. Any space or known delimiter must be escaped using
+ a backslash ('\'). The pattern applies to a full line at a time.
+ The "rspdeny" keyword strictly matches case while "rspideny"
+ ignores case.
+
+ <cond> is an optional matching condition built from ACLs. It makes it
+ possible to ignore this rule when other conditions are not met.
+
+ A response containing any line which matches extended regular expression
+ <search> will mark the response as denied. The test applies both to the
+ response line and to response headers. Keep in mind that header names are not
+ case-sensitive.
+
+ Main use of this keyword is to prevent sensitive information leak and to
+ block the response before it reaches the client. If a response is denied, it
+ will be replaced with an HTTP 502 error so that the client never retrieves
+ any sensitive data.
+
+ It is easier, faster and more powerful to use ACLs to write access policies.
+ Rspdeny should be avoided in new designs.
+
+ Example :
+ # Ensure that no content type matching ms-word will leak
+ rspideny ^Content-type:\ .*/ms-word
+
+ See also: "reqdeny", "acl", "block", "http-response", section 6 about
+ HTTP header manipulation and section 7 about ACLs.
+
+
+rsprep <search> <string> [{if | unless} <cond>]
+rspirep <search> <string> [{if | unless} <cond>] (ignore case)
+ Replace a regular expression with a string in an HTTP response line
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
+ Arguments :
+ <search> is the regular expression applied to HTTP headers and to the
+ response line. This is an extended regular expression, so
+ parenthesis grouping is supported and no preliminary backslash
+ is required. Any space or known delimiter must be escaped using
+ a backslash ('\'). The pattern applies to a full line at a time.
+ The "rsprep" keyword strictly matches case while "rspirep"
+ ignores case.
+
+ <string> is the complete line to be added. Any space or known delimiter
+ must be escaped using a backslash ('\'). References to matched
+ pattern groups are possible using the common \N form, with N
+ being a single digit between 0 and 9. Please refer to section
+ 6 about HTTP header manipulation for more information.
+
+ <cond> is an optional matching condition built from ACLs. It makes it
+ possible to ignore this rule when other conditions are not met.
+
+ Any line matching extended regular expression <search> in the response (both
+ the response line and header lines) will be completely replaced with
+ <string>. Most common use of this is to rewrite Location headers.
+
+ Header transformations only apply to traffic which passes through HAProxy,
+ and not to traffic generated by HAProxy, such as health-checks or error
+ responses. Note that for increased readability, it is suggested to add enough
+ spaces between the search pattern and the replacement string. Keep in mind
+ that header names are not case-sensitive.
+
+ Example :
+ # replace "Location: 127.0.0.1:8080" with "Location: www.mydomain.com"
+ rspirep ^Location:\ 127.0.0.1:8080 Location:\ www.mydomain.com
+
+ See also: "rspadd", "rspdel", "reqrep", "http-response", section 6 about
+ HTTP header manipulation, and section 7 about ACLs.
+
+
+server <name> <address>[:[port]] [param*]
+ Declare a server in a backend
+ May be used in sections : defaults | frontend | listen | backend
+ no | no | yes | yes
+ Arguments :
+ <name> is the internal name assigned to this server. This name will
+ appear in logs and alerts. If "http-send-name-header" is
+ set, it will be added to the request header sent to the server.
+
+ <address> is the IPv4 or IPv6 address of the server. Alternatively, a
+ resolvable hostname is supported, but this name will be resolved
+ during start-up. Address "0.0.0.0" or "*" has a special meaning.
+ It indicates that the connection will be forwarded to the same IP
+ address as the one from the client connection. This is useful in
+ transparent proxy architectures where the client's connection is
+ intercepted and haproxy must forward to the original destination
+ address. This is more or less what the "transparent" keyword does
+ except that with a server it's possible to limit concurrency and
+ to report statistics. Optionally, an address family prefix may be
+ used before the address to force the family regardless of the
+ address format, which can be useful to specify a path to a unix
+ socket with no slash ('/'). Currently supported prefixes are :
+ - 'ipv4@' -> address is always IPv4
+ - 'ipv6@' -> address is always IPv6
+ - 'unix@' -> address is a path to a local unix socket
+ - 'abns@' -> address is in abstract namespace (Linux only)
+ You may want to reference some environment variables in the
+ address parameter, see section 2.3 about environment
+ variables.
+
+ <port> is an optional port specification. If set, all connections will
+ be sent to this port. If unset, the same port the client
+ connected to will be used. The port may also be prefixed by a "+"
+ or a "-". In this case, the server's port will be determined by
+ adding this value to the client's port.
+
+ <param*> is a list of parameters for this server. The "server" keyword
+ accepts a large number of options and has a complete section
+ dedicated to it. Please refer to section 5 for more details.
+
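+ The "+"/"-" port prefix rule above can be sketched in a few lines. This
+ Python snippet is illustrative only (the function name is hypothetical, not
+ an HAProxy internal): a signed prefix makes the destination port an offset
+ from the client's port, otherwise the port is used as given.

```python
# Resolve the destination port from a port specification and the port the
# client connected to.
def destination_port(port_spec, client_port):
    if port_spec.startswith(("+", "-")):
        # Signed offset relative to the client's port.
        return client_port + int(port_spec)
    # Plain number: fixed destination port.
    return int(port_spec)

print(destination_port("8080", 80))   # fixed port 8080
print(destination_port("+1000", 80))  # client's port 80 + 1000 = 1080
```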
+ Examples :
+ server first 10.1.1.1:1080 cookie first check inter 1000
+ server second 10.1.1.2:1080 cookie second check inter 1000
+ server transp ipv4@
+ server backup "${SRV_BACKUP}:1080" backup
+ server www1_dc1 "${LAN_DC1}.101:80"
+ server www1_dc2 "${LAN_DC2}.101:80"
+
+ Note: regarding Linux's abstract namespace sockets, HAProxy uses the whole
+ sun_path length for the address length. Some other programs
+ such as socat use the string length only by default. Pass the option
+ ",unix-tightsocklen=0" to any abstract socket definition in socat to
+ make it compatible with HAProxy's.
+
+ See also: "default-server", "http-send-name-header" and section 5 about
+ server options
+
+server-state-file-name [<file>]
+ Set the server state file to read, load and apply to servers available in
+ this backend. It only applies when the directive "load-server-state-from-file"
+ is set to "local". When <file> is not provided or if this directive is not
+ set, then backend name is used. If <file> starts with a slash '/', then it is
+ considered as an absolute path. Otherwise, <file> is concatenated to the
+ global directive "server-state-file-base".
+
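+ The path resolution rule above can be sketched in a few lines. This Python
+ snippet is illustrative only (the function name is hypothetical, not an
+ HAProxy internal): an absolute <file> is used as-is, otherwise the name
+ (or the backend name when <file> is omitted) is appended to the global base.

```python
# Resolve the server state file path for a backend.
def state_file_path(base, backend_name, file=None):
    name = file if file is not None else backend_name
    if name.startswith("/"):
        return name                # absolute path: used as-is
    return base + "/" + name      # otherwise relative to the global base

print(state_file_path("/etc/haproxy/states", "bk"))
```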
+ Example: the minimal configuration below would make HAProxy look for the
+ server state file '/etc/haproxy/states/bk':
+
+ global
+ server-state-file-base /etc/haproxy/states
+
+ backend bk
+ load-server-state-from-file
+
+ See also: "server-state-file-base", "load-server-state-from-file", and
+ "show servers state"
+
+source <addr>[:<port>] [usesrc { <addr2>[:<port2>] | client | clientip } ]
+source <addr>[:<port>] [usesrc { <addr2>[:<port2>] | hdr_ip(<hdr>[,<occ>]) } ]
+source <addr>[:<port>] [interface <name>]
+ Set the source address for outgoing connections
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments :
+ <addr> is the IPv4 address HAProxy will bind to before connecting to a
+ server. This address is also used as a source for health checks.
+
+ The default value of 0.0.0.0 means that the system will select
+ the most appropriate address to reach its destination. Optionally
+ an address family prefix may be used before the address to force
+ the family regardless of the address format, which can be useful
+ to specify a path to a unix socket with no slash ('/'). Currently
+ supported prefixes are :
+ - 'ipv4@' -> address is always IPv4
+ - 'ipv6@' -> address is always IPv6
+ - 'unix@' -> address is a path to a local unix socket
+ - 'abns@' -> address is in abstract namespace (Linux only)
+ You may want to reference some environment variables in the
+ address parameter, see section 2.3 about environment variables.
+
+ <port> is an optional port. It is normally not needed but may be useful
+ in some very specific contexts. The default value of zero means
+ the system will select a free port. Note that port ranges are not
+ supported in the backend. If you want to force port ranges, you
+ have to specify them on each "server" line.
+
+ <addr2> is the IP address to present to the server when connections are
+ forwarded in full transparent proxy mode. This is currently only
+ supported on some patched Linux kernels. When this address is
+ specified, clients connecting to the server will be presented
+ with this address, while health checks will still use the address
+ <addr>.
+
+ <port2> is the optional port to present to the server when connections
+ are forwarded in full transparent proxy mode (see <addr2> above).
+ The default value of zero means the system will select a free
+ port.
+
+ <hdr> is the name of an HTTP header in which to fetch the IP to bind to.
+ This is the name of a comma-separated header list which can
+ contain multiple IP addresses. By default, the last occurrence is
+ used. This is designed to work with the X-Forwarded-For header
+ and to automatically bind to the client's IP address as seen
+ by previous proxy, typically Stunnel. In order to use another
+ occurrence from the last one, please see the <occ> parameter
+ below. When the header (or occurrence) is not found, no binding
+ is performed so that the proxy's default IP address is used. Also
+ keep in mind that the header name is case insensitive, as for any
+ HTTP header.
+
+ <occ> is the occurrence number of a value to be used in a multi-value
+ header. This is to be used in conjunction with "hdr_ip(<hdr>)",
+ in order to specify which occurrence to use for the source IP
+ address. Positive values indicate a position from the first
+ occurrence, 1 being the first one. Negative values indicate
+ positions relative to the last one, -1 being the last one. This
+ is helpful for situations where an X-Forwarded-For header is set
+ at the entry point of an infrastructure and must be used several
+ proxy layers away. When this value is not specified, -1 is
+ assumed. Passing a zero here disables the feature.
+
+ <name> is an optional interface name to bind to for outgoing
+ traffic. On systems supporting this feature (currently, only
+ Linux), this allows one to bind all traffic to the server to
+ this interface even if it is not the one the system would select
+ based on routing tables. This should be used with extreme care.
+ Note that using this option requires root privileges.
+
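+ The occurrence selection described for "hdr_ip(<hdr>,<occ>)" can be sketched
+ outside HAProxy. This Python snippet is illustrative only (the helper name
+ is hypothetical, not an HAProxy internal): positive <occ> counts from the
+ first value of a comma-separated header, negative from the last, and zero
+ disables the lookup.

```python
# Pick one occurrence from a comma-separated header value such as
# X-Forwarded-For. Returns None when the occurrence does not exist.
def hdr_ip(header_value, occ=-1):
    if occ == 0:
        return None                       # zero disables the feature
    values = [v.strip() for v in header_value.split(",")]
    index = occ - 1 if occ > 0 else occ   # 1-based positive, -1 is last
    try:
        return values[index]
    except IndexError:
        return None                       # occurrence not found

xff = "203.0.113.7, 10.0.0.5, 192.168.1.9"
print(hdr_ip(xff))      # last occurrence, the default
print(hdr_ip(xff, 1))   # first occurrence
```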
+ The "source" keyword is useful in complex environments where a specific
+ address only is allowed to connect to the servers. It may be needed when a
+ private address must be used through a public gateway for instance, and it is
+ known that the system cannot determine the adequate source address by itself.
+
+ An extension which is available on certain patched Linux kernels may be used
+ through the "usesrc" optional keyword. It makes it possible to connect to the
+ servers with an IP address which does not belong to the system itself. This
+ is called "full transparent proxy mode". For this to work, the destination
+ servers have to route their traffic back to this address through the machine
+ running HAProxy, and IP forwarding must generally be enabled on this machine.
+
+ In this "full transparent proxy" mode, it is possible to force a specific IP
+ address to be presented to the servers. This is not much used in fact. A more
+ common use is to tell HAProxy to present the client's IP address. For this,
+ there are two methods :
+
+ - present the client's IP and port addresses. This is the most transparent
+ mode, but it can cause problems when IP connection tracking is enabled on
+ the machine, because the same connection may be seen twice with different
+ states. However, this solution presents the huge advantage of not
+ limiting the system to the 64k outgoing address+port couples, because all
+ of the client ranges may be used.
+
+ - present only the client's IP address and select a spare port. This
+ solution is still quite elegant but slightly less transparent (downstream
+ firewall logs will not match the upstream's). It also presents the downside
+ of limiting the number of concurrent connections to the usual 64k ports.
+ However, since the upstream and downstream ports are different, local IP
+ connection tracking on the machine will not be upset by the reuse of the
+ same session.
+
+ This option sets the default source for all servers in the backend. It may
+ also be specified in a "defaults" section. Finer source address specification
+ is possible at the server level using the "source" server option. Refer to
+ section 5 for more information.
+
+ In order to work, "usesrc" requires root privileges.
+
+ Examples :
+ backend private
+ # Connect to the servers using our 192.168.1.200 source address
+ source 192.168.1.200
+
+ backend transparent_ssl1
+ # Connect to the SSL farm from the client's source address
+ source 192.168.1.200 usesrc clientip
+
+ backend transparent_ssl2
+ # Connect to the SSL farm from the client's source address and port
+ # not recommended if IP conntrack is present on the local machine.
+ source 192.168.1.200 usesrc client
+
+ backend transparent_ssl3
+ # Connect to the SSL farm from the client's source address. It
+ # is more conntrack-friendly.
+ source 192.168.1.200 usesrc clientip
+
+ backend transparent_smtp
+ # Connect to the SMTP farm from the client's source address/port
+ # with Tproxy version 4.
+ source 0.0.0.0 usesrc client
+
+ backend transparent_http
+ # Connect to the servers using the client's IP as seen by previous
+ # proxy.
+ source 0.0.0.0 usesrc hdr_ip(x-forwarded-for,-1)
+
+ See also : the "source" server option in section 5, the Tproxy patches for
+ the Linux kernel on www.balabit.com, the "bind" keyword.
+
+
+srvtimeout <timeout> (deprecated)
+ Set the maximum inactivity time on the server side.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments :
+ <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
+
+ The inactivity timeout applies when the server is expected to acknowledge or
+ send data. In HTTP mode, this timeout is particularly important to consider
+ during the first phase of the server's response, when it has to send the
+ headers, as it directly represents the server's processing time for the
+ request. To find out what value to put there, it's often good to start with
+ what would be considered as unacceptable response times, then check the logs
+ to observe the response time distribution, and adjust the value accordingly.
+
+ The value is specified in milliseconds by default, but can be in any other
+ unit if the number is suffixed by the unit, as specified at the top of this
+ document. In TCP mode (and to a lesser extent, in HTTP mode), it is highly
+ recommended that the client timeout remains equal to the server timeout in
+ order to avoid complex situations to debug. Whatever the expected server
+ response times, it is a good practice to cover at least one or several TCP
+ packet losses by specifying timeouts that are slightly above multiples of 3
+ seconds (eg: 4 or 5 seconds minimum).
+
+ This parameter is specific to backends, but can be specified once for all in
+ "defaults" sections. This is in fact one of the easiest solutions not to
+ forget about it. An unspecified timeout results in an infinite timeout, which
+ is not recommended. Such a usage is accepted and works but reports a warning
+ during startup because it may result in accumulation of expired sessions in
+ the system if the system's timeouts are not configured either.
+
+ This parameter is provided for compatibility but is currently deprecated.
+ Please use "timeout server" instead.
+
+ See also : "timeout server", "timeout tunnel", "timeout client" and
+ "clitimeout".
+
+
+stats admin { if | unless } <cond>
+ Enable statistics admin level if/unless a condition is matched
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
+
+ This statement enables the statistics admin level if/unless a condition is
+ matched.
+
+ The admin level allows enabling and disabling servers from the web
+ interface. By default, the statistics page is read-only for security reasons.
+
+ Note : Consider not using this feature in multi-process mode (nbproc > 1)
+ unless you know what you do : memory is not shared between the
+ processes, which can result in random behaviours.
+
+ Currently, the POST request is limited to the buffer size minus the reserved
+ buffer space, which means that if the list of servers is too long, the
+ request won't be processed. It is recommended to alter only a few servers
+ at a time.
+
+ Example :
+ # statistics admin level only for localhost
+ backend stats_localhost
+ stats enable
+ stats admin if LOCALHOST
+
+ Example :
+ # statistics admin level always enabled because of the authentication
+ backend stats_auth
+ stats enable
+ stats auth admin:AdMiN123
+ stats admin if TRUE
+
+ Example :
+ # statistics admin level depends on the authenticated user
+ userlist stats-auth
+ group admin users admin
+ user admin insecure-password AdMiN123
+ group readonly users haproxy
+ user haproxy insecure-password haproxy
+
+ backend stats_auth
+ stats enable
+ acl AUTH http_auth(stats-auth)
+ acl AUTH_ADMIN http_auth_group(stats-auth) admin
+ stats http-request auth unless AUTH
+ stats admin if AUTH_ADMIN
+
+ See also : "stats enable", "stats auth", "stats http-request", "nbproc",
+ "bind-process", section 3.4 about userlists and section 7 about
+ ACL usage.
+
+
+stats auth <user>:<passwd>
+ Enable statistics with authentication and grant access to an account
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments :
+ <user> is a user name to grant access to
+
+ <passwd> is the cleartext password associated to this user
+
+ This statement enables statistics with default settings, and restricts access
+ to declared users only. It may be repeated as many times as necessary to
+ allow as many users as desired. When a user tries to access the statistics
+ without a valid account, a "401 Unauthorized" response will be returned so
+ that the browser asks the user to provide a valid user name and password. The
+ realm which is returned to the browser is configurable using "stats realm".
+
+ Since the authentication method is HTTP Basic Authentication, the passwords
+ circulate in cleartext on the network. Thus, the configuration file also uses
+ cleartext passwords, to remind users that these should not be sensitive
+ passwords shared with any other account.
+
+ It is also possible to reduce the scope of the proxies which appear in the
+ report using "stats scope".
+
+ Though this statement alone is enough to enable statistics reporting, it is
+ recommended to set all other settings in order to avoid relying on default
+ unobvious parameters.
+
+ Example :
+ # public access (limited to this backend only)
+ backend public_www
+ server srv1 192.168.0.1:80
+ stats enable
+ stats hide-version
+ stats scope .
+ stats uri /admin?stats
+ stats realm Haproxy\ Statistics
+ stats auth admin1:AdMiN123
+ stats auth admin2:AdMiN321
+
+ # internal monitoring access (unlimited)
+ backend private_monitoring
+ stats enable
+ stats uri /admin?stats
+ stats refresh 5s
+
+ See also : "stats enable", "stats realm", "stats scope", "stats uri"
+
+
+stats enable
+ Enable statistics reporting with default settings
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments : none
+
+ This statement enables statistics reporting with default settings defined
+ at build time. Unless stated otherwise, these settings are used :
+ - stats uri : /haproxy?stats
+ - stats realm : "HAProxy Statistics"
+ - stats auth : no authentication
+ - stats scope : no restriction
+
+ Though this statement alone is enough to enable statistics reporting, it is
+ recommended to set all other settings in order to avoid relying on default
+ unobvious parameters.
+
+ Example :
+ # public access (limited to this backend only)
+ backend public_www
+ server srv1 192.168.0.1:80
+ stats enable
+ stats hide-version
+ stats scope .
+ stats uri /admin?stats
+ stats realm Haproxy\ Statistics
+ stats auth admin1:AdMiN123
+ stats auth admin2:AdMiN321
+
+ # internal monitoring access (unlimited)
+ backend private_monitoring
+ stats enable
+ stats uri /admin?stats
+ stats refresh 5s
+
+ See also : "stats auth", "stats realm", "stats uri"
+
+
+stats hide-version
+ Enable statistics and hide HAProxy version reporting
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments : none
+
+ By default, the stats page reports some useful status information along with
+ the statistics. Among them is HAProxy's version. However, it is generally
+ considered dangerous to report the precise version to anyone, as it can help
+ them target known weaknesses with specific attacks. The "stats hide-version"
+ statement removes the version from the statistics report. This is recommended
+ for public sites or any site with a weak login/password.
+
+ Though this statement alone is enough to enable statistics reporting, it is
+ recommended to set all other settings in order to avoid relying on default
+ unobvious parameters.
+
+ Example :
+ # public access (limited to this backend only)
+ backend public_www
+ server srv1 192.168.0.1:80
+ stats enable
+ stats hide-version
+ stats scope .
+ stats uri /admin?stats
+ stats realm Haproxy\ Statistics
+ stats auth admin1:AdMiN123
+ stats auth admin2:AdMiN321
+
+ # internal monitoring access (unlimited)
+ backend private_monitoring
+ stats enable
+ stats uri /admin?stats
+ stats refresh 5s
+
+ See also : "stats auth", "stats enable", "stats realm", "stats uri"
+
+
+stats http-request { allow | deny | auth [realm <realm>] }
+ [ { if | unless } <condition> ]
+ Access control for statistics
+
+ May be used in sections: defaults | frontend | listen | backend
+ no | no | yes | yes
+
+ As with "http-request", this set of statements allows fine-grained control of
+ access to the statistics. Each statement may be followed by an if/unless
+ ACL-based condition. The first statement whose condition matches (or which has
+ no condition) is final. For "deny" a 403 error is returned, for "allow" normal
+ processing is performed, and for "auth" a 401/407 error code is returned so
+ that the client is asked to enter a username and password.
+
+ There is no fixed limit to the number of http-request statements per
+ instance.
+
+ See also : "http-request", section 3.4 about userlists and section 7
+ about ACL usage.
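+
+ Example (illustrative only: the userlist, ACL names and network below are
+ hypothetical) :
+ # allow unauthenticated access from the management LAN,
+ # require a valid account from anywhere else
+ userlist stats-users
+ user monitor insecure-password ChangeMe
+
+ backend stats_acl
+ stats enable
+ acl FROM_MGMT src 192.168.10.0/24
+ acl AUTH_OK http_auth(stats-users)
+ stats http-request allow if FROM_MGMT
+ stats http-request auth realm Stats unless AUTH_OK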
+
+
+stats realm <realm>
+ Enable statistics and set authentication realm
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments :
+ <realm> is the name of the HTTP Basic Authentication realm reported to
+ the browser. The browser uses it to display it in the pop-up
+ inviting the user to enter a valid username and password.
+
+ The realm is read as a single word, so any spaces in it should be escaped
+ using a backslash ('\').
+
+ This statement is useful only in conjunction with "stats auth" since it is
+ only related to authentication.
+
+ Though this statement alone is enough to enable statistics reporting, it is
+ recommended to set all other settings in order to avoid relying on default
+ unobvious parameters.
+
+ Example :
+ # public access (limited to this backend only)
+ backend public_www
+ server srv1 192.168.0.1:80
+ stats enable
+ stats hide-version
+ stats scope .
+ stats uri /admin?stats
+ stats realm Haproxy\ Statistics
+ stats auth admin1:AdMiN123
+ stats auth admin2:AdMiN321
+
+ # internal monitoring access (unlimited)
+ backend private_monitoring
+ stats enable
+ stats uri /admin?stats
+ stats refresh 5s
+
+ See also : "stats auth", "stats enable", "stats uri"
+
+
+stats refresh <delay>
+ Enable statistics with automatic refresh
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments :
+ <delay> is the suggested refresh delay, specified in seconds, which will
+ be returned to the browser consulting the report page. While the
+ browser is free to apply any delay, it will generally respect it
+ and refresh the page every <delay> seconds. The refresh interval
+ may be specified in any other non-default time unit, by suffixing
+ the unit after the value, as explained at the top of this document.
+
+ This statement is useful on monitoring displays with a permanent page
+ reporting the load balancer's activity. When set, the HTML report page will
+ include a "refresh"/"stop refresh" link so that the user can choose whether
+ or not the page should refresh automatically.
+
+ Though this statement alone is enough to enable statistics reporting, it is
+ recommended to set all other settings in order to avoid relying on default
+ unobvious parameters.
+
+ Example :
+ # public access (limited to this backend only)
+ backend public_www
+ server srv1 192.168.0.1:80
+ stats enable
+ stats hide-version
+ stats scope .
+ stats uri /admin?stats
+ stats realm Haproxy\ Statistics
+ stats auth admin1:AdMiN123
+ stats auth admin2:AdMiN321
+
+ # internal monitoring access (unlimited)
+ backend private_monitoring
+ stats enable
+ stats uri /admin?stats
+ stats refresh 5s
+
+ See also : "stats auth", "stats enable", "stats realm", "stats uri"
+
+
+stats scope { <name> | "." }
+ Enable statistics and limit access scope
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments :
+ <name> is the name of a listen, frontend or backend section to be
+ reported. The special name "." (a single dot) designates the
+ section in which the statement appears.
+
+ When this statement is specified, only the sections enumerated with this
+ statement will appear in the report. All other ones will be hidden. This
+ statement may appear as many times as needed if multiple sections need to be
+ reported. Please note that the name checking is performed as a simple string
+ comparison, and that it is never verified that a given section name really
+ exists.
+
+ Though this statement alone is enough to enable statistics reporting, it is
+ recommended to set all other settings in order to avoid relying on default
+ unobvious parameters.
+
+ Example :
+ # public access (limited to this backend only)
+ backend public_www
+ server srv1 192.168.0.1:80
+ stats enable
+ stats hide-version
+ stats scope .
+ stats uri /admin?stats
+ stats realm Haproxy\ Statistics
+ stats auth admin1:AdMiN123
+ stats auth admin2:AdMiN321
+
+ # internal monitoring access (unlimited)
+ backend private_monitoring
+ stats enable
+ stats uri /admin?stats
+ stats refresh 5s
+
+ See also : "stats auth", "stats enable", "stats realm", "stats uri"
+
+
+stats show-desc [ <desc> ]
+ Enable reporting of a description on the statistics page.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+
+ <desc> is an optional description to be reported. If unspecified, the
+ description from the global section is automatically used instead.
+
+ This statement is useful for users that offer shared services to their
+ customers, where the node or description should be different for each customer.
+
+ Though this statement alone is enough to enable statistics reporting, it is
+ recommended to set all other settings in order to avoid relying on default
+ unobvious parameters. By default, the description is not shown.
+
+ Example :
+ # internal monitoring access (unlimited)
+ backend private_monitoring
+ stats enable
+ stats show-desc Master node for Europe, Asia, Africa
+ stats uri /admin?stats
+ stats refresh 5s
+
+ See also: "show-node", "stats enable", "stats uri" and "description" in
+ global section.
+
+
+stats show-legends
+ Enable reporting additional information on the statistics page
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments : none
+
+ Enable reporting additional information on the statistics page :
+ - cap: capabilities (proxy)
+ - mode: one of tcp, http or health (proxy)
+ - id: SNMP ID (proxy, socket, server)
+ - IP (socket, server)
+ - cookie (backend, server)
+
+ Though this statement alone is enough to enable statistics reporting, it is
+ recommended to set all other settings in order to avoid relying on default
+ unobvious parameters. Default behaviour is not to show this information.
+
+ See also: "stats enable", "stats uri".
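+
+ Example (reusing the illustrative monitoring backend from the other "stats"
+ examples in this section) :
+ # internal monitoring access with extra per-server details
+ backend private_monitoring
+ stats enable
+ stats show-legends
+ stats uri /admin?stats
+ stats refresh 5s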
+
+
+stats show-node [ <name> ]
+ Enable reporting of a host name on the statistics page.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments:
+ <name> is an optional name to be reported. If unspecified, the
+ node name from the global section is automatically used instead.
+
+ This statement is useful for users that offer shared services to their
+ customers, where the node or description might be different on a stats page
+ provided for each customer. The default behaviour is not to show the host name.
+
+ Though this statement alone is enough to enable statistics reporting, it is
+ recommended to set all other settings in order to avoid relying on default
+ unobvious parameters.
+
+ Example:
+ # internal monitoring access (unlimited)
+ backend private_monitoring
+ stats enable
+ stats show-node Europe-1
+ stats uri /admin?stats
+ stats refresh 5s
+
+ See also: "show-desc", "stats enable", "stats uri", and "node" in global
+ section.
+
+
+stats uri <prefix>
+ Enable statistics and define the URI prefix to access them
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments :
+ <prefix> is the prefix of any URI which will be redirected to stats. This
+ prefix may contain a question mark ('?') to indicate part of a
+ query string.
+
+ The statistics URI is intercepted on the relayed traffic, so it appears as a
+ page within the normal application. It is strongly advised to ensure that the
+ selected URI will never appear in the application, otherwise it will never be
+ possible to reach it in the application.
+
+ The default URI compiled in haproxy is "/haproxy?stats", but this may be
+ changed at build time, so it's better to always explicitly specify it here.
+ It is generally a good idea to include a question mark in the URI so that
+ intermediate proxies refrain from caching the results. Also, since any string
+ beginning with the prefix will be accepted as a stats request, the question
+ mark helps ensure that no valid URI will begin with the same words.
+
+ It is sometimes very convenient to use "/" as the URI prefix, and put that
+ statement in a "listen" instance of its own. That makes it easy to dedicate
+ an address or a port to statistics only.
+
+ Though this statement alone is enough to enable statistics reporting, it is
+ recommended to set all other settings in order to avoid relying on default
+ unobvious parameters.
+
+ Example :
+ # public access (limited to this backend only)
+ backend public_www
+ server srv1 192.168.0.1:80
+ stats enable
+ stats hide-version
+ stats scope .
+ stats uri /admin?stats
+ stats realm Haproxy\ Statistics
+ stats auth admin1:AdMiN123
+ stats auth admin2:AdMiN321
+
+ # internal monitoring access (unlimited)
+ backend private_monitoring
+ stats enable
+ stats uri /admin?stats
+ stats refresh 5s
+
+ See also : "stats auth", "stats enable", "stats realm"
+
+
+stick match <pattern> [table <table>] [{if | unless} <cond>]
+ Define a request pattern matching condition to stick a user to a server
+ May be used in sections : defaults | frontend | listen | backend
+ no | no | yes | yes
+
+ Arguments :
+ <pattern> is a sample expression rule as described in section 7.3. It
+ describes what elements of the incoming request or connection
+ will be analysed in the hope of finding a matching entry in a
+ stickiness table. This rule is mandatory.
+
+ <table> is an optional stickiness table name. If unspecified, the same
+ backend's table is used. A stickiness table is declared using
+ the "stick-table" statement.
+
+ <cond> is an optional matching condition. It makes it possible to match
+ on a certain criterion only when other conditions are met (or
+ not met). For instance, it could be used to match on a source IP
+ address except when a request passes through a known proxy, in
+ which case we'd match on a header containing that IP address.
+
+ Some protocols or applications require complex stickiness rules and cannot
+ always simply rely on cookies or hashing. The "stick match" statement
+ describes a rule to extract the stickiness criterion from an incoming request
+ or connection. See section 7 for a complete list of possible patterns and
+ transformation rules.
+
+ The table has to be declared using the "stick-table" statement. It must be of
+ a type compatible with the pattern. By default it is the one which is present
+ in the same backend. It is possible to share a table with other backends by
+ referencing it using the "table" keyword. If another table is referenced,
+ the server IDs inside the backends are used. By default, all server IDs
+ start at 1 in each backend, so the server ordering is enough. But in case of
+ doubt, it is highly recommended to force server IDs using their "id" setting.
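+
+ Example (hypothetical addresses; the explicit "id" settings keep server
+ identities aligned between the two backends sharing the table) :
+ backend bk_http
+ balance roundrobin
+ stick on src table bk_https
+ server s1 192.168.1.1:80 id 1
+ server s2 192.168.1.2:80 id 2
+
+ backend bk_https
+ mode tcp
+ balance roundrobin
+ stick-table type ip size 200k expire 30m
+ stick on src
+ server s1 192.168.1.1:443 id 1
+ server s2 192.168.1.2:443 id 2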
+
+ It is possible to restrict the conditions where a "stick match" statement
+ will apply, using "if" or "unless" followed by a condition. See section 7 for
+ ACL based conditions.
+
+ There is no limit on the number of "stick match" statements. The first that
+ applies and matches will cause the request to be directed to the same server
+ as was used for the request which created the entry. That way, multiple
+ matches can be used as fallbacks.
+
+ The stick rules are checked after the persistence cookies, so they will not
+ affect stickiness if a cookie has already been used to select a server. That
+ way, it becomes very easy to insert cookies and match on IP addresses in
+ order to maintain stickiness between HTTP and HTTPS.
+
+ Note : Consider not using this feature in multi-process mode (nbproc > 1)
+ unless you know what you do : memory is not shared between the
+ processes, which can result in random behaviours.
+
+ Example :
+ # forward SMTP users to the same server they just used for POP in the
+ # last 30 minutes
+ backend pop
+ mode tcp
+ balance roundrobin
+ stick store-request src
+ stick-table type ip size 200k expire 30m
+ server s1 192.168.1.1:110
+ server s2 192.168.1.1:110
+
+ backend smtp
+ mode tcp
+ balance roundrobin
+ stick match src table pop
+ server s1 192.168.1.1:25
+ server s2 192.168.1.1:25
+
+ See also : "stick-table", "stick on", "nbproc", "bind-process" and section 7
+ about ACLs and samples fetching.
+
+
+stick on <pattern> [table <table>] [{if | unless} <condition>]
+ Define a request pattern to associate a user to a server
+ May be used in sections : defaults | frontend | listen | backend
+ no | no | yes | yes
+
+ Note : This form is exactly equivalent to "stick match" followed by
+ "stick store-request", all with the same arguments. Please refer
+ to both keywords for details. It is only provided as a convenience
+ for writing more maintainable configurations.
+
+ Note : Consider not using this feature in multi-process mode (nbproc > 1)
+ unless you know what you do : memory is not shared between the
+ processes, which can result in random behaviours.
+
+ Examples :
+ # The following form ...
+ stick on src table pop if !localhost
+
+ # ...is strictly equivalent to this one :
+ stick match src table pop if !localhost
+ stick store-request src table pop if !localhost
+
+
+ # Use cookie persistence for HTTP, and stick on source address for HTTPS as
+ # well as HTTP without cookie. Share the same table between both accesses.
+ backend http
+ mode http
+ balance roundrobin
+ stick on src table https
+ cookie SRV insert indirect nocache
+ server s1 192.168.1.1:80 cookie s1
+ server s2 192.168.1.1:80 cookie s2
+
+ backend https
+ mode tcp
+ balance roundrobin
+ stick-table type ip size 200k expire 30m
+ stick on src
+ server s1 192.168.1.1:443
+ server s2 192.168.1.1:443
+
+ See also : "stick match", "stick store-request", "nbproc" and "bind-process".
+
+
+stick store-request <pattern> [table <table>] [{if | unless} <condition>]
+ Define a request pattern used to create an entry in a stickiness table
+ May be used in sections : defaults | frontend | listen | backend
+ no | no | yes | yes
+
+ Arguments :
+ <pattern> is a sample expression rule as described in section 7.3. It
+ describes what elements of the incoming request or connection
+ will be analysed, extracted and stored in the table once a
+ server is selected.
+
+ <table> is an optional stickiness table name. If unspecified, the same
+ backend's table is used. A stickiness table is declared using
+ the "stick-table" statement.
+
+ <cond> is an optional storage condition. It makes it possible to store
+ certain criteria only when some conditions are met (or not met).
+ For instance, it could be used to store the source IP address
+ except when the request passes through a known proxy, in which
+ case we'd store a converted form of a header containing that IP
+ address.
+
+ Some protocols or applications require complex stickiness rules and cannot
+ always simply rely on cookies or hashing. The "stick store-request" statement
+ describes a rule to decide what to extract from the request and when to do
+ it, in order to store it into a stickiness table for further requests to
+ match it using the "stick match" statement. Obviously the extracted part must
+ make sense and have a chance to be matched in a further request. Storing a
+ client's IP address for instance often makes sense. Storing an ID found in a
+ URL parameter also makes sense. Storing a source port will almost never make
+ any sense because it will be randomly matched. See section 7 for a complete
+ list of possible patterns and transformation rules.
+
+ The table has to be declared using the "stick-table" statement. It must be of
+ a type compatible with the pattern. By default it is the one which is present
+ in the same backend. It is possible to share a table with other backends by
+ referencing it using the "table" keyword. If another table is referenced,
+ the server IDs inside the backends are used. By default, all server IDs
+ start at 1 in each backend, so the server ordering is enough. But in case of
+ doubt, it is highly recommended to force server IDs using their "id" setting.
+
+ It is possible to restrict the conditions where a "stick store-request"
+ statement will apply, using "if" or "unless" followed by a condition. This
+ condition will be evaluated while parsing the request, so any criteria can be
+ used. See section 7 for ACL based conditions.
+
+ There is no limit on the number of "stick store-request" statements, but
+ there is a limit of 8 simultaneous stores per request or response. This
+ makes it possible to store up to 8 criteria, all extracted from either the
+ request or the response, regardless of the number of rules. Only the first 8
+ entries that match will be kept. Using this, it is possible to feed multiple
+ tables at once in the hope of increasing the chance of recognizing a user on
+ another protocol or access method. Using multiple store-request rules with
+ the same table is possible and may be used to find the best criterion to rely
+ on, by arranging the rules by decreasing preference order. Only the first
+ extracted criterion for a given table will be stored. All subsequent store-
+ request rules referencing the same table will be skipped and their ACLs will
+ not be evaluated.
+
+ The "store-request" rules are evaluated once the server connection has been
+ established, so that the table will contain the real server that processed
+ the request.
+
+ Note : Consider not using this feature in multi-process mode (nbproc > 1)
+ unless you know what you do : memory is not shared between the
+ processes, which can result in random behaviours.
+
+ Example :
+ # forward SMTP users to the same server they just used for POP in the
+ # last 30 minutes
+ backend pop
+ mode tcp
+ balance roundrobin
+ stick store-request src
+ stick-table type ip size 200k expire 30m
+ server s1 192.168.1.1:110
+ server s2 192.168.1.1:110
+
+ backend smtp
+ mode tcp
+ balance roundrobin
+ stick match src table pop
+ server s1 192.168.1.1:25
+ server s2 192.168.1.1:25
+
+ See also : "stick-table", "stick on", "nbproc", "bind-process" and section 7
+ about ACLs and sample fetching.
+
+
+stick-table type {ip | ipv6 | integer | string [len <length>] | binary [len <length>]}
+ size <size> [expire <expire>] [nopurge] [peers <peersect>]
+ [store <data_type>]*
+ Configure the stickiness table for the current section
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
+
+ Arguments :
+ ip a table declared with "type ip" will only store IPv4 addresses.
+ This form is very compact (about 50 bytes per entry) and allows
+ very fast entry lookup and stores with almost no overhead. This
+ is mainly used to store client source IP addresses.
+
+ ipv6 a table declared with "type ipv6" will only store IPv6 addresses.
+ This form is very compact (about 60 bytes per entry) and allows
+ very fast entry lookup and stores with almost no overhead. This
+ is mainly used to store client source IP addresses.
+
+ integer a table declared with "type integer" will store 32bit integers
+ which can represent a client identifier found in a request for
+ instance.
+
+ string a table declared with "type string" will store substrings of up
+ to <len> characters. If the string provided by the pattern
+ extractor is larger than <len>, it will be truncated before
+ being stored. During matching, at most <len> characters will be
+ compared between the string in the table and the extracted
+ pattern. When not specified, the string is automatically limited
+ to 32 characters.
+
+ binary a table declared with "type binary" will store binary blocks
+ of <len> bytes. If the block provided by the pattern
+ extractor is larger than <len>, it will be truncated before
+ being stored. If the block provided by the sample expression
+ is shorter than <len>, it will be padded by 0. When not
+ specified, the block is automatically limited to 32 bytes.
+
+ <length> is the maximum number of characters that will be stored in a
+ "string" type table (see type "string" above), or the number of
+ bytes of the block in a "binary" type table. Be careful when
+ changing this parameter as memory usage will proportionally
+ increase.
+
+ <size> is the maximum number of entries that can fit in the table. This
+ value directly impacts memory usage. Count approximately
+ 50 bytes per entry, plus the size of a string if any. The size
+ supports suffixes "k", "m", "g" for 2^10, 2^20 and 2^30 factors.
+
+ [nopurge] indicates that we refuse to purge older entries when the table
+ is full. When not specified and the table is full when haproxy
+ wants to store an entry in it, it will flush a few of the oldest
+ entries in order to release some space for the new ones. This is
+ most often the desired behaviour. In some specific cases, it may
+ be desirable to refuse new entries instead of purging the older
+ ones. That may be the case when the amount of data to store is
+ far above the hardware limits and we prefer to refuse access to
+ new clients rather than to purge entries for the ones already
+ connected. When using this parameter, be sure to properly set
+ the "expire" parameter (see below).
+
+ <peersect> is the name of the peers section to use for replication. Entries
+ which associate keys to server IDs are kept synchronized with
+ the remote peers declared in this section. All entries are also
+ automatically learned from the local peer (old process) during a
+ soft restart.
+
+ NOTE : each peers section may be referenced only by tables
+ belonging to the same unique process.
+
+ <expire> defines the maximum duration of an entry in the table since it
+ was last created, refreshed or matched. The expiration delay is
+ defined using the standard time format, similarly as the various
+ timeouts. The maximum duration is slightly above 24 days. See
+ section 2.2 for more information. If this delay is not specified,
+ the session won't automatically expire, but older entries will
+ be removed once the table is full. Be sure not to use the "nopurge"
+ parameter if no expiration delay is specified.
+
+ <data_type> is used to store additional information in the stick-table. This
+ may be used by ACLs in order to control various criteria related
+ to the activity of the client matching the stick-table. For each
+ item specified here, the size of each entry will be inflated so
+ that the additional data can fit. Several data types may be
+ stored with an entry. Multiple data types may be specified after
+ the "store" keyword, as a comma-separated list. Alternatively,
+ it is possible to repeat the "store" keyword followed by one or
+ several data types. Except for the "server_id" type which is
+ automatically detected and enabled, all data types must be
+ explicitly declared to be stored. If an ACL references a data
+ type which is not stored, the ACL will simply not match. Some
+ data types require an argument which must be passed just after
+ the type between parenthesis. See below for the supported data
+ types and their arguments.
+
+ The data types that can be stored with an entry are the following :
+ - server_id : this is an integer which holds the numeric ID of the server a
+ request was assigned to. It is used by the "stick match", "stick store",
+ and "stick on" rules. It is automatically enabled when referenced.
+
+ - gpc0 : first General Purpose Counter. It is a positive 32-bit integer
+ which may be used for anything. Most of the time it will be used
+ to put a special tag on some entries, for instance to note that a
+ specific behaviour was detected and must be known for future matches.
+
+ - gpc0_rate(<period>) : increment rate of the first General Purpose Counter
+ over a period. It is a positive 32-bit integer which may be used
+ for anything. Just like <gpc0>, it counts events, but instead of keeping
+ a cumulative count, it maintains the rate at which the counter is
+ incremented. Most of the time it will be used to measure the frequency of
+ occurrence of certain events (eg: requests to a specific URL).
+
+ - conn_cnt : Connection Count. It is a positive 32-bit integer which counts
+ the absolute number of connections received from clients which matched
+ this entry. It does not mean the connections were accepted, just that
+ they were received.
+
+ - conn_cur : Current Connections. It is a positive 32-bit integer which
+ stores the concurrent connection counts for the entry. It is incremented
+ once an incoming connection matches the entry, and decremented once the
+ connection leaves. That way it is possible to know at any time the exact
+ number of concurrent connections for an entry.
+
+ - conn_rate(<period>) : frequency counter (takes 12 bytes). It takes an
+ integer parameter <period> which indicates in milliseconds the length
+ of the period over which the average is measured. It reports the average
+ incoming connection rate over that period, in connections per period. The
+ result is an integer which can be matched using ACLs.
+
+ - sess_cnt : Session Count. It is a positive 32-bit integer which counts
+ the absolute number of sessions received from clients which matched this
+ entry. A session is a connection that was accepted by the layer 4 rules.
+
+ - sess_rate(<period>) : frequency counter (takes 12 bytes). It takes an
+ integer parameter <period> which indicates in milliseconds the length
+ of the period over which the average is measured. It reports the average
+ incoming session rate over that period, in sessions per period. The
+ result is an integer which can be matched using ACLs.
+
+ - http_req_cnt : HTTP request Count. It is a positive 32-bit integer which
+ counts the absolute number of HTTP requests received from clients which
+ matched this entry. It does not matter whether they are valid requests or
+ not. Note that this is different from sessions when keep-alive is used on
+ the client side.
+
+ - http_req_rate(<period>) : frequency counter (takes 12 bytes). It takes an
+ integer parameter <period> which indicates in milliseconds the length
+ of the period over which the average is measured. It reports the average
+ HTTP request rate over that period, in requests per period. The result is
+ an integer which can be matched using ACLs. It does not matter whether
+ they are valid requests or not. Note that this is different from sessions
+ when keep-alive is used on the client side.
+
+ - http_err_cnt : HTTP Error Count. It is a positive 32-bit integer which
+ counts the absolute number of HTTP requests errors induced by clients
+ which matched this entry. Errors are counted on invalid and truncated
+ requests, as well as on denied or tarpitted requests, and on failed
+ authentications. If the server responds with 4xx, then the request is
+ also counted as an error since it's an error triggered by the client
+ (eg: vulnerability scan).
+
+ - http_err_rate(<period>) : frequency counter (takes 12 bytes). It takes an
+ integer parameter <period> which indicates in milliseconds the length
+ of the period over which the average is measured. It reports the average
+ HTTP request error rate over that period, in requests per period (see
+ http_err_cnt above for what is accounted as an error). The result is an
+ integer which can be matched using ACLs.
+
+ - bytes_in_cnt : client to server byte count. It is a positive 64-bit
+ integer which counts the cumulated amount of bytes received from clients
+ which matched this entry. Headers are included in the count. This may be
+ used to limit abuse of upload features on photo or video servers.
+
+ - bytes_in_rate(<period>) : frequency counter (takes 12 bytes). It takes an
+ integer parameter <period> which indicates in milliseconds the length
+ of the period over which the average is measured. It reports the average
+ incoming bytes rate over that period, in bytes per period. It may be used
+ to detect users which upload too much and too fast. Warning: with large
+ uploads, it is possible that the amount of uploaded data will be counted
+ once upon termination, thus causing spikes in the average transfer speed
+ instead of having a smooth one. This may partially be smoothed with
+ "option contstats" though this is not perfect yet. Use of byte_in_cnt is
+ recommended for better fairness.
+
+ - bytes_out_cnt : server to client byte count. It is a positive 64-bit
+ integer which counts the cumulated amount of bytes sent to clients which
+ matched this entry. Headers are included in the count. This may be used
+ to limit abuse of bots sucking the whole site.
+
+ - bytes_out_rate(<period>) : frequency counter (takes 12 bytes). It takes
+ an integer parameter <period> which indicates in milliseconds the length
+ of the period over which the average is measured. It reports the average
+ outgoing bytes rate over that period, in bytes per period. It may be used
+ to detect users which download too much and too fast. Warning: with large
+ transfers, it is possible that the amount of transferred data will be
+ counted once upon termination, thus causing spikes in the average
+ transfer speed instead of having a smooth one. This may partially be
+ smoothed with "option contstats" though this is not perfect yet. Use of
+ bytes_out_cnt is recommended for better fairness.
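+
+ As an illustrative sketch (the thresholds below are arbitrary), several of
+ the data types above may be stored in a single table and matched using
+ their corresponding ACL fetches:
+
+ # At most 10 concurrent connections per source address, and at most
+ # 100 HTTP requests per 10 seconds.
+ stick-table type ip size 100k expire 30s store conn_cur,http_req_rate(10s)
+ tcp-request connection track-sc0 src
+ tcp-request connection reject if { src_conn_cur ge 10 }
+ http-request deny if { sc0_http_req_rate gt 100 }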
+
+ There is only one stick-table per proxy. At the moment of writing this doc,
+ it does not seem useful to have multiple tables per proxy. If this happens
+ to be required, simply create a dummy backend with a stick-table in it and
+ reference it.
+
+ It is important to understand that stickiness based on learning information
+ has some limitations, including the fact that all learned associations are
+ lost upon restart. In general it can be good as a complement but not always
+ as an exclusive stickiness.
+
+ Last, memory requirements may be significant when storing many data types.
+ Indeed, storing all indicators above at once in each entry requires 116 bytes
+ per entry, or 116 MB for a 1-million entries table. This is definitely not
+ something that can be ignored.
+
+ Example:
+ # Keep track of counters of up to 1 million IP addresses over 5 minutes
+ # and store a general purpose counter and the average connection rate
+ # computed over a sliding window of 30 seconds.
+ stick-table type ip size 1m expire 5m store gpc0,conn_rate(30s)
+
+ See also : "stick match", "stick on", "stick store-request", section 2.2
+ about time format and section 7 about ACLs.
+
+
+stick store-response <pattern> [table <table>] [{if | unless} <condition>]
+ Define a response pattern used to create an entry in a stickiness table
+ May be used in sections : defaults | frontend | listen | backend
+ no | no | yes | yes
+
+ Arguments :
+ <pattern> is a sample expression rule as described in section 7.3. It
+ describes what elements of the response or connection will
+ be analysed, extracted and stored in the table once a
+ server is selected.
+
+ <table> is an optional stickiness table name. If unspecified, the same
+ backend's table is used. A stickiness table is declared using
+ the "stick-table" statement.
+
+ <cond> is an optional storage condition. It makes it possible to store
+ certain criteria only when some conditions are met (or not met).
+ For instance, it could be used to store the SSL session ID only
+ when the response is an SSL server hello.
+
+ Some protocols or applications require complex stickiness rules and cannot
+ always simply rely on cookies nor hashing. The "stick store-response"
+ statement describes a rule to decide what to extract from the response and
+ when to do it, in order to store it into a stickiness table for further
+ requests to match it using the "stick match" statement. Obviously the
+ extracted part must make sense and have a chance to be matched in a further
+ request. Storing an ID found in a header of a response makes sense.
+ See section 7 for a complete list of possible patterns and transformation
+ rules.
+
+ The table has to be declared using the "stick-table" statement. It must be of
+ a type compatible with the pattern. By default it is the one which is present
+ in the same backend. It is possible to share a table with other backends by
+ referencing it using the "table" keyword. If another table is referenced,
+ the server IDs inside the backends are used. By default, all server IDs
+ start at 1 in each backend, so the server ordering is enough. But in case of
+ doubt, it is highly recommended to force server IDs using their "id" setting.
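+
+ As a hedged sketch (backend names and addresses are hypothetical), forcing
+ identical server IDs allows a table learned in one backend to be matched
+ from another:
+
+ backend primary
+ stick-table type binary len 32 size 30k expire 30m
+ stick store-response payload_lv(43,1)
+ server srv1 10.0.0.1:443 id 1
+ server srv2 10.0.0.2:443 id 2
+
+ backend secondary
+ stick match payload_lv(43,1) table primary
+ server srv1 10.0.1.1:443 id 1
+ server srv2 10.0.1.2:443 id 2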
+
+ It is possible to restrict the conditions where a "stick store-response"
+ statement will apply, using "if" or "unless" followed by a condition. This
+ condition will be evaluated while parsing the response, so any criteria can
+ be used. See section 7 for ACL based conditions.
+
+ There is no limit on the number of "stick store-response" statements, but
+ there is a limit of 8 simultaneous stores per request or response. This
+ makes it possible to store up to 8 criteria, all extracted from either the
+ request or the response, regardless of the number of rules. Only the 8 first
+ ones which match will be kept. Using this, it is possible to feed multiple
+ tables at once in the hope of increasing the chance to recognize a user on
+ another protocol or access method. Using multiple store-response rules with
+ the same table is possible and may be used to find the best criterion to rely
+ on, by arranging the rules by decreasing preference order. Only the first
+ extracted criterion for a given table will be stored. All subsequent store-
+ response rules referencing the same table will be skipped and their ACLs will
+ not be evaluated. However, even if a store-request rule references a table, a
+ store-response rule may also use the same table. This means that each table
+ may learn exactly one element from the request and one element from the
+ response at once.
+
+ The table will contain the real server that processed the request.
+
+ Example :
+ # Learn SSL session ID from both request and response and create affinity.
+ backend https
+ mode tcp
+ balance roundrobin
+ # maximum SSL session ID length is 32 bytes.
+ stick-table type binary len 32 size 30k expire 30m
+
+ acl clienthello req_ssl_hello_type 1
+ acl serverhello rep_ssl_hello_type 2
+
+ # use tcp content accepts to detect SSL client and server hellos.
+ tcp-request inspect-delay 5s
+ tcp-request content accept if clienthello
+
+ # no timeout on response inspect delay by default.
+ tcp-response content accept if serverhello
+
+ # SSL session ID (SSLID) may be present on a client or server hello.
+ # Its length is coded on 1 byte at offset 43 and its value starts
+ # at offset 44.
+
+ # Match and learn on request if client hello.
+ stick on payload_lv(43,1) if clienthello
+
+ # Learn on response if server hello.
+ stick store-response payload_lv(43,1) if serverhello
+
+ server s1 192.168.1.1:443
+ server s2 192.168.1.2:443
+
+ See also : "stick-table", "stick on", and section 7 about ACLs and pattern
+ extraction.
+
+
+tcp-check connect [params*]
+ Opens a new connection
+ May be used in sections: defaults | frontend | listen | backend
+ no | no | yes | yes
+
+ When an application spans more than a single TCP port or when HAProxy
+ load-balances many services in a single backend, it makes sense to probe all
+ the services individually before considering a server as operational.
+
+ When no TCP port is configured on the server line and there is no server
+ "port" directive, then 'tcp-check connect port <port>' must be the first
+ step of the sequence.
+
+ When a tcp-check ruleset uses 'connect' rules, it is mandatory to start the
+ ruleset with a 'connect' rule. The purpose is to ensure administrators know
+ what they are doing.
+
+ Parameters :
+ They are optional and can be used to describe how HAProxy should open and
+ use the TCP connection.
+
+ port if not set, check port or server port is used.
+ It tells HAProxy where to open the connection to.
+ <port> must be a valid TCP port, i.e. an integer from 1 to 65535.
+
+ send-proxy send a PROXY protocol string
+
+ ssl opens a ciphered connection
+
+ Examples:
+ # check HTTP and HTTPS services on a server.
+ # first open port 80 thanks to server line port directive, then
+ # tcp-check opens port 443, ciphered, and runs a request on it:
+ option tcp-check
+ tcp-check connect
+ tcp-check send GET\ /\ HTTP/1.0\r\n
+ tcp-check send Host:\ haproxy.1wt.eu\r\n
+ tcp-check send \r\n
+ tcp-check expect rstring (2..|3..)
+ tcp-check connect port 443 ssl
+ tcp-check send GET\ /\ HTTP/1.0\r\n
+ tcp-check send Host:\ haproxy.1wt.eu\r\n
+ tcp-check send \r\n
+ tcp-check expect rstring (2..|3..)
+ server www 10.0.0.1 check port 80
+
+ # check both POP and IMAP from a single server:
+ option tcp-check
+ tcp-check connect port 110
+ tcp-check expect string +OK\ POP3\ ready
+ tcp-check connect port 143
+ tcp-check expect string *\ OK\ IMAP4\ ready
+ server mail 10.0.0.1 check
+
+ See also : "option tcp-check", "tcp-check send", "tcp-check expect"
+
+
+tcp-check expect [!] <match> <pattern>
+ Specify data to be collected and analysed during a generic health check
+ May be used in sections: defaults | frontend | listen | backend
+ no | no | yes | yes
+
+ Arguments :
+ <match> is a keyword indicating how to look for a specific pattern in the
+ response. The keyword may be one of "string", "rstring" or
+ "binary".
+ The keyword may be preceded by an exclamation mark ("!") to negate
+ the match. Spaces are allowed between the exclamation mark and the
+ keyword. See below for more details on the supported keywords.
+
+ <pattern> is the pattern to look for. It may be a string or a regular
+ expression. If the pattern contains spaces, they must be escaped
+ with the usual backslash ('\').
+ If the match is set to "binary", then the pattern must be passed as
+ a series of hexadecimal digits of even length. Each sequence of
+ two digits will represent a byte. The hexadecimal digits may be
+ used upper or lower case.
+
+ The available matches are intentionally similar to their http-check cousins :
+
+ string <string> : test the exact string matches in the response buffer.
+ A health check response will be considered valid if the
+ response's buffer contains this exact string. If the
+ "string" keyword is prefixed with "!", then the response
+ will be considered invalid if the body contains this
+ string. This can be used to look for a mandatory pattern
+ in a protocol response, or to detect a failure when a
+ specific error appears in a protocol banner.
+
+ rstring <regex> : test a regular expression on the response buffer.
+ A health check response will be considered valid if the
+ response's buffer matches this expression. If the
+ "rstring" keyword is prefixed with "!", then the response
+ will be considered invalid if the body matches the
+ expression.
+
+ binary <hexstring> : test that the exact string, in its hexadecimal form,
+ matches in the response buffer. A health check response
+ will be considered valid if the response's buffer
+ contains this exact hexadecimal string.
+ The purpose is to match data on binary protocols.
+
+ It is important to note that the responses will be limited to a certain size
+ defined by the global "tune.chksize" option, which defaults to 16384 bytes.
+ Thus, too large responses may not contain the mandatory pattern when using
+ "string", "rstring" or binary. If a large response is absolutely required, it
+ is possible to change the default max size by setting the global variable.
+ However, it is worth keeping in mind that parsing very large responses can
+ waste some CPU cycles, especially when regular expressions are used, and that
+ it is always better to focus the checks on smaller resources. Also, in its
+ current state, the check will not find any string nor regex past a null
+ character in the response. Similarly it is not possible to request matching
+ the null character.
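+
+ As a sketch, when a larger response genuinely needs to be inspected, the
+ limit may be raised in the global section (the value below is purely
+ illustrative):
+
+ global
+ tune.chksize 32768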
+
+ Examples :
+ # perform a POP check
+ option tcp-check
+ tcp-check expect string +OK\ POP3\ ready
+
+ # perform an IMAP check
+ option tcp-check
+ tcp-check expect string *\ OK\ IMAP4\ ready
+
+ # look for the redis master server
+ option tcp-check
+ tcp-check send PING\r\n
+ tcp-check expect string +PONG
+ tcp-check send info\ replication\r\n
+ tcp-check expect string role:master
+ tcp-check send QUIT\r\n
+ tcp-check expect string +OK
+
+ See also : "option tcp-check", "tcp-check connect", "tcp-check send",
+ "tcp-check send-binary", "http-check expect", tune.chksize
+
+
+tcp-check send <data>
+ Specify a string to be sent as a question during a generic health check
+ May be used in sections: defaults | frontend | listen | backend
+ no | no | yes | yes
+
+ <data> : the data to be sent as a question during a generic health check
+ session. For now, <data> must be a string.
+
+ Examples :
+ # look for the redis master server
+ option tcp-check
+ tcp-check send info\ replication\r\n
+ tcp-check expect string role:master
+
+ See also : "option tcp-check", "tcp-check connect", "tcp-check expect",
+ "tcp-check send-binary", tune.chksize
+
+
+tcp-check send-binary <hexastring>
+ Specify a hexadecimal string to be sent as binary data during a raw TCP
+ health check
+ May be used in sections: defaults | frontend | listen | backend
+ no | no | yes | yes
+
+ <hexastring> : the binary data to be sent during the health check session,
+ given as a series of hexadecimal digits of even length.
+ Each sequence of two digits represents one byte.
+ The purpose is to send binary questions on binary
+ protocols.
+
+ Examples :
+ # redis check in binary
+ option tcp-check
+ tcp-check send-binary 50494e470d0a # PING\r\n
+ tcp-check expect binary 2b504F4e47 # +PONG
+
+ See also : "option tcp-check", "tcp-check connect", "tcp-check expect",
+ "tcp-check send", tune.chksize
+
+
+tcp-request connection <action> [{if | unless} <condition>]
+ Perform an action on an incoming connection depending on a layer 4 condition
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | no
+ Arguments :
+ <action> defines the action to perform if the condition applies. See
+ below.
+
+ <condition> is a standard layer4-only ACL-based condition (see section 7).
+
+ Immediately after acceptance of a new incoming connection, it is possible to
+ evaluate some conditions to decide whether this connection must be accepted
+ or dropped or have its counters tracked. Those conditions cannot make use of
+ any data contents because the connection has not been read from yet, and the
+ buffers are not yet allocated. This is used to selectively and very quickly
+ accept or drop connections from various sources with a very low overhead. If
+ some contents need to be inspected in order to take the decision, the
+ "tcp-request content" statements must be used instead.
+
+ The "tcp-request connection" rules are evaluated in their exact declaration
+ order. If no rule matches or if there is no rule, the default action is to
+ accept the incoming connection. There is no specific limit to the number of
+ rules which may be inserted.
+
+ Several types of actions are supported :
+ - accept :
+ accepts the connection if the condition is true (when used with "if")
+ or false (when used with "unless"). The first such rule executed ends
+ the rules evaluation.
+
+ - reject :
+ rejects the connection if the condition is true (when used with "if")
+ or false (when used with "unless"). The first such rule executed ends
+ the rules evaluation. Rejected connections do not even become a
+ session, which is why they are accounted separately for in the stats,
+ as "denied connections". They are not considered for the session
+ rate-limit and are not logged either. The reason is that these rules
+ should only be used to filter extremely high connection rates such as
+ the ones encountered during a massive DDoS attack. Under these extreme
+ conditions, the simple action of logging each event would make the
+ system collapse and would considerably lower the filtering capacity. If
+ logging is absolutely desired, then "tcp-request content" rules should
+ be used instead.
+
+ - expect-proxy layer4 :
+ configures the client-facing connection to receive a PROXY protocol
+ header before any byte is read from the socket. This is equivalent to
+ having the "accept-proxy" keyword on the "bind" line, except that using
+ the TCP rule allows the PROXY protocol to be accepted only for certain
+ IP address ranges using an ACL. This is convenient when multiple layers
+ of load balancers are passed through by traffic coming from public
+ hosts.
+
+ - capture <sample> len <length> :
+ This only applies to "tcp-request content" rules. It captures sample
+ expression <sample> from the request buffer, and converts it to a
+ string of at most <length> characters. The resulting string is stored into
+ the next request "capture" slot, so it will possibly appear next to
+ some captured HTTP headers. It will then automatically appear in the
+ logs, and it will be possible to extract it using sample fetch rules to
+ feed it into headers or anything. The length should be limited given
+ that this size will be allocated for each capture during the whole
+ session life. Please check section 7.3 (Fetching samples) and "capture
+ request header" for more information.
+
+ - { track-sc0 | track-sc1 | track-sc2 } <key> [table <table>] :
+ enables tracking of sticky counters from current connection. These
+ rules do not stop evaluation and do not change default action. 3 sets
+ of counters may be simultaneously tracked by the same connection. The
+ first "track-sc0" rule executed enables tracking of the counters of the
+ specified table as the first set. The first "track-sc1" rule executed
+ enables tracking of the counters of the specified table as the second
+ set. The first "track-sc2" rule executed enables tracking of the
+ counters of the specified table as the third set. It is a recommended
+ practice to use the first set of counters for the per-frontend counters
+ and the second set for the per-backend ones. But this is just a
+ guideline, all may be used everywhere.
+
+ These actions take one or two arguments :
+ <key> is mandatory, and is a sample expression rule as described
+ in section 7.3. It describes what elements of the incoming
+ request or connection will be analysed, extracted, combined,
+ and used to select the table entry whose counters will be
+ updated.
+ Note that "tcp-request connection" cannot use content-based
+ fetches.
+
+ <table> is an optional table to be used instead of the default one,
+ which is the stick-table declared in the current proxy. All
+ the counters for the matches and updates for the key will
+ then be performed in that table until the session ends.
+
+ Once a "track-sc*" rule is executed, the key is looked up in the table
+ and if it is not found, an entry is allocated for it. Then a pointer to
+ that entry is kept during all the session's life, and this entry's
+ counters are updated as often as possible, every time the session's
+ counters are updated, and also systematically when the session ends.
+ Counters are only updated for events that happen after the tracking has
+ been started. For example, connection counters will not be updated when
+ tracking layer 7 information, since the connection event happens before
+ layer7 information is extracted.
+
+ If the entry tracks concurrent connection counters, one connection is
+ counted for as long as the entry is tracked, and the entry will not
+ expire during that time. Tracking counters also provides a performance
+ advantage over just checking the keys, because only one table lookup is
+ performed for all ACL checks that make use of it.
+
+ - sc-inc-gpc0(<sc-id>):
+ The "sc-inc-gpc0" increments the GPC0 counter according to the sticky
+ counter designated by <sc-id>. If an error occurs, this action silently
+ fails and the actions evaluation continues.
+
+ - sc-set-gpt0(<sc-id>) <int>:
+ This action sets the GPT0 tag according to the sticky counter designated
+ by <sc-id> and the value of <int>. The expected result is a boolean. If
+ an error occurs, this action silently fails and the actions evaluation
+ continues.
+
+ - "silent-drop" :
+ This stops the evaluation of the rules and makes the client-facing
+ connection suddenly disappear using a system-dependent way that tries
+ to prevent the client from being notified. The effect is then that the
+ client still sees an established connection while there is none on
+ HAProxy. The purpose is to achieve a comparable effect to "tarpit"
+ except that it doesn't use any local resource at all on the machine
+ running HAProxy. It can resist much higher loads than "tarpit", and
+ slow down stronger attackers. It is important to understand the impact
+ of using this mechanism. All stateful equipment placed between the
+ client and HAProxy (firewalls, proxies, load balancers) will also keep
+ the established connection for a long time and may suffer from this
+ action. On modern Linux systems running with enough privileges, the
+ TCP_REPAIR socket option is used to block the emission of a TCP
+ reset. On other systems, the socket's TTL is reduced to 1 so that the
+ TCP reset doesn't pass the first router, though it's still delivered to
+ local networks. Do not use it unless you fully understand how it works.
+
+ Note that the "if/unless" condition is optional. If no condition is set on
+ the action, it is simply performed unconditionally. That can be useful for
+ "track-sc*" actions as well as for changing the default action to a reject.
+
+ Example: accept all connections from white-listed hosts, reject too fast
+ connection without counting them, and track accepted connections.
+ This results in connection rate being capped from abusive sources.
+
+ tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
+ tcp-request connection reject if { src_conn_rate gt 10 }
+ tcp-request connection track-sc0 src
+
+ Example: accept all connections from white-listed hosts, count all other
+ connections and reject too fast ones. This results in abusive ones
+ being blocked as long as they don't slow down.
+
+ tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
+ tcp-request connection track-sc0 src
+ tcp-request connection reject if { sc0_conn_rate gt 10 }
+
+ Example: enable the PROXY protocol for traffic coming from all known proxies.
+
+ tcp-request connection expect-proxy layer4 if { src -f proxies.lst }
+
+ See section 7 about ACL usage.
+
+ See also : "tcp-request content", "stick-table"
+
+
+tcp-request content <action> [{if | unless} <condition>]
+ Perform an action on a new session depending on a layer 4-7 condition
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
+ Arguments :
+ <action> defines the action to perform if the condition applies. See
+ below.
+
+ <condition> is a standard layer 4-7 ACL-based condition (see section 7).
+
+ A request's contents can be analysed at an early stage of request processing
+ called "TCP content inspection". During this stage, ACL-based rules are
+ evaluated every time the request contents are updated, until either an
+ "accept" or a "reject" rule matches, or the TCP request inspection delay
+ expires with no matching rule.
+
+ The first difference between these rules and "tcp-request connection" rules
+ is that "tcp-request content" rules can make use of contents to take a
+ decision. Most often, these decisions will consider a protocol recognition or
+ validity. The second difference is that content-based rules can be used in
+ both frontends and backends. In case of HTTP keep-alive with the client, all
+ tcp-request content rules are evaluated again for each request, so haproxy
+ keeps a record of what sticky counters were assigned by a "tcp-request
+ connection" versus a "tcp-request content" rule, and flushes all the
+ content-related ones after processing an HTTP request, so that they may be
+ assigned again when the rules are evaluated for the next request.
+ This is of particular importance
+ when the rule tracks some L7 information or when it is conditioned by an
+ L7-based ACL, since tracking may change between requests.
+
+ Content-based rules are evaluated in their exact declaration order. If no
+ rule matches or if there is no rule, the default action is to accept the
+ contents. There is no specific limit to the number of rules which may be
+ inserted.
+
+ Several types of actions are supported :
+ - accept : the request is accepted
+ - reject : the request is rejected and the connection is closed
+ - capture : the specified sample expression is captured
+ - { track-sc0 | track-sc1 | track-sc2 } <key> [table <table>]
+ - sc-inc-gpc0(<sc-id>)
+ - sc-set-gpt0(<sc-id>) <int>
+ - set-var(<var-name>) <expr>
+ - silent-drop
+
+ They have the same meaning as their counterparts in "tcp-request connection"
+ so please refer to that section for a complete description.
+
+ While there is nothing mandatory about it, it is recommended to use the
+ track-sc0 in "tcp-request connection" rules, track-sc1 for "tcp-request
+ content" rules in the frontend, and track-sc2 for "tcp-request content"
+ rules in the backend, because that makes the configuration more readable
+ and easier to troubleshoot, but this is just a guideline and all counters
+ may be used everywhere.
+
+ Note that the "if/unless" condition is optional. If no condition is set on
+ the action, it is simply performed unconditionally. That can be useful for
+ "track-sc*" actions as well as for changing the default action to a reject.
+
+ It is perfectly possible to match layer 7 contents with "tcp-request content"
+ rules, since HTTP-specific ACL matches are able to preliminarily parse the
+ contents of a buffer before extracting the required data. If the buffered
+ contents do not parse as a valid HTTP message, then the ACL does not match.
+ The parser which is involved there is exactly the same as for all other HTTP
+ processing, so there is no risk of parsing something differently. In an HTTP
+ backend connected to from an HTTP frontend, it is guaranteed that HTTP
+ contents will always be immediately present when the rule is evaluated first.
+
+ Tracking layer7 information is also possible provided that the information
+ is present when the rule is processed. The rule processing engine is able to
+ wait until the inspect delay expires when the data to be tracked is not yet
+ available.
+
+ The "set-var" is used to set the content of a variable. The variable is
+ declared inline.
+
+ <var-name> The name of the variable starts with an indication of its scope.
+ The allowed scopes are:
+ "sess" : the variable is shared with the whole session,
+ "txn" : the variable is shared with the whole transaction
+ (request and response),
+ "req" : the variable is shared only during request
+ processing,
+ "res" : the variable is shared only during response
+ processing.
+ This prefix is followed by a name. The separator is a '.'.
+ The name may only contain characters 'a-z', 'A-Z', '0-9' and '_'.
+
+ <expr> Is a standard HAProxy expression formed by a sample-fetch
+ followed by some converters.
+
+ Example:
+
+ tcp-request content set-var(sess.my_var) src
+
+ Example:
+ # Accept HTTP requests containing a Host header saying "example.com"
+ # and reject everything else.
+ acl is_host_com hdr(Host) -i example.com
+ tcp-request inspect-delay 30s
+ tcp-request content accept if is_host_com
+ tcp-request content reject
+
+ Example:
+ # reject SMTP connection if client speaks first
+ tcp-request inspect-delay 30s
+ acl content_present req_len gt 0
+ tcp-request content reject if content_present
+
+ # Forward HTTPS connection only if client speaks
+ tcp-request inspect-delay 30s
+ acl content_present req_len gt 0
+ tcp-request content accept if content_present
+ tcp-request content reject
+
+ Example:
+ # Track the last IP from X-Forwarded-For
+ tcp-request inspect-delay 10s
+ tcp-request content track-sc0 hdr(x-forwarded-for,-1)
+
+ Example:
+ # track request counts per "base" (concatenation of Host+URL)
+ tcp-request inspect-delay 10s
+ tcp-request content track-sc0 base table req-rate
+
+ Example: track per-frontend and per-backend counters, block abusers at the
+ frontend when the backend detects abuse.
+
+ frontend http
+ # Use General Purpose Counter 0 in SC0 as a global abuse counter
+ # protecting all our sites
+ stick-table type ip size 1m expire 5m store gpc0
+ tcp-request connection track-sc0 src
+ tcp-request connection reject if { sc0_get_gpc0 gt 0 }
+ ...
+ use_backend http_dynamic if { path_end .php }
+
+ backend http_dynamic
+ # if a source makes too fast requests to this dynamic site (tracked
+ # by SC1), block it globally in the frontend.
+ stick-table type ip size 1m expire 5m store http_req_rate(10s)
+ acl click_too_fast sc1_http_req_rate gt 10
+ acl mark_as_abuser sc0_inc_gpc0 gt 0
+ tcp-request content track-sc1 src
+ tcp-request content reject if click_too_fast mark_as_abuser
+
+ See section 7 about ACL usage.
+
+ See also : "tcp-request connection", "tcp-request inspect-delay"
+
+
+tcp-request inspect-delay <timeout>
+ Set the maximum allowed time to wait for data during content inspection
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | yes
+ Arguments :
+ <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
+
+ People using haproxy primarily as a TCP relay are often worried about the
+ risk of passing any type of protocol to a server without any analysis. In
+ order to be able to analyze the request contents, we must first withhold
+ the data then analyze them. This statement simply enables withholding of
+ data for at most the specified amount of time.
+
+ TCP content inspection applies very early when a connection reaches a
+ frontend, then very early when the connection is forwarded to a backend. This
+ means that a connection may experience a first delay in the frontend and a
+ second delay in the backend if both have tcp-request rules.
+
+ Note that when performing content inspection, haproxy will evaluate the whole
+ set of rules for every new chunk which gets in, taking into account the fact that
+ those data are partial. If no rule matches before the aforementioned delay,
+ a last check is performed upon expiration, this time considering that the
+ contents are definitive. If no delay is set, haproxy will not wait at all
+ and will immediately apply a verdict based on the available information.
+ Obviously this is unlikely to be very useful and might even be racy, so such
+ setups are not recommended.
+
+ As soon as a rule matches, the request is released and continues as usual. If
+ the timeout is reached and no rule matches, the default policy will be to let
+ it pass through unaffected.
+
+ For most protocols, it is enough to set it to a few seconds, as most clients
+ send the full request immediately upon connection. Add 3 or more seconds to
+ cover TCP retransmits but that's all. For some protocols, it may make sense
+ to use large values, for instance to ensure that the client never talks
+ before the server (eg: SMTP), or to wait for a client to talk before passing
+ data to the server (eg: SSL). Note that the client timeout must cover at
+ least the inspection delay, otherwise it will expire first. If the client
+ closes the connection or if the buffer is full, the delay immediately expires
+ since the contents will not be able to change anymore.
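+
+ As an illustrative sketch (the frontend name is made up), a TCP frontend
+ on an SSL port can wait for a TLS ClientHello and reject clients which
+ never send one :
+
+ Example :
+ frontend ft_ssl
+ mode tcp
+ bind :443
+ tcp-request inspect-delay 5s
+ tcp-request content accept if { req_ssl_hello_type 1 }
+ tcp-request content reject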
+
+ See also : "tcp-request content accept", "tcp-request content reject",
+ "timeout client".
+
+
+tcp-response content <action> [{if | unless} <condition>]
+ Perform an action on a session response depending on a layer 4-7 condition
+ May be used in sections : defaults | frontend | listen | backend
+ no | no | yes | yes
+ Arguments :
+ <action> defines the action to perform if the condition applies. See
+ below.
+
+ <condition> is a standard layer 4-7 ACL-based condition (see section 7).
+
+ Response contents can be analysed at an early stage of response processing
+ called "TCP content inspection". During this stage, ACL-based rules are
+ evaluated every time the response contents are updated, until either an
+ "accept", "close" or a "reject" rule matches, or a TCP response inspection
+ delay is set and expires with no matching rule.
+
+ Most often, these decisions will consider a protocol recognition or validity.
+
+ Content-based rules are evaluated in their exact declaration order. If no
+ rule matches or if there is no rule, the default action is to accept the
+ contents. There is no specific limit to the number of rules which may be
+ inserted.
+
+ Several types of actions are supported :
+ - accept :
+ accepts the response if the condition is true (when used with "if")
+ or false (when used with "unless"). The first such rule executed ends
+ the rules evaluation.
+
+ - close :
+ immediately closes the connection with the server if the condition is
+ true (when used with "if"), or false (when used with "unless"). The
+ first such rule executed ends the rules evaluation. The main purpose of
+ this action is to force a connection to be finished between a client
+ and a server after an exchange when the application protocol expects
+ some long timeouts to elapse first. The goal is to eliminate idle
+ connections which take significant resources on servers with certain
+ protocols.
+
+ - reject :
+ rejects the response if the condition is true (when used with "if")
+ or false (when used with "unless"). The first such rule executed ends
+ the rules evaluation. Rejected sessions are immediately closed.
+
+ - set-var(<var-name>) <expr>
+ Sets a variable.
+
+ - sc-inc-gpc0(<sc-id>):
+ This action increments the GPC0 counter according to the sticky
+ counter designated by <sc-id>. If an error occurs, this action fails
+ silently and the actions evaluation continues.
+
+ - sc-set-gpt0(<sc-id>) <int> :
+ This action sets the GPT0 tag according to the sticky counter designated
+ by <sc-id> and the value of <int>. The expected result is a boolean. If
+ an error occurs, this action silently fails and the actions evaluation
+ continues.
+
+ - "silent-drop" :
+ This stops the evaluation of the rules and makes the client-facing
+ connection suddenly disappear using a system-dependent way that tries
+ to prevent the client from being notified. The effect is then that the
+ client still sees an established connection while there's none on
+ HAProxy. The purpose is to achieve a comparable effect to "tarpit"
+ except that it doesn't use any local resource at all on the machine
+ running HAProxy. It can resist much higher loads than "tarpit", and
+ slow down stronger attackers. It is important to understand the impact
+ of using this mechanism. All stateful equipment placed between the
+ client and HAProxy (firewalls, proxies, load balancers) will also keep
+ the established connection for a long time and may suffer from this
+ action. On modern Linux systems running with enough privileges, the
+ TCP_REPAIR socket option is used to block the emission of a TCP
+ reset. On other systems, the socket's TTL is reduced to 1 so that the
+ TCP reset doesn't pass the first router, though it's still delivered to
+ local networks. Do not use it unless you fully understand how it works.
+
+ Note that the "if/unless" condition is optional. If no condition is set on
+ the action, it is simply performed unconditionally. That can be useful
+ for changing the default action to a reject.
+
+ It is perfectly possible to match layer 7 contents with "tcp-response
+ content" rules, but then it is important to ensure that a full response has
+ been buffered, otherwise no contents will match. In order to achieve this,
+ the best solution involves detecting the HTTP protocol during the inspection
+ period.
+
+ The "set-var" is used to set the content of a variable. The variable is
+ declared inline.
+
+ <var-name> The name of the variable starts with an indication about its scope.
+ The allowed scopes are:
+ "sess" : the variable is shared with the whole session,
+ "txn" : the variable is shared with the whole transaction
+ (request and response)
+ "req" : the variable is shared only during the request
+ processing
+ "res" : the variable is shared only during the response
+ processing.
+ This prefix is followed by a name. The separator is a '.'.
+ The name may only contain characters 'a-z', 'A-Z', '0-9' and '_'.
+
+ <expr> Is a standard HAProxy expression formed by a sample-fetch
+ followed by some converters.
+
+ Example:
+
+ tcp-response content set-var(sess.my_var) src
+
+ See section 7 about ACL usage.
+
+ See also : "tcp-request content", "tcp-response inspect-delay"
+
+
+tcp-response inspect-delay <timeout>
+ Set the maximum allowed time to wait for a response during content inspection
+ May be used in sections : defaults | frontend | listen | backend
+ no | no | yes | yes
+ Arguments :
+ <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
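+
+ As a sketch (names and addresses are illustrative), a TCP listener can
+ wait for the server's greeting banner and reject the session if the
+ server stays silent, assuming an SMTP server which greets with "220" :
+
+ Example :
+ listen smtp_relay
+ mode tcp
+ bind :25
+ tcp-response inspect-delay 10s
+ tcp-response content accept if { res.payload(0,3) -m str 220 }
+ tcp-response content reject
+ server mail1 192.0.2.10:25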
+
+ See also : "tcp-response content", "tcp-request inspect-delay".
+
+
+timeout check <timeout>
+ Set additional check timeout, but only after a connection has been already
+ established.
+
+ May be used in sections: defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments:
+ <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
+
+ If set, haproxy uses min("timeout connect", "inter") as a connect timeout
+ for check and "timeout check" as an additional read timeout. The "min" is
+ used so that people running with *very* long "timeout connect" (eg. those
+ who needed this due to the queue or tarpit) do not slow down their checks.
+ (Please also note that there is no valid reason to have such long connect
+ timeouts, because "timeout queue" and "timeout tarpit" can always be used to
+ avoid that).
+
+ If "timeout check" is not set, haproxy uses "inter" for the complete check
+ timeout (connect + read), exactly like all versions prior to 1.3.15.
+
+ In most cases a check request is much simpler and faster to handle than a
+ normal request, and since people may want to kick out laggy servers, this
+ timeout should be smaller than "timeout server".
+
+ This parameter is specific to backends, but can be specified once for all in
+ "defaults" sections. This is in fact one of the easiest solutions not to
+ forget about it.
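+
+ As a sketch (names and addresses are illustrative), with the configuration
+ below the connect timeout used for checks is min(5s, 4s) = 4s, and each
+ check then has 2 more seconds to read the server's reply :
+
+ Example :
+ backend bk_app
+ timeout connect 5s
+ timeout check 2s
+ server app1 192.0.2.11:80 check inter 4s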
+
+ See also: "timeout connect", "timeout queue", "timeout server",
+ "timeout tarpit".
+
+
+timeout client <timeout>
+timeout clitimeout <timeout> (deprecated)
+ Set the maximum inactivity time on the client side.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments :
+ <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
+
+ The inactivity timeout applies when the client is expected to acknowledge or
+ send data. In HTTP mode, this timeout is particularly important to consider
+ during the first phase, when the client sends the request, and during the
+ response while it is reading data sent by the server. The value is specified
+ in milliseconds by default, but can be in any other unit if the number is
+ suffixed by the unit, as specified at the top of this document. In TCP mode
+ (and to a lesser extent, in HTTP mode), it is highly recommended that the
+ client timeout remains equal to the server timeout in order to avoid complex
+ situations to debug. It is a good practice to cover one or several TCP packet
+ losses by specifying timeouts that are slightly above multiples of 3 seconds
+ (eg: 4 or 5 seconds). If some long-lived sessions are mixed with short-lived
+ sessions (eg: WebSocket and HTTP), it's worth considering "timeout tunnel",
+ which overrides "timeout client" and "timeout server" for tunnels, as well as
+ "timeout client-fin" for half-closed connections.
+
+ This parameter is specific to frontends, but can be specified once for all in
+ "defaults" sections. This is in fact one of the easiest solutions not to
+ forget about it. An unspecified timeout results in an infinite timeout, which
+ is not recommended. Such a usage is accepted and works but reports a warning
+ during startup because it may result in accumulation of expired sessions in
+ the system if the system's timeouts are not configured either.
+
+ This parameter replaces the old, deprecated "clitimeout". It is recommended
+ to use it to write new configurations. The form "timeout clitimeout" is
+ provided only for backwards compatibility but its use is strongly discouraged.
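+
+ As a minimal sketch, following the recommendation above to keep the client
+ and server timeouts equal :
+
+ Example :
+ defaults
+ mode http
+ timeout client 30s
+ timeout server 30s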
+
+ See also : "clitimeout", "timeout server", "timeout tunnel".
+
+
+timeout client-fin <timeout>
+ Set the inactivity timeout on the client side for half-closed connections.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments :
+ <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
+
+ The inactivity timeout applies when the client is expected to acknowledge or
+ send data while one direction is already shut down. This timeout is different
+ from "timeout client" in that it only applies to connections which are closed
+ in one direction. This is particularly useful to avoid keeping connections in
+ FIN_WAIT state for too long when clients do not disconnect cleanly. This
+ problem is particularly common with long connections such as RDP or WebSocket.
+ Note that this timeout can override "timeout tunnel" when a connection shuts
+ down in one direction.
+
+ This parameter is specific to frontends, but can be specified once for all in
+ "defaults" sections. By default it is not set, so half-closed connections
+ will use the other timeouts (timeout.client or timeout.tunnel).
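+
+ As a sketch (names and addresses are illustrative), a WebSocket-style
+ proxy might combine a long tunnel timeout with a short half-close
+ timeout :
+
+ Example :
+ listen ws
+ bind :8080
+ timeout client 30s
+ timeout client-fin 30s
+ timeout tunnel 1h
+ server ws1 192.0.2.12:8080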
+
+ See also : "timeout client", "timeout server-fin", and "timeout tunnel".
+
+
+timeout connect <timeout>
+timeout contimeout <timeout> (deprecated)
+ Set the maximum time to wait for a connection attempt to a server to succeed.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments :
+ <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
+
+ If the server is located on the same LAN as haproxy, the connection should be
+ immediate (less than a few milliseconds). Anyway, it is a good practice to
+ cover one or several TCP packet losses by specifying timeouts that are
+ slightly above multiples of 3 seconds (eg: 4 or 5 seconds). By default, the
+ connect timeout also presets both queue and tarpit timeouts to the same value
+ if these have not been specified.
+
+ This parameter is specific to backends, but can be specified once for all in
+ "defaults" sections. This is in fact one of the easiest solutions not to
+ forget about it. An unspecified timeout results in an infinite timeout, which
+ is not recommended. Such a usage is accepted and works but reports a warning
+ during startup because it may result in accumulation of failed sessions in
+ the system if the system's timeouts are not configured either.
+
+ This parameter replaces the old, deprecated "contimeout". It is recommended
+ to use it to write new configurations. The form "timeout contimeout" is
+ provided only for backwards compatibility but its use is strongly discouraged.
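+
+ As a minimal sketch, a 5 second connect timeout covers one TCP SYN
+ retransmit (3 seconds) with some margin :
+
+ Example :
+ defaults
+ timeout connect 5s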
+
+ See also: "timeout check", "timeout queue", "timeout server", "contimeout",
+ "timeout tarpit".
+
+
+timeout http-keep-alive <timeout>
+ Set the maximum allowed time to wait for a new HTTP request to appear
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments :
+ <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
+
+ By default, the time to wait for a new request in case of keep-alive is set
+ by "timeout http-request". However this is not always convenient because some
+ people want very short keep-alive timeouts in order to release connections
+ faster, and others prefer to have larger ones but still have short timeouts
+ once the request has started to present itself.
+
+ The "http-keep-alive" timeout covers these needs. It will define how long to
+ wait for a new HTTP request to start coming after a response was sent. Once
+ the first byte of request has been seen, the "http-request" timeout is used
+ to wait for the complete request to come. Note that empty lines prior to a
+ new request do not refresh the timeout and are not counted as a new request.
+
+ There is also another difference between the two timeouts : when a connection
+ expires during timeout http-keep-alive, no error is returned, the connection
+ just closes. If the connection expires in "http-request" while waiting for a
+ request to complete, an HTTP 408 error is returned.
+
+ In general it is optimal to set this value to a few tens to hundreds of
+ milliseconds, to allow users to fetch all objects of a page at once but
+ without waiting for further clicks. Also, if set to a very small value (eg:
+ 1 millisecond) it will probably only accept pipelined requests but not the
+ non-pipelined ones. It may be a nice trade-off for very large sites running
+ with tens to hundreds of thousands of clients.
+
+ If this parameter is not set, the "http-request" timeout applies, and if both
+ are not set, "timeout client" still applies at the lower level. It should be
+ set in the frontend to take effect, unless the frontend is in TCP mode, in
+ which case the HTTP backend's timeout will be used.
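+
+ As a sketch (the frontend name is made up), combining a short keep-alive
+ timeout with a larger request timeout :
+
+ Example :
+ frontend ft_web
+ bind :80
+ timeout http-request 10s
+ timeout http-keep-alive 300ms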
+
+ See also : "timeout http-request", "timeout client".
+
+
+timeout http-request <timeout>
+ Set the maximum allowed time to wait for a complete HTTP request
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments :
+ <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
+
+ In order to offer DoS protection, it may be required to lower the maximum
+ accepted time to receive a complete HTTP request without affecting the client
+ timeout. This helps protect against established connections on which
+ nothing is sent. The client timeout cannot offer a good protection against
+ this abuse because it is an inactivity timeout, which means that if the
+ attacker sends one character every now and then, the timeout will not
+ trigger. With the HTTP request timeout, no matter what speed the client
+ types, the request will be aborted if it does not complete in time. When the
+ timeout expires, an HTTP 408 response is sent to the client to inform it
+ about the problem, and the connection is closed. The logs will report
+ termination codes "cR". Some recent browsers are having problems with this
+ standard, well-documented behaviour, so it might be needed to hide the 408
+ code using "option http-ignore-probes" or "errorfile 408 /dev/null". See
+ more details in the explanations of the "cR" termination code in section 8.5.
+
+ By default, this timeout only applies to the header part of the request,
+ and not to any data. As soon as the empty line is received, this timeout is
+ not used anymore. When combined with "option http-buffer-request", this
+ timeout also applies to the body of the request. It is used again on
+ keep-alive connections to wait for a second request if "timeout
+ http-keep-alive" is not set.
+
+ Generally it is enough to set it to a few seconds, as most clients send the
+ full request immediately upon connection. Add 3 or more seconds to cover TCP
+ retransmits but that's all. Setting it to very low values (eg: 50 ms) will
+ generally work on local networks as long as there are no packet losses. This
+ will prevent people from sending bare HTTP requests using telnet.
+
+ If this parameter is not set, the client timeout still applies between each
+ chunk of the incoming request. It should be set in the frontend to take
+ effect, unless the frontend is in TCP mode, in which case the HTTP backend's
+ timeout will be used.
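+
+ As a sketch (the frontend name is made up), a 10 second cap on receiving
+ the complete request headers, independent of the inactivity timeout :
+
+ Example :
+ frontend ft_web
+ bind :80
+ timeout client 30s
+ timeout http-request 10s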
+
+ See also : "errorfile", "http-ignore-probes", "timeout http-keep-alive", and
+ "timeout client", "option http-buffer-request".
+
+
+timeout queue <timeout>
+ Set the maximum time to wait in the queue for a connection slot to be free
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments :
+ <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
+
+ When a server's maxconn is reached, connections are left pending in a queue
+ which may be server-specific or global to the backend. In order not to wait
+ indefinitely, a timeout is applied to requests pending in the queue. If the
+ timeout is reached, it is considered that the request will almost never be
+ served, so it is dropped and a 503 error is returned to the client.
+
+ The "timeout queue" statement sets the maximum time for a request to be
+ left pending in a queue. If unspecified, the same value as the backend's
+ connection timeout ("timeout connect") is used, for backwards compatibility
+ with older versions with no "timeout queue" parameter.
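+
+ As a sketch (names and addresses are illustrative), requests queued
+ because the server reached its maxconn are dropped with a 503 error
+ after 30 seconds :
+
+ Example :
+ backend bk_app
+ timeout connect 5s
+ timeout queue 30s
+ server app1 192.0.2.11:80 maxconn 100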
+
+ See also : "timeout connect", "contimeout".
+
+
+timeout server <timeout>
+timeout srvtimeout <timeout> (deprecated)
+ Set the maximum inactivity time on the server side.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments :
+ <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
+
+ The inactivity timeout applies when the server is expected to acknowledge or
+ send data. In HTTP mode, this timeout is particularly important to consider
+ during the first phase of the server's response, when it has to send the
+ headers, as it directly represents the server's processing time for the
+ request. To find out what value to put there, it's often good to start with
+ what would be considered as unacceptable response times, then check the logs
+ to observe the response time distribution, and adjust the value accordingly.
+
+ The value is specified in milliseconds by default, but can be in any other
+ unit if the number is suffixed by the unit, as specified at the top of this
+ document. In TCP mode (and to a lesser extent, in HTTP mode), it is highly
+ recommended that the client timeout remains equal to the server timeout in
+ order to avoid complex situations to debug. Whatever the expected server
+ response times, it is a good practice to cover at least one or several TCP
+ packet losses by specifying timeouts that are slightly above multiples of 3
+ seconds (eg: 4 or 5 seconds minimum). If some long-lived sessions are mixed
+ with short-lived sessions (eg: WebSocket and HTTP), it's worth considering
+ "timeout tunnel", which overrides "timeout client" and "timeout server" for
+ tunnels.
+
+ This parameter is specific to backends, but can be specified once for all in
+ "defaults" sections. This is in fact one of the easiest solutions not to
+ forget about it. An unspecified timeout results in an infinite timeout, which
+ is not recommended. Such a usage is accepted and works but reports a warning
+ during startup because it may result in accumulation of expired sessions in
+ the system if the system's timeouts are not configured either.
+
+ This parameter replaces the old, deprecated "srvtimeout". It is recommended
+ to use it to write new configurations. The form "timeout srvtimeout" is
+ provided only for backwards compatibility but its use is strongly discouraged.
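+
+ As a minimal sketch (the backend name is made up), allowing the server
+ 30 seconds of inactivity before the session is aborted :
+
+ Example :
+ backend bk_app
+ timeout server 30s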
+
+ See also : "srvtimeout", "timeout client" and "timeout tunnel".
+
+
+timeout server-fin <timeout>
+ Set the inactivity timeout on the server side for half-closed connections.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments :
+ <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
+
+ The inactivity timeout applies when the server is expected to acknowledge or
+ send data while one direction is already shut down. This timeout is different
+ from "timeout server" in that it only applies to connections which are closed
+ in one direction. This is particularly useful to avoid keeping connections in
+ FIN_WAIT state for too long when a remote server does not disconnect cleanly.
+ This problem is particularly common with long connections such as RDP or
+ WebSocket.
+ Note that this timeout can override "timeout tunnel" when a connection shuts
+ down in one direction. This setting was provided for completeness, but in most
+ situations, it should not be needed.
+
+ This parameter is specific to backends, but can be specified once for all in
+ "defaults" sections. By default it is not set, so half-closed connections
+ will use the other timeouts (timeout.server or timeout.tunnel).
+
+ See also : "timeout client-fin", "timeout server", and "timeout tunnel".
+
+
+timeout tarpit <timeout>
+ Set the duration for which tarpitted connections will be maintained
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | yes
+ Arguments :
+ <timeout> is the tarpit duration specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
+
+ When a connection is tarpitted using "reqtarpit", it is maintained open with
+ no activity for a certain amount of time, then closed. "timeout tarpit"
+ defines how long it will be maintained open.
+
+ The value is specified in milliseconds by default, but can be in any other
+ unit if the number is suffixed by the unit, as specified at the top of this
+ document. If unspecified, the same value as the backend's connection timeout
+ ("timeout connect") is used, for backwards compatibility with older versions
+ with no "timeout tarpit" parameter.
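+
+ As a sketch (the regex is illustrative), requests tarpitted by a
+ "reqtarpit" rule are held open for one minute before being closed :
+
+ Example :
+ frontend ft_web
+ bind :80
+ reqtarpit ^User-Agent:\ BadBot
+ timeout tarpit 1m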
+
+ See also : "timeout connect", "contimeout".
+
+
+timeout tunnel <timeout>
+ Set the maximum inactivity time on the client and server side for tunnels.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments :
+ <timeout> is the timeout value specified in milliseconds by default, but
+ can be in any other unit if the number is suffixed by the unit,
+ as explained at the top of this document.
+
+ The tunnel timeout applies when a bidirectional connection is established
+ between a client and a server, and the connection remains inactive in both
+ directions. This timeout supersedes both the client and server timeouts once
+ the connection becomes a tunnel. In TCP, this timeout is used as soon as no
+ analyser remains attached to either connection (eg: tcp content rules are
+ accepted). In HTTP, this timeout is used when a connection is upgraded (eg:
+ when switching to the WebSocket protocol, or forwarding a CONNECT request
+ to a proxy), or after the first response when no keepalive/close option is
+ specified.
+
+ Since this timeout is usually used in conjunction with long-lived connections,
+ it usually is a good idea to also set "timeout client-fin" to handle the
+ situation where a client suddenly disappears from the net and does not
+ acknowledge a close, or sends a shutdown and does not acknowledge pending
+ data anymore. This can happen in lossy networks where firewalls are present,
+ and is detected by the presence of large amounts of sessions in a FIN_WAIT
+ state.
+
+ The value is specified in milliseconds by default, but can be in any other
+ unit if the number is suffixed by the unit, as specified at the top of this
+ document. Whatever the expected normal idle time, it is a good practice to
+ cover at least one or several TCP packet losses by specifying timeouts that
+ are slightly above multiples of 3 seconds (eg: 4 or 5 seconds minimum).
+
+ This parameter is specific to backends, but can be specified once for all in
+ "defaults" sections. This is in fact one of the easiest solutions not to
+ forget about it.
+
+ Example :
+ defaults http
+ option http-server-close
+ timeout connect 5s
+ timeout client 30s
+ timeout client-fin 30s
+ timeout server 30s
+ timeout tunnel 1h # timeout to use with WebSocket and CONNECT
+
+ See also : "timeout client", "timeout client-fin", "timeout server".
+
+
+transparent (deprecated)
+ Enable client-side transparent proxying
+ May be used in sections : defaults | frontend | listen | backend
+ yes | no | yes | yes
+ Arguments : none
+
+ This keyword was introduced in order to provide layer 7 persistence to layer
+ 3 load balancers. The idea is to use the OS's ability to redirect an incoming
+ connection for a remote address to a local process (here HAProxy), and let
+ this process know what address was initially requested. When this option is
+ used, sessions without cookies will be forwarded to the original destination
+ IP address of the incoming request (which should match that of another
+ equipment), while requests with cookies will still be forwarded to the
+ appropriate server.
+
+ The "transparent" keyword is deprecated, use "option transparent" instead.
+
+ Note that contrary to a common belief, this option does NOT make HAProxy
+ present the client's IP to the server when establishing the connection.
+
+ See also: "option transparent"
+
+unique-id-format <string>
+ Generate a unique ID for each request.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments :
+ <string> is a log-format string.
+
+ This keyword creates an ID for each request using the custom log format. A
+ unique ID is useful to trace a request passing through many components of
+ a complex infrastructure. The newly created ID may also be logged using the
+ %ID tag in the log-format string.
+
+ The format should be composed from elements that are guaranteed to be
+ unique when combined together. For instance, if multiple haproxy instances
+ are involved, it might be important to include the node name. It is often
+ needed to log the incoming connection's source and destination addresses
+ and ports. Note that since multiple requests may be performed over the same
+ connection, including a request counter may help differentiate them.
+ Similarly, a timestamp may protect against a rollover of the counter.
+ Logging the process ID will avoid collisions after a service restart.
+
+ It is recommended to use hexadecimal notation for many fields since it
+ makes them more compact and saves space in logs.
+
+ Example:
+
+ unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
+
+ will generate:
+
+ 7F000001:8296_7F00001E:1F90_4F7B0A69_0003:790A
+
+ See also: "unique-id-header"
+
+unique-id-header <name>
+ Add a unique ID header in the HTTP request.
+ May be used in sections : defaults | frontend | listen | backend
+ yes | yes | yes | no
+ Arguments :
+ <name> is the name of the header.
+
+ Add a unique-id header in the HTTP request sent to the server, using the
+ "unique-id-format". It has no effect if "unique-id-format" is not defined.
+
+ Example:
+
+ unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
+ unique-id-header X-Unique-ID
+
+ will generate:
+
+ X-Unique-ID: 7F000001:8296_7F00001E:1F90_4F7B0A69_0003:790A
+
+ See also: "unique-id-format"
+
+use_backend <backend> [{if | unless} <condition>]
+ Switch to a specific backend if/unless an ACL-based condition is matched.
+ May be used in sections : defaults | frontend | listen | backend
+ no | yes | yes | no
+ Arguments :
+ <backend> is the name of a valid backend or "listen" section, or a
+ "log-format" string resolving to a backend name.
+
+ <condition> is a condition composed of ACLs, as described in section 7. If
+ it is omitted, the rule is unconditionally applied.
+
+ When doing content-switching, connections arrive on a frontend and are then
+ dispatched to various backends depending on a number of conditions. The
+ relation between the conditions and the backends is described with the
+ "use_backend" keyword. While it is normally used with HTTP processing, it can
+ also be used in pure TCP, either without content using stateless ACLs (eg:
+ source address validation) or combined with a "tcp-request" rule to wait for
+ some payload.
+
+ There may be as many "use_backend" rules as desired. All of these rules are
+ evaluated in their declaration order, and the first one which matches will
+ assign the backend.
+
+ In the first form, the backend will be used if the condition is met. In the
+ second form, the backend will be used if the condition is not met. If no
+ condition is valid, the backend defined with "default_backend" will be used.
+ If no default backend is defined, either the servers in the same section are
+ used (in case of a "listen" section) or, in case of a frontend, no server is
+ used and a 503 service unavailable response is returned.
+
+ Note that it is possible to switch from a TCP frontend to an HTTP backend. In
+ this case, either the frontend has already checked that the protocol is HTTP,
+ and backend processing will immediately follow, or the backend will wait for
+ a complete HTTP request to get in. This feature is useful when a frontend
+ must decode several protocols on a unique port, one of them being HTTP.
+
+ When <backend> is a simple name, it is resolved at configuration time, and an
+ error is reported if the specified backend does not exist. If <backend> is
+ a log-format string instead, no check may be done at configuration time, so
+ the backend name is resolved dynamically at run time. If the resulting
+ backend name does not correspond to any valid backend, no other rule is
+ evaluated, and the default_backend directive is applied instead. Note that
+ when using dynamic backend names, it is highly recommended to use a prefix
+ that no other backend uses in order to ensure that an unauthorized backend
+ cannot be forced from the request.
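+
+ For illustration, the static and dynamic forms could be combined as follows
+ (the backend names and conditions below are examples only) :
+
+ frontend fe
+ use_backend static if { path_beg /static /images }
+ use_backend bk_%[req.hdr(host),lower] if { req.hdr(host) -m found }
+ default_backend dynamic
+
+ Here the "bk_" prefix ensures a request cannot force the selection of an
+ unrelated backend.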
+
+ It is worth mentioning that "use_backend" rules with an explicit name are
+ used to detect the association between frontends and backends to compute the
+ backend's "fullconn" setting. This cannot be done for dynamic names.
+
+ See also: "default_backend", "tcp-request", "fullconn", "log-format", and
+ section 7 about ACLs.
+
+
+use-server <server> if <condition>
+use-server <server> unless <condition>
+ Only use a specific server if/unless an ACL-based condition is matched.
+ May be used in sections : defaults | frontend | listen | backend
+ no | no | yes | yes
+ Arguments :
+ <server> is the name of a valid server in the same backend section.
+
+ <condition> is a condition composed of ACLs, as described in section 7.
+
+ By default, connections which arrive to a backend are load-balanced across
+ the available servers according to the configured algorithm, unless a
+ persistence mechanism such as a cookie is used and found in the request.
+
+ Sometimes it is desirable to forward a particular request to a specific
+ server without having to declare a dedicated backend for this server. This
+ can be achieved using the "use-server" rules. These rules are evaluated after
+ the "redirect" rules and before evaluating cookies, and they have precedence
+ on them. There may be as many "use-server" rules as desired. All of these
+ rules are evaluated in their declaration order, and the first one which
+ matches will assign the server.
+
+ If a rule designates a server which is down, and "option persist" is not used
+ and no force-persist rule was validated, it is ignored and evaluation goes on
+ with the next rules until one matches.
+
+ In the first form, the server will be used if the condition is met. In the
+ second form, the server will be used if the condition is not met. If no
+ condition is valid, the processing continues and the server will be assigned
+ according to other persistence mechanisms.
+
+ Note that even if a rule is matched, cookie processing is still performed but
+ does not assign the server. This allows prefixed cookies to have their prefix
+ stripped.
+
+ The "use-server" statement works both in HTTP and TCP mode. This makes it
+ suitable for use with content-based inspection. For instance, a server could
+ be selected in a farm according to the TLS SNI field. And if these servers
+ have their weight set to zero, they will not be used for other traffic.
+
+ Example :
+ # intercept incoming TLS requests based on the SNI field
+ use-server www if { req_ssl_sni -i www.example.com }
+ server www 192.168.0.1:443 weight 0
+ use-server mail if { req_ssl_sni -i mail.example.com }
+ server mail 192.168.0.1:587 weight 0
+ use-server imap if { req_ssl_sni -i imap.example.com }
+ server imap 192.168.0.1:993 weight 0
+ # all the rest is forwarded to this server
+ server default 192.168.0.2:443 check
+
+ See also: "use_backend", section 5 about server and section 7 about ACLs.
+
+
+5. Bind and Server options
+--------------------------
+
+The "bind", "server" and "default-server" keywords support a number of settings
+depending on some build options and on the system HAProxy was built on. These
+settings generally each consist in one word sometimes followed by a value,
+written on the same line as the "bind" or "server" line. All these options are
+described in this section.
+
+
+5.1. Bind options
+-----------------
+
+The "bind" keyword supports a certain number of settings which are all passed
+as arguments on the same line. The order in which those arguments appear does
+not matter, provided that they appear after the bind address. All of these
+parameters are optional. Some of them consist of a single word (booleans),
+while others expect a value after them. In this case, the value must be
+provided immediately after the setting name.
+
+The currently supported settings are the following ones.
+
+accept-proxy
+ Enforces the use of the PROXY protocol over any connection accepted by any of
+ the sockets declared on the same line. Versions 1 and 2 of the PROXY protocol
+ are supported and correctly detected. The PROXY protocol dictates the layer
+ 3/4 addresses of the incoming connection to be used everywhere an address is
+ used, with the only exception of "tcp-request connection" rules which will
+ only see the real connection address. Logs will reflect the addresses
+ indicated in the protocol, unless it is violated, in which case the real
+ address will still be used. This keyword combined with support from external
+ components can be used as an efficient and reliable alternative to the
+ X-Forwarded-For mechanism which is not always reliable and not even always
+ usable. See also "tcp-request connection expect-proxy" for a finer-grained
+ setting of which client is allowed to use the protocol.
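+
+ As an illustration, a frontend receiving the PROXY protocol from an upstream
+ load balancer could be declared like this (the address is an example only) :
+
+ frontend fe_proxied
+ bind 192.168.0.10:80 accept-proxy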
+
+alpn <protocols>
+ This enables the TLS ALPN extension and advertises the specified protocol
+ list as supported on top of ALPN. The protocol list consists of a comma-
+ delimited list of protocol names, for instance: "http/1.1,http/1.0" (without
+ quotes). This requires that the SSL library is built with support for TLS
+ extensions enabled (check with haproxy -vv). The ALPN extension replaces the
+ initial NPN extension.
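+
+ For example, assuming a certificate at /etc/haproxy/site.pem (an example
+ path), HTTP/1.x could be advertised via ALPN with :
+
+ bind :443 ssl crt /etc/haproxy/site.pem alpn http/1.1,http/1.0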
+
+backlog <backlog>
+ Sets the socket's backlog to this value. If unspecified, the frontend's
+ backlog is used instead, which generally defaults to the maxconn value.
+
+ecdhe <named curve>
+ This setting is only available when support for OpenSSL was built in. It sets
+ the named curve (RFC 4492) used to generate ECDH ephemeral keys. By default,
+ the named curve used is prime256v1.
+
+ca-file <cafile>
+ This setting is only available when support for OpenSSL was built in. It
+ designates a PEM file from which to load CA certificates used to verify
+ client's certificate.
+
+ca-ignore-err [all|<errorID>,...]
+ This setting is only available when support for OpenSSL was built in.
+ Sets a comma separated list of errorIDs to ignore during verify at depth > 0.
+ If set to 'all', all errors are ignored. SSL handshake is not aborted if an
+ error is ignored.
+
+ca-sign-file <cafile>
+ This setting is only available when support for OpenSSL was built in. It
+ designates a PEM file containing both the CA certificate and the CA private
+ key used to create and sign server's certificates. This is a mandatory
+ setting when the dynamic generation of certificates is enabled. See
+ 'generate-certificates' for details.
+
+ca-sign-passphrase <passphrase>
+ This setting is only available when support for OpenSSL was built in. It is
+ the CA private key passphrase. This setting is optional and used only when
+ the dynamic generation of certificates is enabled. See
+ 'generate-certificates' for details.
+
+ciphers <ciphers>
+ This setting is only available when support for OpenSSL was built in. It sets
+ the string describing the list of cipher algorithms ("cipher suite") that are
+ negotiated during the SSL/TLS handshake. The format of the string is defined
+ in "man 1 ciphers" from OpenSSL man pages, and can be for instance a string
+ such as "AES:ALL:!aNULL:!eNULL:+RC4:@STRENGTH" (without quotes).
+
+crl-file <crlfile>
+ This setting is only available when support for OpenSSL was built in. It
+ designates a PEM file from which to load certificate revocation list used
+ to verify client's certificate.
+
+crt <cert>
+ This setting is only available when support for OpenSSL was built in. It
+ designates a PEM file containing both the required certificates and any
+ associated private keys. This file can be built by concatenating multiple
+ PEM files into one (e.g. cat cert.pem key.pem > combined.pem). If your CA
+ requires an intermediate certificate, this can also be concatenated into this
+ file.
+
+ If the OpenSSL used supports Diffie-Hellman, parameters present in this file
+ are loaded.
+
+ If a directory name is used instead of a PEM file, then all files found in
+ that directory will be loaded in alphabetic order unless their name ends with
+ '.issuer', '.ocsp' or '.sctl' (reserved extensions). This directive may be
+ specified multiple times in order to load certificates from multiple files or
+ directories. The certificates will be presented to clients who provide a
+ valid TLS Server Name Indication field matching one of their CN or alt
+ subjects. Wildcards are supported, where a wildcard character '*' is used
+ instead of the first hostname component (eg: *.example.org matches
+ www.example.org but not www.sub.example.org).
+
+ If no SNI is provided by the client or if the SSL library does not support
+ TLS extensions, or if the client provides an SNI hostname which does not
+ match any certificate, then the first loaded certificate will be presented.
+ This means that when loading certificates from a directory, it is highly
+ recommended to load the default one first as a file or to ensure that it will
+ always be the first one in the directory.
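+
+ For example, the default certificate may be loaded explicitly before a
+ directory of additional certificates (paths are examples only) :
+
+ bind :443 ssl crt /etc/haproxy/certs/default.pem crt /etc/haproxy/certs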
+
+ Note that the same cert may be loaded multiple times without side effects.
+
+ Some CAs (such as Godaddy) offer a drop down list of server types that do not
+ include HAProxy when obtaining a certificate. If this happens be sure to
+ choose a webserver that the CA believes requires an intermediate CA (for
+ Godaddy, selecting Apache Tomcat will get the correct bundle, but many
+ others, e.g. nginx, result in a wrong bundle that will not work for some
+ clients).
+
+ For each PEM file, haproxy checks for the presence of a file at the same path
+ suffixed by ".ocsp". If such a file is found, support for the TLS Certificate
+ Status Request extension (also known as "OCSP stapling") is automatically
+ enabled. The content of this file is optional. If not empty, it must contain
+ a valid OCSP Response in DER format. In order to be valid an OCSP Response
+ must comply with the following rules: it has to indicate a good status,
+ it has to be a single response for the certificate of the PEM file, and it
+ has to be valid at the moment of addition. If these rules are not respected
+ the OCSP Response is ignored and a warning is emitted. In order to identify
+ which certificate an OCSP Response applies to, the issuer's certificate is
+ necessary. If the issuer's certificate is not found in the PEM file, it will
+ be loaded from a file at the same path as the PEM file suffixed by ".issuer"
+ if it exists, otherwise it will fail with an error.
+
+ For each PEM file, haproxy also checks for the presence of a file at the same
+ path suffixed by ".sctl". If such a file is found, support for the Certificate
+ Transparency (RFC6962) TLS extension is enabled. The file must contain a
+ valid Signed Certificate Timestamp List, as described in the RFC. The file is
+ parsed to check basic syntax, but no signatures are verified.
+
+crt-ignore-err <errors>
+ This setting is only available when support for OpenSSL was built in. Sets a
+ comma separated list of errorIDs to ignore during verify at depth == 0. If
+ set to 'all', all errors are ignored. SSL handshake is not aborted if an error
+ is ignored.
+
+crt-list <file>
+ This setting is only available when support for OpenSSL was built in. It
+ designates a list of PEM files with an optional list of SNI filters per
+ certificate, with the following format for each line :
+
+ <crtfile> [[!]<snifilter> ...]
+
+ Wildcards are supported in the SNI filter. Negative filters are also supported,
+ only useful in combination with a wildcard filter to exclude a particular SNI.
+ The certificates will be presented to clients who provide a valid TLS Server
+ Name Indication field matching one of the SNI filters. If no SNI filter is
+ specified, the CN and alt subjects are used. This directive may be specified
+ multiple times. See the "crt" option for more information. The default
+ certificate is still needed to meet OpenSSL expectations. If it is not used,
+ the 'strict-sni' option may be used.
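+
+ As an illustration, a hypothetical crt-list file could contain :
+
+ default.pem
+ cert1.pem *.example.org !www.example.org
+ cert2.pem www.example.net
+
+ and be referenced with : bind :443 ssl crt-list /etc/haproxy/crt-list.txt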
+
+defer-accept
+ Is an optional keyword which is supported only on certain Linux kernels. It
+ states that a connection will only be accepted once some data arrive on it,
+ or at worst after the first retransmit. This should be used only on protocols
+ for which the client talks first (eg: HTTP). It can slightly improve
+ performance by ensuring that most of the request is already available when
+ the connection is accepted. On the other hand, it will not be able to detect
+ connections which don't talk. It is important to note that this option is
+ broken in all kernels up to 2.6.31, as the connection is never accepted until
+ the client talks. This can cause issues with front firewalls which would see
+ an established connection while the proxy will only see it in SYN_RECV. This
+ option is only supported on TCPv4/TCPv6 sockets and ignored by other ones.
+
+force-sslv3
+ This option enforces use of SSLv3 only on SSL connections instantiated from
+ this listener. SSLv3 is generally less expensive than the TLS counterparts
+ for high connection rates. This option is also available on global statement
+ "ssl-default-bind-options". See also "no-tlsv*" and "no-sslv3".
+
+force-tlsv10
+ This option enforces use of TLSv1.0 only on SSL connections instantiated from
+ this listener. This option is also available on global statement
+ "ssl-default-bind-options". See also "no-tlsv*" and "no-sslv3".
+
+force-tlsv11
+ This option enforces use of TLSv1.1 only on SSL connections instantiated from
+ this listener. This option is also available on global statement
+ "ssl-default-bind-options". See also "no-tlsv*", and "no-sslv3".
+
+force-tlsv12
+ This option enforces use of TLSv1.2 only on SSL connections instantiated from
+ this listener. This option is also available on global statement
+ "ssl-default-bind-options". See also "no-tlsv*", and "no-sslv3".
+
+generate-certificates
+ This setting is only available when support for OpenSSL was built in. It
+ enables the dynamic SSL certificates generation. A CA certificate and its
+ private key are necessary (see 'ca-sign-file'). When HAProxy is configured as
+ a transparent forward proxy, SSL requests generate errors because of a common
+ name mismatch on the certificate presented to the client. With this option
+ enabled, HAProxy will try to forge a certificate using the SNI hostname
+ indicated by the client. This is done only if no certificate matches the SNI
+ hostname (see 'crt-list'). If an error occurs, the default certificate is
+ used, unless the 'strict-sni' option is set.
+ It can also be used when HAProxy is configured as a reverse proxy to ease the
+ deployment of an architecture with many backends.
+
+ Creating an SSL certificate is an expensive operation, so an LRU cache is
+ used to store forged certificates (see 'tune.ssl.ssl-ctx-cache-size'). It
+ increases HAProxy's memory footprint to reduce latency when the same
+ certificate is used many times.
+
+gid <gid>
+ Sets the group of the UNIX sockets to the designated system gid. It can also
+ be set by default in the global section's "unix-bind" statement. Note that
+ some platforms simply ignore this. This setting is equivalent to the "group"
+ setting except that the group ID is used instead of its name. This setting is
+ ignored by non UNIX sockets.
+
+group <group>
+ Sets the group of the UNIX sockets to the designated system group. It can
+ also be set by default in the global section's "unix-bind" statement. Note
+ that some platforms simply ignore this. This setting is equivalent to the
+ "gid" setting except that the group name is used instead of its gid. This
+ setting is ignored by non UNIX sockets.
+
+id <id>
+ Fixes the socket ID. By default, socket IDs are automatically assigned, but
+ sometimes it is more convenient to fix them to ease monitoring. This value
+ must be strictly positive and unique within the listener/frontend. This
+ option can only be used when defining only a single socket.
+
+interface <interface>
+ Restricts the socket to a specific interface. When specified, only packets
+ received from that particular interface are processed by the socket. This is
+ currently only supported on Linux. The interface must be a primary system
+ interface, not an aliased interface. It is also possible to bind multiple
+ frontends to the same address if they are bound to different interfaces. Note
+ that binding to a network interface requires root privileges. This parameter
+ is only compatible with TCPv4/TCPv6 sockets.
+
+level <level>
+ This setting is used with the stats sockets only to restrict the nature of
+ the commands that can be issued on the socket. It is ignored by other
+ sockets. <level> can be one of :
+ - "user" is the least privileged level ; only non-sensitive stats can be
+ read, and no change is allowed. It would make sense on systems where it
+ is not easy to restrict access to the socket.
+ - "operator" is the default level and fits most common uses. All data can
+ be read, and only non-sensitive changes are permitted (eg: clear max
+ counters).
+ - "admin" should be used with care, as everything is permitted (eg: clear
+ all counters).
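+
+ For example, two stats sockets with different privilege levels could be
+ declared in the global section (paths are examples only) :
+
+ global
+ stats socket /run/haproxy/admin.sock mode 660 level admin
+ stats socket /run/haproxy/user.sock mode 666 level user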
+
+maxconn <maxconn>
+ Limits the sockets to this number of concurrent connections. Extraneous
+ connections will remain in the system's backlog until a connection is
+ released. If unspecified, the limit will be the same as the frontend's
+ maxconn. Note that in case of port ranges or multiple addresses, the same
+ value will be applied to each socket. This setting enables different
+ limitations on expensive sockets, for instance SSL entries which may easily
+ eat all memory.
+
+mode <mode>
+ Sets the octal mode used to define access permissions on the UNIX socket. It
+ can also be set by default in the global section's "unix-bind" statement.
+ Note that some platforms simply ignore this. This setting is ignored by non
+ UNIX sockets.
+
+mss <maxseg>
+ Sets the TCP Maximum Segment Size (MSS) value to be advertised on incoming
+ connections. This can be used to force a lower MSS for certain specific
+ ports, for instance for connections passing through a VPN. Note that this
+ relies on a kernel feature which is theoretically supported under Linux but
+ was buggy in all versions prior to 2.6.28. It may or may not work on other
+ operating systems. It may also not change the advertised value but change the
+ effective size of outgoing segments. The commonly advertised value for TCPv4
+ over Ethernet networks is 1460 = 1500(MTU) - 40(IP+TCP). If this value is
+ positive, it will be used as the advertised MSS. If it is negative, it will
+ indicate by how much to reduce the incoming connection's advertised MSS for
+ outgoing segments. This parameter is only compatible with TCP v4/v6 sockets.
+
+name <name>
+ Sets an optional name for these sockets, which will be reported on the stats
+ page.
+
+namespace <name>
+ On Linux, it is possible to specify which network namespace a socket will
+ belong to. This directive makes it possible to explicitly bind a listener to
+ a namespace different from the default one. Please refer to your operating
+ system's documentation to find more details about network namespaces.
+
+nice <nice>
+ Sets the 'niceness' of connections initiated from the socket. Value must be
+ in the range -1024..1024 inclusive, and defaults to zero. Positive values
+ mean that such connections are more friendly to others and easily offer
+ their place in the scheduler. On the opposite, negative values mean that
+ connections want to run with a higher priority than others. The difference
+ only happens under high loads when the system is close to saturation.
+ Negative values are appropriate for low-latency or administration services,
+ and high values are generally recommended for CPU intensive tasks such as SSL
+ processing or bulk transfers which are less sensitive to latency. For example,
+ it may make sense to use a positive value for an SMTP socket and a negative
+ one for an RDP socket.
+
+no-sslv3
+ This setting is only available when support for OpenSSL was built in. It
+ disables support for SSLv3 on any sockets instantiated from the listener when
+ SSL is supported. Note that SSLv2 is forced disabled in the code and cannot
+ be enabled using any configuration option. This option is also available on
+ global statement "ssl-default-bind-options". See also "force-tls*",
+ and "force-sslv3".
+
+no-tls-tickets
+ This setting is only available when support for OpenSSL was built in. It
+ disables the stateless session resumption (RFC 5077 TLS Ticket
+ extension) and force to use stateful session resumption. Stateless
+ session resumption is more expensive in CPU usage. This option is also
+ available on global statement "ssl-default-bind-options".
+
+no-tlsv10
+ This setting is only available when support for OpenSSL was built in. It
+ disables support for TLSv1.0 on any sockets instantiated from the listener
+ when SSL is supported. Note that SSLv2 is forced disabled in the code and
+ cannot be enabled using any configuration option. This option is also
+ available on global statement "ssl-default-bind-options". See also
+ "force-tlsv*", and "force-sslv3".
+
+no-tlsv11
+ This setting is only available when support for OpenSSL was built in. It
+ disables support for TLSv1.1 on any sockets instantiated from the listener
+ when SSL is supported. Note that SSLv2 is forced disabled in the code and
+ cannot be enabled using any configuration option. This option is also
+ available on global statement "ssl-default-bind-options". See also
+ "force-tlsv*", and "force-sslv3".
+
+no-tlsv12
+ This setting is only available when support for OpenSSL was built in. It
+ disables support for TLSv1.2 on any sockets instantiated from the listener
+ when SSL is supported. Note that SSLv2 is forced disabled in the code and
+ cannot be enabled using any configuration option. This option is also
+ available on global statement "ssl-default-bind-options". See also
+ "force-tlsv*", and "force-sslv3".
+
+npn <protocols>
+ This enables the NPN TLS extension and advertises the specified protocol list
+ as supported on top of NPN. The protocol list consists of a comma-delimited
+ list of protocol names, for instance: "http/1.1,http/1.0" (without quotes).
+ This requires that the SSL library is built with support for TLS extensions
+ enabled (check with haproxy -vv). Note that the NPN extension has been
+ replaced with the ALPN extension (see the "alpn" keyword).
+
+process [ all | odd | even | <number 1-64>[-<number 1-64>] ]
+ This restricts the list of processes on which this listener is allowed to
+ run. It does not enforce any process but eliminates those which do not match.
+ If the frontend uses a "bind-process" setting, the intersection between the
+ two is applied. If in the end the listener is not allowed to run on any
+ remaining process, a warning is emitted, and the listener will either run on
+ the first process of the listener if a single process was specified, or on
+ all of its processes if multiple processes were specified. For the unlikely
+ case where several ranges are needed, this directive may be repeated. The
+ main purpose of this directive is to be used with the stats sockets and have
+ one different socket per process. The second purpose is to have multiple bind
+ lines sharing the same IP:port but not the same process in a listener, so
+ that the system can distribute the incoming connections into multiple queues
+ and allow a smoother inter-process load balancing. Currently Linux 3.9 and
+ above is known for supporting this. See also "bind-process" and "nbproc".
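+
+ As an illustration, assuming "nbproc 2" in the global section and a kernel
+ supporting distribution of incoming connections, one socket per process
+ could be bound with :
+
+ frontend fe
+ bind :80 process 1
+ bind :80 process 2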
+
+ssl
+ This setting is only available when support for OpenSSL was built in. It
+ enables SSL deciphering on connections instantiated from this listener. A
+ certificate is necessary (see "crt" above). All contents in the buffers will
+ appear in clear text, so that ACLs and HTTP processing will only have access
+ to deciphered contents.
+
+strict-sni
+ This setting is only available when support for OpenSSL was built in. The
+ SSL/TLS negotiation is allowed only if the client provides an SNI which
+ matches a certificate. The default certificate is not used.
+ See the "crt" option for more information.
+
+tcp-ut <delay>
+ Sets the TCP User Timeout for all incoming connections instantiated from this
+ listening socket. This option is available on Linux since version 2.6.37. It
+ allows haproxy to configure a timeout for sockets which contain data not
+ receiving an acknowledgement for the configured delay. This is especially
+ useful on long-lived connections experiencing long idle periods such as
+ remote terminals or database connection pools, where the client and server
+ timeouts must remain high to allow a long period of idle, but where it is
+ important to detect that the client has disappeared in order to release all
+ resources associated with its connection (and the server's session). The
+ argument is a delay expressed in milliseconds by default. This only works
+ for regular TCP connections, and is ignored for other protocols.
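+
+ For example, to drop clients whose data remains unacknowledged for more than
+ 30 seconds (the port is an example only) :
+
+ bind :3306 tcp-ut 30s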
+
+tfo
+ Is an optional keyword which is supported only on Linux kernels >= 3.7. It
+ enables TCP Fast Open on the listening socket, which means that clients which
+ support this feature will be able to send a request and receive a response
+ during the 3-way handshake starting from the second connection, saving one
+ round-trip after the first connection. This only makes sense with protocols
+ that use high connection rates and where each round trip matters. This can
+ possibly cause issues with many firewalls which do not accept data on SYN
+ packets, so this option should only be enabled once well tested. This option
+ is only supported on TCPv4/TCPv6 sockets and ignored by other ones. You may
+ need to build HAProxy with USE_TFO=1 if your libc doesn't define
+ TCP_FASTOPEN.
+
+tls-ticket-keys <keyfile>
+ Sets the TLS ticket keys file to load the keys from. The keys need to be 48
+ bytes long, encoded with base64 (ex. openssl rand -base64 48). The number of
+ keys is specified by the TLS_TICKETS_NO build option (default 3) and at least
+ as many keys need to be present in the file. The last TLS_TICKETS_NO keys
+ will be used for decryption and the penultimate one for encryption. This
+ enables easy key rotation by just appending a new key to the file and
+ reloading the process.
+ Keys must be periodically rotated (ex. every 12h) or Perfect Forward Secrecy
+ is compromised. It is also a good idea to keep the keys off any permanent
+ storage such as hard drives (hint: use tmpfs and don't swap those files).
+ Lifetime hint can be changed using tune.ssl.timeout.
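+
+ As an illustration, with a key file containing at least three base64-encoded
+ 48-byte keys, one per line (paths are examples only) :
+
+ bind :443 ssl crt /etc/haproxy/site.pem tls-ticket-keys /etc/haproxy/keys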
+
+transparent
+ Is an optional keyword which is supported only on certain Linux kernels. It
+ indicates that the addresses will be bound even if they do not belong to the
+ local machine, and that packets targeting any of these addresses will be
+ intercepted just as if the addresses were locally configured. This normally
+ requires that IP forwarding is enabled. Caution! Do not use this with the
+ default address '*', as it would redirect any traffic for the specified port.
+ This keyword is available only when HAProxy is built with USE_LINUX_TPROXY=1.
+ This parameter is only compatible with TCPv4 and TCPv6 sockets, depending on
+ kernel version. Some distribution kernels include backports of the feature,
+ so check for support with your vendor.
+
+v4v6
+ Is an optional keyword which is supported only on most recent systems
+ including Linux kernels >= 2.4.21. It is used to bind a socket to both IPv4
+ and IPv6 when it uses the default address. Doing so is sometimes necessary
+ on systems which bind to IPv6 only by default. It has no effect on non-IPv6
+ sockets, and is overridden by the "v6only" option.
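+
+ For example, to accept both IPv4 and IPv6 connections on the default
+ address :
+
+ bind :::80 v4v6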
+
+v6only
+ Is an optional keyword which is supported only on most recent systems
+ including Linux kernels >= 2.4.21. It is used to bind a socket to IPv6 only
+ when it uses the default address. Doing so is sometimes preferred to doing it
+ system-wide as it is per-listener. It has no effect on non-IPv6 sockets and
+ has precedence over the "v4v6" option.
+
+uid <uid>
+ Sets the owner of the UNIX sockets to the designated system uid. It can also
+ be set by default in the global section's "unix-bind" statement. Note that
+ some platforms simply ignore this. This setting is equivalent to the "user"
+ setting except that the user numeric ID is used instead of its name. This
+ setting is ignored by non UNIX sockets.
+
+user <user>
+ Sets the owner of the UNIX sockets to the designated system user. It can also
+ be set by default in the global section's "unix-bind" statement. Note that
+ some platforms simply ignore this. This setting is equivalent to the "uid"
+ setting except that the user name is used instead of its uid. This setting is
+ ignored by non UNIX sockets.
+
+verify [none|optional|required]
+ This setting is only available when support for OpenSSL was built in. If set
+ to 'none', client certificate is not requested. This is the default. In other
+ cases, a client certificate is requested. If the client does not provide a
+ certificate after the request and if 'verify' is set to 'required', then the
+ handshake is aborted, while it would have succeeded if set to 'optional'. The
+ certificate provided by the client is always verified using CAs from
+ 'ca-file' and optional CRLs from 'crl-file'. On verify failure the handshake
+ is aborted, regardless of the 'verify' option, unless the error code exactly
+ matches one of those listed with 'ca-ignore-err' or 'crt-ignore-err'.
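+
+ For example, to require a client certificate signed by a given CA (paths are
+ examples only) :
+
+ bind :443 ssl crt /etc/haproxy/site.pem ca-file /etc/haproxy/ca.pem verify required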
+
+5.2. Server and default-server options
+--------------------------------------
+
+The "server" and "default-server" keywords support a certain number of settings
+which are all passed as arguments on the server line. The order in which those
+arguments appear does not matter, and they are all optional. Some of those
+settings are single words (booleans) while others expect one or several values
+after them. In this case, the values must immediately follow the setting name.
+Except default-server, all those settings must be specified after the server's
+address if they are used:
+
+ server <name> <address>[:port] [settings ...]
+ default-server [settings ...]
+
+The currently supported settings are the following ones.
+
+addr <ipv4|ipv6>
+ Using the "addr" parameter, it becomes possible to use a different IP address
+ to send health-checks. On some servers, it may be desirable to dedicate an IP
+ address to a specific component able to perform complex tests which are more
+ suitable to health-checks than the application. This parameter is ignored if
+ the "check" parameter is not set. See also the "port" parameter.
+
+ Supported in default-server: No
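+
+ As an illustration, health checks could target a dedicated monitoring
+ address (addresses are examples only) :
+
+ server srv1 192.168.0.11:80 check addr 192.168.0.111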
+
+agent-check
+ Enable an auxiliary agent check which is run independently of a regular
+ health check. An agent health check is performed by making a TCP connection
+ to the port set by the "agent-port" parameter and reading an ASCII string.
+ The string is made of a series of words delimited by spaces, tabs or commas
+ in any order, optionally terminated by '\r' and/or '\n', each consisting of :
+
+ - An ASCII representation of a positive integer percentage, e.g. "75%".
+ Values in this format will set the weight proportional to the initial
+ weight of a server as configured when haproxy starts. Note that a zero
+ weight is reported on the stats page as "DRAIN" since it has the same
+ effect on the server (it's removed from the LB farm).
+
+ - The word "ready". This will turn the server's administrative state to the
+ READY mode, thus cancelling any DRAIN or MAINT state.
+
+ - The word "drain". This will turn the server's administrative state to the
+ DRAIN mode, thus it will not accept any new connections other than those
+ that are accepted via persistence.
+
+ - The word "maint". This will turn the server's administrative state to the
+ MAINT mode, thus it will not accept any new connections at all, and health
+ checks will be stopped.
+
+ - The words "down", "failed", or "stopped", optionally followed by a
+ description string after a sharp ('#'). All of these mark the server's
+ operating state as DOWN, but since the word itself is reported on the stats
+ page, the difference allows an administrator to know if the situation was
+ expected or not : the service may intentionally be stopped, may appear up
+ but fail some validity tests, or may be seen as down (eg: missing process,
+ or port not responding).
+
+ - The word "up" sets back the server's operating state as UP if health checks
+ also report that the service is accessible.
+
+ Parameters which are not advertised by the agent are not changed. For
+ example, an agent might be designed to monitor CPU usage and only report a
+ relative weight and never interact with the operating status. Similarly, an
+ agent could be designed as an end-user interface with 3 radio buttons
+ allowing an administrator to change only the administrative state. However,
+ it is important to consider that only the agent may revert its own actions,
+ so if a server is set to DRAIN mode or to DOWN state using the agent, the
+ agent must implement the other equivalent actions to bring the service into
+ operations again.
+
+ Failure to connect to the agent is not considered an error as connectivity
+ is tested by the regular health check which is enabled by the "check"
+ parameter. Warning though, it is not a good idea to stop an agent after it
+ reports "down", since only an agent reporting "up" will be able to turn the
+ server up again. Note that the CLI on the Unix stats socket is also able to
+ force an agent's result in order to work around a bogus agent if needed.
+
+ Requires the "agent-port" parameter to be set. See also the "agent-inter"
+ parameter.
+
+ Supported in default-server: No
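+
+ As an illustration, an agent replying with the following hypothetical line
+ would set the server's operating state to UP, its administrative state to
+ READY, and its weight to 75% of the configured one :
+
+        up ready 75%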
+
+agent-inter <delay>
+ The "agent-inter" parameter sets the interval between two agent checks
+ to <delay> milliseconds. If left unspecified, the delay defaults to 2000 ms.
+
+ Just as with every other time-based parameter, it may be entered in any
+ other explicit unit among { us, ms, s, m, h, d }. The "agent-inter"
+ parameter also serves as a timeout for agent checks if "timeout check" is
+ not set. In order to reduce "resonance" effects when multiple servers are
+ hosted on the same hardware, the agent and health checks of all servers
+ are started with a small time offset between them. It is also possible to
+ add some random noise in the agent and health checks interval using the
+ global "spread-checks" keyword. This makes sense for instance when a lot
+ of backends use the same servers.
+
+ See also the "agent-check" and "agent-port" parameters.
+
+ Supported in default-server: Yes
+
+agent-port <port>
+ The "agent-port" parameter sets the TCP port used for agent checks.
+
+ See also the "agent-check" and "agent-inter" parameters.
+
+ Supported in default-server: Yes
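+
+ A hypothetical example (port number is illustrative) querying the agent on
+ port 12345 every 5 seconds, in addition to the regular health check :
+
+        server srv1 192.168.1.1:80 check agent-check agent-port 12345 agent-inter 5s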
+
+backup
+ When "backup" is present on a server line, the server is only used in load
+ balancing when all other non-backup servers are unavailable. Requests coming
+ with a persistence cookie referencing the server will always be served
+ though. By default, only the first operational backup server is used, unless
+ the "allbackups" option is set in the backend. See also the "allbackups"
+ option.
+
+ Supported in default-server: No
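+
+ For example (hypothetical setup), "srv3" below only receives new traffic
+ once both "srv1" and "srv2" are unavailable :
+
+        server srv1 192.168.1.1:80 check
+        server srv2 192.168.1.2:80 check
+        server srv3 192.168.1.3:80 check backup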
+
+ca-file <cafile>
+ This setting is only available when support for OpenSSL was built in. It
+ designates a PEM file from which to load CA certificates used to verify
+ server's certificate.
+
+ Supported in default-server: No
+
+check
+ This option enables health checks on the server. By default, a server is
+ always considered available. If "check" is set, the server is available when
+ accepting periodic TCP connections, to ensure that it is really able to serve
+ requests. The default address and port to send the tests to are those of the
+ server, and the default source is the same as the one defined in the
+ backend. It is possible to change the address using the "addr" parameter, the
+ port using the "port" parameter, the source address using the "source"
+ parameter, and the interval and timers using the "inter", "rise" and "fall"
+ parameters. The request method is defined in the backend using the "httpchk",
+ "smtpchk", "mysql-check", "pgsql-check" and "ssl-hello-chk" options. Please
+ refer to those options and parameters for more information.
+
+ Supported in default-server: No
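+
+ Example (illustrative values) : checks run every 5 seconds, the server is
+ marked down after 3 consecutive failures and up again after 2 successes :
+
+        server web1 192.168.1.1:80 check inter 5s rise 2 fall 3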
+
+check-send-proxy
+ This option forces emission of a PROXY protocol line with outgoing health
+ checks, regardless of whether the server uses send-proxy or not for the
+ normal traffic. By default, the PROXY protocol is enabled for health checks
+ if it is already enabled for normal traffic and if no "port" nor "addr"
+ directive is present. However, if such a directive is present, the
+ "check-send-proxy" option needs to be used to force the use of the
+ protocol. See also the "send-proxy" option for more information.
+
+ Supported in default-server: No
+
+check-ssl
+ This option forces encryption of all health checks over SSL, regardless of
+ whether the server uses SSL or not for the normal traffic. This is generally
+ used when an explicit "port" or "addr" directive is specified and SSL health
+ checks are not inherited. It is important to understand that this option
+ inserts an SSL transport layer below the checks, so that a simple TCP connect
+ check becomes an SSL connect, which replaces the old ssl-hello-chk. The most
+ common use is to send HTTPS checks by combining "httpchk" with SSL checks.
+ All SSL settings are common to health checks and traffic (eg: ciphers).
+ See the "ssl" option for more information.
+
+ Supported in default-server: No
+
+ciphers <ciphers>
+ This option sets the string describing the list of cipher algorithms that
+ is negotiated during the SSL/TLS handshake with the server. The format of the
+ string is defined in "man 1 ciphers". When SSL is used to communicate with
+ servers on the local network, it is common to see a weaker set of algorithms
+ than what is used over the internet. Doing so reduces CPU usage on both the
+ server and haproxy while still keeping it compatible with deployed software.
+ Some algorithms such as RC4-SHA1 are reasonably cheap. If no security at all
+ is needed and only connectivity is required, using DES can be appropriate.
+
+ Supported in default-server: No
+
+cookie <value>
+ The "cookie" parameter sets the cookie value assigned to the server to
+ <value>. This value will be checked in incoming requests, and the first
+ operational server possessing the same value will be selected. In return, in
+ cookie insertion or rewrite modes, this value will be assigned to the cookie
+ sent to the client. There is nothing wrong in having several servers sharing
+ the same cookie value, and it is in fact somewhat common between normal and
+ backup servers. See also the "cookie" keyword in backend section.
+
+ Supported in default-server: No
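+
+ A hypothetical example combining the backend "cookie" keyword with
+ per-server cookie values :
+
+        backend app
+            cookie SRVID insert indirect nocache
+            server app1 192.168.1.1:80 cookie a1 check
+            server app2 192.168.1.2:80 cookie a2 check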
+
+crl-file <crlfile>
+ This setting is only available when support for OpenSSL was built in. It
+ designates a PEM file from which to load certificate revocation list used
+ to verify server's certificate.
+
+ Supported in default-server: No
+
+crt <cert>
+ This setting is only available when support for OpenSSL was built in.
+ It designates a PEM file from which to load both a certificate and the
+ associated private key. This file can be built by concatenating both PEM
+ files into one. This certificate will be sent if the server sends a client
+ certificate request.
+
+ Supported in default-server: No
+
+disabled
+ The "disabled" keyword starts the server in the "disabled" state. That means
+ that it is marked down in maintenance mode, and no connection other than the
+ ones allowed by persist mode will reach it. It is very well suited to set up
+ new servers, because normal traffic will never reach them, while it is still
+ possible to test the service by making use of the force-persist mechanism.
+
+ Supported in default-server: No
+
+error-limit <count>
+ If health observing is enabled, the "error-limit" parameter specifies the
+ number of consecutive errors that triggers the event selected by the
+ "on-error" option. By default it is set to 10 consecutive errors.
+
+ Supported in default-server: Yes
+
+ See also the "check", "observe" and "on-error" parameters.
+
+fall <count>
+ The "fall" parameter states that a server will be considered as dead after
+ <count> consecutive unsuccessful health checks. This value defaults to 3 if
+ unspecified. See also the "check", "inter" and "rise" parameters.
+
+ Supported in default-server: Yes
+
+force-sslv3
+ This option enforces use of SSLv3 only when SSL is used to communicate with
+ the server. SSLv3 is generally less expensive than the TLS counterparts for
+ high connection rates. This option is also available on global statement
+ "ssl-default-server-options". See also "no-tlsv*", "no-sslv3".
+
+ Supported in default-server: No
+
+force-tlsv10
+ This option enforces use of TLSv1.0 only when SSL is used to communicate with
+ the server. This option is also available on global statement
+ "ssl-default-server-options". See also "no-tlsv*", "no-sslv3".
+
+ Supported in default-server: No
+
+force-tlsv11
+ This option enforces use of TLSv1.1 only when SSL is used to communicate with
+ the server. This option is also available on global statement
+ "ssl-default-server-options". See also "no-tlsv*", "no-sslv3".
+
+ Supported in default-server: No
+
+force-tlsv12
+ This option enforces use of TLSv1.2 only when SSL is used to communicate with
+ the server. This option is also available on global statement
+ "ssl-default-server-options". See also "no-tlsv*", "no-sslv3".
+
+ Supported in default-server: No
+
+id <value>
+ Set a persistent ID for the server. This ID must be positive and unique for
+ the proxy. An unused ID will automatically be assigned if unset. The first
+ assigned value will be 1. This ID is currently only returned in statistics.
+
+ Supported in default-server: No
+
+inter <delay>
+fastinter <delay>
+downinter <delay>
+ The "inter" parameter sets the interval between two consecutive health checks
+ to <delay> milliseconds. If left unspecified, the delay defaults to 2000 ms.
+ It is also possible to use "fastinter" and "downinter" to optimize delays
+ between checks depending on the server state :
+
+ Server state | Interval used
+ ----------------------------------------+----------------------------------
+ UP 100% (non-transitional) | "inter"
+ ----------------------------------------+----------------------------------
+ Transitionally UP (going down "fall"), | "fastinter" if set,
+ Transitionally DOWN (going up "rise"), | "inter" otherwise.
+ or yet unchecked. |
+ ----------------------------------------+----------------------------------
+ DOWN 100% (non-transitional) | "downinter" if set,
+ | "inter" otherwise.
+ ----------------------------------------+----------------------------------
+
+ Just as with every other time-based parameter, they can be entered in any
+ other explicit unit among { us, ms, s, m, h, d }. The "inter" parameter also
+ serves as a timeout for health checks sent to servers if "timeout check" is
+ not set. In order to reduce "resonance" effects when multiple servers are
+ hosted on the same hardware, the agent and health checks of all servers
+ are started with a small time offset between them. It is also possible to
+ add some random noise in the agent and health checks interval using the
+ global "spread-checks" keyword. This makes sense for instance when a lot
+ of backends use the same servers.
+
+ Supported in default-server: Yes
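+
+ Example (illustrative values) : checks run every 2 seconds while the server
+ is fully UP, every 500 ms during state transitions, and every 5 seconds once
+ the server is fully DOWN :
+
+        server srv1 192.168.1.1:80 check inter 2s fastinter 500 downinter 5s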
+
+maxconn <maxconn>
+ The "maxconn" parameter specifies the maximal number of concurrent
+ connections that will be sent to this server. If the number of incoming
+ concurrent requests goes higher than this value, they will be queued, waiting
+ for a connection to be released. This parameter is very important as it can
+ save fragile servers from going down under extreme loads. If a "minconn"
+ parameter is specified, the limit becomes dynamic. The default value is "0"
+ which means unlimited. See also the "minconn" and "maxqueue" parameters, and
+ the backend's "fullconn" keyword.
+
+ Supported in default-server: Yes
+
+maxqueue <maxqueue>
+ The "maxqueue" parameter specifies the maximal number of connections which
+ will wait in the queue for this server. If this limit is reached, next
+ requests will be redispatched to other servers instead of indefinitely
+ waiting to be served. This will break persistence but may allow people to
+ quickly re-log in when the server they try to connect to is dying. The
+ default value is "0" which means the queue is unlimited. See also the
+ "maxconn" and "minconn" parameters.
+
+ Supported in default-server: Yes
+
+minconn <minconn>
+ When the "minconn" parameter is set, the maxconn limit becomes a dynamic
+ limit following the backend's load. The server will always accept at least
+ <minconn> connections, never more than <maxconn>, and the limit will be on
+ the ramp between both values when the backend has less than <fullconn>
+ concurrent connections. This makes it possible to limit the load on the
+ server during normal loads, but push it further for important loads without
+ overloading the server during exceptional loads. See also the "maxconn"
+ and "maxqueue" parameters, as well as the "fullconn" backend keyword.
+
+ Supported in default-server: Yes
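+
+ For example (illustrative numbers), with the following settings the
+ effective per-server limit ramps between 40 and 100 as the backend
+ approaches 1000 concurrent connections :
+
+        backend dynamic
+            fullconn 1000
+            server srv1 192.168.1.1:80 minconn 40 maxconn 100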
+
+namespace <name>
+ On Linux, it is possible to specify which network namespace a socket will
+ belong to. This directive makes it possible to explicitly bind a server to
+ a namespace different from the default one. Please refer to your operating
+ system's documentation to find more details about network namespaces.
+
+no-ssl-reuse
+ This option disables SSL session reuse when SSL is used to communicate with
+ the server. It will force the server to perform a full handshake for every
+ new connection. It's probably only useful for benchmarking, troubleshooting,
+ and for paranoid users.
+
+ Supported in default-server: No
+
+no-sslv3
+ This option disables support for SSLv3 when SSL is used to communicate with
+ the server. Note that SSLv2 is disabled in the code and cannot be enabled
+ using any configuration option. See also "force-sslv3", "force-tlsv*".
+
+ Supported in default-server: No
+
+no-tls-tickets
+ This setting is only available when support for OpenSSL was built in. It
+ disables the stateless session resumption (RFC 5077 TLS Ticket
+ extension) and forces the use of stateful session resumption. Stateless
+ session resumption is more expensive in CPU usage for servers. This option
+ is also available on global statement "ssl-default-server-options".
+
+ Supported in default-server: No
+
+no-tlsv10
+ This option disables support for TLSv1.0 when SSL is used to communicate with
+ the server. Note that SSLv2 is disabled in the code and cannot be enabled
+ using any configuration option. TLSv1 is more expensive than SSLv3 so it
+ often makes sense to disable it when communicating with local servers. This
+ option is also available on global statement "ssl-default-server-options".
+ See also "force-sslv3", "force-tlsv*".
+
+ Supported in default-server: No
+
+no-tlsv11
+ This option disables support for TLSv1.1 when SSL is used to communicate with
+ the server. Note that SSLv2 is disabled in the code and cannot be enabled
+ using any configuration option. TLSv1 is more expensive than SSLv3 so it
+ often makes sense to disable it when communicating with local servers. This
+ option is also available on global statement "ssl-default-server-options".
+ See also "force-sslv3", "force-tlsv*".
+
+ Supported in default-server: No
+
+no-tlsv12
+ This option disables support for TLSv1.2 when SSL is used to communicate with
+ the server. Note that SSLv2 is disabled in the code and cannot be enabled
+ using any configuration option. TLSv1 is more expensive than SSLv3 so it
+ often makes sense to disable it when communicating with local servers. This
+ option is also available on global statement "ssl-default-server-options".
+ See also "force-sslv3", "force-tlsv*".
+
+ Supported in default-server: No
+
+non-stick
+ Never add connections allocated to this server to a stick-table.
+ This may be used in conjunction with backup to ensure that
+ stick-table persistence is disabled for backup servers.
+
+ Supported in default-server: No
+
+observe <mode>
+ This option enables health adjusting based on observing communication with
+ the server. By default this functionality is disabled and enabling it also
+ requires to enable health checks. There are two supported modes: "layer4" and
+ "layer7". In layer4 mode, only successful/unsuccessful tcp connections are
+ significant. In layer7, which is only allowed for HTTP proxies, responses
+ received from the server are verified, such as valid/invalid HTTP status
+ codes, unparsable headers, a timeout, etc. Valid status codes include 100 to
+ 499, 501 and 505.
+
+ Supported in default-server: No
+
+ See also the "check", "on-error" and "error-limit" parameters.
+
+on-error <mode>
+ Select what should happen when enough consecutive errors are detected.
+ Currently, four modes are available:
+ - fastinter: force fastinter
+ - fail-check: simulate a failed check, also forces fastinter (default)
+ - sudden-death: simulate a pre-fatal failed health check, one more failed
+ check will mark a server down, forces fastinter
+ - mark-down: mark the server immediately down and force fastinter
+
+ Supported in default-server: Yes
+
+ See also the "check", "observe" and "error-limit" parameters.
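+
+ A hypothetical example combining "observe", "error-limit" and "on-error" on
+ one server line : after 10 consecutive layer7 errors, a failed check is
+ simulated and "fastinter" is forced :
+
+        server srv1 192.168.1.1:80 check observe layer7 error-limit 10 on-error fail-check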
+
+on-marked-down <action>
+ Modify what occurs when a server is marked down.
+ Currently one action is available:
+ - shutdown-sessions: Shutdown peer sessions. When this setting is enabled,
+ all connections to the server are immediately terminated when the server
+ goes down. It might be used if the health check detects more complex cases
+ than a simple connection status, and long timeouts would cause the service
+ to remain unresponsive for too long a time. For instance, a health check
+ might detect that a database is stuck and that there's no chance to reuse
+ existing connections anymore. Connections killed this way are logged with
+ a 'D' termination code (for "Down").
+
+ Actions are disabled by default.
+
+ Supported in default-server: Yes
+
+on-marked-up <action>
+ Modify what occurs when a server is marked up.
+ Currently one action is available:
+ - shutdown-backup-sessions: Shutdown sessions on all backup servers. This is
+ done only if the server is not in backup state and if it is not disabled
+ (it must have an effective weight > 0). This can be used sometimes to force
+ an active server to take all the traffic back after recovery when dealing
+ with long sessions (eg: LDAP, SQL, ...). Doing this can cause more trouble
+ than it tries to solve (eg: incomplete transactions), so use this feature
+ with extreme care. Sessions killed because a server comes up are logged
+ with an 'U' termination code (for "Up").
+
+ Actions are disabled by default.
+
+ Supported in default-server: Yes
+
+port <port>
+ Using the "port" parameter, it becomes possible to use a different port to
+ send health-checks. On some servers, it may be desirable to dedicate a port
+ to a specific component able to perform complex tests which are more suitable
+ to health-checks than the application. It is common to run a simple script in
+ inetd for instance. This parameter is ignored if the "check" parameter is not
+ set. See also the "addr" parameter.
+
+ Supported in default-server: Yes
+
+redir <prefix>
+ The "redir" parameter enables the redirection mode for all GET and HEAD
+ requests addressing this server. This means that instead of having HAProxy
+ forward the request to the server, it will send an "HTTP 302" response with
+ the "Location" header composed of this prefix immediately followed by the
+ requested URI beginning at the leading '/' of the path component. That means
+ that no trailing slash should be used after <prefix>. All invalid requests
+ will be rejected, and all non-GET or HEAD requests will be normally served by
+ the server. Note that since the response is completely forged, no header
+ mangling nor cookie insertion is possible in the response. However, cookies in
+ requests are still analysed, making this solution completely usable to direct
+ users to a remote location in case of local disaster. Main use consists in
+ increasing bandwidth for static servers by having the clients directly
+ connect to them. Note: never use a relative location here, it would cause a
+ loop between the client and HAProxy!
+
+ Example : server srv1 192.168.1.1:80 redir http://image1.mydomain.com check
+
+ Supported in default-server: No
+
+rise <count>
+ The "rise" parameter states that a server will be considered as operational
+ after <count> consecutive successful health checks. This value defaults to 2
+ if unspecified. See also the "check", "inter" and "fall" parameters.
+
+ Supported in default-server: Yes
+
+resolve-prefer <family>
+ When DNS resolution is enabled for a server and multiple IP addresses from
+ different families are returned, HAProxy will prefer using an IP address
+ from the family mentioned in the "resolve-prefer" parameter.
+ Available families: "ipv4" and "ipv6"
+
+ Default value: ipv6
+
+ Supported in default-server: Yes
+
+ Example: server s1 app1.domain.com:80 resolvers mydns resolve-prefer ipv6
+
+resolvers <id>
+ Points to an existing "resolvers" section to resolve current server's
+ hostname.
+ In order to be operational, DNS resolution requires that health checks are
+ enabled on the server; it is the health checks that trigger the DNS
+ resolution. You must specify a 'resolvers' parameter on each server line
+ where DNS resolution is required.
+
+ Supported in default-server: No
+
+ Example: server s1 app1.domain.com:80 check resolvers mydns
+
+ See also chapter 5.3
+
+send-proxy
+ The "send-proxy" parameter enforces use of the PROXY protocol over any
+ connection established to this server. The PROXY protocol informs the other
+ end about the layer 3/4 addresses of the incoming connection, so that it can
+ know the client's address or the public address it accessed to, whatever the
+ upper layer protocol. For connections accepted by an "accept-proxy" listener,
+ the advertised address will be used. Only TCPv4 and TCPv6 address families
+ are supported. Other families, such as Unix sockets, will report an UNKNOWN
+ family. Servers using this option can be fully chained to another instance of
+ haproxy listening with an "accept-proxy" setting. This setting must not be
+ used if the server isn't aware of the protocol. When health checks are sent
+ to the server, the PROXY protocol is automatically used when this option is
+ set, unless there is an explicit "port" or "addr" directive, in which case an
+ explicit "check-send-proxy" directive would also be needed to use the PROXY
+ protocol. See also the "accept-proxy" option of the "bind" keyword.
+
+ Supported in default-server: No
+
+send-proxy-v2
+ The "send-proxy-v2" parameter enforces use of the PROXY protocol version 2
+ over any connection established to this server. The PROXY protocol informs
+ the other end about the layer 3/4 addresses of the incoming connection, so
+ that it can know the client's address or the public address it accessed to,
+ whatever the upper layer protocol. This setting must not be used if the
+ server isn't aware of this version of the protocol. See also the "send-proxy"
+ option of the "bind" keyword.
+
+ Supported in default-server: No
+
+send-proxy-v2-ssl
+ The "send-proxy-v2-ssl" parameter enforces use of the PROXY protocol version
+ 2 over any connection established to this server. The PROXY protocol informs
+ the other end about the layer 3/4 addresses of the incoming connection, so
+ that it can know the client's address or the public address it accessed to,
+ whatever the upper layer protocol. In addition, the SSL information extension
+ of the PROXY protocol is added to the PROXY protocol header. This setting
+ must not be used if the server isn't aware of this version of the protocol.
+ See also the "send-proxy-v2" option of the "bind" keyword.
+
+ Supported in default-server: No
+
+send-proxy-v2-ssl-cn
+ The "send-proxy-v2-ssl-cn" parameter enforces use of the PROXY protocol
+ version 2 over any connection established to this server. The PROXY protocol
+ informs the other end about the layer 3/4 addresses of the incoming
+ connection, so that it can know the client's address or the public address
+ it accessed to, whatever the upper layer protocol. In addition, the SSL
+ information extension of the PROXY protocol, along with the Common Name from
+ the subject of the client certificate (if any), is added to the PROXY
+ protocol header. This setting must not be used if the server isn't aware of
+ this version of the protocol. See also the "send-proxy-v2" option of the
+ "bind" keyword.
+
+ Supported in default-server: No
+
+slowstart <start_time_in_ms>
+ The "slowstart" parameter for a server accepts a value in milliseconds which
+ indicates after how long a server which has just come back up will run at
+ full speed. Just as with every other time-based parameter, it can be entered
+ in any other explicit unit among { us, ms, s, m, h, d }. The speed grows
+ linearly from 0 to 100% during this time. The limitation applies to two
+ parameters :
+
+ - maxconn: the number of connections accepted by the server will grow from 1
+ to 100% of the usual dynamic limit defined by (minconn,maxconn,fullconn).
+
+ - weight: when the backend uses a dynamic weighted algorithm, the weight
+ grows linearly from 1 to 100%. In this case, the weight is updated at every
+ health-check. For this reason, it is important that the "inter" parameter
+ is smaller than the "slowstart", in order to maximize the number of steps.
+
+ The slowstart never applies when haproxy starts, otherwise it would cause
+ trouble to running servers. It only applies when a server has been previously
+ seen as failed.
+
+ Supported in default-server: Yes
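+
+ Example (illustrative values) : a recovering server ramps its weight and
+ connection limit back up over 30 seconds; with "inter 2s" the weight is
+ updated in roughly 15 steps :
+
+        server srv1 192.168.1.1:80 check inter 2s slowstart 30s weight 50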
+
+sni <expression>
+ The "sni" parameter evaluates the sample fetch expression, converts it to a
+ string and uses the result as the host name sent in the SNI TLS extension to
+ the server. A typical use case is to send the SNI received from the client in
+ a bridged HTTPS scenario, using the "ssl_fc_sni" sample fetch for the
+ expression, though alternatives such as req.hdr(host) can also make sense.
+
+ Supported in default-server: No
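+
+ A hypothetical example forwarding the SNI received from the client in a
+ bridged HTTPS setup :
+
+        server web1 192.168.1.1:443 ssl sni ssl_fc_sni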
+
+source <addr>[:<pl>[-<ph>]] [usesrc { <addr2>[:<port2>] | client | clientip } ]
+source <addr>[:<port>] [usesrc { <addr2>[:<port2>] | hdr_ip(<hdr>[,<occ>]) } ]
+source <addr>[:<pl>[-<ph>]] [interface <name>] ...
+ The "source" parameter sets the source address which will be used when
+ connecting to the server. It follows the exact same parameters and principle
+ as the backend "source" keyword, except that it only applies to the server
+ referencing it. Please consult the "source" keyword for details.
+
+ Additionally, the "source" statement on a server line allows one to specify a
+ source port range by indicating the lower and higher bounds delimited by a
+ dash ('-'). Some operating systems might require a valid IP address when a
+ source port range is specified. It is permitted to have the same IP/range for
+ several servers. Doing so makes it possible to bypass the maximum of 64k
+ total concurrent connections. The limit will then reach 64k connections per
+ server.
+
+ Supported in default-server: No
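+
+ For example (addresses are hypothetical), both servers share the same source
+ address and a dedicated port range, allowing up to 64k concurrent
+ connections per server :
+
+        server srv1 192.168.1.1:80 source 192.168.2.1:1024-65535
+        server srv2 192.168.1.2:80 source 192.168.2.1:1024-65535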
+
+ssl
+ This option enables SSL ciphering on outgoing connections to the server. It
+ is critical to verify server certificates using "verify" when using SSL to
+ connect to servers, otherwise the communication is prone to trivial
+ man-in-the-middle attacks rendering SSL useless. When this option is used,
+ health
+ checks are automatically sent in SSL too unless there is a "port" or an
+ "addr" directive indicating the check should be sent to a different location.
+ See the "check-ssl" option to force SSL health checks.
+
+ Supported in default-server: No
+
+tcp-ut <delay>
+ Sets the TCP User Timeout for all outgoing connections to this server. This
+ option is available on Linux since version 2.6.37. It allows haproxy to
+ configure a timeout for sockets which contain data not receiving an
+ acknowledgement for the configured delay. This is especially useful on
+ long-lived connections experiencing long idle periods such as remote
+ terminals or database connection pools, where the client and server timeouts
+ must remain high to allow a long period of idle, but where it is important to
+ detect that the server has disappeared in order to release all resources
+ associated with its connection (and the client's session). One typical use
+ case is also to force dead server connections to die when health checks are
+ too slow or during a soft reload since health checks are then disabled. The
+ argument is a delay expressed in milliseconds by default. This only works for
+ regular TCP connections, and is ignored for other protocols.
+
+track [<proxy>/]<server>
+ This option enables the ability to set the current state of the server by tracking
+ another one. It is possible to track a server which itself tracks another
+ server, provided that at the end of the chain, a server has health checks
+ enabled. If <proxy> is omitted the current one is used. If disable-on-404 is
+ used, it has to be enabled on both proxies.
+
+ Supported in default-server: No
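+
+ For example (hypothetical names), "srv1_mirror" follows the state of "srv1"
+ in the "web" proxy without running its own checks; "srv1" must have health
+ checks enabled :
+
+        server srv1_mirror 192.168.1.1:8080 track web/srv1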
+
+verify [none|required]
+ This setting is only available when support for OpenSSL was built in. If set
+ to 'none', the server certificate is not verified. Otherwise, the certificate
+ provided by the server is verified using CAs from 'ca-file' and optional
+ CRLs from 'crl-file'. If 'ssl_server_verify' is not specified in the global
+ section, 'required' is the default. On verify failure the handshake is
+ aborted. It is critically important to verify server certificates when
+ using SSL to connect to servers, otherwise the communication is prone to
+ trivial man-in-the-middle attacks rendering SSL totally useless.
+
+ Supported in default-server: No
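+
+ A minimal illustrative example of verified SSL to a server (the CA file
+ path is hypothetical) :
+
+        server secure1 192.168.1.1:443 ssl verify required ca-file /etc/haproxy/ca.pem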
+
+verifyhost <hostname>
+ This setting is only available when support for OpenSSL was built in, and
+ only takes effect if 'verify required' is also specified. When set, the
+ hostnames in the subject and subjectAlternateNames of the certificate
+ provided by the server are checked. If none of the hostnames in the
+ certificate match the specified hostname, the handshake is aborted. The
+ hostnames in the server-provided certificate may include wildcards.
+
+ Supported in default-server: No
+
+weight <weight>
+ The "weight" parameter is used to adjust the server's weight relative to
+ other servers. All servers will receive a load proportional to their weight
+ relative to the sum of all weights, so the higher the weight, the higher the
+ load. The default weight is 1, and the maximal value is 256. A value of 0
+ means the server will not participate in load-balancing but will still accept
+ persistent connections. If this parameter is used to distribute the load
+ according to each server's capacity, it is recommended to start with values
+ which can both grow and shrink, for instance between 10 and 100 to leave
+ enough room above and below for later adjustments.
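+
+ Example (illustrative values): send roughly twice as much load to the
+ larger server while keeping room for later adjustments :
+
+    backend bk_app
+        server small 10.0.0.11:80 weight 25
+        server large 10.0.0.12:80 weight 50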
+
+ Supported in default-server: Yes
+
+
+5.3. Server IP address resolution using DNS
+-------------------------------------------
+
+HAProxy allows using a host name on the server line to retrieve its IP address
+using name servers. By default, HAProxy resolves the name when parsing the
+configuration file, at startup, and caches the result for the process's
+lifetime. This is not sufficient in some cases, such as in Amazon, where a
+server's IP address can change after a reboot or an ELB Virtual IP can change
+based on the current workload.
+This chapter describes how HAProxy can be configured to process server name
+resolution at run time.
+Whether run time server name resolution has been enabled or not, HAProxy will
+still perform the first resolution when parsing the configuration.
+
+Bear in mind that DNS resolution is triggered by health checks. This makes
+health checks mandatory to allow DNS resolution.
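+
+As a sketch (names are illustrative), a server using run time resolution
+references a resolvers section from its server line and must have health
+checks enabled :
+
+    resolvers mydns
+        nameserver dns1 10.0.0.1:53
+
+    backend bk_app
+        server app1 app1.example.com:80 check resolvers mydns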
+
+
+5.3.1. Global overview
+----------------------
+
+As seen in the introduction, name resolution in HAProxy occurs at two
+different stages of the process's life:
+
+ 1. when starting up, HAProxy parses the server line definition and matches a
+    host name. It uses libc functions to resolve the host name. This
+    resolution relies on the /etc/resolv.conf file.
+
+ 2. at run time, when HAProxy gets prepared to run a health check on a server,
+    it verifies whether the current name resolution is still considered valid.
+    If not, it triggers a new resolution, in parallel with the health check.
+
+A few other events can trigger a name resolution at run time:
+ - when a server's health check ends in a connection timeout: this may be
+   because the server has a new IP address, so a name resolution is triggered
+   to learn this new IP.
+
+A few important things to notice:
+ - all the name servers are queried at the same time. HAProxy processes the
+   first valid response.
+
+ - a resolution is considered invalid (NX, timeout, refused) when all the
+   name servers return an error.
+
+
+5.3.2. The resolvers section
+----------------------------
+
+This section describes the name servers and options related to name resolution
+in HAProxy.
+There can be as many resolvers sections as needed. Each section can contain
+many name servers.
+
+When multiple name servers are configured in a resolvers section, HAProxy uses
+the first valid response. In case of invalid responses, only the last one is
+considered. The purpose is to give a slow server the chance to deliver a valid
+answer after a fast but faulty or outdated server.
+
+When each server returns a different error type, only the last error is used
+by HAProxy to decide what type of behavior to apply.
+
+Two types of behavior can be applied:
+  1. stop DNS resolution
+  2. replay the DNS query with a new query type
+     In this case, the following types are tried in this exact order:
+       1. ANY query type
+       2. the query type corresponding to the address family pointed to by
+          the server's resolve-prefer parameter
+       3. the remaining family type
+
+HAProxy stops DNS resolution when the following errors occur:
+ - invalid DNS response packet
+ - wrong name in the query section of the response
+ - NX domain
+ - Query refused by server
+ - CNAME not pointing to an IP address
+
+HAProxy tries a new query type when the following errors occur:
+ - no Answer records in the response
+ - DNS response truncated
+ - Error in DNS response
+ - No expected DNS records found in the response
+ - name server timeout
+
+For example, with 2 name servers configured in a resolvers section:
+ - first response is valid and is applied directly, second response is ignored
+ - first response is invalid and second one is valid, then second response is
+ applied;
+ - first response is a NX domain and second one a truncated response, then
+ HAProxy replays the query with a new type;
+ - first response is truncated and second one is a NX Domain, then HAProxy
+ stops resolution.
+
+
+resolvers <resolvers id>
+ Creates a new name server list labelled <resolvers id>.
+
+A resolvers section accepts the following parameters:
+
+nameserver <id> <ip>:<port>
+ DNS server description:
+ <id> : label of the server, should be unique
+ <ip> : IP address of the server
+ <port> : port where the DNS service actually runs
+
+hold <status> <period>
+ Defines the <period> during which the last name resolution should be kept,
+ based on the last resolution <status>
+   <status> : last name resolution status. Only "valid" is accepted for now.
+   <period> : interval between two successive name resolutions when the last
+              answer was in <status>. It follows the HAProxy time format and
+              is in milliseconds by default.
+
+ Default value is 10s for "valid".
+
+ Note: since name resolution is triggered by the health checks, a new
+       resolution is triggered after <period> modulo the <inter> parameter of
+       the health check.
+
+resolve_retries <nb>
+ Defines the number <nb> of queries to send to resolve a server name before
+ giving up.
+ Default value: 3
+
+ A retry occurs on name server timeout, or when the full sequence of DNS query
+ type failover is over and we need to start over from the default ANY query
+ type.
+
+timeout <event> <time>
+ Defines timeouts related to name resolution
+   <event> : the event to which the <time> timeout applies.
+             Available events:
+             - retry: time between two DNS queries when no response has
+                      been received.
+                      Default value: 1s
+   <time>  : time related to the event. It follows the HAProxy time format.
+             <time> is expressed in milliseconds by default.
+
+Example of a resolvers section (with default values):
+
+ resolvers mydns
+ nameserver dns1 10.0.0.1:53
+ nameserver dns2 10.0.0.2:53
+ resolve_retries 3
+ timeout retry 1s
+ hold valid 10s
+
+
+6. HTTP header manipulation
+---------------------------
+
+In HTTP mode, it is possible to rewrite, add or delete some of the request and
+response headers based on regular expressions. It is also possible to block a
+request or a response if a particular header matches a regular expression,
+which is enough to stop most elementary protocol attacks, and to protect
+against information leak from the internal network.
+
+If HAProxy encounters an "Informational Response" (status code 1xx), it is able
+to process all rsp* rules which can allow, deny, rewrite or delete a header,
+but it will refuse to add a header to any such messages as this is not
+HTTP-compliant. The reason for still processing headers in such responses is to
+stop and/or fix any possible information leak which may happen, for instance
+because a downstream device would unconditionally add a header, or if
+a server name appears there. When such messages are seen, normal processing
+still occurs on the next non-informational messages.
+
+This section covers common usage of the following keywords, described in detail
+in section 4.2 :
+
+ - reqadd <string>
+ - reqallow <search>
+ - reqiallow <search>
+ - reqdel <search>
+ - reqidel <search>
+ - reqdeny <search>
+ - reqideny <search>
+ - reqpass <search>
+ - reqipass <search>
+ - reqrep <search> <replace>
+ - reqirep <search> <replace>
+ - reqtarpit <search>
+ - reqitarpit <search>
+ - rspadd <string>
+ - rspdel <search>
+ - rspidel <search>
+ - rspdeny <search>
+ - rspideny <search>
+ - rsprep <search> <replace>
+ - rspirep <search> <replace>
+
+With all these keywords, the same conventions are used. The <search> parameter
+is a POSIX extended regular expression (regex) which supports grouping through
+parenthesis (without the backslash). Spaces and other delimiters must be
+prefixed with a backslash ('\') to avoid confusion with a field delimiter.
+Other characters may be prefixed with a backslash to change their meaning :
+
+ \t for a tab
+ \r for a carriage return (CR)
+ \n for a new line (LF)
+ \ to mark a space and differentiate it from a delimiter
+ \# to mark a sharp and differentiate it from a comment
+ \\ to use a backslash in a regex
+ \\\\ to use a backslash in the text (*2 for regex, *2 for haproxy)
+ \xXX to write the ASCII hex code XX as in the C language
+
+The <replace> parameter contains the string to be used to replace the largest
+portion of text matching the regex. It can make use of the special characters
+above, and can reference a substring which is delimited by parenthesis in the
+regex, by writing a backslash ('\') immediately followed by one digit from 0 to
+9 indicating the group position (0 designating the entire line). This practice
+is very common to users of the "sed" program.
+
+The <string> parameter represents the string which will systematically be added
+after the last header line. It can also use special character sequences above.
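+
+A few of these keywords in use (header names and values are illustrative) :
+
+    frontend fe_http
+        # delete the Server header from all responses, ignoring case
+        rspidel ^Server:
+        # add a header to every response
+        rspadd X-Frame-Options:\ DENY
+        # deny requests carrying a suspicious header
+        reqideny ^X-Debug: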
+
+Notes related to these keywords :
+---------------------------------
+ - these keywords are not always convenient to allow/deny based on header
+ contents. It is strongly recommended to use ACLs with the "block" keyword
+ instead, resulting in far more flexible and manageable rules.
+
+ - lines are always considered as a whole. It is not possible to reference
+ a header name only or a value only. This is important because of the way
+ headers are written (notably the number of spaces after the colon).
+
+ - the first line is always considered as a header, which makes it possible to
+   rewrite or filter HTTP request URIs or response codes, but in turn makes
+   it harder to distinguish between headers and the request line. The regex
+   prefix ^[^\ \t]*[\ \t] matches any HTTP method followed by a space, and
+   the prefix ^[^ \t:]*: matches any header name followed by a colon.
+
+ - for performance reasons, the number of characters added to a request or to
+ a response is limited at build time to values between 1 and 4 kB. This
+ should normally be far more than enough for most usages. If it is too short
+ on occasional usages, it is possible to gain some space by removing some
+ useless headers before adding new ones.
+
+ - keywords beginning with "reqi" and "rspi" are the same as their counterpart
+ without the 'i' letter except that they ignore case when matching patterns.
+
+ - when a request passes through a frontend then a backend, all req* rules
+ from the frontend will be evaluated, then all req* rules from the backend
+ will be evaluated. The reverse path is applied to responses.
+
+ - req* statements are applied after "block" statements, so that "block" is
+ always the first one, but before "use_backend" in order to permit rewriting
+ before switching.
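+
+For instance, the request-line prefix described above may be used to rewrite
+the URI without touching regular headers (paths are illustrative) :
+
+    # strip a leading /static prefix from the request URI
+    reqrep ^([^\ \t]*\ )/static(/.*) \1\2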
+
+
+7. Using ACLs and fetching samples
+----------------------------------
+
+HAProxy is capable of extracting data from request or response streams, from
+client or server information, from tables, environmental information, etc.
+The action of extracting such data is called fetching a sample. Once retrieved,
+these samples may be used for various purposes such as a key to a stick-table,
+but most common usages consist in matching them against predefined constant
+data called patterns.
+
+
+7.1. ACL basics
+---------------
+
+The use of Access Control Lists (ACL) provides a flexible solution to perform
+content switching and generally to take decisions based on content extracted
+from the request, the response or any environmental status. The principle is
+simple :
+
+ - extract a data sample from a stream, table or the environment
+ - optionally apply some format conversion to the extracted sample
+ - apply one or multiple pattern matching methods on this sample
+ - perform actions only when a pattern matches the sample
+
+The actions generally consist in blocking a request, selecting a backend, or
+adding a header.
+
+In order to define a test, the "acl" keyword is used. The syntax is :
+
+ acl <aclname> <criterion> [flags] [operator] [<value>] ...
+
+This creates a new ACL <aclname> or completes an existing one with new tests.
+Those tests apply to the portion of request/response specified in <criterion>
+and may be adjusted with optional flags [flags]. Some criteria also support
+an operator which may be specified before the set of values. Optionally some
+conversion operators may be applied to the sample, and they will be specified
+as a comma-delimited list of keywords just after the first keyword. The values
+are of the type supported by the criterion, and are separated by spaces.
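+
+For example, the following ACL applies the "lower" converter to the sample
+before an exact string match (the name and value are illustrative) :
+
+    acl host_www hdr(host),lower -m str www.example.com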
+
+ACL names must be formed from upper and lower case letters, digits, '-' (dash),
+'_' (underscore), '.' (dot) and ':' (colon). ACL names are case-sensitive,
+which means that "my_acl" and "My_Acl" are two different ACLs.
+
+There is no enforced limit to the number of ACLs. The unused ones do not affect
+performance, they just consume a small amount of memory.
+
+The criterion generally is the name of a sample fetch method, or one of its ACL
+specific declinations. The default test method is implied by the output type of
+this sample fetch method. The ACL declinations can describe alternate matching
+methods of a same sample fetch method. The sample fetch methods are the only
+ones supporting a conversion.
+
+Sample fetch methods return data which can be of the following types :
+ - boolean
+ - integer (signed or unsigned)
+ - IPv4 or IPv6 address
+ - string
+ - data block
+
+Converters transform any of these data into any of these. For example, some
+converters might convert a string to a lower-case string while other ones
+would turn a string to an IPv4 address, or apply a netmask to an IP address.
+The resulting sample is of the type of the last converter applied to the list,
+which defaults to the type of the sample fetch method.
+
+Each sample or converter returns data of a specific type, specified with its
+keyword in this documentation. When an ACL is declared using a standard sample
+fetch method, certain types automatically imply a default matching method,
+which is summarized in the table below :
+
+ +---------------------+-----------------+
+ | Sample or converter | Default |
+ | output type | matching method |
+ +---------------------+-----------------+
+ | boolean | bool |
+ +---------------------+-----------------+
+ | integer | int |
+ +---------------------+-----------------+
+ | ip | ip |
+ +---------------------+-----------------+
+ | string | str |
+ +---------------------+-----------------+
+ | binary | none, use "-m" |
+ +---------------------+-----------------+
+
+Note that in order to match binary samples, it is mandatory to specify a
+matching method, see below.
+
+The ACL engine can match these types against patterns of the following types :
+ - boolean
+ - integer or integer range
+ - IP address / network
+ - string (exact, substring, suffix, prefix, subdir, domain)
+ - regular expression
+ - hex block
+
+The following ACL flags are currently supported :
+
+ -i : ignore case during matching of all subsequent patterns.
+ -f : load patterns from a file.
+ -m : use a specific pattern matching method
+ -n : forbid the DNS resolutions
+ -M : load the file pointed by -f like a map file.
+ -u : force the unique id of the ACL
+ -- : force end of flags. Useful when a string looks like one of the flags.
+
+The "-f" flag is followed by the name of a file from which all lines will be
+read as individual values. It is even possible to pass multiple "-f" arguments
+if the patterns are to be loaded from multiple files. Empty lines as well as
+lines beginning with a sharp ('#') will be ignored. All leading spaces and tabs
+will be stripped. If it is absolutely necessary to insert a valid pattern
+beginning with a sharp, just prefix it with a space so that it is not taken for
+a comment. Depending on the data type and match method, haproxy may load the
+lines into a binary tree, allowing very fast lookups. This is true for IPv4 and
+exact string matching. In this case, duplicates will automatically be removed.
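+
+For example, assuming a file /etc/haproxy/bad-ips.lst (an illustrative path)
+containing one address or network per line :
+
+    # /etc/haproxy/bad-ips.lst contains entries such as:
+    #   192.0.2.50
+    #   198.51.100.0/24
+    acl blocked_src src -f /etc/haproxy/bad-ips.lst
+    block if blocked_src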
+
+The "-M" flag allows an ACL to use a map file. If this flag is set, the file is
+parsed as a two-column file. The first column contains the patterns used by the
+ACL, and the second column contains the samples. The samples can be used later
+by a map. This can be useful in some rare cases where an ACL would just be used
+to check for the existence of a pattern in a map before a mapping is applied.
+
+The "-u" flag forces the unique id of the ACL. This unique id is used with the
+socket interface to identify an ACL and dynamically change its values. Note
+that a file is always identified by its name even if an id is set.
+
+Also, note that the "-i" flag applies to subsequent entries and not to entries
+loaded from files preceding it. For instance :
+
+ acl valid-ua hdr(user-agent) -f exact-ua.lst -i -f generic-ua.lst test
+
+In this example, each line of "exact-ua.lst" will be exactly matched against
+the "user-agent" header of the request. Then each line of "generic-ua.lst" will
+be case-insensitively matched. Then the word "test" will be insensitively
+matched as well.
+
+The "-m" flag is used to select a specific pattern matching method on the input
+sample. All ACL-specific criteria imply a pattern matching method and generally
+do not need this flag. However, this flag is useful with generic sample fetch
+methods to describe how they're going to be matched against the patterns. This
+is required for sample fetches which return a data type for which there is no
+obvious matching method (eg: string or binary). When "-m" is specified and
+followed by a pattern matching method name, this method is used instead of the
+default one for the criterion. This makes it possible to match contents in ways
+that were not initially planned, or with sample fetch methods which return a
+string. The matching method also affects the way the patterns are parsed.
+
+The "-n" flag forbids DNS resolution. It is used when loading files of IP
+addresses. By default, if the parser cannot parse an entry as an IP address,
+it assumes that the string may be a domain name and tries a DNS resolution.
+The "-n" flag disables this resolution and is useful for detecting malformed
+IP lists. Note that if the DNS server is not reachable, parsing the haproxy
+configuration may last many minutes waiting for the timeout, with no error
+message displayed in the meantime; the "-n" flag avoids this as well. Note
+also that at run time, this resolution is disabled for dynamic ACL
+modifications.
+
+There are some restrictions however. Not all methods can be used with all
+sample fetch methods. Also, if "-m" is used in conjunction with "-f", it must
+be placed first. The pattern matching method must be one of the following :
+
+ - "found" : only check if the requested sample could be found in the stream,
+ but do not compare it against any pattern. It is recommended not
+ to pass any pattern to avoid confusion. This matching method is
+ particularly useful to detect presence of certain contents such
+ as headers, cookies, etc... even if they are empty and without
+ comparing them to anything nor counting them.
+
+ - "bool" : check the value as a boolean. It can only be applied to fetches
+ which return a boolean or integer value, and takes no pattern.
+ Value zero or false does not match, all other values do match.
+
+ - "int" : match the value as an integer. It can be used with integer and
+ boolean samples. Boolean false is integer 0, true is integer 1.
+
+ - "ip" : match the value as an IPv4 or IPv6 address. It is compatible
+ with IP address samples only, so it is implied and never needed.
+
+ - "bin" : match the contents against a hexadecimal string representing a
+ binary sequence. This may be used with binary or string samples.
+
+ - "len" : match the sample's length as an integer. This may be used with
+ binary or string samples.
+
+ - "str" : exact match : match the contents against a string. This may be
+ used with binary or string samples.
+
+ - "sub" : substring match : check that the contents contain at least one of
+ the provided string patterns. This may be used with binary or
+ string samples.
+
+ - "reg" : regex match : match the contents against a list of regular
+ expressions. This may be used with binary or string samples.
+
+ - "beg" : prefix match : check that the contents begin like the provided
+ string patterns. This may be used with binary or string samples.
+
+ - "end" : suffix match : check that the contents end like the provided
+ string patterns. This may be used with binary or string samples.
+
+ - "dir" : subdir match : check that a slash-delimited portion of the
+ contents exactly matches one of the provided string patterns.
+ This may be used with binary or string samples.
+
+ - "dom" : domain match : check that a dot-delimited portion of the contents
+ exactly matches one of the provided string patterns. This may be
+ used with binary or string samples.
+
+For example, to quickly detect the presence of cookie "JSESSIONID" in an HTTP
+request, it is possible to do :
+
+ acl jsess_present cook(JSESSIONID) -m found
+
+In order to apply a regular expression on the 500 first bytes of data in the
+buffer, one would use the following acl :
+
+ acl script_tag payload(0,500) -m reg -i <script>
+
+On systems where the regex library is much slower when using "-i", it is
+possible to convert the sample to lowercase before matching, like this :
+
+ acl script_tag payload(0,500),lower -m reg <script>
+
+All ACL-specific criteria imply a default matching method. Most often, these
+criteria are composed by concatenating the name of the original sample fetch
+method and the matching method. For example, "hdr_beg" applies the "beg" match
+to samples retrieved using the "hdr" fetch method. Since all ACL-specific
+criteria rely on a sample fetch method, it is always possible instead to use
+the original sample fetch method and the explicit matching method using "-m".
+
+If an alternate match is specified using "-m" on an ACL-specific criterion,
+the matching method is simply applied to the underlying sample fetch method.
+For example, all ACLs below are exact equivalent :
+
+ acl short_form hdr_beg(host) www.
+ acl alternate1 hdr_beg(host) -m beg www.
+ acl alternate2 hdr_dom(host) -m beg www.
+ acl alternate3 hdr(host) -m beg www.
+
+
+The table below summarizes the compatibility matrix between sample or converter
+types and the pattern types to fetch against. It indicates for each compatible
+combination the name of the matching method to be used, surrounded with angle
+brackets ">" and "<" when the method is the default one and will work by
+default without "-m".
+
+ +-------------------------------------------------+
+ | Input sample type |
+ +----------------------+---------+---------+---------+---------+---------+
+ | pattern type | boolean | integer | ip | string | binary |
+ +----------------------+---------+---------+---------+---------+---------+
+ | none (presence only) | found | found | found | found | found |
+ +----------------------+---------+---------+---------+---------+---------+
+ | none (boolean value) |> bool <| bool | | bool | |
+ +----------------------+---------+---------+---------+---------+---------+
+ | integer (value) | int |> int <| int | int | |
+ +----------------------+---------+---------+---------+---------+---------+
+ | integer (length) | len | len | len | len | len |
+ +----------------------+---------+---------+---------+---------+---------+
+ | IP address | | |> ip <| ip | ip |
+ +----------------------+---------+---------+---------+---------+---------+
+ | exact string | str | str | str |> str <| str |
+ +----------------------+---------+---------+---------+---------+---------+
+ | prefix | beg | beg | beg | beg | beg |
+ +----------------------+---------+---------+---------+---------+---------+
+ | suffix | end | end | end | end | end |
+ +----------------------+---------+---------+---------+---------+---------+
+ | substring | sub | sub | sub | sub | sub |
+ +----------------------+---------+---------+---------+---------+---------+
+ | subdir | dir | dir | dir | dir | dir |
+ +----------------------+---------+---------+---------+---------+---------+
+ | domain | dom | dom | dom | dom | dom |
+ +----------------------+---------+---------+---------+---------+---------+
+ | regex | reg | reg | reg | reg | reg |
+ +----------------------+---------+---------+---------+---------+---------+
+ | hex block | | | | bin | bin |
+ +----------------------+---------+---------+---------+---------+---------+
+
+
+7.1.1. Matching booleans
+------------------------
+
+In order to match a boolean, no value is needed and all values are ignored.
+Boolean matching is used by default for all fetch methods of type "boolean".
+When boolean matching is used, the fetched value is returned as-is, which means
+that a boolean "true" will always match and a boolean "false" will never match.
+
+Boolean matching may also be enforced using "-m bool" on fetch methods which
+return an integer value. Then, integer value 0 is converted to the boolean
+"false" and all other values are converted to "true".
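+
+For instance, to match requests carrying at least one Cookie header, "-m bool"
+may be applied to an integer fetch such as hdr_cnt :
+
+    acl has_cookie hdr_cnt(cookie) -m bool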
+
+
+7.1.2. Matching integers
+------------------------
+
+Integer matching applies by default to integer fetch methods. It can also be
+enforced on boolean fetches using "-m int". In this case, "false" is converted
+to the integer 0, and "true" is converted to the integer 1.
+
+Integer matching also supports integer ranges and operators. Note that integer
+matching only applies to positive values. A range is a value expressed with a
+lower and an upper bound separated with a colon, both of which may be omitted.
+
+For instance, "1024:65535" is a valid range to represent a range of
+unprivileged ports, and "1024:" would also work. "0:1023" is a valid
+representation of privileged ports, and ":1023" would also work.
+
+As a special case, some ACL functions support decimal numbers which are in fact
+two integers separated by a dot. This is used with some version checks for
+instance. All integer properties apply to those decimal numbers, including
+ranges and operators.
+
+For an easier usage, comparison operators are also supported. Note that using
+operators with ranges does not make much sense and is strongly discouraged.
+Similarly, it does not make much sense to perform order comparisons with a set
+of values.
+
+Available operators for integer matching are :
+
+ eq : true if the tested value equals at least one value
+ ge : true if the tested value is greater than or equal to at least one value
+ gt : true if the tested value is greater than at least one value
+ le : true if the tested value is less than or equal to at least one value
+ lt : true if the tested value is less than at least one value
+
+For instance, the following ACL matches any negative Content-Length header :
+
+ acl negative-length hdr_val(content-length) lt 0
+
+This one matches SSL versions between 3.0 and 3.1 (inclusive) :
+
+ acl sslv3 req_ssl_ver 3:3.1
+
+
+7.1.3. Matching strings
+-----------------------
+
+String matching applies to string or binary fetch methods, and exists in 6
+different forms :
+
+ - exact match (-m str) : the extracted string must exactly match the
+ patterns ;
+
+ - substring match (-m sub) : the patterns are looked up inside the
+ extracted string, and the ACL matches if any of them is found inside ;
+
+ - prefix match (-m beg) : the patterns are compared with the beginning of
+ the extracted string, and the ACL matches if any of them matches.
+
+ - suffix match (-m end) : the patterns are compared with the end of the
+ extracted string, and the ACL matches if any of them matches.
+
+ - subdir match (-m dir) : the patterns are looked up inside the extracted
+ string, delimited with slashes ("/"), and the ACL matches if any of them
+ matches.
+
+ - domain match (-m dom) : the patterns are looked up inside the extracted
+ string, delimited with dots ("."), and the ACL matches if any of them
+ matches.
+
+String matching applies to verbatim strings as they are passed, with the
+exception of the backslash ("\") which makes it possible to escape some
+characters such as the space. If the "-i" flag is passed before the first
+string, then the matching will be performed ignoring the case. In order
+to match the string "-i", either set it second, or pass the "--" flag
+before the first string. Same applies of course to match the string "--".
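+
+A few illustrative ACLs using these forms (hostnames and paths are examples) :
+
+    acl img_host    hdr(host) -m dom img
+    acl static_path path -m beg -i /static /assets
+    acl is_jpg      path -m end .jpg .jpeg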
+
+
+7.1.4. Matching regular expressions (regexes)
+---------------------------------------------
+
+Just like with string matching, regex matching applies to verbatim strings as
+they are passed, with the exception of the backslash ("\") which makes it
+possible to escape some characters such as the space. If the "-i" flag is
+passed before the first regex, then the matching will be performed ignoring
+the case. In order to match the string "-i", either set it second, or pass
+the "--" flag before the first string. Same principle applies of course to
+match the string "--".
+
+
+7.1.5. Matching arbitrary data blocks
+-------------------------------------
+
+It is possible to match some extracted samples against a binary block which may
+not safely be represented as a string. For this, the patterns must be passed as
+an even number of hexadecimal digits when the match method is set to binary.
+Each sequence of two digits represents one byte. The hexadecimal digits may be
+used in upper or lower case.
+
+Example :
+ # match "Hello\n" in the input stream (\x48 \x65 \x6c \x6c \x6f \x0a)
+ acl hello payload(0,6) -m bin 48656c6c6f0a
+
+
+7.1.6. Matching IPv4 and IPv6 addresses
+---------------------------------------
+
+IPv4 address values can be specified either as plain addresses or with a
+netmask appended, in which case the IPv4 address matches whenever it is
+within the network. Plain addresses may also be replaced with a resolvable
+host name, but this practice is generally discouraged as it makes it more
+difficult to read and debug configurations. If hostnames are used, you should
+at least ensure that they are present in /etc/hosts so that the configuration
+does not depend on any random DNS match at the moment the configuration is
+parsed.
+
+IPv6 addresses may be entered in their usual form, with or without a netmask
+appended. Only bit counts are accepted for IPv6 netmasks. To avoid any risk of
+trouble with randomly resolved IP addresses, host names are never allowed in
+IPv6 patterns.
+
+HAProxy is also able to match IPv4 addresses with IPv6 addresses in the
+following situations :
+ - tested address is IPv4, pattern address is IPv4, the match applies
+ in IPv4 using the supplied mask if any.
+ - tested address is IPv6, pattern address is IPv6, the match applies
+ in IPv6 using the supplied mask if any.
+ - tested address is IPv6, pattern address is IPv4, the match applies in IPv4
+ using the pattern's mask if the IPv6 address matches with 2002:IPV4::,
+ ::IPV4 or ::ffff:IPV4, otherwise it fails.
+ - tested address is IPv4, pattern address is IPv6, the IPv4 address is first
+ converted to IPv6 by prefixing ::ffff: in front of it, then the match is
+ applied in IPv6 using the supplied IPv6 mask.
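+
+For example (addresses taken from documentation ranges) :
+
+    acl internal_net src 10.0.0.0/8 192.168.0.0/16
+    acl doc_net      src 2001:db8::/32
+    acl local_client src 127.0.0.1 ::1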
+
+
+7.2. Using ACLs to form conditions
+----------------------------------
+
+Some actions are only performed upon a valid condition. A condition is a
+combination of ACLs with operators. 3 operators are supported :
+
+ - AND (implicit)
+ - OR (explicit with the "or" keyword or the "||" operator)
+ - Negation with the exclamation mark ("!")
+
+A condition is formed as a disjunctive form:
+
+ [!]acl1 [!]acl2 ... [!]acln { or [!]acl1 [!]acl2 ... [!]acln } ...
+
+Such conditions are generally used after an "if" or "unless" statement,
+indicating when the condition will trigger the action.
+
+For instance, to block HTTP requests to the "*" URL with methods other than
+"OPTIONS", as well as POST requests without content-length, and GET or HEAD
+requests with a content-length greater than 0, and finally every request which
+is not one of GET/HEAD/POST/OPTIONS :
+
+ acl missing_cl hdr_cnt(Content-length) eq 0
+ block if HTTP_URL_STAR !METH_OPTIONS || METH_POST missing_cl
+ block if METH_GET HTTP_CONTENT
+ block unless METH_GET or METH_POST or METH_OPTIONS
+
+To select a different backend for requests to static contents on the "www" site
+and to every request on the "img", "video", "download" and "ftp" hosts :
+
+ acl url_static path_beg /static /images /img /css
+ acl url_static path_end .gif .png .jpg .css .js
+ acl host_www hdr_beg(host) -i www
+ acl host_static hdr_beg(host) -i img. video. download. ftp.
+
+ # now use backend "static" for all static-only hosts, and for static urls
+ # of host "www". Use backend "www" for the rest.
+ use_backend static if host_static or host_www url_static
+ use_backend www if host_www
+
+It is also possible to form rules using "anonymous ACLs". Those are unnamed ACL
+expressions that are built on the fly without needing to be declared. They must
+be enclosed between braces, with a space before and after each brace (because
+the braces must be seen as independent words). Example :
+
+ The following rule :
+
+ acl missing_cl hdr_cnt(Content-length) eq 0
+ block if METH_POST missing_cl
+
+ Can also be written that way :
+
+ block if METH_POST { hdr_cnt(Content-length) eq 0 }
+
+It is generally not recommended to use this construct because it's a lot easier
+to leave errors in the configuration when written that way. However, for very
+simple rules matching only one source IP address for instance, it can make more
+sense to use them than to declare ACLs with random names. Another example of
+good use is the following :
+
+ With named ACLs :
+
+ acl site_dead nbsrv(dynamic) lt 2
+ acl site_dead nbsrv(static) lt 2
+ monitor fail if site_dead
+
+ With anonymous ACLs :
+
+ monitor fail if { nbsrv(dynamic) lt 2 } || { nbsrv(static) lt 2 }
+
+See section 4.2 for detailed help on the "block" and "use_backend" keywords.
+
+
+7.3. Fetching samples
+---------------------
+
+Historically, sample fetch methods were only used to retrieve data to match
+against patterns using ACLs. With the arrival of stick-tables, a new class of
+sample fetch methods was created, most often sharing the same syntax as their
+ACL counterpart. These sample fetch methods are also known as "fetches". As
+of now, ACLs and fetches have converged. All ACL fetch methods have been made
+available as fetch methods, and ACLs may use any sample fetch method as well.
+
+This section details all available sample fetch methods and their output type.
+Some sample fetch methods have deprecated aliases that are used to maintain
+compatibility with existing configurations. They are then explicitly marked as
+deprecated and should not be used in new setups.
+
+The ACL derivatives are also indicated when available, with their respective
+matching methods. These ones all have a well defined default pattern matching
+method, so it is never necessary (though allowed) to pass the "-m" option to
+indicate how the sample will be matched using ACLs.
+
+As indicated in the sample type versus matching compatibility matrix above,
+when using a generic sample fetch method in an ACL, the "-m" option is
+mandatory unless the sample type is one of boolean, integer, IPv4 or IPv6. When
+the same keyword exists as an ACL keyword and as a standard fetch method, the
+ACL engine will automatically pick the ACL-only one by default.
+
+Some of these keywords support one or multiple mandatory arguments, and one or
+multiple optional arguments. These arguments are strongly typed and are checked
+when the configuration is parsed so that there is no risk of running with an
+incorrect argument (eg: an unresolved backend name). Fetch function arguments
+are passed between parentheses and are delimited by commas. When an argument
+is optional, it will be indicated below between square brackets ('[ ]'). When
+all arguments are optional, the parentheses may be omitted.
+
+Thus, the syntax of a standard sample fetch method is one of the following :
+ - name
+ - name(arg1)
+ - name(arg1,arg2)
+
+
+7.3.1. Converters
+-----------------
+
+Sample fetch methods may be combined with transformations to be applied on top
+of the fetched sample (also called "converters"). These combinations form what
+is called "sample expressions" and the result is a "sample". Initially this
+was only supported by "stick on" and "stick store-request" directives but this
+has now been extended to all places where samples may be used (acls, log-format,
+unique-id-format, add-header, ...).
+
+These transformations are enumerated as a series of specific keywords after the
+sample fetch method. These keywords may equally be appended immediately after
+the fetch keyword's argument, delimited by a comma. These keywords can also
+support some arguments (eg: a netmask) which must be passed in parentheses.
+
+A certain category of converters are bitwise and arithmetic operators which
+support performing basic operations on integers. Some bitwise operations are
+supported (and, or, xor, cpl) and some arithmetic operations are supported
+(add, sub, mul, div, mod, neg). Some comparators are provided (odd, even, not,
+bool) which make it possible to report a match without having to write an ACL.
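+
+For example, the following hypothetical rules test bit 2 (value 4) of an
+integer header value ; the header name is fictitious :
+
+    # report true when bit 2 is set in the X-Flags header value
+    acl flag2_set req.hdr_val(X-Flags),and(4),bool
+    http-request deny unless flag2_set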
+
+The currently available list of transformation keywords includes :
+
+add(<value>)
+ Adds <value> to the input value of type signed integer, and returns the
+ result as a signed integer. <value> can be a numeric value or a variable
+ name. The name of the variable starts with an indication about its scope. The
+ allowed scopes are:
+ "sess" : the variable is shared with all the session,
+ "txn" : the variable is shared with all the transaction (request and
+ response),
+ "req" : the variable is shared only during the request processing,
+ "res" : the variable is shared only during the response processing.
+ This prefix is followed by a name. The separator is a '.'. The name may only
+ contain characters 'a-z', 'A-Z', '0-9' and '_'.
+
+and(<value>)
+ Performs a bitwise "AND" between <value> and the input value of type signed
+ integer, and returns the result as a signed integer. <value> can be a
+ numeric value or a variable name. The name of the variable starts with an
+ indication about its scope. The allowed scopes are:
+ "sess" : the variable is shared with all the session,
+ "txn" : the variable is shared with all the transaction (request and
+ response),
+ "req" : the variable is shared only during the request processing,
+ "res" : the variable is shared only during the response processing.
+ This prefix is followed by a name. The separator is a '.'. The name may only
+ contain characters 'a-z', 'A-Z', '0-9' and '_'.
+
+base64
+ Converts a binary input sample to a base64 string. It is used to log or
+ transfer binary content in a way that can be reliably transferred (eg:
+ an SSL ID can be copied in a header).
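+
+  Example (the response header name is chosen for the illustration) :
+    # copy the SSL session ID into a response header as base64
+    http-response set-header X-SSL-Session-ID %[ssl_fc_session_id,base64]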
+
+bool
+ Returns a boolean TRUE if the input value of type signed integer is
+ non-null, otherwise returns FALSE. Used in conjunction with and(), it can be
+ used to report true/false for bit testing on input values (eg: verify the
+ presence of a flag).
+
+bytes(<offset>[,<length>])
+ Extracts some bytes from an input binary sample. The result is a binary
+ sample starting at an offset (in bytes) of the original sample and
+ optionally truncated at the given length.
+
+cpl
+ Takes the input value of type signed integer, applies a ones-complement
+ (flips all bits) and returns the result as a signed integer.
+
+crc32([<avalanche>])
+ Hashes a binary input sample into an unsigned 32-bit quantity using the CRC32
+ hash function. Optionally, it is possible to apply a full avalanche hash
+ function to the output if the optional <avalanche> argument equals 1. This
+ converter uses the same functions as used by the various hash-based load
+ balancing algorithms, so it will provide exactly the same results. It is
+ provided for compatibility with other software which wants a CRC32 to be
+ computed on some input keys, so it follows the most common implementation as
+ found in Ethernet, Gzip, PNG, etc... It is slower than the other algorithms
+ but may provide a better or at least less predictable distribution. It must
+ not be used for security purposes as a 32-bit hash is trivial to break. See
+ also "djb2", "sdbm", "wt6" and the "hash-type" directive.
+
+da-csv-conv(<prop>[,<prop>*])
+ Asks the DeviceAtlas converter to identify the User Agent string passed on
+ input, and to emit a string made of the concatenation of the properties
+ enumerated in argument, delimited by the separator defined by the global
+ keyword "deviceatlas-property-separator", or by default the pipe character
+ ('|'). There's a limit of 5 different properties imposed by the haproxy
+ configuration language.
+
+ Example:
+ frontend www
+ bind *:8881
+ default_backend servers
+ http-request set-header X-DeviceAtlas-Data %[req.fhdr(User-Agent),da-csv(primaryHardwareType,osName,osVersion,browserName,browserVersion)]
+
+debug
+ This converter is used as debug tool. It dumps on screen the content and the
+ type of the input sample. The sample is returned as is on its output. This
+ converter only exists when haproxy was built with debugging enabled.
+
+div(<value>)
+ Divides the input value of type signed integer by <value>, and returns the
+ result as a signed integer. If <value> is null, the largest unsigned
+ integer is returned (typically 2^63-1). <value> can be a numeric value or a
+ variable name. The name of the variable starts with an indication about its
+ scope. The allowed scopes are:
+ "sess" : the variable is shared with all the session,
+ "txn" : the variable is shared with all the transaction (request and
+ response),
+ "req" : the variable is shared only during the request processing,
+ "res" : the variable is shared only during the response processing.
+ This prefix is followed by a name. The separator is a '.'. The name may only
+ contain characters 'a-z', 'A-Z', '0-9' and '_'.
+
+djb2([<avalanche>])
+ Hashes a binary input sample into an unsigned 32-bit quantity using the DJB2
+ hash function. Optionally, it is possible to apply a full avalanche hash
+ function to the output if the optional <avalanche> argument equals 1. This
+ converter uses the same functions as used by the various hash-based load
+ balancing algorithms, so it will provide exactly the same results. It is
+ mostly intended for debugging, but can be used as a stick-table entry to
+ collect rough statistics. It must not be used for security purposes as a
+ 32-bit hash is trivial to break. See also "crc32", "sdbm", "wt6" and the
+ "hash-type" directive.
+
+even
+ Returns a boolean TRUE if the input value of type signed integer is even,
+ otherwise returns FALSE. It is functionally equivalent to "not,and(1),bool".
+
+field(<index>,<delimiters>)
+ Extracts the substring at the given index, considering the given delimiters,
+ from an input string. Indexes start at 1 and <delimiters> is a string
+ containing the list of delimiter characters.
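+
+  Example (the output header name is chosen for the illustration) :
+    # with "Host: www.example.com", the header is set to "example"
+    http-request set-header X-Domain-Part %[req.hdr(host),field(2,.)]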
+
+hex
+ Converts a binary input sample to a hex string containing two hex digits per
+ input byte. It is used to log or transfer hex dumps of some binary input data
+ in a way that can be reliably transferred (eg: an SSL ID can be copied in a
+ header).
+
+http_date([<offset>])
+ Converts an integer supposed to contain a date since epoch to a string
+ representing this date in a format suitable for use in HTTP header fields. If
+ an offset value is specified, then it is a number of seconds that is added to
+ the date before the conversion is operated. This is particularly useful to
+ emit Date header fields, Expires values in responses when combined with a
+ positive offset, or Last-Modified values when the offset is negative.
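+
+  Example (illustrative) :
+    # let clients cache the response for one hour
+    http-response set-header Expires %[date(3600),http_date]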
+
+in_table(<table>)
+ Uses the string representation of the input sample to perform a look up in
+ the specified table. If the key is not found in the table, a boolean false
+ is returned. Otherwise a boolean true is returned. This can be used to verify
+ the presence of a certain key in a table tracking some elements (eg: whether
+ or not a source IP address or an Authorization header was already seen).
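+
+  Example (the table and backend names are hypothetical) :
+    # route requests from already-seen source addresses to a
+    # dedicated backend ; "st_seen" is declared elsewhere
+    acl known_client src,in_table(st_seen)
+    use_backend trusted if known_client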
+
+ipmask(<mask>)
+ Apply a mask to an IPv4 address, and use the result for lookups and storage.
+ This can be used to make all hosts within a certain mask share the same
+ table entries and as such use the same server. The mask can be passed in
+ dotted form (eg: 255.255.255.0) or in CIDR form (eg: 24).
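+
+  Example (illustrative) :
+    # make all clients of the same /24 network stick to the same server
+    stick on src,ipmask(255.255.255.0)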
+
+json([<input-code>])
+ Escapes the input string and produces an ASCII output string ready to use as a
+ JSON string. The converter tries to decode the input string according to the
+ <input-code> parameter. It can be "ascii", "utf8", "utf8s", "utf8p" or
+ "utf8ps". The "ascii" decoder never fails. The "utf8" decoder detects 3 types
+ of errors:
+ - bad UTF-8 sequence (lone continuation byte, bad number of continuation
+ bytes, ...)
+ - invalid range (the decoded value is within a UTF-8 prohibited range),
+ - code overlong (the value is encoded with more bytes than necessary).
+
+ The UTF-8 JSON encoding can produce a "too long value" error when the UTF-8
+ character is greater than 0xffff because the JSON string escape specification
+ only authorizes 4 hex digits for the value encoding. The UTF-8 decoder exists
+ in 4 variants designated by a combination of two suffix letters : "p" for
+ "permissive" and "s" for "silently ignore". The behaviors of the decoders
+ are :
+ - "ascii" : never fails ;
+ - "utf8" : fails on any detected errors ;
+ - "utf8s" : never fails, but removes characters corresponding to errors ;
+ - "utf8p" : accepts and fixes the overlong errors, but fails on any other
+ error ;
+ - "utf8ps" : never fails, accepts and fixes the overlong errors, but removes
+ characters corresponding to the other errors.
+
+ This converter is particularly useful for building properly escaped JSON for
+ logging to servers which consume JSON-formatted traffic logs.
+
+ Example:
+ capture request header user-agent len 150
+ capture request header Host len 15
+ log-format {"ip":"%[src]","user-agent":"%[capture.req.hdr(1),json]"}
+
+ Input request from client 127.0.0.1:
+ GET / HTTP/1.0
+ User-Agent: Very "Ugly" UA 1/2
+
+ Output log:
+ {"ip":"127.0.0.1","user-agent":"Very \"Ugly\" UA 1\/2"}
+
+language(<value>[,<default>])
+ Returns the value with the highest q-factor from a list as extracted from the
+ "accept-language" header using "req.fhdr". Values with no q-factor have a
+ q-factor of 1. Values with a q-factor of 0 are dropped. Only values which
+ belong to the list of semicolon-delimited <value> items will be considered. The
+ argument <value> syntax is "lang[;lang[;lang[;...]]]". If no value matches the
+ given list and a default value is provided, it is returned. Note that language
+ names may have a variant after a dash ('-'). If this variant is present in the
+ list, it will be matched, but if it is not, only the base language is checked.
+ The match is case-sensitive, and the output string is always one of those
+ provided in arguments. The ordering of arguments is meaningless, only the
+ ordering of the values in the request counts, as the first value among
+ multiple sharing the same q-factor is used.
+
+ Example :
+
+ # this configuration switches to the backend matching a
+ # given language based on the request :
+
+ acl es req.fhdr(accept-language),language(es;fr;en) -m str es
+ acl fr req.fhdr(accept-language),language(es;fr;en) -m str fr
+ acl en req.fhdr(accept-language),language(es;fr;en) -m str en
+ use_backend spanish if es
+ use_backend french if fr
+ use_backend english if en
+ default_backend choose_your_language
+
+lower
+ Convert a string sample to lower case. This can only be placed after a string
+ sample fetch function or after a transformation keyword returning a string
+ type. The result is of type string.
+
+ltime(<format>[,<offset>])
+ Converts an integer supposed to contain a date since epoch to a string
+ representing this date in local time using a format defined by the <format>
+ string using strftime(3). The purpose is to allow any date format to be used
+ in logs. An optional <offset> in seconds may be applied to the input date
+ (positive or negative). See the strftime() man page for the format supported
+ by your operating system. See also the utime converter.
+
+ Example :
+
+ # Emit two colons, one with the local time and another with ip:port
+ # Eg: 20140710162350 127.0.0.1:57325
+ log-format %[date,ltime(%Y%m%d%H%M%S)]\ %ci:%cp
+
+map(<map_file>[,<default_value>])
+map_<match_type>(<map_file>[,<default_value>])
+map_<match_type>_<output_type>(<map_file>[,<default_value>])
+ Search the input value from <map_file> using the <match_type> matching method,
+ and return the associated value converted to the type <output_type>. If the
+ input value cannot be found in the <map_file>, the converter returns the
+ <default_value>. If the <default_value> is not set, the converter fails and
+ acts as if no input value could be fetched. If the <match_type> is not set, it
+ defaults to "str". Likewise, if the <output_type> is not set, it defaults to
+ "str". For convenience, the "map" keyword is an alias for "map_str" and maps a
+ string to another string.
+
+ It is important to avoid overlapping between the keys : IP addresses and
+ strings are stored in trees, so the first of the finest match will be used.
+ Other keys are stored in lists, so the first matching occurrence will be used.
+
+ The following array contains the list of all map functions available sorted by
+ input type, match type and output type.
+
+ input type | match method | output type str | output type int | output type ip
+ -----------+--------------+-----------------+-----------------+---------------
+ str | str | map_str | map_str_int | map_str_ip
+ -----------+--------------+-----------------+-----------------+---------------
+ str | beg | map_beg | map_beg_int | map_beg_ip
+ -----------+--------------+-----------------+-----------------+---------------
+ str | sub | map_sub | map_sub_int | map_sub_ip
+ -----------+--------------+-----------------+-----------------+---------------
+ str | dir | map_dir | map_dir_int | map_dir_ip
+ -----------+--------------+-----------------+-----------------+---------------
+ str | dom | map_dom | map_dom_int | map_dom_ip
+ -----------+--------------+-----------------+-----------------+---------------
+ str | end | map_end | map_end_int | map_end_ip
+ -----------+--------------+-----------------+-----------------+---------------
+ str | reg | map_reg | map_reg_int | map_reg_ip
+ -----------+--------------+-----------------+-----------------+---------------
+ int | int | map_int | map_int_int | map_int_ip
+ -----------+--------------+-----------------+-----------------+---------------
+ ip | ip | map_ip | map_ip_int | map_ip_ip
+ -----------+--------------+-----------------+-----------------+---------------
+
+ The file contains one key + value per line. Lines which start with '#' are
+ ignored, just like empty lines. Leading tabs and spaces are stripped. The key
+ is then the first "word" (series of non-space/tabs characters), and the value
+ is what follows this series of space/tab till the end of the line excluding
+ trailing spaces/tabs.
+
+ Example :
+
+ # this is a comment and is ignored
+ 2.22.246.0/23 United Kingdom \n
+ <-><-----------><--><------------><---->
+ | | | | `- trailing spaces ignored
+ | | | `---------- value
+ | | `-------------------- middle spaces ignored
+ | `---------------------------- key
+ `------------------------------------ leading spaces ignored
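+
+  A possible use, where the map file path and the default backend name are
+  chosen for the illustration :
+
+    # route each domain to the backend named in the map file, falling
+    # back to "bk_default" for unknown hosts
+    use_backend %[req.hdr(host),lower,map_dom(/etc/haproxy/domains.map,bk_default)]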
+
+mod(<value>)
+ Divides the input value of type signed integer by <value>, and returns the
+ remainder as a signed integer. If <value> is null, then zero is returned.
+ <value> can be a numeric value or a variable name. The name of the variable
+ starts with an indication about its scope. The allowed scopes are:
+ "sess" : the variable is shared with all the session,
+ "txn" : the variable is shared with all the transaction (request and
+ response),
+ "req" : the variable is shared only during the request processing,
+ "res" : the variable is shared only during the response processing.
+ This prefix is followed by a name. The separator is a '.'. The name may only
+ contain characters 'a-z', 'A-Z', '0-9' and '_'.
+
+mul(<value>)
+ Multiplies the input value of type signed integer by <value>, and returns
+ the product as a signed integer. In case of overflow, the largest possible
+ value for the sign is returned so that the operation doesn't wrap around.
+ <value> can be a numeric value or a variable name. The name of the variable
+ starts with an indication about its scope. The allowed scopes are:
+ "sess" : the variable is shared with all the session,
+ "txn" : the variable is shared with all the transaction (request and
+ response),
+ "req" : the variable is shared only during the request processing,
+ "res" : the variable is shared only during the response processing.
+ This prefix is followed by a name. The separator is a '.'. The name may only
+ contain characters 'a-z', 'A-Z', '0-9' and '_'.
+
+neg
+ Takes the input value of type signed integer, computes the opposite value,
+ and returns the result as a signed integer. 0 is identity. This operator
+ is provided for reversed subtraction : in order to subtract the input from a
+ constant, simply perform a "neg,add(value)".
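+
+  Example (the header names are fictitious) :
+    # emit "100 minus the input value" using a reversed subtract
+    http-request set-header X-Remaining %[req.hdr_val(X-Used),neg,add(100)]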
+
+not
+ Returns a boolean FALSE if the input value of type signed integer is
+ non-null, otherwise returns TRUE. Used in conjunction with and(), it can be
+ used to report true/false for bit testing on input values (eg: verify the
+ absence of a flag).
+
+odd
+ Returns a boolean TRUE if the input value of type signed integer is odd,
+ otherwise returns FALSE. It is functionally equivalent to "and(1),bool".
+
+or(<value>)
+ Performs a bitwise "OR" between <value> and the input value of type signed
+ integer, and returns the result as a signed integer. <value> can be a
+ numeric value or a variable name. The name of the variable starts with an
+ indication about its scope. The allowed scopes are:
+ "sess" : the variable is shared with all the session,
+ "txn" : the variable is shared with all the transaction (request and
+ response),
+ "req" : the variable is shared only during the request processing,
+ "res" : the variable is shared only during the response processing.
+ This prefix is followed by a name. The separator is a '.'. The name may only
+ contain characters 'a-z', 'A-Z', '0-9' and '_'.
+
+regsub(<regex>,<subst>[,<flags>])
+ Applies a regex-based substitution to the input string. It does the same
+ operation as the well-known "sed" utility with "s/<regex>/<subst>/". By
+ default it will replace in the input string the first occurrence of the
+ largest part matching the regular expression <regex> with the substitution
+ string <subst>. It is possible to replace all occurrences instead by adding
+ the flag "g" in the third argument <flags>. It is also possible to make the
+ regex case insensitive by adding the flag "i" in <flags>. Since <flags> is a
+ string, it is made up from the concatenation of all desired flags. Thus if
+ both "i" and "g" are desired, using "gi" or "ig" will have the same effect.
+ It is important to note that due to the current limitations of the
+ configuration parser, some characters such as closing parenthesis or comma
+ are not possible to use in the arguments. The most common use of this
+ converter is to replace certain characters or sequences of characters with
+ other ones.
+
+ Example :
+
+ # de-duplicate "/" in header "x-path".
+ # input: x-path: /////a///b/c/xzxyz/
+ # output: x-path: /a/b/c/xzxyz/
+ http-request set-header x-path %[hdr(x-path),regsub(/+,/,g)]
+
+capture-req(<id>)
+ Capture the string entry in the request slot <id> and returns the entry as
+ is. If the slot doesn't exist, the capture fails silently.
+
+ See also: "declare capture", "http-request capture",
+ "http-response capture", "capture.req.hdr" and
+ "capture.res.hdr" (sample fetches).
+
+capture-res(<id>)
+ Capture the string entry in the response slot <id> and returns the entry as
+ is. If the slot doesn't exist, the capture fails silently.
+
+ See also: "declare capture", "http-request capture",
+ "http-response capture", "capture.req.hdr" and
+ "capture.res.hdr" (sample fetches).
+
+sdbm([<avalanche>])
+ Hashes a binary input sample into an unsigned 32-bit quantity using the SDBM
+ hash function. Optionally, it is possible to apply a full avalanche hash
+ function to the output if the optional <avalanche> argument equals 1. This
+ converter uses the same functions as used by the various hash-based load
+ balancing algorithms, so it will provide exactly the same results. It is
+ mostly intended for debugging, but can be used as a stick-table entry to
+ collect rough statistics. It must not be used for security purposes as a
+ 32-bit hash is trivial to break. See also "crc32", "djb2", "wt6" and the
+ "hash-type" directive.
+
+set-var(<var name>)
+ Sets a variable with the input content and returns the content on the output
+ as is. The variable keeps the value and the associated input type. The name
+ of the variable starts with an indication about its scope. The allowed scopes
+ are:
+ "sess" : the variable is shared with all the session,
+ "txn" : the variable is shared with all the transaction (request and
+ response),
+ "req" : the variable is shared only during the request processing,
+ "res" : the variable is shared only during the response processing.
+ This prefix is followed by a name. The separator is a '.'. The name may only
+ contain characters 'a-z', 'A-Z', '0-9' and '_'.
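+
+  Example (the variable and header names are chosen for the illustration) :
+    # store the normalized Host header while emitting it, so that it can
+    # be reused later through the "var" sample fetch
+    http-request set-header X-Host %[req.hdr(host),lower,set-var(txn.host)]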
+
+sub(<value>)
+ Subtracts <value> from the input value of type signed integer, and returns
+ the result as a signed integer. Note: in order to subtract the input from
+ a constant, simply perform a "neg,add(value)". <value> can be a numeric value
+ or a variable name. The name of the variable starts with an indication about
+ its scope. The allowed scopes are:
+ "sess" : the variable is shared with all the session,
+ "txn" : the variable is shared with all the transaction (request and
+ response),
+ "req" : the variable is shared only during the request processing,
+ "res" : the variable is shared only during the response processing.
+ This prefix is followed by a name. The separator is a '.'. The name may only
+ contain characters 'a-z', 'A-Z', '0-9' and '_'.
+
+table_bytes_in_rate(<table>)
+ Uses the string representation of the input sample to perform a look up in
+ the specified table. If the key is not found in the table, integer value zero
+ is returned. Otherwise the converter returns the average client-to-server
+ bytes rate associated with the input sample in the designated table, measured
+ in amount of bytes over the period configured in the table. See also the
+ sc_bytes_in_rate sample fetch keyword.
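+
+  Example (the table name and threshold are illustrative ; "st_src" would be
+  a stick-table storing bytes_in_rate) :
+    # deny sources uploading more than roughly 10 MB over the table period
+    acl heavy_uploader src,table_bytes_in_rate(st_src) gt 10485760
+    http-request deny if heavy_uploader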
+
+table_bytes_out_rate(<table>)
+ Uses the string representation of the input sample to perform a look up in
+ the specified table. If the key is not found in the table, integer value zero
+ is returned. Otherwise the converter returns the average server-to-client
+ bytes rate associated with the input sample in the designated table, measured
+ in amount of bytes over the period configured in the table. See also the
+ sc_bytes_out_rate sample fetch keyword.
+
+table_conn_cnt(<table>)
+ Uses the string representation of the input sample to perform a look up in
+ the specified table. If the key is not found in the table, integer value zero
+ is returned. Otherwise the converter returns the cumulated amount of incoming
+ connections associated with the input sample in the designated table. See
+ also the sc_conn_cnt sample fetch keyword.
+
+table_conn_cur(<table>)
+ Uses the string representation of the input sample to perform a look up in
+ the specified table. If the key is not found in the table, integer value zero
+ is returned. Otherwise the converter returns the current amount of concurrent
+ tracked connections associated with the input sample in the designated table.
+ See also the sc_conn_cur sample fetch keyword.
+
+table_conn_rate(<table>)
+ Uses the string representation of the input sample to perform a look up in
+ the specified table. If the key is not found in the table, integer value zero
+ is returned. Otherwise the converter returns the average incoming connection
+ rate associated with the input sample in the designated table. See also the
+ sc_conn_rate sample fetch keyword.
+
+table_gpt0(<table>)
+ Uses the string representation of the input sample to perform a look up in
+ the specified table. If the key is not found in the table, boolean value zero
+ is returned. Otherwise the converter returns the current value of the first
+ general purpose tag associated with the input sample in the designated table.
+ See also the sc_get_gpt0 sample fetch keyword.
+
+table_gpc0(<table>)
+ Uses the string representation of the input sample to perform a look up in
+ the specified table. If the key is not found in the table, integer value zero
+ is returned. Otherwise the converter returns the current value of the first
+ general purpose counter associated with the input sample in the designated
+ table. See also the sc_get_gpc0 sample fetch keyword.
+
+table_gpc0_rate(<table>)
+ Uses the string representation of the input sample to perform a look up in
+ the specified table. If the key is not found in the table, integer value zero
+ is returned. Otherwise the converter returns the frequency at which the gpc0
+ counter was incremented over the configured period in the table, associated
+ with the input sample in the designated table. See also the sc_get_gpc0_rate
+ sample fetch keyword.
+
+table_http_err_cnt(<table>)
+ Uses the string representation of the input sample to perform a look up in
+ the specified table. If the key is not found in the table, integer value zero
+ is returned. Otherwise the converter returns the cumulated amount of HTTP
+ errors associated with the input sample in the designated table. See also the
+ sc_http_err_cnt sample fetch keyword.
+
+table_http_err_rate(<table>)
+ Uses the string representation of the input sample to perform a look up in
+ the specified table. If the key is not found in the table, integer value zero
+ is returned. Otherwise the converter returns the average rate of HTTP errors
+ associated with the input sample in the designated table, measured in amount
+ of errors over the period configured in the table. See also the
+ sc_http_err_rate sample fetch keyword.
+
+table_http_req_cnt(<table>)
+ Uses the string representation of the input sample to perform a look up in
+ the specified table. If the key is not found in the table, integer value zero
+ is returned. Otherwise the converter returns the cumulated amount of HTTP
+ requests associated with the input sample in the designated table. See also
+ the sc_http_req_cnt sample fetch keyword.
+
+table_http_req_rate(<table>)
+ Uses the string representation of the input sample to perform a look up in
+ the specified table. If the key is not found in the table, integer value zero
+ is returned. Otherwise the converter returns the average rate of HTTP
+ requests associated with the input sample in the designated table, measured
+ in amount of requests over the period configured in the table. See also the
+ sc_http_req_rate sample fetch keyword.
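+
+  Example using an anonymous ACL ; the table name and threshold are
+  illustrative :
+    # reject sources exceeding 100 requests over the table's period
+    tcp-request content reject if { src,table_http_req_rate(st_src) gt 100 }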
+
+table_kbytes_in(<table>)
+ Uses the string representation of the input sample to perform a look up in
+ the specified table. If the key is not found in the table, integer value zero
+ is returned. Otherwise the converter returns the cumulated amount of client-
+ to-server data associated with the input sample in the designated table,
+ measured in kilobytes. The test is currently performed on 32-bit integers,
+ which limits values to 4 terabytes. See also the sc_kbytes_in sample fetch
+ keyword.
+
+table_kbytes_out(<table>)
+ Uses the string representation of the input sample to perform a look up in
+ the specified table. If the key is not found in the table, integer value zero
+ is returned. Otherwise the converter returns the cumulated amount of server-
+ to-client data associated with the input sample in the designated table,
+ measured in kilobytes. The test is currently performed on 32-bit integers,
+ which limits values to 4 terabytes. See also the sc_kbytes_out sample fetch
+ keyword.
+
+table_server_id(<table>)
+ Uses the string representation of the input sample to perform a look up in
+ the specified table. If the key is not found in the table, integer value zero
+ is returned. Otherwise the converter returns the server ID associated with
+ the input sample in the designated table. A server ID is associated with a
+ sample by a "stick" rule when a connection to a server succeeds. A server ID
+ zero means that no server is associated with this key.
+
+table_sess_cnt(<table>)
+ Uses the string representation of the input sample to perform a look up in
+ the specified table. If the key is not found in the table, integer value zero
+ is returned. Otherwise the converter returns the cumulated amount of incoming
+ sessions associated with the input sample in the designated table. Note that
+ a session here refers to an incoming connection being accepted by the
+ "tcp-request connection" rulesets. See also the sc_sess_cnt sample fetch
+ keyword.
+
+table_sess_rate(<table>)
+ Uses the string representation of the input sample to perform a look up in
+ the specified table. If the key is not found in the table, integer value zero
+ is returned. Otherwise the converter returns the average incoming session
+ rate associated with the input sample in the designated table. Note that a
+ session here refers to an incoming connection being accepted by the
+ "tcp-request connection" rulesets. See also the sc_sess_rate sample fetch
+ keyword.
+
+table_trackers(<table>)
+ Uses the string representation of the input sample to perform a look up in
+ the specified table. If the key is not found in the table, integer value zero
+ is returned. Otherwise the converter returns the current amount of concurrent
+ connections tracking the same key as the input sample in the designated
+ table. It differs from table_conn_cur in that it does not rely on any stored
+ information but on the table's reference count (the "use" value which is
+ returned by "show table" on the CLI). This may sometimes be more suited for
+ layer7 tracking. It can be used to tell a server how many concurrent
+ connections there are from a given address for example. See also the
+ sc_trackers sample fetch keyword.
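+
+ Example (illustrative; the header and table names are assumptions) :
+
+ # tell the server how many connections currently track this client's
+ # address in the "st_src" table
+ http-request set-header X-Src-Trackers %[src,table_trackers(st_src)]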
+
+upper
+ Convert a string sample to upper case. This can only be placed after a string
+ sample fetch function or after a transformation keyword returning a string
+ type. The result is of type string.
+
+url_dec
+ Takes a URL-encoded string provided as input and returns the decoded
+ version as output. The input and the output are of type string.
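+
+ Example (a sketch; the variable name is an assumption) :
+
+ # keep the decoded request path in a variable for later use in logs
+ http-request set-var(txn.dpath) path,url_dec
+ log-format "%ci %[var(txn.dpath)]"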
+
+utime(<format>[,<offset>])
+ Converts an integer supposed to contain a date since epoch to a string
+ representing this date in UTC time, in a format defined by the <format>
+ string as in strftime(3). The purpose is to allow any date format to be used
+ in logs. An optional <offset> in seconds may be applied to the input date
+ (positive or negative). See the strftime() man page for the format supported
+ by your operating system. See also the ltime converter.
+
+ Example :
+
+ # Emit two columns, one with the UTC time and another with ip:port
+ # Eg: 20140710162350 127.0.0.1:57325
+ log-format %[date,utime(%Y%m%d%H%M%S)]\ %ci:%cp
+
+word(<index>,<delimiters>)
+ Extracts the nth word considering given delimiters from an input string.
+ Indexes start at 1 and delimiters are a string formatted list of chars.
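+
+ Example (illustrative) :
+
+ # extract the first dot-delimited label of the Host header,
+ # e.g. "img" for "img.example.com"
+ http-request set-header X-Host-Label %[req.hdr(host),word(1,.)]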
+
+wt6([<avalanche>])
+ Hashes a binary input sample into an unsigned 32-bit quantity using the WT6
+ hash function. Optionally, it is possible to apply a full avalanche hash
+ function to the output if the optional <avalanche> argument equals 1. This
+ converter uses the same functions as used by the various hash-based load
+ balancing algorithms, so it will provide exactly the same results. It is
+ mostly intended for debugging, but can be used as a stick-table entry to
+ collect rough statistics. It must not be used for security purposes as a
+ 32-bit hash is trivial to break. See also "crc32", "djb2", "sdbm", and the
+ "hash-type" directive.
+
+xor(<value>)
+ Performs a bitwise "XOR" (exclusive OR) between <value> and the input value
+ of type signed integer, and returns the result as a signed integer.
+ <value> can be a numeric value or a variable name. The name of the variable
+ starts with an indication about its scope. The allowed scopes are:
+ "sess" : the variable is shared with the whole session,
+ "txn" : the variable is shared with the whole transaction (request and
+ response),
+ "req" : the variable is shared only during the request processing,
+ "res" : the variable is shared only during the response processing.
+ This prefix is followed by a name. The separator is a '.'. The name may only
+ contain characters 'a-z', 'A-Z', '0-9' and '_'.
+
+
+7.3.2. Fetching samples from internal states
+--------------------------------------------
+
+A first set of sample fetch methods applies to internal information which does
+not even relate to any client information. These ones are sometimes used with
+"monitor-fail" directives to report an internal status to external watchers.
+The sample fetch methods described in this section are usable anywhere.
+
+always_false : boolean
+ Always returns the boolean "false" value. It may be used with ACLs as a
+ temporary replacement for another one when adjusting configurations.
+
+always_true : boolean
+ Always returns the boolean "true" value. It may be used with ACLs as a
+ temporary replacement for another one when adjusting configurations.
+
+avg_queue([<backend>]) : integer
+ Returns the total number of queued connections of the designated backend
+ divided by the number of active servers. The current backend is used if no
+ backend is specified. This is very similar to "queue" except that the size of
+ the farm is considered, in order to give a more accurate measurement of the
+ time it may take for a new connection to be processed. The main usage is with
+ ACLs to return a sorry page to new users when it becomes certain they will
+ get a degraded service, or to pass to the backend servers in a header so that
+ they decide to work in degraded mode or to disable some functions to speed up
+ the processing a bit. Note that in the event there are no longer any active
+ servers, twice the number of queued connections is considered as the measured
+ value. This is a fair estimate, as we expect one server to come back soon
+ anyway, but we still prefer to send new traffic to another backend that is in
+ better shape. See also the "queue", "be_conn", and "be_sess_rate"
+ sample fetches.
+
+be_conn([<backend>]) : integer
+ Applies to the number of currently established connections on the backend,
+ possibly including the connection being evaluated. If no backend name is
+ specified, the current one is used. But it is also possible to check another
+ backend. It can be used to use a specific farm when the nominal one is full.
+ See also the "fe_conn", "queue" and "be_sess_rate" criteria.
+
+be_sess_rate([<backend>]) : integer
+ Returns an integer value corresponding to the sessions creation rate on the
+ backend, in number of new sessions per second. This is used with ACLs to
+ switch to an alternate backend when an expensive or fragile one reaches too
+ high a session rate, or to limit abuse of service (e.g. prevent scraping of
+ an online dictionary). It can also be useful to add this element to logs
+ using a
+ online dictionary). It can also be useful to add this element to logs using a
+ log-format directive.
+
+ Example :
+ # Redirect to an error page if the dictionary is requested too often
+ backend dynamic
+ mode http
+ acl being_scanned be_sess_rate gt 100
+ redirect location /denied.html if being_scanned
+
+bin(<hexa>) : bin
+ Returns a binary string. The input is the hexadecimal representation
+ of the string.
+
+bool(<bool>) : bool
+ Returns a boolean value. <bool> can be 'true', 'false', '1' or '0'.
+ 'false' and '0' are the same. 'true' and '1' are the same.
+
+connslots([<backend>]) : integer
+ Returns an integer value corresponding to the number of connection slots
+ still available in the backend, by totaling the maximum amount of
+ connections on all servers and the maximum queue size. This is probably only
+ used with ACLs.
+
+ The basic idea here is to be able to measure the number of connection "slots"
+ still available (connection + queue), so that anything beyond that (intended
+ usage; see "use_backend" keyword) can be redirected to a different backend.
+
+ 'connslots' = number of available server connection slots + number of
+ available server queue slots.
+
+ Note that while "fe_conn" may be used, "connslots" comes in especially
+ useful when you have a case of traffic going to one single ip, splitting into
+ multiple backends (perhaps using ACLs to do name-based load balancing) and
+ you want to be able to differentiate between different backends, and their
+ available "connslots". Also, whereas "nbsrv" only measures servers that are
+ actually *down*, this fetch is more fine-grained and looks into the number of
+ available connection slots as well. See also "queue" and "avg_queue".
+
+ OTHER CAVEATS AND NOTES: at this point in time, the code does not take care
+ of dynamic connections. Also, if any of the server maxconn, or maxqueue is 0,
+ then this fetch clearly does not make sense, in which case the value returned
+ will be -1.
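+
+ Example (a sketch; the backend names and threshold are assumptions) :
+
+ # overflow to a spillover farm when fewer than 10 slots remain
+ acl main_full connslots(bk_main) lt 10
+ use_backend bk_spill if main_full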
+
+date([<offset>]) : integer
+ Returns the current date as the epoch (number of seconds since 01/01/1970).
+ If an offset value is specified, then it is a number of seconds that is added
+ to the current date before returning the value. This is particularly useful
+ to compute relative dates, as both positive and negative offsets are allowed.
+ It is useful combined with the http_date converter.
+
+ Example :
+
+ # set an expires header to now+1 hour in every response
+ http-response set-header Expires %[date(3600),http_date]
+
+env(<name>) : string
+ Returns a string containing the value of environment variable <name>. As a
+ reminder, environment variables are per-process and are sampled when the
+ process starts. This can be useful to pass some information to a next hop
+ server, or with ACLs to take specific action when the process is started a
+ certain way.
+
+ Examples :
+ # Pass the Via header to next hop with the local hostname in it
+ http-request add-header Via 1.1\ %[env(HOSTNAME)]
+
+ # reject cookie-less requests when the STOP environment variable is set
+ http-request deny if !{ cook(SESSIONID) -m found } { env(STOP) -m found }
+
+fe_conn([<frontend>]) : integer
+ Returns the number of currently established connections on the frontend,
+ possibly including the connection being evaluated. If no frontend name is
+ specified, the current one is used. But it is also possible to check another
+ frontend. It can be used to return a sorry page before hard-blocking, or to
+ use a specific backend to drain new requests when the farm is considered
+ full. This is mostly used with ACLs but can also be used to pass some
+ statistics to servers in HTTP headers. See also the "dst_conn", "be_conn",
+ "fe_sess_rate" fetches.
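+
+ Example (illustrative thresholds and backend name) :
+
+ frontend www
+     bind :80
+     maxconn 1000
+     # drain new requests into a lightweight backend when close to the
+     # limit instead of queueing them
+     acl almost_full fe_conn ge 900
+     use_backend bk_sorry if almost_full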
+
+fe_sess_rate([<frontend>]) : integer
+ Returns an integer value corresponding to the sessions creation rate on the
+ frontend, in number of new sessions per second. This is used with ACLs to
+ limit the incoming session rate to an acceptable range in order to prevent
+ abuse of service at the earliest moment, for example when combined with other
+ layer 4 ACLs in order to force the clients to wait a bit for the rate to go
+ down below the limit. It can also be useful to add this element to logs using
+ a log-format directive. See also the "rate-limit sessions" directive for use
+ in frontends.
+
+ Example :
+ # This frontend limits incoming mails to 10/s with a max of 100
+ # concurrent connections. We accept any connection below 10/s, and
+ # force excess clients to wait for 100 ms. Since clients are limited to
+ # 100 max, there cannot be more than 10 incoming mails per second.
+ frontend mail
+ bind :25
+ mode tcp
+ maxconn 100
+ acl too_fast fe_sess_rate ge 10
+ tcp-request inspect-delay 100ms
+ tcp-request content accept if ! too_fast
+ tcp-request content accept if WAIT_END
+
+int(<integer>) : signed integer
+ Returns a signed integer.
+
+ipv4(<ipv4>) : ipv4
+ Returns an ipv4.
+
+ipv6(<ipv6>) : ipv6
+ Returns an ipv6.
+
+meth(<method>) : method
+ Returns a method.
+
+nbproc : integer
+ Returns an integer value corresponding to the number of processes that were
+ started (it equals the global "nbproc" setting). This is useful for logging
+ and debugging purposes.
+
+nbsrv([<backend>]) : integer
+ Returns an integer value corresponding to the number of usable servers of
+ either the current backend or the named backend. This is mostly used with
+ ACLs but can also be useful when added to logs. This is normally used to
+ switch to an alternate backend when the number of servers is too low to
+ handle some load. It is useful to report a failure when combined with
+ "monitor fail".
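+
+ Example (a sketch of the "monitor fail" pattern; names and thresholds are
+ assumptions) :
+
+ frontend health
+     bind :8080
+     mode http
+     # report failure to external checkers when fewer than 2 servers
+     # of the "dynamic" backend are usable
+     acl too_few nbsrv(dynamic) lt 2
+     monitor-uri /site_alive
+     monitor fail if too_few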
+
+proc : integer
+ Returns an integer value corresponding to the position of the process calling
+ the function, between 1 and global.nbproc. This is useful for logging and
+ debugging purposes.
+
+queue([<backend>]) : integer
+ Returns the total number of queued connections of the designated backend,
+ including all the connections in server queues. If no backend name is
+ specified, the current one is used, but it is also possible to check another
+ one. This is useful with ACLs or to pass statistics to backend servers. This
+ can be used to take actions when queuing goes above a known level, generally
+ indicating a surge of traffic or a massive slowdown on the servers. One
+ possible action could be to reject new users but still accept old ones. See
+ also the "avg_queue", "be_conn", and "be_sess_rate" fetches.
+
+rand([<range>]) : integer
+ Returns a random integer value within a range of <range> possible values,
+ starting at zero. If the range is not specified, it defaults to 2^32, which
+ gives numbers between 0 and 4294967295. It can be useful to pass some values
+ needed to take some routing decisions for example, or just for debugging
+ purposes. This random value must not be used for security purposes.
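+
+ Example (illustrative; the percentage and backend names are assumptions) :
+
+ # send roughly 10% of the traffic to a canary farm
+ acl canary rand(100) lt 10
+ use_backend bk_canary if canary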
+
+srv_conn([<backend>/]<server>) : integer
+ Returns an integer value corresponding to the number of currently established
+ connections on the designated server, possibly including the connection being
+ evaluated. If <backend> is omitted, then the server is looked up in the
+ current backend. It can be used to use a specific farm when one server is
+ full, or to inform the server about our view of the number of active
+ connections with it. See also the "fe_conn", "be_conn" and "queue" fetch
+ methods.
+
+srv_is_up([<backend>/]<server>) : boolean
+ Returns true when the designated server is UP, and false when it is either
+ DOWN or in maintenance mode. If <backend> is omitted, then the server is
+ looked up in the current backend. It is mainly used to take action based on
+ an external status reported via a health check (eg: a geographical site's
+ availability). Another possible use which is more of a hack consists in
+ using dummy servers as boolean variables that can be enabled or disabled from
+ the CLI, so that rules depending on those ACLs can be tweaked in real time.
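+
+ Example (a sketch of the dummy-server trick; all names are assumptions) :
+
+ # "backup_mode" is a dummy server toggled from the CLI with
+ # "disable server ctl/backup_mode" / "enable server ctl/backup_mode"
+ acl use_backup srv_is_up(ctl/backup_mode)
+ use_backend bk_backup if use_backup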
+
+srv_sess_rate([<backend>/]<server>) : integer
+ Returns an integer corresponding to the sessions creation rate on the
+ designated server, in number of new sessions per second. If <backend> is
+ omitted, then the server is looked up in the current backend. This is mostly
+ used with ACLs but can make sense with logs too. This is used to switch to an
+ alternate backend when an expensive or fragile one reaches too high a session
+ rate, or to limit abuse of service (eg. prevent latent requests from
+ overloading servers).
+
+ Example :
+ # Redirect to a separate backend
+ acl srv1_full srv_sess_rate(be1/srv1) gt 50
+ acl srv2_full srv_sess_rate(be1/srv2) gt 50
+ use_backend be2 if srv1_full or srv2_full
+
+stopping : boolean
+ Returns TRUE if the process calling the function is currently stopping. This
+ can be useful for logging, or for relaxing certain checks or helping close
+ certain connections upon graceful shutdown.
+
+str(<string>) : string
+ Returns a string.
+
+table_avl([<table>]) : integer
+ Returns the total number of available entries in the current proxy's
+ stick-table or in the designated stick-table. See also table_cnt.
+
+table_cnt([<table>]) : integer
+ Returns the total number of entries currently in use in the current proxy's
+ stick-table or in the designated stick-table. See also src_conn_cnt and
+ table_avl for other entry counting methods.
+
+var(<var-name>) : undefined
+ Returns a variable with the stored type. If the variable is not set, the
+ sample fetch fails. The name of the variable starts with an indication about
+ its scope. The allowed scopes are:
+ "sess" : the variable is shared with the whole session,
+ "txn" : the variable is shared with the whole transaction (request and
+ response),
+ "req" : the variable is shared only during the request processing,
+ "res" : the variable is shared only during the response processing.
+ This prefix is followed by a name. The separator is a '.'. The name may only
+ contain characters 'a-z', 'A-Z', '0-9' and '_'.
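+
+ Example (illustrative variable and header names) :
+
+ # capture the Host header at request time and reuse it in the response
+ http-request set-var(txn.host) req.hdr(host)
+ http-response set-header X-Served-Host %[var(txn.host)]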
+
+
+7.3.3. Fetching samples at Layer 4
+----------------------------------
+
+Layer 4 usually describes just the transport layer, which in HAProxy is
+closest to the connection, where no content is yet made available. The fetch
+methods described here are usable as low as the "tcp-request connection" rule
+sets unless they require some future information. They generally include
+TCP/IP addresses and ports, as well as elements from stick-tables related to
+the incoming connection. To retrieve a value from the sticky counters, the
+counter number can be explicitly set as 0, 1, or 2 using the pre-defined
+"sc0_", "sc1_", or "sc2_" prefix, or it can be specified as the first integer
+argument when using the "sc_" prefix. An optional table may be specified with
+the "sc*" form, in which case the currently tracked key will be looked up into
+this alternate table instead of the table currently being tracked.
+
+be_id : integer
+ Returns an integer containing the current backend's id. It can be used in
+ frontends with responses to check which backend processed the request.
+
+dst : ip
+ This is the destination IPv4 address of the connection on the client side,
+ which is the address the client connected to. It can be useful when running
+ in transparent mode. It is of type IP and works on both IPv4 and IPv6 tables.
+ On IPv6 tables, IPv4 addresses are mapped to their IPv6 equivalent, according
+ to RFC 4291.
+
+dst_conn : integer
+ Returns an integer value corresponding to the number of currently established
+ connections on the same socket including the one being evaluated. It is
+ normally used with ACLs but can as well be used to pass the information to
+ servers in an HTTP header or in logs. It can be used to either return a sorry
+ page before hard-blocking, or to use a specific backend to drain new requests
+ when the socket is considered saturated. This offers the ability to assign
+ different limits to different listening ports or addresses. See also the
+ "fe_conn" and "be_conn" fetches.
+
+dst_port : integer
+ Returns an integer value corresponding to the destination TCP port of the
+ connection on the client side, which is the port the client connected to.
+ This might be used when running in transparent mode, when assigning dynamic
+ ports to some clients for a whole application session, to stick all users to
+ a same server, or to pass the destination port information to a server using
+ an HTTP header.
+
+fe_id : integer
+ Returns an integer containing the current frontend's id. It can be used in
+ backends to check from which frontend it was called, or to stick all users
+ coming via a same frontend to the same server.
+
+sc_bytes_in_rate(<ctr>[,<table>]) : integer
+sc0_bytes_in_rate([<table>]) : integer
+sc1_bytes_in_rate([<table>]) : integer
+sc2_bytes_in_rate([<table>]) : integer
+ Returns the average client-to-server bytes rate from the currently tracked
+ counters, measured in amount of bytes over the period configured in the
+ table. See also src_bytes_in_rate.
+
+sc_bytes_out_rate(<ctr>[,<table>]) : integer
+sc0_bytes_out_rate([<table>]) : integer
+sc1_bytes_out_rate([<table>]) : integer
+sc2_bytes_out_rate([<table>]) : integer
+ Returns the average server-to-client bytes rate from the currently tracked
+ counters, measured in amount of bytes over the period configured in the
+ table. See also src_bytes_out_rate.
+
+sc_clr_gpc0(<ctr>[,<table>]) : integer
+sc0_clr_gpc0([<table>]) : integer
+sc1_clr_gpc0([<table>]) : integer
+sc2_clr_gpc0([<table>]) : integer
+ Clears the first General Purpose Counter associated to the currently tracked
+ counters, and returns its previous value. Before the first invocation, the
+ stored value is zero, so the first invocation will always return zero. This
+ is typically used as a second ACL in an expression in order to mark a
+ connection when a first ACL was verified :
+
+ # block if 5 consecutive requests continue to come faster than 10 sess
+ # per second, and reset the counter as soon as the traffic slows down.
+ acl abuse sc0_http_req_rate gt 10
+ acl kill sc0_inc_gpc0 gt 5
+ acl save sc0_clr_gpc0 ge 0
+ tcp-request connection accept if !abuse save
+ tcp-request connection reject if abuse kill
+
+sc_conn_cnt(<ctr>[,<table>]) : integer
+sc0_conn_cnt([<table>]) : integer
+sc1_conn_cnt([<table>]) : integer
+sc2_conn_cnt([<table>]) : integer
+ Returns the cumulated number of incoming connections from currently tracked
+ counters. See also src_conn_cnt.
+
+sc_conn_cur(<ctr>[,<table>]) : integer
+sc0_conn_cur([<table>]) : integer
+sc1_conn_cur([<table>]) : integer
+sc2_conn_cur([<table>]) : integer
+ Returns the current amount of concurrent connections tracking the same
+ tracked counters. This number is automatically incremented when tracking
+ begins and decremented when tracking stops. See also src_conn_cur.
+
+sc_conn_rate(<ctr>[,<table>]) : integer
+sc0_conn_rate([<table>]) : integer
+sc1_conn_rate([<table>]) : integer
+sc2_conn_rate([<table>]) : integer
+ Returns the average connection rate from the currently tracked counters,
+ measured in amount of connections over the period configured in the table.
+ See also src_conn_rate.
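+
+ Example (a minimal per-source connection rate limit; the table size and
+ threshold are assumptions) :
+
+ frontend fe
+     bind :80
+     stick-table type ip size 1m expire 10s store conn_rate(10s)
+     tcp-request connection track-sc0 src
+     tcp-request connection reject if { sc0_conn_rate gt 100 }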
+
+sc_get_gpc0(<ctr>[,<table>]) : integer
+sc0_get_gpc0([<table>]) : integer
+sc1_get_gpc0([<table>]) : integer
+sc2_get_gpc0([<table>]) : integer
+ Returns the value of the first General Purpose Counter associated to the
+ currently tracked counters. See also src_get_gpc0 and sc/sc0/sc1/sc2_inc_gpc0.
+
+sc_get_gpt0(<ctr>[,<table>]) : integer
+sc0_get_gpt0([<table>]) : integer
+sc1_get_gpt0([<table>]) : integer
+sc2_get_gpt0([<table>]) : integer
+ Returns the value of the first General Purpose Tag associated to the
+ currently tracked counters. See also src_get_gpt0.
+
+sc_gpc0_rate(<ctr>[,<table>]) : integer
+sc0_gpc0_rate([<table>]) : integer
+sc1_gpc0_rate([<table>]) : integer
+sc2_gpc0_rate([<table>]) : integer
+ Returns the average increment rate of the first General Purpose Counter
+ associated to the currently tracked counters. It reports the frequency at
+ which the gpc0 counter was incremented over the configured period. See also
+ src_gpc0_rate, sc/sc0/sc1/sc2_get_gpc0, and sc/sc0/sc1/sc2_inc_gpc0. Note
+ that the "gpc0_rate" counter must be stored in the stick-table for a value to
+ be returned, as "gpc0" only holds the event count.
+
+sc_http_err_cnt(<ctr>[,<table>]) : integer
+sc0_http_err_cnt([<table>]) : integer
+sc1_http_err_cnt([<table>]) : integer
+sc2_http_err_cnt([<table>]) : integer
+ Returns the cumulated number of HTTP errors from the currently tracked
+ counters. This includes both the request errors and 4xx error responses.
+ See also src_http_err_cnt.
+
+sc_http_err_rate(<ctr>[,<table>]) : integer
+sc0_http_err_rate([<table>]) : integer
+sc1_http_err_rate([<table>]) : integer
+sc2_http_err_rate([<table>]) : integer
+ Returns the average rate of HTTP errors from the currently tracked counters,
+ measured in amount of errors over the period configured in the table. This
+ includes both the request errors and 4xx error responses. See also
+ src_http_err_rate.
+
+sc_http_req_cnt(<ctr>[,<table>]) : integer
+sc0_http_req_cnt([<table>]) : integer
+sc1_http_req_cnt([<table>]) : integer
+sc2_http_req_cnt([<table>]) : integer
+ Returns the cumulated number of HTTP requests from the currently tracked
+ counters. This includes every started request, valid or not. See also
+ src_http_req_cnt.
+
+sc_http_req_rate(<ctr>[,<table>]) : integer
+sc0_http_req_rate([<table>]) : integer
+sc1_http_req_rate([<table>]) : integer
+sc2_http_req_rate([<table>]) : integer
+ Returns the average rate of HTTP requests from the currently tracked
+ counters, measured in amount of requests over the period configured in
+ the table. This includes every started request, valid or not. See also
+ src_http_req_rate.
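+
+ Example (illustrative; uses "tcp-request content" so that HTTP information
+ is available when tracking begins, thresholds are assumptions) :
+
+ frontend fe
+     bind :80
+     mode http
+     stick-table type ip size 1m expire 1m store http_req_rate(10s)
+     tcp-request content track-sc0 src
+     http-request deny if { sc0_http_req_rate gt 50 }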
+
+sc_inc_gpc0(<ctr>[,<table>]) : integer
+sc0_inc_gpc0([<table>]) : integer
+sc1_inc_gpc0([<table>]) : integer
+sc2_inc_gpc0([<table>]) : integer
+ Increments the first General Purpose Counter associated to the currently
+ tracked counters, and returns its new value. Before the first invocation,
+ the stored value is zero, so the first invocation will increase it to 1 and
+ return 1. This is typically used as a second ACL in an expression in order
+ to mark a connection when a first ACL was verified :
+
+ acl abuse sc0_http_req_rate gt 10
+ acl kill sc0_inc_gpc0 gt 0
+ tcp-request connection reject if abuse kill
+
+sc_kbytes_in(<ctr>[,<table>]) : integer
+sc0_kbytes_in([<table>]) : integer
+sc1_kbytes_in([<table>]) : integer
+sc2_kbytes_in([<table>]) : integer
+ Returns the total amount of client-to-server data from the currently tracked
+ counters, measured in kilobytes. The test is currently performed on 32-bit
+ integers, which limits values to 4 terabytes. See also src_kbytes_in.
+
+sc_kbytes_out(<ctr>[,<table>]) : integer
+sc0_kbytes_out([<table>]) : integer
+sc1_kbytes_out([<table>]) : integer
+sc2_kbytes_out([<table>]) : integer
+ Returns the total amount of server-to-client data from the currently tracked
+ counters, measured in kilobytes. The test is currently performed on 32-bit
+ integers, which limits values to 4 terabytes. See also src_kbytes_out.
+
+sc_sess_cnt(<ctr>[,<table>]) : integer
+sc0_sess_cnt([<table>]) : integer
+sc1_sess_cnt([<table>]) : integer
+sc2_sess_cnt([<table>]) : integer
+ Returns the cumulated number of incoming connections that were transformed
+ into sessions, which means that they were accepted by a "tcp-request
+ connection" rule, from the currently tracked counters. A backend may count
+ more sessions than connections because each connection could result in many
+ backend sessions if some HTTP keep-alive is performed over the connection
+ with the client. See also src_sess_cnt.
+
+sc_sess_rate(<ctr>[,<table>]) : integer
+sc0_sess_rate([<table>]) : integer
+sc1_sess_rate([<table>]) : integer
+sc2_sess_rate([<table>]) : integer
+ Returns the average session rate from the currently tracked counters,
+ measured in amount of sessions over the period configured in the table. A
+ session is a connection that got past the early "tcp-request connection"
+ rules. A backend may count more sessions than connections because each
+ connection could result in many backend sessions if some HTTP keep-alive is
+ performed over the connection with the client. See also src_sess_rate.
+
+sc_tracked(<ctr>[,<table>]) : boolean
+sc0_tracked([<table>]) : boolean
+sc1_tracked([<table>]) : boolean
+sc2_tracked([<table>]) : boolean
+ Returns true if the designated session counter is currently being tracked by
+ the current session. This can be useful when deciding whether or not we want
+ to set some values in a header passed to the server.
+
+sc_trackers(<ctr>[,<table>]) : integer
+sc0_trackers([<table>]) : integer
+sc1_trackers([<table>]) : integer
+sc2_trackers([<table>]) : integer
+ Returns the current amount of concurrent connections tracking the same
+ tracked counters. This number is automatically incremented when tracking
+ begins and decremented when tracking stops. It differs from sc0_conn_cur in
+ that it does not rely on any stored information but on the table's reference
+ count (the "use" value which is returned by "show table" on the CLI). This
+ may sometimes be more suited for layer7 tracking. It can be used to tell a
+ server how many concurrent connections there are from a given address for
+ example.
+
+so_id : integer
+ Returns an integer containing the current listening socket's id. It is useful
+ in frontends involving many "bind" lines, or to stick all users coming via a
+ same socket to the same server.
+
+src : ip
+ This is the source IPv4 address of the client of the session. It is of type
+ IP and works on both IPv4 and IPv6 tables. On IPv6 tables, IPv4 addresses are
+ mapped to their IPv6 equivalent, according to RFC 4291. Note that it is the
+ TCP-level source address which is used, and not the address of a client
+ behind a proxy. However if the "accept-proxy" bind directive is used, it can
+ be the address of a client behind another PROXY-protocol compatible component
+ for all rule sets except "tcp-request connection" which sees the real address.
+
+ Example:
+ # add an HTTP header in requests with the originating address' country
+ http-request set-header X-Country %[src,map_ip(geoip.lst)]
+
+src_bytes_in_rate([<table>]) : integer
+ Returns the average bytes rate from the incoming connection's source address
+ in the current proxy's stick-table or in the designated stick-table, measured
+ in amount of bytes over the period configured in the table. If the address is
+ not found, zero is returned. See also sc/sc0/sc1/sc2_bytes_in_rate.
+
+src_bytes_out_rate([<table>]) : integer
+ Returns the average bytes rate to the incoming connection's source address in
+ the current proxy's stick-table or in the designated stick-table, measured in
+ amount of bytes over the period configured in the table. If the address is
+ not found, zero is returned. See also sc/sc0/sc1/sc2_bytes_out_rate.
+
+src_clr_gpc0([<table>]) : integer
+ Clears the first General Purpose Counter associated to the incoming
+ connection's source address in the current proxy's stick-table or in the
+ designated stick-table, and returns its previous value. If the address is not
+ found, an entry is created and 0 is returned. This is typically used as a
+ second ACL in an expression in order to mark a connection when a first ACL
+ was verified :
+
+ # block if 5 consecutive requests continue to come faster than 10 sess
+ # per second, and reset the counter as soon as the traffic slows down.
+ acl abuse src_http_req_rate gt 10
+ acl kill src_inc_gpc0 gt 5
+ acl save src_clr_gpc0 ge 0
+ tcp-request connection accept if !abuse save
+ tcp-request connection reject if abuse kill
+
+src_conn_cnt([<table>]) : integer
+ Returns the cumulated number of connections initiated from the current
+ incoming connection's source address in the current proxy's stick-table or in
+ the designated stick-table. If the address is not found, zero is returned.
+ See also sc/sc0/sc1/sc2_conn_cnt.
+
+src_conn_cur([<table>]) : integer
+ Returns the current amount of concurrent connections initiated from the
+ current incoming connection's source address in the current proxy's
+ stick-table or in the designated stick-table. If the address is not found,
+ zero is returned. See also sc/sc0/sc1/sc2_conn_cur.
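+
+ A minimal sketch (the threshold and table size are arbitrary) limiting each
+ source address to 10 concurrent connections :
+
+ frontend fe_web
+ bind :80
+ stick-table type ip size 100k expire 30s store conn_cur
+ tcp-request connection track-sc0 src
+ tcp-request connection reject if { src_conn_cur gt 10 }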
+
+src_conn_rate([<table>]) : integer
+ Returns the average connection rate from the incoming connection's source
+ address in the current proxy's stick-table or in the designated stick-table,
+ measured in amount of connections over the period configured in the table. If
+ the address is not found, zero is returned. See also sc/sc0/sc1/sc2_conn_rate.
+
+src_get_gpc0([<table>]) : integer
+ Returns the value of the first General Purpose Counter associated to the
+ incoming connection's source address in the current proxy's stick-table or in
+ the designated stick-table. If the address is not found, zero is returned.
+ See also sc/sc0/sc1/sc2_get_gpc0 and src_inc_gpc0.
+
+src_get_gpt0([<table>]) : integer
+ Returns the value of the first General Purpose Tag associated to the
+ incoming connection's source address in the current proxy's stick-table or in
+ the designated stick-table. If the address is not found, zero is returned.
+ See also sc/sc0/sc1/sc2_get_gpt0.
+
+src_gpc0_rate([<table>]) : integer
+ Returns the average increment rate of the first General Purpose Counter
+ associated to the incoming connection's source address in the current proxy's
+ stick-table or in the designated stick-table. It reports the frequency at
+ which the gpc0 counter was incremented over the configured period. See also
+ sc/sc0/sc1/sc2_gpc0_rate, src_get_gpc0, and sc/sc0/sc1/sc2_inc_gpc0. Note
+ that the "gpc0_rate" counter must be stored in the stick-table for a value to
+ be returned, as "gpc0" only holds the event count.
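+
+ For instance, a table declaration storing both counters so that
+ src_gpc0_rate can return a value (period and sizing are illustrative) :
+
+ stick-table type ip size 100k expire 5m store gpc0,gpc0_rate(10s)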
+
+src_http_err_cnt([<table>]) : integer
+ Returns the cumulated number of HTTP errors from the incoming connection's
+ source address in the current proxy's stick-table or in the designated
+ stick-table. This includes both request errors and 4xx error responses.
+ See also sc/sc0/sc1/sc2_http_err_cnt. If the address is not found, zero is
+ returned.
+
+src_http_err_rate([<table>]) : integer
+ Returns the average rate of HTTP errors from the incoming connection's source
+ address in the current proxy's stick-table or in the designated stick-table,
+ measured in amount of errors over the period configured in the table. This
+ includes both request errors and 4xx error responses. If the address is
+ not found, zero is returned. See also sc/sc0/sc1/sc2_http_err_rate.
+
+src_http_req_cnt([<table>]) : integer
+ Returns the cumulated number of HTTP requests from the incoming connection's
+ source address in the current proxy's stick-table or in the designated stick-
+ table. This includes every started request, valid or not. If the address is
+ not found, zero is returned. See also sc/sc0/sc1/sc2_http_req_cnt.
+
+src_http_req_rate([<table>]) : integer
+ Returns the average rate of HTTP requests from the incoming connection's
+ source address in the current proxy's stick-table or in the designated stick-
+ table, measured in amount of requests over the period configured in the
+ table. This includes every started request, valid or not. If the address is
+ not found, zero is returned. See also sc/sc0/sc1/sc2_http_req_rate.
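+
+ A sketch (the threshold, period and names are illustrative) denying clients
+ which exceed 100 requests over a 10-second window :
+
+ frontend fe_http
+ bind :80
+ stick-table type ip size 100k expire 30s store http_req_rate(10s)
+ tcp-request content track-sc0 src
+ http-request deny if { src_http_req_rate gt 100 }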
+
+src_inc_gpc0([<table>]) : integer
+ Increments the first General Purpose Counter associated to the incoming
+ connection's source address in the current proxy's stick-table or in the
+ designated stick-table, and returns its new value. If the address is not
+ found, an entry is created and 1 is returned. See also
+ sc/sc0/sc1/sc2_inc_gpc0.
+ This is typically used as a second ACL in an expression in order to mark a
+ connection when a first ACL was verified :
+
+ acl abuse src_http_req_rate gt 10
+ acl kill src_inc_gpc0 gt 0
+ tcp-request connection reject if abuse kill
+
+src_kbytes_in([<table>]) : integer
+ Returns the total amount of data received from the incoming connection's
+ source address in the current proxy's stick-table or in the designated
+ stick-table, measured in kilobytes. If the address is not found, zero is
+ returned. The test is currently performed on 32-bit integers, which limits
+ values to 4 terabytes. See also sc/sc0/sc1/sc2_kbytes_in.
+
+src_kbytes_out([<table>]) : integer
+ Returns the total amount of data sent to the incoming connection's source
+ address in the current proxy's stick-table or in the designated stick-table,
+ measured in kilobytes. If the address is not found, zero is returned. The
+ test is currently performed on 32-bit integers, which limits values to 4
+ terabytes. See also sc/sc0/sc1/sc2_kbytes_out.
+
+src_port : integer
+ Returns an integer value corresponding to the TCP source port of the
+ connection on the client side, which is the port the client connected from.
+ Usage of this function is very limited, as modern protocols rarely care
+ about the source port.
+
+src_sess_cnt([<table>]) : integer
+ Returns the cumulated number of connections initiated from the incoming
+ connection's source IPv4 address in the current proxy's stick-table or in the
+ designated stick-table, that were transformed into sessions, which means that
+ they were accepted by "tcp-request" rules. If the address is not found, zero
+ is returned. See also sc/sc0/sc1/sc2_sess_cnt.
+
+src_sess_rate([<table>]) : integer
+ Returns the average session rate from the incoming connection's source
+ address in the current proxy's stick-table or in the designated stick-table,
+ measured in amount of sessions over the period configured in the table. A
+ session is a connection that went past the early "tcp-request" rules. If the
+ address is not found, zero is returned. See also sc/sc0/sc1/sc2_sess_rate.
+
+src_updt_conn_cnt([<table>]) : integer
+ Creates or updates the entry associated to the incoming connection's source
+ address in the current proxy's stick-table or in the designated stick-table.
+ This table must be configured to store the "conn_cnt" data type, otherwise
+ the match will be ignored. The current count is incremented by one, and the
+ expiration timer refreshed. The updated count is returned, so this match
+ can't return zero. This was used to reject service abusers based on their
+ source address. Note: it is recommended to use the more complete "track-sc*"
+ actions in "tcp-request" rules instead.
+
+ Example :
+ # This frontend limits incoming SSH connections to 3 per 10 seconds for
+ # each source address, and rejects excess connections until a 10 second
+ # silence is observed. At most 20 addresses are tracked.
+ listen ssh
+ bind :22
+ mode tcp
+ maxconn 100
+ stick-table type ip size 20 expire 10s store conn_cnt
+ tcp-request content reject if { src_updt_conn_cnt gt 3 }
+ server local 127.0.0.1:22
+
+srv_id : integer
+ Returns an integer containing the server's id when processing the response.
+ While it's almost only used with ACLs, it may be used for logging or
+ debugging.
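+
+ For example, one way (a sketch; the header name is arbitrary) to expose the
+ id of the server that produced the response :
+
+ http-response set-header X-Server-Id %[srv_id]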
+
+
+7.3.4. Fetching samples at Layer 5
+----------------------------------
+
+Layer 5 describes the session layer, which in haproxy corresponds to the
+moment when all connection handshakes are finished but no content has yet
+been made available. The fetch methods described here are usable as low as
+the "tcp-request content" rule sets, unless they require information that is
+only available later. These generally include the results of SSL/TLS
+negotiations.
+
+ssl_bc : boolean
+ Returns true when the back connection was made via an SSL/TLS transport
+ layer and is locally deciphered. This means the outgoing connection was made
+ to a server with the "ssl" option.
+
+ssl_bc_alg_keysize : integer
+ Returns the symmetric cipher key size supported in bits when the outgoing
+ connection was made over an SSL/TLS transport layer.
+
+ssl_bc_cipher : string
+ Returns the name of the used cipher when the outgoing connection was made
+ over an SSL/TLS transport layer.
+
+ssl_bc_protocol : string
+ Returns the name of the used protocol when the outgoing connection was made
+ over an SSL/TLS transport layer.
+
+ssl_bc_unique_id : binary
+ When the outgoing connection was made over an SSL/TLS transport layer,
+ returns the TLS unique ID as defined in RFC5929 section 3. The unique id
+ can be encoded to base64 using the converter: "ssl_bc_unique_id,base64".
+
+ssl_bc_session_id : binary
+ Returns the SSL ID of the back connection when the outgoing connection was
+ made over an SSL/TLS transport layer. It is useful in logs to know whether
+ the session was reused or not.
+
+ssl_bc_use_keysize : integer
+ Returns the symmetric cipher key size used in bits when the outgoing
+ connection was made over an SSL/TLS transport layer.
+
+ssl_c_ca_err : integer
+ When the incoming connection was made over an SSL/TLS transport layer,
+ returns the ID of the first error detected during verification of the client
+ certificate at depth > 0, or 0 if no error was encountered during this
+ verification process. Please refer to your SSL library's documentation to
+ find the exhaustive list of error codes.
+
+ssl_c_ca_err_depth : integer
+ When the incoming connection was made over an SSL/TLS transport layer,
+ returns the depth in the CA chain of the first error detected during the
+ verification of the client certificate. If no error is encountered, 0 is
+ returned.
+
+ssl_c_der : binary
+ Returns the DER formatted certificate presented by the client when the
+ incoming connection was made over an SSL/TLS transport layer. When used for
+ an ACL, the value(s) to match against can be passed in hexadecimal form.
+
+ssl_c_err : integer
+ When the incoming connection was made over an SSL/TLS transport layer,
+ returns the ID of the first error detected during verification at depth 0, or
+ 0 if no error was encountered during this verification process. Please refer
+ to your SSL library's documentation to find the exhaustive list of error
+ codes.
+
+ssl_c_i_dn([<entry>[,<occ>]]) : string
+ When the incoming connection was made over an SSL/TLS transport layer,
+ returns the full distinguished name of the issuer of the certificate
+ presented by the client when no <entry> is specified, or the value of the
+ first given entry found from the beginning of the DN. If a positive/negative
+ occurrence number is specified as the optional second argument, it returns
+ the value of the nth given entry value from the beginning/end of the DN.
+ For instance, "ssl_c_i_dn(OU,2)" retrieves the second organization unit, and
+ "ssl_c_i_dn(CN)" retrieves the common name.
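+
+ As a sketch (the header name is arbitrary), the issuer's common name may be
+ passed to the server in a request header :
+
+ http-request set-header X-SSL-Issuer-CN %[ssl_c_i_dn(CN)]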
+
+ssl_c_key_alg : string
+ Returns the name of the algorithm used to generate the key of the certificate
+ presented by the client when the incoming connection was made over an SSL/TLS
+ transport layer.
+
+ssl_c_notafter : string
+ Returns the end date presented by the client as a formatted string
+ YYMMDDhhmmss[Z] when the incoming connection was made over an SSL/TLS
+ transport layer.
+
+ssl_c_notbefore : string
+ Returns the start date presented by the client as a formatted string
+ YYMMDDhhmmss[Z] when the incoming connection was made over an SSL/TLS
+ transport layer.
+
+ssl_c_s_dn([<entry>[,<occ>]]) : string
+ When the incoming connection was made over an SSL/TLS transport layer,
+ returns the full distinguished name of the subject of the certificate
+ presented by the client when no <entry> is specified, or the value of the
+ first given entry found from the beginning of the DN. If a positive/negative
+ occurrence number is specified as the optional second argument, it returns
+ the value of the nth given entry value from the beginning/end of the DN.
+ For instance, "ssl_c_s_dn(OU,2)" retrieves the second organization unit, and
+ "ssl_c_s_dn(CN)" retrieves the common name.
+
+ssl_c_serial : binary
+ Returns the serial of the certificate presented by the client when the
+ incoming connection was made over an SSL/TLS transport layer. When used for
+ an ACL, the value(s) to match against can be passed in hexadecimal form.
+
+ssl_c_sha1 : binary
+ Returns the SHA-1 fingerprint of the certificate presented by the client when
+ the incoming connection was made over an SSL/TLS transport layer. This can be
+ used to stick a client to a server, or to pass this information to a server.
+ Note that the output is binary, so if you want to pass that signature to the
+ server, you need to encode it in hex or base64, such as in the example below:
+
+ http-request set-header X-SSL-Client-SHA1 %[ssl_c_sha1,hex]
+
+ssl_c_sig_alg : string
+ Returns the name of the algorithm used to sign the certificate presented by
+ the client when the incoming connection was made over an SSL/TLS transport
+ layer.
+
+ssl_c_used : boolean
+ Returns true if current SSL session uses a client certificate even if current
+ connection uses SSL session resumption. See also "ssl_fc_has_crt".
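+
+ A minimal sketch rejecting requests when no client certificate is in use,
+ for example behind a bind with "verify optional" :
+
+ http-request deny unless { ssl_c_used }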
+
+ssl_c_verify : integer
+ Returns the verify result error ID when the incoming connection was made over
+ an SSL/TLS transport layer, otherwise zero if no error is encountered. Please
+ refer to your SSL library's documentation for an exhaustive list of error
+ codes.
+
+ssl_c_version : integer
+ Returns the version of the certificate presented by the client when the
+ incoming connection was made over an SSL/TLS transport layer.
+
+ssl_f_der : binary
+ Returns the DER formatted certificate presented by the frontend when the
+ incoming connection was made over an SSL/TLS transport layer. When used for
+ an ACL, the value(s) to match against can be passed in hexadecimal form.
+
+ssl_f_i_dn([<entry>[,<occ>]]) : string
+ When the incoming connection was made over an SSL/TLS transport layer,
+ returns the full distinguished name of the issuer of the certificate
+ presented by the frontend when no <entry> is specified, or the value of the
+ first given entry found from the beginning of the DN. If a positive/negative
+ occurrence number is specified as the optional second argument, it returns
+ the value of the nth given entry value from the beginning/end of the DN.
+ For instance, "ssl_f_i_dn(OU,2)" retrieves the second organization unit, and
+ "ssl_f_i_dn(CN)" retrieves the common name.
+
+ssl_f_key_alg : string
+ Returns the name of the algorithm used to generate the key of the certificate
+ presented by the frontend when the incoming connection was made over an
+ SSL/TLS transport layer.
+
+ssl_f_notafter : string
+ Returns the end date presented by the frontend as a formatted string
+ YYMMDDhhmmss[Z] when the incoming connection was made over an SSL/TLS
+ transport layer.
+
+ssl_f_notbefore : string
+ Returns the start date presented by the frontend as a formatted string
+ YYMMDDhhmmss[Z] when the incoming connection was made over an SSL/TLS
+ transport layer.
+
+ssl_f_s_dn([<entry>[,<occ>]]) : string
+ When the incoming connection was made over an SSL/TLS transport layer,
+ returns the full distinguished name of the subject of the certificate
+ presented by the frontend when no <entry> is specified, or the value of the
+ first given entry found from the beginning of the DN. If a positive/negative
+ occurrence number is specified as the optional second argument, it returns
+ the value of the nth given entry value from the beginning/end of the DN.
+ For instance, "ssl_f_s_dn(OU,2)" retrieves the second organization unit, and
+ "ssl_f_s_dn(CN)" retrieves the common name.
+
+ssl_f_serial : binary
+ Returns the serial of the certificate presented by the frontend when the
+ incoming connection was made over an SSL/TLS transport layer. When used for
+ an ACL, the value(s) to match against can be passed in hexadecimal form.
+
+ssl_f_sha1 : binary
+ Returns the SHA-1 fingerprint of the certificate presented by the frontend
+ when the incoming connection was made over an SSL/TLS transport layer. This
+ can be used to know which certificate was chosen using SNI.
+
+ssl_f_sig_alg : string
+ Returns the name of the algorithm used to sign the certificate presented by
+ the frontend when the incoming connection was made over an SSL/TLS transport
+ layer.
+
+ssl_f_version : integer
+ Returns the version of the certificate presented by the frontend when the
+ incoming connection was made over an SSL/TLS transport layer.
+
+ssl_fc : boolean
+ Returns true when the front connection was made via an SSL/TLS transport
+ layer and is locally deciphered. This means it has matched a socket declared
+ with a "bind" line having the "ssl" option.
+
+ Example :
+ # This passes "X-Proto: https" to servers when client connects over SSL
+ listen http-https
+ bind :80
+ bind :443 ssl crt /etc/haproxy.pem
+ http-request add-header X-Proto https if { ssl_fc }
+
+ssl_fc_alg_keysize : integer
+ Returns the symmetric cipher key size supported in bits when the incoming
+ connection was made over an SSL/TLS transport layer.
+
+ssl_fc_alpn : string
+ This extracts the Application Layer Protocol Negotiation field from an
+ incoming connection made via a TLS transport layer and locally deciphered by
+ haproxy. The result is a string containing the protocol name advertised by
+ the client. The SSL library must have been built with support for TLS
+ extensions enabled (check haproxy -vv). Note that the TLS ALPN extension is
+ not advertised unless the "alpn" keyword on the "bind" line specifies a
+ protocol list. Also, nothing forces the client to pick a protocol from this
+ list, any other one may be requested. The TLS ALPN extension is meant to
+ replace the TLS NPN extension. See also "ssl_fc_npn".
+
+ssl_fc_cipher : string
+ Returns the name of the used cipher when the incoming connection was made
+ over an SSL/TLS transport layer.
+
+ssl_fc_has_crt : boolean
+ Returns true if a client certificate is present in an incoming connection over
+ SSL/TLS transport layer. Useful if 'verify' statement is set to 'optional'.
+ Note: on SSL session resumption with Session ID or TLS ticket, client
+ certificate is not present in the current connection but may be retrieved
+ from the cache or the ticket. So prefer "ssl_c_used" if you want to check if
+ current SSL session uses a client certificate.
+
+ssl_fc_has_sni : boolean
+ This checks for the presence of a Server Name Indication TLS extension (SNI)
+ in an incoming connection made over an SSL/TLS transport layer. Returns
+ true when the incoming connection presents a TLS SNI field. This requires
+ that the SSL library is built with support for TLS extensions enabled (check
+ haproxy -vv).
+
+ssl_fc_is_resumed : boolean
+ Returns true if the SSL/TLS session has been resumed through the use of
+ SSL session cache or TLS tickets.
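+
+ For example (a sketch; the header name is arbitrary), the resumption status
+ can be reported to the backend :
+
+ http-request set-header X-TLS-Resumed %[ssl_fc_is_resumed]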
+
+ssl_fc_npn : string
+ This extracts the Next Protocol Negotiation field from an incoming connection
+ made via a TLS transport layer and locally deciphered by haproxy. The result
+ is a string containing the protocol name advertised by the client. The SSL
+ library must have been built with support for TLS extensions enabled (check
+ haproxy -vv). Note that the TLS NPN extension is not advertised unless the
+ "npn" keyword on the "bind" line specifies a protocol list. Also, nothing
+ forces the client to pick a protocol from this list, any other one may be
+ requested. Please note that the TLS NPN extension was replaced with ALPN.
+
+ssl_fc_protocol : string
+ Returns the name of the used protocol when the incoming connection was made
+ over an SSL/TLS transport layer.
+
+ssl_fc_unique_id : binary
+ When the incoming connection was made over an SSL/TLS transport layer,
+ returns the TLS unique ID as defined in RFC5929 section 3. The unique id
+ can be encoded to base64 using the converter: "ssl_fc_unique_id,base64".
+
+ssl_fc_session_id : binary
+ Returns the SSL ID of the front connection when the incoming connection was
+ made over an SSL/TLS transport layer. It is useful to stick a given client to
+ a server. It is important to note that some browsers refresh their session ID
+ every few minutes.
+
+ssl_fc_sni : string
+ This extracts the Server Name Indication TLS extension (SNI) field from an
+ incoming connection made via an SSL/TLS transport layer and locally
+ deciphered by haproxy. The result (when present) typically is a string
+ matching the HTTPS host name (253 chars or less). The SSL library must have
+ been built with support for TLS extensions enabled (check haproxy -vv).
+
+ This fetch is different from "req_ssl_sni" above in that it applies to the
+ connection being deciphered by haproxy and not to SSL contents being blindly
+ forwarded. See also "ssl_fc_sni_end" and "ssl_fc_sni_reg" below. This
+ requires that the SSL library is built with support for TLS extensions
+ enabled (check haproxy -vv).
+
+ ACL derivatives :
+ ssl_fc_sni_end : suffix match
+ ssl_fc_sni_reg : regex match
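+
+ A content-switching sketch (the backend names are illustrative) for a
+ TLS-terminating frontend :
+
+ use_backend bk_app if { ssl_fc_sni -i app.example.com }
+ use_backend bk_static if { ssl_fc_sni_end -i .static.example.com }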
+
+ssl_fc_use_keysize : integer
+ Returns the symmetric cipher key size used in bits when the incoming
+ connection was made over an SSL/TLS transport layer.
+
+
+7.3.5. Fetching samples from buffer contents (Layer 6)
+------------------------------------------------------
+
+Fetching samples from buffer contents is a bit different from the previous
+sample fetches above because the sampled data are ephemeral. These data can
+only be used when they're available and will be lost when they're forwarded.
+For this reason, samples fetched from buffer contents during a request cannot
+be used in a response for example. Even while the data are being fetched, they
+can change. Sometimes it is necessary to set some delays or combine multiple
+sample fetch methods to ensure that the expected data are complete and usable,
+for example through TCP request content inspection. Please see the "tcp-request
+content" keyword for more detailed information on the subject.
+
+payload(<offset>,<length>) : binary (deprecated)
+ This is an alias for "req.payload" when used in the context of a request (eg:
+ "stick on", "stick match"), and for "res.payload" when used in the context of
+ a response such as in "stick store response".
+
+payload_lv(<offset1>,<length>[,<offset2>]) : binary (deprecated)
+ This is an alias for "req.payload_lv" when used in the context of a request
+ (eg: "stick on", "stick match"), and for "res.payload_lv" when used in the
+ context of a response such as in "stick store response".
+
+req.len : integer
+req_len : integer (deprecated)
+ Returns an integer value corresponding to the number of bytes present in the
+ request buffer. This is mostly used in ACL. It is important to understand
+ that this test does not return false as long as the buffer is changing. This
+ means that a check with equality to zero will almost always immediately match
+ at the beginning of the session, while a test for more data will wait for
+ that data to come in and return false only when haproxy is certain that no
+ more data will come in. This test was designed to be used with TCP request
+ content inspection.
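+
+ A typical sketch waiting for at least one byte of payload before letting
+ the connection through :
+
+ tcp-request inspect-delay 5s
+ tcp-request content accept if { req.len gt 0 }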
+
+req.payload(<offset>,<length>) : binary
+ This extracts a binary block of <length> bytes and starting at byte <offset>
+ in the request buffer. As a special case, if the <length> argument is zero,
+ the whole buffer from <offset> to the end is extracted. This can be used
+ with ACLs in order to check for the presence of some content in a buffer at
+ any location.
+
+ ACL alternatives :
+ payload(<offset>,<length>) : hex binary match
+
+req.payload_lv(<offset1>,<length>[,<offset2>]) : binary
+ This extracts a binary block whose size is specified at <offset1> for <length>
+ bytes, and which starts at <offset2> if specified or just after the length in
+ the request buffer. The <offset2> parameter also supports relative offsets if
+ prepended with a '+' or '-' sign.
+
+ ACL alternatives :
+ payload_lv(<offset1>,<length>[,<offset2>]) : hex binary match
+
+ Example : please consult the example from the "stick store-response" keyword.
+
+req.proto_http : boolean
+req_proto_http : boolean (deprecated)
+ Returns true when the data in the request buffer looks like HTTP and
+ correctly parses as such. The same parser as the common HTTP request parser
+ is used, so there should be no surprises. The test does not match until the
+ request is complete, failed or timed out. This test may be used to report the
+ protocol in TCP logs, but the biggest use is to block TCP request analysis
+ until a complete HTTP request is present in the buffer, for example to track
+ a header.
+
+ Example:
+ # track request counts per "base" (concatenation of Host+URL)
+ tcp-request inspect-delay 10s
+ tcp-request content reject if !HTTP
+ tcp-request content track-sc0 base table req-rate
+
+req.rdp_cookie([<name>]) : string
+rdp_cookie([<name>]) : string (deprecated)
+ When the request buffer looks like the RDP protocol, extracts the RDP cookie
+ <name>, or any cookie if unspecified. The parser only checks for the first
+ cookie, as illustrated in the RDP protocol specification. The cookie name is
+ case insensitive. Generally the "MSTS" cookie name will be used, as it can
+ contain the user name of the client connecting to the server if properly
+ configured on the client. The "MSTSHASH" cookie is often used as well for
+ session stickiness to servers.
+
+ This differs from "balance rdp-cookie" in that any balancing algorithm may be
+ used and thus the distribution of clients to backend servers is not linked to
+ a hash of the RDP cookie. It is envisaged that using a balancing algorithm
+ such as "balance roundrobin" or "balance leastconn" will lead to a more even
+ distribution of clients to backend servers than the hash used by "balance
+ rdp-cookie".
+
+ ACL derivatives :
+ req_rdp_cookie([<name>]) : exact string match
+
+ Example :
+ listen tse-farm
+ bind 0.0.0.0:3389
+ # wait up to 5s for an RDP cookie in the request
+ tcp-request inspect-delay 5s
+ tcp-request content accept if RDP_COOKIE
+ # apply RDP cookie persistence
+ persist rdp-cookie
+ # Persist based on the mstshash cookie
+ # This only makes sense if
+ # balance rdp-cookie is not used
+ stick-table type string size 204800
+ stick on req.rdp_cookie(mstshash)
+ server srv1 1.1.1.1:3389
+ server srv2 1.1.1.2:3389
+
+ See also : "balance rdp-cookie", "persist rdp-cookie", "tcp-request" and the
+ "req_rdp_cookie" ACL.
+
+req.rdp_cookie_cnt([name]) : integer
+rdp_cookie_cnt([name]) : integer (deprecated)
+ Tries to parse the request buffer as RDP protocol, then returns an integer
+ corresponding to the number of RDP cookies found. If an optional cookie name
+ is passed, only cookies matching this name are considered. This is mostly
+ used in ACL.
+
+ ACL derivatives :
+ req_rdp_cookie_cnt([<name>]) : integer match
+
+req.ssl_ec_ext : boolean
+ Returns a boolean identifying whether the client sent the Supported Elliptic
+ Curves Extension as defined in RFC4492, section 5.1, within the SSL
+ ClientHello message. This can be used to present ECC-compatible clients with
+ an EC certificate and to use RSA for all others, on the same IP address. Note
+ that
+ this only applies to raw contents found in the request buffer and not to
+ contents deciphered via an SSL data layer, so this will not work with "bind"
+ lines having the "ssl" option.
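+
+ A passthrough sketch (the backend names are illustrative) sending
+ ECC-capable clients to a backend presenting an EC certificate :
+
+ tcp-request inspect-delay 5s
+ tcp-request content accept if { req.ssl_hello_type 1 }
+ use_backend bk_ecc if { req.ssl_ec_ext }
+ default_backend bk_rsa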
+
+req.ssl_hello_type : integer
+req_ssl_hello_type : integer (deprecated)
+ Returns an integer value containing the type of the SSL hello message found
+ in the request buffer if the buffer contains data that parse as a complete
+ SSL (v3 or superior) client hello message. Note that this only applies to raw
+ contents found in the request buffer and not to contents deciphered via an
+ SSL data layer, so this will not work with "bind" lines having the "ssl"
+ option. This is mostly used in ACL to detect presence of an SSL hello message
+ that is supposed to contain an SSL session ID usable for stickiness.
+
+req.ssl_sni : string
+req_ssl_sni : string (deprecated)
+ Returns a string containing the value of the Server Name TLS extension sent
+ by a client in a TLS stream passing through the request buffer if the buffer
+ contains data that parse as a complete SSL (v3 or superior) client hello
+ message. Note that this only applies to raw contents found in the request
+ buffer and not to contents deciphered via an SSL data layer, so this will not
+ work with "bind" lines having the "ssl" option. SNI normally contains the
+ name of the host the client tries to connect to (for recent browsers). SNI is
+ useful for allowing or denying access to certain hosts when SSL/TLS is used
+ by the client. This test was designed to be used with TCP request content
+ inspection. If content switching is needed, it is recommended to first wait
+ for a complete client hello (type 1), like in the example below. See also
+ "ssl_fc_sni".
+
+ ACL derivatives :
+ req_ssl_sni : exact string match
+
+ Examples :
+ # Wait for a client hello for at most 5 seconds
+ tcp-request inspect-delay 5s
+ tcp-request content accept if { req_ssl_hello_type 1 }
+ use_backend bk_allow if { req_ssl_sni -f allowed_sites }
+ default_backend bk_sorry_page
+
+req.ssl_st_ext : integer
+ Returns 0 if the client didn't send a SessionTicket TLS Extension (RFC5077),
+ 1 if the client sent a SessionTicket TLS Extension, or 2 if the client also
+ sent a non-zero length TLS SessionTicket.
+ Note that this only applies to raw contents found in the request buffer and
+ not to contents deciphered via an SSL data layer, so this will not work with
+ "bind" lines having the "ssl" option. This can for example be used to detect
+ whether the client sent a SessionTicket and to stick accordingly : if no
+ SessionTicket is present, stick on the SessionID, otherwise do not stick, as
+ there is no server-side state when SessionTickets are in use.
+
+req.ssl_ver : integer
+req_ssl_ver : integer (deprecated)
+ Returns an integer value containing the version of the SSL/TLS protocol of a
+ stream present in the request buffer. Both SSLv2 hello messages and SSLv3
+ messages are supported. TLSv1 is announced as SSL version 3.1. The value is
+ composed of the major version multiplied by 65536, added to the minor
+ version. Note that this only applies to raw contents found in the request
+ buffer and not to contents deciphered via an SSL data layer, so this will not
+ work with "bind" lines having the "ssl" option. The ACL version of the test
+ matches against a decimal notation in the form MAJOR.MINOR (eg: 3.1). This
+ fetch is mostly used in ACL.
+
+ ACL derivatives :
+ req_ssl_ver : decimal match
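+
+ A sketch rejecting SSLv3 client hellos on a passthrough listener (SSLv3 is
+ announced as version 3.0) :
+
+ acl sslv3 req.ssl_ver 3.0
+ tcp-request inspect-delay 5s
+ tcp-request content reject if sslv3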
+
+res.len : integer
+ Returns an integer value corresponding to the number of bytes present in the
+ response buffer. This is mostly used in ACL. It is important to understand
+ that this test does not return false as long as the buffer is changing. This
+ means that a check with equality to zero will almost always immediately match
+ at the beginning of the session, while a test for more data will wait for
+ that data to come in and return false only when haproxy is certain that no
+ more data will come in. This test was designed to be used with TCP response
+ content inspection.
+
+res.payload(<offset>,<length>) : binary
+ This extracts a binary block of <length> bytes and starting at byte <offset>
+ in the response buffer. As a special case, if the <length> argument is zero,
+ the whole buffer from <offset> to the end is extracted. This can be used
+ with ACLs in order to check for the presence of some content in a buffer at
+ any location.
+
+res.payload_lv(<offset1>,<length>[,<offset2>]) : binary
+ This extracts a binary block whose size is specified at <offset1> for <length>
+ bytes, and which starts at <offset2> if specified or just after the length in
+ the response buffer. The <offset2> parameter also supports relative offsets
+ if prepended with a '+' or '-' sign.
+
+ Example : please consult the example from the "stick store-response" keyword.
+
+res.ssl_hello_type : integer
+rep_ssl_hello_type : integer (deprecated)
+ Returns an integer value containing the type of the SSL hello message found
+ in the response buffer if the buffer contains data that parses as a complete
+ SSL (v3 or superior) hello message. Note that this only applies to raw
+ contents found in the response buffer and not to contents deciphered via an
+ SSL data layer, so this will not work with "server" lines having the "ssl"
+ option. This is mostly used in ACL to detect presence of an SSL hello message
+ that is supposed to contain an SSL session ID usable for stickiness.
+
+wait_end : boolean
+ This fetch either returns true when the inspection period is over, or does
+ not fetch. It is only used in ACLs, in conjunction with content analysis to
+ avoid returning a wrong verdict early. It may also be used to delay some
+ actions, such as a delayed reject for some special addresses. Since it either
+ stops the rules evaluation or immediately returns true, it is recommended to
+ use this acl as the last one in a rule. Please note that the default ACL
+ "WAIT_END" is always usable without prior declaration. This test was designed
+ to be used with TCP request content inspection.
+
+ Examples :
+ # delay every incoming request by 2 seconds
+ tcp-request inspect-delay 2s
+ tcp-request content accept if WAIT_END
+
+ # don't immediately tell bad guys they are rejected
+ tcp-request inspect-delay 10s
+ acl goodguys src 10.0.0.0/24
+ acl badguys src 10.0.1.0/24
+ tcp-request content accept if goodguys
+ tcp-request content reject if badguys WAIT_END
+ tcp-request content reject
+
+
+7.3.6. Fetching HTTP samples (Layer 7)
+--------------------------------------
+
+It is possible to fetch samples from HTTP contents, requests and responses.
+This application layer is also called layer 7. It is only possible to fetch the
+data in this section when a full HTTP request or response has been parsed from
+its respective request or response buffer. This is always the case with all
+HTTP specific rules and for sections running with "mode http". When using TCP
+content inspection, it may be necessary to support an inspection delay in order
+to let the request or response come in first. These fetches may require a bit
+more CPU resources than the layer 4 ones, but not much since the request and
+response are indexed.
+
+base : string
+ This returns the concatenation of the first Host header and the path part of
+ the request, which starts at the first slash and ends before the question
+ mark. It can be useful in virtual hosted environments to detect URL abuses as
+ well as to improve shared caches efficiency. Using this with a limited size
+ stick table also allows one to collect statistics about most commonly
+ requested objects by host/path. With ACLs it can allow simple content
+ switching rules involving the host and the path at the same time, such as
+ "www.example.com/favicon.ico". See also "path" and "uri".
+
+ ACL derivatives :
+ base : exact string match
+ base_beg : prefix match
+ base_dir : subdir match
+ base_dom : domain match
+ base_end : suffix match
+ base_len : length match
+ base_reg : regex match
+ base_sub : substring match
+
+base32 : integer
+ This returns a 32-bit hash of the value returned by the "base" fetch method
+ above. This is useful to track per-URL activity on high traffic sites without
+ having to store all URLs. Instead a shorter hash is stored, saving a lot of
+ memory. The output type is an unsigned integer. The hash function used is
+ SDBM with full avalanche on the output. Technically, base32 is exactly equal
+ to "base,sdbm(1)".
+
+base32+src : binary
+ This returns the concatenation of the base32 fetch above and the src fetch
+ below. The resulting type is of type binary, with a size of 8 or 20 bytes
+ depending on the source address family. This can be used to track per-IP,
+ per-URL counters.
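+
+ As an illustrative sketch (table and proxy names are made up, limits are
+ arbitrary), base32+src can feed a binary stick-table to rate-limit clients
+ on a per-URL basis :
+
+ Example :
+ backend st_base32src
+ stick-table type binary len 20 size 1m expire 1h store http_req_rate(10s)
+
+ frontend fe_web
+ bind :80
+ http-request track-sc0 base32+src table st_base32src
+ http-request deny if { sc0_http_req_rate gt 100 }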
+
+capture.req.hdr(<idx>) : string
+ This extracts the content of the header captured by the "capture request
+ header" statement; <idx> is the position of the capture keyword in the
+ configuration. The first entry has index 0. See also: "capture request header".
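+
+ A hypothetical configuration (the captured headers and log format are
+ illustrative) :
+
+ Example :
+ frontend fe_web
+ capture request header Host len 32
+ capture request header User-Agent len 64
+ # capture.req.hdr(1) now returns the captured User-Agent value
+ log-format "%ci %[capture.req.hdr(0)] %[capture.req.hdr(1)]"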
+
+capture.req.method : string
+ This extracts the METHOD of an HTTP request. Unlike "method", it can be used
+ in both the request and the response because its value is allocated.
+
+capture.req.uri : string
+ This extracts the request's URI, which starts at the first slash and ends
+ before the first space in the request (without the host part). Unlike "path"
+ and "url", it can be used in both request and response because it's
+ allocated.
+
+capture.req.ver : string
+ This extracts the request's HTTP version and returns either "HTTP/1.0" or
+ "HTTP/1.1". Unlike "req.ver", it can be used in requests, responses, and
+ logs because it relies on a persistent flag.
+
+capture.res.hdr(<idx>) : string
+ This extracts the content of the header captured by the "capture response
+ header" statement; <idx> is the position of the capture keyword in the
+ configuration. The first entry has index 0.
+ See also: "capture response header"
+
+capture.res.ver : string
+ This extracts the response's HTTP version and returns either "HTTP/1.0" or
+ "HTTP/1.1". Unlike "res.ver", it can be used in logs because it relies on a
+ persistent flag.
+
+req.body : binary
+ This returns the HTTP request's available body as a block of data. It
+ requires that the request body has been buffered and made available using
+ "option http-buffer-request". In case of chunked-encoded body, currently only
+ the first chunk is analyzed.
+
+req.body_param([<name>]) : string
+ This fetch assumes that the body of the POST request is url-encoded. The user
+ can check if the "content-type" contains the value
+ "application/x-www-form-urlencoded". This extracts the first occurrence of the
+ parameter <name> in the body, which ends before '&'. The parameter name is
+ case-sensitive. If no name is given, any parameter will match, and the first
+ one will be returned. The result is a string corresponding to the value of the
+ parameter <name> as presented in the request body (no URL decoding is
+ performed). Note that the ACL version of this fetch iterates over multiple
+ parameters and will iteratively report all parameter values if no name is
+ given.
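+
+ A hypothetical sketch (ACL and backend names are made up); note that
+ "option http-buffer-request" is required so the body is available :
+
+ Example :
+ frontend fe_web
+ option http-buffer-request
+ acl is_guest req.body_param(user) guest
+ use_backend bk_guest if METH_POST is_guest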
+
+req.body_len : integer
+ This returns the length of the HTTP request's available body in bytes. It may
+ be lower than the advertised length if the body is larger than the buffer. It
+ requires that the request body has been buffered and made available using
+ "option http-buffer-request".
+
+req.body_size : integer
+ This returns the advertised length of the HTTP request's body in bytes. It
+ will represent the advertised Content-Length header, or the size of the first
+ chunk in case of chunked encoding. In order to parse the chunks, it requires
+ that the request body has been buffered and made available using
+ "option http-buffer-request".
+
+req.cook([<name>]) : string
+cook([<name>]) : string (deprecated)
+ This extracts the last occurrence of the cookie name <name> on a "Cookie"
+ header line from the request, and returns its value as string. If no name is
+ specified, the first cookie value is returned. When used with ACLs, all
+ matching cookies are evaluated. Spaces around the name and the value are
+ ignored as requested by the Cookie header specification (RFC6265). The cookie
+ name is case-sensitive. Empty cookies are valid, so an empty cookie may very
+ well return an empty value if it is present. Use the "found" match to detect
+ presence. Use the res.cook() variant for response cookies sent by the server.
+
+ ACL derivatives :
+ cook([<name>]) : exact string match
+ cook_beg([<name>]) : prefix match
+ cook_dir([<name>]) : subdir match
+ cook_dom([<name>]) : domain match
+ cook_end([<name>]) : suffix match
+ cook_len([<name>]) : length match
+ cook_reg([<name>]) : regex match
+ cook_sub([<name>]) : substring match
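+
+ For example (cookie, value and backend names are illustrative), a cookie
+ may drive backend selection :
+
+ Example :
+ acl beta_user req.cook(flavour) beta
+ use_backend bk_beta if beta_user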
+
+req.cook_cnt([<name>]) : integer
+cook_cnt([<name>]) : integer (deprecated)
+ Returns an integer value representing the number of occurrences of the cookie
+ <name> in the request, or all cookies if <name> is not specified.
+
+req.cook_val([<name>]) : integer
+cook_val([<name>]) : integer (deprecated)
+ This extracts the last occurrence of the cookie name <name> on a "Cookie"
+ header line from the request, and converts its value to an integer which is
+ returned. If no name is specified, the first cookie value is returned. When
+ used in ACLs, all matching names are iterated over until a value matches.
+
+cookie([<name>]) : string (deprecated)
+ This extracts the last occurrence of the cookie name <name> on a "Cookie"
+ header line from the request, or a "Set-Cookie" header from the response, and
+ returns its value as a string. A typical use is to make multiple clients
+ sharing the same profile use the same server. This can be similar to what
+ "appsession" did with the "request-learn" statement, but with support for
+ multi-peer synchronization and state keeping across restarts. If no name is
+ specified, the first cookie value is returned. This fetch should not be used
+ anymore and should be replaced by req.cook() or res.cook() instead as it
+ ambiguously uses the direction based on the context where it is used.
+
+hdr([<name>[,<occ>]]) : string
+ This is equivalent to req.hdr() when used on requests, and to res.hdr() when
+ used on responses. Please refer to these respective fetches for more details.
+ In case of doubt about the fetch direction, please use the explicit ones.
+ Note that contrary to the hdr() sample fetch method, the hdr_* ACL keywords
+ unambiguously apply to the request headers.
+
+req.fhdr(<name>[,<occ>]) : string
+ This extracts the last occurrence of header <name> in an HTTP request. When
+ used from an ACL, all occurrences are iterated over until a match is found.
+ Optionally, a specific occurrence might be specified as a position number.
+ Positive values indicate a position from the first occurrence, with 1 being
+ the first one. Negative values indicate positions relative to the last one,
+ with -1 being the last one. It differs from req.hdr() in that any commas
+ present in the value are returned and are not used as delimiters. This is
+ sometimes useful with headers such as User-Agent.
+
+req.fhdr_cnt([<name>]) : integer
+ Returns an integer value representing the number of occurrences of request
+ header field name <name>, or the total number of header fields if <name> is
+ not specified. Contrary to its req.hdr_cnt() cousin, this function returns
+ the number of full line headers and does not stop on commas.
+
+req.hdr([<name>[,<occ>]]) : string
+ This extracts the last occurrence of header <name> in an HTTP request. When
+ used from an ACL, all occurrences are iterated over until a match is found.
+ Optionally, a specific occurrence might be specified as a position number.
+ Positive values indicate a position from the first occurrence, with 1 being
+ the first one. Negative values indicate positions relative to the last one,
+ with -1 being the last one. A typical use is with the X-Forwarded-For header
+ once converted to IP, associated with an IP stick-table. The function
+ considers any comma as a delimiter for distinct values. If full-line headers
+ are desired instead, use req.fhdr(). Please carefully check RFC2616 to know
+ how certain headers are supposed to be parsed. Also, some of them are case
+ insensitive (eg: Connection).
+
+ ACL derivatives :
+ hdr([<name>[,<occ>]]) : exact string match
+ hdr_beg([<name>[,<occ>]]) : prefix match
+ hdr_dir([<name>[,<occ>]]) : subdir match
+ hdr_dom([<name>[,<occ>]]) : domain match
+ hdr_end([<name>[,<occ>]]) : suffix match
+ hdr_len([<name>[,<occ>]]) : length match
+ hdr_reg([<name>[,<occ>]]) : regex match
+ hdr_sub([<name>[,<occ>]]) : substring match
+
+req.hdr_cnt([<name>]) : integer
+hdr_cnt([<header>]) : integer (deprecated)
+ Returns an integer value representing the number of occurrences of request
+ header field name <name>, or the total number of header field values if
+ <name> is not specified. It is important to remember that one header line may
+ count as several headers if it has several values. The function considers any
+ comma as a delimiter for distinct values. If full-line headers are desired,
+ req.fhdr_cnt() should be used instead. With ACLs, it can be used to
+ detect presence, absence or abuse of a specific header, as well as to block
+ request smuggling attacks by rejecting requests which contain more than one
+ of certain headers. See "req.hdr" for more information on header matching.
+
+req.hdr_ip([<name>[,<occ>]]) : ip
+hdr_ip([<name>[,<occ>]]) : ip (deprecated)
+ This extracts the last occurrence of header <name> in an HTTP request,
+ converts it to an IPv4 or IPv6 address and returns this address. When used
+ with ACLs, all occurrences are checked, and if <name> is omitted, every value
+ of every header is checked. Optionally, a specific occurrence might be
+ specified as a position number. Positive values indicate a position from the
+ first occurrence, with 1 being the first one. Negative values indicate
+ positions relative to the last one, with -1 being the last one. A typical use
+ is with the X-Forwarded-For and X-Client-IP headers.
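+
+ An illustrative sketch of the X-Forwarded-For use case mentioned above
+ (table parameters and the rate limit are arbitrary) :
+
+ Example :
+ backend bk_app
+ stick-table type ip size 1m expire 30m store http_req_rate(10s)
+ http-request track-sc0 req.hdr_ip(X-Forwarded-For,-1)
+ http-request deny if { sc0_http_req_rate gt 100 }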
+
+req.hdr_val([<name>[,<occ>]]) : integer
+hdr_val([<name>[,<occ>]]) : integer (deprecated)
+ This extracts the last occurrence of header <name> in an HTTP request, and
+ converts it to an integer value. When used with ACLs, all occurrences are
+ checked, and if <name> is omitted, every value of every header is checked.
+ Optionally, a specific occurrence might be specified as a position number.
+ Positive values indicate a position from the first occurrence, with 1 being
+ the first one. Negative values indicate positions relative to the last one,
+ with -1 being the last one. A typical use is with the X-Forwarded-For header.
+
+http_auth(<userlist>) : boolean
+ Returns a boolean indicating whether the authentication data received from
+ the client match a username & password stored in the specified userlist. This
+ fetch function is not really useful outside of ACLs. Currently only http
+ basic auth is supported.
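+
+ A minimal sketch (the userlist contents are made up) :
+
+ Example :
+ userlist admins
+ user alice insecure-password letmein
+
+ frontend fe_web
+ acl auth_ok http_auth(admins)
+ http-request auth realm Restricted if !auth_ok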
+
+http_auth_group(<userlist>) : string
+ Returns a string corresponding to the user name found in the authentication
+ data received from the client if both the user name and password are valid
+ according to the specified userlist. The main purpose is to use it in ACLs
+ where it is then checked whether the user belongs to any group within a list.
+ This fetch function is not really useful outside of ACLs. Currently only http
+ basic auth is supported.
+
+ ACL derivatives :
+ http_auth_group(<userlist>) : group ...
+ Returns true when the user extracted from the request and whose password is
+ valid according to the specified userlist belongs to at least one of the
+ groups.
+
+http_first_req : boolean
+ Returns true when the request being processed is the first one of the
+ connection. This can be used to add or remove headers that may be missing
+ from some requests when a request is not the first one, or to help grouping
+ requests in the logs.
+
+method : integer + string
+ Returns an integer value corresponding to the method in the HTTP request. For
+ example, "GET" equals 1 (check sources to establish the matching). Value 9
+ means "other method" and may be converted to a string extracted from the
+ stream. This should not be used directly as a sample, this is only meant to
+ be used from ACLs, which transparently convert methods from patterns to these
+ integer + string values. Some predefined ACL already check for most common
+ methods.
+
+ ACL derivatives :
+ method : case insensitive method match
+
+ Example :
+ # only accept GET and HEAD requests
+ acl valid_method method GET HEAD
+ http-request deny if ! valid_method
+
+path : string
+ This extracts the request's URL path, which starts at the first slash and
+ ends before the question mark (without the host part). A typical use is with
+ prefetch-capable caches, and with portals which need to aggregate multiple
+ information from databases and keep them in caches. Note that with outgoing
+ caches, it would be wiser to use "url" instead. With ACLs, it's typically
+ used to match exact file names (eg: "/login.php"), or directory parts using
+ the derivative forms. See also the "url" and "base" fetch methods.
+
+ ACL derivatives :
+ path : exact string match
+ path_beg : prefix match
+ path_dir : subdir match
+ path_dom : domain match
+ path_end : suffix match
+ path_len : length match
+ path_reg : regex match
+ path_sub : substring match
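+
+ For example (paths and backend name are illustrative) :
+
+ Example :
+ acl is_static path_beg /static/ /images/
+ acl is_login path /login.php
+ use_backend bk_static if is_static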
+
+query : string
+ This extracts the request's query string, which starts after the first
+ question mark. If no question mark is present, this fetch returns nothing. If
+ a question mark is present but nothing follows, it returns an empty string.
+ This means it's possible to easily know whether a query string is present
+ using the "found" matching method. This fetch is the complement of "path"
+ which stops before the question mark.
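+
+ An illustrative use of the "found" match (the backend name is made up) :
+
+ Example :
+ acl has_query query -m found
+ use_backend bk_dynamic if has_query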
+
+req.hdr_names([<delim>]) : string
+ This builds a string made from the concatenation of all header names as they
+ appear in the request when the rule is evaluated. The default delimiter is
+ the comma (',') but it may be overridden as an optional argument <delim>. In
+ this case, only the first character of <delim> is considered.
+
+req.ver : string
+req_ver : string (deprecated)
+ Returns the version string from the HTTP request, for example "1.1". This can
+ be useful for logs, but is mostly there for ACL. Some predefined ACL already
+ check for versions 1.0 and 1.1.
+
+ ACL derivatives :
+ req_ver : exact string match
+
+res.comp : boolean
+ Returns the boolean "true" value if the response has been compressed by
+ HAProxy, otherwise returns boolean "false". This may be used to add
+ information in the logs.
+
+res.comp_algo : string
+ Returns a string containing the name of the algorithm used if the response
+ was compressed by HAProxy, for example : "deflate". This may be used to add
+ some information in the logs.
+
+res.cook([<name>]) : string
+scook([<name>]) : string (deprecated)
+ This extracts the last occurrence of the cookie name <name> on a "Set-Cookie"
+ header line from the response, and returns its value as string. If no name is
+ specified, the first cookie value is returned.
+
+ ACL derivatives :
+ scook([<name>]) : exact string match
+
+res.cook_cnt([<name>]) : integer
+scook_cnt([<name>]) : integer (deprecated)
+ Returns an integer value representing the number of occurrences of the cookie
+ <name> in the response, or all cookies if <name> is not specified. This is
+ mostly useful when combined with ACLs to detect suspicious responses.
+
+res.cook_val([<name>]) : integer
+scook_val([<name>]) : integer (deprecated)
+ This extracts the last occurrence of the cookie name <name> on a "Set-Cookie"
+ header line from the response, and converts its value to an integer which is
+ returned. If no name is specified, the first cookie value is returned.
+
+res.fhdr([<name>[,<occ>]]) : string
+ This extracts the last occurrence of header <name> in an HTTP response, or of
+ the last header if no <name> is specified. Optionally, a specific occurrence
+ might be specified as a position number. Positive values indicate a position
+ from the first occurrence, with 1 being the first one. Negative values
+ indicate positions relative to the last one, with -1 being the last one. It
+ differs from res.hdr() in that any commas present in the value are returned
+ and are not used as delimiters. If this is not desired, the res.hdr() fetch
+ should be used instead. This is sometimes useful with headers such as Date or
+ Expires.
+
+res.fhdr_cnt([<name>]) : integer
+ Returns an integer value representing the number of occurrences of response
+ header field name <name>, or the total number of header fields if <name> is
+ not specified. Contrary to its res.hdr_cnt() cousin, this function returns
+ the number of full line headers and does not stop on commas. If this is not
+ desired, the res.hdr_cnt() fetch should be used instead.
+
+res.hdr([<name>[,<occ>]]) : string
+shdr([<name>[,<occ>]]) : string (deprecated)
+ This extracts the last occurrence of header <name> in an HTTP response, or of
+ the last header if no <name> is specified. Optionally, a specific occurrence
+ might be specified as a position number. Positive values indicate a position
+ from the first occurrence, with 1 being the first one. Negative values
+ indicate positions relative to the last one, with -1 being the last one. This
+ can be useful to learn some data into a stick-table. The function considers
+ any comma as a delimiter for distinct values. If this is not desired, the
+ res.fhdr() fetch should be used instead.
+
+ ACL derivatives :
+ shdr([<name>[,<occ>]]) : exact string match
+ shdr_beg([<name>[,<occ>]]) : prefix match
+ shdr_dir([<name>[,<occ>]]) : subdir match
+ shdr_dom([<name>[,<occ>]]) : domain match
+ shdr_end([<name>[,<occ>]]) : suffix match
+ shdr_len([<name>[,<occ>]]) : length match
+ shdr_reg([<name>[,<occ>]]) : regex match
+ shdr_sub([<name>[,<occ>]]) : substring match
+
+res.hdr_cnt([<name>]) : integer
+shdr_cnt([<name>]) : integer (deprecated)
+ Returns an integer value representing the number of occurrences of response
+ header field name <name>, or the total number of header fields if <name> is
+ not specified. The function considers any comma as a delimiter for distinct
+ values. If this is not desired, the res.fhdr_cnt() fetch should be used
+ instead.
+
+res.hdr_ip([<name>[,<occ>]]) : ip
+shdr_ip([<name>[,<occ>]]) : ip (deprecated)
+ This extracts the last occurrence of header <name> in an HTTP response,
+ converts it to an IPv4 or IPv6 address and returns this address. Optionally, a
+ specific occurrence might be specified as a position number. Positive values
+ indicate a position from the first occurrence, with 1 being the first one.
+ Negative values indicate positions relative to the last one, with -1 being
+ the last one. This can be useful to learn some data into a stick table.
+
+res.hdr_names([<delim>]) : string
+ This builds a string made from the concatenation of all header names as they
+ appear in the response when the rule is evaluated. The default delimiter is
+ the comma (',') but it may be overridden as an optional argument <delim>. In
+ this case, only the first character of <delim> is considered.
+
+res.hdr_val([<name>[,<occ>]]) : integer
+shdr_val([<name>[,<occ>]]) : integer (deprecated)
+ This extracts the last occurrence of header <name> in an HTTP response, and
+ converts it to an integer value. Optionally, a specific occurrence might be
+ specified as a position number. Positive values indicate a position from the
+ first occurrence, with 1 being the first one. Negative values indicate
+ positions relative to the last one, with -1 being the last one. This can be
+ useful to learn some data into a stick table.
+
+res.ver : string
+resp_ver : string (deprecated)
+ Returns the version string from the HTTP response, for example "1.1". This
+ can be useful for logs, but is mostly there for ACL.
+
+ ACL derivatives :
+ resp_ver : exact string match
+
+set-cookie([<name>]) : string (deprecated)
+ This extracts the last occurrence of the cookie name <name> on a "Set-Cookie"
+ header line from the response and uses the corresponding value to match. This
+ can be comparable to what "appsession" did with default options, but with
+ support for multi-peer synchronization and state keeping across restarts.
+
+ This fetch function is deprecated and has been superseded by the "res.cook"
+ fetch. This keyword will disappear soon.
+
+status : integer
+ Returns an integer containing the HTTP status code in the HTTP response, for
+ example, 302. It is mostly used within ACLs and integer ranges, for example,
+ to remove any Location header if the response is not a 3xx.
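+
+ The Location example above could be written as follows (an anonymous ACL is
+ used for brevity) :
+
+ Example :
+ # strip Location headers from non-redirect responses
+ http-response del-header Location if ! { status 300:399 }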
+
+url : string
+ This extracts the request's URL as presented in the request. A typical use is
+ with prefetch-capable caches, and with portals which need to aggregate
+ multiple information from databases and keep them in caches. With ACLs, using
+ "path" is preferred over using "url", because clients may send a full URL as
+ is normally done with proxies. The only real use is to match "*" which does
+ not match in "path", and for which there is already a predefined ACL. See
+ also "path" and "base".
+
+ ACL derivatives :
+ url : exact string match
+ url_beg : prefix match
+ url_dir : subdir match
+ url_dom : domain match
+ url_end : suffix match
+ url_len : length match
+ url_reg : regex match
+ url_sub : substring match
+
+url_ip : ip
+ This extracts the IP address from the request's URL when the host part is
+ presented as an IP address. Its use is very limited. For instance, a
+ monitoring system might use this field as an alternative for the source IP in
+ order to test what path a given source address would follow, or to force an
+ entry in a table for a given source address. With ACLs it can be used to
+ restrict access to certain systems through a proxy, for example when combined
+ with option "http_proxy".
+
+url_port : integer
+ This extracts the port part from the request's URL. Note that if the port is
+ not specified in the request, port 80 is assumed. With ACLs it can be used to
+ restrict access to certain systems through a proxy, for example when combined
+ with option "http_proxy".
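+
+ For example, in a hypothetical "option http_proxy" setup, outgoing requests
+ can be restricted to well-known ports :
+
+ Example :
+ acl allowed_port url_port 80 443
+ http-request deny if !allowed_port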
+
+urlp([<name>[,<delim>]]) : string
+url_param([<name>[,<delim>]]) : string
+ This extracts the first occurrence of the parameter <name> in the query
+ string, which begins after either '?' or <delim>, and which ends before '&',
+ ';' or <delim>. The parameter name is case-sensitive. If no name is given,
+ any parameter will match, and the first one will be returned. The result is
+ a string corresponding to the value of the parameter <name> as presented in
+ the request (no URL decoding is performed). This can be used for session
+ stickiness based on a client ID, to extract an application cookie passed as a
+ URL parameter, or in ACLs to apply some checks. Note that the ACL version of
+ this fetch iterates over multiple parameters and will iteratively report all
+ parameter values if no name is given.
+
+ ACL derivatives :
+ urlp(<name>[,<delim>]) : exact string match
+ urlp_beg(<name>[,<delim>]) : prefix match
+ urlp_dir(<name>[,<delim>]) : subdir match
+ urlp_dom(<name>[,<delim>]) : domain match
+ urlp_end(<name>[,<delim>]) : suffix match
+ urlp_len(<name>[,<delim>]) : length match
+ urlp_reg(<name>[,<delim>]) : regex match
+ urlp_sub(<name>[,<delim>]) : substring match
+
+
+ Example :
+ # match http://example.com/foo?PHPSESSIONID=some_id
+ stick on urlp(PHPSESSIONID)
+ # match http://example.com/foo;JSESSIONID=some_id
+ stick on urlp(JSESSIONID,;)
+
+urlp_val([<name>[,<delim>]]) : integer
+ See "urlp" above. This one extracts the URL parameter <name> in the request
+ and converts it to an integer value. This can be used for session stickiness
+ based on a user ID for example, or with ACLs to match a page number or price.
+
+
+7.4. Pre-defined ACLs
+---------------------
+
+Some predefined ACLs are hard-coded so that they do not have to be declared in
+every frontend which needs them. They all have their names in upper case in
+order to avoid confusion. Their equivalence is provided below.
+
+ACL name Equivalent to Usage
+---------------+-----------------------------+---------------------------------
+FALSE always_false never match
+HTTP req_proto_http match if protocol is valid HTTP
+HTTP_1.0 req_ver 1.0 match HTTP version 1.0
+HTTP_1.1 req_ver 1.1 match HTTP version 1.1
+HTTP_CONTENT hdr_val(content-length) gt 0 match an existing content-length
+HTTP_URL_ABS url_reg ^[^/:]*:// match absolute URL with scheme
+HTTP_URL_SLASH url_beg / match URL beginning with "/"
+HTTP_URL_STAR url * match URL equal to "*"
+LOCALHOST src 127.0.0.1/8 match connection from local host
+METH_CONNECT method CONNECT match HTTP CONNECT method
+METH_GET method GET HEAD match HTTP GET or HEAD method
+METH_HEAD method HEAD match HTTP HEAD method
+METH_OPTIONS method OPTIONS match HTTP OPTIONS method
+METH_POST method POST match HTTP POST method
+METH_TRACE method TRACE match HTTP TRACE method
+RDP_COOKIE req_rdp_cookie_cnt gt 0 match presence of an RDP cookie
+REQ_CONTENT req_len gt 0 match data in the request buffer
+TRUE always_true always match
+WAIT_END wait_end wait for end of content analysis
+---------------+-----------------------------+---------------------------------
+
+
+8. Logging
+----------
+
+One of HAProxy's strong points certainly lies in its precise logs. It probably
+provides the finest level of information available for such a product, which is
+very important for troubleshooting complex environments. Standard information
+provided in logs includes client ports, TCP/HTTP state timers, precise session
+state at termination and precise termination cause, information about decisions
+to direct traffic to a server, and of course the ability to capture arbitrary
+headers.
+
+In order to improve administrators' reactivity, it offers great transparency
+about encountered problems, both internal and external, and it is possible to
+send logs to different sources at the same time with different level filters :
+
+ - global process-level logs (system errors, start/stop, etc..)
+ - per-instance system and internal errors (lack of resource, bugs, ...)
+ - per-instance external troubles (servers up/down, max connections)
+ - per-instance activity (client connections), either at the establishment or
+ at the termination.
+ - per-request control of log-level, eg:
+ http-request set-log-level silent if sensitive_request
+
+The ability to distribute different levels of logs to different log servers
+allows several production teams to interact and to fix their problems as soon
+as possible. For example, the system team might monitor system-wide errors,
+while the application team might be monitoring the up/down for their servers in
+real time, and the security team might analyze the activity logs with one hour
+delay.
+
+
+8.1. Log levels
+---------------
+
+TCP and HTTP connections can be logged with information such as the date, time,
+source IP address, destination address, connection duration, response times,
+HTTP request, HTTP return code, number of bytes transmitted, conditions
+in which the session ended, and even exchanged cookie values, making it
+possible, for example, to track a particular user's problems. All messages
+may be sent to up to two
+syslog servers. Check the "log" keyword in section 4.2 for more information
+about log facilities.
+
+
+8.2. Log formats
+----------------
+
+HAProxy supports 5 log formats. Several fields are common between these formats
+and will be detailed in the following sections. A few of them may vary
+slightly with the configuration, due to indicators specific to certain
+options. The supported formats are as follows :
+
+ - the default format, which is very basic and very rarely used. It only
+ provides very basic information about the incoming connection at the moment
+ it is accepted : source IP:port, destination IP:port, and frontend-name.
+ This mode will eventually disappear so it will not be described in great
+ detail.
+
+ - the TCP format, which is more advanced. This format is enabled when "option
+ tcplog" is set on the frontend. HAProxy will then usually wait for the
+ connection to terminate before logging. This format provides much richer
+ information, such as timers, connection counts, queue size, etc... This
+ format is recommended for pure TCP proxies.
+
+ - the HTTP format, which is the most advanced for HTTP proxying. This format
+ is enabled when "option httplog" is set on the frontend. It provides the
+ same information as the TCP format with some HTTP-specific fields such as
+ the request, the status code, and captures of headers and cookies. This
+ format is recommended for HTTP proxies.
+
+ - the CLF HTTP format, which is equivalent to the HTTP format, but with the
+ fields arranged in the same order as the CLF format. In this mode, all
+ timers, captures, flags, etc... appear one per field after the end of the
+ common fields, in the same order they appear in the standard HTTP format.
+
+ - the custom log format, which allows you to build your own log lines.
+
+The next sections will go into more detail on each of these formats. Format
+specification will be performed on a "field" basis. Unless stated otherwise, a
+field is a portion of text delimited by any number of spaces. Since syslog
+servers are liable to insert fields at the beginning of a line, it is
+always assumed that the first field is the one containing the process name and
+identifier.
+
+Note : Since log lines may be quite long, the log examples in sections below
+ might be broken into multiple lines. The example log lines will be
+ prefixed with 3 closing angle brackets ('>>>') and each time a log is
+ broken into multiple lines, each non-final line will end with a
+ backslash ('\') and the next line will start indented by two characters.
+
+
+8.2.1. Default log format
+-------------------------
+
+This format is used when no specific option is set. The log is emitted as soon
+as the connection is accepted. One should note that this currently is the only
+format which logs the request's destination IP and port.
+
+ Example :
+ listen www
+ mode http
+ log global
+ server srv1 127.0.0.1:8000
+
+ >>> Feb 6 12:12:09 localhost \
+ haproxy[14385]: Connect from 10.0.1.2:33312 to 10.0.3.31:8012 \
+ (www/HTTP)
+
+ Field Format Extract from the example above
+ 1 process_name '[' pid ']:' haproxy[14385]:
+ 2 'Connect from' Connect from
+ 3 source_ip ':' source_port 10.0.1.2:33312
+ 4 'to' to
+ 5 destination_ip ':' destination_port 10.0.3.31:8012
+ 6 '(' frontend_name '/' mode ')' (www/HTTP)
+
+Detailed fields description :
+ - "source_ip" is the IP address of the client which initiated the connection.
+ - "source_port" is the TCP port of the client which initiated the connection.
+ - "destination_ip" is the IP address the client connected to.
+ - "destination_port" is the TCP port the client connected to.
+ - "frontend_name" is the name of the frontend (or listener) which received
+ and processed the connection.
+ - "mode" is the mode in which the frontend is operating (TCP or HTTP).
+
+In case of a UNIX socket, the source and destination addresses are marked as
+"unix:" and the ports reflect the internal ID of the socket which accepted the
+connection (the same ID as reported in the stats).
+
+It is advised not to use this deprecated format for newer installations as it
+will eventually disappear.
+
+
+8.2.2. TCP log format
+---------------------
+
+The TCP format is used when "option tcplog" is specified in the frontend, and
+is the recommended format for pure TCP proxies. It provides a lot of precious
+information for troubleshooting. Since this format includes timers and byte
+counts, the log is normally emitted at the end of the session. It can be
+emitted earlier if "option logasap" is specified, which makes sense in most
+environments with long sessions such as remote terminals. Sessions which match
+the "monitor" rules are never logged. It is also possible not to emit logs for
+sessions for which no data were exchanged between the client and the server, by
+specifying "option dontlognull" in the frontend. Successful connections will
+not be logged if "option dontlog-normal" is specified in the frontend. A few
+fields may slightly vary depending on some configuration options, those are
+marked with a star ('*') after the field name below.
+
+ Example :
+ frontend fnt
+ mode tcp
+ option tcplog
+ log global
+ default_backend bck
+
+ backend bck
+ server srv1 127.0.0.1:8000
+
+ >>> Feb 6 12:12:56 localhost \
+ haproxy[14387]: 10.0.1.2:33313 [06/Feb/2009:12:12:51.443] fnt \
+ bck/srv1 0/0/5007 212 -- 0/0/0/0/3 0/0
+
+ Field Format Extract from the example above
+ 1 process_name '[' pid ']:' haproxy[14387]:
+ 2 client_ip ':' client_port 10.0.1.2:33313
+ 3 '[' accept_date ']' [06/Feb/2009:12:12:51.443]
+ 4 frontend_name fnt
+ 5 backend_name '/' server_name bck/srv1
+ 6 Tw '/' Tc '/' Tt* 0/0/5007
+ 7 bytes_read* 212
+ 8 termination_state --
+ 9 actconn '/' feconn '/' beconn '/' srv_conn '/' retries* 0/0/0/0/3
+ 10 srv_queue '/' backend_queue 0/0
+
+Detailed fields description :
+ - "client_ip" is the IP address of the client which initiated the TCP
+ connection to haproxy. If the connection was accepted on a UNIX socket
+ instead, the IP address would be replaced with the word "unix". Note that
+ when the connection is accepted on a socket configured with "accept-proxy"
+ and the PROXY protocol is correctly used, then the logs will reflect the
+ forwarded connection's information.
+
+ - "client_port" is the TCP port of the client which initiated the connection.
+ If the connection was accepted on a UNIX socket instead, the port would be
+ replaced with the ID of the accepting socket, which is also reported in the
+ stats interface.
+
+ - "accept_date" is the exact date when the connection was received by haproxy
+ (which might be very slightly different from the date observed on the
+ network if there was some queuing in the system's backlog). This is usually
+ the same date which may appear in any upstream firewall's log.
+
+ - "frontend_name" is the name of the frontend (or listener) which received
+ and processed the connection.
+
+ - "backend_name" is the name of the backend (or listener) which was selected
+ to manage the connection to the server. This will be the same as the
+ frontend if no switching rule has been applied, which is common for TCP
+ applications.
+
+ - "server_name" is the name of the last server to which the connection was
+ sent, which might differ from the first one if there were connection errors
+ and a redispatch occurred. Note that this server belongs to the backend
+ which processed the request. If the connection was aborted before reaching
+ a server, "<NOSRV>" is indicated instead of a server name.
+
+ - "Tw" is the total time in milliseconds spent waiting in the various queues.
+ It can be "-1" if the connection was aborted before reaching the queue.
+ See "Timers" below for more details.
+
+ - "Tc" is the total time in milliseconds spent waiting for the connection to
+ establish to the final server, including retries. It can be "-1" if the
+ connection was aborted before a connection could be established. See
+ "Timers" below for more details.
+
+ - "Tt" is the total time in milliseconds elapsed between the accept and the
+ last close. It covers all possible processing. There is one exception: if
+ "option logasap" was specified, then the time counting stops at the moment
+ the log is emitted. In this case, a '+' sign is prepended before the value,
+ indicating that the final one will be larger. See "Timers" below for more
+ details.
+
+ - "bytes_read" is the total number of bytes transmitted from the server to
+ the client when the log is emitted. If "option logasap" is specified,
+ this value will be prefixed with a '+' sign indicating that the final one
+ may be larger. Please note that this value is a 64-bit counter, so log
+ analysis tools must be able to handle it without overflowing.
+
+ - "termination_state" is the condition the session was in when the session
+ ended. This indicates the session state, which side caused the end of
+ session to happen, and for what reason (timeout, error, ...). The normal
+ flags should be "--", indicating the session was closed by either end with
+ no data remaining in buffers. See below "Session state at disconnection"
+ for more details.
+
+ - "actconn" is the total number of concurrent connections on the process when
+ the session was logged. It is useful to detect when some per-process system
+ limits have been reached. For instance, if actconn is close to 512 when
+ multiple connection errors occur, chances are high that the system limits
+ the process to use a maximum of 1024 file descriptors and that all of them
+ are used. See section 3 "Global parameters" to find how to tune the system.
+
+ - "feconn" is the total number of concurrent connections on the frontend when
+ the session was logged. It is useful to estimate the amount of resource
+ required to sustain high loads, and to detect when the frontend's "maxconn"
+ has been reached. Most often when this value increases by huge jumps, it is
+ because there is congestion on the backend servers, but sometimes it can be
+ caused by a denial of service attack.
+
+ - "beconn" is the total number of concurrent connections handled by the
+ backend when the session was logged. It includes the total number of
+ concurrent connections active on servers as well as the number of
+ connections pending in queues. It is useful to estimate the amount of
+ additional servers needed to support high loads for a given application.
+ Most often when this value increases by huge jumps, it is because there is
+ congestion on the backend servers, but sometimes it can be caused by a
+ denial of service attack.
+
+ - "srv_conn" is the total number of concurrent connections still active on
+ the server when the session was logged. It can never exceed the server's
+ configured "maxconn" parameter. If this value is very often close or equal
+ to the server's "maxconn", it means that traffic regulation is involved a
+ lot, meaning that either the server's maxconn value is too low, or that
+ there aren't enough servers to process the load with an optimal response
+ time. When only one of the server's "srv_conn" is high, it usually means
+ that this server has some trouble causing the connections to take longer to
+ be processed than on other servers.
+
+ - "retries" is the number of connection retries experienced by this session
+ when trying to connect to the server. It must normally be zero, unless a
+ server is being stopped at the same moment the connection was attempted.
+ Frequent retries generally indicate either a network problem between
+ haproxy and the server, or a misconfigured system backlog on the server
+ preventing new connections from being queued. This field may optionally be
+ prefixed with a '+' sign, indicating that the session has experienced a
+ redispatch after the maximal retry count has been reached on the initial
+ server. In this case, the server name appearing in the log is the one the
+ connection was redispatched to, and not the first one, though both may
+ sometimes be the same in case of hashing for instance. So as a general rule
+ of thumb, when a '+' is present in front of the retry count, this count
+ should not be attributed to the logged server.
+
+ - "srv_queue" is the total number of requests which were processed before
+ this one in the server queue. It is zero when the request has not gone
+ through the server queue. It makes it possible to estimate the approximate
+ server's response time by dividing the time spent in queue by the number of
+ requests in the queue. It is worth noting that if a session experiences a
+ redispatch and passes through two server queues, their positions will be
+ cumulated. A request should not pass through both the server queue and the
+ backend queue unless a redispatch occurs.
+
+ - "backend_queue" is the total number of requests which were processed before
+ this one in the backend's global queue. It is zero when the request has not
+ gone through the global queue. It makes it possible to estimate the average
+ queue length, which easily translates into a number of missing servers when
+ divided by a server's "maxconn" parameter. It is worth noting that if a
+ session experiences a redispatch, it may pass twice in the backend's queue,
+ and then both positions will be cumulated. A request should not pass
+ through both the server queue and the backend queue unless a redispatch
+ occurs.
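+
+As an illustration of the counters described above (all names and values here
+are examples only), the limits bounding "actconn" and "srv_conn" and the
+retry behaviour reported in "retries" come from settings such as :
+
+  global
+      maxconn 4096        # upper bound for "actconn"
+
+  backend bck
+      retries 3           # maximum connection retries per session
+      option redispatch   # permits the '+'-flagged redispatch to another server
+      server srv1 127.0.0.1:8000 maxconn 512  # upper bound for "srv_conn"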
+
+
+8.2.3. HTTP log format
+----------------------
+
+The HTTP format is the most complete and the best suited for HTTP proxies. It
+is enabled when "option httplog" is specified in the frontend. It provides
+the same level of information as the TCP format with additional features which
+are specific to the HTTP protocol. Just like the TCP format, the log is usually
+emitted at the end of the session, unless "option logasap" is specified, which
+generally only makes sense for download sites. A session which matches the
+"monitor" rules will never be logged. It is also possible not to log sessions
+for
+which no data were sent by the client by specifying "option dontlognull" in the
+frontend. Successful connections will not be logged if "option dontlog-normal"
+is specified in the frontend.
+
+Most fields are shared with the TCP log, some being different. A few fields may
+slightly vary depending on some configuration options. Those ones are marked
+with a star ('*') after the field name below.
+
+ Example :
+ frontend http-in
+ mode http
+ option httplog
+ log global
+ default_backend static
+
+ backend static
+ server srv1 127.0.0.1:8000
+
+ >>> Feb 6 12:14:14 localhost \
+ haproxy[14389]: 10.0.1.2:33317 [06/Feb/2009:12:14:14.655] http-in \
+ static/srv1 10/0/30/69/109 200 2750 - - ---- 1/1/1/1/0 0/0 {1wt.eu} \
+ {} "GET /index.html HTTP/1.1"
+
+ Field Format Extract from the example above
+ 1 process_name '[' pid ']:' haproxy[14389]:
+ 2 client_ip ':' client_port 10.0.1.2:33317
+ 3 '[' accept_date ']' [06/Feb/2009:12:14:14.655]
+ 4 frontend_name http-in
+ 5 backend_name '/' server_name static/srv1
+ 6 Tq '/' Tw '/' Tc '/' Tr '/' Tt* 10/0/30/69/109
+ 7 status_code 200
+ 8 bytes_read* 2750
+ 9 captured_request_cookie -
+ 10 captured_response_cookie -
+ 11 termination_state ----
+ 12 actconn '/' feconn '/' beconn '/' srv_conn '/' retries* 1/1/1/1/0
+ 13 srv_queue '/' backend_queue 0/0
+ 14 '{' captured_request_headers* '}' {1wt.eu}
+ 15 '{' captured_response_headers* '}' {}
+ 16 '"' http_request '"' "GET /index.html HTTP/1.1"
+
+
+Detailed fields description :
+ - "client_ip" is the IP address of the client which initiated the TCP
+ connection to haproxy. If the connection was accepted on a UNIX socket
+ instead, the IP address would be replaced with the word "unix". Note that
+ when the connection is accepted on a socket configured with "accept-proxy"
+ and the PROXY protocol is correctly used, then the logs will reflect the
+ forwarded connection's information.
+
+ - "client_port" is the TCP port of the client which initiated the connection.
+ If the connection was accepted on a UNIX socket instead, the port would be
+ replaced with the ID of the accepting socket, which is also reported in the
+ stats interface.
+
+ - "accept_date" is the exact date when the TCP connection was received by
+ haproxy (which might be very slightly different from the date observed on
+ the network if there was some queuing in the system's backlog). This is
+ usually the same date which may appear in any upstream firewall's log. This
+ is independent of whether the client has sent the request yet.
+
+ - "frontend_name" is the name of the frontend (or listener) which received
+ and processed the connection.
+
+ - "backend_name" is the name of the backend (or listener) which was selected
+ to manage the connection to the server. This will be the same as the
+ frontend if no switching rule has been applied.
+
+ - "server_name" is the name of the last server to which the connection was
+ sent, which might differ from the first one if there were connection errors
+ and a redispatch occurred. Note that this server belongs to the backend
+ which processed the request. If the request was aborted before reaching a
+ server, "<NOSRV>" is indicated instead of a server name. If the request was
+ intercepted by the stats subsystem, "<STATS>" is indicated instead.
+
+ - "Tq" is the total time in milliseconds spent waiting for the client to send
+ a full HTTP request, not counting data. It can be "-1" if the connection
+ was aborted before a complete request could be received. It should always
+ be very small because a request generally fits in one single packet. Large
+ times here generally indicate network trouble between the client and
+ haproxy. See "Timers" below for more details.
+
+ - "Tw" is the total time in milliseconds spent waiting in the various queues.
+ It can be "-1" if the connection was aborted before reaching the queue.
+ See "Timers" below for more details.
+
+ - "Tc" is the total time in milliseconds spent waiting for the connection to
+ establish to the final server, including retries. It can be "-1" if the
+ request was aborted before a connection could be established. See "Timers"
+ below for more details.
+
+ - "Tr" is the total time in milliseconds spent waiting for the server to send
+ a full HTTP response, not counting data. It can be "-1" if the request was
+ aborted before a complete response could be received. It generally matches
+ the server's processing time for the request, though it may be altered by
+ the amount of data sent by the client to the server. Large times here on
+ "GET" requests generally indicate an overloaded server. See "Timers" below
+ for more details.
+
+ - "Tt" is the total time in milliseconds elapsed between the accept and the
+ last close. It covers all possible processing. There is one exception: if
+ "option logasap" was specified, then the time counting stops at the moment
+ the log is emitted. In this case, a '+' sign is prepended before the value,
+ indicating that the final one will be larger. See "Timers" below for more
+ details.
+
+ - "status_code" is the HTTP status code returned to the client. This status
+ is generally set by the server, but it might also be set by haproxy when
+ the server cannot be reached or when its response is blocked by haproxy.
+
+ - "bytes_read" is the total number of bytes transmitted to the client when
+ the log is emitted. This does include HTTP headers. If "option logasap" is
+ specified, this value will be prefixed with a '+' sign indicating that
+ the final one may be larger. Please note that this value is a 64-bit
+ counter, so log analysis tools must be able to handle it without
+ overflowing.
+
+ - "captured_request_cookie" is an optional "name=value" entry indicating that
+ the client had this cookie in the request. The cookie name and its maximum
+ length are defined by the "capture cookie" statement in the frontend
+ configuration. The field is a single dash ('-') when the option is not
+ set. Only one cookie may be captured, it is generally used to track session
+ ID exchanges between a client and a server to detect session crossing
+ between clients due to application bugs. For more details, please consult
+ the section "Capturing HTTP headers and cookies" below.
+
+ - "captured_response_cookie" is an optional "name=value" entry indicating
+ that the server has returned a cookie with its response. The cookie name
+ and its maximum length are defined by the "capture cookie" statement in the
+ frontend configuration. The field is a single dash ('-') when the option is
+ not set. Only one cookie may be captured, it is generally used to track
+ session ID exchanges between a client and a server to detect session
+ crossing between clients due to application bugs. For more details, please
+ consult the section "Capturing HTTP headers and cookies" below.
+
+ - "termination_state" is the condition the session was in when the session
+ ended. This indicates the session state, which side caused the end of
+ session to happen, for what reason (timeout, error, ...), just like in TCP
+ logs, and information about persistence operations on cookies in the last
+ two characters. The normal flags should begin with "--", indicating the
+ session was closed by either end with no data remaining in buffers. See
+ below "Session state at disconnection" for more details.
+
+ - "actconn" is the total number of concurrent connections on the process when
+ the session was logged. It is useful to detect when some per-process system
+ limits have been reached. For instance, if actconn is close to 512 or 1024
+ when multiple connection errors occur, chances are high that the system
+ limits the process to use a maximum of 1024 file descriptors and that all
+ of them are used. See section 3 "Global parameters" to find how to tune the
+ system.
+
+ - "feconn" is the total number of concurrent connections on the frontend when
+ the session was logged. It is useful to estimate the amount of resource
+ required to sustain high loads, and to detect when the frontend's "maxconn"
+ has been reached. Most often when this value increases by huge jumps, it is
+ because there is congestion on the backend servers, but sometimes it can be
+ caused by a denial of service attack.
+
+ - "beconn" is the total number of concurrent connections handled by the
+ backend when the session was logged. It includes the total number of
+ concurrent connections active on servers as well as the number of
+ connections pending in queues. It is useful to estimate the amount of
+ additional servers needed to support high loads for a given application.
+ Most often when this value increases by huge jumps, it is because there is
+ congestion on the backend servers, but sometimes it can be caused by a
+ denial of service attack.
+
+ - "srv_conn" is the total number of concurrent connections still active on
+ the server when the session was logged. It can never exceed the server's
+ configured "maxconn" parameter. If this value is very often close or equal
+ to the server's "maxconn", it means that traffic regulation is involved a
+ lot, meaning that either the server's maxconn value is too low, or that
+ there aren't enough servers to process the load with an optimal response
+ time. When only one of the server's "srv_conn" is high, it usually means
+ that this server has some trouble causing the requests to take longer to be
+ processed than on other servers.
+
+ - "retries" is the number of connection retries experienced by this session
+ when trying to connect to the server. It must normally be zero, unless a
+ server is being stopped at the same moment the connection was attempted.
+ Frequent retries generally indicate either a network problem between
+ haproxy and the server, or a misconfigured system backlog on the server
+ preventing new connections from being queued. This field may optionally be
+ prefixed with a '+' sign, indicating that the session has experienced a
+ redispatch after the maximal retry count has been reached on the initial
+ server. In this case, the server name appearing in the log is the one the
+ connection was redispatched to, and not the first one, though both may
+ sometimes be the same in case of hashing for instance. So as a general rule
+ of thumb, when a '+' is present in front of the retry count, this count
+ should not be attributed to the logged server.
+
+ - "srv_queue" is the total number of requests which were processed before
+ this one in the server queue. It is zero when the request has not gone
+ through the server queue. It makes it possible to estimate the approximate
+ server's response time by dividing the time spent in queue by the number of
+ requests in the queue. It is worth noting that if a session experiences a
+ redispatch and passes through two server queues, their positions will be
+ cumulated. A request should not pass through both the server queue and the
+ backend queue unless a redispatch occurs.
+
+ - "backend_queue" is the total number of requests which were processed before
+ this one in the backend's global queue. It is zero when the request has not
+ gone through the global queue. It makes it possible to estimate the average
+ queue length, which easily translates into a number of missing servers when
+ divided by a server's "maxconn" parameter. It is worth noting that if a
+ session experiences a redispatch, it may pass twice in the backend's queue,
+ and then both positions will be cumulated. A request should not pass
+ through both the server queue and the backend queue unless a redispatch
+ occurs.
+
+ - "captured_request_headers" is a list of headers captured in the request due
+ to the presence of the "capture request header" statement in the frontend.
+ Multiple headers can be captured, they will be delimited by a vertical bar
+ ('|'). When no capture is enabled, the braces do not appear, causing a
+ shift of remaining fields. It is important to note that this field may
+ contain spaces, and that using it requires a smarter log parser than when
+ it's not used. Please consult the section "Capturing HTTP headers and
+ cookies" below for more details.
+
+ - "captured_response_headers" is a list of headers captured in the response
+ due to the presence of the "capture response header" statement in the
+ frontend. Multiple headers can be captured, they will be delimited by a
+ vertical bar ('|'). When no capture is enabled, the braces do not appear,
+ causing a shift of remaining fields. It is important to note that this
+ field may contain spaces, and that using it requires a smarter log parser
+ than when it's not used. Please consult the section "Capturing HTTP headers
+ and cookies" below for more details.
+
+ - "http_request" is the complete HTTP request line, including the method,
+ request and HTTP version string. Non-printable characters are encoded (see
+ below the section "Non-printable characters"). This is always the last
+ field, and it is always delimited by quotes and is the only one which can
+ contain quotes. If new fields are added to the log format, they will be
+ added before this field. This field might be truncated if the request is
+ huge and does not fit in the standard syslog buffer (1024 characters). This
+ is the reason why this field must always remain the last one.
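+
+As a sketch of how the capture fields above get populated (the cookie and
+header names here are arbitrary examples), the corresponding frontend
+statements would be :
+
+  frontend http-in
+      mode http
+      option httplog
+      capture cookie JSESSIONID= len 32              # fields 9 and 10
+      capture request header Host len 25             # field 14, '|'-delimited
+      capture request header User-Agent len 64       # field 14 as well
+      capture response header Content-Length len 10  # field 15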
+
+
+8.2.4. Custom log format
+------------------------
+
+The "log-format" directive allows you to customize the logs in HTTP and TCP
+mode. It takes a string as argument.
+
+HAProxy understands some log format variables, each preceded by a '%' sign.
+Variables can take arguments using braces ('{}'), and multiple arguments are
+separated by commas within the braces. Flags may be added or removed by
+prefixing them with a '+' or '-' sign.
+
+Special variable "%o" may be used to propagate its flags to all other
+variables on the same format string. This is particularly handy with quoted
+string formats ("Q").
+
+If a variable is named between square brackets ('[' .. ']') then it is used
+as a sample expression rule (see section 7.3). This is useful to add some
+less common information such as the client's SSL certificate's DN, or to log
+the key that would be used to store an entry into a stick table.
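+
+For example (the "ssl_c_s_dn" sample fetch is only available in SSL-enabled
+builds), the following would log the client certificate's subject DN after
+the usual address fields :
+
+  log-format %ci:%cp\ [%t]\ %ft\ %[ssl_c_s_dn]\ %{+Q}r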
+
+Note: spaces must be escaped. A space character is considered as a separator.
+In order to emit a verbatim '%', it must be preceded by another '%' resulting
+in '%%'. HAProxy will automatically merge consecutive separators.
+
+Flags are :
+ * Q: quote a string
+ * X: hexadecimal representation (IPs, Ports, %Ts, %rt, %pid)
+
+ Example:
+
+ log-format %T\ %t\ Some\ Text
+ log-format %{+Q}o\ %t\ %s\ %{-Q}r
+
+At the moment, the default HTTP format is defined this way :
+
+ log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ \
+ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r
+
+the default CLF format is defined this way :
+
+ log-format %{+Q}o\ %{-Q}ci\ -\ -\ [%T]\ %r\ %ST\ %B\ \"\"\ \"\"\ %cp\ \
+ %ms\ %ft\ %b\ %s\ %Tq\ %Tw\ %Tc\ %Tr\ %Tt\ %tsc\ %ac\ %fc\ \
+ %bc\ %sc\ %rc\ %sq\ %bq\ %CC\ %CS\ %hrl\ %hsl
+
+and the default TCP format is defined this way :
+
+ log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %Tw/%Tc/%Tt\ %B\ %ts\ \
+ %ac/%fc/%bc/%sc/%rc\ %sq/%bq
+
+Please refer to the table below for currently defined variables :
+
+ +---+------+-----------------------------------------------+-------------+
+ | R | var | field name (8.2.2 and 8.2.3 for description) | type |
+ +---+------+-----------------------------------------------+-------------+
+ | | %o | special variable, apply flags on all next var | |
+ +---+------+-----------------------------------------------+-------------+
+ | | %B | bytes_read (from server to client) | numeric |
+ | H | %CC | captured_request_cookie | string |
+ | H | %CS | captured_response_cookie | string |
+ | | %H | hostname | string |
+ | H | %HM | HTTP method (ex: POST) | string |
+ | H | %HP | HTTP request URI without query string (path) | string |
+ | H | %HQ | HTTP request URI query string (ex: ?bar=baz) | string |
+ | H | %HU | HTTP request URI (ex: /foo?bar=baz) | string |
+ | H | %HV | HTTP version (ex: HTTP/1.0) | string |
+ | | %ID | unique-id | string |
+ | | %ST | status_code | numeric |
+ | | %T | gmt_date_time | date |
+ | | %Tc | Tc | numeric |
+ | | %Tl | local_date_time | date |
+ | H | %Tq | Tq | numeric |
+ | H | %Tr | Tr | numeric |
+ | | %Ts | timestamp | numeric |
+ | | %Tt | Tt | numeric |
+ | | %Tw | Tw | numeric |
+ | | %U | bytes_uploaded (from client to server) | numeric |
+ | | %ac | actconn | numeric |
+ | | %b | backend_name | string |
+ | | %bc | beconn (backend concurrent connections) | numeric |
+ | | %bi | backend_source_ip (connecting address) | IP |
+ | | %bp | backend_source_port (connecting address) | numeric |
+ | | %bq | backend_queue | numeric |
+ | | %ci | client_ip (accepted address) | IP |
+ | | %cp | client_port (accepted address) | numeric |
+ | | %f | frontend_name | string |
+ | | %fc | feconn (frontend concurrent connections) | numeric |
+ | | %fi | frontend_ip (accepting address) | IP |
+ | | %fp | frontend_port (accepting address) | numeric |
+ | | %ft | frontend_name_transport ('~' suffix for SSL) | string |
+ | | %lc | frontend_log_counter | numeric |
+ | | %hr | captured_request_headers default style | string |
+ | | %hrl | captured_request_headers CLF style | string list |
+ | | %hs | captured_response_headers default style | string |
+ | | %hsl | captured_response_headers CLF style | string list |
+ | | %ms | accept date milliseconds (left-padded with 0) | numeric |
+ | | %pid | PID | numeric |
+ | H | %r | http_request | string |
+ | | %rc | retries | numeric |
+ | | %rt | request_counter (HTTP req or TCP session) | numeric |
+ | | %s | server_name | string |
+ | | %sc | srv_conn (server concurrent connections) | numeric |
+ | | %si | server_IP (target address) | IP |
+ | | %sp | server_port (target address) | numeric |
+ | | %sq | srv_queue | numeric |
+ | S | %sslc| ssl_ciphers (ex: AES-SHA) | string |
+ | S | %sslv| ssl_version (ex: TLSv1) | string |
+ | | %t | date_time (with millisecond resolution) | date |
+ | | %ts | termination_state | string |
+ | H | %tsc | termination_state with cookie status | string |
+ +---+------+-----------------------------------------------+-------------+
+
+ R = Restrictions : H = mode http only ; S = SSL only
+
+
+8.2.5. Error log format
+-----------------------
+
+When an incoming connection fails due to an SSL handshake or an invalid PROXY
+protocol header, haproxy will log the event using a shorter, fixed line format.
+By default, logs are emitted at the LOG_INFO level, unless the option
+"log-separate-errors" is set in the frontend, in which case the LOG_ERR level
+will be used. Connections on which no data are exchanged (eg: probes) are not
+logged if the "dontlognull" option is set.
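+
+As a sketch (the bind line and certificate path are only examples), a
+frontend emitting these error logs at the LOG_ERR level while ignoring
+dataless probes would use :
+
+  frontend frt
+      bind :443 ssl crt /etc/haproxy/site.pem
+      option log-separate-errors   # handshake errors use LOG_ERR
+      option dontlognull           # do not log dataless probes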
+
+The format looks like this :
+
+ >>> Dec 3 18:27:14 localhost \
+ haproxy[6103]: 127.0.0.1:56059 [03/Dec/2012:17:35:10.380] frt/f1: \
+ Connection error during SSL handshake
+
+ Field Format Extract from the example above
+ 1 process_name '[' pid ']:' haproxy[6103]:
+ 2 client_ip ':' client_port 127.0.0.1:56059
+ 3 '[' accept_date ']' [03/Dec/2012:17:35:10.380]
+ 4 frontend_name "/" bind_name ":" frt/f1:
+ 5 message Connection error during SSL handshake
+
+These fields just provide minimal information to help debug connection
+failures.
+
+
+8.3. Advanced logging options
+-----------------------------
+
+Some advanced logging options are often looked for but are not easy to find
+just by looking through the list of options. Here is an entry point for the
+few options which can enable better logging. Please refer to the keywords
+reference for more information about their usage.
+
+
+8.3.1. Disabling logging of external tests
+------------------------------------------
+
+It is quite common to have some monitoring tools perform health checks on
+haproxy. Sometimes it will be a layer 3 load-balancer such as LVS or any
+commercial load-balancer, and sometimes it will simply be a more complete
+monitoring system such as Nagios. When the tests are very frequent, users often
+ask how to disable logging for those checks. There are three possibilities :
+
+ - if connections come from everywhere and are just TCP probes, it is often
+ desired to simply disable logging of connections without data exchange, by
+ setting "option dontlognull" in the frontend. It also disables logging of
+ port scans, which may or may not be desired.
+
+ - if connections come from a known source network, use "monitor-net" to
+ declare this network as monitoring only. Any host in this network will then
+ only be able to perform health checks, and its requests will not be
+ logged. This is generally appropriate for designating a list of equipment
+ such as other load-balancers.
+
+ - if the tests are performed on a known URI, use "monitor-uri" to declare
+ this URI as dedicated to monitoring. Any host sending this request will
+ only get the result of a health-check, and the request will not be logged.
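+
+ For example, the last two options could be combined in a frontend as
+ follows. This is only an illustrative sketch : the frontend name, network
+ and URI are made up and must be adapted to your setup.
+
+ frontend www
+ bind :80
+ mode http
+ # hosts on this network may only perform health checks ; their
+ # connections are not logged
+ monitor-net 192.168.200.0/24
+ # requests for this URI only return a health-check result, without
+ # being logged or forwarded to any server
+ monitor-uri /haproxy_test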
+
+
+8.3.2. Logging before waiting for the session to terminate
+----------------------------------------------------------
+
+The problem with logging at the end of a connection is that you have no clue
+about what is happening during very long sessions, such as remote terminal
+sessions or large file downloads. This can be worked around by specifying
+"option logasap" in the frontend. Haproxy will then log as soon as possible,
+just before data transfer begins. This means that in case of TCP, it will still
+log the connection status to the server, and in case of HTTP, it will log just
+after processing the server headers. In this case, the number of bytes reported
+is the number of header bytes sent to the client. In order to avoid confusion
+with normal logs, the total time field and the number of bytes are prefixed
+with a '+' sign which means that real numbers are certainly larger.
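+
+ A minimal illustrative example (the frontend name is made up) :
+
+ frontend www
+ mode http
+ option httplog
+ log global
+ # emit the log line right after the response headers are processed,
+ # instead of waiting for the end of the session
+ option logasap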
+
+
+8.3.3. Raising log level upon errors
+------------------------------------
+
+Sometimes it is more convenient to separate normal traffic from errors logs,
+for instance in order to ease error monitoring from log files. When the option
+"log-separate-errors" is used, connections which experience errors, timeouts,
+retries, redispatches or HTTP status codes 5xx will see their syslog level
+raised from "info" to "err". This will help a syslog daemon store the log in
+a separate file. It is very important to keep the errors in the normal traffic
+file too, so that log ordering is not altered. You should also be careful if
+you have already configured your syslog daemon to store all logs higher than
+"notice" in an "admin" file, because the "err" level is higher than "notice".
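+
+ An illustrative example (the frontend name is made up) :
+
+ frontend www
+ mode http
+ option httplog
+ log global
+ # connections with errors, timeouts, retries, redispatches or 5xx
+ # status codes will be logged at the "err" level instead of "info"
+ option log-separate-errors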
+
+
+8.3.4. Disabling logging of successful connections
+--------------------------------------------------
+
+Although this may sound strange at first, some large sites have to deal with
+many thousands of log lines per second and have difficulties keeping them
+intact for a long time or detecting errors within them. If the option
+"dontlog-normal" is set on the frontend, normal connections will not be
+logged at all. In this regard, a normal connection is defined as one without
+any error, timeout, retry nor redispatch. In HTTP, the status code is checked
+too: a response with a 5xx status is not considered normal and will be
+logged. Of course, doing this is strongly discouraged as it removes most of
+the useful information from the logs. Do it only if you have no other
+alternative.
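+
+ An illustrative example (the frontend name is made up ; use with care, as
+ explained above) :
+
+ frontend www
+ mode http
+ option httplog
+ log global
+ # do not log connections without any error, timeout, retry nor
+ # redispatch, and without a 5xx status code
+ option dontlog-normal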
+
+
+8.4. Timing events
+------------------
+
+Timers provide a great help in troubleshooting network problems. All values are
+reported in milliseconds (ms). These timers should be used in conjunction with
+the session termination flags. In TCP mode with "option tcplog" set on the
+frontend, 3 control points are reported under the form "Tw/Tc/Tt", and in HTTP
+mode, 5 control points are reported under the form "Tq/Tw/Tc/Tr/Tt" :
+
+ - Tq: total time to get the client request (HTTP mode only). It's the time
+ elapsed between the moment the client connection was accepted and the
+ moment the proxy received the last HTTP header. The value "-1" indicates
+ that the end of headers (empty line) has never been seen. This happens when
+ the client closes prematurely or times out.
+
+ - Tw: total time spent in the queues waiting for a connection slot. It
+ accounts for backend queue as well as the server queues, and depends on the
+ queue size, and the time needed for the server to complete previous
+ requests. The value "-1" means that the request was killed before reaching
+ the queue, which is generally what happens with invalid or denied requests.
+
+ - Tc: total time to establish the TCP connection to the server. It's the time
+ elapsed between the moment the proxy sent the connection request, and the
+ moment it was acknowledged by the server, or between the TCP SYN packet and
+ the matching SYN/ACK packet in return. The value "-1" means that the
+ connection was never established.
+
+ - Tr: server response time (HTTP mode only). It's the time elapsed between
+ the moment the TCP connection was established to the server and the moment
+ the server sent its complete response headers. It purely shows the
+ request processing time, without the network overhead due to the data
+ transmission. It is worth noting that when the client has data to send to
+ the server, for instance during a POST request, the timer is already
+ running, and this can distort the apparent response time. For this reason,
+ it's generally wise not to trust this field too much for POST requests
+ initiated from clients behind an untrusted network. A value of "-1" here
+ means that the end of the response headers (empty line) was never seen,
+ most likely because the server timeout struck before the server managed to
+ process the request.
+
+ - Tt: total session duration time, between the moment the proxy accepted it
+ and the moment both ends were closed. The exception is when the "logasap"
+ option is specified. In this case, it only equals (Tq+Tw+Tc+Tr), and is
+ prefixed with a '+' sign. From this field, we can deduce "Td", the data
+ transmission time, by subtracting other timers when valid :
+
+ Td = Tt - (Tq + Tw + Tc + Tr)
+
+ Timers with "-1" values have to be excluded from this equation. In TCP
+ mode, "Tq" and "Tr" have to be excluded too. Note that "Tt" can never be
+ negative.
+
+These timers provide valuable indications on trouble causes. Since the TCP
+protocol defines retransmit delays of 3, 6, 12... seconds, we know for sure
+that timers close to multiples of 3s are nearly always related to lost packets
+due to network problems (wires, negotiation, congestion). Moreover, if "Tt" is
+close to a timeout value specified in the configuration, it often means that a
+session has been aborted on timeout.
+
+Most common cases :
+
+ - If "Tq" is close to 3000, a packet has probably been lost between the
+ client and the proxy. This is very rare on local networks but might happen
+ when clients are on far remote networks and send large requests. It may
+ happen that values larger than usual appear here without any network cause.
+ Sometimes, during an attack or just after a resource starvation has ended,
+ haproxy may accept thousands of connections in a few milliseconds. The time
+ spent accepting these connections will inevitably slightly delay processing
+ of other connections, and it can happen that request times in the order of
+ a few tens of milliseconds are measured after a few thousands of new
+ connections have been accepted at once. Setting "option http-server-close"
+ may display larger request times since "Tq" also measures the time spent
+ waiting for additional requests.
+
+ - If "Tc" is close to 3000, a packet has probably been lost between the
+ server and the proxy during the server connection phase. This value should
+ always be very low, such as 1 ms on local networks and less than a few tens
+ of ms on remote networks.
+
+ - If "Tr" is nearly always lower than 3000 except for some rare values which
+ seem to be the average increased by 3000, some packets have probably been
+ lost between the proxy and the server.
+
+ - If "Tt" is large even for small byte counts, it is generally because
+ neither the client nor the server decides to close the connection, for
+ instance because both have agreed on a keep-alive connection mode. To
+ solve this issue, specify "option httpclose" on either the frontend or
+ the backend. If the problem persists, it means that the server ignores
+ the "close" connection mode and expects the client to close, in which
+ case "option forceclose" will be required. Keeping "Tt" as small as
+ possible is important when connection regulation is used with the
+ "maxconn" option on the servers, since no new connection will be sent
+ to the server until another one is released.
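+
+ As an illustrative sketch, requesting the "close" connection mode to keep
+ "Tt" small on keep-alive traffic (the frontend name is made up) :
+
+ frontend www
+ mode http
+ # request the "close" connection mode so that sessions do not
+ # linger after the data transfer completes
+ option httpclose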
+
+Other noticeable HTTP log cases ('xx' means any value to be ignored) :
+
+ Tq/Tw/Tc/Tr/+Tt The "option logasap" is present on the frontend and the log
+ was emitted before the data phase. All the timers are valid
+ except "Tt" which is shorter than reality.
+
+ -1/xx/xx/xx/Tt The client was not able to send a complete request in time
+ or it aborted too early. Check the session termination flags
+ then "timeout http-request" and "timeout client" settings.
+
+ Tq/-1/xx/xx/Tt It was not possible to process the request, maybe because
+ servers were out of service, because the request was invalid
+ or forbidden by ACL rules. Check the session termination
+ flags.
+
+ Tq/Tw/-1/xx/Tt The connection could not be established to the server.
+ Either the server actively refused it or it timed out
+ after Tt-(Tq+Tw) ms.
+ Check the session termination flags, then check the
+ "timeout connect" setting. Note that the tarpit action might
+ return similar-looking patterns, with "Tw" equal to the time
+ the client connection was maintained open.
+
+ Tq/Tw/Tc/-1/Tt The server has accepted the connection but did not return
+ a complete response in time, or it closed its connection
+ unexpectedly after Tt-(Tq+Tw+Tc) ms. Check the session
+ termination flags, then check the "timeout server" setting.
+
+
+8.5. Session state at disconnection
+-----------------------------------
+
+TCP and HTTP logs provide a session termination indicator in the
+"termination_state" field, just before the number of active connections. It is
+2 characters long in TCP mode, and is extended to 4 characters in HTTP mode,
+each of which has a special meaning :
+
+ - On the first character, a code reporting the first event which caused the
+ session to terminate :
+
+ C : the TCP session was unexpectedly aborted by the client.
+
+ S : the TCP session was unexpectedly aborted by the server, or the
+ server explicitly refused it.
+
+ P : the session was prematurely aborted by the proxy, because of a
+ connection limit enforcement, because a DENY filter was matched, or
+ because of a security check which detected and blocked a dangerous
+ error in the server's response which might have caused an
+ information leak (eg: a cacheable cookie).
+
+ L : the session was locally processed by haproxy and was not passed to
+ a server. This is what happens for stats and redirects.
+
+ R : a resource on the proxy has been exhausted (memory, sockets, source
+ ports, ...). Usually, this appears during the connection phase, and
+ system logs should contain a copy of the precise error. If this
+ happens, it must be considered as a very serious anomaly which
+ should be fixed as soon as possible by any means.
+
+ I : an internal error was identified by the proxy during a self-check.
+ This should NEVER happen, and you are encouraged to report any log
+ containing this, because this would almost certainly be a bug. It
+ would be wise to preventively restart the process after such an
+ event too, in case it was caused by memory corruption.
+
+ D : the session was killed by haproxy because the server was detected
+ as down and was configured to kill all connections when going down.
+
+ U : the session was killed by haproxy on this backup server because an
+ active server was detected as up and was configured to kill all
+ backup connections when going up.
+
+ K : the session was actively killed by an admin operating on haproxy.
+
+ c : the client-side timeout expired while waiting for the client to
+ send or receive data.
+
+ s : the server-side timeout expired while waiting for the server to
+ send or receive data.
+
+ - : normal session completion, both the client and the server closed
+ with nothing left in the buffers.
+
+ - on the second character, the TCP or HTTP session state when it was closed :
+
+ R : the proxy was waiting for a complete, valid REQUEST from the client
+ (HTTP mode only). Nothing was sent to any server.
+
+ Q : the proxy was waiting in the QUEUE for a connection slot. This can
+ only happen when servers have a 'maxconn' parameter set. It can
+ also happen in the global queue after a redispatch following a
+ failed attempt to connect to a dying server. If no redispatch is
+ reported, then no connection attempt was made to any server.
+
+ C : the proxy was waiting for the CONNECTION to establish on the
+ server. The server might at most have noticed a connection attempt.
+
+ H : the proxy was waiting for complete, valid response HEADERS from the
+ server (HTTP only).
+
+ D : the session was in the DATA phase.
+
+ L : the proxy was still transmitting LAST data to the client while the
+ server had already finished. This one is very rare as it can only
+ happen when the client dies while receiving the last packets.
+
+ T : the request was tarpitted. It has been held open with the client
+ during the whole "timeout tarpit" duration or until the client
+ closed, both of which will be reported in the "Tw" timer.
+
+ - : normal session completion after end of data transfer.
+
+ - the third character tells whether the persistence cookie was provided by
+ the client (only in HTTP mode) :
+
+ N : the client provided NO cookie. This is usually the case for new
+ visitors, so counting the number of occurrences of this flag in the
+ logs generally indicates a valid trend for the site's traffic.
+
+ I : the client provided an INVALID cookie matching no known server.
+ This might be caused by a recent configuration change, mixed
+ cookies between HTTP/HTTPS sites, persistence conditionally
+ ignored, or an attack.
+
+ D : the client provided a cookie designating a server which was DOWN,
+ so either "option persist" was used and the client was sent to
+ this server, or it was not set and the client was redispatched to
+ another server.
+
+ V : the client provided a VALID cookie, and was sent to the associated
+ server.
+
+ E : the client provided a valid cookie, but with a last date which was
+ older than what is allowed by the "maxidle" cookie parameter, so
+ the cookie is considered EXPIRED and is ignored. The request will be
+ redispatched just as if there was no cookie.
+
+ O : the client provided a valid cookie, but with a first date which was
+ older than what is allowed by the "maxlife" cookie parameter, so
+ the cookie is considered too OLD and is ignored. The request will be
+ redispatched just as if there was no cookie.
+
+ U : a cookie was present but was not used to select the server because
+ some other server selection mechanism was used instead (typically a
+ "use-server" rule).
+
+ - : does not apply (no cookie set in configuration).
+
+ - the last character reports what operations were performed on the persistence
+ cookie returned by the server (only in HTTP mode) :
+
+ N : NO cookie was provided by the server, and none was inserted either.
+
+ I : no cookie was provided by the server, and the proxy INSERTED one.
+ Note that in "cookie insert" mode, if the server provides a cookie,
+ it will still be overwritten and reported as "I" here.
+
+ U : the proxy UPDATED the last date in the cookie that was presented by
+ the client. This can only happen in insert mode with "maxidle". It
+ happens every time there is activity at a different date than the
+ date indicated in the cookie. If any other change happens, such as
+ a redispatch, then the cookie will be marked as inserted instead.
+
+ P : a cookie was PROVIDED by the server and transmitted as-is.
+
+ R : the cookie provided by the server was REWRITTEN by the proxy, which
+ happens in "cookie rewrite" or "cookie prefix" modes.
+
+ D : the cookie provided by the server was DELETED by the proxy.
+
+ - : does not apply (no cookie set in configuration).
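+
+ Which of these cookie flags may appear depends on the backend's cookie
+ configuration. As an illustrative sketch (all names and addresses are
+ made up) :
+
+ backend app
+ mode http
+ # insert a persistence cookie ; "maxidle" and "maxlife" make the
+ # E/O request flags and the U response flag above possible
+ cookie SRVID insert indirect nocache maxidle 30m maxlife 8h
+ server s1 192.168.1.1:80 cookie s1
+ server s2 192.168.1.2:80 cookie s2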
+
+The combination of the first two flags gives a lot of information about what
+was happening when the session terminated, and why it did terminate. It can be
+helpful to detect server saturation, network troubles, local system resource
+starvation, attacks, etc...
+
+The most common termination flags combinations are indicated below. They are
+alphabetically sorted, with the lowercase set just after the upper case for
+easier finding and understanding.
+
+ Flags Reason
+
+ -- Normal termination.
+
+ CC The client aborted before the connection could be established to the
+ server. This can happen when haproxy tries to connect to a recently
+ dead (or unchecked) server, and the client aborts while haproxy is
+ waiting for the server to respond or for "timeout connect" to expire.
+
+ CD The client unexpectedly aborted during data transfer. This can be
+ caused by a browser crash, by an intermediate equipment between the
+ client and haproxy which decided to actively break the connection,
+ by network routing issues between the client and haproxy, or by a
+ keep-alive session between the server and the client terminated first
+ by the client.
+
+ cD The client neither sent nor acknowledged any data for as long as the
+ "timeout client" delay. This is often caused by network failures on
+ the client side, or by the client leaving the network uncleanly.
+
+ CH The client aborted while waiting for the server to start responding.
+ It might be the server taking too long to respond or the client
+ clicking the 'Stop' button too fast.
+
+ cH The "timeout client" struck while waiting for client data during a
+ POST request. This is sometimes caused by too large TCP MSS values
+ for PPPoE networks which cannot transport full-sized packets. It can
+ also happen when client timeout is smaller than server timeout and
+ the server takes too long to respond.
+
+ CQ The client aborted while its session was queued, waiting for a server
+ with enough empty slots to accept it. It might be that either all the
+ servers were saturated or that the assigned server was taking too
+ long to respond.
+
+ CR The client aborted before sending a full HTTP request. Most likely
+ the request was typed by hand using a telnet client, and aborted
+ too early. The HTTP status code is likely a 400 here. Sometimes this
+ might also be caused by an IDS killing the connection between haproxy
+ and the client. "option http-ignore-probes" can be used to ignore
+ connections without any data transfer.
+
+ cR The "timeout http-request" struck before the client sent a full HTTP
+ request. This is sometimes caused by too large TCP MSS values on the
+ client side for PPPoE networks which cannot transport full-sized
+ packets, or by clients sending requests by hand and not typing fast
+ enough, or forgetting to enter the empty line at the end of the
+ request. The HTTP status code is likely a 408 here. Note: recently,
+ some browsers started to implement a "pre-connect" feature consisting
+ of speculatively connecting to some recently visited web sites just
+ in case the user would like to visit them. This results in many
+ connections being established to web sites, which end up in 408
+ Request Timeout if the timeout strikes first, or 400 Bad Request when
+ the browser decides to close them first. These connections pollute
+ the logs and feed the error counters. Some versions of some browsers
+ have even been reported to display the error code. It is possible to
+ work around the undesirable effects of this behaviour by adding
+ "option http-ignore-probes" in the frontend, resulting in connections
+ with zero data transfer being totally ignored. This will definitely
+ hide the errors of people experiencing connectivity issues though.
+
+ CT The client aborted while its session was tarpitted. It is important to
+ check if this happens on valid requests, in order to be sure that no
+ wrong tarpit rules have been written. If a lot of them happen, it
+ might make sense to lower the "timeout tarpit" value to something
+ closer to the average reported "Tw" timer, in order not to consume
+ resources for just a few attackers.
+
+ LR The request was intercepted and locally handled by haproxy. Generally
+ it means that this was a redirect or a stats request.
+
+ SC The server or an equipment between it and haproxy explicitly refused
+ the TCP connection (the proxy received a TCP RST or an ICMP message
+ in return). Under some circumstances, it can also be the network
+ stack telling the proxy that the server is unreachable (eg: no route,
+ or no ARP response on local network). When this happens in HTTP mode,
+ the status code is likely a 502 or 503 here.
+
+ sC The "timeout connect" struck before a connection to the server could
+ complete. When this happens in HTTP mode, the status code is likely a
+ 503 or 504 here.
+
+ SD The connection to the server died with an error during the data
+ transfer. This usually means that haproxy has received an RST from
+ the server or an ICMP message from an intermediate equipment while
+ exchanging data with the server. This can be caused by a server crash
+ or by a network issue on an intermediate equipment.
+
+ sD The server neither sent nor acknowledged any data for as long as the
+ "timeout server" setting during the data phase. This is often caused
+ by too short timeouts on L4 equipments before the server (firewalls,
+ load-balancers, ...), as well as keep-alive sessions maintained
+ between the client and the server expiring first on haproxy.
+
+ SH The server aborted before sending its full HTTP response headers, or
+ it crashed while processing the request. Since a server aborting at
+ this moment is very rare, it would be wise to inspect its logs to
+ check whether it crashed and why. The logged request may indicate a
+ small set of faulty requests, demonstrating bugs in the application.
+ Sometimes this might also be caused by an IDS killing the connection
+ between haproxy and the server.
+
+ sH The "timeout server" struck before the server could return its
+ response headers. This is the most common anomaly, indicating too
+ long transactions, probably caused by server or database saturation.
+ The immediate workaround consists in increasing the "timeout server"
+ setting, but it is important to keep in mind that the user experience
+ will suffer from these long response times. The only long term
+ solution is to fix the application.
+
+ sQ The session spent too much time in queue and expired. See
+ the "timeout queue" and "timeout connect" settings to find out how to
+ fix this if it happens too often. If it often happens massively in
+ short periods, it may indicate general problems on the affected
+ servers due to I/O or database congestion, or saturation caused by
+ external attacks.
+
+ PC The proxy refused to establish a connection to the server because the
+ process' socket limit has been reached while attempting to connect.
+ The global "maxconn" parameter may be increased in the configuration
+ so that it does not happen anymore. This status is very rare and
+ might happen when the global "ulimit-n" parameter is forced by hand.
+
+ PD The proxy blocked an incorrectly formatted chunked encoded message in
+ a request or a response, after the server has emitted its headers. In
+ most cases, this will indicate an invalid message from the server to
+ the client. Haproxy supports chunk sizes of up to 2GB - 1 (2147483647
+ bytes). Any larger size will be considered as an error.
+
+ PH The proxy blocked the server's response, because it was invalid,
+ incomplete, dangerous (cache control), or matched a security filter.
+ In any case, an HTTP 502 error is sent to the client. One possible
+ cause for this error is an invalid syntax in an HTTP header name
+ containing unauthorized characters. It is also possible but quite
+ rare, that the proxy blocked a chunked-encoding request from the
+ client due to an invalid syntax, before the server responded. In this
+ case, an HTTP 400 error is sent to the client and reported in the
+ logs.
+
+ PR The proxy blocked the client's HTTP request, either because of an
+ invalid HTTP syntax, in which case it returned an HTTP 400 error to
+ the client, or because a deny filter matched, in which case it
+ returned an HTTP 403 error.
+
+ PT The proxy blocked the client's request and has tarpitted its
+ connection before returning it a 500 server error. Nothing was sent
+ to the server. The connection was maintained open for as long as
+ reported by the "Tw" timer field.
+
+ RC A local resource has been exhausted (memory, sockets, source ports)
+ preventing the connection to the server from establishing. The error
+ logs will tell precisely what was missing. This is very rare and can
+ only be solved by proper system tuning.
+
+The combination of the last two flags gives a lot of information about how
+persistence was handled by the client, the server and by haproxy. This is very
+important to troubleshoot disconnections, when users complain they have to
+re-authenticate. The commonly encountered flags are :
+
+ -- Persistence cookie is not enabled.
+
+ NN No cookie was provided by the client, none was inserted in the
+ response. For instance, this can happen in insert mode with
+ "postonly" set, on a GET request.
+
+ II A cookie designating an invalid server was provided by the client,
+ a valid one was inserted in the response. This typically happens when
+ a "server" entry is removed from the configuration, since its cookie
+ value can be presented by a client when no other server knows it.
+
+ NI No cookie was provided by the client, one was inserted in the
+ response. This typically happens for first requests from every user
+ in "insert" mode, which makes it an easy way to count real users.
+
+ VN A cookie was provided by the client, none was inserted in the
+ response. This happens for most responses for which the client has
+ already got a cookie.
+
+ VU A cookie was provided by the client, with a last visit date which is
+ not completely up-to-date, so an updated cookie was provided in
+ response. This can also happen if there was no date at all, or if
+ there was a date but the "maxidle" parameter was not set, so that the
+ cookie can be switched to unlimited time.
+
+ EI A cookie was provided by the client, with a last visit date which is
+ too old for the "maxidle" parameter, so the cookie was ignored and a
+ new cookie was inserted in the response.
+
+ OI A cookie was provided by the client, with a first visit date which is
+ too old for the "maxlife" parameter, so the cookie was ignored and a
+ new cookie was inserted in the response.
+
+ DI The server designated by the cookie was down, a new server was
+ selected and a new cookie was emitted in the response.
+
+ VI The server designated by the cookie was not marked dead but could not
+ be reached. A redispatch happened and selected another one, which was
+ then advertised in the response.
+
+
+8.6. Non-printable characters
+-----------------------------
+
+In order not to cause trouble to log analysis tools or terminals while
+reviewing logs, non-printable characters are not sent as-is into log files,
+but are converted to the two-digit hexadecimal representation of their ASCII
+code, prefixed by the character '#'. The only characters that can be logged
+without being escaped are those with codes between 32 and 126 (inclusive).
+Obviously, the escape character '#' itself is also encoded to avoid any
+ambiguity ("#23"). The same applies to the character '"' which becomes "#22",
+as well as '{', '|' and '}' when logging headers.
+
+Note that the space character (' ') is not encoded in headers, which can cause
+issues for tools relying on space count to locate fields. A typical header
+containing spaces is "User-Agent".
+
+Last, it has been observed that some syslog daemons such as syslog-ng escape
+the quote ('"') with a backslash ('\'). The reverse operation can safely be
+performed since no quote may appear anywhere else in the logs.
+
+
+8.7. Capturing HTTP cookies
+---------------------------
+
+Cookie capture simplifies the tracking of a complete user session. This can be
+achieved using the "capture cookie" statement in the frontend. Please refer to
+section 4.2 for more details. Only one cookie can be captured, and the same
+cookie will simultaneously be checked in the request ("Cookie:" header) and in
+the response ("Set-Cookie:" header). The respective values will be reported in
+the HTTP logs at the "captured_request_cookie" and "captured_response_cookie"
+locations (see section 8.2.3 about HTTP log format). When either cookie is
+not seen, a dash ('-') replaces the value. This way, it's easy to detect when a
+user switches to a new session for example, because the server will reassign it
+a new cookie. It is also possible to detect if a server unexpectedly sets a
+wrong cookie to a client, leading to session crossing.
+
+ Examples :
+ # capture the first cookie whose name starts with "ASPSESSION"
+ capture cookie ASPSESSION len 32
+
+ # capture the first cookie whose name is exactly "vgnvisitor"
+ capture cookie vgnvisitor= len 32
+
+
+8.8. Capturing HTTP headers
+---------------------------
+
+Header captures are useful to track unique request identifiers set by an upper
+proxy, virtual host names, user-agents, POST content-length, referrers, etc. In
+the response, one can search for information about the response length, how the
+server asked the cache to behave, or an object location during a redirection.
+
+Header captures are performed using the "capture request header" and "capture
+response header" statements in the frontend. Please consult their definition in
+section 4.2 for more details.
+
+It is possible to include both request headers and response headers at the same
+time. Non-existent headers are logged as empty strings, and if one header
+appears more than once, only its last occurrence will be logged. Request headers
+are grouped within braces '{' and '}' in the same order as they were declared,
+and delimited with a vertical bar '|' without any space. Response headers
+follow the same representation, but are displayed after a space following the
+request headers block. These blocks are displayed just before the HTTP request
+in the logs.
+
+As a special case, it is possible to specify an HTTP header capture in a TCP
+frontend. The purpose is to enable logging of headers which will be parsed in
+an HTTP backend if the request is then switched to this HTTP backend.
+
+ Example :
+ # This instance chains to the outgoing proxy
+ listen proxy-out
+ mode http
+ option httplog
+ option logasap
+ log global
+ server cache1 192.168.1.1:3128
+
+ # log the name of the virtual server
+ capture request header Host len 20
+
+ # log the amount of data uploaded during a POST
+ capture request header Content-Length len 10
+
+ # log the beginning of the referrer
+ capture request header Referer len 20
+
+ # server name (useful for outgoing proxies only)
+ capture response header Server len 20
+
+ # logging the content-length is useful with "option logasap"
+ capture response header Content-Length len 10
+
+ # log the expected cache behaviour on the response
+ capture response header Cache-Control len 8
+
+ # the Via header will report the next proxy's name
+ capture response header Via len 20
+
+ # log the URL location during a redirection
+ capture response header Location len 20
+
+ >>> Aug 9 20:26:09 localhost \
+ haproxy[2022]: 127.0.0.1:34014 [09/Aug/2004:20:26:09] proxy-out \
+ proxy-out/cache1 0/0/0/162/+162 200 +350 - - ---- 0/0/0/0/0 0/0 \
+ {fr.adserver.yahoo.co||http://fr.f416.mail.} {|864|private||} \
+ "GET http://fr.adserver.yahoo.com/"
+
+ >>> Aug 9 20:30:46 localhost \
+ haproxy[2022]: 127.0.0.1:34020 [09/Aug/2004:20:30:46] proxy-out \
+ proxy-out/cache1 0/0/0/182/+182 200 +279 - - ---- 0/0/0/0/0 0/0 \
+ {w.ods.org||} {Formilux/0.1.8|3495|||} \
+ "GET http://trafic.1wt.eu/ HTTP/1.1"
+
+ >>> Aug 9 20:30:46 localhost \
+ haproxy[2022]: 127.0.0.1:34028 [09/Aug/2004:20:30:46] proxy-out \
+ proxy-out/cache1 0/0/2/126/+128 301 +223 - - ---- 0/0/0/0/0 0/0 \
+ {www.sytadin.equipement.gouv.fr||http://trafic.1wt.eu/} \
+ {Apache|230|||http://www.sytadin.} \
+ "GET http://www.sytadin.equipement.gouv.fr/ HTTP/1.1"
+
+
+8.9. Examples of logs
+---------------------
+
+These are real-world examples of logs, accompanied by an explanation. Some of
+them have been made up by hand. The syslog part has been removed for easier
+reading. Their sole purpose is to explain how to decipher them.
+
+ >>> haproxy[674]: 127.0.0.1:33318 [15/Oct/2003:08:31:57.130] px-http \
+ px-http/srv1 6559/0/7/147/6723 200 243 - - ---- 5/3/3/1/0 0/0 \
+ "HEAD / HTTP/1.0"
+
+ => long request (6.5s) entered by hand through 'telnet'. The server replied
+ in 147 ms, and the session ended normally ('----')
+
+ >>> haproxy[674]: 127.0.0.1:33319 [15/Oct/2003:08:31:57.149] px-http \
+ px-http/srv1 6559/1230/7/147/6870 200 243 - - ---- 324/239/239/99/0 \
+ 0/9 "HEAD / HTTP/1.0"
+
+ => Idem, but the request was queued in the global queue behind 9 other
+ requests, and waited there for 1230 ms.
+
+ >>> haproxy[674]: 127.0.0.1:33320 [15/Oct/2003:08:32:17.654] px-http \
+ px-http/srv1 9/0/7/14/+30 200 +243 - - ---- 3/3/3/1/0 0/0 \
+ "GET /image.iso HTTP/1.0"
+
+ => request for a long data transfer. The "logasap" option was specified, so
+ the log was produced just before transferring data. The server replied in
+ 14 ms, 243 bytes of headers were sent to the client, and total time from
+ accept to first data byte is 30 ms.
+
+ >>> haproxy[674]: 127.0.0.1:33320 [15/Oct/2003:08:32:17.925] px-http \
+ px-http/srv1 9/0/7/14/30 502 243 - - PH-- 3/2/2/0/0 0/0 \
+ "GET /cgi-bin/bug.cgi? HTTP/1.0"
+
+ => the proxy blocked a server response either because of an "rspdeny" or
+ "rspideny" filter, or because the response was improperly formatted and
+ not HTTP-compliant, or because it blocked sensitive information which
+ risked being cached. In this case, the response is replaced with a "502
+ bad gateway". The flags ("PH--") tell us that it was haproxy who decided
+ to return the 502 and not the server.
+
+ >>> haproxy[18113]: 127.0.0.1:34548 [15/Oct/2003:15:18:55.798] px-http \
+ px-http/<NOSRV> -1/-1/-1/-1/8490 -1 0 - - CR-- 2/2/2/0/0 0/0 ""
+
+ => the client never completed its request and aborted itself ("C---") after
+ 8.5s, while the proxy was waiting for the request headers ("-R--").
+ Nothing was sent to any server.
+
+ >>> haproxy[18113]: 127.0.0.1:34549 [15/Oct/2003:15:19:06.103] px-http \
+ px-http/<NOSRV> -1/-1/-1/-1/50001 408 0 - - cR-- 2/2/2/0/0 0/0 ""
+
+ => The client never completed its request, which was aborted by the
+ time-out ("c---") after 50s, while the proxy was waiting for the request
+ headers ("-R--"). Nothing was sent to any server, but the proxy could
+ send a 408 return code to the client.
+
+ >>> haproxy[18989]: 127.0.0.1:34550 [15/Oct/2003:15:24:28.312] px-tcp \
+ px-tcp/srv1 0/0/5007 0 cD 0/0/0/0/0 0/0
+
+ => This log was produced with "option tcplog". The client timed out after
+ 5 seconds ("c----").
+
+ >>> haproxy[18989]: 10.0.0.1:34552 [15/Oct/2003:15:26:31.462] px-http \
+ px-http/srv1 3183/-1/-1/-1/11215 503 0 - - SC-- 205/202/202/115/3 \
+ 0/0 "HEAD / HTTP/1.0"
+
+ => The request took 3s to complete (probably a network problem), and the
+ connection to the server failed ('SC--') after 4 attempts of 2 seconds
+ (config says 'retries 3'), and no redispatch (otherwise we would have
+ seen "/+3"). Status code 503 was returned to the client. There were 115
+ connections on this server, 202 connections on this proxy, and 205 on
+ the global process. It is possible that the server refused the
+ connection because of too many already established.
+
+
+/*
+ * Local variables:
+ * fill-column: 79
+ * End:
+ */
--- /dev/null
+2011/04/13 : List of possible cookie settings with associated behaviours.
+
+PSV="preserve", PFX="prefix", INS="insert", REW="rewrite", IND="indirect"
+0 = option not set
+1 = option is set
+* = option doesn't matter
+
+PSV PFX INS REW IND Behaviour
+ 0 0 0 0 0 passive mode
+ 0 0 0 0 1 passive + indirect : remove response if not needed
+ 0 0 0 1 0 always rewrite response
+ 0 0 1 0 0 always insert or replace response
+ 0 0 1 0 1 insert + indirect : remove req and also resp if not needed
+ * * 1 1 * [ forbidden ]
+ 0 1 0 0 0 prefix
+ 0 1 0 0 1 !! prefix on request, remove response cookie if not needed
+ * 1 * 1 * [ forbidden ]
+ * 1 1 * * [ forbidden ]
+ * * * 1 1 [ forbidden ]
+ 1 * 0 * 0 [ forbidden ]
+ 1 0 0 0 1 passive mode (alternate form)
+ 1 0 1 0 0 insert only, and preserve server response cookie if any
+ 1 0 1 0 1 conditional insert only for new requests
+ 1 1 0 0 1 prefix on requests only (passive prefix)
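+For illustration, the widely used "insert + indirect" combination (row
+0 0 1 0 1 above) corresponds to a configuration fragment like the following
+(backend name, cookie name and server addresses are made up for the example):

```
backend app
    cookie SRVID insert indirect
    server srv1 192.168.1.10:80 cookie s1
    server srv2 192.168.1.11:80 cookie s2
```

+With this combination, haproxy inserts the SRVID cookie itself, strips it from
+requests before passing them to the server, and removes it from responses when
+persistence is not needed.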
+
--- /dev/null
+1 generic "entity" type, with the following attributes :
+
+ - frontend *f
+ - l7switch *s
+ - backend *b
+
+specific types are simply entities with some of these fields
+filled in, and not necessarily all of them :
+
+ listen = f [s] b
+ frontend = f [s]
+ l7switch = s
+ backend = [s] b
+
+Then, processing is evaluated in this order :
+ - listen -> if it has l7 rules, evaluate them, and possibly branch to other listen, l7 or back instances, or work with the local back.
+ - frontend -> if it has l7 rules, evaluate them, and possibly branch to other listen, l7 or back instances
+ - l7switch -> evaluate its rules, possibly branching to other listen, l7 or backend instances
+ - backend -> if it has l7 rules, evaluate them (possibly changing backend once more), then process.
+
+Requests are processed along the f->s*->b chain, and responses must be
+processed in the reverse order b->s*->f. Keep in mind the rewriting of the
+Host field on the way in and of Location on the way back.
+
+Also, plan for "profiles" rather than blocks of new default parameters.
+This will make it possible to have many sets of default parameters to use
+in each of these types.
--- /dev/null
+There has been a lot of confusion during the development because of the
+backends and frontends.
+
+What we want :
+
+- being able to still use a listener as it has always been working
+
+- being able to write a rule stating that we will *change* the backend when we
+ match some pattern. Only one jump is allowed.
+
+- being able to write a "use_filters_from XXX" line stating that we will ignore
+ any filter in the current listener, and that those from XXX will be borrowed
+ instead. A warning would be welcome for options which will silently get
+ masked. This is used to factor configuration.
+
+- being able to write a "use_backend_from XXX" line stating that we will ignore
+ any server and timeout config in the current listener, and that those from
+ XXX will be borrowed instead. A warning would be welcome for options which
+ will silently get masked. This is used to factor configuration.
+
+
+
+Example :
+---------
+
+ | # frontend HTTP/80
+ | listen fe_http 1.1.1.1:80
+ | use_filters_from default_http
+ | use_backend_from appli1
+ |
+ | # frontend HTTPS/443
+ | listen fe_https 1.1.1.1:443
+ | use_filters_from default_https
+ | use_backend_from appli1
+ |
+ | # frontend HTTP/8080
+ | listen fe_http-dev 1.1.1.1:8080
+ | reqadd "X-proto: http"
+ | reqisetbe "^Host: www1" appli1
+ | reqisetbe "^Host: www2" appli2
+ | reqisetbe "^Host: www3" appli-dev
+ | use_backend_from appli1
+ |
+ |
+ | # filters default_http
+ | listen default_http
+ | reqadd "X-proto: http"
+ | reqisetbe "^Host: www1" appli1
+ | reqisetbe "^Host: www2" appli2
+ |
+ | # filters default_https
+ | listen default_https
+ | reqadd "X-proto: https"
+ | reqisetbe "^Host: www1" appli1
+ | reqisetbe "^Host: www2" appli2
+ |
+ |
+ | # backend appli1
+ | listen appli1
+ | reqidel "^X-appli1:.*"
+ | reqadd "Via: appli1"
+ | balance roundrobin
+ | cookie app1
+ | server srv1
+ | server srv2
+ |
+ | # backend appli2
+ | listen appli2
+ | reqidel "^X-appli2:.*"
+ | reqadd "Via: appli2"
+ | balance roundrobin
+ | cookie app2
+ | server srv1
+ | server srv2
+ |
+ | # backend appli-dev
+ | listen appli-dev
+ | reqadd "Via: appli-dev"
+ | use_backend_from appli2
+ |
+ |
+
+
+Now we clearly see multiple things :
+------------------------------------
+
+ - a frontend can EITHER have filters OR reference a use_filter
+
+ - a backend can EITHER have servers OR reference a use_backend
+
+ - we want the evaluation to cross TWO levels per request. When a request is
+ being processed, it keeps track of its "frontend" side (where it came
+ from), and of its "backend" side (where the server-side parameters have
+ been found).
+
+ - the use_{filters|backend} have nothing to do with how the request is
+ decomposed.
+
+
+Conclusion :
+------------
+
+ - a proxy is always its own frontend. It also has 2 parameters :
+ - "fi_prm" : pointer to the proxy holding the filters (itself by default)
+ - "be_prm" : pointer to the proxy holding the servers (itself by default)
+
+ - a request has a frontend (fe) and a backend (be). By default, the backend
+ is initialized to the frontend. Everything related to the client side is
+ accessed through ->fe. Everything related to the server side is accessed
+ through ->be.
+
+ - request filters are first called from ->fe then ->be. Since only the
+ filters can change ->be, it is possible to iterate the filters on ->be
+ only and stop when ->be does not change anymore.
+
+ - response filters are first called from ->be then ->fe IF (fe != be).
+
+
+When we parse the configuration, we immediately configure ->fi_prm and
+->be_prm for all proxies.
+
+Upon session creation, s->fe and s->be are initialized to the proxy. Filters
+are executed via s->fe->fi_prm and s->be->fi_prm. Servers are found in
+s->be->be_prm.
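+The filter iteration described above can be sketched in C as follows. All
+names are hypothetical illustrations of the scheme, not the real haproxy
+structures; in particular, filter_target stands in for the result of
+evaluating a proxy's filter rules (NULL when no rule selects a new backend).

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch: a proxy points to the proxy holding its filters
 * (fi_prm) and the proxy holding its servers (be_prm), itself by default. */
struct proxy {
	struct proxy *fi_prm;        /* proxy holding the filters */
	struct proxy *be_prm;        /* proxy holding the servers */
	struct proxy *filter_target; /* backend selected by the filters, or NULL */
};

struct session {
	struct proxy *fe;            /* frontend side, never changes */
	struct proxy *be;            /* backend side, initialized to fe */
};

static void process_request_filters(struct session *s)
{
	struct proxy *prev;

	/* request filters are first called from ->fe, and may reassign ->be */
	if (s->fe->fi_prm->filter_target)
		s->be = s->fe->fi_prm->filter_target;

	/* then iterate the filters on ->be until ->be stops changing */
	do {
		prev = s->be;
		if (s->be->fi_prm->filter_target)
			s->be = s->be->fi_prm->filter_target;
	} while (s->be != prev);
}
```

+Response filters would then run the other way, from ->be back to ->fe when
+(fe != be), as stated above.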
+
--- /dev/null
+- PR_O_TRANSP => FE !!! may have to change, since it complements the dispatch mode.
+- PR_O_NULLNOLOG => FE
+- PR_O_HTTP_CLOSE => FE. !!! also set BE !!!
+- PR_O_TCP_CLI_KA => FE
+
+- PR_O_FWDFOR => BE. FE as well ?
+- PR_O_FORCE_CLO => BE
+- PR_O_PERSIST => BE
+- PR_O_COOK_RW, PR_O_COOK_INS, PR_O_COOK_PFX, PR_O_COOK_POST => BE
+- PR_O_COOK_NOC, PR_O_COOK_IND => BE
+- PR_O_ABRT_CLOSE => BE
+- PR_O_REDISP => BE
+- PR_O_BALANCE, PR_O_BALANCE_RR, PR_O_BALANCE_SH => BE
+- PR_O_CHK_CACHE => BE
+- PR_O_TCP_SRV_KA => BE
+- PR_O_BIND_SRC => BE
+- PR_O_TPXY_MASK => BE
+
+
+- PR_MODE_TCP : BE on the server side, FE on the client side
+
+- nbconn -> fe->nbconn, be->nbconn.
+ Problem: make (fe == be) impossible before doing this, otherwise
+ connections will be counted twice. This will only be possible once
+ FEs and BEs are distinct entities. So we will start by keeping only
+ fe->nbconn (since the fe does not change), and modify this later,
+ if only to correctly take minconn/maxconn into account.
+ => solution : have beconn and feconn in each proxy.
+
+- failed_conns, failed_secu (blocked responses), failed_resp... : be
+ Warning: see the ERR_SRVCL cases, it seems this is sometimes reported
+ while there is a write error on the client side (eg: line
+ 2044 in proto_http).
+
+ => be and not be->beprm
+
+- backup logs : ->be (same)
+
+- queue : be
+
+- logs/debug : srv is always associated with be (eg: proxy->id:srv->id).
+ Nothing for the client for the moment. Generally speaking, errors
+ raised on the server side go to BE and those raised on the client
+ side go to FE.
+- logswait & LW_BYTES : FE (since we want to know whether to log right away)
+
+- custom error messages (errmsg, ...) -> fe
+
+- monitor_uri -> fe
+- uri_auth -> (fe->fiprm then be->fiprm). Use of ->be
+
+- req_add, req_exp => fe->fiprm, then be->fiprm
+- req_cap, rsp_cap -> fe->fiprm
+- rsp_add, rsp_exp => be->fiprm, should then also be done on fe->fiprm
+- capture_name, capture_namelen : fe->fiprm
+
+ This is not the ideal solution, but at least the capture is configurable
+ by the FE's filters and does not move when the BE is reassigned. It also
+ solves a memory allocation problem.
+
+
+- persistence (appsessions, cookiename, ...) -> be
+- stats:scope "." = fe (the one through which the request arrives)
+ !!!ERROR!!! => use be to get the one validated by uri_auth.
+
+
+--------- fixes to apply ---------
+
+- header replacement : parse the header and possibly remove it, then re-add it.
+- session->proto.{l4state,l7state,l7substate} for CLI and SRV
+- errorloc : if defined in the backend, use it, otherwise the frontend's.
+- logs : use be, otherwise fe.
--- /dev/null
+2013/10/10 - possibilities for setting source and destination addresses
+
+
+When establishing a connection to a remote device, this device is called a
+target, which designates an entity defined in the configuration. A given
+target appears only once in a configuration, and multiple targets may share
+the same settings if needed.
+
+The following types of targets are currently supported :
+
+ - listener : all connections with this type of target come from clients ;
+ - server : connections to such targets are for "server" lines ;
+ - peer : connections to such target address "peer" lines in "peers"
+ sections ;
+ - proxy : these targets are used by "dispatch", "option transparent"
+ or "option http_proxy" statements.
+
+A connection might not be reused between two different targets, even if all
+parameters seem similar. One of the reasons is that some parameters are
+specific to the target and are not easy or cheap to compare (eg: bind to
+interface, mss, ...).
+
+A number of source and destination addresses may be set for a given target.
+
+ - listener :
+ - the "from" address:port is set by accept()
+
+ - the "to" address:port is set if conn_get_to_addr() is called
+
+ - peer :
+ - the "from" address:port is not set
+
+ - the "to" address:port is static and dependent only on the peer
+
+ - server :
+ - the "from" address may be set alone when "source" is used with
+ a forced IP address, or when "usesrc clientip" is used.
+
+ - the "from" port may be set only combined with the address when
+ "source" is used with IP:port, IP:port-range or "usesrc client" is
+ used. Note that in this case, both the address and the port may be
+ 0, meaning that the kernel will pick the address or port and that
+ the final value might not match the one explicitly set (eg:
+ important for logging).
+
+ - the "from" address may be forced from a header which implies it
+ may change between two consecutive requests on the same connection.
+
+ - the "to" address and port are set together when connecting to a
+ regular server, or by copying the client's IP address when
+ "server 0.0.0.0" is used. Note that the destination port may be
+ an offset applied to the original destination port.
+
+ - proxy :
+ - the "from" address may be set alone when "source" is used with a
+ forced IP address or when "usesrc clientip" is used.
+
+ - the "from" port may be set only combined with the address when
+ "source" is used with IP:port or with "usesrc client". There is
+ no ip:port range for a proxy as of now. Same comment applies as
+ above when port and/or address are 0.
+
+ - the "from" address may be forced from a header which implies it
+ may change between two consecutive requests on the same connection.
+
+ - the "to" address and port are set together, either by configuration
+ when "dispatch" is used, or dynamically when "transparent" is used
+ (1:1 with client connection) or "option http_proxy" is used, where
+ each client request may lead to a different destination address.
+
+
+At the moment, there are some limits to what might happen between multiple
+concurrent requests to the same target.
+
+ - peer parameters do not change, so no problem.
+
+ - server parameters may change in this way :
+ - a connection may require a source bound to an IP address found in a
+ header, which will fall back to the "source" settings if the address
+ is not found in this header. This means that the source address may
+ switch between a dynamically forced IP address and another forced
+ IP and/or port range.
+
+ - if the element is not found (eg: header), the remaining "forced"
+ source address might very well be empty (unset), so the connection
+ reuse is acceptable when switching in that direction.
+
+ - it is not possible to switch between client and clientip or any of
+ these and hdr_ip() because they're exclusive.
+
+ - using a source address/port belonging to a port range is compatible
+ with connection reuse because there is a single range per target, so
+ switching from a range to another range means we remain in the same
+ range.
+
+ - destination address may currently not change since the only possible
+ case for dynamic destination address setting is the transparent mode,
+ reproducing the client's destination address.
+
+ - proxy parameters may change in this way :
+ - a connection may require a source bound to an IP address found in a
+ header, which will fall back to the "source" settings if the address
+ is not found in this header. This means that the source address may
+ switch between a dynamically forced IP address and another forced
+ IP and/or port range.
+
+ - if the element is not found (eg: header), the remaining "forced"
+ source address might very well be empty (unset), so the connection
+ reuse is acceptable when switching in that direction.
+
+ - it is not possible to switch between client and clientip or any of
+ these and hdr_ip() because they're exclusive.
+
+ - proxies do not support port ranges at the moment.
+
+ - destination address might change in the case where "option http_proxy"
+ is used.
+
+So, for each source element (IP, port), we want to know :
+ - if the element was assigned by static configuration (eg: ":80")
+ - if the element was assigned from a connection-specific value (eg: usesrc clientip)
+ - if the element was assigned from a configuration-specific range (eg: 1024-65535)
+ - if the element was assigned from a request-specific value (eg: hdr_ip(xff))
+ - if the element was not assigned at all
+
+For the destination, we want to know :
+ - if the element was assigned by static configuration (eg: ":80")
+ - if the element was assigned from a connection-specific value (eg: transparent)
+ - if the element was assigned from a request-specific value (eg: http_proxy)
+
+We don't need to store the information about the origin of the dynamic value
+since we have the value itself. So in practice we have :
+ - default value, unknown (not yet checked with getsockname/getpeername)
+ - default value, known (check done)
+ - forced value (known)
+ - forced range (known)
+
+We can't do that on an ip:port basis because the port may be fixed regardless
+of the address and conversely.
+
+So that means :
+
+ enum {
+ CO_ADDR_NONE = 0, /* not set, unknown value */
+ CO_ADDR_KNOWN = 1, /* not set, known value */
+ CO_ADDR_FIXED = 2, /* fixed value, known */
+ CO_ADDR_RANGE = 3, /* from assigned range, known */
+ } conn_addr_values;
+
+ unsigned int new_l3_src_status:2;
+ unsigned int new_l4_src_status:2;
+ unsigned int new_l3_dst_status:2;
+ unsigned int new_l4_dst_status:2;
+
+ unsigned int cur_l3_src_status:2;
+ unsigned int cur_l4_src_status:2;
+ unsigned int cur_l3_dst_status:2;
+ unsigned int cur_l4_dst_status:2;
+
+ unsigned int new_family:2;
+ unsigned int cur_family:2;
+
+Note: this obsoletes CO_FL_ADDR_FROM_SET and CO_FL_ADDR_TO_SET. These flags
+must be changed to individual l3+l4 checks ORed between old and new values,
+or better, set to cur only which will inherit new.
+
+In the connection, these values may be merged in the same word as err_code.
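+A minimal C sketch of these per-element status fields, with hypothetical
+helper and struct names, showing how the old boolean CO_FL_ADDR_FROM_SET
+becomes a pair of 2-bit checks on the L3 and L4 source elements:

```c
#include <assert.h>

/* Per-element address status, as proposed above. */
enum conn_addr_status {
	CO_ADDR_NONE  = 0, /* not set, unknown value */
	CO_ADDR_KNOWN = 1, /* not set, known value (getsockname/getpeername done) */
	CO_ADDR_FIXED = 2, /* fixed value, known */
	CO_ADDR_RANGE = 3, /* from assigned range, known */
};

/* Hypothetical holder for the "cur" side of the 2-bit fields. */
struct conn_addr_state {
	unsigned int l3_src:2;
	unsigned int l4_src:2;
	unsigned int l3_dst:2;
	unsigned int l4_dst:2;
};

/* Replacement for the old CO_FL_ADDR_FROM_SET flag: the source is "set"
 * once both its L3 and L4 elements carry a known value. */
static int conn_src_is_set(const struct conn_addr_state *st)
{
	return st->l3_src != CO_ADDR_NONE && st->l4_src != CO_ADDR_NONE;
}
```

+The same check, applied to l3_dst/l4_dst, would replace CO_FL_ADDR_TO_SET.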
--- /dev/null
+2012/02/27 - redesigning buffers for better simplicity - w@1wt.eu
+
+1) Analysis
+-----------
+
+Buffer handling becomes complex because buffers are circular but many of their
+users don't support wrapping operations (eg: HTTP parsing). Due to this fact,
+some buffer operations automatically realign buffers as soon as possible when
+the buffer is empty, which makes it very hard to track buffer pointers outside
+of the buffer struct itself. The buffer contains a pointer to last processed
+data (buf->lr) which is automatically realigned with such operations. But in
+the end, its semantics are often unclear and whether it's safe or not to use it
+isn't always obvious, as it has acquired multiple roles over the time.
+
+A "struct buffer" is declared this way :
+
+ struct buffer {
+ unsigned int flags; /* BF_* */
+ int rex; /* expiration date for a read, in ticks */
+ int wex; /* expiration date for a write or connect, in ticks */
+ int rto; /* read timeout, in ticks */
+ int wto; /* write timeout, in ticks */
+ unsigned int l; /* data length */
+ char *r, *w, *lr; /* read ptr, write ptr, last read */
+ unsigned int size; /* buffer size in bytes */
+ unsigned int send_max; /* number of bytes the sender can consume from this buffer, <= l */
+ unsigned int to_forward; /* number of bytes to forward after send_max without a wake-up */
+ unsigned int analysers; /* bit field indicating what to do on the buffer */
+ int analyse_exp; /* expiration date for current analysers (if set) */
+ void (*hijacker)(struct session *, struct buffer *); /* alternative content producer */
+ unsigned char xfer_large; /* number of consecutive large xfers */
+ unsigned char xfer_small; /* number of consecutive small xfers */
+ unsigned long long total; /* total data read */
+ struct stream_interface *prod; /* producer attached to this buffer */
+ struct stream_interface *cons; /* consumer attached to this buffer */
+ struct pipe *pipe; /* non-NULL only when data present */
+ char data[0]; /* <size> bytes */
+ };
+
+In order to address this, a struct http_msg was created with other pointers to
+the buffer. The issue is that some of these pointers are absolute and other
+ones are relative, sometimes one to another, sometimes to the beginning of the
+buffer, which doesn't help at all for the case where buffers get realigned.
+
+A "struct http_msg" is defined this way :
+
+ struct http_msg {
+ unsigned int msg_state;
+ unsigned int flags;
+ unsigned int col, sov; /* current header: colon, start of value */
+ unsigned int eoh; /* End Of Headers, relative to buffer */
+ char *sol; /* start of line, also start of message when fully parsed */
+ char *eol; /* end of line */
+ unsigned int som; /* Start Of Message, relative to buffer */
+ int err_pos; /* err handling: -2=block, -1=pass, 0+=detected */
+ union { /* useful start line pointers, relative to ->sol */
+ struct {
+ int l; /* request line length (not including CR) */
+ int m_l; /* METHOD length (method starts at ->som) */
+ int u, u_l; /* URI, length */
+ int v, v_l; /* VERSION, length */
+ } rq; /* request line : field, length */
+ struct {
+ int l; /* status line length (not including CR) */
+ int v_l; /* VERSION length (version starts at ->som) */
+ int c, c_l; /* CODE, length */
+ int r, r_l; /* REASON, length */
+ } st; /* status line : field, length */
+ } sl; /* start line */
+ unsigned long long chunk_len;
+ unsigned long long body_len;
+ char **cap;
+ };
+
+
+The first immediate observation is that nothing in a buffer should be relative
+to the beginning of the storage area, everything should be relative to the
+buffer's origin as a floating location. Right now the buffer's origin is equal
+to (buf->w + buf->send_max). It is the place where the first byte of data not
+yet scheduled for being forwarded is found.
+
+ - buf->w is an absolute pointer, just like buf->data.
+ - buf->send_max is a relative value which oscillates between 0 when nothing
+ has to be forwarded, and buf->l when the whole buffer must be forwarded.
+
+
+2) Proposal
+-----------
+
+By having such an origin, we could have everything in http_msg relative to this
+origin. This would resist buffer realigns much better than right now.
+
+At the moment we have msg->som which is relative to buf->data and which points
+to the beginning of the message. The beginning of the message should *always*
+be the buffer's origin. If data are to be skipped in the message, just wait for
+send_max to become zero and move the origin forwards ; this would definitely get
+rid of msg->som. This is already what is done in the HTTP parser except that it
+has to move both buf->lr and msg->som.
+
+Following the same principle, we should then have a relative pointer in
+http_msg to replace buf->lr. It would be relative to the buffer's origin and
+would simply recall what location was last visited.
+
+Doing all this could result in more complex operations where more time is spent
+adding buf->w to buf->send_max and then to msg->anything. It would probably make
+more sense to define the buffer's origin as an absolute pointer and to have
+both the buf->h (head) and buf->t (tail) pointers be positive and negative
+positions relative to this origin. Operating on the buffer would then look like
+this :
+
+ - no buf->l anymore. buf->l is replaced by (head + tail)
+ - no buf->lr anymore. Use origin + msg->last for instance
+ - recv() : head += recv(origin + head);
+ - send() : tail -= send(origin - tail, tail);
+ thus, tail effectively replaces buf->send_max.
+ - forward(N) : tail += N; origin += N;
+ - realign() : origin = data
+ - detect risk of wrapping of input : origin + head > data + size
+
+In general it looks like fewer pointers are manipulated for common operations,
+and maybe an additional wrapping test (hand-made modulo) will have to be
+added to the send() and recv() operations.
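+The operations listed above can be sketched in C as follows. All names
+(xbuf, xbuf_forward, ...) are hypothetical illustrations of the scheme, not
+the real haproxy structures. One assumption is made explicit here: since
+head is measured from the origin, forward() also has to decrement it when
+the origin moves, so that origin + head keeps pointing past the input.

```c
#include <assert.h>

/* Sketch of the proposed origin/head/tail buffer. <origin> is an absolute
 * pointer to the first byte not yet scheduled for forwarding; <head> counts
 * input bytes after it, <tail> counts scheduled output bytes before it
 * (tail replaces buf->send_max). */
struct xbuf {
	char data[64];
	char *origin;
	int head;
	int tail;
};

static void xbuf_init(struct xbuf *b)
{
	b->origin = b->data;
	b->head = b->tail = 0;
}

/* replaces buf->l */
static int xbuf_len(const struct xbuf *b)
{
	return b->head + b->tail;
}

/* recv() : head += recv(origin + head) */
static void xbuf_recv(struct xbuf *b, int n)
{
	b->head += n;
}

/* send() : tail -= send(origin - tail, tail) */
static void xbuf_sent(struct xbuf *b, int n)
{
	b->tail -= n;
}

/* forward(N) : schedule N more input bytes for sending */
static void xbuf_forward(struct xbuf *b, int n)
{
	b->tail += n;
	b->head -= n;   /* head is relative to origin, which moves forward */
	b->origin += n;
}

/* risk of wrapping of input : origin + head > data + size */
static int xbuf_input_wraps(const struct xbuf *b)
{
	return b->origin + b->head > b->data + (int)sizeof(b->data);
}
```

+Note that the total length never changes across forward(): the bytes simply
+move from the input side (head) to the output side (tail) of the origin.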
+
+
+3) Caveats
+----------
+
+The first caveat is that the elements to modify appear at a very large number
+of places.
--- /dev/null
+[xfig source of the stream interface diagram, omitted here. The drawing shows
+a file descriptor attached to a stream driver, linked through a buffer to a
+stream processor and a flow analyzer (both schedulable tasks). The stream
+interface is made of functions (queue, shutdown, flush, send, recv) and
+callbacks (read_complete, write_complete, wake_up) whose *_complete forms
+take context (for sessions), source (=buffer) and condition arguments. An
+http_req buffer carries variables (flags, read_complete, write_complete,
+producer_task, consumer_task, flow_analyzer_task), internal flags
+(SHUTR_PENDING, SHUTR_DONE, SHUTW_PENDING, SHUTW_DONE, HOLDW, READ_EN,
+WRITE_EN) and an interface of 1 stream reader, 1 stream writer and 0..n
+stream analyzers.]
+5 1 0 1 0 11 51 -1 20 0.000 0 1 0 0 4725.000 4860.000 4590 4680 4500 4860 4590 5040
+5 1 0 1 0 14 51 -1 20 0.000 0 0 0 0 4725.000 4860.000 4860 5040 4725 5085 4590 5040
+5 1 0 1 0 14 51 -1 20 0.000 0 1 0 0 4725.000 4860.000 4860 4680 4725 4635 4590 4680
+2 2 0 1 14 14 52 -1 20 0.000 2 0 -1 0 0 5
+ 4590 4680 4860 4680 4860 5040 4590 5040 4590 4680
+-6
+6 4275 5400 5175 5670
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5130 5400 5130 5670
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5085 5400 5085 5670
+2 2 0 1 0 6 51 -1 20 0.000 1 0 -1 0 0 5
+ 4275 5400 5175 5400 5175 5670 4275 5670 4275 5400
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 4320 5400 4320 5670
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 4365 5400 4365 5670
+4 1 0 50 -1 16 6 0.0000 4 105 465 4725 5580 http_resp\001
+-6
+6 4275 4050 5175 4320
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5130 4050 5130 4320
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5085 4050 5085 4320
+2 2 0 1 0 6 51 -1 20 0.000 1 0 -1 0 0 5
+ 4275 4050 5175 4050 5175 4320 4275 4320 4275 4050
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 4320 4050 4320 4320
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 4365 4050 4365 4320
+4 1 0 50 -1 16 6 0.0000 4 105 420 4725 4230 http_req\001
+-6
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 5175 4185 5400 4185
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 4050 4185 4275 4185
+2 1 0 1 0 14 50 -1 -1 0.000 1 0 -1 1 1 2
+ 0 0 1.00 30.00 45.00
+ 0 0 1.00 30.00 45.00
+ 4725 4635 4725 4320
+2 1 0 1 0 14 50 -1 -1 0.000 1 0 -1 1 1 2
+ 0 0 1.00 30.00 45.00
+ 0 0 1.00 30.00 45.00
+ 4725 5400 4725 5085
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 4275 5535 4050 5535
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 5400 5535 5175 5535
+4 1 0 50 -1 16 6 0.0000 4 90 285 5625 3870 driver\001
+4 1 0 50 -1 16 6 0.0000 4 105 300 5625 3735 output\001
+4 1 0 50 -1 16 6 0.0000 4 90 285 3825 3870 driver\001
+4 1 0 50 -1 16 6 0.0000 4 105 240 3825 3735 input\001
+4 0 0 50 -1 16 6 0.0000 4 90 405 5040 4950 eg: HTTP\001
+4 1 0 50 -1 16 6 0.0000 4 90 285 5625 6075 driver\001
+4 1 0 50 -1 16 6 0.0000 4 90 285 3825 6075 driver\001
+4 1 0 50 -1 16 6 0.0000 4 105 300 3825 5940 output\001
+4 1 0 50 -1 16 6 0.0000 4 105 240 5625 5940 input\001
+4 0 0 50 -1 16 6 0.0000 4 105 690 5040 4815 flow processor\001
+-6
+6 6480 4050 8820 5400
+5 1 0 1 0 11 51 -1 20 0.000 0 0 0 0 7650.000 4725.000 7785 4545 7875 4725 7785 4905
+5 1 0 1 0 11 51 -1 20 0.000 0 1 0 0 7650.000 4725.000 7515 4545 7425 4725 7515 4905
+5 1 0 1 0 14 51 -1 20 0.000 0 0 0 0 7650.000 4725.000 7785 4905 7650 4950 7515 4905
+5 1 0 1 0 14 51 -1 20 0.000 0 1 0 0 7650.000 4725.000 7785 4545 7650 4500 7515 4545
+6 6480 4455 7020 4995
+5 1 0 1 0 11 51 -1 20 0.000 0 0 0 0 6750.000 4725.000 6750 4500 6975 4725 6750 4950
+5 1 0 1 0 4 51 -1 20 0.000 0 1 0 0 6750.000 4725.000 6750 4500 6525 4725 6750 4950
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 6750 4500 6750 4950
+-6
+6 8280 4455 8820 4995
+5 1 0 1 0 11 51 -1 20 0.000 0 1 0 0 8550.000 4725.000 8550 4500 8325 4725 8550 4950
+5 1 0 1 0 4 51 -1 20 0.000 0 0 0 0 8550.000 4725.000 8550 4500 8775 4725 8550 4950
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 8550 4500 8550 4950
+-6
+6 7200 4050 8100 4320
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 8055 4050 8055 4320
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 8010 4050 8010 4320
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 7245 4050 7245 4320
+2 2 0 1 0 6 51 -1 20 0.000 1 0 -1 0 0 5
+ 7200 4050 8100 4050 8100 4320 7200 4320 7200 4050
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 7290 4050 7290 4320
+4 1 0 50 -1 16 6 0.0000 4 105 420 7650 4230 http_req\001
+-6
+6 7200 5130 8100 5400
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 8055 5130 8055 5400
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 8010 5130 8010 5400
+2 2 0 1 0 6 51 -1 20 0.000 1 0 -1 0 0 5
+ 7200 5130 8100 5130 8100 5400 7200 5400 7200 5130
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 7245 5130 7245 5400
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 7290 5130 7290 5400
+4 1 0 50 -1 16 6 0.0000 4 105 465 7650 5310 http_resp\001
+-6
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 8100 4185 8370 4590
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 6930 4590 7200 4185
+2 1 0 1 0 14 50 -1 -1 0.000 1 0 -1 1 1 2
+ 0 0 1.00 30.00 45.00
+ 0 0 1.00 30.00 45.00
+ 7650 4500 7650 4320
+2 1 0 1 0 14 50 -1 -1 0.000 1 0 -1 1 1 2
+ 0 0 1.00 30.00 45.00
+ 0 0 1.00 30.00 45.00
+ 7650 5130 7650 4950
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 7200 5265 6930 4860
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 8370 4860 8100 5265
+2 2 0 1 14 14 52 -1 20 0.000 2 0 -1 0 0 5
+ 7515 4545 7785 4545 7785 4905 7515 4905 7515 4545
+4 1 0 50 -1 16 6 0.0000 4 60 285 6750 4230 server\001
+4 1 0 50 -1 16 6 0.0000 4 90 285 6750 4365 socket\001
+4 1 0 50 -1 16 6 0.0000 4 90 255 8550 4230 client\001
+4 1 0 50 -1 16 6 0.0000 4 90 285 8550 4365 socket\001
+4 1 0 50 -1 16 6 0.0000 4 105 210 7650 4770 http\001
+-6
+6 630 4005 2970 5490
+6 630 4680 1170 5220
+5 1 0 1 0 11 51 -1 20 0.000 0 0 0 0 900.000 4950.000 900 4725 1125 4950 900 5175
+5 1 0 1 0 4 51 -1 20 0.000 0 1 0 0 900.000 4950.000 900 4725 675 4950 900 5175
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 900 4725 900 5175
+-6
+6 2430 4680 2970 5220
+5 1 0 1 0 11 51 -1 20 0.000 0 1 0 0 2700.000 4950.000 2700 4725 2475 4950 2700 5175
+5 1 0 1 0 4 51 -1 20 0.000 0 0 0 0 2700.000 4950.000 2700 4725 2925 4950 2700 5175
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2700 4725 2700 5175
+-6
+6 1530 4005 2070 4545
+5 1 0 1 0 11 51 -1 20 0.000 0 0 0 0 1800.000 4275.000 1935 4095 2025 4275 1935 4455
+5 1 0 1 0 11 51 -1 20 0.000 0 1 0 0 1800.000 4275.000 1665 4095 1575 4275 1665 4455
+5 1 0 1 0 14 51 -1 20 0.000 0 0 0 0 1800.000 4275.000 1935 4455 1800 4500 1665 4455
+5 1 0 1 0 14 51 -1 20 0.000 0 1 0 0 1800.000 4275.000 1935 4095 1800 4050 1665 4095
+2 2 0 1 14 14 52 -1 20 0.000 2 0 -1 0 0 5
+ 1665 4095 1935 4095 1935 4455 1665 4455 1665 4095
+-6
+6 1350 4815 2250 5085
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2205 4815 2205 5085
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2160 4815 2160 5085
+2 2 0 1 0 6 51 -1 20 0.000 1 0 -1 0 0 5
+ 1350 4815 2250 4815 2250 5085 1350 5085 1350 4815
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1395 4815 1395 5085
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1440 4815 1440 5085
+4 1 0 50 -1 16 6 0.0000 4 105 420 1800 4995 http_req\001
+-6
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 2250 4950 2475 4950
+2 1 0 1 0 14 50 -1 -1 0.000 1 0 -1 1 1 2
+ 0 0 1.00 30.00 45.00
+ 0 0 1.00 30.00 45.00
+ 1800 4815 1800 4500
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 1125 4950 1350 4950
+4 0 0 50 -1 16 6 0.0000 4 105 690 2115 4365 eg: HTTP_REQ\001
+4 1 0 50 -1 16 6 0.0000 4 105 300 2700 5355 output\001
+4 1 0 50 -1 16 6 0.0000 4 90 285 2700 5490 driver\001
+4 1 0 50 -1 16 6 0.0000 4 105 240 900 5355 input\001
+4 1 0 50 -1 16 6 0.0000 4 90 285 900 5490 driver\001
+4 0 0 50 -1 16 6 0.0000 4 105 690 2115 4230 flow processor\001
+-6
+6 675 11025 6615 12555
+5 1 0 1 0 11 51 -1 20 0.000 0 0 0 0 3645.000 11790.000 3780 11610 3870 11790 3780 11970
+5 1 0 1 0 11 51 -1 20 0.000 0 1 0 0 3645.000 11790.000 3510 11610 3420 11790 3510 11970
+5 1 0 1 0 14 51 -1 20 0.000 0 0 0 0 3645.000 11790.000 3780 11970 3645 12015 3510 11970
+5 1 0 1 0 14 51 -1 20 0.000 0 1 0 0 3645.000 11790.000 3780 11610 3645 11565 3510 11610
+6 675 11520 1215 12060
+5 1 0 1 0 11 51 -1 20 0.000 0 0 0 0 945.000 11790.000 945 11565 1170 11790 945 12015
+5 1 0 1 0 4 51 -1 20 0.000 0 1 0 0 945.000 11790.000 945 11565 720 11790 945 12015
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 945 11565 945 12015
+-6
+6 3195 11115 4095 11385
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 4050 11115 4050 11385
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 4005 11115 4005 11385
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 3240 11115 3240 11385
+2 2 0 1 0 6 51 -1 20 0.000 1 0 -1 0 0 5
+ 3195 11115 4095 11115 4095 11385 3195 11385 3195 11115
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 3285 11115 3285 11385
+4 1 0 50 -1 16 6 0.0000 4 105 420 3645 11295 http_req\001
+-6
+6 3195 12195 4095 12465
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 4050 12195 4050 12465
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 4005 12195 4005 12465
+2 2 0 1 0 6 51 -1 20 0.000 1 0 -1 0 0 5
+ 3195 12195 4095 12195 4095 12465 3195 12465 3195 12195
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 3240 12195 3240 12465
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 3285 12195 3285 12465
+4 1 0 50 -1 16 6 0.0000 4 105 465 3645 12375 http_resp\001
+-6
+6 4275 11520 4815 12060
+5 1 0 1 0 11 51 -1 20 0.000 0 1 0 0 4545.000 11790.000 4545 11565 4320 11790 4545 12015
+5 1 0 1 0 11 51 -1 20 0.000 0 0 0 0 4545.000 11790.000 4545 11565 4770 11790 4545 12015
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 4545 11565 4545 12015
+-6
+6 2475 11520 3015 12060
+5 1 0 1 0 11 51 -1 20 0.000 0 1 0 0 2745.000 11790.000 2745 11565 2520 11790 2745 12015
+5 1 0 1 0 11 51 -1 20 0.000 0 0 0 0 2745.000 11790.000 2745 11565 2970 11790 2745 12015
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2745 11565 2745 12015
+-6
+6 4995 11115 5895 11385
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5850 11115 5850 11385
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5805 11115 5805 11385
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5040 11115 5040 11385
+2 2 0 1 0 6 51 -1 20 0.000 1 0 -1 0 0 5
+ 4995 11115 5895 11115 5895 11385 4995 11385 4995 11115
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5085 11115 5085 11385
+4 1 0 50 -1 16 6 0.0000 4 105 465 5445 11295 https_req\001
+-6
+6 4995 12195 5895 12465
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5850 12195 5850 12465
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5805 12195 5805 12465
+2 2 0 1 0 6 51 -1 20 0.000 1 0 -1 0 0 5
+ 4995 12195 5895 12195 5895 12465 4995 12465 4995 12195
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5040 12195 5040 12465
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5085 12195 5085 12465
+4 1 0 50 -1 16 6 0.0000 4 105 510 5445 12375 https_resp\001
+-6
+6 1395 11115 2295 11385
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2250 11115 2250 11385
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2205 11115 2205 11385
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1440 11115 1440 11385
+2 2 0 1 0 6 51 -1 20 0.000 1 0 -1 0 0 5
+ 1395 11115 2295 11115 2295 11385 1395 11385 1395 11115
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1485 11115 1485 11385
+4 1 0 50 -1 16 6 0.0000 4 105 465 1845 11295 https_req\001
+-6
+6 1395 12195 2295 12465
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2250 12195 2250 12465
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2205 12195 2205 12465
+2 2 0 1 0 6 51 -1 20 0.000 1 0 -1 0 0 5
+ 1395 12195 2295 12195 2295 12465 1395 12465 1395 12195
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1440 12195 1440 12465
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1485 12195 1485 12465
+4 1 0 50 -1 16 6 0.0000 4 105 510 1845 12375 https_resp\001
+-6
+6 6075 11520 6615 12060
+5 1 0 1 0 11 51 -1 20 0.000 0 1 0 0 6345.000 11790.000 6345 11565 6120 11790 6345 12015
+5 1 0 1 0 4 51 -1 20 0.000 0 0 0 0 6345.000 11790.000 6345 11565 6570 11790 6345 12015
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 6345 11565 6345 12015
+-6
+6 675 12195 1170 12555
+2 4 0 1 16 17 53 -1 -1 0.000 0 0 7 0 0 5
+ 1170 12555 1170 12195 675 12195 675 12555 1170 12555
+4 1 0 50 -1 16 6 0.0000 4 90 450 900 12420 TCPv4_S\001
+-6
+6 6120 12195 6615 12555
+2 4 0 1 16 17 53 -1 -1 0.000 0 0 7 0 0 5
+ 6615 12555 6615 12195 6120 12195 6120 12555 6615 12555
+4 1 0 50 -1 16 6 0.0000 4 90 450 6345 12420 TCPv4_S\001
+-6
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 1125 11655 1395 11250
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 1395 12330 1125 11925
+2 1 0 1 0 14 50 -1 -1 0.000 1 0 -1 1 1 2
+ 0 0 1.00 30.00 45.00
+ 0 0 1.00 30.00 45.00
+ 3645 11565 3645 11385
+2 1 0 1 0 14 50 -1 -1 0.000 1 0 -1 1 1 2
+ 0 0 1.00 30.00 45.00
+ 0 0 1.00 30.00 45.00
+ 3645 12195 3645 12015
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 4995 12330 4725 11925
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 4725 11655 4995 11250
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 4375 11925 4105 12330
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 4095 11250 4365 11655
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 3195 12330 2925 11925
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 2925 11655 3195 11250
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 2575 11925 2305 12330
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 2295 11250 2565 11655
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 5895 11250 6165 11655
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 6165 11925 5895 12330
+2 2 0 1 14 14 52 -1 20 0.000 2 0 -1 0 0 5
+ 3510 11610 3780 11610 3780 11970 3510 11970 3510 11610
+2 4 0 1 16 17 54 -1 20 0.000 0 0 7 0 0 5
+ 3150 12555 3150 11025 675 11025 675 12555 3150 12555
+2 3 0 1 28 30 53 -1 20 0.000 1 0 -1 0 0 11
+ 6120 11070 5895 11565 5895 12015 6120 12510 4950 12510 4410 12510
+ 4185 12015 4185 11565 4410 11070 4995 11070 6120 11070
+2 4 0 1 16 17 54 -1 20 0.000 0 0 7 0 0 5
+ 6615 12555 6615 11025 4140 11025 4140 12555 6615 12555
+2 3 0 1 28 30 53 -1 20 0.000 0 0 -1 0 0 11
+ 1170 11070 1395 11565 1395 12015 1170 12510 2340 12510 2880 12510
+ 3105 12015 3105 11565 2880 11070 2295 11070 1170 11070
+4 1 0 50 -1 16 6 0.0000 4 60 285 945 11295 server\001
+4 1 0 50 -1 16 6 0.0000 4 90 285 945 11430 socket\001
+4 1 0 50 -1 16 6 0.0000 4 75 210 4545 11430 SSL\001
+4 1 0 50 -1 16 6 0.0000 4 75 210 2745 11430 SSL\001
+4 1 0 50 -1 16 6 0.0000 4 90 255 6345 11295 client\001
+4 1 0 50 -1 16 6 0.0000 4 90 285 6345 11430 socket\001
+4 1 0 50 -1 16 6 0.0000 4 105 210 3645 11835 http\001
+-6
+6 630 6750 2970 8100
+6 630 7155 1170 7695
+5 1 0 1 0 11 51 -1 20 0.000 0 0 0 0 900.000 7425.000 900 7200 1125 7425 900 7650
+5 1 0 1 0 4 51 -1 20 0.000 0 1 0 0 900.000 7425.000 900 7200 675 7425 900 7650
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 900 7200 900 7650
+-6
+6 2430 7155 2970 7695
+5 1 0 1 0 11 51 -1 20 0.000 0 1 0 0 2700.000 7425.000 2700 7200 2475 7425 2700 7650
+5 1 0 1 0 4 51 -1 20 0.000 0 0 0 0 2700.000 7425.000 2700 7200 2925 7425 2700 7650
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2700 7200 2700 7650
+-6
+6 1350 6750 2250 7020
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2205 6750 2205 7020
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2160 6750 2160 7020
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1395 6750 1395 7020
+2 2 0 1 0 6 51 -1 20 0.000 1 0 -1 0 0 5
+ 1350 6750 2250 6750 2250 7020 1350 7020 1350 6750
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1440 6750 1440 7020
+4 1 0 50 -1 16 6 0.0000 4 105 420 1800 6930 http_req\001
+-6
+6 1305 7020 1845 7830
+5 1 0 1 0 11 51 -1 20 0.000 0 0 0 0 1575.000 7425.000 1710 7245 1800 7425 1710 7605
+5 1 0 1 0 11 51 -1 20 0.000 0 1 0 0 1575.000 7425.000 1440 7245 1350 7425 1440 7605
+5 1 0 1 0 14 51 -1 20 0.000 0 0 0 0 1575.000 7425.000 1710 7605 1575 7650 1440 7605
+5 1 0 1 0 14 51 -1 20 0.000 0 1 0 0 1575.000 7425.000 1710 7245 1575 7200 1440 7245
+2 1 0 1 0 14 50 -1 -1 0.000 1 0 -1 1 1 2
+ 0 0 1.00 30.00 45.00
+ 0 0 1.00 30.00 45.00
+ 1575 7200 1575 7020
+2 1 0 1 0 14 50 -1 -1 0.000 1 0 -1 1 1 2
+ 0 0 1.00 30.00 45.00
+ 0 0 1.00 30.00 45.00
+ 1575 7830 1575 7650
+2 2 0 1 14 14 52 -1 20 0.000 2 0 -1 0 0 5
+ 1440 7245 1710 7245 1710 7605 1440 7605 1440 7245
+4 1 0 50 -1 16 6 0.0000 4 105 210 1575 7470 http\001
+-6
+6 1755 7020 2295 7830
+5 1 0 1 0 14 51 -1 20 0.000 0 0 0 0 2025.000 7425.000 2160 7605 2025 7650 1890 7605
+5 1 0 1 0 14 51 -1 20 0.000 0 1 0 0 2025.000 7425.000 2160 7245 2025 7200 1890 7245
+5 1 0 1 0 11 51 -1 20 0.000 0 1 0 0 2025.000 7425.000 1890 7245 1800 7425 1890 7605
+5 1 0 1 0 11 51 -1 20 0.000 0 0 0 0 2025.000 7425.000 2160 7245 2250 7425 2160 7605
+2 1 0 1 0 14 50 -1 -1 0.000 1 0 -1 1 1 2
+ 0 0 1.00 30.00 45.00
+ 0 0 1.00 30.00 45.00
+ 2025 7200 2025 7020
+2 1 0 1 0 14 50 -1 -1 0.000 1 0 -1 1 1 2
+ 0 0 1.00 30.00 45.00
+ 0 0 1.00 30.00 45.00
+ 2025 7830 2025 7650
+2 2 0 1 14 14 52 -1 20 0.000 2 0 -1 0 0 5
+ 1890 7245 2160 7245 2160 7605 1890 7605 1890 7245
+4 1 0 50 -1 16 6 0.0000 4 105 390 2025 7470 filtering\001
+-6
+6 1350 7830 2250 8100
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2205 7830 2205 8100
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2160 7830 2160 8100
+2 2 0 1 0 6 51 -1 20 0.000 1 0 -1 0 0 5
+ 1350 7830 2250 7830 2250 8100 1350 8100 1350 7830
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1395 7830 1395 8100
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1440 7830 1440 8100
+4 1 0 50 -1 16 6 0.0000 4 105 465 1800 8010 http_resp\001
+-6
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 2250 6885 2520 7290
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 1080 7290 1350 6885
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 1350 7965 1080 7560
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 2520 7560 2250 7965
+4 1 0 50 -1 16 6 0.0000 4 60 285 900 6930 server\001
+4 1 0 50 -1 16 6 0.0000 4 90 285 900 7065 socket\001
+4 1 0 50 -1 16 6 0.0000 4 90 255 2700 6930 client\001
+4 1 0 50 -1 16 6 0.0000 4 90 285 2700 7065 socket\001
+-6
+6 4680 9000 8820 10530
+5 1 0 1 0 11 51 -1 20 0.000 0 0 0 0 7650.000 9765.000 7785 9585 7875 9765 7785 9945
+5 1 0 1 0 11 51 -1 20 0.000 0 1 0 0 7650.000 9765.000 7515 9585 7425 9765 7515 9945
+5 1 0 1 0 14 51 -1 20 0.000 0 0 0 0 7650.000 9765.000 7785 9945 7650 9990 7515 9945
+5 1 0 1 0 14 51 -1 20 0.000 0 1 0 0 7650.000 9765.000 7785 9585 7650 9540 7515 9585
+6 4680 9495 5220 10035
+5 1 0 1 0 11 51 -1 20 0.000 0 0 0 0 4950.000 9765.000 4950 9540 5175 9765 4950 9990
+5 1 0 1 0 4 51 -1 20 0.000 0 1 0 0 4950.000 9765.000 4950 9540 4725 9765 4950 9990
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 4950 9540 4950 9990
+-6
+6 8280 9495 8820 10035
+5 1 0 1 0 11 51 -1 20 0.000 0 1 0 0 8550.000 9765.000 8550 9540 8325 9765 8550 9990
+5 1 0 1 0 4 51 -1 20 0.000 0 0 0 0 8550.000 9765.000 8550 9540 8775 9765 8550 9990
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 8550 9540 8550 9990
+-6
+6 7200 9090 8100 9360
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 8055 9090 8055 9360
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 8010 9090 8010 9360
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 7245 9090 7245 9360
+2 2 0 1 0 6 51 -1 20 0.000 1 0 -1 0 0 5
+ 7200 9090 8100 9090 8100 9360 7200 9360 7200 9090
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 7290 9090 7290 9360
+4 1 0 50 -1 16 6 0.0000 4 105 420 7650 9270 http_req\001
+-6
+6 7200 10170 8100 10440
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 8055 10170 8055 10440
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 8010 10170 8010 10440
+2 2 0 1 0 6 51 -1 20 0.000 1 0 -1 0 0 5
+ 7200 10170 8100 10170 8100 10440 7200 10440 7200 10170
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 7245 10170 7245 10440
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 7290 10170 7290 10440
+4 1 0 50 -1 16 6 0.0000 4 105 465 7650 10350 http_resp\001
+-6
+6 6480 9495 7020 10035
+5 1 0 1 0 11 51 -1 20 0.000 0 1 0 0 6750.000 9765.000 6750 9540 6525 9765 6750 9990
+5 1 0 1 0 11 51 -1 20 0.000 0 0 0 0 6750.000 9765.000 6750 9540 6975 9765 6750 9990
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 6750 9540 6750 9990
+-6
+6 5400 10170 6300 10440
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 6255 10170 6255 10440
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 6210 10170 6210 10440
+2 2 0 1 0 6 51 -1 20 0.000 1 0 -1 0 0 5
+ 5400 10170 6300 10170 6300 10440 5400 10440 5400 10170
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5445 10170 5445 10440
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5490 10170 5490 10440
+4 1 0 50 -1 16 6 0.0000 4 105 510 5850 10350 https_resp\001
+-6
+6 5400 9090 6300 9360
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 6255 9090 6255 9360
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 6210 9090 6210 9360
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5445 9090 5445 9360
+2 2 0 1 0 6 51 -1 20 0.000 1 0 -1 0 0 5
+ 5400 9090 6300 9090 6300 9360 5400 9360 5400 9090
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5490 9090 5490 9360
+4 1 0 50 -1 16 6 0.0000 4 105 465 5850 9270 https_req\001
+-6
+6 4680 10170 5175 10530
+2 4 0 1 16 17 53 -1 -1 0.000 0 0 7 0 0 5
+ 5175 10530 5175 10170 4680 10170 4680 10530 5175 10530
+4 1 0 50 -1 16 6 0.0000 4 90 450 4905 10395 TCPv4_S\001
+-6
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 5130 9630 5400 9225
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 5400 10305 5130 9900
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 8100 9225 8370 9630
+2 1 0 1 0 14 50 -1 -1 0.000 1 0 -1 1 1 2
+ 0 0 1.00 30.00 45.00
+ 0 0 1.00 30.00 45.00
+ 7650 9540 7650 9360
+2 1 0 1 0 14 50 -1 -1 0.000 1 0 -1 1 1 2
+ 0 0 1.00 30.00 45.00
+ 0 0 1.00 30.00 45.00
+ 7650 10170 7650 9990
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 7200 10305 6930 9900
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 8370 9900 8100 10305
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 6930 9630 7200 9225
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 6580 9900 6310 10305
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 6300 9225 6570 9630
+2 2 0 1 14 14 52 -1 20 0.000 2 0 -1 0 0 5
+ 7515 9585 7785 9585 7785 9945 7515 9945 7515 9585
+2 4 0 1 16 17 54 -1 20 0.000 0 0 7 0 0 5
+ 7155 10530 7155 9000 4680 9000 4680 10530 7155 10530
+2 3 0 1 28 30 53 -1 20 0.000 1 0 -1 0 0 11
+ 5175 9045 5400 9540 5400 9990 5175 10485 6345 10485 6885 10485
+ 7110 9990 7110 9540 6885 9045 6300 9045 5175 9045
+4 1 0 50 -1 16 6 0.0000 4 60 285 4950 9270 server\001
+4 1 0 50 -1 16 6 0.0000 4 90 285 4950 9405 socket\001
+4 1 0 50 -1 16 6 0.0000 4 90 255 8550 9270 client\001
+4 1 0 50 -1 16 6 0.0000 4 90 285 8550 9405 socket\001
+4 1 0 50 -1 16 6 0.0000 4 75 210 6750 9405 SSL\001
+4 1 0 50 -1 16 6 0.0000 4 105 210 7650 9810 http\001
+-6
+6 675 9000 4275 10575
+6 1800 9900 2250 10125
+2 2 0 1 29 30 51 -1 20 0.000 0 0 -1 0 0 5
+ 1800 9900 2250 9900 2250 10125 1800 10125 1800 9900
+4 1 0 50 -1 16 6 0.0000 4 75 210 2025 10035 SSL\001
+-6
+6 2700 9900 3150 10125
+2 2 0 1 29 30 51 -1 20 0.000 0 0 -1 0 0 5
+ 2700 9900 3150 9900 3150 10125 2700 10125 2700 9900
+4 1 0 50 -1 16 6 0.0000 4 75 210 2925 10035 SSL\001
+-6
+6 3600 9900 4050 10125
+2 2 0 1 29 30 51 -1 20 0.000 0 0 -1 0 0 5
+ 3600 9900 4050 9900 4050 10125 3600 10125 3600 9900
+4 1 0 50 -1 16 6 0.0000 4 75 210 3825 10035 SSL\001
+-6
+6 900 9900 1350 10125
+2 2 0 1 29 30 51 -1 20 0.000 0 0 -1 0 0 5
+ 900 9900 1350 9900 1350 10125 900 10125 900 9900
+4 1 0 50 -1 16 6 0.0000 4 75 210 1125 10035 SSL\001
+-6
+6 2520 10350 3330 10575
+2 2 0 1 16 17 51 -1 20 0.000 0 0 -1 0 0 5
+ 2520 10350 3330 10350 3330 10575 2520 10575 2520 10350
+4 1 0 50 -1 16 6 0.0000 4 90 510 2925 10485 UNIX_STR\001
+-6
+6 1125 9450 1575 9675
+2 2 0 1 9 11 51 -1 20 0.000 0 0 -1 0 0 5
+ 1125 9450 1575 9450 1575 9675 1125 9675 1125 9450
+4 1 0 50 -1 16 6 0.0000 4 90 315 1350 9585 deflate\001
+-6
+6 2025 9450 2475 9675
+2 2 0 1 9 11 51 -1 20 0.000 0 0 -1 0 0 5
+ 2025 9450 2475 9450 2475 9675 2025 9675 2025 9450
+4 1 0 50 -1 16 6 0.0000 4 90 315 2250 9585 deflate\001
+-6
+6 2925 9450 3375 9675
+2 2 0 1 9 11 51 -1 20 0.000 0 0 -1 0 0 5
+ 2925 9450 3375 9450 3375 9675 2925 9675 2925 9450
+4 1 0 50 -1 16 6 0.0000 4 90 315 3150 9585 deflate\001
+-6
+6 3825 9450 4275 9675
+2 2 0 1 9 11 51 -1 20 0.000 0 0 -1 0 0 5
+ 3825 9450 4275 9450 4275 9675 3825 9675 3825 9450
+4 1 0 50 -1 16 6 0.0000 4 90 315 4050 9585 deflate\001
+-6
+6 675 10350 1530 10575
+2 2 0 1 16 17 51 -1 20 0.000 0 0 -1 0 0 5
+ 675 10350 1530 10350 1530 10575 675 10575 675 10350
+4 1 0 50 -1 16 6 0.0000 4 75 315 1125 10485 TCPv4\001
+-6
+6 1620 10350 2430 10575
+2 2 0 1 16 17 51 -1 20 0.000 0 0 -1 0 0 5
+ 1620 10350 2430 10350 2430 10575 1620 10575 1620 10350
+4 1 0 50 -1 16 6 0.0000 4 75 315 2025 10485 TCPv6\001
+-6
+6 3420 10350 4275 10575
+2 2 0 1 16 17 51 -1 20 0.000 0 0 -1 0 0 5
+ 3420 10350 4275 10350 4275 10575 3420 10575 3420 10350
+4 1 0 50 -1 16 6 0.0000 4 75 330 3825 10485 PIPES\001
+-6
+6 675 9000 4275 9225
+2 2 0 1 12 14 51 -1 20 0.000 0 0 -1 0 0 5
+ 675 9000 4275 9000 4275 9225 675 9225 675 9000
+4 1 0 50 -1 16 6 0.0000 4 105 1950 2475 9135 HTTP flow analyzer using stream interfaces\001
+-6
+2 1 0 1 20 -1 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 765 10350 765 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 810 10350 810 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 990 9900 990 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 1035 9900 1035 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 1170 10350 1170 10125
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 1215 9900 1215 9675
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 1260 9900 1260 9675
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 1395 9450 1395 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 1440 10350 1440 9675
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 1485 10350 1485 9675
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 1665 10350 1665 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 1710 10350 1710 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 1890 9900 1890 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 1935 9900 1935 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 2070 10350 2070 10125
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 2115 9900 2115 9675
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 2160 9900 2160 9675
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 2295 9450 2295 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 2340 10350 2340 9675
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 2385 10350 2385 9675
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 2565 10350 2565 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 2610 10350 2610 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 2790 9900 2790 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 2835 9900 2835 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 3015 9900 3015 9675
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 3060 9900 3060 9675
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 2970 10350 2970 10125
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 3240 10350 3240 9675
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 3285 10350 3285 9675
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 3195 9450 3195 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 3465 10350 3465 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 3510 10350 3510 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 3690 9900 3690 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 3735 9900 3735 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 3870 10350 3870 10125
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 3915 9900 3915 9675
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 3960 9900 3960 9675
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 4140 10350 4140 9675
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 4185 10350 4185 9675
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 4095 9450 4095 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 1350 9450 1350 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 2250 9450 2250 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 3150 9450 3150 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 4050 9450 4050 9225
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 3825 10350 3825 10125
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 2925 10350 2925 10125
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 2025 10350 2025 10125
+2 1 0 1 20 14 50 -1 -1 0.000 1 0 -1 0 1 2
+ 0 0 1.00 30.00 45.00
+ 1125 10350 1125 10125
+-6
+6 3780 6750 6120 8100
+5 1 0 1 0 11 51 -1 20 0.000 0 0 0 0 4725.000 7425.000 4860 7245 4950 7425 4860 7605
+5 1 0 1 0 11 51 -1 20 0.000 0 1 0 0 4725.000 7425.000 4590 7245 4500 7425 4590 7605
+5 1 0 1 0 14 51 -1 20 0.000 0 0 0 0 4725.000 7425.000 4860 7605 4725 7650 4590 7605
+5 1 0 1 0 14 51 -1 20 0.000 0 1 0 0 4725.000 7425.000 4860 7245 4725 7200 4590 7245
+5 1 0 1 0 14 51 -1 20 0.000 0 0 0 0 5175.000 7425.000 5310 7605 5175 7650 5040 7605
+5 1 0 1 0 14 51 -1 20 0.000 0 1 0 0 5175.000 7425.000 5310 7245 5175 7200 5040 7245
+5 1 0 1 0 11 51 -1 20 0.000 0 1 0 0 5175.000 7425.000 5040 7245 4950 7425 5040 7605
+5 1 0 1 0 11 51 -1 20 0.000 0 0 0 0 5175.000 7425.000 5310 7245 5400 7425 5310 7605
+6 3780 7155 4320 7695
+5 1 0 1 0 11 51 -1 20 0.000 0 0 0 0 4050.000 7425.000 4050 7200 4275 7425 4050 7650
+5 1 0 1 0 4 51 -1 20 0.000 0 1 0 0 4050.000 7425.000 4050 7200 3825 7425 4050 7650
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 4050 7200 4050 7650
+-6
+6 5580 7155 6120 7695
+5 1 0 1 0 11 51 -1 20 0.000 0 1 0 0 5850.000 7425.000 5850 7200 5625 7425 5850 7650
+5 1 0 1 0 4 51 -1 20 0.000 0 0 0 0 5850.000 7425.000 5850 7200 6075 7425 5850 7650
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5850 7200 5850 7650
+-6
+6 4500 6750 5400 7020
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5355 6750 5355 7020
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5310 6750 5310 7020
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 4545 6750 4545 7020
+2 2 0 1 0 6 51 -1 20 0.000 1 0 -1 0 0 5
+ 4500 6750 5400 6750 5400 7020 4500 7020 4500 6750
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 4590 6750 4590 7020
+4 1 0 50 -1 16 6 0.0000 4 105 420 4950 6930 http_req\001
+-6
+6 4500 7830 5400 8100
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5355 7830 5355 8100
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5310 7830 5310 8100
+2 2 0 1 0 6 51 -1 20 0.000 1 0 -1 0 0 5
+ 4500 7830 5400 7830 5400 8100 4500 8100 4500 7830
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 4545 7830 4545 8100
+2 1 0 1 0 14 50 -1 -1 0.000 0 0 -1 0 0 2
+ 4590 7830 4590 8100
+4 1 0 50 -1 16 6 0.0000 4 105 465 4950 8010 http_resp\001
+-6
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 5400 6885 5670 7290
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 4230 7290 4500 6885
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 4500 7965 4230 7560
+2 1 0 1 0 0 50 -1 -1 0.000 1 0 -1 1 0 2
+ 0 0 1.00 30.00 45.00
+ 5670 7560 5400 7965
+2 1 0 1 0 14 50 -1 -1 0.000 1 0 -1 1 1 2
+ 0 0 1.00 30.00 45.00
+ 0 0 1.00 30.00 45.00
+ 4725 7200 4725 7020
+2 2 0 1 14 14 52 -1 20 0.000 2 0 -1 0 0 5
+ 4590 7245 4860 7245 4860 7605 4590 7605 4590 7245
+2 1 0 1 0 14 50 -1 -1 0.000 1 0 -1 1 1 2
+ 0 0 1.00 30.00 45.00
+ 0 0 1.00 30.00 45.00
+ 5175 7830 5175 7650
+2 2 0 1 14 14 52 -1 20 0.000 2 0 -1 0 0 5
+ 5040 7245 5310 7245 5310 7605 5040 7605 5040 7245
+4 1 0 50 -1 16 6 0.0000 4 60 285 4050 6930 server\001
+4 1 0 50 -1 16 6 0.0000 4 90 285 4050 7065 socket\001
+4 1 0 50 -1 16 6 0.0000 4 90 255 5850 6930 client\001
+4 1 0 50 -1 16 6 0.0000 4 90 285 5850 7065 socket\001
+4 1 0 50 -1 16 6 0.0000 4 105 210 4725 7425 http\001
+4 1 0 50 -1 16 6 0.0000 4 75 150 4725 7515 req\001
+4 1 0 50 -1 16 6 0.0000 4 105 210 5175 7425 http\001
+4 1 0 50 -1 16 6 0.0000 4 75 195 5175 7515 resp\001
+-6
+4 0 0 53 -1 19 11 0.0000 4 180 5520 675 8775 Multi-layer protocol encapsulation using intermediate buffers :\001
+4 0 0 53 -1 19 11 0.0000 4 180 4200 675 6525 Multiple protocol processing on the same flow :\001
+4 0 0 53 -1 19 11 0.0000 4 180 2085 675 3600 Principles of operation :\001
--- /dev/null
+Plan for commands made of several keywords.
+For example :
+
+ timeout connection XXX
+ connection scale XXX
+
+Prefixes must also be accepted :
+
+ tim co XXX
+ co sca XXX
+
+Plan to store the combinations in a table. It should even be possible to
+perform a mapping that simplifies the parser.
+
+
+For the filters :
+
+
+ <direction> <where> <what> <operator> <pattern> <action> [ <args>* ]
+
+ <direction> = [ req | rsp ]
+ <where> = [ in | out ]
+ <what> = [ line | LINE | METH | URI | h(hdr) | H(hdr) | c(cookie) | C(cookie) ]
+ <operator> = [ == | =~ | =* | =^ | =/ | != | !~ | !* | !^ | !/ ]
+ <pattern> = "<string>"
+ <action> = [ allow | permit | deny | delete | replace | switch | add | set | redir ]
+ <args>      = optional action args
+
+ examples:
+
+ req in URI =^ "/images" switch images
+ req in h(host) =* ".mydomain.com" switch mydomain
+ req in h(host) =~ "localhost(.*)" replace "www\1"
+
+ alternative :
+
+ <direction> <where> <action> [not] <what> [<operator> <pattern> [ <args>* ]]
+
+ req in switch URI =^ "/images" images
+ req in switch h(host) =* ".mydomain.com" mydomain
+ req in replace h(host) =~ "localhost(.*)" "www\1"
+ req in delete h(Connection)
+ req in deny not line =~ "((GET|HEAD|POST|OPTIONS) /)|(OPTIONS *)"
+ req out set h(Connection) "close"
+ req out add line "Server: truc"
+
+
+ <direction> <action> <where> [not] <what> [<operator> <pattern> [ <args>* ]] ';' <action2> <what2>
+
+ req in switch URI =^ "/images/" images ; replace "/"
+ req in switch h(host) =* ".mydomain.com" mydomain
+ req in replace h(host) =~ "localhost(.*)" "www\1"
+ req in delete h(Connection)
+ req in deny not line =~ "((GET|HEAD|POST|OPTIONS) /)|(OPTIONS *)"
+ req out set h(Connection) "close"
+ req out add line == "Server: truc"
+
+
+Extension with ACLs :
+
+ req in acl(meth_valid) METH =~ "(GET|POST|HEAD|OPTIONS)"
+ req in acl(meth_options) METH == "OPTIONS"
+ req in acl(uri_slash) URI =^ "/"
+ req in acl(uri_star) URI == "*"
+
+ req in deny acl !(meth_options && uri_star || meth_valid && uri_slash)
+
+Perhaps more simply :
+
+ acl meth_valid METH =~ "(GET|POST|HEAD|OPTIONS)"
+ acl meth_options METH == "OPTIONS"
+ acl uri_slash URI =^ "/"
+ acl uri_star URI == "*"
+
+ req in deny not acl(meth_options uri_star, meth_valid uri_slash)
+
+ req in switch URI =^ "/images/" images ; replace "/"
+ req in switch h(host) =* ".mydomain.com" mydomain
+ req in replace h(host) =~ "localhost(.*)" "www\1"
+ req in delete h(Connection)
+ req in deny not line =~ "((GET|HEAD|POST|OPTIONS) /)|(OPTIONS *)"
+ req out set h(Connection) "close"
+ req out add line == "Server: truc"
+
+Plan for the "if" case, to execute several actions :
+
+ req in if URI =^ "/images/" then replace "/" ; switch images
+
+Use uppercase/lowercase names to indicate whether the match should be
+case-sensitive or not :
+
+ if uri =^ "/watch/" setbe watch rebase "/watch/" "/"
+ if uri =* ".jpg" setbe images
+ if uri =~ ".*dll.*" deny
+ if HOST =* ".mydomain.com" setbe mydomain
+ etc...
+
+Another solution would be to have a dedicated keyword to URI remapping. It
+would both rewrite the URI and optionally switch to another backend.
+
+ uriremap "/watch/" "/" watch
+ uriremap "/chat/" "/" chat
+ uriremap "/event/" "/event/" event
+
+Or better :
+
+ uriremap "/watch/" watch "/"
+ uriremap "/chat/" chat "/"
+ uriremap "/event/" event
+
+For the URI, using a regex is sometimes useful (eg: providing a set of
+possible prefixes).
+
+
+Otherwise, perhaps "switch" could take a mapping parameter for the matched
+part :
+
+ req in switch URI =^ "/images/" images:"/"
+
+
+2007/03/31 - More precise requirements.
+
+1) no branching extension or anything else in the "listen" sections, it is too complex.
+
+Distinguish incoming (in) and outgoing (out) data.
+
+The frontend only sees incoming requests and outgoing responses.
+The backend sees requests in/out and responses in/out.
+The frontend allows branching from sets of request filters to other ones. The
+frontend and the sets of request filters can branch to a backend.
+
+-----------+--------+----------+----------+---------+----------+
+ \ Where | | | | | |
+ \______ | Listen | Frontend | ReqRules | Backend | RspRules |
+ \| | | | | |
+Capability | | | | | |
+-----------+--------+----------+----------+---------+----------+
+Frontend | X | X | | | |
+-----------+--------+----------+----------+---------+----------+
+FiltReqIn | X | X | X | X | |
+-----------+--------+----------+----------+---------+----------+
+JumpFiltReq| X | X | X | | | \
+-----------+--------+----------+----------+---------+----------+ > = ReqJump
+SetBackend | X | X | X | | | /
+-----------+--------+----------+----------+---------+----------+
+FiltReqOut | | | | X | |
+-----------+--------+----------+----------+---------+----------+
+FiltRspIn | X | | | X | X |
+-----------+--------+----------+----------+---------+----------+
+JumpFiltRsp| | | | X | X |
+-----------+--------+----------+----------+---------+----------+
+FiltRspOut | | X | | X | X |
+-----------+--------+----------+----------+---------+----------+
+Backend | X | | | X | |
+-----------+--------+----------+----------+---------+----------+
+
+In conclusion
+-------------
+
+At least 8 basic capabilities need to be distinguished :
+ - ability to accept connections (frontend)
+ - ability to filter incoming requests
+ - ability to branch to a backend or to a set of request rules
+ - ability to filter outgoing requests
+ - ability to filter incoming responses
+ - ability to branch to another set of response rules
+ - ability to filter the outgoing response
+ - ability to manage servers (backend)
+
+Remark
+------
+ - it is often necessary to apply a small processing step to a host/uri/other
+   set. This small processing step may consist of a few filters as well as a
+   rewrite of the (host,uri) pair.
+
+
+Proposal : ACL
+
+Syntax :
+--------
+
+  acl <name> <what> <operator> <value> ...
+
+This will create an acl referenced under the name <name>, which is validated
+if applying at least one of the values <value> with the operator <operator>
+to the subject <what> succeeds.
+
+Operators :
+-----------
+
+Always 2 characters :
+
+  [=!][~=*^%/.]
+
+First character :
+  '=' : OK if the test succeeds
+  '!' : OK if the test fails.
+
+Second character :
+  '~' : compare against a regex
+  '=' : compare string to string
+  '*' : compare the end of the string (ex: =* ".mydomain.com")
+  '^' : compare the beginning of the string (ex: =^ "/images/")
+  '%' : search for a substring
+  '/' : compare against a whole word, accepting '/' as a delimiter
+  '.' : compare against a whole word, accepting '.' as a delimiter
+
+Then an action is executed conditionally if all the ACLs mentioned are
+validated (or invalidated for those preceded by a "!") :
+
+ <what> <where> <action> on [!]<aclname> ...
+
+
+Example :
+---------
+
+ acl www_pub host =. www www01 dev preprod
+ acl imghost host =. images
+ acl imgdir uri =/ img
+ acl imagedir uri =/ images
+ acl msie h(user-agent) =% "MSIE"
+
+ set_host "images" on www_pub imgdir
+ remap_uri "/img" "/" on www_pub imgdir
+ remap_uri "/images" "/" on www_pub imagedir
+ setbe images on imghost
+ reqdel "Cookie" on all
+
+
+
+Possible actions :
+
+ req {in|out} {append|delete|rem|add|set|rep|mapuri|rewrite|reqline|deny|allow|setbe|tarpit}
+ resp {in|out} {append|delete|rem|add|set|rep|maploc|rewrite|stsline|deny|allow}
+
+ req in append <line>
+ req in delete <line_regex>
+ req in rem <header>
+ req in add <header> <new_value>
+ req in set <header> <new_value>
+ req in rep <header> <old_value> <new_value>
+ req in mapuri <old_uri_prefix> <new_uri_prefix>
+ req in rewrite <old_uri_regex> <new_uri>
+ req in reqline <old_req_regex> <new_req>
+ req in deny
+ req in allow
+ req in tarpit
+ req in setbe <backend>
+
+ resp out maploc <old_location_prefix> <new_loc_prefix>
+ resp out stsline <old_sts_regex> <new_sts_regex>
+
+Strings must be delimited by the same character at the beginning and at the
+end, which must be escaped if it appears inside the string. Everything found
+between the closing character and the first spaces is considered as options
+passed to the processing. For example :
+
+ req in rep host /www/i /www/
+ req in rep connection /keep-alive/i "close"
+
+It would be convenient to be able to perform a remap at the same time as a setbe.
+
+Captures: split them into in/out. Make them conditional ?
--- /dev/null
+2015/08/06 - server connection sharing
+
+Improvements on the connection sharing strategies
+-------------------------------------------------
+
+4 strategies are currently supported :
+ - never
+ - safe
+ - aggressive
+ - always
+
+The "aggressive" and "always" strategies take into account the fact that the
+connection has already been reused at least once or not. The principle is that
+second requests can be used to safely "validate" connection reuse on newly
+added connections, and that such validated connections may be used even by
+first requests from other sessions. A validated connection is a connection
+which has already been reused, hence proving that it definitely supports
+multiple requests. Such connections are easy to verify : after processing the
+response, if the txn already had the TX_NOT_FIRST flag, then it was not the
+first request over that connection, and it is validated as safe for reuse.
+Validated connections are put into a distinct list : server->safe_conns.
+
+Incoming requests with TX_NOT_FIRST first pick from the regular idle_conns
+list so that any new idle connection is validated as soon as possible.
+
+Incoming requests without TX_NOT_FIRST only pick from the safe_conns list for
+strategy "aggressive", guaranteeing that the server properly supports connection
+reuse, or first from the safe_conns list, then from the idle_conns list for
+strategy "always".
+
+Connections are always stacked into the list (LIFO) so that there are higher
+chances to convert recent connections and to use them. This first optimizes
+the likelihood that the connection works, and avoids TCP metrics being
+lost due to an idle state, and/or the congestion window dropping and the
+connection going back to slow start mode.
+
+
+Handling connections in pools
+-----------------------------
+
+A per-server "pool-max" setting should be added to permit disposing unused idle
+connections not attached anymore to a session for use by future requests. The
+principle will be that attached connections are queued from the front of the
+list while the detached connections will be queued from the tail of the list.
+
+This way, most reused connections will be fairly recent and detached connections
+will most often be ignored. The number of detached idle connections in the lists
+should be accounted for (pool_used) and limited (pool_max).
+
+After some time, a part of these detached idle connections should be killed.
+For this, the list is walked from tail to head and connections without an owner
+may be evicted. It may be useful to have a per-server pool_min setting
+indicating how many idle connections should remain in the pool, ready for use
+by new requests. Conversely, a pool_low metric should be kept between eviction
+runs, to indicate the lowest amount of detached connections that were found in
+the pool.
+
+For eviction, the principle of a half-life is appealing. The idea is
+simple : over a period of time, half of the connections between pool_min and
+pool_low should be gone. Since pool_low indicates how many connections
+remained unused over a period, it makes sense to kill some of them.
+
+In order to avoid killing thousands of connections in one run, the purge
+interval should be split into smaller batches. Let's call N the ratio of the
+half-life interval and the effective interval.
+
+The algorithm consists in walking over them from the end every interval and
+killing ((pool_low - pool_min) + 2 * N - 1) / (2 * N). It ensures that half
+of the unused connections are killed over the half-life period, in N batches
+of population/2N entries at most.
+
+Unsafe connections should be evicted first. There should be very few of them
+since most of them are probed and become safe. Since detached connections are
+quickly recycled and attached to a new session, there should not be too many
+detached connections in the pool, and those present there may be killed really
+quickly.
+
+Another interesting point of pools is that when pool-max is not zero, then it
+makes sense to automatically enable pretend-keep-alive on non-private connections
+going to the server in order to be able to feed them back into the pool. With
+the "aggressive" or "always" strategies, it can allow clients making a single
+request over their connection to share persistent connections to the servers.
+
+
+
+2013/10/17 - server connection management and reuse
+
+Current state
+-------------
+
+At the moment, a connection entity is needed to carry any address
+information. This means in the following situations, we need a server
+connection :
+
+- server is elected and the server's destination address is set
+
+- transparent mode is elected and the destination address is set from
+ the incoming connection
+
+- proxy mode is enabled, and the destination's address is set during
+ the parsing of the HTTP request
+
+- connection to the server fails and must be retried on the same
+ server using the same parameters, especially the destination
+ address (SN_ADDR_SET not removed)
+
+
+On the accepting side, we have further requirements :
+
+- allocate a clean connection without a stream interface
+
+- incrementally set the accepted connection's parameters without
+ clearing it, and keep track of what is set (eg: getsockname).
+
+- initialize a stream interface in established mode
+
+- attach the accepted connection to a stream interface
+
+
+This means several things :
+
+- the connection has to be allocated on the fly the first time it is
+ needed to store the source or destination address ;
+
+- the connection has to be attached to the stream interface at this
+ moment ;
+
+- it must be possible to incrementally set some settings on the
+ connection's addresses regardless of the connection's current state
+
+- the connection must not be released across connection retries ;
+
+- it must be possible to clear a connection's parameters for a
+ redispatch without having to detach/attach the connection ;
+
+- we need to allocate a connection without an existing stream interface
+
+So on the accept() side, it looks like this :
+
+ fd = accept();
+ conn = new_conn();
+ get_some_addr_info(&conn->addr);
+ ...
+ si = new_si();
+ si_attach_conn(si, conn);
+ si_set_state(si, SI_ST_EST);
+ ...
+ get_more_addr_info(&conn->addr);
+
+On the connect() side, it looks like this :
+
+ si = new_si();
+ while (!properly_connected) {
+ if (!(conn = si->end)) {
+ conn = new_conn();
+ conn_clear(conn);
+ si_attach_conn(si, conn);
+ }
+ else {
+ if (connected) {
+ f = conn->flags & CO_FL_XPRT_TRACKED;
+ conn->flags &= ~CO_FL_XPRT_TRACKED;
+ conn_close(conn);
+ conn->flags |= f;
+ }
+ if (!correct_dest)
+ conn_clear(conn);
+ }
+ set_some_addr_info(&conn->addr);
+ si_set_state(si, SI_ST_CON);
+ ...
+ set_more_addr_info(&conn->addr);
+ conn->connect();
+ if (must_retry) {
+ close_conn(conn);
+ }
+ }
+
+Note: we need to be able to set the control and transport protocols.
+On outgoing connections, this is set once we know the destination address.
+On incoming connections, this is set the earliest possible (once we know
+the source address).
+
+The problem analysed below was solved on 2013/10/22
+
+| ==> the real requirement is to know whether a connection is still valid or not
+| before deciding to close it. CO_FL_CONNECTED could be enough, though it
+| will not indicate connections that are still waiting for a connect to occur.
+| This combined with CO_FL_WAIT_L4_CONN and CO_FL_WAIT_L6_CONN should be OK.
+|
+| Alternatively, conn->xprt could be used for this, but needs some careful checks
+| (it's used by conn_full_close at least).
+|
+| Right now, conn_xprt_close() checks conn->xprt and sets it to NULL.
+| conn_full_close() also checks conn->xprt and sets it to NULL, except
+| that the check on ctrl is performed within xprt. So conn_xprt_close()
+| followed by conn_full_close() will not close the file descriptor.
+| Note that conn_xprt_close() is never called, maybe we should kill it ?
+|
+| Note: at the moment, it's problematic to leave conn->xprt to NULL before doing
+| xprt_init() because we might end up with a pending file descriptor. Or at
+| least with some transport not de-initialized. We might thus need
+| conn_xprt_close() when conn_xprt_init() fails.
+|
+| The fd should be conditioned by ->ctrl only, and the transport layer by ->xprt.
+|
+| - conn_prepare_ctrl(conn, ctrl)
+| - conn_prepare_xprt(conn, xprt)
+| - conn_prepare_data(conn, data)
+|
+| Note: conn_xprt_init() needs conn->xprt so it's not a problem to set it early.
+|
+| One problem might be with conn_xprt_close() not being able to know if xprt_init()
+| was called or not. That's where it might make sense to only set ->xprt during init.
+| Except that it does not fly with outgoing connections (xprt_init is called after
+| connect()).
+|
+| => currently conn_xprt_close() is only used by ssl_sock.c and decides whether
+| to do something based on ->xprt_ctx which is set by ->init() from xprt_init().
+| So there is nothing to worry about. We just need to restore conn_xprt_close()
+| and rely on ->ctrl to close the fd instead of ->xprt.
+|
+| => we have the same issue with conn_ctrl_close() : when is the fd supposed to be
+| valid ? On outgoing connections, the control is set much before the fd...
--- /dev/null
+2014/10/28 - Server connection sharing
+
+For HTTP/2 we'll have to use multiplexed connections to the servers and to
+share them between multiple streams. We'll also have to do this for H/1, but
+with some variations since H1 doesn't offer connection status verification.
+
+In order to validate that an idle connection is still usable, it is desirable
+to periodically send health checks over it. Normally, idle connections are
+meant to be heavily used, so there is no reason for having them idle for a long
+time. Thus we have two possibilities :
+
+ - either we time them out after some inactivity, this saves server resources ;
+ - or we check them after some inactivity. For this we can send the server-
+ side HTTP health check (only when the server uses HTTP checks), and avoid
+ using that to mark the server down, and instead consider the connection as
+ dead.
+
+For HTTP/2 we'll have to send pings periodically over these connections, so
+it's worth considering a per-connection task to validate that the channel still
+works.
+
+In the current model, a connection necessarily belongs to a session, so it's
+not really possible to share them, at best they can be exchanged, but that
+doesn't make much sense as it means that it could disturb parallel traffic.
+
+Thus we need to have a per-server list of idle connections and a max-idle-conn
+setting to kill them when there are too many. In the case of H/1 it is also
+advisable to consider that if a connection was created to pass a first non-
+idempotent request while other idle connections still existed, then a
+connection will have to be killed in order not to exceed the limit.
+
--- /dev/null
+2014/10/30 - dynamic buffer management
+
+Since HTTP/2 processing will significantly increase the need for buffering, it
+becomes mandatory to be able to support dynamic buffer allocation. This also
+means that at any moment some buffer allocation will fail and that a task or an
+I/O operation will have to be paused for the time needed to allocate a buffer.
+
+There are 3 places where buffers are needed :
+
+ - receive side of a stream interface. A connection notifies about a pending
+ recv() and the SI calls the receive function to put the data into a buffer.
+ Here the buffer will have to be picked from a pool first, and if the
+ allocation fails, the I/O will have to temporarily be disabled, the
+ connection will have to subscribe for buffer release notification to be
+ woken up once a buffer is available again. It's important to keep in mind
+ that buffer availability doesn't necessarily mean a desire to enable recv
+ again, just that recv is not paused anymore for resource reasons.
+
+ - receive side of a stream interface when the other end point is an applet.
+ The applet wants to write into the buffer and for this the buffer needs to
+ be allocated as well. It is the same as above except that it is the applet
+ which is put to a pause. Since the applet might be at the core of the task
+ itself, it could become tricky to handle the situation correctly. Stats and
+ peers are in this situation.
+
+ - Tx of a task : some tasks perform spontaneous writes to a buffer. Checks
+ are an example of this. The checks will have to be able to sleep while a
+ buffer is being awaited.
+
+One important point is that such pauses must not prevent the task from timing
+out. This is where it becomes difficult because, in the case of a timeout, we could
+want to emit a timeout error message and for this, require a buffer. So it is
+important to keep the ability not to send messages upon error processing, and
+to be able to give up and stop waiting for buffers.
+
+The refill mechanism needs to be designed in a thread-safe way because this
+will become one of the rare cases of inter-task activity. Thus it is important
+to ensure that checking the state of the task and passing of the freshly
+released buffer are performed atomically, and that in case the task doesn't
+want it anymore, it is responsible for passing it to the next one.
+
--- /dev/null
+2012/07/05 - Connection layering and sequencing
+
+
+An FD has a state :
+ - CLOSED
+ - READY
+ - ERROR (?)
+ - LISTEN (?)
+
+A connection has a state :
+ - CLOSED
+ - ACCEPTED
+ - CONNECTING
+ - ESTABLISHED
+ - ERROR
+
+A stream interface has a state :
+ - INI, REQ, QUE, TAR, ASS, CON, CER, EST, DIS, CLO
+
+Note that CON and CER might be replaced by EST if the connection state is used
+instead. CON might even be more suited than EST to indicate that a connection
+is known.
+
+
+si_shutw() must do :
+
+ data_shutw()
+ if (shutr) {
+ data_close()
+ ctrl_shutw()
+ ctrl_close()
+ }
+
+si_shutr() must do :
+ data_shutr()
+ if (shutw) {
+ data_close()
+ ctrl_shutr()
+ ctrl_close()
+ }
+
+Each of these steps may fail, in which case the step must be retained and the
+operations postponed in an asynchronous task.
+
+The first asynchronous data_shut() might already fail so it is mandatory to
+save the other side's status with the connection in order to let the async task
+know whether the 3 next steps must be performed.
+
+The connection (or perhaps the FD) needs to know :
+ - the desired close operations : DSHR, DSHW, CSHR, CSHW
+ - the completed close operations : DSHR, DSHW, CSHR, CSHW
+
+
+On the accept() side, we probably need to know :
+ - if a header is expected (eg: accept-proxy)
+ - if this header is still being waited for
+ => maybe both info might be combined into one bit
+
+ - if a data-layer accept() is expected
+ - if a data-layer accept() has been started
+ - if a data-layer accept() has been performed
+ => possibly 2 bits, to indicate the need to free()
+
+On the connect() side, we need to know :
+ - the desire to send a header (eg: send-proxy)
+ - if this header has been sent
+ => maybe both info might be combined
+
+ - if a data-layer connect() is expected
+ - if a data-layer connect() has been started
+ - if a data-layer connect() has been completed
+ => possibly 2 bits, to indicate the need to free()
+
+On the response side, we also need to know :
+ - the desire to send a header (eg: health check response for monitor-net)
+ - if this header was sent
+ => might be the same as sending a header over a new connection
+
+Note: monitor-net has precedence over proxy proto and data layers. Same for
+ health mode.
+
+For multi-step operations, use 2 bits :
+ 00 = operation not desired, not performed
+ 10 = operation desired, not started
+ 11 = operation desired, started but not completed
+ 01 = operation desired, started and completed
+
+ => X != 00 ==> operation desired
+ X & 01 ==> operation at least started
+ X & 10 ==> operation not completed
+
+Note: no way to store status information for error reporting.
+
+Note2: it would be nice if "tcp-request connection" rules could work at the
+connection level, just after headers ! This means support for tracking stick
+tables, possibly not too complicated.
+
+
+Proposal for incoming connection sequence :
+
+- accept()
+- if monitor-net matches or if mode health => try to send response
+- if accept-proxy, wait for proxy request
+- if tcp-request connection, process tcp rules and possibly keep the
+ pointer to stick-table
+- if SSL is enabled, switch to SSL handshake
+- then switch to DATA state and instantiate a session
+
+We just need a map of handshake handlers on the connection. They all manage the
+FD status themselves and set the callbacks themselves. If their work succeeds,
+they remove themselves from the list. If it fails, they remain subscribed and
+enable the required polling until they are woken up again or the timeout strikes.
+
+Identified handshake handlers for incoming connections :
+ - HH_HEALTH (tries to send OK and dies)
+ - HH_MONITOR_IN (matches src IP and adds/removes HH_SEND_OK/HH_SEND_HTTP_OK)
+ - HH_SEND_OK (tries to send "OK" and dies)
+ - HH_SEND_HTTP_OK (tries to send "HTTP/1.0 200 OK" and dies)
+ - HH_ACCEPT_PROXY (waits for PROXY line and parses it)
+ - HH_TCP_RULES (processes TCP rules)
+ - HH_SSL_HS (starts SSL handshake)
+  - HH_ACCEPT_SESSION (instantiates a session)
+
+Identified handshake handlers for outgoing connections :
+ - HH_SEND_PROXY (tries to build and send the PROXY line)
+ - HH_SSL_HS (starts SSL handshake)
+
+For the pollers, we could check that handshake handlers are not 0 and decide to
+call a generic connection handshake handler instead of the usual callbacks. The
+problem is that pollers don't know about connections, they know fds. So entities
+which manage handlers should update the FD callbacks accordingly.
+
+With a bit of care, we could have :
+ - HH_SEND_LAST_CHUNK (sends the chunk pointed to by a pointer and dies)
+ => merges HEALTH, SEND_OK and SEND_HTTP_OK
+
+It sounds like the ctrl vs data states for the connection are per-direction
+(eg: support an async ctrl shutw while still reading data).
+
+Also support shutr/shutw status at L4/L7.
+
+In practice, what we really need is :
+
+shutdown(conn) =
+ conn.data.shut()
+ conn.ctrl.shut()
+ conn.fd.shut()
+
+close(conn) =
+ conn.data.close()
+ conn.ctrl.close()
+ conn.fd.close()
+
+With SSL over Remote TCP (RTCP + RSSL) to reach the server, we would have :
+
+ HTTP -> RTCP+RSSL connection <-> RTCP+RRAW connection -> TCP+SSL connection
+
+The connection has to be closed at 3 places after a successful response :
+ - DATA (RSSL over RTCP)
+ - CTRL (RTCP to close connection to server)
+ - SOCK (FD to close connection to second process)
+
+Externally, the connection is seen with very few flags :
+ - SHR
+ - SHW
+ - ERR
+
+We don't need a CLOSED flag as a connection must always be detached when it's closed.
+
+The internal status doesn't need to be exposed :
+ - FD allocated (Y/N)
+ - CTRL initialized (Y/N)
+ - CTRL connected (Y/N)
+ - CTRL handlers done (Y/N)
+ - CTRL failed (Y/N)
+ - CTRL shutr (Y/N)
+ - CTRL shutw (Y/N)
+ - DATA initialized (Y/N)
+ - DATA connected (Y/N)
+ - DATA handlers done (Y/N)
+ - DATA failed (Y/N)
+ - DATA shutr (Y/N)
+ - DATA shutw (Y/N)
+
+(note that having flags for operations needing to be completed might be easier)
+--------------
+
+Maybe we need to be able to call conn->fdset() and conn->fdclr() but it sounds
+very unlikely since the only functions manipulating this are in the code of
+the data/ctrl handlers.
+
+FDSET/FDCLR cannot be directly controlled by the stream interface since it also
+depends on the DATA layer (WANT_READ/WANT_WRITE).
+
+But FDSET/FDCLR is probably controlled by who owns the connection (eg: DATA).
+
+Example: an SSL conn relies on an FD. The buffer is full, and wants the conn to
+stop reading. It must not stop the FD itself. It is the read function which
+should notice that it has nothing to do with a read wake-up, which needs to
+disable reading.
+
+Conversely, when calling conn->chk_rcv(), the reader might get a WANT_READ or
+even WANT_WRITE and adjust the FDs accordingly.
+
+------------------------
+
+OK, the problem is simple : we don't manipulate the FD at the right level.
+We should have :
+ ->connect(), ->chk_snd(), ->chk_rcv(), ->shutw(), ->shutr() which are
+ called from the upper layer (buffer)
+ ->recv(), ->send(), called from the lower layer
+
+Note that the SHR is *reported* by lower layer but can be forced by upper
+layer. In this case it's like a delayed abort. The difficulty consists in
+knowing whether the output data were correctly read. We'd probably need to drain
+incoming data past the active shutr().
+
+The only four purposes of the top-down shutr() call are :
+ - acknowledge a shut read report : could probably be done better
+ - read timeout => disable reading : it's a delayed abort. We want to
+ report that the buffer is SHR, maybe even the connection, but the
+ FD clearly isn't.
+ - read abort due to error on the other side or desire to close (eg:
+ http-server-close) : delayed abort
+ - complete abort
+
+The active shutr() is problematic as we can't disable reading if we expect some
+exchanges for data acknowledgement. We probably need to drain data only until
+the shutw() has been performed and ACKed.
+
+A connection shut down for read would behave like this :
+
+ 1) bidir exchanges
+
+ 2) shutr() => read_abort_pending=1
+
+ 3) drain input, still send output
+
+ 4) shutw()
+
+ 5) drain input, wait for read0 or ack(shutw)
+
+ 6) close()
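The sequence above can be sketched as a tiny state machine (state and function names are purely illustrative, not actual HAProxy identifiers):

```c
/* Sketch of the delayed read-abort sequence described above.
 * All names are illustrative, not actual HAProxy identifiers. */
enum rd_abort_state {
    RD_BIDIR,        /* 1) bidirectional exchanges */
    RD_DRAIN,        /* 2-3) shutr() requested: drain input, keep sending */
    RD_DRAIN_SHUTW,  /* 4-5) shutw() done: drain until read0 or ack(shutw) */
    RD_CLOSED        /* 6) close() */
};

/* Advance the state machine when the corresponding event occurs. */
enum rd_abort_state rd_abort_next(enum rd_abort_state s, int shutr_req,
                                  int output_flushed, int read0_or_ack)
{
    switch (s) {
    case RD_BIDIR:       return shutr_req      ? RD_DRAIN       : s;
    case RD_DRAIN:       return output_flushed ? RD_DRAIN_SHUTW : s;
    case RD_DRAIN_SHUTW: return read0_or_ack   ? RD_CLOSED      : s;
    default:             return s;
    }
}
```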
+
+--------------------- 2012/07/05 -------------------
+
+Communications must be performed this way :
+
+ connection <-> channel <-> connection
+
+A channel is composed of flags and stats, and may store data in either a buffer
+or a pipe. We need low-layer operations between sockets and buffers or pipes.
+Right now we only support sockets, but later we might support remote sockets
+and maybe pipes or shared memory segments.
+
+So we need :
+
+ - raw_sock_to_buf() => receive raw data from socket into buffer
+ - raw_sock_to_pipe => receive raw data from socket into pipe (splice in)
+ - raw_sock_from_buf() => send raw data from buffer to socket
+ - raw_sock_from_pipe => send raw data from pipe to socket (splice out)
+
+ - ssl_sock_to_buf() => receive ssl data from socket into buffer
+ - ssl_sock_to_pipe => receive ssl data from socket into a pipe (NULL)
+ - ssl_sock_from_buf() => send ssl data from buffer to socket
+ - ssl_sock_from_pipe => send ssl data from pipe to socket (NULL)
+
+These functions should set such status flags :
+
+#define ERR_IN 0x01
+#define ERR_OUT 0x02
+#define SHUT_IN 0x04
+#define SHUT_OUT 0x08
+#define EMPTY_IN 0x10
+#define FULL_OUT 0x20
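As an illustration, a read-side function in the raw_sock_to_buf() spirit could report these flags as follows (the buffer type and all names here are a simplified sketch, not HAProxy's actual structures):

```c
#include <errno.h>
#include <unistd.h>

/* Minimal illustrative buffer; HAProxy's real struct buffer differs. */
struct xfer_buf {
    char data[16384];
    size_t len;
};

#define ERR_IN   0x01
#define SHUT_IN  0x04
#define FULL_OUT 0x20

/* Sketch of a raw_sock_to_buf()-style function: read from fd into buf
 * and report status flags. "out" here is the buffer side of the transfer. */
int raw_sock_to_buf_sketch(int fd, struct xfer_buf *buf)
{
    int flags = 0;
    size_t room = sizeof(buf->data) - buf->len;

    if (!room)
        return FULL_OUT;

    ssize_t ret = read(fd, buf->data + buf->len, room);
    if (ret > 0) {
        buf->len += (size_t)ret;
        if (buf->len == sizeof(buf->data))
            flags |= FULL_OUT;
    }
    else if (ret == 0)
        flags |= SHUT_IN;    /* read0: peer shut down */
    else if (errno != EAGAIN && errno != EINTR)
        flags |= ERR_IN;     /* real error, not a simple wouldblock */
    return flags;
}
```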
+
--- /dev/null
+How does it work? (unfinished and inexact)
+
+For TCP and HTTP :
+
+- listeners create listening sockets with a READ callback pointing to the
+ protocol-specific accept() function.
+
+- the protocol-specific accept() function then accept()'s the connection and
+ instantiates a "server TCP socket" (which is dedicated to the client side),
+ and configures it (non_block, get_original_dst, ...).
+
+For TCP :
+- in case of pure TCP, a request buffer is created, as well as a "client TCP
+ socket", which tries to connect to the server.
+
+- once the connection is established, the response buffer is allocated and
+ connected to both ends.
+
+- both sockets are set to "autonomous mode" so that they only wake up their
+ supervising session when they encounter a special condition (error or close).
+
+
+For HTTP :
+- in case of HTTP, a request buffer is created with the "HOLD" flag set and
+ a read limit to support header rewriting (maybe this one will be removed
+ eventually because it's better to limit only to the buffer size and report
+ an error when rewritten data overflows)
+
+- a "flow analyzer" is attached to the buffer (or possibly multiple flow
+ analyzers). For the request, the flow analyzer is "http_lb_req". The flow
+ analyzer is a function which gets called when new data is present and
+ blocked. It has a timeout (request timeout). It can also be bypassed on
+ demand.
+
+- when the "http_lb_req" has received the whole request, it creates a client
+ socket with all the parameters needed to try to connect to the server. When
+ the connection establishes, the response buffer is allocated on the fly,
+ put to HOLD mode, and an "http_lb_resp" flow analyzer is attached to the
+ buffer.
+
+
+For client-side HTTPS :
+
+- the accept() function must completely instantiate a TCP socket + an SSL
+ reader. It is when the SSL session is complete that we call the
+ protocol-specific accept(), and create its buffer.
+
+
+
+
+Conclusions
+-----------
+
+- we need a generic TCP accept() function with a lot of flags set by the
+ listener, to tell it what info we need to get at the accept() time, and
+ what flags will have to be set on the socket.
+
+- once the TCP accept() function ends, it wakes up the protocol supervisor
+ which is in charge of creating the buffers, switching states, etc...
+
--- /dev/null
+2014/10/23 - design thoughts for HTTP/2
+
+- connections : HTTP/2 depends a lot more on a connection than HTTP/1 because a
+ connection holds a compression context (headers table, etc...). We probably
+ need to have an h2_conn struct.
+
+- multiple transactions will be handled in parallel for a given h2_conn. They
+ are called streams in HTTP/2 terminology.
+
+- multiplexing : for a given client-side h2 connection, we can have multiple
+ server-side h2 connections. And for a server-side h2 connection, we can have
+ multiple client-side h2 connections. Streams circulate in N-to-N fashion.
+
+- flow control : flow control will be applied between multiple streams. Special
+ care must be taken so that an H2 client cannot block some H2 servers by
+ sending requests spread over multiple servers to the point where one server
+ response is blocked and prevents other responses from the same server from
+ reaching their clients. H2 connection buffers must always be empty or nearly
+ empty. The per-stream flow control needs to be respected as well as the
+ connection's buffers. It is important to implement some fairness between all
+ the streams so that it's not always the same which gets the bandwidth when
+ the connection is congested.
+
+- some clients can be H1 with an H2 server (is this really needed ?). Most of
+ the initial use case will be H2 clients to H1 servers. It is important to keep
+ in mind that H1 servers do not do flow control and that we don't want them to
+ block transfers (eg: post upload).
+
+- internal tasks : some H2 clients will be internal tasks (eg: health checks).
+ Some H2 servers will be internal tasks (eg: stats, cache). The model must be
+ compatible with this use case.
+
+- header indexing : headers are transported compressed, with a reference to a
+ static or a dynamic header, or a literal, possibly huffman-encoded. Indexing
+ is specific to the H2 connection. This means there is no way any binary data
+ can flow between both sides, headers will have to be decoded according to the
+ incoming connection's context and re-encoded according to the outgoing
+ connection's context, which can significantly differ. In order to avoid the
+ parsing trouble we currently face, headers will have to be clearly split
+ between name and value. It is worth noting that neither the incoming nor the
+ outgoing connections' contexts will be of any use while processing the
+ headers. At best we can have some shortcuts for well-known names that map
+ well to the static ones (eg: use the first static entry with same name), and
+ maybe have a few special cases for static name+value as well. Probably we can
+ classify headers in such categories :
+
+ - static name + value
+ - static name + other value
+ - dynamic name + other value
+
+ This will allow for better processing in some specific cases. Headers
+ supporting a single value (:method, :status, :path, ...) should probably
+ be stored in a single location with a direct access. That would allow us
+ to retrieve a method using hdr[METHOD]. All such indexing must be performed
+ while parsing. That also means that HTTP/1 will have to be converted to this
+ representation very early in the parser and possibly converted back to H/1
+ after processing.
+
+ Header names/values will have to be placed in a small memory area that will
+ inevitably get fragmented as headers are rewritten. An automatic packing
+ mechanism must be implemented so that when there's no more room, headers are
+ simply defragmented/packed into a new table and the old one is released. Just
+ like for the static chunks, we need to have a few such tables pre-allocated
+ and ready to be swapped at any moment. Repacking must not change any index
+ nor affect the way headers are compressed so that it can happen late after a
+ retry (send-name-header for example).
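A minimal sketch of the hdr[METHOD]-style direct access could look like this (all names hypothetical):

```c
#include <string.h>

/* Sketch of the direct-access storage for single-valued pseudo-headers
 * discussed above. All names are hypothetical. */
enum shdr { METHOD, STATUS, PATH, SCHEME, AUTHORITY, SHDR_COUNT };

struct hdr_idx_sketch {
    const char *hdr[SHDR_COUNT];   /* direct slots, filled while parsing */
};

/* Map a pseudo-header name to its slot, or -1 if multi-valued/unknown. */
int shdr_slot(const char *name)
{
    static const char *names[SHDR_COUNT] = {
        ":method", ":status", ":path", ":scheme", ":authority"
    };
    for (int i = 0; i < SHDR_COUNT; i++)
        if (strcmp(name, names[i]) == 0)
            return i;
    return -1;
}
```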
+
+- header processing : can still happen on a (header, value) basis. Reqrep/
+ rsprep completely disappear and will have to be replaced with something else
+ to support renaming headers and rewriting url/path/...
+
+- push_promise : servers can push dummy requests+responses. They advertise
+ the promised stream ID in a push_promise frame sent on the associated stream.
+ This means that it is possible to initiate a client-server stream from the
+ information coming from the server and make the data flow as if the client
+ had made it. It's likely that we'll have to support two types of server
+ connections: those which support push and those which do not. That way client
+ streams will be distributed to existing server connections based on their
+ capabilities. It's important to keep in mind that PUSH will not be rewritten
+ in responses.
+
+- stream ID mapping : since the stream ID is per H2 connection, stream IDs will
+ have to be mapped. Thus a given stream is an entity with two IDs (one per
+ side). Or more precisely a stream has two end points, each one carrying an ID
+ when it ends on an HTTP2 connection. Also, for each stream ID we need to
+ quickly find the associated transaction in progress. Using a small quick
+ unique tree seems indicated considering the wide range of valid values.
+
+- frame sizes : frames have to be remapped between both sides as multiplexed
+ connections won't always have the same characteristics. Thus some frames
+ might be spliced and others will be sliced.
+
+- error processing : care must be taken to never break a connection unless it
+ is dead or corrupt at the protocol level. Stats counters must exist to observe
+ the causes. Timeouts are a great problem because silent connections might
+ die out of inactivity. Ping frames should probably be scheduled a few seconds
+ before the connection timeout so that an unused connection is verified before
+ being killed. Abnormal requests must be dealt with using RST_STREAM.
+
+- ALPN : ALPN must be observed on the client side, and transmitted to the server
+ side.
+
+- proxy protocol : proxy protocol makes little to no sense in a multiplexed
+ protocol. A per-stream equivalent will surely be needed if implementations
+ do not quickly generalize the use of the Forwarded header.
+
+- simplified protocol for local devices (eg: haproxy->varnish in clear and
+ without handshake, and possibly even with splicing if the connection's
+ settings are shared)
+
+- logging : logging must report a number of extra information such as the
+ stream ID, and whether the transaction was initiated by the client or by the
+ server (which can be deduced from the stream ID's parity). In case of push,
+ the number of the associated stream must also be reported.
+
+- memory usage : H2 increases memory usage by mandating a minimum frame size
+ of 16384 bytes. That means slightly more than 16kB of buffer in each
+ direction to process any frame. It will definitely have an impact on the
+ deployed maxconn setting in places using less than this (4..8kB are common).
+ Also, the header list is persistent per connection, so if we reach the same
+ size as the request, that's another 16kB in each direction, resulting in
+ about 48kB of memory where 8 were previously used. A more careful encoder
+ can work with a much smaller set even if that implies evicting entries
+ between multiple headers of the same message.
+
+- HTTP/1.0 should very carefully be transported over H2. Since there's no way
+ to pass version information in the protocol, the server could use some
+ features of HTTP/1.1 that are unsafe in HTTP/1.0 (compression, trailers,
+ ...).
+
+- host / :authority : ":authority" is the norm, and "host" will be absent when
+ H2 clients generate :authority. This probably means that a dummy Host header
+ will have to be produced internally from :authority and removed when passing
+ to H2 behind. This can cause some trouble when passing H2 requests to H1
+ proxies, because there's no way to know if the request should contain scheme
+ and authority in H1 or not based on the H2 request. Thus a "proxy" option
+ will have to be explicitly mentioned on HTTP/1 server lines. One of the
+ problems it creates is that it is no longer possible to pass H/1 requests
+ to H/1 proxies without an explicit configuration. Maybe a table of the
+ various combinations is needed.
+
+ :scheme :authority host
+ HTTP/2 request present present absent
+ HTTP/1 server req absent absent present
+ HTTP/1 proxy req present present present
+
+ So in the end the issue is only with H/2 requests passed to H/1 proxies.
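The table above could translate into a small helper, sketched here with hypothetical names:

```c
/* Which of :scheme, :authority and Host to emit, per the table above.
 * Names are hypothetical, for illustration only. */
struct hdr_presence { int scheme, authority, host; };

enum req_kind { H2_REQUEST, H1_SERVER_REQ, H1_PROXY_REQ };

struct hdr_presence hdr_presence_for(enum req_kind k)
{
    switch (k) {
    case H2_REQUEST:    return (struct hdr_presence){1, 1, 0};
    case H1_SERVER_REQ: return (struct hdr_presence){0, 0, 1};
    default:            return (struct hdr_presence){1, 1, 1}; /* H1 proxy */
    }
}
```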
+
+- ping frames : they don't indicate any stream ID so by definition they cannot
+ be forwarded to any server. The H2 connection should deal with them only.
+
+There's a layering problem with H2. The framing layer has to be aware of the
+upper layer semantics. We can't simply re-encode HTTP/1 to HTTP/2 then pass
+it over a framing layer to mux the streams, the frame type must be passed below
+so that frames are properly arranged. Header encoding is connection-based and
+all streams using the same connection will interact in the way their headers
+are encoded. Thus the encoder *has* to be placed in the h2_conn entity, and
+this entity has to know for each stream what its headers are.
+
+Probably that we should remove *all* headers from transported data and move
+them on the fly to a parallel structure that can be shared between H1 and H2
+and consumed at the appropriate level. That means buffers only transport data.
+Trailers have to be dealt with differently.
+
+So if we consider an H1 request being forwarded between a client and a server,
+it would look approximately like this :
+
+ - request header + body land into a stream's receive buffer
+ - headers are indexed and stripped out so that only the body and whatever
+ follows remain in the buffer
+ - both the header index and the buffer with the body stay attached to the
+ stream
+ - the sender can rebuild the whole headers. Since they're found in a table
+ supposed to be stable, it can rebuild them as many times as desired and
+ will always get the same result, so it's safe to build them into the trash
+ buffer for immediate sending, just as we do for the PROXY protocol.
+ - the upper protocol should probably provide a build_hdr() callback which
+ when called by the socket layer, builds this header block based on the
+ current stream's header list, ready to be sent.
+ - the socket layer has to know how many bytes from the headers are left to be
+ forwarded prior to processing the body.
+ - the socket layer needs to consume only the acceptable part of the body and
+ must not release the buffer if any data remains in it (eg: pipelining over
+ H1). This is already handled by channel->o and channel->to_forward.
+ - we could possibly have another optional callback to send a preamble before
+ data, that could be used to send chunk sizes in H1. The danger is that it
+ absolutely needs to be stable if it has to be retried. But it could
+ considerably simplify de-chunking.
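A possible shape for the build_hdr() callback mentioned above, sketched with hypothetical names (the real signature would obviously differ):

```c
#include <stdio.h>
#include <stddef.h>

/* Sketch of the build_hdr() callback idea: the upper protocol rebuilds
 * the header block from a stable table into a scratch buffer, so it can
 * be rebuilt identically as many times as needed. Names hypothetical. */
struct hdr { const char *name, *value; };

typedef size_t (*build_hdr_cb)(const struct hdr *list, size_t n,
                               char *out, size_t room);

static size_t build_h1_hdrs(const struct hdr *list, size_t n,
                            char *out, size_t room)
{
    size_t len = 0;
    for (size_t i = 0; i < n; i++) {
        int ret = snprintf(out + len, room - len, "%s: %s\r\n",
                           list[i].name, list[i].value);
        if (ret < 0 || (size_t)ret >= room - len)
            return 0;       /* not enough room: caller may retry later */
        len += (size_t)ret;
    }
    return len;
}
```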
+
+When the request is sent to an H2 server, an H2 stream request must be made
+to the server, we find an existing connection whose settings are compatible
+with our needs (eg: tls/clear, push/no-push), and with a spare stream ID. If
+none is found, a new connection must be established, unless maxconn is reached.
+
+Servers must have a maxstream setting just like they have a maxconn. The same
+queue may be used for that.
+
+The "tcp-request content" ruleset must apply to the TCP layer. But with HTTP/2
+that becomes impossible (and useless). We still need something like the
+"tcp-request session" hook to apply just after the SSL handshake is done.
+
+It is impossible to defragment the body on the fly in HTTP/2. Since multiple
+messages are interleaved, we cannot wait for all of them and block the head of
+line. Thus if body analysis is required, it will have to use the stream's
+buffer, which necessarily implies a copy. That means that with each H2 end we
+necessarily have at least one copy. Sometimes we might be able to "splice" some
+bytes from one side to the other without copying into the stream buffer (same
+rules as for TCP splicing).
+
+In theory, only data should flow through the channel buffer, so each side's
+connector is responsible for encoding data (H1: linear/chunks, H2: frames).
+Maybe the same mechanism could be extrapolated to tunnels / TCP.
+
+Since we'd use buffers only for data (and for receipt of headers), we need to
+have dynamic buffer allocation.
+
+Thus :
+- Tx buffers do not exist. We allocate a buffer on the fly when we're ready to
+ send something that we need to build and that needs to be persistent in case
+ of partial send. H1 headers are built on the fly from the header table to a
+ temporary buffer that is immediately sent and whose amount of sent bytes is
+ the only information kept (like for PROXY protocol). H2 headers are more
+ complex since the encoding depends on what was successfully sent. Thus we
+ need to build them and put them into a temporary buffer that remains
+ persistent in case send() fails. It is possible to have a limited pool of
+ Tx buffers and refrain from sending if there is no more buffer available in
+ the pool. In that case we need a wake-up mechanism once a buffer is
+ available. Once the data are sent, the Tx buffer is then immediately recycled
+ in its pool. Note that no tx buffer being used (eg: for hdr or control) means
+ that we have to be able to serialize access to the connection and retry with
+ the same stream. It also means that a stream that times out while waiting for
+ the connector to read the second half of its request has to stay there, or at
+ least needs to be handled gracefully. However if the connector cannot read
+ the data to be sent, it means that the buffer is congested and the connection
+ is dead, so that probably means it can be killed.
+
+- Rx buffers have to be pre-allocated just before calling recv(). A connection
+ will first try to pick a buffer and disable reception if it fails, then
+ subscribe to the list of tasks waiting for an Rx buffer.
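A minimal sketch of this pick-or-subscribe logic, with purely illustrative structures:

```c
#include <stddef.h>

/* Sketch of the "pick a buffer or subscribe to the waiters list" logic
 * described above. Structures and names are hypothetical, not HAProxy's. */
struct rx_buf { struct rx_buf *next; };
struct waiter { struct waiter *next; };

struct buf_pool {
    struct rx_buf *free;     /* free Rx buffers */
    struct waiter *waiters;  /* tasks to wake when a buffer is released */
};

/* Returns a buffer, or NULL after queuing the task for a later wake-up. */
struct rx_buf *rx_buf_get(struct buf_pool *p, struct waiter *task)
{
    if (p->free) {
        struct rx_buf *b = p->free;
        p->free = b->next;
        return b;
    }
    task->next = p->waiters;  /* no buffer: disable reception and wait */
    p->waiters = task;
    return NULL;
}
```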
+
+- full Rx buffers might sometimes be moved around to the next buffer instead of
+ experiencing a copy. That means that channels and connectors must use the
+ same format of buffer, and that only the channel will have to see its
+ pointers adjusted.
+
+- Tx of data should be made as much as possible without copying. That possibly
+ means by directly looking into the connection buffer on the other side if
+ the local Tx buffer does not exist and the stream buffer is not allocated, or
+ even performing a splice() call between the two sides. One of the problem in
+ doing this is that it requires proper ordering of the operations (eg: when
+ multiple readers are attached to a same buffer). If the splitting occurs upon
+ receipt, there's no problem. If we expect to retrieve data directly from the
+ original buffer, it's harder since it contains various things in an order
+ which does not even indicate what belongs to whom. Thus possibly the only
+ mechanism to implement is the buffer permutation which guarantees zero-copy
+ and only in the 100% safe case. Also it's atomic and does not cause HOL
+ blocking.
+
+It makes sense to choose the frontend_accept() function right after the
+handshake ended. It is then possible to check the ALPN, the SNI, the ciphers
+and to accept to switch to the h2_conn_accept handler only if everything is OK.
+The h2_conn_accept handler will have to deal with the connection setup,
+initialization of the header table, exchange of the settings frames and
+preparing whatever is needed to fire new streams upon receipt of unknown
+stream IDs. Note: most of the time it will not be possible to splice() because
+we need to know the amount of bytes in advance in order to build the frame
+header, and here it will not be possible.
+
+H2 health checks must be seen as regular transactions/streams. The check runs a
+normal client which seeks an available stream from a server. The server then
+finds one on an existing connection or initiates a new H2 connection. The H2
+checks will have to be configurable for sharing streams or not. Another option
+could be to specify how many requests can be made over existing connections
+before insisting on getting a separate connection. Note that such separate
+connections might end up stacking up once released. So probably that they need
+to be recycled very quickly (eg: fix how many unused ones can exist max).
+
--- /dev/null
+Excellent paper about page load time for keepalive on/off, pipelining,
+multiple host names, etc...
+
+http://www.die.net/musings/page_load_time/
+
--- /dev/null
+2010/01/24 - Design of multi-criteria request rate shaping.
+
+We want to be able to rate-shape traffic on multiple criteria. For instance, we
+may want to support shaping of per-host header requests, as well as per source.
+
+In order to achieve this, we will use checkpoints, one per criterion. Each of
+these checkpoints will consist of a test, a rate counter and a queue.
+
+A request reaches the checkpoint and checks the counter. If the counter is
+below the limit, it is updated and the request continues. If the limit is
+reached, the request attaches itself into the queue and sleeps. The sleep time
+is computed from the queue status, and updates the queue status.
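A minimal sketch of such a checkpoint counter over one-second periods (all names hypothetical):

```c
/* Sketch of a checkpoint's frequency counter over one-second periods,
 * as described above. Names are hypothetical. */
struct freq_ctr {
    long curr_sec;       /* start of the current period */
    unsigned curr_ctr;   /* events seen in the current period */
    unsigned limit;      /* max events per period */
};

/* Returns 1 if the request may pass (counter updated), 0 if it must
 * attach itself to the queue and sleep until the next period. */
int checkpoint_pass(struct freq_ctr *c, long now_sec)
{
    if (now_sec != c->curr_sec) {
        c->curr_sec = now_sec;   /* new period: reset the counter */
        c->curr_ctr = 0;
    }
    if (c->curr_ctr >= c->limit)
        return 0;                /* over the limit: go to the queue */
    c->curr_ctr++;
    return 1;
}
```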
+
+A task is dedicated to each queue. Its sole purpose is to be woken up when the
+next task may wake up, to check the frequency counter, wake as many requests as
+possible and update the counter. All the woken up requests are detached from
+the queue. Maybe the task dedicated to the queue can be avoided and replaced
+with all queued tasks' sleep counters, though this looks tricky. Or maybe it's
+just the first request in the queue that should be responsible for waking up
+other tasks, and not to forget to pass on this responsibility to next tasks if
+it leaves the queue.
+
+The woken up request then goes on evaluating other criteria and possibly sleeps
+again on another one. In the end, the task will have waited the amount of time
+required to pass all checkpoints, and all checkpoints will be able to maintain
+a permanent load of exactly their limit if enough streams flow through them.
+
+Since a request can only sleep in one queue at a time, it makes sense to use a
+linked list element in each session to attach it to any queue. It could very
+well be shared with the pendconn hooks which could then be part of the session.
+
+This mechanism could be used to rate-shape sessions and requests per backend
+and per server.
+
+When rate-shaping on dynamic criteria, such as the source IP address, we have
+to first extract the data pattern, then look it up in a table very similar to
+the stickiness tables, but with a frequency counter. At the checkpoint, the
+pattern is looked up, the entry created or refreshed, and the frequency counter
+updated and checked. Then the request either goes on or sleeps as described
+above, but if it sleeps, it's still in the checkpoint's queue, but with a date
+computed from the criterion's status.
+
+This means that we need 3 distinct features :
+
+ - optional pattern extraction
+ - per-pattern or per-queue frequency counter
+ - time-ordered queue with a task
+
+Based on past experiences with frequency counters, it does not appear very easy
+to exactly compute sleep delays in advance for multiple requests. So most
+likely we'll have to run per-criterion queues too, with only the head of the
+queue holding a wake-up timeout.
+
+This finally leads us to the following :
+
+ - optional pattern extraction
+ - per-pattern or per-queue frequency counter
+ - per-frequency counter queue
+ - head of the queue serves as a global queue timer.
+
+This brings us to a very flexible architecture :
+ - 1 list of rule-based checkpoints per frontend
+ - 1 list of rule-based checkpoints per backend
+ - 1 list of rule-based checkpoints per server
+
+Each of these lists has a lot of rules conditioned by ACLs, just like the
+use-backend rules, except that all rules are evaluated in turn.
+
+Since we might sometimes just want to enable that without setting any limit and
+just for enabling control in ACLs (or logging ?), we should probably try to
+find a flexible way of declaring just a counter without a queue.
+
+These checkpoints could be of two types :
+ - rate-limit (described here)
+ - concurrency-limit (very similar with the counter and no timer). This
+ feature would require to keep track of all accounted criteria in a
+ request so that they can be released upon request completion.
+
+It should be possible to define a max of requests in the queue, above which a
+503 is returned. The same applies for the max delay in the queue. We could have
+it per-task (currently it's the connection timeout) and abort tasks with a 503
+when the delay is exceeded.
+
+Per-server connection concurrency could be converted to use this mechanism
+which is very similar.
+
+The construct should be flexible enough so that the counters may be checked
+from ACLs. That would allow to reject connections or switch to an alternate
+backend when some limits are reached.
+
--- /dev/null
+Graph of the number of operations processed per unit of time with :
+ - a linear algorithm with a very low unit cost (0.01 time units)
+ - a log2 algorithm that is 5 times more expensive per operation (0.05 tu)
+
+set yrange [0:1]
+plot [0:1000] 1/(1+0.01*x), 1/(1+0.05*log(x+1)/log(2))
+
+Graph of the latency induced by these operations, in time units :
+
+set yrange [0:1000]
+plot [0:1000] x/(1+0.01*x), x/(1+0.05*log(x+1)/log(2))
+
+
--- /dev/null
+ GNU GENERAL PUBLIC LICENSE
+ Version 2, June 1991
+
+ Copyright (C) 1989, 1991 Free Software Foundation, Inc.
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+ Preamble
+
+ The licenses for most software are designed to take away your
+freedom to share and change it. By contrast, the GNU General Public
+License is intended to guarantee your freedom to share and change free
+software--to make sure the software is free for all its users. This
+General Public License applies to most of the Free Software
+Foundation's software and to any other program whose authors commit to
+using it. (Some other Free Software Foundation software is covered by
+the GNU Lesser General Public License instead.) You can apply it to
+your programs, too.
+
+ When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+this service if you wish), that you receive source code or can get it
+if you want it, that you can change the software or use pieces of it
+in new free programs; and that you know you can do these things.
+
+ To protect your rights, we need to make restrictions that forbid
+anyone to deny you these rights or to ask you to surrender the rights.
+These restrictions translate to certain responsibilities for you if you
+distribute copies of the software, or if you modify it.
+
+ For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must give the recipients all the rights that
+you have. You must make sure that they, too, receive or can get the
+source code. And you must show them these terms so they know their
+rights.
+
+ We protect your rights with two steps: (1) copyright the software, and
+(2) offer you this license which gives you legal permission to copy,
+distribute and/or modify the software.
+
+ Also, for each author's protection and ours, we want to make certain
+that everyone understands that there is no warranty for this free
+software. If the software is modified by someone else and passed on, we
+want its recipients to know that what they have is not the original, so
+that any problems introduced by others will not reflect on the original
+authors' reputations.
+
+ Finally, any free program is threatened constantly by software
+patents. We wish to avoid the danger that redistributors of a free
+program will individually obtain patent licenses, in effect making the
+program proprietary. To prevent this, we have made it clear that any
+patent must be licensed for everyone's free use or not licensed at all.
+
+ The precise terms and conditions for copying, distribution and
+modification follow.
+
+ GNU GENERAL PUBLIC LICENSE
+ TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+ 0. This License applies to any program or other work which contains
+a notice placed by the copyright holder saying it may be distributed
+under the terms of this General Public License. The "Program", below,
+refers to any such program or work, and a "work based on the Program"
+means either the Program or any derivative work under copyright law:
+that is to say, a work containing the Program or a portion of it,
+either verbatim or with modifications and/or translated into another
+language. (Hereinafter, translation is included without limitation in
+the term "modification".) Each licensee is addressed as "you".
+
+Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope. The act of
+running the Program is not restricted, and the output from the Program
+is covered only if its contents constitute a work based on the
+Program (independent of having been made by running the Program).
+Whether that is true depends on what the Program does.
+
+ 1. You may copy and distribute verbatim copies of the Program's
+source code as you receive it, in any medium, provided that you
+conspicuously and appropriately publish on each copy an appropriate
+copyright notice and disclaimer of warranty; keep intact all the
+notices that refer to this License and to the absence of any warranty;
+and give any other recipients of the Program a copy of this License
+along with the Program.
+
+You may charge a fee for the physical act of transferring a copy, and
+you may at your option offer warranty protection in exchange for a fee.
+
+ 2. You may modify your copy or copies of the Program or any portion
+of it, thus forming a work based on the Program, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+ a) You must cause the modified files to carry prominent notices
+ stating that you changed the files and the date of any change.
+
+ b) You must cause any work that you distribute or publish, that in
+ whole or in part contains or is derived from the Program or any
+ part thereof, to be licensed as a whole at no charge to all third
+ parties under the terms of this License.
+
+ c) If the modified program normally reads commands interactively
+ when run, you must cause it, when started running for such
+ interactive use in the most ordinary way, to print or display an
+ announcement including an appropriate copyright notice and a
+ notice that there is no warranty (or else, saying that you provide
+ a warranty) and that users may redistribute the program under
+ these conditions, and telling the user how to view a copy of this
+ License. (Exception: if the Program itself is interactive but
+ does not normally print such an announcement, your work based on
+ the Program is not required to print an announcement.)
+
+These requirements apply to the modified work as a whole. If
+identifiable sections of that work are not derived from the Program,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works. But when you
+distribute the same sections as part of a whole which is a work based
+on the Program, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Program.
+
+In addition, mere aggregation of another work not based on the Program
+with the Program (or with a work based on the Program) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+ 3. You may copy and distribute the Program (or a work based on it,
+under Section 2) in object code or executable form under the terms of
+Sections 1 and 2 above provided that you also do one of the following:
+
+ a) Accompany it with the complete corresponding machine-readable
+ source code, which must be distributed under the terms of Sections
+ 1 and 2 above on a medium customarily used for software interchange; or,
+
+ b) Accompany it with a written offer, valid for at least three
+ years, to give any third party, for a charge no more than your
+ cost of physically performing source distribution, a complete
+ machine-readable copy of the corresponding source code, to be
+ distributed under the terms of Sections 1 and 2 above on a medium
+ customarily used for software interchange; or,
+
+ c) Accompany it with the information you received as to the offer
+ to distribute corresponding source code. (This alternative is
+ allowed only for noncommercial distribution and only if you
+ received the program in object code or executable form with such
+ an offer, in accord with Subsection b above.)
+
+The source code for a work means the preferred form of the work for
+making modifications to it. For an executable work, complete source
+code means all the source code for all modules it contains, plus any
+associated interface definition files, plus the scripts used to
+control compilation and installation of the executable. However, as a
+special exception, the source code distributed need not include
+anything that is normally distributed (in either source or binary
+form) with the major components (compiler, kernel, and so on) of the
+operating system on which the executable runs, unless that component
+itself accompanies the executable.
+
+If distribution of executable or object code is made by offering
+access to copy from a designated place, then offering equivalent
+access to copy the source code from the same place counts as
+distribution of the source code, even though third parties are not
+compelled to copy the source along with the object code.
+
+ 4. You may not copy, modify, sublicense, or distribute the Program
+except as expressly provided under this License. Any attempt
+otherwise to copy, modify, sublicense or distribute the Program is
+void, and will automatically terminate your rights under this License.
+However, parties who have received copies, or rights, from you under
+this License will not have their licenses terminated so long as such
+parties remain in full compliance.
+
+ 5. You are not required to accept this License, since you have not
+signed it. However, nothing else grants you permission to modify or
+distribute the Program or its derivative works. These actions are
+prohibited by law if you do not accept this License. Therefore, by
+modifying or distributing the Program (or any work based on the
+Program), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Program or works based on it.
+
+ 6. Each time you redistribute the Program (or any work based on the
+Program), the recipient automatically receives a license from the
+original licensor to copy, distribute or modify the Program subject to
+these terms and conditions. You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties to
+this License.
+
+ 7. If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Program at all. For example, if a patent
+license would not permit royalty-free redistribution of the Program by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Program.
+
+If any portion of this section is held invalid or unenforceable under
+any particular circumstance, the balance of the section is intended to
+apply and the section as a whole is intended to apply in other
+circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system, which is
+implemented by public license practices. Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+
+ 8. If the distribution and/or use of the Program is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Program under this License
+may add an explicit geographical distribution limitation excluding
+those countries, so that distribution is permitted only in or among
+countries not thus excluded. In such case, this License incorporates
+the limitation as if written in the body of this License.
+
+ 9. The Free Software Foundation may publish revised and/or new versions
+of the General Public License from time to time. Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+Each version is given a distinguishing version number. If the Program
+specifies a version number of this License which applies to it and "any
+later version", you have the option of following the terms and conditions
+either of that version or of any later version published by the Free
+Software Foundation. If the Program does not specify a version number of
+this License, you may choose any version ever published by the Free Software
+Foundation.
+
+ 10. If you wish to incorporate parts of the Program into other free
+programs whose distribution conditions are different, write to the author
+to ask for permission. For software which is copyrighted by the Free
+Software Foundation, write to the Free Software Foundation; we sometimes
+make exceptions for this. Our decision will be guided by the two goals
+of preserving the free status of all derivatives of our free software and
+of promoting the sharing and reuse of software generally.
+
+ NO WARRANTY
+
+ 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
+OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
+OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
+TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
+PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
+REPAIR OR CORRECTION.
+
+ 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
+INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
+OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
+TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
+YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
+PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGES.
+
+ END OF TERMS AND CONDITIONS
+
+ How to Apply These Terms to Your New Programs
+
+ If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+ To do so, attach the following notices to the program. It is safest
+to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+ <one line to give the program's name and a brief idea of what it does.>
+ Copyright (C) <year> <name of author>
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+
+Also add information on how to contact you by electronic and paper mail.
+
+If the program is interactive, make it output a short notice like this
+when it starts in an interactive mode:
+
+ Gnomovision version 69, Copyright (C) year name of author
+ Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+ This is free software, and you are welcome to redistribute it
+ under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License. Of course, the commands you use may
+be called something other than `show w' and `show c'; they could even be
+mouse-clicks or menu items--whatever suits your program.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the program, if
+necessary. Here is a sample; alter the names:
+
+ Yoyodyne, Inc., hereby disclaims all copyright interest in the program
+ `Gnomovision' (which makes passes at compilers) written by James Hacker.
+
+ <signature of Ty Coon>, 1 April 1989
+ Ty Coon, President of Vice
+
+This General Public License does not permit incorporating your program into
+proprietary programs. If your program is a subroutine library, you may
+consider it more useful to permit linking proprietary applications with the
+library. If this is what you want to do, use the GNU Lesser General
+Public License instead of this License.
--- /dev/null
+.TH HAPROXY 1 "17 August 2007"
+
+.SH NAME
+
+HAProxy \- fast and reliable HTTP reverse proxy and load balancer
+
+.SH SYNOPSIS
+
+haproxy \-f <configuration\ file> [\-L\ <name>] [\-n\ maxconn] [\-N\ maxconn] [\-C\ <dir>] [\-v|\-vv] [\-d] [\-D] [\-Ds] [\-q] [\-V] [\-c] [\-p\ <pidfile>] [\-dk] [\-ds] [\-de] [\-dp] [\-dS] [\-db] [\-dM[<byte>]] [\-m\ <megs>] [{\-sf|\-st}\ pidlist...]
+
+.SH DESCRIPTION
+
+HAProxy is a TCP/HTTP reverse proxy which is particularly suited for
+high availability environments. Indeed, it can:
+ \- route HTTP requests depending on statically assigned cookies;
+ \- spread the load among several servers while assuring server
+ persistence through the use of HTTP cookies;
+ \- switch to backup servers in the event a main one fails;
+ \- accept connections to special ports dedicated to service
+ monitoring;
+ \- stop accepting connections without breaking existing ones;
+ \- add/modify/delete HTTP headers both ways;
+ \- block requests matching a particular pattern;
+ \- hold clients to the right application server depending on
+ application cookies;
+ \- report detailed status as HTML pages to authenticated users from a
+ URI intercepted from the application.
+
+It needs very few resources. Its event-driven architecture allows it
+to easily handle thousands of simultaneous connections on hundreds of
+instances without risking the system's stability.
+
+.SH OPTIONS
+
+.TP
+\fB\-f <configuration file>\fP
+Specify configuration file path.
+
+.TP
+\fB\-L <name>\fP
+Set the local instance's peer name. Peers are defined in the \fBpeers\fP
+configuration section and used for syncing stick tables between different
+instances. If this option is not specified, the local hostname is used as peer
+name.
+
+.TP
+\fB\-n <maxconn>\fP
+Set the high limit for the total number of simultaneous connections.
+
+.TP
+\fB\-N <maxconn>\fP
+Set the high limit for the per-listener number of simultaneous connections.
+
+.TP
+\fB\-C <dir>\fP
+Change directory to <\fIdir\fP> before loading any files.
+
+.TP
+\fB\-v\fP
+Display HAProxy's version.
+
+.TP
+\fB\-vv\fP
+Display HAProxy's version and all build options.
+
+.TP
+\fB\-d\fP
+Start in foreground with debugging mode enabled.
+When the proxy runs in this mode, it dumps all connections,
+disconnections, timestamps, and HTTP headers to stdout. This should
+NEVER be used in an init script since it will prevent the system from
+starting up.
+
+.TP
+\fB\-D\fP
+Start in daemon mode.
+
+.TP
+\fB\-Ds\fP
+Start in systemd daemon mode, keeping the process in the foreground.
+
+.TP
+\fB\-q\fP
+Disable messages on output.
+
+.TP
+\fB\-V\fP
+Display messages on output even when \-q or 'quiet' is specified. Some
+information about the pollers and the config file is displayed during
+startup.
+
+.TP
+\fB\-c\fP
+Only check the config file and exit with code 0 if no error was found,
+or exit with code 1 if a syntax error was found.
+
+.TP
+\fB\-p <pidfile>\fP
+Ask the process to write down each of its children's PIDs to this file
+in daemon mode.
+
+.TP
+\fB\-dk\fP
+Disable use of \fBkqueue\fP(2). \fBkqueue\fP(2) is available only on BSD systems.
+
+.TP
+\fB\-ds\fP
+Disable use of speculative \fBepoll\fP(7). \fBepoll\fP(7) is available only on
+Linux 2.6 and some custom Linux 2.4 systems.
+
+.TP
+\fB\-de\fP
+Disable use of \fBepoll\fP(7). \fBepoll\fP(7) is available only on Linux 2.6
+and some custom Linux 2.4 systems.
+
+.TP
+\fB\-dp\fP
+Disable use of \fBpoll\fP(2). \fBselect\fP(2) might be used instead.
+
+.TP
+\fB\-dS\fP
+Disable use of \fBsplice\fP(2), which is broken on older kernels.
+
+.TP
+\fB\-db\fP
+Disable background mode (stay in foreground). This option is very useful
+for debugging, as it temporarily disables daemon mode and multi-process
+mode. The service can then be stopped by simply pressing Ctrl-C, without
+having to edit the config nor run in full debug mode.
+
+.TP
+\fB\-dM[<byte>]\fP
+Initialize all allocated memory areas with the given <\fIbyte\fP>. This makes
+it easier to detect bugs resulting from uninitialized memory accesses, at the
+expense of touching all allocated memory once. If <\fIbyte\fP> is not
+specified, it defaults to 0x50 (ASCII 'P').
+
+.TP
+\fB\-m <megs>\fP
+Enforce a memory usage limit to a maximum of <megs> megabytes.
+
+.TP
+\fB\-sf <pidlist>\fP
+Send the FINISH signal to the PIDs in <pidlist> after startup. The
+processes which receive this signal will wait for all sessions to finish
+before exiting. This option must be specified last, followed by any
+number of PIDs. Technically speaking, \fBSIGTTOU\fP and \fBSIGUSR1\fP
+are sent.
+
+.TP
+\fB\-st <pidlist>\fP
+Send the TERMINATE signal to the PIDs in <pidlist> after startup. The
+processes which receive this signal will immediately terminate, closing
+all active sessions. This option must be specified last, followed by any
+number of PIDs. Technically speaking, \fBSIGTTOU\fP and \fBSIGTERM\fP
+are sent.
+
+.SH LOGGING
+Since HAProxy can run inside a chroot, it cannot reliably access /dev/log.
+For this reason, it uses the UDP protocol to send its logs to the server,
+even if it is the local server. People who experience trouble receiving
+logs should ensure that their syslog daemon listens to the UDP socket.
+Several Linux distributions which ship with syslogd from the sysklogd
+package have UDP disabled by default. The \fB\-r\fP option must be passed
+to the daemon in order to enable UDP.
+
+.SH SIGNALS
+Some signals have a special meaning for the haproxy daemon. Generally, they are used between daemons and need not be used by the administrator.
+.TP
+\- \fBSIGUSR1\fP
+Tells the daemon to stop all proxies and exit once all sessions are closed. It is often referred to as the "soft-stop" signal.
+.TP
+\- \fBSIGTTOU\fP
+Tells the daemon to stop listening to all sockets. Used internally by \fB\-sf\fP and \fB\-st\fP.
+.TP
+\- \fBSIGTTIN\fP
+Tells the daemon to restart listening to all sockets after a \fBSIGTTOU\fP. Used internally when there was a problem during hot reconfiguration.
+.TP
+\- \fBSIGINT\fP and \fBSIGTERM\fP
+Both signals can be used to quickly stop the daemon.
+.TP
+\- \fBSIGHUP\fP
+Dumps the status of all proxies and servers into the logs. Mostly used for troubleshooting purposes.
+.TP
+\- \fBSIGQUIT\fP
+Dumps information about memory pools on stderr. Mostly used for debugging purposes.
+.TP
+\- \fBSIGPIPE\fP
+This signal is intercepted and ignored on systems without \fBMSG_NOSIGNAL\fP.
+
+.SH SEE ALSO
+
+Much more complete documentation can be found in configuration.txt. On Debian
+systems, you can find this file in /usr/share/doc/haproxy/configuration.txt.gz.
+
+.SH AUTHOR
+
+HAProxy was written by Willy Tarreau. This man page was written by Arnaud Cornet and Willy Tarreau.
+
--- /dev/null
+2011/12/16 - How ACLs work internally in haproxy - w@1wt.eu
+
+An ACL is declared by the keyword "acl" followed by a name, followed by a
+matching method, followed by one or multiple pattern values :
+
+ acl internal src 127.0.0.0/8 10.0.0.0/8 192.168.0.0/16
+
+In the statement above, "internal" is the ACL's name (acl->name), "src" is the
+ACL keyword defining the matching method (acl_expr->kw) and the IP addresses
+are patterns of type acl_pattern to match against the source address.
+
+The acl_pattern struct may define one single pattern, a range of values or a
+tree of values to match against. The type of the patterns is implied by the
+ACL keyword. For instance, the "src" keyword implies IPv4 patterns.
+
+The line above constitutes an ACL expression (acl_expr). ACL expressions are
+formed of a keyword, an optional argument for the keyword, and a list of
+patterns (in fact, both a list and a root tree).
+
+Dynamic values are extracted according to a fetch function defined by the ACL
+keyword. This fetch function fills or updates a struct acl_test with all the
+extracted information so that a match function can compare it against all the
+patterns. The fetch function is called iteratively by the ACL engine until it
+reports no more value. This makes sense for instance when checking IP addresses
+found in HTTP headers, which can appear multiple times. The acl_test is kept
+intact between calls and even holds a context so that the fetch function knows
+where to start from for subsequent calls. The match function may also use the
+context even though it was not designed for that purpose.
+
+An ACL is defined only by its name and can be a series of ACL expressions. The
+ACL is deemed true when any of its expressions is true. They are evaluated in
+the declared order and can involve multiple matching methods.
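+
+For example (hypothetical names), the following configuration fragment
+declares a single ACL made of two expressions using different matching
+methods; the ACL is true when either expression matches :
+
+    acl allowed src 10.0.0.0/8
+    acl allowed hdr(host) -i intranet.example.com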
+
+So in summary :
+
+ - an ACL is a series of tests to perform on a stream, any of which is enough
+ to validate the result.
+
+ - each test is defined by an expression associating a keyword and a series of
+ patterns.
+
+ - a keyword implies several things at once :
+ - the type of the patterns and how to parse them
+ - the method to fetch the required information from the stream
+ - the method to match the fetched information against the patterns
+
+ - a fetch function fills an acl_test struct which is passed to the match
+ function defined by the keyword
+
+ - the match function tries to match the value in the acl_test against the
+ pattern list declared in the expression which involved its acl_keyword.
+
+
+ACLs are used by conditional processing rules. A rule generally uses an "if" or
+"unless" keyword followed by an ACL condition (acl_cond). This condition is a
+series of term suites which are ORed together. Each term suite is a series of
+terms which are ANDed together. Terms may be negated before being evaluated in
+a suite. A term is simply a pointer to an ACL.
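+
+As an illustration (hypothetical names again), the following rule relies
+on a condition made of two term suites ORed together; the first suite
+ANDs two terms and the second one contains a single term :
+
+    acl internal  src 10.0.0.0/8
+    acl auth_ok   hdr_cnt(Authorization) gt 0
+    acl safe_meth method GET HEAD
+
+    block unless internal auth_ok or safe_meth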
+
+We could then represent a rule by the following BNF :
+
+ rule = if-cond
+ | unless-cond
+
+ if-cond (struct acl_cond with ->pol = ACL_COND_IF)
+ = "if" condition
+
+ unless-cond (struct acl_cond with ->pol = ACL_COND_UNLESS)
+ = "unless" condition
+
+ condition
+    = term-suite
+    | condition "||" term-suite
+    | condition "or" term-suite
+
+ term-suite (struct acl_term_suite)
+    = term
+    | term-suite term
+
+ term = acl
+ | "!" acl
+
--- /dev/null
+2014/04/16 - Pointer assignments during processing of the HTTP body
+
+In HAProxy, a struct http_msg is a descriptor for an HTTP message, which stores
+the state of an HTTP parser at any given instant, relative to a buffer which
+contains part of the message being inspected.
+
+Currently, an http_msg holds a few pointers and offsets to some important
+locations in a message depending on the state the parser is in. Some of these
+pointers and offsets may move when data are inserted into or removed from the
+buffer, others won't move.
+
+An important point is that the state of the parser only translates what the
+parser is reading, and not at all what is being done on the message (eg:
+forwarding).
+
+For an HTTP message <msg> and a buffer <buf>, we have the following elements
+to work with :
+
+
+Buffer :
+--------
+
+buf.size : the allocated size of the buffer. A message cannot be larger than
+ this size. In general, a message will even be smaller because the
+ size is almost always reduced by global.maxrewrite bytes.
+
+buf.data : memory area containing the part of the message being worked on. This
+ area is exactly <buf.size> bytes long. It should be seen as a sliding
+ window over the message, but in terms of implementation, it's closer
+ to a wrapping window. For ease of processing, new messages (requests
+ or responses) are aligned to the beginning of the buffer so that they
+ never wrap and common string processing functions can be used.
+
+buf.p : memory pointer (char *) to the beginning of the buffer as the parser
+ understands it. It commonly refers to the first character of an HTTP
+ request or response, but during forwarding, it can point to other
+ locations. This pointer always points to a location in <buf.data>.
+
+buf.i : number of bytes after <buf.p> that are available in the buffer. If
+ <buf.p + buf.i> exceeds <buf.data + buf.size>, then the pending data
+ wrap at the end of the buffer and continue at <buf.data>.
+
+buf.o : number of bytes already processed before <buf.p> that are pending
+ for departure. These bytes may leave at any instant once a connection
+ is established. These ones may wrap before <buf.data> to start before
+ <buf.data + buf.size>.
+
+It's common to call the part between buf.p and buf.p+buf.i the input buffer, and
+the part between buf.p-buf.o and buf.p the output buffer. This design permits
+efficient forwarding without copies. As a result, forwarding one byte from the
+input buffer to the output buffer only consists in :
+ - incrementing buf.p
+ - incrementing buf.o
+ - decrementing buf.i
+
+
+Message :
+---------
+Unless stated otherwise, all values are relative to <buf.p>, and always lie
+between 0 and <buf.i>. These values are relative offsets and they do
+not need to take wrapping into account, they are used as if the buffer was an
+infinite length sliding window. The buffer management functions handle the
+wrapping automatically.
+
+msg.next : points to the next byte to inspect. This offset is automatically
+ adjusted when inserting/removing some headers. In data states, it is
+ automatically adjusted to the number of bytes already inspected.
+
+msg.sov : start of value. First character of the header's value in the header
+ states, start of the body in the data states. Strictly positive
+ values indicate that headers were not forwarded yet (<buf.p> is
+ before the start of the body), and null or negative values are seen
+ after headers are forwarded (<buf.p> is at or past the start of the
+ body). The value stops changing when data start to leave the buffer
+ (in order to avoid integer overflows). So the maximum possible range
+ is -<buf.size> to +<buf.size>. This offset is automatically adjusted
+ when inserting or removing some headers. It is useful to rewind the
+ request buffer to the beginning of the body at any phase. The
+ response buffer does not really use it since it is immediately
+ forwarded to the client.
+
+msg.sol : start of line. Points to the beginning of the current header line
+ while parsing headers. It is cleared to zero in the BODY state,
+ and contains exactly the number of bytes comprising the preceding
+ chunk size in the DATA state (which can be zero), so that the sum of
+ msg.sov + msg.sol always points to the beginning of data for all
+ states starting with DATA. For chunked encoded messages, this sum
+ always corresponds to the beginning of the current chunk of data as
+ it appears in the buffer, or to be more precise, it corresponds to
+ the first of the remaining bytes of chunked data to be inspected.
+
+msg.eoh : end of headers. Points to the CRLF (or LF) preceding the body and
+ marking the end of headers. It is where new headers are appended.
+ This offset is automatically adjusted when inserting/removing some
+ headers. It always contains the size of the headers excluding the
+ trailing CRLF even after headers have been forwarded.
+
+msg.eol : end of line. Points to the CRLF or LF of the current header line
+ being inspected during the various header states. In data states, it
+ holds the trailing CRLF length (1 or 2) so that msg.eoh + msg.eol
+ always equals the exact header length. It is not affected during data
+ states nor by forwarding.
+
+The beginning of the message headers can always be found this way even after
+headers or data have been forwarded, provided that everything is still present
+in the buffer :
+
+ headers = buf.p + msg->sov - msg->eoh - msg->eol
+
+
+Message length :
+----------------
+msg.chunk_len : amount of bytes of the current chunk or total message body
+ remaining to be inspected after msg.next. It is automatically
+ incremented when parsing a chunk size, and decremented as data
+ are forwarded.
+
+msg.body_len : total message body length, for logging. Equals Content-Length
+ when used, otherwise is the sum of all correctly parsed chunks.
+
+
+Message state :
+---------------
+msg.msg_state contains the current parser state, one of HTTP_MSG_*. The state
+indicates what byte is expected at msg->next.
+
+HTTP_MSG_BODY : all headers have been parsed, parsing of body has not
+ started yet.
+
+HTTP_MSG_100_SENT : parsing of body has started. If a 100-Continue was needed
+ it has already been sent.
+
+HTTP_MSG_DATA : some bytes are remaining for either the whole body when
+ the message size is determined by Content-Length, or for
+ the current chunk in chunked-encoded mode.
+
+HTTP_MSG_CHUNK_CRLF : msg->next points to the CRLF after the current data chunk.
+
+HTTP_MSG_TRAILERS : msg->next points to the beginning of a possibly empty
+ trailer line after the final empty chunk.
+
+HTTP_MSG_DONE : all the Content-Length data has been inspected, or the
+ final CRLF after trailers has been met.
+
+
+Message forwarding :
+--------------------
+Forwarding part of a message consists in advancing buf.p up to the point where
+it points to the byte following the last one to be forwarded. This can be done
+inline if enough bytes are present in the buffer, or in multiple steps if more
+buffers need to be forwarded (possibly including splicing). Thus by definition,
+after a block has been scheduled for being forwarded, msg->next and msg->sov
+must be reset.
+
+The communication channel between the producer and the consumer holds a counter
+of extra bytes remaining to be forwarded directly without consulting analysers,
+after buf.p. This counter is called to_forward. It commonly holds the advertised
+chunk length or content-length that does not fit in the buffer. For example, if
+2000 bytes are to be forwarded, and 10 bytes are present after buf.p as reported
+by buf.i, then both buf.o and buf.p will advance by 10, buf.i will be reset, and
+to_forward will be set to 1990 so that in total, 2000 bytes will be forwarded.
+At the end of the forwarding, buf.p will point to the first byte to be inspected
+after the 2000 forwarded bytes.
--- /dev/null
+2012/02/27 - Operations on haproxy buffers - w@1wt.eu
+
+
+1) Definitions
+--------------
+
+A buffer is a unidirectional storage between two stream interfaces which are
+most often composed of a socket file descriptor. This storage is fixed-size
+and circular, which means that once data reach the end of the buffer, they
+loop back to the beginning of the buffer :
+
+
+ Representation of a non-wrapping buffer
+ ---------------------------------------
+
+
+ beginning end
+ | -------- length --------> |
+ V V
+ +-------------------------------------------+
+ | <--------------- size ----------------> |
+ +-------------------------------------------+
+
+
+ Representation of a wrapping buffer
+ -----------------------------------
+
+ end beginning
+ +------> | | -------------+
+ | V V |
+ | +-------------------------------------------+ |
+ | | <--------------- size ----------------> | |
+ | +-------------------------------------------+ |
+ | |
+ +--------------------- length -----------------------+
+
+
+Buffers are read by two entities :
+ - stream interfaces
+ - analysers
+
+Buffers are filled by two entities :
+ - stream interfaces
+ - hijackers
+
+A stream interface writes at the input of a buffer and reads at its output. An
+analyser has to parse incoming buffer contents, so it reads the input. It does
+not really write the output though it may change the buffer's contents at the
+input, possibly causing data moves. A hijacker is able to write at the
+output of a buffer. Hijackers are not used anymore at the moment, though
+error outputs still work the same way.
+
+Buffers are referenced in the session. Each session has two buffers which
+interconnect the two stream interfaces. One buffer is called the request
+buffer, it sees traffic flowing from the client to the server. The other buffer
+is the response buffer, it sees traffic flowing from the server to the client.
+
+By convention, sessions are represented as 2 buffers one on top of the other,
+and with 2 stream interfaces connected to the two buffers. The client connects
+to the left stream interface (which then acts as a server), and the right
+stream interface (which acts as a client) connects to the server. The data
+circulate clockwise, so the upper buffer is the request buffer and the lower
+buffer is the response buffer :
+
+ ,------------------------.
+ ,-----> | request buffer | ------.
+ from ,--./ `------------------------' \,--. to
+ client ( ) ( ) server
+ `--' ,------------------------. /`--'
+ ^------- | response buffer | <-----'
+ `------------------------'
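The convention above suggests a layout along these lines; the types and field names here are placeholders for illustration, not HAProxy's real definitions:

```c
/* Illustrative sketch of a session owning the two buffers and the two
 * stream interfaces pictured above. */
struct buffer { char *data; };
struct stream_interface { int fd; };

struct session {
    struct buffer req;              /* client -> server, upper buffer */
    struct buffer rep;              /* server -> client, lower buffer */
    struct stream_interface si[2];  /* si[0]: client side, si[1]: server side */
};
```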
+
+2) Operations
+-------------
+
+Socket-based stream interfaces write to buffers directly from the I/O layer
+without relying on any specific function.
+
+Function-based stream interfaces do use a number of non-uniform functions to
+read from the buffer's output and to write to the buffer's input. More suited
+names could be :
+
+ int buffer_output_peek_at(buf, ofs, ptr, size);
+ int buffer_output_peek(buf, ptr, size);
+ int buffer_output_read(buf, ptr, size);
+ int buffer_output_skip(buf, size);
+ int buffer_input_write(buf, ptr, size);
+
+Right now some stream interfaces use the following functions, which also
+happen to automatically schedule the response for forwarding :
+
+ buffer_put_block() [peers]
+ buffer_put_chunk() -> buffer_put_block()
+ buffer_feed_chunk() -> buffer_put_chunk() -> buffer_put_block() [dumpstats]
+ buffer_feed() -> buffer_put_string() -> buffer_put_block() [dumpstats]
+
+
+The following stream-interface oriented functions are not used :
+
+ buffer_get_char()
+ buffer_write_chunk()
+
+
+Analysers read data from the buffers' input, and may sometimes write data
+there too (or trim data). More suited names could be :
+
+ int buffer_input_peek_at(buf, ofs, ptr, size);
+ int buffer_input_truncate_at(buf, ofs);
+ int buffer_input_peek(buf, ptr, size);
+ int buffer_input_read(buf, ptr, size);
+ int buffer_input_skip(buf, size);
+ int buffer_input_cut(buf, size);
+ int buffer_input_truncate(buf);
+
+
+Functions that are available and need to be renamed :
+ - buffer_skip : buffer_output_skip
+ - buffer_ignore : buffer_input_skip ? => not exactly, more like
+ buffer_output_skip() without affecting sendmax !
+ - buffer_cut_tail : deletes all pending data after sendmax.
+ -> buffer_input_truncate(). Used by si_retnclose() only.
+ - buffer_contig_data : buffer_output_contig_data
+ - buffer_pending : buffer_input_pending_data
+ - buffer_contig_space : buffer_input_contig_space
+
+
+It looks like buf->lr could be removed and be stored in the HTTP message struct
+since it's only used at the HTTP level.
--- /dev/null
+#FIG 3.2
+Portrait
+Center
+Metric
+A4
+100.00
+Single
+-2
+1200 2
+5 1 0 1 0 7 50 -1 -1 0.000 0 1 1 1 3133.105 2868.088 2385 3465 3150 3825 3915 3420
+ 0 0 1.00 60.00 120.00
+ 0 0 1.00 60.00 120.00
+5 1 0 1 0 7 50 -1 -1 0.000 0 0 1 1 3134.312 2832.717 2340 3420 3150 1845 3960 3375
+ 0 0 1.00 60.00 120.00
+ 0 0 1.00 60.00 120.00
+5 1 1 1 0 7 50 -1 -1 3.000 0 0 0 0 3150.000 2848.393 2115 3510 3150 1620 4185 3510
+5 1 0 1 0 7 50 -1 -1 0.000 0 1 1 1 3133.105 6423.088 2385 7020 3150 7380 3915 6975
+ 0 0 1.00 60.00 120.00
+ 0 0 1.00 60.00 120.00
+5 1 0 1 0 7 50 -1 -1 0.000 0 0 1 1 3134.312 6387.717 2340 6975 3150 5400 3960 6930
+ 0 0 1.00 60.00 120.00
+ 0 0 1.00 60.00 120.00
+5 1 1 1 0 7 50 -1 -1 3.000 0 0 0 0 3150.000 6403.393 2115 7065 3150 5175 4185 7065
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 3150 2835 1126 1126 3150 2835 3195 3960
+1 3 0 1 0 7 51 -1 -1 0.000 1 0.0000 3150 2835 1350 1350 3150 2835 4500 2835
+1 3 0 1 0 7 50 -1 -1 0.000 1 0.0000 3150 6390 1126 1126 3150 6390 3195 7515
+1 3 0 1 0 7 51 -1 -1 0.000 1 0.0000 3150 6390 1350 1350 3150 6390 4500 6390
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 3150 3960 3150 4185
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 4050 3510 4230 3690
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2250 3510 2070 3690
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 4410 3285 4455 3150
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 4500 2655 4455 2475
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 4185 1980 4050 1845
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 3645 1575 3510 1530
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 2295 1800 2160 1890
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 1980 2160 1935 2250
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 1800 2655 1800 2790
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 1800 3105 1845 3240
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 2877 1519 2697 1564
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 3150 7515 3150 7740
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 4050 7065 4230 7245
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2250 7065 2070 7245
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 4410 6840 4455 6705
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 4500 6210 4455 6030
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 4185 5535 4050 5400
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 3645 5130 3510 5085
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 2295 5355 2160 5445
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 1980 5715 1935 5805
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 1800 6210 1800 6345
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 1800 6660 1845 6795
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 2877 5074 2697 5119
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 4
+ 1 1 1.00 60.00 120.00
+ 4950 3510 4635 3690 4545 3375 4185 3600
+ 0.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 4
+ 1 1 1.00 60.00 120.00
+ 2115 3600 1800 3330 1305 3285 1260 3780
+ 0.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 3
+ 1 1 1.00 60.00 120.00
+ 4635 2205 4545 1890 4185 2115
+ 0.000 1.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 4
+ 1 1 1.00 60.00 120.00
+ 4950 7065 4635 7245 4545 6930 4185 7155
+ 0.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 4
+ 1 1 1.00 60.00 120.00
+ 2115 7155 1800 6885 1305 6840 1260 7335
+ 0.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 3
+ 1 1 1.00 60.00 120.00
+ 4635 5760 4545 5445 4185 5670
+ 0.000 1.000 0.000
+4 1 0 50 -1 12 8 0.0000 4 75 75 3150 3690 l\001
+4 1 0 50 -1 12 8 0.0000 4 75 450 3150 2025 size-l\001
+4 1 0 50 -1 12 8 5.4978 4 45 75 1935 3780 w\001
+4 1 0 50 -1 12 8 0.7854 4 45 75 4365 3825 r\001
+4 1 0 50 -1 12 8 0.0000 4 90 300 3150 4365 (lr)\001
+4 1 0 50 -1 14 10 5.7596 4 90 270 2520 3960 OUT\001
+4 1 0 50 -1 12 8 5.7596 4 75 525 2430 4140 sendmax\001
+4 1 0 50 -1 14 10 0.5236 4 90 180 3690 4005 IN\001
+4 1 0 50 -1 12 8 0.5236 4 75 675 3870 4185 l-sendmax\001
+4 0 0 50 -1 12 8 0.0000 4 90 750 4545 2340 free space\001
+4 0 0 50 -1 12 8 0.0000 4 90 975 4950 3555 [eg: recv()]\001
+4 1 0 50 -1 12 8 0.0000 4 90 900 1260 4095 [eg: send()]\001
+4 1 0 50 -1 16 12 0.0000 4 165 2370 3150 855 Principle of the circular buffer\001
+4 1 0 50 -1 12 8 0.0000 4 90 600 1260 3960 buffer_*\001
+4 0 0 50 -1 12 8 0.0000 4 90 600 4950 3420 buffer_*\001
+4 1 0 50 -1 16 12 0.0000 4 165 1605 3150 1125 Current (since v1.3)\001
+4 0 0 50 -1 12 8 0.0000 4 90 1050 4950 6975 buffer_input_*\001
+4 1 0 50 -1 14 10 5.7596 4 90 270 2520 7515 OUT\001
+4 1 0 50 -1 14 10 0.5236 4 90 180 3690 7560 IN\001
+4 1 0 50 -1 12 8 0.0000 4 90 1125 1260 7515 buffer_output_*\001
+4 0 0 50 -1 12 8 0.0000 4 90 750 4545 5895 free space\001
+4 0 0 50 -1 12 8 0.0000 4 90 975 4950 7110 [eg: recv()]\001
+4 1 0 50 -1 12 8 0.0000 4 90 900 1260 7650 [eg: send()]\001
+4 0 0 50 -1 0 10 0.0000 4 135 1860 6075 1755 Some http_msg fields point to\001
+4 0 0 50 -1 0 10 0.0000 4 120 2175 6075 1950 absolute locations within the buffer,\001
+4 0 0 50 -1 0 10 0.0000 4 135 2040 6075 2145 making realignments quite tricky.\001
+4 0 0 50 -1 0 10 0.0000 4 135 1890 6075 5400 http_msg owns a pointer to the\001
+4 0 0 50 -1 0 10 0.0000 4 135 2055 6075 5595 struct_buffer and only uses offsets\001
+4 0 0 50 -1 0 10 0.0000 4 135 1095 6075 5790 relative to buf->p.\001
+4 1 0 50 -1 12 8 0.0000 4 75 600 3150 5760 size-i-o\001
+4 1 0 50 -1 12 8 0.0000 4 75 225 3150 7200 o+i\001
+4 1 0 50 -1 12 8 5.7596 4 45 75 2430 7695 o\001
+4 1 0 50 -1 12 8 0.5236 4 75 75 3870 7740 i\001
+4 1 0 50 -1 16 12 0.0000 4 180 1965 3150 4905 New design (1.5-dev9+)\001
+4 1 0 50 -1 12 8 0.0000 4 60 75 3150 7920 p\001
--- /dev/null
+Normally, we should use getsockopt(fd, SOL_SOCKET, SO_ERROR) on a pending
+connect() to detect whether the connection was correctly established or not.
+
+Unfortunately, getsockopt() does not report the status of a pending
+connection : it returns 0 while the connection is still pending. This has to
+be expected because, as the name implies, it only reports errors.
+
+Speculative I/O introduced a new problem : if we pretend the socket was
+reported as ready and we go to the socket's write() function, a pending
+connection will then inevitably be identified as established.
+
+In fact, there are solutions to this issue :
+
+  - send() returns -1 with errno set to EAGAIN if it cannot write, so as
+    long as there are pending data in the buffer, we'll be informed about
+    the status of the connection
+
+ - connect() on an already pending connection will return -1 with errno set to
+ one of the following values :
+ - EALREADY : connection already in progress
+ - EISCONN : connection already established
+ - anything else will indicate an error.
+
+=> So instead of using getsockopt() on a pending connection with no data, we
+ will switch to connect(). This implies that the connection address must be
+ known within the socket's write() function.
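The switch described above can be sketched as follows; the function names and return values are illustrative, only the errno handling reflects the technique from the text:

```c
#include <errno.h>
#include <sys/socket.h>

/* Classify the result of a connect() retry on a possibly-pending socket:
 * 1 = established, 0 = still pending, -1 = real error. */
static int classify_connect(int ret, int err)
{
    if (ret == 0 || err == EISCONN)
        return 1;                       /* connection established */
    if (err == EALREADY || err == EINPROGRESS)
        return 0;                       /* connection still in progress */
    return -1;                          /* real error (ECONNREFUSED, ...) */
}

/* Probe a pending connection by calling connect() again instead of
 * getsockopt(SO_ERROR); this is why the address must be known here. */
static int probe_connect(int fd, const struct sockaddr *addr, socklen_t len)
{
    int ret = connect(fd, addr, len);

    return classify_connect(ret, ret ? errno : 0);
}
```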
+
+
--- /dev/null
+2010/01/16 - Connection header adjustments depending on the transaction mode.
+
+
+HTTP transactions support 5 possible modes :
+
+ WANT_TUN : default, nothing changed
+ WANT_TUN + httpclose : headers set for close in both dirs
+ WANT_KAL : keep-alive desired in both dirs
+ WANT_SCL : want close with the server and KA with the client
+ WANT_CLO : want close on both sides.
+
+When only WANT_TUN is set, nothing is changed nor analysed, so for
+convenience below, we'll refer to WANT_TUN+httpclose as WANT_TUN.
+
+The mode is adjusted in 3 steps :
+ - configuration sets initial mode
+ - request headers set required request mode
+ - response headers set the final mode
+
+
+1) Adjusting the initial mode via the configuration
+
+ option httpclose => TUN
+ option http-keep-alive => KAL
+ option http-server-close => SCL
+ option forceclose => CLO
+
+Note that option httpclose combined with any other option is equivalent to
+forceclose.
+
+
+2) Adjusting the request mode once the request is parsed
+
+If we cannot determine the body length from the headers, we set the mode to CLO
+but later we'll switch to tunnel mode once forwarding the body. That way, all
+parties are informed of the correct mode.
+
+Depending on the request version and request Connection header, we may have to
+adjust the current transaction mode and to update the connection header.
+
+mode req_ver req_hdr new_mode hdr_change
+TUN 1.0 - TUN -
+TUN 1.0 ka TUN del_ka
+TUN 1.0 close TUN del_close
+TUN 1.0 both TUN del_ka, del_close
+
+TUN 1.1 - TUN add_close
+TUN 1.1 ka TUN del_ka, add_close
+TUN 1.1 close TUN -
+TUN 1.1 both TUN del_ka
+
+KAL 1.0 - CLO -
+KAL 1.0 ka KAL -
+KAL 1.0 close CLO del_close
+KAL 1.0 both CLO del_ka, del_close
+
+KAL 1.1 - KAL -
+KAL 1.1 ka KAL del_ka
+KAL 1.1 close CLO -
+KAL 1.1 both CLO del_ka
+
+SCL 1.0 - CLO -
+SCL 1.0 ka SCL del_ka
+SCL 1.0 close CLO del_close
+SCL 1.0 both CLO del_ka, del_close
+
+SCL 1.1 - SCL add_close
+SCL 1.1 ka SCL del_ka, add_close
+SCL 1.1 close CLO -
+SCL 1.1 both CLO del_ka
+
+CLO 1.0 - CLO -
+CLO 1.0 ka CLO del_ka
+CLO 1.0 close CLO del_close
+CLO 1.0 both CLO del_ka, del_close
+
+CLO 1.1 - CLO add_close
+CLO 1.1 ka CLO del_ka, add_close
+CLO 1.1 close CLO -
+CLO 1.1 both CLO del_ka
+
+=> Summary:
+ - KAL and SCL are only possible with the same requests :
+ - 1.0 + ka
+ - 1.1 + ka or nothing
+
+ - CLO is assumed for any non-TUN request which contains at least a close
+ header, as well as for any 1.0 request without a keep-alive header.
+
+ - del_ka is set whenever we want a CLO or SCL or TUN and req contains a KA,
+ or when the req is 1.1 and contains a KA.
+
+ - del_close is set whenever a 1.0 request contains a close.
+
+ - add_close is set whenever a 1.1 request must be switched to TUN, SCL, CLO
+ and did not have a close hdr.
+
+Note that the request processing is performed in two passes, one with the
+frontend's config and a second one with the backend's config. It is only
+possible to "raise" the mode between them, so during the second pass, we have
+no reason to re-add a header that we previously removed. As an exception, the
+TUN mode is converted to CLO once combined because in fact it's an httpclose
+option set on a TUN mode connection :
+
+ BE (2)
+ | TUN KAL SCL CLO
+ ----+----+----+----+----
+ TUN | TUN CLO CLO CLO
+ +
+ KAL | CLO KAL SCL CLO
+ FE +
+ (1) SCL | CLO SCL SCL CLO
+ +
+ CLO | CLO CLO CLO CLO
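The combination matrix above translates directly into a lookup table; the enum names mirror the WANT_* modes in the text, the C form itself is just an illustration:

```c
/* Combined transaction mode after the two configuration passes:
 * mode_combine[frontend mode][backend mode]. */
enum http_mode { TUN, KAL, SCL, CLO };

static const enum http_mode mode_combine[4][4] = {
    /* BE:        TUN  KAL  SCL  CLO */
    /* FE TUN */ { TUN, CLO, CLO, CLO },
    /* FE KAL */ { CLO, KAL, SCL, CLO },
    /* FE SCL */ { CLO, SCL, SCL, CLO },
    /* FE CLO */ { CLO, CLO, CLO, CLO },
};
```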
+
+
+3) Adjusting the final mode once the response is parsed
+
+This part becomes trickier. It is possible that the server responds with a
+version that the client does not necessarily understand. Obviously, 1.1 clients
+are assumed to understand 1.0 responses. The problematic case is a 1.0 client
+receiving a 1.1 response without any Connection header. Some 1.0 clients might
+know that in 1.1 this means "keep-alive" while others might ignore the version
+and assume a "close". Since we know the version on both sides, we may have to
+adjust some responses to remove any ambiguous case. That's the reason why the
+following table considers both the request and the response version. If the
+response length cannot be determined, we switch to CLO mode.
+
+mode res_ver res_hdr req_ver new_mode hdr_change
+TUN 1.0 - any TUN -
+TUN 1.0 ka any TUN del_ka
+TUN 1.0 close any TUN del_close
+TUN 1.0 both any TUN del_ka, del_close
+
+TUN 1.1 - any TUN add_close
+TUN 1.1 ka any TUN del_ka, add_close
+TUN 1.1 close any TUN -
+TUN 1.1 both any TUN del_ka
+
+KAL 1.0 - any SCL add_ka
+KAL 1.0 ka any KAL -
+KAL 1.0 close any SCL del_close, add_ka
+KAL 1.0 both any SCL del_close
+
+KAL 1.1 - 1.0 KAL add_ka
+KAL 1.1 - 1.1 KAL -
+KAL 1.1 ka 1.0 KAL -
+KAL 1.1 ka 1.1 KAL del_ka
+KAL 1.1 close 1.0 SCL del_close, add_ka
+KAL 1.1 close 1.1 SCL del_close
+KAL 1.1 both 1.0 SCL del_close
+KAL 1.1 both 1.1 SCL del_ka, del_close
+
+SCL 1.0 - any SCL add_ka
+SCL 1.0 ka any SCL -
+SCL 1.0 close any SCL del_close, add_ka
+SCL 1.0 both any SCL del_close
+
+SCL 1.1 - 1.0 SCL add_ka
+SCL 1.1 - 1.1 SCL -
+SCL 1.1 ka 1.0 SCL -
+SCL 1.1 ka 1.1 SCL del_ka
+SCL 1.1 close 1.0 SCL del_close, add_ka
+SCL 1.1 close 1.1 SCL del_close
+SCL 1.1 both 1.0 SCL del_close
+SCL 1.1 both 1.1 SCL del_ka, del_close
+
+CLO 1.0 - any CLO -
+CLO 1.0 ka any CLO del_ka
+CLO 1.0 close any CLO del_close
+CLO 1.0 both any CLO del_ka, del_close
+
+CLO 1.1 - any CLO add_close
+CLO 1.1 ka any CLO del_ka, add_close
+CLO 1.1 close any CLO -
+CLO 1.1 both any CLO del_ka
+
+=> in summary :
+ - the header operations do not depend on the initial mode, they only depend
+ on versions and current connection header(s).
+
+ - both CLO and TUN modes work similarly, they need to set a close mode on the
+    response. A 1.1 response will exclusively need the close header, while a 1.0
+ response will have it removed. Any keep-alive header is always removed when
+ found.
+
+ - a KAL request where the server wants to close turns into an SCL response so
+ that we release the server but still maintain the connection to the client.
+
+ - the KAL and SCL modes work the same way as we need to set keep-alive on the
+ response. So a 1.0 response will only have the keep-alive header with any
+ close header removed. A 1.1 response will have the keep-alive header added
+ for 1.0 requests and the close header removed for all requests.
+
+Note that the SCL and CLO modes will automatically cause the server connection
+to be closed at the end of the data transfer.
--- /dev/null
+The problem of concurrent connections with a backend
+
+For each server, 3 possible cases :
+
+  - no limit (default)
+  - static limit (maxconn)
+  - dynamic limit (maxconn/(ratio of px->conn), with minconn)
+
+So in the dynamic case we need a limit on the proxy, in order to set a
+threshold and a ratio. What matters is the point past which we switch
+from a linear regime to a saturated regime.
+
+We thus have 3 phases :
+
+  - minimal regime (0..srv->minconn)
+  - linear regime (srv->minconn..srv->maxconn)
+  - saturated regime (srv->maxconn..)
+
+Could minconn also come from the server ?
+In practice, we want :
+  - a per-server max
+  - a global threshold at which the servers apply the max
+  - a minimal threshold below which the number of conns is
+    maintained. This limit makes sense per server (never fewer than X conns)
+    but also globally (no point doing dynamic limiting below X conns to
+    distribute). The difficulty globally is to know how to compute the min
+    associated with each server, since it is a ratio defined from the max.
+
+This amounts to roughly the same thing as having 2 states :
+
+  - linear regime with an offset (srv->minconn..srv->maxconn)
+  - saturated regime (srv->maxconn..)
+
+Except that in this case, the min and the max really are per server, and the
+threshold is global and corresponds to the connection limit beyond which we
+want to run at full capacity on all the servers. So we can speak of switching
+to "full", "saturated" or "optimal" mode, or of the end of the "scalable" or
+"dynamic" part.
+
+=> "fullconn 1000" for example ?
+
+
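The three regimes above can be sketched as a small helper computing the effective per-server limit; the names (minconn, maxconn, fullconn) follow the text, but the function is only an illustration and the formula eventually retained may differ:

```c
/* Illustrative only: effective per-server connection limit under the
 * three regimes (minimal, linear, saturated) given the proxy's current
 * total connection count. */
static unsigned int dyn_maxconn(unsigned int minconn, unsigned int maxconn,
                                unsigned int px_conns, unsigned int fullconn)
{
    unsigned int limit;

    if (px_conns >= fullconn)
        return maxconn;                        /* saturated regime */

    limit = maxconn * px_conns / fullconn;     /* linear regime */
    if (limit < minconn)
        limit = minconn;                       /* minimal regime floor */
    return limit;
}
```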
--- /dev/null
+An FD has a state :
+ - CLOSED
+ - READY
+ - ERROR (?)
+ - LISTEN (?)
+
+A connection has a state :
+ - CLOSED
+ - ACCEPTED
+ - CONNECTING
+ - ESTABLISHED
+ - ERROR
+
+A stream interface has a state :
+ - INI, REQ, QUE, TAR, ASS, CON, CER, EST, DIS, CLO
+
+Note that CON and CER might be replaced by EST if the connection state is used
+instead. CON might even be more suited than EST to indicate that a connection
+is known.
+
+
+si_shutw() must do :
+
+ data_shutw()
+ if (shutr) {
+ data_close()
+ ctrl_shutw()
+ ctrl_close()
+ }
+
+si_shutr() must do :
+ data_shutr()
+ if (shutw) {
+ data_close()
+ ctrl_shutr()
+ ctrl_close()
+ }
+
+Each of these steps may fail, in which case the step must be retained and the
+operations postponed in an asynchronous task.
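The si_shutw() sequence above can be sketched over a set of connection flags; the struct and flag names here are illustrative stand-ins for the real data/ctrl layer calls:

```c
/* Per-connection close status, one bit per completed operation. */
struct conn {
    unsigned data_shutr:1, data_shutw:1, data_closed:1;
    unsigned ctrl_shutr:1, ctrl_shutw:1, ctrl_closed:1;
};

/* Mirror of the si_shutw() steps: shut the data layer's write side,
 * and tear everything down once both directions are shut. */
static void si_shutw(struct conn *c)
{
    c->data_shutw = 1;                 /* data_shutw() */
    if (c->data_shutr) {               /* other direction already shut */
        c->data_closed = 1;            /* data_close() */
        c->ctrl_shutw  = 1;            /* ctrl_shutw() */
        c->ctrl_closed = 1;            /* ctrl_close() */
    }
}
```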
+
+The first asynchronous data_shut() might already fail, so it is mandatory to
+save the other side's status with the connection in order to let the async task
+know whether the next 3 steps must be performed.
+
+The connection (or perhaps the FD) needs to know :
+ - the desired close operations : DSHR, DSHW, CSHR, CSHW
+ - the completed close operations : DSHR, DSHW, CSHR, CSHW
+
+
+On the accept() side, we probably need to know :
+ - if a header is expected (eg: accept-proxy)
+ - if this header is still being waited for
+ => maybe both info might be combined into one bit
+
+ - if a data-layer accept() is expected
+ - if a data-layer accept() has been started
+ - if a data-layer accept() has been performed
+ => possibly 2 bits, to indicate the need to free()
+
+On the connect() side, we need to know :
+ - the desire to send a header (eg: send-proxy)
+ - if this header has been sent
+ => maybe both info might be combined
+
+ - if a data-layer connect() is expected
+ - if a data-layer connect() has been started
+ - if a data-layer connect() has been completed
+ => possibly 2 bits, to indicate the need to free()
+
+On the response side, we also need to know :
+ - the desire to send a header (eg: health check response for monitor-net)
+ - if this header was sent
+ => might be the same as sending a header over a new connection
+
+Note: monitor-net has precedence over proxy proto and data layers. Same for
+ health mode.
+
+For multi-step operations, use 2 bits :
+ 00 = operation not desired, not performed
+ 10 = operation desired, not started
+ 11 = operation desired, started but not completed
+ 01 = operation desired, started and completed
+
+ => X != 00 ==> operation desired
+ X & 01 ==> operation at least started
+ X & 10 ==> operation not completed
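The 2-bit encoding above can be written as a few macros; the names are illustrative, only the bit layout comes from the text:

```c
/* 2-bit operation state: low bit = started, high bit = not completed. */
#define OP_NONE  0x0  /* not desired, not performed       (00) */
#define OP_WANT  0x2  /* desired, not started             (10) */
#define OP_RUN   0x3  /* desired, started, not completed  (11) */
#define OP_DONE  0x1  /* desired, started and completed   (01) */

#define op_desired(x)  ((x) != OP_NONE)  /* X != 00 */
#define op_started(x)  ((x) & 0x1)       /* X & 01 */
#define op_pending(x)  ((x) & 0x2)       /* X & 10 : not completed */
```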
+
+Note: no way to store status information for error reporting.
+
+Note2: it would be nice if "tcp-request connection" rules could work at the
+connection level, just after headers ! This means support for tracking stick
+tables, possibly not too complicated.
+
+
+Proposal for incoming connection sequence :
+
+- accept()
+- if monitor-net matches or if mode health => try to send response
+- if accept-proxy, wait for proxy request
+- if tcp-request connection, process tcp rules and possibly keep the
+ pointer to stick-table
+- if SSL is enabled, switch to SSL handshake
+- then switch to DATA state and instantiate a session
+
+We just need a map of handshake handlers on the connection. They all manage the
+FD status themselves and set the callbacks themselves. If their work succeeds,
+they remove themselves from the list. If it fails, they remain subscribed and
+enable the required polling until they are woken up again or the timeout strikes.
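The remove-on-success behaviour described above amounts to the following list walk; the struct and its "done" flag are illustrative stand-ins for real handler callbacks:

```c
/* One subscribed handshake handler in the connection's chain. */
struct hshake {
    int done;              /* would be the handler's success status */
    struct hshake *next;
};

/* Walk the chain, unlinking every handler whose work has succeeded;
 * the others remain subscribed for the next wakeup. */
static void run_handshakes(struct hshake **head)
{
    while (*head) {
        if ((*head)->done)
            *head = (*head)->next;   /* success: remove from the list */
        else
            head = &(*head)->next;   /* pending: keep it subscribed */
    }
}
```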
+
+Identified handshake handlers for incoming connections :
+ - HH_HEALTH (tries to send OK and dies)
+ - HH_MONITOR_IN (matches src IP and adds/removes HH_SEND_OK/HH_SEND_HTTP_OK)
+ - HH_SEND_OK (tries to send "OK" and dies)
+ - HH_SEND_HTTP_OK (tries to send "HTTP/1.0 200 OK" and dies)
+ - HH_ACCEPT_PROXY (waits for PROXY line and parses it)
+ - HH_TCP_RULES (processes TCP rules)
+ - HH_SSL_HS (starts SSL handshake)
+ - HH_ACCEPT_SESSION (instantiates a session)
+
+Identified handshake handlers for outgoing connections :
+ - HH_SEND_PROXY (tries to build and send the PROXY line)
+ - HH_SSL_HS (starts SSL handshake)
+
+For the pollers, we could check that the handshake handler list is not empty
+and decide to call a generic connection handshake handler instead of the usual
+callbacks. The problem is that pollers don't know connections, they know fds,
+so the entities which manage the handlers should update the FD callbacks
+accordingly.
+
+With a bit of care, we could have :
+ - HH_SEND_LAST_CHUNK (sends the chunk pointed to by a pointer and dies)
+ => merges HEALTH, SEND_OK and SEND_HTTP_OK
+
+It sounds like the ctrl vs data states for the connection are per-direction
+(eg: support an async ctrl shutw while still reading data).
+
+Also support shutr/shutw status at L4/L7.
+
+In practice, what we really need is :
+
+shutdown(conn) =
+ conn.data.shut()
+ conn.ctrl.shut()
+ conn.fd.shut()
+
+close(conn) =
+ conn.data.close()
+ conn.ctrl.close()
+ conn.fd.close()
+
+With SSL over Remote TCP (RTCP + RSSL) to reach the server, we would have :
+
+ HTTP -> RTCP+RSSL connection <-> RTCP+RRAW connection -> TCP+SSL connection
+
+The connection has to be closed at 3 places after a successful response :
+ - DATA (RSSL over RTCP)
+ - CTRL (RTCP to close connection to server)
+ - SOCK (FD to close connection to second process)
+
+Externally, the connection is seen with very few flags :
+ - SHR
+ - SHW
+ - ERR
+
+We don't need a CLOSED flag as a connection must always be detached when it's closed.
+
+The internal status doesn't need to be exposed :
+ - FD allocated (Y/N)
+ - CTRL initialized (Y/N)
+ - CTRL connected (Y/N)
+ - CTRL handlers done (Y/N)
+ - CTRL failed (Y/N)
+ - CTRL shutr (Y/N)
+ - CTRL shutw (Y/N)
+ - DATA initialized (Y/N)
+ - DATA connected (Y/N)
+ - DATA handlers done (Y/N)
+ - DATA failed (Y/N)
+ - DATA shutr (Y/N)
+ - DATA shutw (Y/N)
+
+(note that having flags for operations needing to be completed might be easier)
+--------------
+
+Maybe we need to be able to call conn->fdset() and conn->fdclr() but it sounds
+very unlikely since the only functions manipulating this are in the code of
+the data/ctrl handlers.
+
+FDSET/FDCLR cannot be directly controlled by the stream interface since it
+also depends on the DATA layer (WANT_READ/WANT_WRITE).
+
+But FDSET/FDCLR is probably controlled by who owns the connection (eg: DATA).
+
--- /dev/null
+#FIG 3.2
+Portrait
+Center
+Metric
+A4
+100.00
+Single
+-2
+1200 2
+6 2475 3240 3825 3690
+2 2 0 1 0 7 53 -1 20 0.000 0 0 -1 0 0 5
+ 2475 3240 3825 3240 3825 3690 2475 3690 2475 3240
+4 1 0 50 -1 0 16 0.0000 4 165 510 3195 3510 stkctr\001
+-6
+6 4050 3195 5400 3690
+2 2 0 1 0 30 53 -1 20 0.000 0 0 -1 0 0 5
+ 4050 3239 5400 3239 5400 3689 4050 3689 4050 3239
+4 1 0 50 -1 0 16 0.0000 4 225 390 4770 3509 logs\001
+-6
+6 11250 3195 12600 3690
+2 2 0 1 0 7 53 -1 20 0.000 0 0 -1 0 0 5
+ 11250 3239 12600 3239 12600 3689 11250 3689 11250 3239
+4 1 0 50 -1 0 16 0.0000 4 195 525 11970 3509 target\001
+-6
+6 9720 3240 11070 3690
+2 2 0 1 0 7 53 -1 20 0.000 0 0 -1 0 0 5
+ 9720 3240 11070 3240 11070 3690 9720 3690 9720 3240
+4 1 0 50 -1 0 16 0.0000 4 135 450 10440 3510 store\001
+-6
+6 14265 5130 14715 5580
+2 2 0 1 0 2 51 -1 20 0.000 0 0 -1 0 0 5
+ 14265 5130 14715 5130 14715 5579 14265 5579 14265 5130
+4 1 0 50 -1 0 16 0.0000 4 165 195 14535 5399 fd\001
+-6
+6 13860 4455 15210 4950
+6 13860 4455 15210 4950
+2 2 0 1 0 2 51 -1 20 0.000 0 0 -1 0 0 5
+ 13860 4499 15210 4499 15210 4949 13860 4949 13860 4499
+4 1 0 50 -1 0 16 0.0000 4 195 525 14490 4769 target\001
+-6
+-6
+6 13725 7020 15300 7470
+2 2 0 1 0 6 52 -1 20 0.000 0 0 -1 0 0 5
+ 13725 7021 15300 7021 15300 7470 13725 7470 13725 7021
+4 1 0 50 -1 0 16 0.0000 4 195 825 14535 7335 fdtab[fd]\001
+-6
+6 -1710 4545 -360 5040
+2 2 0 1 0 2 51 -1 20 0.000 0 0 -1 0 0 5
+ -1710 4589 -360 4589 -360 5039 -1710 5039 -1710 4589
+4 1 0 50 -1 0 16 0.0000 4 195 525 -1080 4859 target\001
+-6
+6 -1215 5130 -765 5580
+2 2 0 1 0 2 51 -1 20 0.000 0 0 -1 0 0 5
+ -1215 5130 -765 5130 -765 5579 -1215 5579 -1215 5130
+4 1 0 50 -1 0 16 0.0000 4 165 195 -945 5399 fd\001
+-6
+6 -1800 7020 -225 7470
+2 2 0 1 0 6 52 -1 20 0.000 0 0 -1 0 0 5
+ -1800 7021 -225 7021 -225 7470 -1800 7470 -1800 7021
+4 1 0 50 -1 0 16 0.0000 4 195 825 -990 7335 fdtab[fd]\001
+-6
+6 10575 8325 11925 8775
+2 2 0 1 0 30 54 -1 20 0.000 0 0 -1 0 0 5
+ 10575 8325 11925 8325 11925 8775 10575 8775 10575 8325
+4 1 0 50 -1 0 16 0.0000 4 165 720 11295 8595 cookies\001
+-6
+6 10575 9225 11925 9675
+2 2 0 1 0 30 54 -1 20 0.000 0 0 -1 0 0 5
+ 10575 9225 11925 9225 11925 9675 10575 9675 10575 9225
+4 1 0 50 -1 0 16 0.0000 4 165 255 11205 9495 uri\001
+-6
+6 5985 9135 7335 9585
+2 2 0 1 0 6 52 -1 20 0.000 0 0 -1 0 0 5
+ 5985 9135 7335 9135 7335 9584 5985 9584 5985 9135
+4 1 0 50 -1 0 16 0.0000 4 165 405 6705 9404 auth\001
+-6
+6 3150 1845 4500 2295
+2 2 0 1 0 7 53 -1 20 0.000 0 0 -1 0 0 5
+ 3150 1845 4500 1845 4500 2295 3150 2295 3150 1845
+4 1 0 50 -1 0 16 0.0000 4 165 510 3870 2115 stkctr\001
+-6
+6 1575 1845 2925 2295
+2 2 0 1 0 7 53 -1 20 0.000 0 0 -1 0 0 5
+ 1575 1845 2925 1845 2925 2295 1575 2295 1575 1845
+4 1 0 50 -1 0 16 0.0000 4 165 675 2295 2160 listener\001
+-6
+6 0 1845 1350 2295
+2 2 0 1 0 7 53 -1 20 0.000 0 0 -1 0 0 5
+ 0 1845 1350 1845 1350 2295 0 2295 0 1845
+4 1 0 50 -1 0 16 0.0000 4 165 795 720 2115 frontend\001
+-6
+6 -1575 1845 -225 2295
+2 2 0 1 0 7 53 -1 20 0.000 0 0 -1 0 0 5
+ -1575 1845 -225 1845 -225 2295 -1575 2295 -1575 1845
+4 1 0 50 -1 0 16 0.0000 4 225 555 -855 2160 origin\001
+-6
+6 4950 1575 6300 2475
+2 2 0 1 0 5 54 -1 20 0.000 0 0 -1 0 0 5
+ 4950 1575 6300 1575 6300 2475 4950 2475 4950 1575
+4 1 0 50 -1 0 12 0.0000 4 165 1110 5670 2115 (kernel storage)\001
+4 1 0 50 -1 2 16 0.0000 4 225 450 5625 1845 pipe\001
+-6
+6 6525 1575 8775 2475
+2 2 0 1 0 5 54 -1 20 0.000 0 0 -1 0 0 5
+ 6525 1575 8775 1575 8775 2475 6525 2475 6525 1575
+4 1 0 50 -1 2 16 0.0000 4 165 660 7605 1845 buffer\001
+4 1 0 50 -1 0 12 0.0000 4 165 1200 7605 2115 (internal storage)\001
+-6
+6 6255 6975 8505 7875
+2 2 0 1 0 5 54 -1 20 0.000 0 0 -1 0 0 5
+ 6255 6975 8505 6975 8505 7875 6255 7875 6255 6975
+4 1 0 50 -1 2 16 0.0000 4 165 660 7335 7245 buffer\001
+4 1 0 50 -1 0 12 0.0000 4 165 1200 7335 7515 (internal storage)\001
+-6
+6 4725 6975 6075 7875
+2 2 0 1 0 5 54 -1 20 0.000 0 0 -1 0 0 5
+ 4725 6975 6075 6975 6075 7875 4725 7875 4725 6975
+4 1 0 50 -1 0 12 0.0000 4 165 1110 5445 7515 (kernel storage)\001
+4 1 0 50 -1 2 16 0.0000 4 225 450 5400 7245 pipe\001
+-6
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 5445 6120 5445 7019
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 7380 6120 7380 7019
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 8955 8550 8640 6120
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 5670 3870 5670 2475
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 7605 3870 7605 2475
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 4365 5625 2971 5626
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 3015 5895 4365 5894
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 4410 4140 3015 4140
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 3015 4455 4410 4455
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 9000 4140 10485 4140
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 10485 4455 9000 4455
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 8954 5624 10484 5625
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 10485 5895 8955 5894
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ -990 2295 -990 3870
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 1575 3240 1575 2565
+2 2 0 1 0 6 52 -1 20 0.000 0 0 -1 0 0 5
+ 13725 3870 15300 3870 15300 5669 13725 5669 13725 3870
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 14490 7020 14490 5670
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 13725 4995 12645 4995
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 12645 4545 13725 4545
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 1
+ 675 4320
+2 2 0 1 0 6 52 -1 20 0.000 0 0 -1 0 0 5
+ -1800 3870 -225 3870 -225 5669 -1800 5669 -1800 3870
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ -225 4545 900 4545
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 900 4995 -225 4995
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ -1035 7020 -1035 5670
+2 1 0 1 0 7 55 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 4365 8550 4860 4680
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 9990 9450 10575 9450
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 9990 8550 10575 8550
+2 2 0 1 0 3 60 -1 20 0.000 0 0 -1 0 0 5
+ 765 2970 12780 2970 12780 6570 765 6570 765 2970
+2 2 0 1 0 6 52 -1 20 0.000 0 0 -1 0 0 5
+ 3465 8550 5715 8550 5715 9585 3465 9585 3465 8550
+2 2 0 1 0 6 52 -1 20 0.000 0 0 -1 0 0 5
+ 7560 8550 9810 8550 9810 9585 7560 9585 7560 8550
+2 2 0 1 0 6 52 -1 20 0.000 0 0 -1 0 0 5
+ 9450 1575 11700 1575 11700 2475 9450 2475 9450 1575
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 9855 2475 9855 2970
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 120.00 180.00
+ 11250 2970 11250 2475
+2 2 0 1 0 3 60 -1 20 0.000 0 0 -1 0 0 5
+ -1800 1350 4725 1350 4725 2565 -1800 2565 -1800 1350
+2 2 0 1 0 7 53 -1 20 0.000 0 0 -1 0 0 5
+ 900 3240 2250 3240 2250 3690 900 3690 900 3240
+2 2 0 1 0 7 53 -1 20 0.000 0 0 -1 0 0 5
+ 900 3870 3015 3870 3015 6299 900 6299 900 3870
+2 2 0 1 0 7 53 -1 20 0.000 0 0 -1 0 0 5
+ 10485 3870 12645 3870 12645 6299 10485 6299 10485 3870
+2 2 0 1 0 4 54 -1 20 0.000 0 0 -1 0 0 5
+ 4365 5399 8955 5399 8955 6119 4365 6119 4365 5399
+2 2 0 1 0 4 54 -1 20 0.000 0 0 -1 0 0 5
+ 4410 3870 9000 3870 9000 4680 4410 4680 4410 3870
+2 2 0 1 0 7 60 -1 20 0.000 0 0 -1 0 0 5
+ 3285 8055 9990 8055 9990 9855 3285 9855 3285 8055
+2 2 0 1 0 6 52 -1 20 0.000 0 0 -1 0 0 5
+ 5985 8550 7335 8550 7335 8999 5985 8999 5985 8550
+4 0 0 54 -1 12 12 0.0000 4 105 210 3060 4635 ib\001
+4 0 0 54 -1 12 12 0.0000 4 75 420 9135 4050 cons\001
+4 0 0 54 -1 12 12 0.0000 4 105 210 3060 6165 ob\001
+4 2 0 54 -1 12 12 0.0000 4 75 420 4275 5535 cons\001
+4 2 0 54 -1 12 12 0.0000 4 135 420 4320 4050 prod\001
+4 0 0 54 -1 12 12 0.0000 4 135 420 9090 5580 prod\001
+4 2 0 54 -1 12 12 0.0000 4 105 210 10395 6120 ib\001
+4 2 0 54 -1 12 12 0.0000 4 105 210 10395 4680 ob\001
+4 0 0 54 -1 12 12 0.0000 4 75 525 14535 6930 owner\001
+4 1 0 50 -1 2 16 0.0000 4 165 1125 14535 4140 connection\001
+4 2 0 54 -1 12 12 0.0000 4 75 525 13680 4950 owner\001
+4 0 0 54 -1 12 12 0.0000 4 75 525 -180 4455 owner\001
+4 0 0 54 -1 12 12 0.0000 4 75 525 -990 6930 owner\001
+4 2 0 54 -1 12 12 0.0000 4 105 315 630 4950 end\001
+4 0 0 54 -1 12 12 0.0000 4 105 315 12870 4455 end\001
+4 0 0 54 -1 12 12 0.0000 4 105 315 4500 8505 chn\001
+4 0 0 54 -1 12 12 0.0000 4 105 315 9045 8505 chn\001
+4 1 0 50 -1 2 16 0.0000 4 165 435 10575 2070 task\001
+4 0 0 54 -1 12 12 0.0000 4 105 420 11385 2880 task\001
+4 0 0 54 -1 12 12 0.0000 4 105 735 9990 2655 context\001
+4 1 0 50 -1 0 16 0.0000 4 165 675 1620 3555 session\001
+4 1 0 50 -1 2 16 0.0000 4 165 705 1485 1620 session\001
+4 1 0 50 -1 2 16 0.0000 4 165 705 6660 3285 stream\001
+4 1 0 50 -1 2 16 0.0000 4 165 1125 -990 4140 connection\001
+4 1 0 50 -1 2 16 0.0000 4 225 1755 1980 5085 stream_interface\001
+4 1 0 50 -1 2 16 0.0000 4 225 1755 11610 5085 stream_interface\001
+4 1 0 50 -1 0 16 0.0000 4 195 420 11610 5355 si[1]\001
+4 1 0 50 -1 0 16 0.0000 4 195 420 1980 5355 si[0]\001
+4 1 0 50 -1 2 16 0.0000 4 225 915 6660 8325 http_txn\001
+4 1 0 50 -1 0 12 0.0000 4 165 2385 6660 4545 (request forwarding and analysis)\001
+4 1 0 50 -1 0 12 0.0000 4 165 2505 6615 5985 (response forwarding and analysis)\001
+4 1 0 50 -1 0 16 0.0000 4 105 270 6840 5669 res\001
+4 1 0 50 -1 2 16 0.0000 4 165 810 6165 4140 channel\001
+4 1 0 50 -1 0 16 0.0000 4 150 300 6840 4140 req\001
+4 1 0 50 -1 2 16 0.0000 4 165 810 6210 5669 channel\001
+4 1 0 50 -1 0 12 0.0000 4 165 1935 4590 9450 (HTTP request processing)\001
+4 1 0 50 -1 0 12 0.0000 4 165 2055 8685 9450 (HTTP response processing)\001
+4 1 0 50 -1 2 16 0.0000 4 225 975 8685 8865 http_msg\001
+4 1 0 50 -1 2 16 0.0000 4 225 975 4590 8865 http_msg\001
+4 1 0 50 -1 0 16 0.0000 4 150 300 4590 9180 req\001
+4 1 0 50 -1 0 16 0.0000 4 150 285 8685 9180 rsp\001
+4 1 0 50 -1 2 16 0.0000 4 225 825 6705 8819 hdr_idx\001
--- /dev/null
+<?xml version="1.0" standalone="no"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 20000303 Stylable//EN"
+ "http://www.w3.org/TR/2000/03/WD-SVG-20000303/DTD/svg-20000303-stylable.dtd">
+<!-- Creator: fig2dev Version 3.2 Patchlevel 4 -->
+<!-- CreationDate: Tue Apr 21 14:12:00 2015 -->
+<svg xmlns="http://www.w3.org/2000/svg" width="8.3in" height="11.7in" viewBox="0 0 13858 20157">
+<g style="stroke-width:.025in; stroke:black; fill:none">
+<defs>
+<pattern id="tile1" x="0" y="0" width="200" height="200"
+ patternUnits="userSpaceOnUse">
+<path d="M 0 -100 200 16" />
+<path d="M 0 -60 200 56" />
+<path d="M 0 -20 200 96" />
+<path d="M 0 20 200 136" />
+<path d="M 0 60 200 176" />
+<path d="M 0 100 200 216" />
+<path d="M 0 140 200 256" />
+<path d="M 0 180 200 296" />
+</pattern>
+<pattern id="tile2" x="0" y="0" width="200" height="200"
+ patternUnits="userSpaceOnUse">
+<path d="M 200 -100 0 16" />
+<path d="M 200 -60 0 56" />
+<path d="M 200 -20 0 96" />
+<path d="M 200 20 0 136" />
+<path d="M 200 60 0 176" />
+<path d="M 200 100 0 216" />
+<path d="M 200 140 0 256" />
+<path d="M 200 180 0 296" />
+</pattern>
+<pattern id="tile3" x="0" y="0" width="200" height="200"
+ patternUnits="userSpaceOnUse">
+<path d="M 0 -100 200 16" />
+<path d="M 200 -100 0 16" />
+<path d="M 0 -60 200 56" />
+<path d="M 200 -60 0 56" />
+<path d="M 0 -20 200 96" />
+<path d="M 200 -20 0 96" />
+<path d="M 0 20 200 136" />
+<path d="M 200 20 0 136" />
+<path d="M 0 60 200 176" />
+<path d="M 200 60 0 176" />
+<path d="M 0 100 200 216" />
+<path d="M 200 100 0 216" />
+<path d="M 0 140 200 256" />
+<path d="M 200 140 0 256" />
+<path d="M 0 180 200 296" />
+<path d="M 200 180 0 296" />
+</pattern>
+<pattern id="tile4" x="0" y="0" width="200" height="200"
+ patternUnits="userSpaceOnUse">
+<path d="M 100 0 200 100" />
+<path d="M 0 0 200 200" />
+<path d="M 0 100 100 200" />
+</pattern>
+<pattern id="tile5" x="0" y="0" width="200" height="200"
+ patternUnits="userSpaceOnUse">
+<path d="M 100 0 0 100" />
+<path d="M 200 0 0 200" />
+<path d="M 200 100 100 200" />
+</pattern>
+<pattern id="tile6" x="0" y="0" width="200" height="200"
+ patternUnits="userSpaceOnUse">
+<path d="M 100 0 200 100" />
+<path d="M 0 0 200 200" />
+<path d="M 0 100 100 200" />
+<path d="M 100 0 0 100" />
+<path d="M 200 0 0 200" />
+<path d="M 200 100 100 200" />
+</pattern>
+<pattern id="tile7" x="0" y="0" width="200" height="200"
+ patternUnits="userSpaceOnUse">
+<path d="M 0 0 0 50" />
+<path d="M 0 50 200 50" />
+<path d="M 100 50 100 150" />
+<path d="M 0 150 200 150" />
+<path d="M 0 150 0 200" />
+</pattern>
+<pattern id="tile8" x="0" y="0" width="200" height="200"
+ patternUnits="userSpaceOnUse">
+<path d="M 0 0 50 0" />
+<path d="M 50 0 50 200" />
+<path d="M 50 100 150 100" />
+<path d="M 150 0 150 200" />
+<path d="M 150 0 200 0" />
+</pattern>
+<pattern id="tile9" x="0" y="0" width="200" height="200"
+ patternUnits="userSpaceOnUse">
+<path d="M 0 50 200 50" />
+<path d="M 0 150 200 150" />
+</pattern>
+<pattern id="tile10" x="0" y="0" width="200" height="200"
+ patternUnits="userSpaceOnUse">
+<path d="M 50 0 50 200" />
+<path d="M 150 0 150 200" />
+</pattern>
+<pattern id="tile11" x="0" y="0" width="200" height="200"
+ patternUnits="userSpaceOnUse">
+<path d="M 0 50 200 50" />
+<path d="M 0 150 200 150" />
+<path d="M 50 0 50 200" />
+<path d="M 150 0 150 200" />
+</pattern>
+<pattern id="tile12" x="0" y="0" width="200" height="200"
+ patternUnits="userSpaceOnUse">
+<path d="M 0 0 25 50" />
+<path d="M 0 50 200 50" />
+<path d="M 100 50 125 150" />
+<path d="M 0 150 200 150" />
+<path d="M 0 150 25 200" />
+</pattern>
+<pattern id="tile13" x="0" y="0" width="200" height="200"
+ patternUnits="userSpaceOnUse">
+<path d="M 200 0 175 50" />
+<path d="M 0 50 200 50" />
+<path d="M 100 50 75 150" />
+<path d="M 0 150 200 150" />
+<path d="M 200 150 175 200" />
+</pattern>
+<pattern id="tile14" x="0" y="0" width="200" height="200"
+ patternUnits="userSpaceOnUse">
+<path d="M 0 0 50 25" />
+<path d="M 50 0 50 200" />
+<path d="M 50 100 150 125" />
+<path d="M 150 0 150 200" />
+<path d="M 150 0 200 25" />
+</pattern>
+<pattern id="tile15" x="0" y="0" width="200" height="200"
+ patternUnits="userSpaceOnUse">
+<path d="M 0 25 50 0" />
+<path d="M 50 0 50 200" />
+<path d="M 50 125 150 100" />
+<path d="M 150 0 150 200" />
+<path d="M 150 25 200 0" />
+</pattern>
+<pattern id="tile16" x="0" y="0" width="200" height="200"
+ patternUnits="userSpaceOnUse">
+<path d="M 0 50 A 50 50 0 1 0 100 50" />
+<path d="M 100 50 A 50 50 0 1 0 200 50" />
+<path d="M 50 100 A 50 50 0 1 0 150 100" />
+<path d="M 0 150 A 50 50 0 0 0 50 100" />
+<path d="M 150 100 A 50 50 0 1 0 200 50" />
+<path d="M 50 0 A 50 50 0 1 0 150 0" />
+<path d="M 150 0 A 50 50 0 0 0 200 50" />
+<path d="M 0 50 A 50 50 0 0 0 50 0" />
+<path d="M 0 150 A 50 50 0 1 0 100 150" />
+<path d="M 100 150 A 50 50 0 1 0 200 150" />
+</pattern>
+<pattern id="tile17" x="0" y="0" width="100" height="100"
+ patternUnits="userSpaceOnUse">
+<g transform="scale(0.5)" >
+<path d="M 0 50 A 50 50 0 1 0 100 50" />
+<path d="M 100 50 A 50 50 0 1 0 200 50" />
+<path d="M 50 100 A 50 50 0 1 0 150 100" />
+<path d="M 0 150 A 50 50 0 0 0 50 100" />
+<path d="M 150 100 A 50 50 0 1 0 200 50" />
+<path d="M 50 0 A 50 50 0 1 0 150 0" />
+<path d="M 150 0 A 50 50 0 0 0 200 50" />
+<path d="M 0 50 A 50 50 0 0 0 50 0" />
+<path d="M 0 150 A 50 50 0 1 0 100 150" />
+<path d="M 100 150 A 50 50 0 1 0 200 150" />
+</g>
+</pattern>
+<pattern id="tile18" x="0" y="0" width="200" height="200"
+ patternUnits="userSpaceOnUse">
+<circle cx="100" cy="100" r="100" />
+</pattern>
+<pattern id="tile19" x="0" y="0" width="200" height="200"
+ patternUnits="userSpaceOnUse">
+<path d="M 0 50 45 0 105 0 140 50 200 50 " />
+<path d="M 0 50 45 100 105 100 140 50 200 50" />
+<path d="M 0 150 45 100 105 100 140 150 200 150" />
+<path d="M 0 150 45 200 105 200 140 150 200 150" />
+</pattern>
+<pattern id="tile20" x="0" y="0" width="200" height="200"
+ patternUnits="userSpaceOnUse">
+<path d="M 0 70 65 0 140 0 200 70 " />
+<path d="M 0 70 0 130 65 200 140 200 200 130 200 70" />
+</pattern>
+<pattern id="tile21" x="0" y="0" width="200" height="200"
+ patternUnits="userSpaceOnUse">
+<path d="M 50 0 75 25 100 0 M 150 0 175 25 200 0" />
+<path d="M 0 50 25 25 75 75 125 25 175 75 200 50" />
+<path d="M 0 100 25 75 75 125 125 75 175 125 200 100" />
+<path d="M 0 150 25 125 75 175 125 125 175 175 200 150" />
+<path d="M 0 200 25 175 75 225 125 175 175 225 200 200" />
+</pattern>
+<pattern id="tile22" x="0" y="0" width="200" height="200"
+ patternUnits="userSpaceOnUse">
+<path d="M 0 50 25 75 0 100 M 0 150 25 175 0 200" />
+<path d="M 50 0 25 25 75 75 25 125 75 175 50 200" />
+<path d="M 100 0 75 25 125 75 75 125 125 175 100 200" />
+<path d="M 150 0 125 25 175 75 125 125 175 175 150 200" />
+<path d="M 200 0 175 25 225 75 175 125 225 175 200 200" />
+</pattern>
+</defs>
+<!-- Line -->
+<path d="M 803,3118
+13417,3118
+13417,6897
+803,6897
+803,3118
+" style="stroke:#000000;stroke-width:16;
+fill:#00ffff;
+"/>
+<!-- Line -->
+<path d="M -1889,1417
+4960,1417
+4960,2692
+-1889,2692
+-1889,1417
+" style="stroke:#000000;stroke-width:16;
+fill:#00ffff;
+"/>
+<!-- Line -->
+<path d="M 3448,8456
+10488,8456
+10488,10346
+3448,10346
+3448,8456
+" style="stroke:#000000;stroke-width:16;
+fill:#ffffff;
+"/>
+<!-- Line -->
+<path d="M 4582,8976
+5102,4913
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 5138 5121
+5100 4925
+5013 5104
+5138 5121
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 11102,8740
+12519,8740
+12519,9212
+11102,9212
+11102,8740
+" style="stroke:#000000;stroke-width:16;
+fill:#ffe0e0;
+"/>
+<!-- Line -->
+<path d="M 11102,9685
+12519,9685
+12519,10157
+11102,10157
+11102,9685
+" style="stroke:#000000;stroke-width:16;
+fill:#ffe0e0;
+"/>
+<!-- Line -->
+<path d="M 5196,1653
+6614,1653
+6614,2598
+5196,2598
+5196,1653
+" style="stroke:#000000;stroke-width:16;
+fill:#ff00ff;
+"/>
+<!-- Line -->
+<path d="M 6850,1653
+9212,1653
+9212,2598
+6850,2598
+6850,1653
+" style="stroke:#000000;stroke-width:16;
+fill:#ff00ff;
+"/>
+<!-- Line -->
+<path d="M 6566,7322
+8929,7322
+8929,8267
+6566,8267
+6566,7322
+" style="stroke:#000000;stroke-width:16;
+fill:#ff00ff;
+"/>
+<!-- Line -->
+<path d="M 4960,7322
+6377,7322
+6377,8267
+4960,8267
+4960,7322
+" style="stroke:#000000;stroke-width:16;
+fill:#ff00ff;
+"/>
+<!-- Line -->
+<path d="M 4582,5668
+9401,5668
+9401,6424
+4582,6424
+4582,5668
+" style="stroke:#000000;stroke-width:16;
+fill:#ff0000;
+"/>
+<!-- Line -->
+<path d="M 4629,4062
+9448,4062
+9448,4913
+4629,4913
+4629,4062
+" style="stroke:#000000;stroke-width:16;
+fill:#ff0000;
+"/>
+<!-- Text -->
+<text x="3212" y="4866" fill="#000000" font-family="Courier" font-style="normal" font-weight="normal" font-size="152" text-anchor="start" >
+ib</text>
+<!-- Text -->
+<text x="9590" y="4251" fill="#000000" font-family="Courier" font-style="normal" font-weight="normal" font-size="152" text-anchor="start" >
+cons</text>
+<!-- Text -->
+<text x="3212" y="6472" fill="#000000" font-family="Courier" font-style="normal" font-weight="normal" font-size="152" text-anchor="start" >
+ob</text>
+<!-- Text -->
+<text x="4488" y="5811" fill="#000000" font-family="Courier" font-style="normal" font-weight="normal" font-size="152" text-anchor="end" >
+cons</text>
+<!-- Text -->
+<text x="4535" y="4251" fill="#000000" font-family="Courier" font-style="normal" font-weight="normal" font-size="152" text-anchor="end" >
+prod</text>
+<!-- Text -->
+<text x="9543" y="5858" fill="#000000" font-family="Courier" font-style="normal" font-weight="normal" font-size="152" text-anchor="start" >
+prod</text>
+<!-- Text -->
+<text x="10913" y="6425" fill="#000000" font-family="Courier" font-style="normal" font-weight="normal" font-size="152" text-anchor="end" >
+ib</text>
+<!-- Text -->
+<text x="10913" y="4913" fill="#000000" font-family="Courier" font-style="normal" font-weight="normal" font-size="152" text-anchor="end" >
+ob</text>
+<!-- Text -->
+<text x="15259" y="7275" fill="#000000" font-family="Courier" font-style="normal" font-weight="normal" font-size="152" text-anchor="start" >
+owner</text>
+<!-- Text -->
+<text x="14362" y="5196" fill="#000000" font-family="Courier" font-style="normal" font-weight="normal" font-size="152" text-anchor="end" >
+owner</text>
+<!-- Text -->
+<text x="-188" y="4677" fill="#000000" font-family="Courier" font-style="normal" font-weight="normal" font-size="152" text-anchor="start" >
+owner</text>
+<!-- Text -->
+<text x="-1039" y="7275" fill="#000000" font-family="Courier" font-style="normal" font-weight="normal" font-size="152" text-anchor="start" >
+owner</text>
+<!-- Text -->
+<text x="661" y="5196" fill="#000000" font-family="Courier" font-style="normal" font-weight="normal" font-size="152" text-anchor="end" >
+end</text>
+<!-- Text -->
+<text x="13511" y="4677" fill="#000000" font-family="Courier" font-style="normal" font-weight="normal" font-size="152" text-anchor="start" >
+end</text>
+<!-- Text -->
+<text x="4724" y="8929" fill="#000000" font-family="Courier" font-style="normal" font-weight="normal" font-size="152" text-anchor="start" >
+chn</text>
+<!-- Text -->
+<text x="9496" y="8929" fill="#000000" font-family="Courier" font-style="normal" font-weight="normal" font-size="152" text-anchor="start" >
+chn</text>
+<!-- Text -->
+<text x="11952" y="3023" fill="#000000" font-family="Courier" font-style="normal" font-weight="normal" font-size="152" text-anchor="start" >
+task</text>
+<!-- Text -->
+<text x="10488" y="2787" fill="#000000" font-family="Courier" font-style="normal" font-weight="normal" font-size="152" text-anchor="start" >
+context</text>
+<!-- Line -->
+<path d="M 2598,3401
+4015,3401
+4015,3874
+2598,3874
+2598,3401
+" style="stroke:#000000;stroke-width:16;
+fill:#ffffff;
+"/>
+<!-- Line -->
+<path d="M 4251,3400
+5669,3400
+5669,3872
+4251,3872
+4251,3400
+" style="stroke:#000000;stroke-width:16;
+fill:#ffe0e0;
+"/>
+<!-- Line -->
+<path d="M 11811,3400
+13228,3400
+13228,3872
+11811,3872
+11811,3400
+" style="stroke:#000000;stroke-width:16;
+fill:#ffffff;
+"/>
+<!-- Line -->
+<path d="M 10204,3401
+11622,3401
+11622,3874
+10204,3874
+10204,3401
+" style="stroke:#000000;stroke-width:16;
+fill:#ffffff;
+"/>
+<!-- Line -->
+<path d="M 3307,1937
+4724,1937
+4724,2409
+3307,2409
+3307,1937
+" style="stroke:#000000;stroke-width:16;
+fill:#ffffff;
+"/>
+<!-- Line -->
+<path d="M 1653,1937
+3070,1937
+3070,2409
+1653,2409
+1653,1937
+" style="stroke:#000000;stroke-width:16;
+fill:#ffffff;
+"/>
+<!-- Line -->
+<path d="M 0,1937
+1417,1937
+1417,2409
+0,2409
+0,1937
+" style="stroke:#000000;stroke-width:16;
+fill:#ffffff;
+"/>
+<!-- Line -->
+<path d="M -1653,1937
+-236,1937
+-236,2409
+-1653,2409
+-1653,1937
+" style="stroke:#000000;stroke-width:16;
+fill:#ffffff;
+"/>
+<!-- Line -->
+<path d="M 944,3401
+2362,3401
+2362,3874
+944,3874
+944,3401
+" style="stroke:#000000;stroke-width:16;
+fill:#ffffff;
+"/>
+<!-- Line -->
+<path d="M 944,4062
+3165,4062
+3165,6613
+944,6613
+944,4062
+" style="stroke:#000000;stroke-width:16;
+fill:#ffffff;
+"/>
+<!-- Line -->
+<path d="M 11007,4062
+13275,4062
+13275,6613
+11007,6613
+11007,4062
+" style="stroke:#000000;stroke-width:16;
+fill:#ffffff;
+"/>
+<!-- Line -->
+<path d="M 14409,7371
+16062,7371
+16062,7842
+14409,7842
+14409,7371
+" style="stroke:#000000;stroke-width:16;
+fill:#ffff00;
+"/>
+<!-- Line -->
+<path d="M -1889,7371
+-236,7371
+-236,7842
+-1889,7842
+-1889,7371
+" style="stroke:#000000;stroke-width:16;
+fill:#ffff00;
+"/>
+<!-- Line -->
+<path d="M 6283,9590
+7700,9590
+7700,10061
+6283,10061
+6283,9590
+" style="stroke:#000000;stroke-width:16;
+fill:#ffff00;
+"/>
+<!-- Line -->
+<path d="M 14409,4062
+16062,4062
+16062,5951
+14409,5951
+14409,4062
+" style="stroke:#000000;stroke-width:16;
+fill:#ffff00;
+"/>
+<!-- Line -->
+<path d="M -1889,4062
+-236,4062
+-236,5951
+-1889,5951
+-1889,4062
+" style="stroke:#000000;stroke-width:16;
+fill:#ffff00;
+"/>
+<!-- Line -->
+<path d="M 3637,8976
+6000,8976
+6000,10062
+3637,10062
+3637,8976
+" style="stroke:#000000;stroke-width:16;
+fill:#ffff00;
+"/>
+<!-- Line -->
+<path d="M 7937,8976
+10299,8976
+10299,10062
+7937,10062
+7937,8976
+" style="stroke:#000000;stroke-width:16;
+fill:#ffff00;
+"/>
+<!-- Line -->
+<path d="M 9921,1653
+12283,1653
+12283,2598
+9921,2598
+9921,1653
+" style="stroke:#000000;stroke-width:16;
+fill:#ffff00;
+"/>
+<!-- Line -->
+<path d="M 6283,8976
+7700,8976
+7700,9447
+6283,9447
+6283,8976
+" style="stroke:#000000;stroke-width:16;
+fill:#ffff00;
+"/>
+<!-- Line -->
+<path d="M 14976,5385
+15448,5385
+15448,5857
+14976,5857
+14976,5385
+" style="stroke:#000000;stroke-width:16;
+fill:#00ff00;
+"/>
+<!-- Line -->
+<path d="M 14551,4723
+15968,4723
+15968,5195
+14551,5195
+14551,4723
+" style="stroke:#000000;stroke-width:16;
+fill:#00ff00;
+"/>
+<!-- Line -->
+<path d="M -1795,4817
+-377,4817
+-377,5290
+-1795,5290
+-1795,4817
+" style="stroke:#000000;stroke-width:16;
+fill:#00ff00;
+"/>
+<!-- Line -->
+<path d="M -1275,5385
+-803,5385
+-803,5857
+-1275,5857
+-1275,5385
+" style="stroke:#000000;stroke-width:16;
+fill:#00ff00;
+"/>
+<!-- Text -->
+<text x="3354" y="3685" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+stkctr</text>
+<!-- Text -->
+<text x="5007" y="3683" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+logs</text>
+<!-- Text -->
+<text x="12566" y="3683" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+target</text>
+<!-- Text -->
+<text x="10960" y="3685" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+store</text>
+<!-- Text -->
+<text x="15259" y="5668" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+fd</text>
+<!-- Text -->
+<text x="15212" y="5006" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+target</text>
+<!-- Text -->
+<text x="15259" y="7700" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+fdtab[fd]</text>
+<!-- Text -->
+<text x="-1133" y="5101" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+target</text>
+<!-- Text -->
+<text x="-992" y="5668" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+fd</text>
+<!-- Text -->
+<text x="-1039" y="7700" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+fdtab[fd]</text>
+<!-- Text -->
+<text x="11858" y="9023" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+cookies</text>
+<!-- Text -->
+<text x="11763" y="9968" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+uri</text>
+<!-- Text -->
+<text x="7039" y="9872" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+auth</text>
+<!-- Text -->
+<text x="4062" y="2220" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+stkctr</text>
+<!-- Text -->
+<text x="2409" y="2267" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+listener</text>
+<!-- Text -->
+<text x="755" y="2220" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+frontend</text>
+<!-- Text -->
+<text x="-897" y="2267" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+origin</text>
+<!-- Text -->
+<text x="5952" y="2220" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="152" text-anchor="middle" >
+(kernel storage)</text>
+<!-- Text -->
+<text x="5905" y="1937" fill="#000000" font-family="Times" font-style="normal" font-weight="bold" font-size="202" text-anchor="middle" >
+pipe</text>
+<!-- Text -->
+<text x="7984" y="1937" fill="#000000" font-family="Times" font-style="normal" font-weight="bold" font-size="202" text-anchor="middle" >
+buffer</text>
+<!-- Text -->
+<text x="7984" y="2220" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="152" text-anchor="middle" >
+(internal storage)</text>
+<!-- Text -->
+<text x="7700" y="7606" fill="#000000" font-family="Times" font-style="normal" font-weight="bold" font-size="202" text-anchor="middle" >
+buffer</text>
+<!-- Text -->
+<text x="7700" y="7889" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="152" text-anchor="middle" >
+(internal storage)</text>
+<!-- Text -->
+<text x="5716" y="7889" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="152" text-anchor="middle" >
+(kernel storage)</text>
+<!-- Text -->
+<text x="5669" y="7606" fill="#000000" font-family="Times" font-style="normal" font-weight="bold" font-size="202" text-anchor="middle" >
+pipe</text>
+<!-- Line -->
+<path d="M 5716,6425
+5716,7369
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 5653 7167
+5716 7356
+5779 7167
+5653 7167
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 7748,6425
+7748,7369
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 7685 7167
+7748 7356
+7811 7167
+7685 7167
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 9401,8976
+9070,6425
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 9159 6616
+9072 6437
+9034 6633
+9159 6616
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 5952,4062
+5952,2598
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 6015 2800
+5952 2611
+5889 2800
+6015 2800
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 7984,4062
+7984,2598
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 8047 2800
+7984 2611
+7921 2800
+8047 2800
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 4582,5905
+3119,5906
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 3319 5842
+3131 5906
+3320 5968
+3319 5842
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 3165,6188
+4582,6187
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 4381 6250
+4570 6187
+4380 6124
+4381 6250
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 4629,4346
+3165,4346
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 3366 4283
+3177 4346
+3366 4409
+3366 4283
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 3165,4677
+4629,4677
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 4428 4740
+4617 4677
+4428 4614
+4428 4740
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 9448,4346
+11007,4346
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 10806 4409
+10995 4346
+10806 4283
+10806 4409
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 11007,4677
+9448,4677
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 9650 4614
+9461 4677
+9650 4740
+9650 4614
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 9400,5904
+11006,5905
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 10804 5967
+10994 5905
+10805 5841
+10804 5967
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 11007,6188
+9401,6187
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 9603 6124
+9414 6187
+9602 6250
+9603 6124
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M -1039,2409
+-1039,4062
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M -1101 3861
+-1038 4050
+-975 3861
+-1101 3861
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 1653,3401
+1653,2692
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 1716 2894
+1653 2705
+1590 2894
+1716 2894
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 15212,7370
+15212,5952
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 15275 6154
+15212 5965
+15149 6154
+15275 6154
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 14409,5244
+13275,5244
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 13477 5181
+13288 5244
+13477 5307
+13477 5181
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 13275,4771
+14409,4771
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 14207 4834
+14396 4771
+14207 4708
+14207 4834
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 708,4535
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Line -->
+<path d="M -236,4771
+944,4771
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 743 4834
+932 4771
+743 4708
+743 4834
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 944,5244
+-236,5244
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M -33 5181
+-222 5244
+-33 5307
+-33 5181
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M -1086,7370
+-1086,5952
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M -1022 6154
+-1085 5965
+-1148 6154
+-1022 6154
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 10488,9921
+11102,9921
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 10900 9984
+11089 9921
+10900 9858
+10900 9984
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 10488,8976
+11102,8976
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 10900 9039
+11089 8976
+10900 8913
+10900 9039
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 10346,2598
+10346,3118
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 10283 2916
+10346 3105
+10409 2916
+10283 2916
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Line -->
+<path d="M 11811,3118
+11811,2598
+" style="stroke:#000000;stroke-width:16;
+"/>
+<!-- Arrowhead on endpoint -->
+<path d="M 11874 2800
+11811 2611
+11748 2800
+11874 2800
+Z
+" style="stroke:#000000;stroke-width:16;
+fill:#000000;"/>
+<!-- Text -->
+<text x="15259" y="4346" fill="#000000" font-family="Times" font-style="normal" font-weight="bold" font-size="202" text-anchor="middle" >
+connection</text>
+<!-- Text -->
+<text x="11102" y="2173" fill="#000000" font-family="Times" font-style="normal" font-weight="bold" font-size="202" text-anchor="middle" >
+task</text>
+<!-- Text -->
+<text x="1700" y="3732" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+session</text>
+<!-- Text -->
+<text x="1559" y="1700" fill="#000000" font-family="Times" font-style="normal" font-weight="bold" font-size="202" text-anchor="middle" >
+session</text>
+<!-- Text -->
+<text x="6992" y="3448" fill="#000000" font-family="Times" font-style="normal" font-weight="bold" font-size="202" text-anchor="middle" >
+stream</text>
+<!-- Text -->
+<text x="-1039" y="4346" fill="#000000" font-family="Times" font-style="normal" font-weight="bold" font-size="202" text-anchor="middle" >
+connection</text>
+<!-- Text -->
+<text x="2078" y="5338" fill="#000000" font-family="Times" font-style="normal" font-weight="bold" font-size="202" text-anchor="middle" >
+stream_interface</text>
+<!-- Text -->
+<text x="12188" y="5338" fill="#000000" font-family="Times" font-style="normal" font-weight="bold" font-size="202" text-anchor="middle" >
+stream_interface</text>
+<!-- Text -->
+<text x="12188" y="5622" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+si[1]</text>
+<!-- Text -->
+<text x="2078" y="5622" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+si[0]</text>
+<!-- Text -->
+<text x="6992" y="8740" fill="#000000" font-family="Times" font-style="normal" font-weight="bold" font-size="202" text-anchor="middle" >
+http_txn</text>
+<!-- Text -->
+<text x="6992" y="4771" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="152" text-anchor="middle" >
+(request forwarding and analysis)</text>
+<!-- Text -->
+<text x="6944" y="6283" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="152" text-anchor="middle" >
+(response forwarding and analysis)</text>
+<!-- Text -->
+<text x="7181" y="5951" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+res</text>
+<!-- Text -->
+<text x="6472" y="4346" fill="#000000" font-family="Times" font-style="normal" font-weight="bold" font-size="202" text-anchor="middle" >
+channel</text>
+<!-- Text -->
+<text x="7181" y="4346" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+req</text>
+<!-- Text -->
+<text x="6519" y="5951" fill="#000000" font-family="Times" font-style="normal" font-weight="bold" font-size="202" text-anchor="middle" >
+channel</text>
+<!-- Text -->
+<text x="4818" y="9921" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="152" text-anchor="middle" >
+(HTTP request processing)</text>
+<!-- Text -->
+<text x="9118" y="9921" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="152" text-anchor="middle" >
+(HTTP response processing)</text>
+<!-- Text -->
+<text x="9118" y="9307" fill="#000000" font-family="Times" font-style="normal" font-weight="bold" font-size="202" text-anchor="middle" >
+http_msg</text>
+<!-- Text -->
+<text x="4818" y="9307" fill="#000000" font-family="Times" font-style="normal" font-weight="bold" font-size="202" text-anchor="middle" >
+http_msg</text>
+<!-- Text -->
+<text x="4818" y="9637" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+req</text>
+<!-- Text -->
+<text x="9118" y="9637" fill="#000000" font-family="Times" font-style="normal" font-weight="normal" font-size="202" text-anchor="middle" >
+rsp</text>
+<!-- Text -->
+<text x="7039" y="9258" fill="#000000" font-family="Times" font-style="normal" font-weight="bold" font-size="202" text-anchor="middle" >
+hdr_idx</text>
+</g>
+</svg>
--- /dev/null
+2011/02/25 - Description of the different entities in haproxy - w@1wt.eu
+
+
+1) Definitions
+--------------
+
+Listener
+--------
+
+A listener is the entity which is part of a frontend and which accepts
+connections. There are as many listeners as there are ip:port pairs.
+At least one listener is instantiated for each "bind" entry, and
+port ranges will lead to as many listeners as there are ports in the
+range. A listener simply has a listening file descriptor ready to accept
+incoming connections and to dispatch them to upper layers.
+
+
+Initiator
+---------
+
+An initiator is instantiated for each incoming connection on a listener. It may
+also be instantiated by a task pretending to be a client. An initiator calls
+the next stage's accept() callback to present it with the parameters of the
+incoming connection.
+
+
+Session
+-------
+
+A session is the only entity located between an initiator and a connector.
+This is the last stage which offers an accept() callback, and all of its
+processing will continue with the next stage's connect() callback. It holds
+the buffers needed to forward the protocol data between each side. This entity
+sees the native protocol, and is able to call analysers on these buffers. As it
+is used in both directions, it always has two buffers.
+
+When transformations are required, some of them may be done on the initiator
+side and other ones on the connector side. If additional buffers are needed for
+such transforms, those buffers cannot replace the session's buffers, but they
+may complete them.
+
+A session only needs to be instantiated when forwarding of data is required
+between two sides. Accepting and filtering on layer 4 information only does not
+require a session.
+
+For instance, let's consider the case of a proxy which receives and decodes
+HTTPS traffic, processes it as HTTP and recodes it as HTTPS before forwarding
+it. We'd have 3 layers of buffers, where the middle ones are used for
+forwarding of the protocol data (HTTP here) :
+
+ <-- ssl dec --> <-forwarding-> <-- ssl enc -->
+
+ ,->[||||]--. ,->[||||]--. ,->[||||]--.
+ client (|) (|) (|) (|) server
+ ^--[||||]<-' ^--[||||]<-' ^--[||||]<-'
+
+ HTTPS HTTP HTTPS
+
+The session handling code is only responsible for monitoring the forwarding
+buffers here. It may declare the end of the session once those buffers are
+closed and no analyser wants to re-open them. The session is also the entity
+which applies the load balancing algorithm and decides the server to use.
+
+The other sides are responsible for propagating the state up to the session
+which takes decisions.
+
+
+Connector
+---------
+
+A connector is the entity which makes it possible to instantiate a connection
+to a known destination. It presents a connect() callback, and as such appears
+on the right side of diagrams.
+
+
+Connection
+----------
+
+A connection is the entity instantiated by a connector. It may be composed of
+multiple stages linked together. Generally it is the part of the stream
+interface holding a file descriptor, but it can also be a processing block or a
+transformation block terminated by a connection. A connection presents a
+server-side interface.
+
+
+2) Sequencing
+-------------
+
+Upon startup, listeners are instantiated by the configuration. When an incoming
+connection reaches a listening file descriptor, its read() callback calls the
+corresponding listener's accept() function which instantiates an initiator and
+in turn recursively calls upper layers' accept() callbacks until
+accept_session() is called. accept_session() instantiates a new session which
+starts protocol analysis via process_session(). When all protocol analysis is
+done, process_session() calls the connect() callback of the connector in order
+to get a connection.
--- /dev/null
+2013/11/20 - How hashing works internally in haproxy - maddalab@gmail.com
+
+This document describes how HAProxy implements both map-based and consistent
+hashing prior to version 1.5, as well as the motivation and tests behind the
+additional options introduced in version 1.5.
+
+A note on hashing in general: hash functions strive to have little
+correlation between input and output. The heart of a hash function is its
+mixing step. The behavior of the mixing step largely determines whether the
+hash function is collision-resistant. Collision-resistant hash functions
+are more likely to distribute load evenly.
+
+The purpose of the mixing function is to spread the effect of each message
+bit throughout all the bits of the internal state. Ideally every bit in the
+hash state is affected by every bit in the message. And we want to do that
+as quickly as possible simply for the sake of program performance. A
+function is said to satisfy the strict avalanche criterion if, whenever a
+single input bit is complemented (toggled between 0 and 1), each of the
+output bits should change with a probability of one half for an arbitrary
+selection of the remaining input bits.
+
+To guard against combinations of hash function and input that result in a
+high rate of collisions, haproxy applies an avalanche algorithm to the
+result of the hashing function. In version 1.4 and prior, avalanche is
+always applied when using the consistent hashing directive. It is intended
+to provide a good distribution for small input variations. The result
+is well suited to spreading over a 32-bit space with enough variation that
+a randomly picked number falls with equal probability before any server
+position, which is ideal for consistently hashed backends, a common use
+case for caches.
+
+In version 1.4 and prior, Haproxy implements the SDBM hash function.
+However, tests show that alternatives to SDBM have a better cache
+distribution on different hashing criteria. In additional tests involving
+alternative hash inputs and an option to trigger avalanche, we found that
+different algorithms perform better on different criteria. DJB2 performs
+well when hashing ASCII text and is a good choice when hashing on the Host
+header. Other alternatives perform better on numbers and are a good choice
+when using the source IP. The results also vary with the use of the
+avalanche flag.
+
+The results of the testing can be found under the tests folder. Here is
+a summary of the discussion of the results on one input criterion and the
+methodology used to generate the results.
+
+A note on the setup: when validating the results independently, one
+would want to avoid backend server counts that may skew the results. As
+an example, with DJB2 avoid 33 servers. Please see the implementations of
+the hashing functions, which can be found in the links under references.
+
+The following setup was used:
+
+(a) hash-type consistent/map-based
+(b) avalanche on/off
+(c) balance hdr(host)
+(d) 3 criteria for inputs
+ - ~ 10K requests, including duplicates
+ - ~ 46K requests: unique requests obtained from 1 MM requests
+ - ~ 250K requests, including duplicates
+(e) 17 servers in backend, all servers were assigned the same weight
+
+Results of the hashing were obtained across the servers by monitoring
+haproxy log files. Population standard deviation was used to evaluate the
+efficacy of the hashing algorithm. A lower standard deviation indicates
+a better distribution of load across the backends.
+
+On 10K requests, when using consistent hashing with avalanche on host
+headers, DJB2 significantly outperforms SDBM. Std dev on SDBM was 48.95
+and on DJB2 was 26.29. This relationship is inverted with avalanche
+disabled; however, DJB2 with avalanche enabled outperforms SDBM with
+avalanche disabled.
+
+On map-based hashing, SDBM outperforms DJB2 irrespective of the avalanche
+option. SDBM without avalanche is marginally better than with avalanche.
+DJB2 performs significantly worse with avalanche enabled.
+
+Summary: the results of the testing indicate that there is no single
+hashing algorithm that works best across all input criteria. It is necessary
+to supplement SDBM, which is generally the best option, with alternative
+algorithms that are better suited to particular inputs. Avalanche is not
+always applicable and may result in a less smooth distribution.
+
+References:
+Mixing Functions/Avalanche: http://home.comcast.net/~bretm/hash/3.html
+Hash Functions: http://www.cse.yorku.ca/~oz/hash.html
--- /dev/null
+TEST 3:
+
+ printf "GET /\r\nbla: truc\r\n\r\n"
+
+
+NO SPEEDUP :
+
+WHL: hdr_st=0x00, hdr_used=1 hdr_tail=0 hdr_last=1, h=0x8071080, lr=0x8071080, r=0x8071094
+WHL: hdr_st=0x01, hdr_used=1 hdr_tail=0 hdr_last=1, h=0x8071080, lr=0x8071080, r=0x8071094
+WHL: hdr_st=0x32, hdr_used=1 hdr_tail=0 hdr_last=1, h=0x8071080, lr=0x8071086, r=0x8071094
+WHL: hdr_st=0x03, hdr_used=2 hdr_tail=1 hdr_last=2, h=0x8071087, lr=0x8071087, r=0x8071094
+WHL: hdr_st=0x34, hdr_used=2 hdr_tail=1 hdr_last=2, h=0x8071087, lr=0x8071091, r=0x8071094
+WHL: hdr_st=0x03, hdr_used=3 hdr_tail=2 hdr_last=3, h=0x8071092, lr=0x8071092, r=0x8071094
+WHL: hdr_st=0x34, hdr_used=3 hdr_tail=2 hdr_last=3, h=0x8071092, lr=0x8071093, r=0x8071094
+WHL: hdr_st=0x06, hdr_used=3 hdr_tail=2 hdr_last=3, h=0x8071092, lr=0x8071093, r=0x8071094
+END: hdr_st=0x06, hdr_used=3 hdr_tail=2 hdr_last=3, h=0x8071092, lr=0x8071094, r=0x8071094
+=> 9 trans
+
+
+FULL SPEEDUP :
+
+WHL: hdr_st=0x00, hdr_used=1 hdr_tail=0 hdr_last=1, h=0x806a770, lr=0x806a770, r=0x806a784
+WHL: hdr_st=0x32, hdr_used=1 hdr_tail=0 hdr_last=1, h=0x806a770, lr=0x806a776, r=0x806a784
+WHL: hdr_st=0x03, hdr_used=2 hdr_tail=1 hdr_last=2, h=0x806a777, lr=0x806a777, r=0x806a784
+WHL: hdr_st=0x34, hdr_used=2 hdr_tail=1 hdr_last=2, h=0x806a777, lr=0x806a781, r=0x806a784
+WHL: hdr_st=0x26, hdr_used=3 hdr_tail=2 hdr_last=3, h=0x806a782, lr=0x806a783, r=0x806a784
+END: hdr_st=0x06, hdr_used=3 hdr_tail=2 hdr_last=3, h=0x806a782, lr=0x806a784, r=0x806a784
+=> 6 trans
+
+
+
+TEST 4:
+
+
+ printf "GET /\nbla: truc\n\n"
+
+
+NO SPEEDUP :
+
+WHL: hdr_st=0x00, hdr_used=1 hdr_tail=0 hdr_last=1, h=0x80750d0, lr=0x80750d0, r=0x80750e1
+WHL: hdr_st=0x01, hdr_used=1 hdr_tail=0 hdr_last=1, h=0x80750d0, lr=0x80750d0, r=0x80750e1
+WHL: hdr_st=0x02, hdr_used=1 hdr_tail=0 hdr_last=1, h=0x80750d0, lr=0x80750d5, r=0x80750e1
+WHL: hdr_st=0x03, hdr_used=2 hdr_tail=1 hdr_last=2, h=0x80750d6, lr=0x80750d6, r=0x80750e1
+WHL: hdr_st=0x04, hdr_used=2 hdr_tail=1 hdr_last=2, h=0x80750d6, lr=0x80750df, r=0x80750e1
+WHL: hdr_st=0x03, hdr_used=3 hdr_tail=2 hdr_last=3, h=0x80750e0, lr=0x80750e0, r=0x80750e1
+WHL: hdr_st=0x04, hdr_used=3 hdr_tail=2 hdr_last=3, h=0x80750e0, lr=0x80750e0, r=0x80750e1
+WHL: hdr_st=0x06, hdr_used=3 hdr_tail=2 hdr_last=3, h=0x80750e0, lr=0x80750e0, r=0x80750e1
+END: hdr_st=0x06, hdr_used=3 hdr_tail=2 hdr_last=3, h=0x80750e0, lr=0x80750e1, r=0x80750e1
+=> 9 trans
+
+
+FULL SPEEDUP :
+
+WHL: hdr_st=0x00, hdr_used=1 hdr_tail=0 hdr_last=1, h=0x8072010, lr=0x8072010, r=0x8072021
+WHL: hdr_st=0x03, hdr_used=2 hdr_tail=1 hdr_last=2, h=0x8072016, lr=0x8072016, r=0x8072021
+END: hdr_st=0x06, hdr_used=3 hdr_tail=2 hdr_last=3, h=0x8072020, lr=0x8072021, r=0x8072021
+=> 3 trans
+
+
+TEST 5:
+
+
+ printf "GET /\r\nbla: truc\r\n truc2\r\n\r\n"
+
+
+NO SPEEDUP :
+
+WHL: hdr_st=0x00, hdr_used=1 hdr_tail=0 hdr_last=1, h=0x8071080, lr=0x8071080, r=0x807109d
+WHL: hdr_st=0x01, hdr_used=1 hdr_tail=0 hdr_last=1, h=0x8071080, lr=0x8071080, r=0x807109d
+WHL: hdr_st=0x32, hdr_used=1 hdr_tail=0 hdr_last=1, h=0x8071080, lr=0x8071086, r=0x807109d
+WHL: hdr_st=0x03, hdr_used=2 hdr_tail=1 hdr_last=2, h=0x8071087, lr=0x8071087, r=0x807109d
+WHL: hdr_st=0x34, hdr_used=2 hdr_tail=1 hdr_last=2, h=0x8071087, lr=0x8071091, r=0x807109d
+WHL: hdr_st=0x05, hdr_used=2 hdr_tail=1 hdr_last=2, h=0x8071087, lr=0x8071092, r=0x807109d
+WHL: hdr_st=0x03, hdr_used=2 hdr_tail=1 hdr_last=2, h=0x8071087, lr=0x8071094, r=0x807109d
+WHL: hdr_st=0x34, hdr_used=2 hdr_tail=1 hdr_last=2, h=0x8071087, lr=0x807109a, r=0x807109d
+WHL: hdr_st=0x03, hdr_used=3 hdr_tail=2 hdr_last=3, h=0x807109b, lr=0x807109b, r=0x807109d
+WHL: hdr_st=0x34, hdr_used=3 hdr_tail=2 hdr_last=3, h=0x807109b, lr=0x807109c, r=0x807109d
+WHL: hdr_st=0x06, hdr_used=3 hdr_tail=2 hdr_last=3, h=0x807109b, lr=0x807109c, r=0x807109d
+END: hdr_st=0x06, hdr_used=3 hdr_tail=2 hdr_last=3, h=0x807109b, lr=0x807109d, r=0x807109d
+=> 12 trans
+
+
+FULL SPEEDUP :
+
+WHL: hdr_st=0x00, hdr_used=1 hdr_tail=0 hdr_last=1, h=0x806dfc0, lr=0x806dfc0, r=0x806dfdd
+WHL: hdr_st=0x32, hdr_used=1 hdr_tail=0 hdr_last=1, h=0x806dfc0, lr=0x806dfc6, r=0x806dfdd
+WHL: hdr_st=0x03, hdr_used=2 hdr_tail=1 hdr_last=2, h=0x806dfc7, lr=0x806dfc7, r=0x806dfdd
+WHL: hdr_st=0x34, hdr_used=2 hdr_tail=1 hdr_last=2, h=0x806dfc7, lr=0x806dfd1, r=0x806dfdd
+WHL: hdr_st=0x34, hdr_used=2 hdr_tail=1 hdr_last=2, h=0x806dfc7, lr=0x806dfda, r=0x806dfdd
+WHL: hdr_st=0x26, hdr_used=3 hdr_tail=2 hdr_last=3, h=0x806dfdb, lr=0x806dfdc, r=0x806dfdd
+END: hdr_st=0x06, hdr_used=3 hdr_tail=2 hdr_last=3, h=0x806dfdb, lr=0x806dfdd, r=0x806dfdd
+=> 7 trans
--- /dev/null
+2007/03/30 - Header storage in trees
+
+This documentation describes how to store headers in radix trees, providing
+fast access to any known position, while retaining the ability to grow/reduce
+any arbitrary header without having to recompute all positions.
+
+Principle :
+ We have a radix tree represented in an integer array, where each entry
+ holds the total number of bytes used by all headers whose position is
+ below it. This ensures that we can compute any header's position in
+ O(log(N)) where N is the number of headers.
+
+Example with N=16 :
+
+ +-----------------------+
+ | |
+ +-----------+ +-----------+
+ | | | |
+ +-----+ +-----+ +-----+ +-----+
+ | | | | | | | |
+ +--+ +--+ +--+ +--+ +--+ +--+ +--+ +--+
+ | | | | | | | | | | | | | | | |
+
+ 0 1 2 3 4 5 6 7 8 9 A B C D E F
+
+ To reach header 6, we have to compute hdr[0]+hdr[4]+hdr[6]
+
+ With this method, it becomes easy to grow any header and update the array.
+ To achieve this, we replace, one bit position after the other, all of the
+ low-order bits with a single 1 followed by zeroes, update the resulting
+ entry if its index is higher than the current one, and stop when the index
+ goes above the number of stored headers.
+
+ For instance, if we want to grow hdr[6], we proceed like this :
+
+ 6 = 0110 (BIN)
+
+ Let's consider the values to update :
+
+ (bit 0) : (0110 & ~0001) | 0001 = 0111 = 7 > 6 => update
+ (bit 1) : (0110 & ~0011) | 0010 = 0110 = 6 <= 6 => leave it
+ (bit 2) : (0110 & ~0111) | 0100 = 0100 = 4 <= 6 => leave it
+ (bit 3) : (0110 & ~1111) | 1000 = 1000 = 8 > 6 => update
+ (bit 4) : larger than array size, stop.
+
+
+It's easy to walk through the tree too. We only have one iteration per bit
+changing from X to the ancestor, and one per bit from the ancestor to Y.
+The ancestor is found while walking. To go from X to Y :
+
+ pos = pos(X)
+
+ while (Y != X) {
+ if (Y > X) {
+ // walk from Y to ancestor
+ pos += hdr[Y]
+ Y &= (Y - 1)
+ } else {
+ // walk from X to ancestor
+ pos -= hdr[X]
+ X &= (X - 1)
+ }
+ }
+
+However, it is no longer trivial to walk the tree linearly. We have to move
+from one known place to another known place, and a jump to the next entry
+costs the same as a jump to a random place.
+
+Other caveats :
+ - it is not possible to remove a header, it is only possible to empty it.
+ - it is not possible to insert a header, as that would imply a renumbering.
+ => this means that a "defrag" function is required. Headers should preferably
+ be added, then should be stuffed on top of destroyed ones, then only
+ inserted if absolutely required.
+
+
+When we have this, we can then focus on a 32-bit header descriptor which would
+look like this :
+
+{
+ unsigned line_len :13; /* total line length, including CRLF */
+ unsigned name_len :6; /* header name length, max 63 chars */
+ unsigned sp1 :5; /* max spaces before value : 31 */
+ unsigned sp2 :8; /* max spaces after value : 255 */
+}
+
+Example :
+
+ Connection: close \r\n
+ <---------+-----+-----+-------------> line_len
+ <-------->| | | name_len
+ <-----> | sp1
+ <-------------> sp2
+Rem:
+ - if there are more than 31 spaces before the value, the buffer will have to
+ be moved before being registered
+
+ - if there are more than 255 spaces after the value, the buffer will have to
+ be moved before being registered
+
+ - we can use the empty header name as an indicator for a deleted header
+
+ - it would be wise to format a new request before sending lots of random
+ spaces to the servers.
+
+ - normal clients do not send such crap, so those operations *may* reasonably
+ be more expensive than the rest provided that other ones are very fast.
+
+It would be handy to have the following macros :
+
+ hdr_eon(hdr) => end of name
+ hdr_sov(hdr) => start of value
+ hdr_eov(hdr) => end of value
+ hdr_vlen(hdr) => length of value
+ hdr_hlen(hdr) => total header length
+
+
+A 48-bit encoding would look like this :
+
+ Connection: close \r\n
+ <---------+------+---+--------------> eoh = 16 bits
+ <-------->| | | eon = 8 bits
+ <--------------->| | sov = 8 bits
+ <---> vlen = 16 bits
+
--- /dev/null
+2010/08/31 - HTTP Cookies - Theory and reality
+
+HTTP cookies are not uniformly supported across browsers, which makes it very
+hard to build a widely compatible implementation. At least four conflicting
+documents exist to describe how cookies should be handled, and browsers
+generally don't respect any but a sensibly selected mix of them :
+
+ - Netscape's original spec (also mirrored at Curl's site among others) :
+ http://web.archive.org/web/20070805052634/http://wp.netscape.com/newsref/std/cookie_spec.html
+ http://curl.haxx.se/rfc/cookie_spec.html
+
+ Issues: uses an unquoted "Expires" field that includes a comma.
+
+ - RFC 2109 :
+ http://www.ietf.org/rfc/rfc2109.txt
+
+ Issues: specifies use of "Max-Age" (not universally implemented) and does
+ not talk about "Expires" (generally supported). References quoted
+ strings, not generally supported (eg: MSIE). Stricter than browsers
+ about domains. Ambiguous about allowed spaces in values and attrs.
+
+ - RFC 2965 :
+ http://www.ietf.org/rfc/rfc2965.txt
+
+ Issues: same as RFC2109 + describes Set-Cookie2 which only Opera supports.
+
+ - Current internet draft :
+ https://datatracker.ietf.org/wg/httpstate/charter/
+
+ Issues: as of -p10, does not explain how the Set-Cookie2 header must be
+ emitted/handled, while suggesting a stricter approach for Cookie.
+ Documents reality and as such reintroduces the widely used unquoted
+ "Expires" attribute with its error-prone syntax. States that a
+ server should not emit more than one cookie per Set-Cookie header,
+ which is incompatible with HTTP which says that multiple headers
+ are allowed only if they can be folded.
+
+See also the following URL for a browser feature matrix :
+ http://code.google.com/p/browsersec/wiki/Part2#Same-origin_policy_for_cookies
+
+In short, MSIE and Safari support neither quoted strings nor max-age, which
+makes it mandatory to keep sending an unquoted Expires value (though maybe
+the day of week could be omitted). Only Safari supports comma-separated
+lists of Set-Cookie headers. Support for cross-domain cookies is not uniform
+either.
+
--- /dev/null
+Many interesting RFC and drafts linked to from this site :
+
+ http://www.web-cache.com/Writings/protocols-standards.html
+
+
--- /dev/null
+--- Relevant portions of RFC2616 ---
+
+OCTET = <any 8-bit sequence of data>
+CHAR = <any US-ASCII character (octets 0 - 127)>
+UPALPHA = <any US-ASCII uppercase letter "A".."Z">
+LOALPHA = <any US-ASCII lowercase letter "a".."z">
+ALPHA = UPALPHA | LOALPHA
+DIGIT = <any US-ASCII digit "0".."9">
+CTL = <any US-ASCII control character (octets 0 - 31) and DEL (127)>
+CR = <US-ASCII CR, carriage return (13)>
+LF = <US-ASCII LF, linefeed (10)>
+SP = <US-ASCII SP, space (32)>
+HT = <US-ASCII HT, horizontal-tab (9)>
+<"> = <US-ASCII double-quote mark (34)>
+CRLF = CR LF
+LWS = [CRLF] 1*( SP | HT )
+TEXT = <any OCTET except CTLs, but including LWS>
+HEX = "A" | "B" | "C" | "D" | "E" | "F"
+ | "a" | "b" | "c" | "d" | "e" | "f" | DIGIT
+separators = "(" | ")" | "<" | ">" | "@"
+ | "," | ";" | ":" | "\" | <">
+ | "/" | "[" | "]" | "?" | "="
+ | "{" | "}" | SP | HT
+token = 1*<any CHAR except CTLs or separators>
+
+quoted-pair = "\" CHAR
+ctext = <any TEXT excluding "(" and ")">
+qdtext = <any TEXT except <">>
+quoted-string = ( <"> *(qdtext | quoted-pair ) <"> )
+comment = "(" *( ctext | quoted-pair | comment ) ")"
+
+
+
+
+
+4 HTTP Message
+4.1 Message Types
+
+HTTP messages consist of requests from client to server and responses from
+server to client. Request (section 5) and Response (section 6) messages use the
+generic message format of RFC 822 [9] for transferring entities (the payload of
+the message). Both types of message consist of :
+
+ - a start-line
+ - zero or more header fields (also known as "headers")
+ - an empty line (i.e., a line with nothing preceding the CRLF) indicating the
+ end of the header fields
+ - and possibly a message-body.
+
+
+HTTP-message = Request | Response
+
+start-line = Request-Line | Status-Line
+generic-message = start-line
+ *(message-header CRLF)
+ CRLF
+ [ message-body ]
+
+In the interest of robustness, servers SHOULD ignore any empty line(s) received
+where a Request-Line is expected. In other words, if the server is reading the
+protocol stream at the beginning of a message and receives a CRLF first, it
+should ignore the CRLF.
+
+
+4.2 Message headers
+
+- Each header field consists of a name followed by a colon (":") and the field
+ value.
+- Field names are case-insensitive.
+- The field value MAY be preceded by any amount of LWS, though a single SP is
+ preferred.
+- Header fields can be extended over multiple lines by preceding each extra
+ line with at least one SP or HT.
+
+
+message-header = field-name ":" [ field-value ]
+field-name = token
+field-value = *( field-content | LWS )
+field-content = <the OCTETs making up the field-value and consisting of
+ either *TEXT or combinations of token, separators, and
+ quoted-string>
+
+
+The field-content does not include any leading or trailing LWS occurring before
+the first non-whitespace character of the field-value or after the last
+non-whitespace character of the field-value. Such leading or trailing LWS MAY
+be removed without changing the semantics of the field value. Any LWS that
+occurs between field-content MAY be replaced with a single SP before
+interpreting the field value or forwarding the message downstream.
+
+
+=> header format = 1*(CHAR & !ctl & !sep) ":" *(OCTET & (!ctl | LWS))
+=> header-matching regexes apply to field-content, and may use field-value
+   as a working area (though preferably after the first SP).
+
+(19.3) The line terminator for message-header fields is the sequence CRLF.
+However, we recommend that applications, when parsing such headers, recognize
+a single LF as a line terminator and ignore the leading CR.
+
+
+
+
+
+message-body = entity-body
+ | <entity-body encoded as per Transfer-Encoding>
+
+
+
+5 Request
+
+Request = Request-Line
+ *(( general-header
+ | request-header
+ | entity-header ) CRLF)
+ CRLF
+ [ message-body ]
+
+
+
+5.1 Request line
+
+The elements are separated by SP characters. No CR or LF is allowed except in
+the final CRLF sequence.
+
+Request-Line = Method SP Request-URI SP HTTP-Version CRLF
+
+(19.3) Clients SHOULD be tolerant in parsing the Status-Line and servers
+tolerant when parsing the Request-Line. In particular, they SHOULD accept any
+amount of SP or HT characters between fields, even though only a single SP is
+required.
+
+4.5 General headers
+Apply to MESSAGE.
+
+general-header = Cache-Control
+ | Connection
+ | Date
+ | Pragma
+ | Trailer
+ | Transfer-Encoding
+ | Upgrade
+ | Via
+ | Warning
+
+General-header field names can be extended reliably only in combination with a
+change in the protocol version. However, new or experimental header fields may
+be given the semantics of general header fields if all parties in the
+communication recognize them to be general-header fields. Unrecognized header
+fields are treated as entity-header fields.
+
+
+
+
+5.3 Request Header Fields
+
+The request-header fields allow the client to pass additional information about
+the request, and about the client itself, to the server. These fields act as
+request modifiers, with semantics equivalent to the parameters on a programming
+language method invocation.
+
+request-header = Accept
+ | Accept-Charset
+ | Accept-Encoding
+ | Accept-Language
+ | Authorization
+ | Expect
+ | From
+ | Host
+ | If-Match
+ | If-Modified-Since
+ | If-None-Match
+ | If-Range
+ | If-Unmodified-Since
+ | Max-Forwards
+ | Proxy-Authorization
+ | Range
+ | Referer
+ | TE
+ | User-Agent
+
+Request-header field names can be extended reliably only in combination with a
+change in the protocol version. However, new or experimental header fields MAY
+be given the semantics of request-header fields if all parties in the
+communication recognize them to be request-header fields. Unrecognized header
+fields are treated as entity-header fields.
+
+
+
+7.1 Entity header fields
+
+Entity-header fields define metainformation about the entity-body or, if no
+body is present, about the resource identified by the request. Some of this
+metainformation is OPTIONAL; some might be REQUIRED by portions of this
+specification.
+
+entity-header = Allow
+ | Content-Encoding
+ | Content-Language
+ | Content-Length
+ | Content-Location
+ | Content-MD5
+ | Content-Range
+ | Content-Type
+ | Expires
+ | Last-Modified
+ | extension-header
+extension-header = message-header
+
+The extension-header mechanism allows additional entity-header fields to be
+defined without changing the protocol, but these fields cannot be assumed to be
+recognizable by the recipient. Unrecognized header fields SHOULD be ignored by
+the recipient and MUST be forwarded by transparent proxies.
+
+----------------------------------
+
+The format of Request-URI is defined by RFC3986 :
+
+ URI = scheme ":" hier-part [ "?" query ] [ "#" fragment ]
+
+ hier-part = "//" authority path-abempty
+ / path-absolute
+ / path-rootless
+ / path-empty
+
+ URI-reference = URI / relative-ref
+
+ absolute-URI = scheme ":" hier-part [ "?" query ]
+
+ relative-ref = relative-part [ "?" query ] [ "#" fragment ]
+
+ relative-part = "//" authority path-abempty
+ / path-absolute
+ / path-noscheme
+ / path-empty
+
+ scheme = ALPHA *( ALPHA / DIGIT / "+" / "-" / "." )
+
+ authority = [ userinfo "@" ] host [ ":" port ]
+ userinfo = *( unreserved / pct-encoded / sub-delims / ":" )
+ host = IP-literal / IPv4address / reg-name
+ port = *DIGIT
+
+ IP-literal = "[" ( IPv6address / IPvFuture ) "]"
+
+ IPvFuture = "v" 1*HEXDIG "." 1*( unreserved / sub-delims / ":" )
+
+ IPv6address = 6( h16 ":" ) ls32
+ / "::" 5( h16 ":" ) ls32
+ / [ h16 ] "::" 4( h16 ":" ) ls32
+ / [ *1( h16 ":" ) h16 ] "::" 3( h16 ":" ) ls32
+ / [ *2( h16 ":" ) h16 ] "::" 2( h16 ":" ) ls32
+ / [ *3( h16 ":" ) h16 ] "::" h16 ":" ls32
+ / [ *4( h16 ":" ) h16 ] "::" ls32
+ / [ *5( h16 ":" ) h16 ] "::" h16
+ / [ *6( h16 ":" ) h16 ] "::"
+
+ h16 = 1*4HEXDIG
+ ls32 = ( h16 ":" h16 ) / IPv4address
+ IPv4address = dec-octet "." dec-octet "." dec-octet "." dec-octet
+ dec-octet = DIGIT ; 0-9
+ / %x31-39 DIGIT ; 10-99
+ / "1" 2DIGIT ; 100-199
+ / "2" %x30-34 DIGIT ; 200-249
+ / "25" %x30-35 ; 250-255
+
+ reg-name = *( unreserved / pct-encoded / sub-delims )
+
+ path = path-abempty ; begins with "/" or is empty
+ / path-absolute ; begins with "/" but not "//"
+ / path-noscheme ; begins with a non-colon segment
+ / path-rootless ; begins with a segment
+ / path-empty ; zero characters
+
+ path-abempty = *( "/" segment )
+ path-absolute = "/" [ segment-nz *( "/" segment ) ]
+ path-noscheme = segment-nz-nc *( "/" segment )
+ path-rootless = segment-nz *( "/" segment )
+ path-empty = 0<pchar>
+
+ segment = *pchar
+ segment-nz = 1*pchar
+ segment-nz-nc = 1*( unreserved / pct-encoded / sub-delims / "@" )
+ ; non-zero-length segment without any colon ":"
+
+ pchar = unreserved / pct-encoded / sub-delims / ":" / "@"
+
+ query = *( pchar / "/" / "?" )
+
+ fragment = *( pchar / "/" / "?" )
+
+ pct-encoded = "%" HEXDIG HEXDIG
+
+ unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"
+ reserved = gen-delims / sub-delims
+ gen-delims = ":" / "/" / "?" / "#" / "[" / "]" / "@"
+ sub-delims = "!" / "$" / "&" / "'" / "(" / ")"
+ / "*" / "+" / "," / ";" / "="
+
+=> so the list of allowed characters in a URI is :
+
+   uri-char = unreserved / gen-delims / sub-delims / "%"
+            = ALPHA / DIGIT / "-" / "." / "_" / "~"
+            / ":" / "/" / "?" / "#" / "[" / "]" / "@"
+            / "!" / "$" / "&" / "'" / "(" / ")"
+            / "*" / "+" / "," / ";" / "=" / "%"
+
+Note that non-ASCII characters are forbidden! Spaces and CTLs are forbidden
+too. Unfortunately, some products such as Apache allow such characters :-/
+
+---- The correct way to do it ----
+
+- one http_session
+ It is basically any transport session on which we talk HTTP. It may be TCP,
+ SSL over TCP, etc... It knows a way to talk to the client, either the socket
+ file descriptor or a direct access to the client-side buffer. It should hold
+ information about the last accessed server so that we can guarantee that the
+ same server can be used during a whole session if needed. A first version
+ without optimal support for HTTP pipelining will have the client buffers tied
+ to the http_session. It may turn out that this is not sufficient for full
+ pipelining, but this will need further study. The link from the buffers to
+ the backend should be managed by the http transaction (http_txn), provided
+ that they are serialized. Each http_session has 0 to N http_txn. Each
+ http_txn belongs to one and only one http_session.
+
+- each http_txn has 1 request message (http_req), and 0 or 1 response message
+ (http_rtr). Each of them has one and only one http_txn. An http_txn holds
+ information such as the HTTP method, the URI, the HTTP version, the
+ transfer-encoding, the HTTP status, the authorization, the req and rtr
+ content-length, the timers, logs, etc... The backend and server which process
+ the request are also known from the http_txn.
+
+- both request and response messages hold header and parsing information, such
+ as the parsing state, start of headers, start of message, captures, etc...
+
--- /dev/null
+#FIG 3.2
+Landscape
+Center
+Metric
+A4
+100.00
+Single
+-2
+1200 2
+6 720 8325 1080 9135
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 990 8765 765 8765
+2 2 0 2 0 6 53 0 20 0.000 0 0 -1 0 0 5
+ 765 8415 990 8415 990 9090 765 9090 765 8415
+4 1 0 50 0 14 10 0.0000 4 90 90 880 8967 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 878 8640 N\001
+-6
+6 1170 8325 1530 9135
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 1440 8765 1215 8765
+2 2 0 2 0 7 53 0 20 0.000 0 0 -1 0 0 5
+ 1215 8415 1440 8415 1440 9090 1215 9090 1215 8415
+4 1 0 50 0 14 10 0.0000 4 90 90 1330 8967 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 1328 8640 N\001
+-6
+6 1620 8325 1980 9135
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 1890 8765 1665 8765
+2 2 0 2 0 4 53 0 20 0.000 0 0 -1 0 0 5
+ 1665 8415 1890 8415 1890 9090 1665 9090 1665 8415
+4 1 0 50 0 14 10 0.0000 4 90 90 1780 8967 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 1778 8640 N\001
+-6
+6 2700 8055 3420 9225
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 3150 8675 2925 8675
+2 2 0 2 0 2 53 0 20 0.000 0 0 -1 0 0 5
+ 2925 8325 3150 8325 3150 9000 2925 9000 2925 8325
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 6
+ 1 1 1.00 60.00 120.00
+ 3150 8505 3375 8505 3375 8100 2700 8100 2700 8505 2925 8505
+ 0.000 1.000 1.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 6
+ 1 1 1.00 60.00 120.00
+ 3150 8820 3375 8820 3375 9225 2700 9225 2700 8820 2925 8820
+ 0.000 1.000 1.000 1.000 1.000 0.000
+4 1 0 50 0 14 10 0.0000 4 90 90 3040 8877 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 3038 8550 N\001
+-6
+6 2115 8100 2655 9180
+6 2115 8100 2655 9180
+6 2295 8235 2655 9045
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 2565 8675 2340 8675
+2 2 0 2 0 6 53 0 20 0.000 0 0 -1 0 0 5
+ 2340 8325 2565 8325 2565 9000 2340 9000 2340 8325
+4 1 0 50 0 14 10 0.0000 4 90 90 2455 8877 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 2453 8550 N\001
+-6
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 2565 8325 2115 8325
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 2115 9000 2565 9000
+2 2 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 5
+ 2115 8100 2565 8100 2565 9180 2115 9180 2115 8100
+4 1 0 50 0 14 12 0.0000 4 120 105 2250 8730 L\001
+-6
+-6
+6 3420 8100 4095 9225
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 3870 8675 3645 8675
+2 2 0 2 0 7 50 0 -1 0.000 0 0 -1 0 0 5
+ 3645 8325 3870 8325 3870 9000 3645 9000 3645 8325
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 6
+ 1 1 1.00 60.00 120.00
+ 3870 8505 4095 8505 4095 8100 3420 8100 3420 8505 3645 8505
+ 0.000 1.000 1.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 6
+ 1 1 1.00 60.00 120.00
+ 3870 8820 4095 8820 4095 9225 3420 9225 3420 8820 3645 8820
+ 0.000 1.000 1.000 1.000 1.000 0.000
+4 1 0 50 0 14 10 0.0000 4 90 90 3760 8877 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 3758 8550 N\001
+-6
+6 4275 8190 4725 9090
+2 2 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 5
+ 4275 8190 4725 8190 4725 9090 4275 9090 4275 8190
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 4275 8640 4725 8640
+4 1 0 50 0 16 24 0.0000 4 285 270 4500 8550 N\001
+4 1 0 50 0 16 24 0.0000 4 285 240 4500 9000 P\001
+-6
+6 5175 8115 5655 8595
+2 2 0 2 0 7 50 0 -1 0.000 0 0 -1 0 0 5
+ 5190 8130 5640 8130 5640 8580 5190 8580 5190 8130
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 5640 8355 5190 8355
+4 1 0 50 0 16 9 0.0000 4 90 90 5415 8490 P\001
+4 1 0 50 0 16 9 0.0000 4 90 90 5415 8310 N\001
+-6
+6 4995 8655 5925 9135
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 5010 8895 5910 8895
+2 2 0 2 0 7 50 0 -1 0.000 0 0 -1 0 0 5
+ 5010 8670 5910 8670 5910 9120 5010 9120 5010 8670
+4 1 0 50 0 14 10 0.0000 4 105 630 5460 8850 list *N\001
+4 1 0 50 0 14 10 0.0000 4 105 630 5460 9075 list *P\001
+-6
+6 270 8325 630 9135
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 540 8765 315 8765
+2 2 0 2 0 2 53 0 20 0.000 0 0 -1 0 0 5
+ 315 8415 540 8415 540 9090 315 9090 315 8415
+4 1 0 50 0 14 10 0.0000 4 90 90 430 8967 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 428 8640 N\001
+-6
+6 4860 3420 5220 4230
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 5130 3860 4905 3860
+2 2 0 2 0 6 53 0 20 0.000 0 0 -1 0 0 5
+ 4905 3510 5130 3510 5130 4185 4905 4185 4905 3510
+4 1 0 50 0 14 10 0.0000 4 90 90 5020 4062 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 5018 3735 N\001
+-6
+6 5850 3420 6210 4230
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 6120 3860 5895 3860
+2 2 0 2 0 7 53 0 20 0.000 0 0 -1 0 0 5
+ 5895 3510 6120 3510 6120 4185 5895 4185 5895 3510
+4 1 0 50 0 14 10 0.0000 4 90 90 6010 4062 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 6008 3735 N\001
+-6
+6 3960 3420 4320 4230
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 4230 3860 4005 3860
+2 2 0 2 0 2 53 0 20 0.000 0 0 -1 0 0 5
+ 4005 3510 4230 3510 4230 4185 4005 4185 4005 3510
+4 1 0 50 0 14 10 0.0000 4 90 90 4120 4062 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 4118 3735 N\001
+-6
+6 4185 5580 4545 6390
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 4455 6020 4230 6020
+2 2 0 2 0 4 53 0 20 0.000 0 0 -1 0 0 5
+ 4230 5670 4455 5670 4455 6345 4230 6345 4230 5670
+4 1 0 50 0 14 10 0.0000 4 90 90 4345 6222 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 4343 5895 N\001
+-6
+6 4905 5445 5445 6525
+6 4905 5445 5445 6525
+6 5085 5580 5445 6390
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 5355 6020 5130 6020
+2 2 0 2 0 6 53 0 20 0.000 0 0 -1 0 0 5
+ 5130 5670 5355 5670 5355 6345 5130 6345 5130 5670
+4 1 0 50 0 14 10 0.0000 4 90 90 5245 6222 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 5243 5895 N\001
+-6
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 5355 5670 4905 5670
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 4905 6345 5355 6345
+2 2 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 5
+ 4905 5445 5355 5445 5355 6525 4905 6525 4905 5445
+4 1 0 50 0 14 12 0.0000 4 120 105 5040 6075 L\001
+-6
+-6
+6 5805 5445 6345 6525
+6 5805 5445 6345 6525
+6 5985 5580 6345 6390
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 6255 6020 6030 6020
+2 2 0 2 0 6 53 0 20 0.000 0 0 -1 0 0 5
+ 6030 5670 6255 5670 6255 6345 6030 6345 6030 5670
+4 1 0 50 0 14 10 0.0000 4 90 90 6145 6222 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 6143 5895 N\001
+-6
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 6255 5670 5805 5670
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 5805 6345 6255 6345
+2 2 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 5
+ 5805 5445 6255 5445 6255 6525 5805 6525 5805 5445
+4 1 0 50 0 14 12 0.0000 4 120 105 5940 6075 L\001
+-6
+-6
+6 6705 5445 7245 6525
+6 6705 5445 7245 6525
+6 6885 5580 7245 6390
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 7155 6020 6930 6020
+2 2 0 2 0 6 53 0 20 0.000 0 0 -1 0 0 5
+ 6930 5670 7155 5670 7155 6345 6930 6345 6930 5670
+4 1 0 50 0 14 10 0.0000 4 90 90 7045 6222 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 7043 5895 N\001
+-6
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 7155 5670 6705 5670
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 6705 6345 7155 6345
+2 2 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 5
+ 6705 5445 7155 5445 7155 6525 6705 6525 6705 5445
+4 1 0 50 0 14 12 0.0000 4 120 105 6840 6075 L\001
+-6
+-6
+6 450 5580 810 6390
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 720 6020 495 6020
+2 2 0 2 0 4 53 0 20 0.000 0 0 -1 0 0 5
+ 495 5670 720 5670 720 6345 495 6345 495 5670
+4 1 0 50 0 14 10 0.0000 4 90 90 610 6222 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 608 5895 N\001
+-6
+6 1170 5445 1710 6525
+6 1170 5445 1710 6525
+6 1350 5580 1710 6390
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 1620 6020 1395 6020
+2 2 0 2 0 6 53 0 20 0.000 0 0 -1 0 0 5
+ 1395 5670 1620 5670 1620 6345 1395 6345 1395 5670
+4 1 0 50 0 14 10 0.0000 4 90 90 1510 6222 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 1508 5895 N\001
+-6
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 1620 5670 1170 5670
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 1170 6345 1620 6345
+2 2 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 5
+ 1170 5445 1620 5445 1620 6525 1170 6525 1170 5445
+4 1 0 50 0 14 12 0.0000 4 120 105 1305 6075 L\001
+-6
+-6
+6 2070 5445 2610 6525
+6 2070 5445 2610 6525
+6 2250 5580 2610 6390
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 2520 6020 2295 6020
+2 2 0 2 0 6 53 0 20 0.000 0 0 -1 0 0 5
+ 2295 5670 2520 5670 2520 6345 2295 6345 2295 5670
+4 1 0 50 0 14 10 0.0000 4 90 90 2410 6222 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 2408 5895 N\001
+-6
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 2520 5670 2070 5670
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 2070 6345 2520 6345
+2 2 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 5
+ 2070 5445 2520 5445 2520 6525 2070 6525 2070 5445
+4 1 0 50 0 14 12 0.0000 4 120 105 2205 6075 L\001
+-6
+-6
+6 2970 5445 3510 6525
+6 2970 5445 3510 6525
+6 3150 5580 3510 6390
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 3420 6020 3195 6020
+2 2 0 2 0 6 53 0 20 0.000 0 0 -1 0 0 5
+ 3195 5670 3420 5670 3420 6345 3195 6345 3195 5670
+4 1 0 50 0 14 10 0.0000 4 90 90 3310 6222 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 3308 5895 N\001
+-6
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 3420 5670 2970 5670
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 2970 6345 3420 6345
+2 2 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 5
+ 2970 5445 3420 5445 3420 6525 2970 6525 2970 5445
+4 1 0 50 0 14 12 0.0000 4 120 105 3105 6075 L\001
+-6
+-6
+6 720 3420 1080 4230
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 990 3860 765 3860
+2 2 0 2 0 2 53 0 20 0.000 0 0 -1 0 0 5
+ 765 3510 990 3510 990 4185 765 4185 765 3510
+4 1 0 50 0 14 10 0.0000 4 90 90 880 4062 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 878 3735 N\001
+-6
+6 2700 3420 3060 4230
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 2970 3860 2745 3860
+2 2 0 2 0 6 53 0 20 0.000 0 0 -1 0 0 5
+ 2745 3510 2970 3510 2970 4185 2745 4185 2745 3510
+4 1 0 50 0 14 10 0.0000 4 90 90 2860 4062 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 2858 3735 N\001
+-6
+6 1620 3465 1935 4230
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 1890 3860 1665 3860
+2 2 0 2 0 7 50 0 -1 0.000 0 0 -1 0 0 5
+ 1665 3510 1890 3510 1890 4185 1665 4185 1665 3510
+4 1 0 50 0 14 10 0.0000 4 90 90 1780 4062 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 1778 3735 N\001
+-6
+6 10485 3330 11025 4410
+6 10665 3465 11025 4275
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 10935 3905 10710 3905
+2 2 0 2 0 6 53 0 20 0.000 0 0 -1 0 0 5
+ 10710 3555 10935 3555 10935 4230 10710 4230 10710 3555
+4 1 0 50 0 14 10 0.0000 4 90 90 10825 4107 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 10823 3780 N\001
+-6
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 10935 3555 10485 3555
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 10485 4230 10935 4230
+2 2 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 5
+ 10485 3330 10935 3330 10935 4410 10485 4410 10485 3330
+4 1 0 50 0 14 12 0.0000 4 120 105 10620 3960 L\001
+-6
+6 7110 3105 7650 4185
+6 7110 3105 7650 4185
+6 7290 3240 7650 4050
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 7560 3680 7335 3680
+2 2 0 2 0 6 53 0 20 0.000 0 0 -1 0 0 5
+ 7335 3330 7560 3330 7560 4005 7335 4005 7335 3330
+4 1 0 50 0 14 10 0.0000 4 90 90 7450 3882 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 7448 3555 N\001
+-6
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 7560 3330 7110 3330
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 7110 4005 7560 4005
+2 2 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 5
+ 7110 3105 7560 3105 7560 4185 7110 4185 7110 3105
+4 1 0 50 0 14 12 0.0000 4 120 105 7245 3735 L\001
+-6
+-6
+6 8010 3105 8550 4185
+6 8010 3105 8550 4185
+6 8190 3240 8550 4050
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 8460 3680 8235 3680
+2 2 0 2 0 6 53 0 20 0.000 0 0 -1 0 0 5
+ 8235 3330 8460 3330 8460 4005 8235 4005 8235 3330
+4 1 0 50 0 14 10 0.0000 4 90 90 8350 3882 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 8348 3555 N\001
+-6
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 8460 3330 8010 3330
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 8010 4005 8460 4005
+2 2 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 5
+ 8010 3105 8460 3105 8460 4185 8010 4185 8010 3105
+4 1 0 50 0 14 12 0.0000 4 120 105 8145 3735 L\001
+-6
+-6
+6 9315 990 12195 2160
+6 9675 1080 10035 1890
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 9945 1520 9720 1520
+2 2 0 2 0 2 53 0 20 0.000 0 0 -1 0 0 5
+ 9720 1170 9945 1170 9945 1845 9720 1845 9720 1170
+4 1 0 50 0 14 10 0.0000 4 90 90 9835 1722 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 9833 1395 N\001
+-6
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 10935 1520 10710 1520
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 11925 1520 11700 1520
+2 2 0 2 0 7 52 0 20 0.000 0 0 -1 0 0 5
+ 10710 1170 10935 1170 10935 1845 10710 1845 10710 1170
+2 2 0 2 0 6 52 0 20 0.000 0 0 -1 0 0 5
+ 11700 1170 11925 1170 11925 1845 11700 1845 11700 1170
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 9945 1350 10665 1350
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 10935 1350 11655 1350
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 8
+ 1 1 1.00 60.00 120.00
+ 11925 1350 12105 1350 12195 1350 12195 990 9315 990 9315 1350
+ 9495 1350 9675 1350
+ 0.000 1.000 1.000 1.000 1.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 8
+ 1 1 1.00 60.00 120.00
+ 9675 1710 9495 1710 9315 1710 9405 2160 12195 2160 12195 1710
+ 12105 1710 11925 1710
+ 0.000 1.000 1.000 1.000 1.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 11655 1710 10935 1710
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 10665 1710 9945 1710
+ 0.000 0.000
+4 1 0 50 0 14 10 0.0000 4 90 90 10825 1722 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 10823 1395 N\001
+4 1 0 50 0 14 10 0.0000 4 90 90 11815 1722 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 11813 1395 N\001
+-6
+6 6345 1080 6705 1890
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 6615 1520 6390 1520
+2 2 0 2 0 2 53 0 20 0.000 0 0 -1 0 0 5
+ 6390 1170 6615 1170 6615 1845 6390 1845 6390 1170
+4 1 0 50 0 14 10 0.0000 4 90 90 6505 1722 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 6503 1395 N\001
+-6
+6 7335 1080 7695 1890
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 7605 1520 7380 1520
+2 2 0 2 0 6 52 0 20 0.000 0 0 -1 0 0 5
+ 7380 1170 7605 1170 7605 1845 7380 1845 7380 1170
+4 1 0 50 0 14 10 0.0000 4 90 90 7495 1722 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 7493 1395 N\001
+-6
+6 8325 1080 8685 1890
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 8595 1520 8370 1520
+2 2 0 2 0 7 50 0 -1 0.000 0 0 -1 0 0 5
+ 8370 1170 8595 1170 8595 1845 8370 1845 8370 1170
+4 1 0 50 0 14 10 0.0000 4 90 90 8485 1722 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 8483 1395 N\001
+-6
+6 3870 1215 4185 1980
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 4140 1610 3915 1610
+2 2 0 2 0 2 53 0 20 0.000 0 0 -1 0 0 5
+ 3915 1260 4140 1260 4140 1935 3915 1935 3915 1260
+4 1 0 50 0 14 10 0.0000 4 90 90 4030 1812 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 4028 1485 N\001
+-6
+6 4770 1215 5085 1980
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 5040 1610 4815 1610
+2 2 0 2 0 7 50 0 -1 0.000 0 0 -1 0 0 5
+ 4815 1260 5040 1260 5040 1935 4815 1935 4815 1260
+4 1 0 50 0 14 10 0.0000 4 90 90 4930 1812 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 4928 1485 N\001
+-6
+6 2205 990 2925 2160
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 2655 1610 2430 1610
+2 2 0 2 0 2 53 0 20 0.000 0 0 -1 0 0 5
+ 2430 1260 2655 1260 2655 1935 2430 1935 2430 1260
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 6
+ 1 1 1.00 60.00 120.00
+ 2655 1440 2880 1440 2880 1035 2205 1035 2205 1440 2430 1440
+ 0.000 1.000 1.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 6
+ 1 1 1.00 60.00 120.00
+ 2655 1755 2880 1755 2880 2160 2205 2160 2205 1755 2430 1755
+ 0.000 1.000 1.000 1.000 1.000 0.000
+4 1 0 50 0 14 10 0.0000 4 90 90 2545 1812 P\001
+4 1 0 50 0 14 10 0.0000 4 90 90 2543 1485 N\001
+-6
+6 525 1350 1455 1830
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 0 0 2
+ 540 1590 1440 1590
+2 2 0 2 0 7 50 0 -1 0.000 0 0 -1 0 0 5
+ 540 1365 1440 1365 1440 1815 540 1815 540 1365
+4 1 0 50 0 14 10 0.0000 4 105 630 990 1545 list *N\001
+4 1 0 50 0 14 10 0.0000 4 105 630 990 1770 list *P\001
+-6
+2 4 0 1 0 7 50 0 -1 0.000 0 0 7 0 0 5
+ 3330 2475 6435 2475 6435 4500 3330 4500 3330 2475
+2 4 0 1 0 7 50 0 -1 0.000 0 0 7 0 0 5
+ 4050 4725 7605 4725 7605 6840 4050 6840 4050 4725
+2 4 0 1 0 7 50 0 -1 0.000 0 0 7 0 0 5
+ 12600 6840 12600 4725 7785 4725 7785 6840 12600 6840
+2 4 0 1 0 7 50 0 -1 0.000 0 0 7 0 0 5
+ 315 4725 3870 4725 3870 6840 315 6840 315 4725
+2 4 0 1 0 7 50 0 -1 0.000 0 0 7 0 0 5
+ 3150 4500 315 4500 315 2475 3150 2475 3150 4500
+2 4 0 1 0 7 50 0 -1 0.000 0 0 7 0 0 5
+ 6660 2475 8910 2475 8910 4500 6660 4500 6660 2475
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 10035 3375 10485 3330
+2 1 0 1 0 7 50 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 10080 3735 10485 3555
+2 4 0 1 0 7 50 0 -1 0.000 0 0 7 0 0 5
+ 9135 2475 12285 2475 12285 4500 9135 4500 9135 2475
+2 4 0 1 0 7 50 0 -1 0.000 0 0 7 0 0 5
+ 9270 270 12285 270 12285 2250 9270 2250 9270 270
+2 4 0 1 0 7 50 0 -1 0.000 0 0 7 0 0 5
+ 5760 270 9045 270 9045 2250 5760 2250 5760 270
+2 4 0 1 0 7 50 0 -1 0.000 0 0 7 0 0 5
+ 3465 270 5535 270 5535 2250 3465 2250 3465 270
+2 4 0 1 0 7 50 0 -1 0.000 0 0 7 0 0 5
+ 1845 270 3240 270 3240 2250 1845 2250 1845 270
+2 4 0 1 0 7 50 0 -1 0.000 0 0 7 0 0 5
+ 315 270 1620 270 1620 2250 315 2250 315 270
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 4230 3690 4860 3690
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 4860 4050 4230 4050
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 7
+ 1 1 1.00 60.00 120.00
+ 5130 3690 5580 3690 5580 3240 3600 3240 3600 3690 3780 3690
+ 3960 3690
+ 0.000 1.000 1.000 1.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 7
+ 1 1 1.00 60.00 120.00
+ 3960 4050 3780 4050 3600 4050 3600 4410 5580 4410 5580 4050
+ 5130 4050
+ 0.000 1.000 1.000 1.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 6261 5805 6711 5670
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 4461 5805 4911 5670
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 5358 5805 5808 5670
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 6705 6210 6255 6210
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 5805 6210 5355 6210
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 4905 6210 4455 6210
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 7
+ 1 1 1.00 60.00 120.00
+ 4320 6345 4320 6525 4320 6750 7470 6750 7470 6480 7470 6210
+ 7155 6210
+ 0.000 1.000 1.000 1.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 8
+ 1 1 1.00 60.00 120.00
+ 7155 5850 7335 5850 7470 5850 7470 5355 7470 5085 4590 5085
+ 4590 5355 4860 5625
+ 0.000 1.000 1.000 1.000 1.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 2526 5805 2976 5670
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 726 5805 1176 5670
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 1623 5805 2073 5670
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 2970 6210 2520 6210
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 2070 6210 1620 6210
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 1170 6210 720 6210
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 7
+ 1 1 1.00 60.00 120.00
+ 585 6345 585 6525 585 6750 3735 6750 3735 6480 3735 6210
+ 3420 6210
+ 0.000 1.000 1.000 1.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 8
+ 1 1 1.00 60.00 120.00
+ 3420 5850 3600 5850 3735 5850 3735 5355 3735 5085 585 5085
+ 585 5265 585 5670
+ 0.000 1.000 1.000 1.000 1.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 990 3690 1620 3690
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 1620 4050 990 4050
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 7
+ 1 1 1.00 60.00 120.00
+ 1890 3690 2340 3690 2340 3240 360 3240 360 3690 540 3690
+ 720 3690
+ 0.000 1.000 1.000 1.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 7
+ 1 1 1.00 60.00 120.00
+ 720 4050 540 4050 360 4050 360 4410 2340 4410 2340 4050
+ 1890 4050
+ 0.000 1.000 1.000 1.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 7560 3465 8010 3330
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 7560 3915 8010 3375
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 7
+ 1 1 1.00 60.00 120.00
+ 8460 3465 8775 3465 8820 3060 8730 2745 6750 2745 6705 3330
+ 7110 3330
+ 0.000 1.000 1.000 1.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 8
+ 1 1 1.00 60.00 120.00
+ 8460 3870 8820 3870 8820 4230 8640 4365 6930 4365 6750 4230
+ 6705 3510 7065 3375
+ 0.000 1.000 1.000 1.000 1.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 6615 1350 7335 1350
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 7605 1350 8325 1350
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 8
+ 1 1 1.00 60.00 120.00
+ 8595 1350 8775 1350 8865 1350 8865 990 5985 990 5985 1350
+ 6165 1350 6345 1350
+ 0.000 1.000 1.000 1.000 1.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 8
+ 1 1 1.00 60.00 120.00
+ 6345 1710 6165 1710 5985 1710 6075 2160 8865 2160 8865 1710
+ 8775 1710 8595 1710
+ 0.000 1.000 1.000 1.000 1.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 8325 1710 7605 1710
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 7335 1710 6615 1710
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 4140 1440 4770 1440
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 4770 1800 4140 1800
+ 0.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 7
+ 1 1 1.00 60.00 120.00
+ 5040 1440 5490 1440 5490 990 3510 990 3510 1440 3690 1440
+ 3870 1440
+ 0.000 1.000 1.000 1.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 0 -1 0.000 0 1 0 7
+ 1 1 1.00 60.00 120.00
+ 3870 1800 3690 1800 3510 1800 3510 2160 5490 2160 5490 1800
+ 5040 1800
+ 0.000 1.000 1.000 1.000 1.000 1.000 0.000
+4 1 0 50 0 14 10 0.0000 4 135 3240 5805 4950 Asymmetrical list starting at R(red)\001
+4 0 0 50 0 12 10 0.0000 4 135 3780 7875 5715 - FOREACH_ITEM(it, R, end, struct foo*, L)\001
+4 0 0 50 0 12 10 0.0000 4 105 2610 7875 5490 - last element has R->P == &L\001
+4 1 0 50 0 14 10 0.0000 4 135 3510 10215 4950 Symmetrical lists vs Asymmetrical lists\001
+4 0 0 50 0 12 10 0.0000 4 135 4680 7875 6165 - FOREACH_ITEM_SAFE(it, bck, R, end, struct foo*, L)\001
+4 0 0 50 0 12 10 0.0000 4 135 4500 7875 6390 does the same except that <bck> allows to delete\001
+4 0 0 50 0 12 10 0.0000 4 135 2340 7875 6570 any node, including <it>\001
+4 1 0 50 0 12 10 0.0000 4 135 450 5130 5355 foo_0\001
+4 1 0 50 0 12 10 0.0000 4 135 450 6030 5355 foo_1\001
+4 1 0 50 0 12 10 0.0000 4 135 450 6930 5355 foo_2\001
+4 1 0 50 0 14 10 0.0000 4 135 3150 2070 4950 Symmetrical list starting at R(red)\001
+4 1 0 50 0 12 10 0.0000 4 135 450 3195 5355 foo_2\001
+4 1 0 50 0 12 10 0.0000 4 135 450 2295 5355 foo_1\001
+4 1 0 50 0 12 10 0.0000 4 135 450 1395 5355 foo_0\001
+4 1 0 50 0 12 10 0.0000 4 105 270 9855 3420 foo\001
+4 1 0 50 0 12 10 0.0000 4 90 90 9990 3825 E\001
+4 1 0 50 0 12 10 0.0000 4 135 2520 4905 3015 Replaces W with Y, returns W\001
+4 1 0 50 0 14 10 0.0000 4 135 1440 7785 2655 Linking elements\001
+4 1 0 50 0 12 10 0.0000 4 135 450 8235 3015 foo_1\001
+4 1 0 50 0 12 10 0.0000 4 135 450 7335 3015 foo_0\001
+4 1 0 50 0 12 10 0.0000 4 135 3060 7425 810 adds Y(yellow) just after G(green)\001
+4 1 0 50 0 12 10 0.0000 4 135 1170 4500 855 adds W(white)\001
+4 1 0 50 0 12 10 0.0000 4 135 2700 10755 810 adds Y at the queue (before G)\001
+4 1 0 50 0 12 12 0.0000 4 165 630 990 1080 P=prev\001
+4 1 0 50 0 14 12 0.0000 4 135 1155 945 585 struct list\001
+4 1 0 50 0 12 12 0.0000 4 120 630 990 855 N=next\001
+4 1 0 50 0 12 10 0.0000 4 105 1080 2565 900 Terminates G\001
+4 1 0 50 0 14 10 0.0000 4 105 1260 2565 675 struct list *G\001
+4 1 0 50 0 14 10 0.0000 4 135 1260 2565 495 LIST_INIT(G):G\001
+4 1 0 50 0 14 10 0.0000 4 135 1350 4500 495 LIST_ADD(G,W):W\001
+4 1 0 50 0 14 10 0.0000 4 135 1440 4500 675 LIST_ADDQ(G,W):W\001
+4 1 0 50 0 14 10 0.0000 4 135 1350 7425 540 LIST_ADD(G,Y):Y\001
+4 1 0 50 0 14 10 0.0000 4 135 1440 10755 540 LIST_ADDQ(G,Y):Y\001
+4 1 0 50 0 12 10 0.0000 4 135 2610 1755 3060 unlinks and returns Y(yellow)\001
+4 1 0 50 0 14 10 0.0000 4 135 1170 1755 2790 LIST_DEL(Y):Y\001
+4 1 0 50 0 14 10 0.0000 4 135 1440 4905 2745 LIST_RIWI(W,Y):W\001
+4 1 0 50 0 12 10 0.0000 4 135 2790 10665 3105 containing header E as member L\001
+4 1 0 50 0 14 10 0.0000 4 135 2880 10665 2700 foo=LIST_ELEM(E, struct foo*, L)\001
+4 1 0 50 0 12 10 0.0000 4 135 2880 10665 2925 Returns a pointer to struct foo*\001
+4 0 0 50 0 12 10 0.0000 4 135 2610 7875 5265 - both are empty if R->P == R\001
+4 0 0 50 0 12 10 0.0000 4 135 3960 7875 5940 iterates <it> through foo{0,1,2} and stops\001
--- /dev/null
+#FIG 3.2 Produced by xfig version 3.2.5b
+Landscape
+Center
+Metric
+A4
+100.00
+Single
+-2
+1200 2
+6 7020 8775 9675 9405
+4 0 0 50 -1 12 12 0.0000 4 165 2400 7020 8910 update_tcp_handler()\001
+4 0 0 50 -1 16 12 0.0000 4 195 2640 7020 9105 Called on each change on the \001
+4 0 0 50 -1 16 12 0.0000 4 195 1830 7020 9345 tcp connection state.\001
+-6
+6 7020 9675 10170 10080
+4 0 0 50 -1 12 12 0.0000 4 165 2160 7020 9810 hlua_tcp_release()\001
+4 0 0 50 -1 16 12 0.0000 4 195 3150 7020 10005 Called when the applet is destroyed.\001
+-6
+6 765 8730 3195 9450
+4 0 0 50 -1 12 12 0.0000 4 165 1560 765 8910 hlua_tcp_gc()\001
+4 0 0 50 -1 16 12 0.0000 4 195 2430 765 9105 Called just before the object\001
+4 0 0 50 -1 16 12 0.0000 4 195 840 765 9345 garbaging\001
+-6
+6 900 3555 2340 4365
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+ 900 3555 2340 3555 2340 4365 900 4365 900 3555
+4 0 0 50 -1 16 12 0.0000 4 180 1080 990 4005 lua_State *T\001
+4 0 0 50 -1 18 12 0.0000 4 150 990 990 3735 struct hlua\001
+4 0 0 50 -1 16 12 0.0000 4 195 1245 990 4275 stop_list *stop\001
+-6
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 10530 6750 8910 6570
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+ 10440 6390 13320 6390 13320 6930 10440 6930 10440 6390
+2 1 1 4 4 7 50 -1 -1 4.000 0 0 -1 0 0 2
+ 6480 2745 6480 10035
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 0 5310 2520 5310
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 0 5850 2520 5850
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 0 5580 2520 5580
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 6840 7245 4635 5310
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 4
+ 6885 7110 6840 7155 6840 7335 6885 7380
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 1575 6525 10350 6210
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 2295 4230 3375 4905
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+ 0 5040 2520 5040 2520 7830 0 7830 0 5040
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 0 7110 2520 7110
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 0 7470 2520 7470
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+ 90 6120 2430 6120 2430 6975 90 6975 90 6120
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+ 3375 4815 5850 4815 5850 5310 3375 5310 3375 4815
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+ 6705 6480 8910 6480 8910 8010 6705 8010 6705 6480
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 6840 7605 2430 6840
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+ 810 3015 2430 3015 2430 4455 810 4455 810 3015
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+ 6795 6750 8820 6750 8820 7920 6795 7920 6795 6750
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+ 3060 630 4500 630 4500 1440 3060 1440 3060 630
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+ 2970 90 4635 90 4635 1575 2970 1575 2970 90
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+ 10350 6120 13410 6120 13410 7020 10350 7020 10350 6120
+3 0 1 1 13 7 50 -1 -1 1.000 0 1 0 2
+ 5 1 1.00 60.00 120.00
+ 6885 8010 6885 8910
+ 0.000 0.000
+3 0 1 1 13 7 50 -1 -1 1.000 0 1 0 3
+ 5 1 1.00 60.00 120.00
+ 6750 8010 6750 9675 6885 9810
+ 0.000 1.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 5
+ 1 1 1.00 60.00 120.00
+ 990 3915 540 4140 495 4365 540 4725 585 5040
+ 0.000 1.000 1.000 1.000 0.000
+3 0 1 1 13 7 50 -1 -1 1.000 0 1 0 3
+ 5 1 1.00 60.00 120.00
+ 450 7830 450 8595 675 8914
+ 0.000 1.000 0.000
+4 0 0 50 -1 18 12 0.0000 4 195 2565 10530 6570 struct stream_interface si[0]\001
+4 0 0 50 -1 16 12 0.0000 4 195 1725 10530 6840 enum obj_type *end\001
+4 0 0 50 -1 18 12 0.0000 4 150 885 90 5220 stack Lua\001
+4 0 0 50 -1 16 12 0.0000 4 195 1140 90 5490 stack entry 0\001
+4 0 0 50 -1 16 12 0.0000 4 195 1140 90 5760 stack entry 1\001
+4 0 0 50 -1 16 12 0.0000 4 195 1140 90 6030 stack entry 2\001
+4 0 0 50 -1 18 12 0.0000 4 195 1200 6795 6660 struct appctx\001
+4 0 0 50 -1 18 12 0.0000 4 195 1695 180 6300 struct hlua_socket\001
+4 0 0 50 -1 16 12 0.0000 4 150 1470 180 6570 struct session *s\001
+4 0 0 50 -1 16 12 0.0000 4 195 1140 90 7380 stack entry 3\001
+4 0 0 50 -1 16 12 0.0000 4 195 1140 90 7740 stack entry 4\001
+4 1 12 50 -1 12 9 5.6723 4 135 540 2925 4545 (list)\001
+4 0 0 50 -1 18 12 0.0000 4 195 2205 3465 4995 struct hlua_socket_com\001
+4 1 12 50 -1 12 9 5.5851 4 135 540 5265 5760 (list)\001
+4 0 0 50 -1 18 12 0.0000 4 150 1305 900 3240 struct session\001
+4 0 0 50 -1 16 12 0.0000 4 150 1440 900 3465 struct task *task\001
+4 0 0 50 -1 16 12 0.0000 4 150 1440 3465 5220 struct task *task\001
+4 0 0 50 -1 18 12 0.0000 4 150 1110 6885 6930 struct <lua>\001
+4 0 0 50 -1 16 12 0.0000 4 195 1620 6885 7425 struct hlua_tcp *wr\001
+4 0 0 50 -1 16 12 0.0000 4 195 1590 6885 7200 struct hlua_tcp *rd\001
+4 0 0 50 -1 16 12 0.0000 4 180 1845 6885 7650 struct hlua_socket *s\001
+4 0 0 50 -1 18 12 0.0000 4 195 1470 3060 270 struct hlua_task\001
+4 0 0 50 -1 16 12 0.0000 4 150 1440 3060 540 struct task *task\001
+4 0 0 50 -1 16 12 0.0000 4 180 1080 3150 1080 lua_State *T\001
+4 0 0 50 -1 18 12 0.0000 4 150 990 3150 810 struct hlua\001
+4 0 0 50 -1 16 12 0.0000 4 195 1245 3150 1350 stop_list *stop\001
+4 0 0 50 -1 18 12 0.0000 4 150 1305 10440 6300 struct session\001
--- /dev/null
+Naming rules for manipulated objects and structures.
+
+Previously, there were ambiguities between sessions, transactions and requests,
+as well as in the way responses are abbreviated ("resp", "rep", "rsp").
+
+Here is a proposal for a better naming scheme.
+
+The "session" is above the transport level, which means at ISO layer 5.
+We can talk about "http sessions" when we consider the entity which lives
+between the accept() and the close(), or the connect() and the close().
+
+=> This demonstrates that it is not possible to have the same http session from
+ the client to the server.
+
+A session can carry one or multiple "transactions", which are each composed of
+one "request" and zero or one "response". Both "request" and "response" are
+described in RFC2616 as "HTTP messages". RFC2616 also occasionally references
+the word "transaction" without explicitly defining it.
+
+An "HTTP message" is composed of a "start line", which can be either a
+"request line" or a "status line", followed by a number of "message headers",
+which can be either "request headers" or "response headers", and an "entity",
+itself composed of "entity headers" and an "entity body". Most probably,
+"message headers" and "entity headers" will always be processed together as
+"headers", while the "entity body" will designate the payload.
+
+We must try to always use the same abbreviations when naming objects. Here are
+a few common ones:
+
+ - txn : transaction
+ - req : request
+ - rtr : response to request
+ - msg : message
+ - hdr : header
+ - ent : entity
+ - bdy : body
+ - sts : status
+ - stt : state
+ - idx : index
+ - cli : client
+ - srv : server
+ - svc : service
+ - ses : session
+ - tsk : task
+
+Short names for unions or cascaded structs:
+ - sl : start line
+ - sl.rq : request line
+ - sl.st : status line
+ - cl : client
+ - px : proxy
+ - sv : server
+ - st : state / status
+
--- /dev/null
+#FIG 3.2
+Portrait
+Center
+Metric
+A4
+100.00
+Single
+-2
+1200 2
+2 1 0 1 0 7 50 -1 -1 0.000 1 0 -1 1 0 2
+ 1 1 1.00 90.00 180.00
+ 1125 1350 1125 1800
+2 1 0 1 0 7 50 -1 -1 0.000 1 0 -1 1 0 2
+ 1 1 1.00 90.00 180.00
+ 1125 2250 1125 2700
+2 1 0 1 0 7 50 -1 -1 0.000 1 0 -1 1 0 2
+ 1 1 1.00 90.00 180.00
+ 1125 3150 1125 3600
+2 1 0 1 0 7 50 -1 -1 0.000 1 0 -1 1 0 2
+ 1 1 1.00 90.00 180.00
+ 2925 3150 2925 3600
+2 1 0 1 0 7 50 -1 -1 0.000 1 0 -1 1 0 2
+ 1 1 1.00 90.00 180.00
+ 2925 2250 2925 2700
+2 1 0 1 0 7 50 -1 -1 0.000 1 0 -1 1 0 2
+ 1 1 1.00 90.00 180.00
+ 2925 1350 2925 1800
+2 1 0 1 0 7 50 -1 -1 0.000 1 0 -1 1 0 2
+ 1 1 1.00 90.00 180.00
+ 1575 1800 1575 1350
+2 1 0 1 0 7 50 -1 -1 0.000 1 0 -1 1 0 2
+ 1 1 1.00 90.00 180.00
+ 1575 3600 1575 3150
+2 1 0 1 0 7 50 -1 -1 0.000 1 0 -1 1 0 2
+ 1 1 1.00 90.00 180.00
+ 3375 2700 3375 2250
+2 1 0 1 0 7 50 -1 -1 0.000 1 0 -1 1 0 2
+ 1 1 1.00 90.00 180.00
+ 3375 1800 3375 1350
+2 1 0 1 0 7 50 -1 -1 0.000 1 0 -1 1 0 2
+ 1 1 1.00 90.00 180.00
+ 2700 1125 1800 1125
+2 1 0 1 0 7 50 -1 -1 0.000 1 0 -1 1 0 2
+ 1 1 1.00 90.00 180.00
+ 2700 3825 1800 3825
+2 4 0 1 0 7 51 -1 20 0.000 0 0 7 0 0 5
+ 3600 1350 2700 1350 2700 900 3600 900 3600 1350
+2 4 0 1 0 7 51 -1 20 0.000 0 0 7 0 0 5
+ 1800 1350 900 1350 900 900 1800 900 1800 1350
+2 4 0 1 0 7 51 -1 20 0.000 0 0 7 0 0 5
+ 1800 2250 900 2250 900 1800 1800 1800 1800 2250
+2 4 0 1 0 7 51 -1 20 0.000 0 0 7 0 0 5
+ 3600 2250 2700 2250 2700 1800 3600 1800 3600 2250
+2 4 0 1 0 7 51 -1 20 0.000 0 0 7 0 0 5
+ 3600 3150 2700 3150 2700 2700 3600 2700 3600 3150
+2 4 0 1 0 7 51 -1 20 0.000 0 0 7 0 0 5
+ 3600 4050 2700 4050 2700 3600 3600 3600 3600 4050
+2 4 0 1 0 7 51 -1 20 0.000 0 0 7 0 0 5
+ 1800 4050 900 4050 900 3600 1800 3600 1800 4050
+2 4 0 1 0 7 51 -1 20 0.000 0 0 7 0 0 5
+ 1800 3150 900 3150 900 2700 1800 2700 1800 3150
+2 1 0 1 0 7 50 -1 -1 0.000 1 0 -1 1 0 2
+ 1 1 1.00 90.00 180.00
+ 1800 2925 2700 2925
+2 1 0 1 0 7 50 -1 -1 0.000 1 0 -1 1 0 2
+ 1 1 1.00 90.00 180.00
+ 1350 450 1350 900
+4 1 0 50 -1 16 8 0.0000 4 120 330 2250 1080 update\001
+4 1 0 50 -1 16 8 0.0000 4 120 330 2250 3780 update\001
+4 2 0 50 -1 16 8 0.0000 4 75 240 2880 1485 want\001
+4 0 0 50 -1 16 8 0.0000 4 105 210 3420 1755 stop\001
+4 0 0 50 -1 16 8 0.0000 4 120 585 3420 2610 poll()=>rdy\001
+4 2 0 50 -1 16 8 0.0000 4 75 210 2835 2385 cant\001
+4 2 0 50 -1 16 8 0.0000 4 90 285 2835 2655 done*\001
+4 2 0 50 -1 16 8 0.0000 4 90 255 2835 2520 wait*\001
+4 2 0 50 -1 16 8 0.0000 4 75 240 1080 1485 want\001
+4 0 0 50 -1 16 8 0.0000 4 105 210 1665 1755 stop\001
+4 0 0 50 -1 16 8 0.0000 4 90 240 1665 1620 done\001
+4 2 0 50 -1 16 8 0.0000 4 75 210 1035 2385 cant\001
+4 2 0 50 -1 16 8 0.0000 4 90 255 1035 2520 wait*\001
+4 2 0 50 -1 16 8 0.0000 4 105 210 1035 3285 stop\001
+4 0 0 50 -1 16 8 0.0000 4 75 240 1665 3510 want\001
+4 2 0 50 -1 16 8 0.0000 4 105 210 2835 3285 stop\001
+4 1 0 50 -1 16 10 0.0000 4 105 735 1350 1080 STOPPED\001
+4 1 0 50 -1 16 10 0.0000 4 105 630 3150 1080 PAUSED\001
+4 1 0 50 -1 16 10 0.0000 4 105 555 1350 1980 ACTIVE\001
+4 1 0 50 -1 16 10 0.0000 4 105 525 3150 1980 READY\001
+4 1 0 50 -1 16 10 0.0000 4 105 825 1350 2880 MUSTPOLL\001
+4 1 0 50 -1 16 10 0.0000 4 105 615 3150 2880 POLLED\001
+4 1 0 50 -1 16 10 0.0000 4 105 765 1350 3780 DISABLED\001
+4 1 0 50 -1 16 10 0.0000 4 105 525 3150 3780 ABORT\001
+4 1 0 50 -1 16 8 0.0000 4 105 360 1350 1260 R,!A,!P\001
+4 1 0 50 -1 16 8 0.0000 4 105 330 1350 2160 R,A,!P\001
+4 1 0 50 -1 16 8 0.0000 4 105 330 3150 1260 R,!A,P\001
+4 1 0 50 -1 16 8 0.0000 4 105 300 3150 2160 R,A,P\001
+4 1 0 50 -1 16 8 0.0000 4 105 330 3150 3060 !R,A,P\001
+4 1 0 50 -1 16 8 0.0000 4 105 360 1350 3060 !R,A,!P\001
+4 1 0 50 -1 16 8 0.0000 4 105 390 1350 3960 !R,!A,!P\001
+4 1 0 50 -1 16 8 0.0000 4 105 360 3150 3960 !R,!A,P\001
+4 1 0 50 -1 16 8 0.0000 4 120 330 2250 2880 update\001
+4 0 0 50 -1 16 10 0.0000 4 135 885 4275 1125 R=ready flag\001
+4 0 0 50 -1 16 10 0.0000 4 135 900 4275 1290 A=active flag\001
+4 0 0 50 -1 16 10 0.0000 4 135 915 4275 1455 P=polled flag\001
+4 0 0 50 -1 16 10 0.0000 4 135 2250 4275 1785 Transitions marked with a star (*)\001
+4 0 0 50 -1 16 10 0.0000 4 135 2505 4275 1950 are only possible with level-triggered\001
+4 0 0 50 -1 16 10 0.0000 4 135 495 4275 2115 pollers.\001
+4 0 0 50 -1 16 10 0.0000 4 135 1335 4275 2475 fd_want sets A flag\001
+4 0 0 50 -1 16 10 0.0000 4 135 1425 4275 2640 fd_stop clears A flag\001
+4 0 0 50 -1 16 10 0.0000 4 135 2340 4275 2805 fd_wait clears R flag on LT pollers\001
+4 0 0 50 -1 16 10 0.0000 4 135 1980 4275 3465 fd_done does what's best to\001
+4 0 0 50 -1 16 10 0.0000 4 105 2010 4455 3630 minimize the amount of work.\001
+4 0 0 50 -1 16 10 0.0000 4 135 1935 4275 3300 update() updates the poller.\001
+4 0 0 50 -1 16 10 0.0000 4 135 2145 4275 2970 fd_cant clears R flag (EAGAIN)\001
+4 0 0 50 -1 16 10 0.0000 4 135 2040 4275 3135 fd_rdy sets R flag (poll return)\001
--- /dev/null
+- session: add ->fiprm and ->beprm as shortcuts
+- px->maxconn: only applies to the FE. For the BE, fullconn is used,
+ initialized by default to the same value as maxconn.
+
+
+ \ from: proxy session server currently
+field \
+rules px->fiprm sess->fiprm -
+srv,cookies px->beprm sess->beprm srv->px
+options(log) px-> sess->fe -
+options(fe) px-> sess->fe -
+options(be) px->beprm sess->beprm srv->px
+captures px-> sess->fe - ->fiprm
+
+
+logs px-> sess->fe srv->px
+errorloc px-> sess->beprm|fe -
+maxconn px-> sess->fe - ->be
+fullconn px-> sess->beprm srv->px -
+
--- /dev/null
+#FIG 3.2 Produced by xfig version 3.2.5-alpha5
+Portrait
+Center
+Metric
+A4
+100.00
+Single
+-2
+1200 2
+6 900 945 3015 1800
+6 1035 1215 3015 1800
+6 1035 1215 3015 1350
+2 2 0 1 26 6 51 -1 20 0.000 0 0 -1 0 0 5
+ 1035 1215 1620 1215 1620 1350 1035 1350 1035 1215
+4 0 0 50 -1 12 7 0.0000 4 90 1275 1710 1305 Standard settings\001
+-6
+6 1035 1440 2385 1575
+2 2 0 1 9 11 51 -1 20 0.000 0 0 -1 0 0 5
+ 1035 1440 1620 1440 1620 1575 1035 1575 1035 1440
+4 0 0 50 -1 12 7 0.0000 4 60 675 1710 1530 Rule sets\001
+-6
+6 1035 1665 2790 1800
+2 2 0 1 13 2 52 -1 20 0.000 0 0 -1 0 0 5
+ 1035 1665 1620 1665 1620 1800 1035 1800 1035 1665
+4 0 0 50 -1 12 7 0.0000 4 75 1050 1710 1755 HTTP mode only\001
+-6
+-6
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 945 1125 945 1800
+4 0 0 50 -1 17 10 0.0000 4 150 615 900 1080 Captions\001
+-6
+6 450 2250 3510 3195
+4 0 0 50 -1 16 10 0.0000 4 150 2865 450 2385 Each time a poller detects an activity on a\001
+4 0 0 50 -1 16 10 0.0000 4 150 2940 450 2580 listening socket, this sequence is executed.\001
+4 0 0 50 -1 16 10 0.0000 4 150 3000 450 2775 Note that stream_sock_accept() loops until\001
+4 0 0 50 -1 16 10 0.0000 4 150 3030 450 2970 accept() returns an error or tune.maxaccept\001
+4 0 0 50 -1 16 10 0.0000 4 150 1830 450 3165 loops have been executed.\001
+-6
+6 450 3375 3420 4275
+4 0 0 50 -1 16 10 0.0000 4 150 2535 450 3510 Once the session is started, function\001
+4 0 0 50 -1 16 10 0.0000 4 150 2880 450 3705 process_session() will be called once then\001
+4 0 0 50 -1 16 10 0.0000 4 150 2895 450 3900 each time an activity is detected on any of\001
+4 0 0 50 -1 16 10 0.0000 4 150 2955 450 4095 monitored file descriptors belonging to the\001
+4 0 0 50 -1 16 10 0.0000 4 120 555 450 4275 session.\001
+-6
+6 4230 945 6480 1125
+2 2 0 1 26 6 51 -1 20 0.000 0 0 -1 0 0 5
+ 4230 945 6345 945 6345 1125 4230 1125 4230 945
+4 0 0 50 -1 14 10 0.0000 4 105 2205 4275 1080 rate-limit sessions ?\001
+-6
+6 4455 1620 7065 1800
+2 2 0 1 26 6 51 -1 20 0.000 0 0 -1 0 0 5
+ 4455 1620 6885 1620 6885 1800 4455 1800 4455 1620
+4 0 0 50 -1 14 10 0.0000 4 135 2520 4521 1755 monitor-net (mode=tcp) ?\001
+-6
+6 4455 1845 7470 2025
+2 2 0 1 9 11 51 -1 20 0.000 0 0 -1 0 0 5
+ 4455 1845 7290 1845 7290 2025 4455 2025 4455 1845
+4 0 0 50 -1 14 10 0.0000 4 135 2940 4500 1980 tcp-request connection {...}\001
+-6
+6 4635 3195 7425 3735
+6 4680 3420 7380 3600
+2 2 0 1 26 6 51 -1 20 0.000 0 0 -1 0 0 5
+ 4680 3420 7200 3420 7200 3600 4680 3600 4680 3420
+4 0 0 50 -1 14 10 0.0000 4 135 2625 4725 3555 monitor-net (mode=http) ?\001
+-6
+2 2 0 1 13 2 52 -1 20 0.000 0 0 -1 0 0 5
+ 4635 3195 7425 3195 7425 3735 4635 3735 4635 3195
+4 0 0 50 -1 14 10 0.0000 4 135 1575 4725 3330 http_init_txn()\001
+-6
+2 1 0 1 0 7 51 -1 -1 4.000 0 0 -1 1 0 3
+ 1 1 1.00 60.00 120.00
+ 6885 1710 7200 1710 7200 675
+2 1 0 1 0 7 51 -1 -1 4.000 0 0 -1 1 0 3
+ 1 1 1.00 60.00 120.00
+ 7290 1935 7425 1935 7425 675
+2 1 0 1 0 7 51 -1 -1 4.000 0 0 -1 1 0 3
+ 1 1 1.00 60.00 120.00
+ 5850 2340 7650 2340 7650 675
+2 1 0 1 0 7 51 -1 -1 4.000 0 0 -1 1 0 3
+ 1 1 1.00 60.00 120.00
+ 7200 3510 7875 3510 7875 675
+2 1 0 1 0 7 51 -1 -1 0.000 0 0 -1 0 0 2
+ 4140 675 4140 4275
+2 1 0 1 0 7 51 -1 -1 0.000 0 0 -1 0 0 2
+ 4320 1575 4320 4275
+2 1 0 1 0 7 51 -1 -1 4.000 0 0 -1 1 0 3
+ 1 1 1.00 60.00 120.00
+ 5580 1260 6750 1260 6750 675
+2 1 0 1 0 7 51 -1 -1 0.000 0 0 -1 0 0 2
+ 4545 2700 4545 4050
+2 1 0 1 0 7 51 -1 -1 4.000 0 0 -1 1 0 3
+ 1 1 1.00 60.00 120.00
+ 6345 1035 6525 1035 6525 675
+2 2 0 1 26 6 51 -1 20 0.000 0 0 -1 0 0 5
+ 4635 3825 6030 3825 6030 4005 4635 4005 4635 3825
+2 1 0 1 0 7 50 -1 -1 4.000 0 0 -1 0 0 3
+ 6030 3915 7875 3915 7875 3510
+2 2 0 1 26 6 51 -1 20 0.000 0 0 -1 0 0 5
+ 4230 720 5895 720 5895 900 4230 900 4230 720
+2 1 0 1 0 7 51 -1 -1 4.000 0 0 -1 1 0 3
+ 1 1 1.00 60.00 120.00
+ 5895 810 6300 810 6300 675
+4 1 0 51 -1 12 7 0.0000 4 60 375 7515 585 close\001
+4 1 0 51 -1 12 7 0.0000 4 75 1275 6930 2250 not enough memory\001
+4 0 0 51 -1 12 7 1.5708 4 60 1575 8010 2790 return "OK" and close\001
+4 0 0 50 -1 14 10 0.0000 4 135 1365 4275 1305 sock=accept()\001
+4 0 0 50 -1 14 10 0.0000 4 135 1890 4500 2655 frontend_accept(s)\001
+4 0 0 50 -1 14 10 0.0000 4 135 2100 4275 1530 session_accept(sock)\001
+4 0 0 50 -1 14 10 0.0000 4 105 1365 4500 2385 s=new session\001
+4 0 0 50 -1 14 10 0.0000 4 135 1575 4635 2880 prepare logs(s)\001
+4 0 0 50 -1 14 10 0.0000 4 135 2100 4635 3105 prepare socket(sock)\001
+4 0 0 50 -1 14 10 0.0000 4 105 1365 4680 3960 mode=health ?\001
+4 1 0 51 -1 12 7 0.0000 4 60 225 7605 3465 Yes\001
+4 1 0 51 -1 12 7 0.0000 4 60 225 7605 3870 Yes\001
+4 1 0 51 -1 12 7 0.0000 4 60 225 7065 1665 Yes\001
+4 1 0 51 -1 12 7 0.0000 4 75 300 6570 1215 Fail\001
+4 0 0 50 -1 14 10 0.0000 4 120 1680 4500 4230 start session(s)\001
+4 0 0 50 -1 14 10 0.0000 4 105 1785 4275 855 maxconn reached ?\001
+4 1 0 51 -1 12 7 0.0000 4 90 450 6525 585 ignore\001
+4 1 0 51 -1 12 7 0.0000 4 60 225 6120 765 Yes\001
+4 0 0 50 -1 17 12 0.0000 4 210 3000 450 630 Session instantiation sequence\001
+4 0 0 50 -1 14 10 0.0000 4 135 2100 4050 630 stream_sock_accept()\001
--- /dev/null
+
+ Qcur Qmax Scur Smax Slim Scum Fin Fout Bin Bout Ereq Econ Ersp Sts Wght Act Bck EChk Down
+Frontend - - X maxX Y totX I O I O Q - - - - - - - -
+Server X maxX X maxX Y totX I O I O - C R S W A B E D
+Server X maxX X maxX Y totX I O I O - C R S W A B E D
+Server X maxX X maxX Y totX I O I O - C R S W A B E D
+Backend X maxX X maxX Y totX I O I O - C R S totW totA totB totE totD
+
--- /dev/null
+#FIG 3.2
+Portrait
+Center
+Metric
+A4
+100.00
+Single
+-2
+1200 2
+0 32 #8e8e8e
+6 2295 1260 2430 1395
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 2363 1328 68 68 2430 1328 2295 1328
+4 1 0 50 -1 18 5 0.0000 4 60 45 2363 1361 1\001
+-6
+6 1845 2295 1980 2430
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 1913 2363 68 68 1980 2363 1845 2363
+4 1 0 50 -1 18 5 0.0000 4 60 45 1913 2396 2\001
+-6
+6 2475 2340 2610 2475
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 2543 2408 68 68 2610 2408 2475 2408
+4 1 0 50 -1 18 5 0.0000 4 60 45 2543 2441 9\001
+-6
+6 2835 2610 2970 2745
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 2903 2678 68 68 2970 2678 2835 2678
+4 1 0 50 -1 18 5 0.0000 4 60 45 2903 2711 7\001
+-6
+6 3195 2025 3330 2160
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 3263 2093 68 68 3330 2093 3195 2093
+4 1 0 50 -1 18 5 0.0000 4 60 45 3263 2126 8\001
+-6
+6 2745 2160 2880 2295
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 2813 2228 68 68 2880 2228 2745 2228
+4 1 0 50 -1 18 5 0.0000 4 60 45 2813 2261 6\001
+-6
+6 990 2700 1125 2835
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 1058 2768 68 68 1125 2768 990 2768
+4 1 0 50 -1 18 5 0.0000 4 60 90 1058 2801 13\001
+-6
+6 1305 2970 1440 3105
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 1373 3038 68 68 1440 3038 1305 3038
+4 1 0 50 -1 18 5 0.0000 4 60 90 1373 3071 12\001
+-6
+6 3105 1710 3240 1845
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 3173 1778 68 68 3240 1778 3105 1778
+4 1 0 50 -1 18 5 0.0000 4 60 90 3173 1811 15\001
+-6
+6 4275 1260 4410 1395
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 1328 68 68 4410 1328 4275 1328
+4 1 0 50 -1 18 5 0.0000 4 60 45 4343 1361 1\001
+-6
+6 4275 1440 4410 1575
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 1508 68 68 4410 1508 4275 1508
+4 1 0 50 -1 18 5 0.0000 4 60 45 4343 1541 2\001
+-6
+6 4275 1620 4410 1755
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 1688 68 68 4410 1688 4275 1688
+4 1 0 50 -1 18 5 0.0000 4 60 45 4343 1721 3\001
+-6
+6 4275 1800 4410 1935
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 1868 68 68 4410 1868 4275 1868
+4 1 0 50 -1 18 5 0.0000 4 60 45 4343 1901 4\001
+-6
+6 3240 2835 3375 2970
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 3308 2903 68 68 3375 2903 3240 2903
+4 1 0 50 -1 18 5 0.0000 4 60 90 3308 2936 16\001
+-6
+6 2835 3015 2970 3150
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 2903 3083 68 68 2970 3083 2835 3083
+4 1 0 50 -1 18 5 0.0000 4 60 90 2903 3116 17\001
+-6
+6 2295 3195 2430 3330
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 2363 3263 68 68 2430 3263 2295 3263
+4 1 0 50 -1 18 5 0.0000 4 60 45 2363 3296 3\001
+-6
+6 2295 4815 2430 4950
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 2363 4883 68 68 2430 4883 2295 4883
+4 1 0 50 -1 18 5 0.0000 4 60 45 2363 4916 5\001
+-6
+6 1440 4815 1620 4995
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 1508 4883 68 68 1575 4883 1440 4883
+4 1 0 50 -1 18 5 0.0000 4 60 90 1508 4916 19\001
+-6
+6 1800 3960 1980 4140
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 1868 4028 68 68 1935 4028 1800 4028
+4 1 0 50 -1 18 5 0.0000 4 60 90 1868 4061 18\001
+-6
+6 4275 1980 4410 2115
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 2048 68 68 4410 2048 4275 2048
+4 1 0 50 -1 18 5 0.0000 4 60 45 4343 2081 5\001
+-6
+6 4275 2340 4410 2475
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 2408 68 68 4410 2408 4275 2408
+4 1 0 50 -1 18 5 0.0000 4 60 45 4343 2441 6\001
+-6
+6 4275 2520 4410 2655
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 2588 68 68 4410 2588 4275 2588
+4 1 0 50 -1 18 5 0.0000 4 60 45 4343 2621 7\001
+-6
+6 4275 2700 4410 2835
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 2768 68 68 4410 2768 4275 2768
+4 1 0 50 -1 18 5 0.0000 4 60 45 4343 2801 8\001
+-6
+6 4275 2880 4410 3015
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 2948 68 68 4410 2948 4275 2948
+4 1 0 50 -1 18 5 0.0000 4 60 45 4343 2981 9\001
+-6
+6 4275 3060 4410 3195
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 3128 68 68 4410 3128 4275 3128
+4 1 0 50 -1 18 5 0.0000 4 60 90 4343 3161 10\001
+-6
+6 4275 3240 4410 3375
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 3308 68 68 4410 3308 4275 3308
+4 1 0 50 -1 18 5 0.0000 4 60 90 4343 3341 11\001
+-6
+6 4275 3420 4410 3555
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 3488 68 68 4410 3488 4275 3488
+4 1 0 50 -1 18 5 0.0000 4 60 90 4343 3521 12\001
+-6
+6 4275 3600 4410 3735
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 3668 68 68 4410 3668 4275 3668
+4 1 0 50 -1 18 5 0.0000 4 60 90 4343 3701 13\001
+-6
+6 4275 3960 4410 4095
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 4028 68 68 4410 4028 4275 4028
+4 1 0 50 -1 18 5 0.0000 4 60 90 4343 4061 15\001
+-6
+6 4275 4140 4410 4275
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 4208 68 68 4410 4208 4275 4208
+4 1 0 50 -1 18 5 0.0000 4 60 90 4343 4241 16\001
+-6
+6 4275 4320 4410 4455
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 4388 68 68 4410 4388 4275 4388
+4 1 0 50 -1 18 5 0.0000 4 60 90 4343 4421 17\001
+-6
+6 4275 3780 4455 3960
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 3848 68 68 4410 3848 4275 3848
+4 1 0 50 -1 18 5 0.0000 4 60 90 4343 3881 14\001
+-6
+6 4275 4590 4455 4770
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 4658 68 68 4410 4658 4275 4658
+4 1 0 50 -1 18 5 0.0000 4 60 90 4343 4691 18\001
+-6
+6 4275 4770 4455 4950
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 4838 68 68 4410 4838 4275 4838
+4 1 0 50 -1 18 5 0.0000 4 60 90 4343 4871 19\001
+-6
+6 4275 4950 4455 5130
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 5018 68 68 4410 5018 4275 5018
+4 1 0 50 -1 18 5 0.0000 4 60 90 4343 5051 20\001
+-6
+6 2295 5670 2475 5850
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 2363 5738 68 68 2430 5738 2295 5738
+4 1 0 50 -1 18 5 0.0000 4 60 90 2363 5771 20\001
+-6
+6 1170 3690 1350 3870
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 1238 3758 68 68 1305 3758 1170 3758
+4 1 0 50 -1 18 5 0.0000 4 60 90 1238 3791 11\001
+-6
+6 1530 3555 1710 3735
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 1598 3623 68 68 1665 3623 1530 3623
+4 1 0 50 -1 18 5 0.0000 4 60 90 1598 3656 10\001
+-6
+6 720 4095 900 4275
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 788 4163 68 68 855 4163 720 4163
+4 1 0 50 -1 18 5 0.0000 4 60 90 788 4196 14\001
+-6
+6 855 3645 1035 3825
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 923 3713 68 68 990 3713 855 3713
+4 1 0 50 -1 18 5 0.0000 4 60 90 923 3746 21\001
+-6
+6 4275 5130 4455 5310
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 5198 68 68 4410 5198 4275 5198
+4 1 0 50 -1 18 5 0.0000 4 60 90 4343 5231 21\001
+-6
+6 2295 4140 2430 4275
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 2363 4208 68 68 2430 4208 2295 4208
+4 1 0 50 -1 18 5 0.0000 4 60 45 2363 4241 4\001
+-6
+6 2475 3870 2655 4050
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 2543 3938 68 68 2610 3938 2475 3938
+4 1 0 50 -1 18 5 0.0000 4 60 90 2543 3971 22\001
+-6
+6 4275 5310 4455 5490
+1 4 0 1 0 7 50 -1 -1 0.000 1 0.0000 4343 5378 68 68 4410 5378 4275 5378
+4 1 0 50 -1 18 5 0.0000 4 60 90 4343 5411 22\001
+-6
+1 2 0 1 0 6 50 -1 20 0.000 1 0.0000 1350 4612 225 112 1125 4612 1575 4612
+1 2 0 1 0 6 50 -1 20 0.000 1 0.0000 2250 5422 225 112 2025 5422 2475 5422
+1 2 0 1 0 6 50 -1 20 0.000 1 0.0000 2250 1912 225 112 2025 1912 2475 1912
+1 2 0 1 0 7 50 -1 20 0.000 1 0.0000 1125 3487 225 112 900 3487 1350 3487
+1 2 0 1 0 7 50 -1 20 0.000 1 0.0000 2250 3712 225 112 2025 3712 2475 3712
+1 2 0 1 0 7 50 -1 20 0.000 1 0.0000 2250 4612 225 112 2025 4612 2475 4612
+1 2 0 1 0 7 50 -1 20 0.000 1 0.0000 2250 6187 225 112 2025 6187 2475 6187
+1 2 0 1 0 6 50 -1 20 0.000 1 0.0000 2250 2812 225 112 2025 2812 2475 2812
+1 2 0 1 0 7 50 -1 20 0.000 1 0.0000 3375 2362 225 112 3150 2362 3600 2362
+1 2 0 1 0 7 50 -1 20 0.000 1 0.0000 2250 1012 225 112 2025 1012 2475 1012
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 2250 1125 2250 1800
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 2250 4725 2250 5310
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 2250 5535 2250 6075
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 8550 5805 4500 5805
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 3
+ 6885 5900 6930 5990 6975 5810
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 3
+ 7605 5890 7650 5980 7695 5800
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 8550 6030 4500 6030
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 8550 6255 4500 6255
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 8550 6480 4500 6480
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 3
+ 6885 6570 6930 6660 6975 6480
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 3
+ 7605 6570 7650 6660 7695 6480
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 3
+ 7965 6570 8010 6660 8055 6480
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5310 5589 5310 6921
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 5670 5589 5670 6921
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 6030 5589 6030 6921
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 6390 5589 6390 6921
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 6750 5589 6750 6921
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 7110 5589 7110 6921
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 7470 5589 7470 6921
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 7830 5589 7830 6921
+2 1 0 2 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 4950 5589 4950 6921
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 8190 5589 8190 6921
+2 2 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 5
+ 4500 5580 8550 5580 8550 6930 4500 6930 4500 5580
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 8550 6705 4500 6705
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 4
+ 1 1 1.00 60.00 120.00
+ 2475 2835 3150 3375 3150 5625 2475 6120
+ 0.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 4
+ 1 1 1.00 60.00 120.00
+ 2250 2700 2475 2475 2475 2250 2250 2025
+ 0.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 3
+ 1 1 1.00 60.00 120.00
+ 3375 2250 2925 2025 2475 1935
+ 0.000 1.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 3
+ 1 1 1.00 60.00 120.00
+ 3375 2475 3375 2700 2475 2835
+ 0.000 1.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 4
+ 1 1 1.00 60.00 120.00
+ 3420 2475 3420 4320 3150 5850 2475 6165
+ 0.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 3
+ 1 1 1.00 60.00 120.00
+ 1125 3375 1125 2925 2025 2790
+ 0.000 1.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 3
+ 1 1 1.00 60.00 120.00
+ 1125 3375 1125 2250 2025 1935
+ 0.000 1.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 6
+ 1 1 1.00 60.00 120.00
+ 2475 1890 3825 1800 3825 2520 3825 4500 3150 6075 2475 6210
+ 0.000 1.000 1.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 4
+ 1 1 1.00 60.00 120.00
+ 2250 2025 2025 2250 2025 2475 2250 2700
+ 0.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 2250 3825 2250 4500
+ 0.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 3
+ 1 1 1.00 60.00 120.00
+ 2475 1980 2880 2115 3150 2340
+ 0.000 1.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 2
+ 1 1 1.00 60.00 120.00
+ 2250 2925 2250 3600
+ 0.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 4
+ 1 1 1.00 60.00 120.00
+ 2205 3825 2070 4140 1622 4221 1440 4500
+ 0.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 4
+ 1 1 1.00 60.00 120.00
+ 1350 4725 1350 4950 1485 5760 2025 6165
+ 0.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 7
+ 1 1 1.00 60.00 120.00
+ 1125 4590 720 4455 675 4050 675 3600 675 2250 1350 1800
+ 2025 1935
+ 0.000 1.000 1.000 1.000 1.000 1.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 3
+ 1 1 1.00 60.00 120.00
+ 1260 4500 1125 4320 1125 3600
+ 0.000 1.000 0.000
+3 0 0 1 0 7 50 -1 -1 0.000 0 1 0 4
+ 1 1 1.00 60.00 120.00
+ 1350 4500 1440 3645 1575 3330 2070 2880
+ 0.000 1.000 1.000 0.000
+3 0 0 1 32 7 51 -1 -1 0.000 0 1 0 5
+ 1 1 1.00 60.00 120.00
+ 1035 3600 990 4365 990 5040 1395 5895 2025 6210
+ 0.000 1.000 1.000 1.000 0.000
+3 0 0 1 32 7 51 -1 -1 0.000 0 1 0 5
+ 1 1 1.00 60.00 120.00
+ 2340 3825 2385 4005 2925 4275 2655 4815 2295 5310
+ 0.000 1.000 1.000 1.000 0.000
+4 0 0 50 -1 14 6 0.0000 4 75 2880 4500 1710 ASS-CON: ssui(): connect_server() == SN_ERR_NONE\001
+4 0 0 50 -1 14 6 0.0000 4 60 540 4500 1350 INI-REQ: \001
+4 0 0 50 -1 14 6 0.0000 4 75 3720 4500 1530 REQ-ASS: prepare_conn_request(): srv_redispatch_connect() == 0\001
+4 0 4 50 -1 14 10 0.0000 4 90 90 2475 2700 4\001
+4 0 4 50 -1 14 10 0.0000 4 90 90 1620 4500 6\001
+4 0 0 50 -1 14 6 0.0000 4 75 3360 4500 1890 CON-EST: sess_update_st_con_tcp(): !timeout && !conn_err\001
+4 0 0 50 -1 14 6 0.0000 4 75 2460 4500 3510 TAR-ASS: ssui(): SI_FL_EXP && SN_ASSIGNED\001
+4 0 0 50 -1 14 6 0.0000 4 75 3420 4500 2970 ASS-REQ: connect_server: conn_retries == 0 && PR_O_REDISP\001
+4 0 0 50 -1 14 6 0.0000 4 75 2460 4500 2610 QUE-REQ: ssui(): !pend_pos && SN_ASSIGNED\001
+4 0 0 50 -1 14 6 0.0000 4 75 2520 4500 2790 QUE-REQ: ssui(): !pend_pos && !SN_ASSIGNED\001
+4 0 0 50 -1 14 6 0.0000 4 75 3300 4500 4230 QUE-CLO: ssui(): pend_pos && (SI_FL_EXP || req_aborted)\001
+4 0 0 50 -1 14 6 0.0000 4 75 2520 4500 3690 TAR-REQ: ssui(): SI_FL_EXP && !SN_ASSIGNED\001
+4 0 0 50 -1 14 6 0.0000 4 75 3960 4500 4545 ASS-CLO: PR_O_REDISP && SN_REDIRECTABLE && perform_http_redirect()\001
+4 0 0 50 -1 14 6 0.0000 4 75 4440 4500 2430 REQ-QUE: prepare_conn_request(): srv_redispatch_connect() != 0 (SI_ST_QUE)\001
+4 0 0 50 -1 14 6 0.0000 4 75 4200 4500 4050 REQ-CLO: prepare_conn_request(): srv_redispatch_connect() != 0 (error)\001
+4 0 0 50 -1 14 6 0.0000 4 75 4320 4500 4410 ASS-CLO: ssui(): connect_server() == SN_ERR_INTERNAL || conn_retries < 0\001
+4 0 0 50 -1 14 6 0.0000 4 75 3120 4500 4680 CON-CER: sess_update_st_con_tcp(): timeout/SI_FL_ERR\001
+4 0 0 50 -1 14 6 0.0000 4 75 3600 4500 4860 CER-CLO: sess_update_st_cer(): (ERR/EXP) && conn_retries < 0\001
+4 0 0 50 -1 14 6 0.0000 4 75 4200 4500 3870 CER-REQ: sess_update_st_cer(): timeout && !conn_retries && PR_O_REDISP\001
+4 0 0 50 -1 14 6 0.0000 4 75 3600 4500 3330 CER-TAR: sess_update_st_cer(): conn_err && conn_retries >= 0\001
+4 0 0 50 -1 14 6 0.0000 4 75 4620 4500 3150 CER-ASS: sess_update_st_cer(): timeout && (conn_retries >= 0 || !PR_O_REDISP)\001
+4 0 4 50 -1 14 10 0.0000 4 90 90 1305 3375 3\001
+4 0 4 50 -1 14 10 0.0000 4 90 90 2430 4500 7\001
+4 0 4 50 -1 14 10 0.0000 4 90 90 2430 3600 5\001
+4 0 4 50 -1 14 10 0.0000 4 90 90 3555 2250 2\001
+4 0 4 50 -1 14 10 0.0000 4 90 90 2430 1800 1\001
+4 0 4 50 -1 14 10 0.0000 4 90 90 2430 900 0\001
+4 0 4 50 -1 14 10 0.0000 4 90 90 2430 5310 8\001
+4 0 0 50 -1 14 6 0.0000 4 75 3000 4500 2070 EST-DIS: stream_sock_read/write/shutr/shutw: close\001
+4 0 0 50 -1 14 6 0.0000 4 75 1980 4500 2250 EST-DIS: process_session(): error\001
+4 0 0 50 -1 14 6 0.0000 4 75 2100 4500 5040 DIS-CLO: process_session(): cleanup\001
+4 1 0 50 -1 14 10 0.0000 4 90 270 2250 5490 DIS\001
+4 1 0 50 -1 14 10 0.0000 4 90 270 1350 4680 CER\001
+4 1 0 50 -1 14 10 0.0000 4 105 270 2250 1980 REQ\001
+4 1 0 50 -1 14 10 0.0000 4 90 270 1125 3555 TAR\001
+4 1 0 50 -1 14 10 0.0000 4 90 270 2250 2880 ASS\001
+4 1 0 50 -1 14 10 0.0000 4 105 270 3375 2430 QUE\001
+4 1 0 50 -1 14 10 0.0000 4 90 270 2250 3780 CON\001
+4 1 0 50 -1 14 10 0.0000 4 90 270 2250 4680 EST\001
+4 1 0 50 -1 14 10 0.0000 4 90 270 2250 6255 CLO\001
+4 1 0 50 -1 14 10 0.0000 4 90 270 2250 1080 INI\001
+4 0 0 50 -1 14 6 0.0000 4 75 2820 4500 5220 TAR-CLO: sess_update_stream_int(): client abort\001
+4 0 4 50 -1 14 10 0.0000 4 90 90 2385 6075 9\001
+4 0 0 50 -1 14 6 0.0000 4 75 2820 4500 5400 CON-DIS: sess_update_st_con_tcp(): client abort\001
+4 1 0 50 -1 14 8 0.0000 4 15 75 5130 5985 -\001
+4 1 0 50 -1 14 8 0.0000 4 15 75 5490 5985 -\001
+4 1 0 50 -1 14 8 0.0000 4 15 75 5850 5985 -\001
+4 1 0 50 -1 14 8 0.0000 4 15 75 6210 5985 -\001
+4 1 0 50 -1 14 8 0.0000 4 15 75 6570 5985 -\001
+4 1 0 50 -1 14 8 0.0000 4 15 75 7290 5985 -\001
+4 1 0 50 -1 14 8 0.0000 4 15 75 8010 5985 -\001
+4 1 0 50 -1 16 7 0.0000 4 75 90 4725 5985 fd\001
+4 1 0 50 -1 14 8 0.0000 4 75 225 5130 5760 INI\001
+4 1 0 50 -1 16 7 0.0000 4 75 240 4725 5760 state\001
+4 1 0 50 -1 14 8 0.0000 4 90 225 5490 5760 REQ\001
+4 1 0 50 -1 14 8 0.0000 4 90 225 5850 5760 QUE\001
+4 1 0 50 -1 14 8 0.0000 4 75 225 6210 5760 TAR\001
+4 1 0 50 -1 14 8 0.0000 4 75 225 6570 5760 ASS\001
+4 1 0 50 -1 14 8 0.0000 4 75 225 6930 5760 CON\001
+4 1 0 50 -1 14 8 0.0000 4 75 225 7290 5760 CER\001
+4 1 0 50 -1 14 8 0.0000 4 75 225 7650 5760 EST\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 8010 6210 X\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 5850 6210 0\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 5130 6210 0\001
+4 1 0 50 -1 16 7 0.0000 4 75 225 4725 6210 ERR\001
+4 1 0 50 -1 16 7 0.0000 4 75 225 4725 6435 EXP\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 8010 6435 X\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 5490 6210 X\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 6210 6210 X\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 6570 6210 X\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 6570 6435 X\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 5490 6435 X\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 5130 6435 0\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 5850 6435 0\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 6210 6435 0\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 7290 6435 X\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 6930 6435 X\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 7290 6210 X\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 6930 6210 X\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 7650 6210 X\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 7650 6435 X\001
+4 1 0 50 -1 16 7 0.0000 4 60 240 4725 6660 sess\001
+4 1 0 50 -1 14 8 0.0000 4 75 225 8370 5760 CLO\001
+4 1 0 50 -1 14 8 0.0000 4 75 225 8010 5760 DIS\001
+4 1 0 50 -1 14 8 0.0000 4 15 75 8370 5985 -\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 8370 6210 X\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 8370 6435 X\001
+4 1 0 50 -1 14 8 0.0000 4 15 75 5130 6660 -\001
+4 1 0 50 -1 14 8 0.0000 4 15 75 5490 6660 -\001
+4 1 0 50 -1 14 8 0.0000 4 15 75 5850 6660 -\001
+4 1 0 50 -1 14 8 0.0000 4 15 75 6210 6660 -\001
+4 1 0 50 -1 14 8 0.0000 4 15 75 6570 6660 -\001
+4 1 0 50 -1 14 8 0.0000 4 15 75 7290 6660 -\001
+4 1 0 50 -1 14 8 0.0000 4 15 75 8370 6660 -\001
+4 0 0 50 -1 16 6 0.0000 4 90 5010 675 7335 Note: states painted yellow above are transient ; process_session() will never leave a stream interface in any of those upon return.\001
+4 1 0 50 -1 16 7 0.0000 4 75 285 4725 6840 SHUT\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 7650 6840 0\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 8010 6840 1\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 8370 6840 1\001
+4 1 0 50 -1 14 8 0.0000 4 15 75 7290 6840 -\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 6930 6840 0\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 6570 6840 0\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 6210 6840 0\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 5850 6840 0\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 5490 6840 0\001
+4 1 0 50 -1 14 8 0.0000 4 75 75 5130 6840 0\001
--- /dev/null
+ -----------------------
+ HAProxy Starter Guide
+ -----------------------
+ version 1.6
+
+
+This document is an introduction to HAProxy for all those who don't know it, as
+well as for those who know older versions and want to re-discover it. Its
+primary focus is to provide users with all the elements needed to decide
+whether or not HAProxy is the product they're looking for. Advanced users may
+find here partial solutions to ideas they once had, simply because they were
+not aware of a given new feature. Some sizing information is also provided, the
+product's lifecycle is explained, and comparisons with partially overlapping
+products are provided.
+
+This document doesn't provide any configuration help or hints, but it explains
+where to find the relevant documents. The summary below is meant to help you
+search sections by name and navigate through the document.
+
+Note to documentation contributors :
+ This document is formatted with 80 columns per line, with an even number of
+ spaces for indentation and without tabs. Please follow these rules strictly
+ so that it remains easily printable everywhere. If you add sections, please
+ update the summary below for easier searching.
+
+
+Summary
+-------
+
+1. Available documentation
+
+2. Quick introduction to load balancing and load balancers
+
+3. Introduction to HAProxy
+3.1. What HAProxy is and is not
+3.2. How HAProxy works
+3.3. Basic features
+3.3.1. Proxying
+3.3.2. SSL
+3.3.3. Monitoring
+3.3.4. High availability
+3.3.5. Load balancing
+3.3.6. Stickiness
+3.3.7. Sampling and converting information
+3.3.8. Maps
+3.3.9. ACLs and conditions
+3.3.10. Content switching
+3.3.11. Stick-tables
+3.3.12. Formatted strings
+3.3.13. HTTP rewriting and redirection
+3.3.14. Server protection
+3.3.15. Logging
+3.3.16. Statistics
+3.4. Advanced features
+3.4.1. Management
+3.4.2. System-specific capabilities
+3.4.3. Scripting
+3.5. Sizing
+3.6. How to get HAProxy
+
+4. Companion products and alternatives
+4.1. Apache HTTP server
+4.2. NGINX
+4.3. Varnish
+4.4. Alternatives
+
+
+1. Available documentation
+--------------------------
+
+The complete HAProxy documentation is contained in the following documents.
+Please be sure to consult the relevant document to save time and to get the
+most accurate response to your needs. Also, please refrain from sending to the
+mailing list questions whose answers are already present in these documents.
+
+ - intro.txt (this document) : it presents the basics of load balancing,
+ HAProxy as a product, what it does, what it doesn't do, some known traps to
+ avoid, some OS-specific limitations, how to get it, how it evolves, how to
+ ensure you're running with all known fixes, how to update it, as well as
+ complements and alternatives.
+
+ - management.txt : it explains how to start haproxy, how to manage it at
+ runtime, how to manage it on multiple nodes, how to proceed with seamless
+ upgrades.
+
+ - configuration.txt : the reference manual details all configuration keywords
+ and their options. It is used when a configuration change is needed.
+
+ - architecture.txt : the architecture manual explains how to best architect a
+ load-balanced infrastructure and how to interact with third party products.
+
+ - coding-style.txt : this is for developers who want to propose some code to
+ the project. It explains the style to adopt for the code. It's not very
+ strict and not all the code base completely respects it but contributions
+ which diverge too much from it will be rejected.
+
+ - proxy-protocol.txt : this is the de-facto specification of the PROXY
+ protocol which is implemented by HAProxy and a number of third party
+ products.
+
+ - README : how to build haproxy from sources
+
+
+2. Quick introduction to load balancing and load balancers
+----------------------------------------------------------
+
+Load balancing consists in aggregating multiple components in order to achieve
+a total processing capacity above each component's individual capacity, without
+any intervention from the end user and in a scalable way. This results in more
+operations being performed simultaneously in the time it takes a single
+component to perform only one. Each individual operation, however, is still
+performed on a single component at a time and will not get faster than without
+load balancing. Making full use of the aggregate capacity always requires at
+least as many concurrent operations as there are components, as well as an
+efficient load balancing mechanism to spread the work over all of them. A good
+example of this is the number of lanes on a highway: more lanes allow more cars
+to pass during the same time frame without increasing their individual speed.
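The aggregation principle above can be sketched with a toy round-robin
dispatcher (hypothetical names, a sketch and not HAProxy code): total
throughput scales with the number of components, while each individual
operation is still handled by exactly one component.

```python
from itertools import cycle

def dispatch(operations, servers):
    """Assign each operation to a server in round-robin order and
    count how many operations each server ends up handling."""
    load = {s: 0 for s in servers}
    assignment = {}
    rr = cycle(servers)
    for op in operations:
        srv = next(rr)
        assignment[op] = srv   # one operation -> exactly one server
        load[srv] += 1
    return assignment, load

# 12 operations over 3 servers: each server handles 4, so the batch
# completes roughly 3x faster in aggregate, yet no single operation
# is processed any faster than without load balancing.
assignment, load = dispatch(range(12), ["s1", "s2", "s3"])
print(load)  # {'s1': 4, 's2': 4, 's3': 4}
```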
+
+Examples of load balancing :
+
+ - Process scheduling in multi-processor systems
+ - Link load balancing (eg: EtherChannel, Bonding)
+ - IP address load balancing (eg: ECMP, DNS roundrobin)
+ - Server load balancing (via load balancers)
+
+The mechanism or component which performs the load balancing operation is
+called a load balancer. In web environments such a component is called a
+"network load balancer", or more commonly just a "load balancer", given that
+this activity is by far the best known case of load balancing.
+
+A load balancer may act :
+
+ - at the link level : this is called link load balancing, and it consists in
+ choosing what network link to send a packet to;
+
+ - at the network level : this is called network load balancing, and it
+ consists in choosing what route a series of packets will follow;
+
+ - at the server level : this is called server load balancing and it consists
+ in deciding what server will process a connection or request.
+
+Two distinct technologies exist and address different needs, though with some
+overlapping. In each case it is important to keep in mind that load balancing
+consists in diverting the traffic from its natural flow and that doing so always
+requires a minimum of care to maintain the required level of consistency between
+all routing decisions.
+
+The first one acts at the packet level and processes packets more or less
+individually. There is a 1-to-1 relation between input and output packets, so
+it is possible to follow the traffic on both sides of the load balancer using a
+regular network sniffer. This technology can be very cheap and extremely fast.
+It is usually implemented in hardware (ASICs), allowing it to reach line rate,
+such as switches doing ECMP. Usually stateless, it can also be stateful (it
+then considers the session a packet belongs to, and is called layer 4 LB or
+L4). It may support DSR (direct server return : the response does not pass
+through the LB again) if the packets were not modified, but it provides almost
+no content awareness. This technology is
+very well suited to network-level load balancing, though it is sometimes used
+for very basic server load balancing at high speed.
+
+The second one acts on session contents. It requires that the input stream be
+reassembled and processed as a whole. The contents may be modified, and the
+output stream is segmented into new packets. For this reason it is generally
+performed by proxies, which are often called layer 7 load balancers or L7.
+This implies that there are two distinct connections on each side, and that
+there is no relation between input and output packets sizes nor counts. Clients
+and servers are not required to use the same protocol (for example IPv4 vs
+IPv6, clear vs SSL). The operations are always stateful, and the return traffic
+must pass through the load balancer. The extra processing comes with a cost so
+it's not always possible to achieve line rate, especially with small packets.
+On the other hand, it offers wide possibilities and is generally achieved by
+pure software, even if embedded into hardware appliances. This technology is
+very well suited for server load balancing.
+
+Packet-based load balancers are generally deployed in cut-through mode, so they
+are installed on the normal path of the traffic and divert it according to the
+configuration. The return traffic doesn't necessarily pass through the load
+balancer. Some modifications may be applied to the network destination address
+in order to direct the traffic to the proper destination. In this case, it is
+mandatory that the return traffic passes through the load balancer. If the
+routes don't make this possible, the load balancer may also replace the
+packets' source address with its own in order to force the return traffic to
+pass through it.
+
+Proxy-based load balancers are deployed like regular servers, with their own
+IP addresses and ports, without architecture changes. Sometimes this requires
+performing some adaptations to the applications so that clients are properly
+directed to the
+load balancer's IP address and not directly to the server's. Some load balancers
+may have to adjust some servers' responses to make this possible (eg: the HTTP
+Location header field used in HTTP redirects). Some proxy-based load balancers
+may intercept traffic for an address they don't own, and spoof the client's
+address when connecting to the server. This allows them to be deployed as if
+they were a regular router or firewall, in a cut-through mode very similar to
+the packet based load balancers. This is particularly appreciated for products
+which combine both packet mode and proxy mode. In this case DSR is obviously
+still not possible and the return traffic still has to be routed back to the
+load balancer.
+
+A very scalable layered approach would consist in having a front router which
+receives traffic from multiple load balanced links, and uses ECMP to distribute
+this traffic to a first layer of multiple stateful packet-based load balancers
+(L4). These L4 load balancers in turn pass the traffic to an even larger number
+of proxy-based load balancers (L7), which have to parse the contents to decide
+what server will ultimately receive the traffic.
+
+The number of components and possible paths for the traffic increases the risk
+of failure; in very large environments, it is even normal to permanently have
+a few faulty components being fixed or replaced. Load balancing done without
+awareness of the whole stack's health significantly degrades availability. For
+this reason, any sane load balancer will verify that the components it intends
+to deliver the traffic to are still alive and reachable, and it will stop
+delivering traffic to faulty ones. This can be achieved using various methods.
+
+The most common one consists in periodically sending probes to ensure the
+component is still operational. These probes are called "health checks". They
+must be representative of the type of failure to address. For example a ping-
+based check will not detect that a web server has crashed and doesn't listen to
+a port anymore, while a connection to the port will verify this, and a more
+advanced request may even validate that the server still works and that the
+database it relies on is still accessible. Health checks often involve a few
+retries to cover for occasional measuring errors. The period between checks
+must be small enough to ensure the faulty component is not used for too long
+after an error occurs.
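+
+For illustration purposes, this is how such checks could be expressed in an
+HAProxy configuration (the addresses, the "/health" URI and the timing values
+are invented for the example) :
+
+    backend app
+        # send a real HTTP request every 2 seconds so that a crashed
+        # application is detected even if its port still accepts connections;
+        # 3 failures mark a server down, 2 successes bring it back up
+        option httpchk GET /health
+        server srv1 192.0.2.10:80 check inter 2s fall 3 rise 2
+        server srv2 192.0.2.11:80 check inter 2s fall 3 rise 2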
+
+Other methods consist in sampling the production traffic sent to a destination
+to observe if it is processed correctly or not, and to evict the components
+which return inappropriate responses. However this requires sacrificing a part
+of the production traffic and this is not always acceptable. A combination of
+these two mechanisms provides the best of both worlds, with both of them being
+used to detect a fault, and only health checks to detect the end of the fault.
+A last method involves centralized reporting : a central monitoring agent
+periodically updates all load balancers about all components' state. This gives
+a global view of the infrastructure to all components, though sometimes with
+less accuracy or responsiveness. It's best suited for environments with many
+load balancers and many servers.
+
+Layer 7 load balancers also face another challenge known as stickiness or
+persistence. The principle is that they generally have to direct multiple
+subsequent requests or connections from the same origin (such as an end user)
+the same target. The best known example is the shopping cart on an online
+store. If each click leads to a new connection, the user must always be sent
+to the server which holds his shopping cart. Content-awareness makes it easier
+to spot some elements in the request to identify the server to deliver it to,
+but that's not always enough. For example if the source address is used as a
+key to pick a server, it can be decided that a hash-based algorithm will be
+used and that a given IP address will always be sent to the same server based
+on the address modulo the number of available servers. But if one
+server fails, the result changes and all users are suddenly sent to a different
+server and lose their shopping cart. The solution against this issue consists
+in memorizing the chosen target so that each time the same visitor is seen,
+he's directed to the same server regardless of the number of available servers.
+The information may be stored in the load balancer's memory, in which case it
+may have to be replicated to other load balancers if it's not alone, or it may
+be stored in the client's memory using various methods provided that the client
+is able to present this information back with every request (cookie insertion,
+redirection to a sub-domain, etc). This mechanism provides the extra benefit of
+not having to rely on unstable or unevenly distributed information (such as the
+source IP address). This is in fact the strongest reason to adopt a layer 7
+load balancer instead of a layer 4 one.
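+
+As an example, the cookie-based method described above could be sketched like
+this in an HAProxy configuration (all names and addresses are purely
+illustrative) :
+
+    backend app
+        balance roundrobin
+        # insert a cookie named SRV so that the client presents it back with
+        # every request and keeps reaching the same server regardless of the
+        # number of available servers
+        cookie SRV insert indirect nocache
+        server s1 192.0.2.10:80 cookie s1
+        server s2 192.0.2.11:80 cookie s2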
+
+In order to extract information such as a cookie, a host header field, a URL
+or whatever, a load balancer may need to decrypt SSL/TLS traffic and even
+possibly to re-encrypt it when passing it to the server. This expensive task
+explains why some high-traffic infrastructures require a large number of load
+balancers.
+
+Since a layer 7 load balancer may perform a number of complex operations on the
+traffic (decrypt, parse, modify, match cookies, decide what server to send to,
+etc), it can definitely cause some trouble and will very commonly be accused of
+being responsible for a lot of trouble that it only revealed. Often it will be
+discovered that servers are unstable and periodically go up and down, or for
+web servers, that they deliver pages with some hard-coded links forcing the
+clients to connect directly to one specific server without passing via the load
+balancer, or that they take ages to respond under high load causing timeouts.
+That's why logging is an extremely important aspect of layer 7 load balancing.
+Once a trouble is reported, it is important to figure out whether the load
+balancer took a wrong decision and if so, why, so that it doesn't happen again.
+
+
+3. Introduction to HAProxy
+--------------------------
+
+HAProxy is written "HAProxy" to designate the product, "haproxy" to designate
+the executable program, software package or a process, though both are commonly
+used for both purposes, and is pronounced H-A-Proxy. Very early on it stood
+for "high availability proxy" and the name was written as two separate words,
+though by now it means nothing other than "HAProxy".
+
+
+3.1. What HAProxy is and is not
+-------------------------------
+
+HAProxy is :
+
+ - a TCP proxy : it can accept a TCP connection from a listening socket,
+ connect to a server and attach these sockets together allowing traffic to
+ flow in both directions;
+
+ - an HTTP reverse-proxy (called a "gateway" in HTTP terminology) : it presents
+ itself as a server, receives HTTP requests over connections accepted on a
+ listening TCP socket, and passes the requests from these connections to
+ servers using different connections.
+
+ - an SSL terminator / initiator / offloader : SSL/TLS may be used on the
+ connection coming from the client, on the connection going to the server,
+ or even on both connections.
+
+ - a TCP normalizer : since connections are locally terminated by the operating
+ system, there is no relation between both sides, so abnormal traffic such as
+ invalid packets, flag combinations, window advertisements, sequence numbers,
+ incomplete connections (SYN floods) and so on will not be passed to the
+ other side. This protects fragile TCP stacks from protocol attacks, and
+ also makes it possible to optimize the connection parameters with the
+ client without having to modify the servers' TCP stack settings.
+
+ - an HTTP normalizer : when configured to process HTTP traffic, only valid
+ complete requests are passed. This protects against a lot of protocol-based
+ attacks. Additionally, protocol deviations for which there is a tolerance
+ in the specification are fixed so that they don't cause problems on the
+ servers (eg: multiple-line headers).
+
+ - an HTTP fixing tool : it can modify / fix / add / remove / rewrite the URL
+ or any request or response header. This helps fix interoperability issues
+ in complex environments.
+
+ - a content-based switch : it can consider any element from the request to
+ decide what server to pass the request or connection to. Thus it is possible
+ to handle multiple protocols over a same port (eg: http, https, ssh).
+
+ - a server load balancer : it can load balance TCP connections and HTTP
+ requests. In TCP mode, load balancing decisions are taken for the whole
+ connection. In HTTP mode, decisions are taken per request.
+
+ - a traffic regulator : it can apply some rate limiting at various points,
+ protect the servers against overloading, adjust traffic priorities based on
+ the contents, and even pass such information to lower layers and outer
+ network components by marking packets.
+
+ - a protection against DDoS and service abuse : it can maintain a wide number
+ of statistics per IP address, URL, cookie, etc and detect when an abuse is
+ happening, then take action (slow down the offenders, block them, send them
+ to outdated contents, etc).
+
+ - an observation point for network troubleshooting : due to the precision of
+ the information reported in logs, it is often used to narrow down some
+ network-related issues.
+
+ - an HTTP compression offloader : it can compress responses which were not
+ compressed by the server, thus reducing the page load time for clients with
+ poor connectivity or using high-latency mobile networks.
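+
+As a quick illustration, the content-based switching mentioned above could
+look like this (the backend names and matched paths are examples only) :
+
+    frontend fe_main
+        bind :80
+        # route requests based on the URL path; anything else goes to the
+        # default backend
+        use_backend be_static if { path_beg /static/ }
+        use_backend be_api    if { path_beg /api/ }
+        default_backend be_app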
+
+HAProxy is not :
+
+ - an explicit HTTP proxy, i.e. the proxy that browsers use to reach the
+ internet. There is excellent open-source software dedicated to this task,
+ such as Squid. However HAProxy can be installed in front of such a proxy to
+ provide load balancing and high availability.
+
+ - a caching proxy : it will return as-is the contents it received from the
+ server and will not interfere with any caching policy. There is excellent
+ open-source software for this task such as Varnish. HAProxy can be installed
+ in front of such a cache to provide SSL offloading, and scalability through
+ smart load balancing.
+
+ - a data scrubber : it will not modify the body of requests nor responses.
+
+ - a web server : during startup, it isolates itself inside a chroot jail and
+ drops its privileges, so that it will not perform any single file-system
+ access once started. As such it cannot be turned into a web server. There
+ is excellent open-source software for this, such as Apache or Nginx, and
+ HAProxy can be installed in front of them to provide load balancing and
+ high availability.
+
+ - a packet-based load balancer : it will not see IP packets nor UDP
+ datagrams, will not perform NAT, let alone DSR. These are tasks for lower
+ layers. Some kernel-based components such as IPVS (Linux Virtual Server)
+ already do this pretty well and complement HAProxy perfectly.
+
+
+3.2. How HAProxy works
+----------------------
+
+HAProxy is a single-threaded, event-driven, non-blocking engine combining a very
+fast I/O layer with a priority-based scheduler. As it is designed with a data
+forwarding goal in mind, its architecture is optimized to move data as fast as
+possible with the least possible operations. As such it implements a layered
+model offering bypass mechanisms at each level ensuring data don't reach higher
+levels when not needed. Most of the processing is performed in the kernel, and
+HAProxy does its best to help the kernel do the work as fast as possible by
+giving some hints or by avoiding certain operations when it guesses they could
+be grouped later. As a result, typical figures show 15% of the processing time
+spent in HAProxy versus 85% in the kernel in TCP or HTTP close mode, and about
+30% for HAProxy versus 70% for the kernel in HTTP keep-alive mode.
+
+A single process can run many proxy instances; configurations as large as
+300000 distinct proxies in a single process were reported to run fine. Thus
+there is usually no need to start more than one process for all instances.
+
+It is possible to make HAProxy run over multiple processes, but it comes with
+a few limitations. In general it doesn't make sense in HTTP close or TCP modes
+because the kernel-side doesn't scale very well with some operations such as
+connect(). It scales pretty well for HTTP keep-alive mode but the performance
+that can be achieved out of a single process generally outperforms common needs
+by an order of magnitude. It does however make sense when used as an SSL
+offloader, and this feature is well supported in multi-process mode.
+
+HAProxy only requires the haproxy executable and a configuration file to run.
+For logging it is highly recommended to have a properly configured syslog daemon
+and log rotations in place. The configuration files are parsed before starting,
+then HAProxy tries to bind all listening sockets, and refuses to start if
+anything fails. Past this point it cannot fail anymore. This means that there
+are no runtime failures and that if it starts successfully, it will work until
+it is stopped.
+
+Once HAProxy is started, it does exactly 3 things :
+
+ - process incoming connections;
+
+ - periodically check the servers' status (known as health checks);
+
+ - exchange information with other haproxy nodes.
+
+Processing incoming connections is by far the most complex task as it depends
+on a lot of configuration possibilities, but it can be summarized as the 9 steps
+below :
+
+ - accept incoming connections from listening sockets that belong to a
+ configuration entity known as a "frontend", which references one or multiple
+ listening addresses;
+
+ - apply the frontend-specific processing rules to these connections that may
+ result in blocking them, modifying some headers, or intercepting them to
+ execute some internal applets such as the statistics page or the CLI;
+
+ - pass these incoming connections to another configuration entity representing
+ a server farm known as a "backend", which contains the list of servers and
+ the load balancing strategy for this server farm;
+
+ - apply the backend-specific processing rules to these connections;
+
+ - decide which server to forward the connection to according to the load
+ balancing strategy;
+
+ - apply the backend-specific processing rules to the response data;
+
+ - apply the frontend-specific processing rules to the response data;
+
+ - emit a log to report what happened in fine detail;
+
+ - in HTTP, loop back to the second step to wait for a new request, otherwise
+ close the connection.
+
+Frontends and backends are sometimes considered as half-proxies, since they only
+look at one side of an end-to-end connection; the frontend only cares about the
+clients while the backend only cares about the servers. HAProxy also supports
+full proxies which are exactly the union of a frontend and a backend. When HTTP
+processing is desired, the configuration will generally be split into frontends
+and backends as they open a lot of possibilities since any frontend may pass a
+connection to any backend. With TCP-only proxies, using frontends and backends
+rarely provides a benefit and the configuration can be more readable with full
+proxies.
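+
+For example, a simple TCP-only proxy may be written as a single full-proxy
+"listen" section, while the equivalent split form uses a frontend and a
+backend (the addresses are examples) :
+
+    listen mysql
+        mode tcp
+        bind :3306
+        server db1 192.0.2.20:3306
+
+    # equivalent split form :
+    frontend fe_mysql
+        mode tcp
+        bind :3306
+        default_backend be_mysql
+
+    backend be_mysql
+        mode tcp
+        server db1 192.0.2.20:3306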
+
+
+3.3. Basic features
+-------------------
+
+This section will enumerate a number of features that HAProxy implements, some
+of which are generally expected from any modern load balancer, and some of
+which are a direct benefit of HAProxy's architecture. More advanced features
+will be detailed in the next section.
+
+
+3.3.1. Basic features : Proxying
+--------------------------------
+
+Proxying is the action of transferring data between a client and a server over
+two independent connections. The following basic features are supported by
+HAProxy regarding proxying and connection management :
+
+ - Provide the server with a clean connection to protect them against any
+ client-side defect or attack;
+
+ - Listen to multiple IP addresses and/or ports, even port ranges;
+
+ - Transparent accept : intercept traffic targeting any arbitrary IP address
+ that doesn't even belong to the local system;
+
+ - Server port doesn't need to be related to listening port, and may even be
+ translated by a fixed offset (useful with ranges);
+
+ - Transparent connect : spoof the client's (or any) IP address if needed
+ when connecting to the server;
+
+ - Provide a reliable return IP address to the servers in multi-site LBs;
+
+ - Offload the servers thanks to buffers and possibly short-lived connections
+ to reduce their concurrent connection count and their memory footprint;
+
+ - Optimize TCP stacks (eg: SACK), congestion control, and reduce RTT impacts;
+
+ - Support different protocol families on both sides (eg: IPv4/IPv6/Unix);
+
+ - Timeout enforcement : HAProxy supports multiple levels of timeouts depending
+ on the stage the connection is at, so that a dead client or server, or an
+ attacker cannot be granted resources for too long;
+
+ - Protocol validation: HTTP, SSL, or payload are inspected and invalid
+ protocol elements are rejected, unless instructed to accept them anyway;
+
+ - Policy enforcement : ensure that only what is allowed may be forwarded;
+
+ - Both incoming and outgoing connections may be limited to certain network
+ namespaces (Linux only), making it easy to build a cross-container,
+ multi-tenant load balancer;
+
+ - The PROXY protocol presents the client's IP address to the server even for
+ non-HTTP traffic. This HAProxy extension has by now been adopted by a
+ number of third-party products, including at least the following at the
+ time of writing :
+ - client : haproxy, stud, stunnel, exaproxy, ELB, squid
+ - server : haproxy, stud, postfix, exim, nginx, squid, node.js, varnish
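+
+For instance, passing the client's address to an SMTP server supporting the
+PROXY protocol only requires the "send-proxy" keyword on the server line
+(assuming the server was configured to accept the protocol; the address is an
+example) :
+
+    backend be_smtp
+        mode tcp
+        # the client's original address is conveyed via the PROXY protocol
+        server mta1 192.0.2.25:25 send-proxy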
+
+
+3.3.2. Basic features : SSL
+---------------------------
+
+HAProxy's SSL stack is recognized as one of the most featureful by Google's
+engineers (http://istlsfastyet.com/). The most commonly used features making
+it quite complete are :
+
+ - SNI-based multi-hosting with no limit on the number of sites and a focus
+ on performance. At least one deployment is known to serve 50000 domains
+ with their respective certificates;
+
+ - support for wildcard certificates reduces the need for many certificates ;
+
+ - certificate-based client authentication with configurable policies on
+ failure to present a valid certificate. This makes it possible, for
+ example, to direct such clients to a different server farm where the
+ client certificate can be regenerated;
+
+ - authentication of the backend server ensures the backend server is the real
+ one and not a man in the middle;
+
+ - authentication with the backend server lets the backend server verify that
+ it's really the expected haproxy node that is connecting to it;
+
+ - TLS NPN and ALPN extensions make it possible to reliably offload SPDY/HTTP2
+ connections and pass them in clear text to backend servers;
+
+ - OCSP stapling further reduces first page load time by delivering an OCSP
+ response inline when the client sends a Certificate Status Request;
+
+ - Dynamic record sizing provides both high performance and low latency, and
+ significantly reduces page load time by letting the browser start to fetch
+ new objects while packets are still in flight;
+
+ - permanent access to all relevant SSL/TLS layer information for logging,
+ access control, reporting etc... These elements can be embedded into HTTP
+ headers or even into a PROXY protocol extension so that the offloaded server
+ gets all the information it would have had if it performed the SSL
+ termination itself.
+
+ - Detect, log and block certain known attacks even on vulnerable SSL libs,
+ such as the Heartbleed attack affecting certain versions of OpenSSL.
+
+ - support for stateless session resumption (RFC 5077 TLS Ticket extension).
+ TLS tickets can be updated from the CLI, which provides a means to
+ implement Perfect Forward Secrecy by frequently rotating the tickets.
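+
+A minimal sketch of SSL termination in front of a clear-text farm (the
+certificate path is an example; pointing "crt" at a directory loads all the
+certificates it contains and enables SNI-based multi-hosting) :
+
+    frontend fe_https
+        # terminate TLS using all certificates found in the directory
+        bind :443 ssl crt /etc/haproxy/certs/
+        default_backend be_app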
+
+
+3.3.3. Basic features : Monitoring
+----------------------------------
+
+HAProxy focuses a lot on availability. As such it cares about the servers'
+state, and about reporting its own state to other network components :
+
+ - The servers' state is continuously monitored using per-server parameters.
+ This ensures the path to the server is operational for regular traffic;
+
+ - Health checks support separate rise and fall thresholds (hysteresis) for
+ up and down transitions in order to protect against state flapping;
+
+ - Checks can be sent to a different address/port/protocol : this makes it
+ easy to check a single service that is considered representative of multiple
+ ones, for example the HTTPS port for an HTTP+HTTPS server.
+
+ - Servers can track other servers and go down simultaneously : this ensures
+ that servers hosting multiple services can fail atomically and that no one
+ will be sent to a partially failed server;
+
+ - Agents may be deployed on the server to monitor load and health : a server
+ may be interested in reporting its load, operational status, administrative
+ status independently from what health checks can see. By running a simple
+ agent on the server, it's possible to consider the server's view of its own
+ health in addition to the health checks validating the whole path;
+
+ - Various check methods are available : TCP connect, HTTP request, SMTP hello,
+ SSL hello, LDAP, SQL, Redis, send/expect scripts, all with/without SSL;
+
+ - State change is notified in the logs and stats page with the failure reason
+ (eg: the HTTP response received at the moment the failure was detected). An
+ e-mail can also be sent to a configurable address upon such a change ;
+
+ - Server state is also reported on the stats interface and can be used to take
+ routing decisions so that traffic may be sent to different farms depending
+ on their sizes and/or health (eg: loss of an inter-DC link);
+
+ - HAProxy can use health check requests to pass information to the servers,
+ such as their names, weight, the number of other servers in the farm etc...
+ so that servers can adjust their response and decisions based on this
+ knowledge (eg: postpone backups to keep more CPU available);
+
+ - Servers can use health checks to report more detailed state than just on/off
+ (eg: I would like to stop, please stop sending new visitors);
+
+ - HAProxy itself can report its state to external components such as routers
+ or other load balancers, allowing to build very complete multi-path and
+ multi-layer infrastructures.
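+
+Some of these capabilities translate directly into configuration keywords; the
+sketch below is only an example (addresses, ports and the URI are invented) :
+
+    backend be_http
+        option httpchk GET /alive
+        # the HTTPS port is checked as representative of both services
+        server www1 192.0.2.10:80 check port 443 check-ssl
+        # a small agent running on the server reports its load and state
+        server www2 192.0.2.11:80 check agent-check agent-port 9999
+
+    backend be_https
+        # track www1's checks above instead of duplicating them
+        server www1 192.0.2.10:443 track be_http/www1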
+
+
+3.3.4. Basic features : High availability
+-----------------------------------------
+
+Just like any serious load balancer, HAProxy cares a lot about availability to
+ensure the best global service continuity :
+
+ - Only valid servers are used ; the other ones are automatically evicted
+ from load balancing farms ; under certain conditions it is still possible
+ to force their use though;
+
+ - Support for a graceful shutdown so that it is possible to take servers out
+ of a farm without affecting any connection;
+
+ - Backup servers are automatically used when active servers are down and
+ replace them so that sessions are not lost when possible. This also makes
+ it possible to build multiple paths to reach the same server (eg: multiple
+ interfaces);
+
+ - Ability to return a global failed status for a farm when too many servers
+ are down. This, combined with the monitoring capabilities makes it possible
+ for an upstream component to choose a different LB node for a given service;
+
+ - Stateless design makes it easy to build clusters : by design, HAProxy does
+ its best to ensure the highest service continuity without having to store
+ information that could be lost in the event of a failure. This ensures that
+ a takeover is the most seamless possible;
+
+ - Integrates well with the standard VRRP daemon keepalived : HAProxy easily
+ tells keepalived about its state and copes very well with floating virtual
+ IP addresses. Note: prefer IP redundancy protocols (VRRP/CARP) to cluster-
+ based solutions (Heartbeat, ...) as they're the ones offering the fastest,
+ most seamless, and most reliable switchover.
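+
+For example, reporting a global failed status for a farm so that an upstream
+component can switch to another LB node could be sketched this way (names and
+thresholds are illustrative) :
+
+    frontend fe_main
+        bind :80
+        # return a failed status on the monitoring URI when fewer than
+        # 2 servers remain in the farm
+        monitor-uri /lb_status
+        monitor fail if { nbsrv(be_app) lt 2 }
+        default_backend be_app
+
+    backend be_app
+        server s1 192.0.2.10:80 check
+        server s2 192.0.2.11:80 check
+        # only used once all regular servers are down
+        server spare 192.0.2.12:80 check backup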
+
+
+3.3.5. Basic features : Load balancing
+--------------------------------------
+
+HAProxy offers a fairly complete set of load balancing features, most of which
+are unfortunately not available in a number of other load balancing products :
+
+ - no less than 9 load balancing algorithms are supported, some of which apply
+ to input data to offer an infinite list of possibilities. The most common
+ ones are round-robin (for short connections, pick each server in turn),
+ leastconn (for long connections, pick the least recently used of the servers
+ with the lowest connection count), source (for SSL farms or terminal server
+ farms, the server directly depends on the client's source address), uri (for
+ HTTP caches, the server directly depends on the HTTP URI), hdr (the server
+ directly depends on the contents of a specific HTTP header field), first
+ (for short-lived virtual machines, all connections are packed on the
+ smallest possible subset of servers so that unused ones can be powered
+ down);
+
+ - all algorithms above support per-server weights so that it is possible to
+ accommodate different server generations in a farm, or direct a small
+ fraction of the traffic to specific servers (debug mode, running the next
+ version of the software, etc);
+
+ - dynamic weights are supported for round-robin, leastconn and consistent
+ hashing ; this allows server weights to be modified on the fly from the CLI
+ or even by an agent running on the server;
+
+ - slow-start is supported whenever a dynamic weight is supported; this allows
+ a server to progressively take the traffic. This is an important feature
+ for fragile application servers which need to compile classes at runtime
+ as well as cold caches which need to fill up before being run at full
+ throttle;
+
+ - hashing can apply to various elements such as client's source address, URL
+ components, query string element, header field values, POST parameter, RDP
+ cookie;
+
+ - consistent hashing protects server farms against massive redistribution when
+ adding or removing servers in a farm. That's very important in large cache
+ farms and it allows slow-start to be used to refill cold caches;
+
+ - a number of internal metrics such as the number of connections per server
+ or per backend, or the amount of available connection slots in a backend,
+ make it possible to build very advanced load balancing strategies.
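+
+A few of these features expressed in configuration form (the weights,
+addresses and durations are examples only) :
+
+    backend be_app
+        balance roundrobin
+        # s2 belongs to a newer generation and takes twice the traffic;
+        # slowstart lets a returning server warm up progressively
+        server s1 192.0.2.10:80 weight 50 check
+        server s2 192.0.2.11:80 weight 100 check slowstart 60s
+
+    backend be_cache
+        # consistent hashing on the URI limits redistribution when the
+        # farm size changes
+        balance uri
+        hash-type consistent
+        server c1 192.0.2.20:80 check
+        server c2 192.0.2.21:80 check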
+
+
+3.3.6. Basic features : Stickiness
+----------------------------------
+
+Application load balancing would be useless without stickiness. HAProxy provides
+a fairly comprehensive set of possibilities to maintain a visitor on the same
+server even across various events such as server addition/removal, down/up
+cycles, and some methods are designed to be resistant to the distance between
+multiple load balancing nodes in that they don't require any replication :
+
+ - stickiness information can be individually matched and learned from
+ different places if desired. For example a JSESSIONID cookie may be matched
+ both in a cookie and in the URL. Up to 8 parallel sources can be learned at
+ the same time and each of them may point to a different stick-table;
+
+ - stickiness information can come from anything that can be seen within a
+ request or response, including source address, TCP payload offset and
+ length, HTTP query string elements, header field values, cookies, and so
+ on...
+
+ - stick-tables are replicated between all nodes in a multi-master fashion ;
+
+ - commonly used elements such as SSL-ID or RDP cookies (for TSE farms) are
+ directly accessible to ease manipulation;
+
+ - all sticking rules may be dynamically conditioned by ACLs;
+
+ - it is possible to decide not to stick to certain servers, such as backup
+ servers, so that when the nominal server comes back, it automatically takes
+ the load back. This is often used in multi-path environments;
+
+ - in HTTP it is often preferred not to learn anything and instead manipulate
+ a cookie dedicated to stickiness. For this, it's possible to detect,
+ rewrite, insert or prefix such a cookie to let the client remember what
+ server was assigned;
+
+ - the server may decide to change or clean the stickiness cookie on logout,
+ so that leaving visitors are automatically unbound from the server;
+
+ - using ACL-based rules it is also possible to selectively ignore or enforce
+ stickiness regardless of the server's state; combined with advanced health
+ checks, that helps admins verify that the server they're installing is up
+ and running before presenting it to the whole world;
+
+ - an innovative mechanism to set a maximum idle time and duration on cookies
+ ensures that stickiness can be smoothly stopped on devices which are never
+ closed (smartphones, TVs, home appliances) without having to store them on
+ persistent storage;
+
+ - multiple server entries may share the same stickiness keys so that
+ stickiness is not lost in multi-path environments when one path goes down;
+
+ - soft-stop ensures that only users with stickiness information will continue
+ to reach the server they've been assigned to but no new users will go there.
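+
+As an illustration of the cookie-based stickiness described above, a minimal
+backend could be sketched as follows (server names and addresses are examples
+only); HAProxy inserts a "SRVID" cookie so that the client remembers its
+assigned server :
+
+    backend app
+        balance roundrobin
+        cookie SRVID insert indirect nocache
+        server s1 192.0.2.11:80 cookie s1 check
+        server s2 192.0.2.12:80 cookie s2 check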
+
+
+3.3.7. Basic features : Sampling and converting information
+-----------------------------------------------------------
+
+HAProxy supports information sampling using a wide set of "sample fetch
+functions". The principle is to extract pieces of information known as samples,
+for immediate use. This is used for stickiness, to build conditions, to produce
+information in logs or to enrich HTTP headers.
+
+Samples can be fetched from various sources :
+
+ - constants : integers, strings, IP addresses, binary blocks;
+
+ - the process : date, environment variables, server/frontend/backend/process
+ state, byte/connection counts/rates, queue length, random generator, ...
+
+ - variables : per-session, per-request, per-response variables;
+
+ - the client connection : source and destination addresses and ports, and all
+ related statistics counters;
+
+ - the SSL client session : protocol, version, algorithm, cipher, key size,
+ session ID, all client and server certificate fields, certificate serial,
+ SNI, ALPN, NPN, client support for certain extensions;
+
+ - request and response buffers contents : arbitrary payload at offset/length,
+ data length, RDP cookie, decoding of SSL hello type, decoding of TLS SNI;
+
+ - HTTP (request and response) : method, URI, path, query string arguments,
+ status code, header values, positional header value, cookies, captures,
+ authentication, body elements;
+
+A sample may then pass through a number of operators known as "converters" to
+experience some transformation. A converter consumes a sample and produces a
+new one, possibly of a completely different type. For example, a converter may
+be used to return only the integer length of the input string, or could turn a
+string to upper case. Any arbitrary number of converters may be applied in
+series to a sample before final use. Among all available sample converters, the
+following ones are the most commonly used :
+
+ - arithmetic and logic operators : they make it possible to perform advanced
+ computation on input data, such as computing ratios, percentages or simply
+ converting from one unit to another one;
+
+ - IP address masks are useful when some addresses need to be grouped by larger
+ networks;
+
+ - data representation : url-decode, base64, hex, JSON strings, hashing;
+
+ - string conversion : extract substrings at fixed positions, fixed length,
+ extract specific fields around certain delimiters, extract certain words,
+ change case, apply regex-based substitution ;
+
+ - date conversion : convert to http date format, convert local to UTC and
+ conversely, add or remove offset;
+
+ - lookup an entry in a stick table to find statistics or assigned server;
+
+ - map-based key-to-value conversion from a file (mostly used for geolocation).
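+
+As an illustration of chaining a sample fetch with a converter, the following
+hypothetical line would add a header containing the client's /24 network :
+
+    http-request set-header X-Client-Net %[src,ipmask(24)]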
+
+
+3.3.8. Basic features : Maps
+----------------------------
+
+Maps are a powerful type of converter consisting in loading a two-column file
+into memory at boot time, then looking up each input sample from the first
+column and either returning the corresponding pattern on the second column if
+the entry was found, or returning a default value. The output information also
+being a sample, it can in turn experience other transformations including other
+map lookups. Maps are most commonly used to translate the client's IP address
+to an AS number or country code since they support a longest match for network
+addresses but they can be used for various other purposes.
+
+Part of their strength comes from being updatable on the fly either from the CLI
+or from certain actions using other samples, making them capable of storing and
+retrieving information between subsequent accesses. Another strength comes from
+the binary-tree-based indexing which makes them extremely fast even when they
+contain hundreds of thousands of entries, making geolocation very cheap and easy
+to set up.
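+
+For example, the geolocation use case described above could be sketched with a
+hypothetical map file containing "network country" pairs :
+
+    http-request set-header X-Country %[src,map_ip(/etc/haproxy/geo.map,unknown)]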
+
+
+3.3.9. Basic features : ACLs and conditions
+-------------------------------------------
+
+Most operations in HAProxy can be made conditional. Conditions are built by
+combining multiple ACLs using logic operators (AND, OR, NOT). Each ACL is a
+series of tests based on the following elements :
+
+ - a sample fetch method to retrieve the element to test ;
+
+ - an optional series of converters to transform the element ;
+
+ - a list of patterns to match against ;
+
+ - a matching method to indicate how to compare the patterns with the sample
+
+For example, the sample may be taken from the HTTP "Host" header, it could then
+be converted to lower case, then matched against a number of regex patterns
+using the regex matching method.
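+
+The example above could be written like this (the ACL name and host pattern are
+hypothetical) :
+
+    acl valid_host hdr(host),lower -m reg ^(www|static)\.example\.com$
+    http-request deny unless valid_host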
+
+Technically, ACLs are built on the same core as the maps, they share the exact
+same internal structure, pattern matching methods and performance. The only real
+difference is that instead of returning a sample, they only return "found" or
+"not found". In terms of usage, ACL patterns may be declared inline in the
+configuration file and do not require their own file. ACLs may be named for ease
+of use or to make configurations understandable. A named ACL may be declared
+multiple times and it will evaluate all definitions in turn until one matches.
+
+About 13 different pattern matching methods are provided, among which IP address
+mask, integer ranges, substrings, regex. They work like functions, and just like
+with any programming language, only what is needed is evaluated, so when a
+condition involving an OR is already true, next ones are not evaluated, and
+similarly when a condition involving an AND is already false, the rest of the
+condition is not evaluated.
+
+There is no practical limit to the number of declared ACLs, and a handful of
+commonly used ones are provided. However experience has shown that setups using
+a lot of named ACLs are quite hard to troubleshoot and that sometimes using
+anonymous ACLs inline is easier as it requires fewer references out of the scope
+being analysed.
+
+
+3.3.10. Basic features : Content switching
+------------------------------------------
+
+HAProxy implements a mechanism known as content-based switching. The principle
+is that a connection or request arrives on a frontend, the information carried
+with this request or connection is processed, and at this point it is possible
+to write ACL-based conditions making use of this information to decide what
+backend will process the request. Thus the traffic is directed to
+one backend or another based on the request's contents. The most common example
+consists in using the Host header and/or elements from the path (sub-directories
+or file-name extensions) to decide whether an HTTP request targets a static
+object or the application, and to route static objects traffic to a backend made
+of fast and light servers, and all the remaining traffic to a more complex
+application server, thus constituting a fine-grained virtual hosting solution.
+This is quite convenient to make multiple technologies coexist as a more global
+solution.
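+
+A minimal sketch of this common example, with hypothetical backend names :
+
+    frontend www
+        bind :80
+        acl is_static path_end .jpg .png .gif .css .js
+        use_backend be_static if is_static
+        default_backend be_app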
+
+Another use case of content-switching consists in using different load balancing
+algorithms depending on various criteria. A cache may use a URI hash while an
+application would use round robin.
+
+Last but not least, it allows multiple customers to use a small share of a
+common resource by enforcing per-backend (thus per-customer) connection limits.
+
+Content switching rules scale very well, though their performance may depend on
+the number and complexity of the ACLs in use. But it is also possible to write
+dynamic content switching rules where a sample value directly turns into a
+backend name, without making use of ACLs at all. Such configurations have
+been reported to work fine at least with 300000 backends in production.
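+
+Such a dynamic rule might be sketched as follows, deriving the backend name
+directly from the Host header (assuming matching backends exist) :
+
+    use_backend bk_%[req.hdr(host),lower]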
+
+
+3.3.11. Basic features : Stick-tables
+-------------------------------------
+
+Stick-tables are commonly used to store stickiness information, that is, to keep
+a reference to the server a certain visitor was directed to. The key is then the
+identifier associated with the visitor (its source address, the SSL ID of the
+connection, an HTTP or RDP cookie, the customer number extracted from the URL or
+from the payload, ...) and the stored value is then the server's identifier.
+
+Stick tables may use 3 different types of samples for their keys : integers,
+strings and addresses. Only one stick-table may be referenced in a proxy, and it
+is designated everywhere with the proxy name. Up to 8 keys may be tracked in
+parallel. The server identifier is committed during request or response
+processing once both the key and the server are known.
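+
+For example, source-address stickiness could be declared this way (table size
+and expiration are arbitrary) :
+
+    backend app
+        stick-table type ip size 200k expire 30m
+        stick on src
+        server s1 192.0.2.11:80 check
+        server s2 192.0.2.12:80 check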
+
+Stick-table contents may be replicated in active-active mode with other HAProxy
+nodes known as "peers" as well as with the new process during a reload operation
+so that all load balancing nodes share the same information and take the same
+routing decision if a client's requests are spread over multiple nodes.
+
+Since stick-tables are indexed on elements that identify a client, they are
+often also used to store extra information such as per-client statistics. The
+extra statistics take some extra space and need to be explicitly declared. The
+type of statistics that may be stored includes the input and output bandwidth,
+the number of concurrent connections, the connection rate and count over a
+period, the amount and frequency of errors, some specific tags and counters,
+etc. In order to support keeping such information without being forced to
+stick to a given server, a special "tracking" feature is implemented to track
+up to 3 simultaneous keys from different tables at the same time regardless of
+stickiness rules. Each stored statistic may be searched, dumped
+and cleared from the CLI and adds to the live troubleshooting capabilities.
+
+While this mechanism can be used to give preferential treatment to a returning
+visitor or to adjust the delivered quality of service depending on good or bad
+behaviour, it is mostly used to fight service abuse and more generally DDoS, as
+it makes it possible to build complex models detecting certain bad behaviours
+at a high processing speed.
+
+
+3.3.12. Basic features : Formatted strings
+------------------------------------------
+
+There are many places where HAProxy needs to manipulate character strings, such
+as logs, redirects, header additions, and so on. In order to provide the
+greatest flexibility, the notion of formatted strings was introduced, initially
+for logging purposes, which explains why it's still called "log-format". These
+strings contain escape sequences that allow introducing various dynamic data
+including variables and sample fetch expressions into strings, and even to
+adjust the encoding while the result is being turned into a string (for example,
+adding quotes). This provides a powerful way to build header contents or to
+customize log lines. Additionally, to keep the most common strings simple to
+build, about 50 special tags are provided as shortcuts for information commonly
+used in logs.
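+
+As an illustration, a custom log format mixing standard tags with a sample
+fetch expression might look like this (the field selection is arbitrary) :
+
+    log-format "%ci:%cp [%t] %ft %b/%s %ST %B %[req.hdr(host)]"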
+
+
+3.3.13. Basic features : HTTP rewriting and redirection
+-------------------------------------------------------
+
+Installing a load balancer in front of an application that was never designed
+for this can be a challenging task without the proper tools. One of the most
+commonly requested operations in this case is to adjust request and response
+headers to make the load balancer appear as the origin server and to fix hard
+coded information. This comes with changing the path in requests (which is
+strongly advised against), modifying the Host header field, modifying the Location
+response header field for redirects, modifying the path and domain attribute
+for cookies, and so on. It also happens that a number of servers are somewhat
+verbose and tend to leak too much information in the response, making them more
+vulnerable to targeted attacks. While it's theoretically not the role of a load
+balancer to clean this up, in practice it's located at the best place in the
+infrastructure to guarantee that everything is cleaned up.
+
+Similarly, sometimes the load balancer will have to intercept some requests and
+respond with a redirect to a new target URL. While some people tend to confuse
+redirects and rewriting, these are two completely different concepts, since the
+rewriting makes the client and the server see different things (and disagree on
+the location of the page being visited) while redirects ask the client to visit
+the new URL so that it sees the same location as the server.
+
+In order to do this, HAProxy supports various possibilities for rewriting and
+redirection, among which :
+
+ - regex-based URL and header rewriting in requests and responses. Regex are
+ the most commonly used tool to modify header values since they're easy to
+ manipulate and well understood;
+
+ - headers may also be appended, deleted or replaced based on formatted strings
+ so that it is possible to pass information there (eg: client side TLS
+ algorithm and cipher);
+
+ - HTTP redirects can use any 3xx code to a relative, absolute, or completely
+ dynamic (formatted string) URI;
+
+ - HTTP redirects also support some extra options such as setting or clearing
+ a specific cookie, dropping the query string, appending a slash if missing,
+ and so on;
+
+ - all operations support ACL-based conditions.
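+
+For example, header manipulation and redirection with an ACL-based condition
+could be sketched this way :
+
+    http-request set-header X-Forwarded-Proto https if { ssl_fc }
+    redirect scheme https code 301 if !{ ssl_fc }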
+
+
+3.3.14. Basic features : Server protection
+------------------------------------------
+
+HAProxy does a lot to maximize service availability, and for this it takes
+great care to protect servers against overloading and attacks. The first
+and most important point is that only complete and valid requests are forwarded
+to the servers. The initial reason is that HAProxy needs to find the protocol
+elements it needs to stay synchronized with the byte stream, and the second
+reason is that until the request is complete, there is no way to know if some
+elements will change its semantics. The direct benefit from this is that servers
+are not exposed to invalid or incomplete requests. This is a very effective
+protection against slowloris attacks, which have almost no impact on HAProxy.
+
+Another important point is that HAProxy contains buffers to store requests and
+responses, and that by only sending a request to a server when it's complete and
+by reading the whole response very quickly from the local network, the server
+side connection is used for a very short time and this preserves server
+resources as much as possible.
+
+A direct extension to this is that HAProxy can artificially limit the number of
+concurrent connections or outstanding requests to a server, which guarantees
+that the server will never be overloaded even if it continuously runs at 100% of
+its capacity during traffic spikes. All excess requests will simply be queued to
+be processed when one slot is released. In the end, these huge resource savings
+often improve server response times so much that processing ends up faster than
+it would be by overloading the server. Queued requests may be redispatched
+to other servers, or even aborted in queue when the client aborts, which also
+protects the servers against the "reload effect", where each click on "reload"
+by a visitor on a slow-loading page usually induces a new request and maintains
+the server in an overloaded state.
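+
+This per-server connection limiting is typically configured with the "maxconn"
+server setting; the values below are examples only :
+
+    backend app
+        timeout queue 30s
+        server s1 192.0.2.11:80 maxconn 100 check
+        server s2 192.0.2.12:80 maxconn 100 check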
+
+The slow-start mechanism also protects restarting servers against high traffic
+levels while they're still finalizing their startup or compiling some classes.
+
+Regarding the protocol-level protection, it is possible to relax the HTTP parser
+to accept non-standard-compliant but harmless requests or responses, and even
+to fix them. This allows bogus applications to remain accessible while a fix is
+being developed. In parallel, offending messages are completely captured with a
+detailed report that helps developers spot the issue in the application. The
+most dangerous protocol violations are properly detected and fixed.
+For example malformed requests or responses with two Content-length headers are
+either fixed if the values are exactly the same, or rejected if they differ,
+since it becomes a security problem. Protocol inspection is not limited to HTTP,
+it is also available for other protocols like TLS or RDP.
+
+When a protocol violation or attack is detected, there are various options to
+respond to the user, such as returning the common "HTTP 400 bad request",
+closing the connection with a TCP reset, or faking an error after a long delay
+("tarpit") to confuse the attacker. All of these contribute to protecting the
+servers by discouraging the offending client from pursuing an attack that
+becomes very expensive to maintain.
+
+HAProxy also proposes some more advanced options to protect against accidental
+data leaks and session crossing. Not only can it log suspicious server
+responses, but it will also log and optionally block a response which might
+affect a given visitor's confidentiality. One such example is a cacheable
+cookie appearing in a cacheable response, which may lead an intermediary cache
+to deliver it to another visitor, causing accidental session sharing.
+
+
+3.3.15. Basic features : Logging
+--------------------------------
+
+Logging is an extremely important feature for a load balancer, first because a
+load balancer is often accused of the trouble it reveals, and second because it
+is placed at a critical point in an infrastructure where all normal and abnormal
+activity needs to be analysed and correlated with other components.
+
+HAProxy provides very detailed logs, with millisecond accuracy and the exact
+connection accept time that can be searched in firewalls logs (eg: for NAT
+correlation). By default, TCP and HTTP logs are quite detailed and contain
+everything needed for troubleshooting, such as source IP address and port,
+frontend, backend, server, timers (request receipt duration, queue duration,
+connection setup time, response headers time, data transfer time), global
+process state, connection counts, queue status, retries count, detailed
+stickiness actions and disconnect reasons, header captures with a safe output
+encoding. It is then possible to extend or replace this format to include any
+sampled data, variables, captures, resulting in very detailed information. For
+example it is possible to log the cumulated number of requests from a given
+client or the number of different URLs visited by that client.
+
+The log level may be adjusted per request using standard ACLs, so it is possible
+to automatically silence logs considered as pollution, and instead raise
+warnings when some abnormal behaviour happens for a small part of the traffic
+(eg: too many URLs or HTTP errors for a source address). Administrative logs are
+also emitted with their own levels to inform about the loss or recovery of a
+server for example.
+
+Each frontend and backend may use multiple independent log outputs, which eases
+multi-tenancy. Logs are preferably sent over UDP, possibly JSON-encoded, and are
+truncated after a configurable line length in order to guarantee delivery.
+
+
+3.3.16. Basic features : Statistics
+-----------------------------------
+
+HAProxy provides a web-based statistics reporting interface with authentication,
+security levels and scopes. It is thus possible to provide each hosted customer
+with his own page showing only his own instances. This page can be located in a
+hidden URL part of the regular web site so that no new port needs to be opened.
+This page may also report the availability of other HAProxy nodes so that it is
+easy to spot if everything works as expected at a glance. The view is synthetic
+with a lot of details accessible (such as error causes, last access and last
+change duration, etc), which are also accessible as a CSV table that other tools
+may import to draw graphs. The page may self-refresh to be used as a monitoring
+page on a large display. In administration mode, the page also allows changing
+a server's state to ease maintenance operations.
+
+
+3.4. Advanced features
+----------------------
+
+3.4.1. Advanced features : Management
+-------------------------------------
+
+HAProxy is designed to remain extremely stable and safe to manage in a regular
+production environment. It is provided as a single executable file which doesn't
+require any installation process. Multiple versions can easily coexist, meaning
+that it's possible (and recommended) to upgrade instances progressively by
+order of criticality instead of migrating all of them at once. Configuration
+files are easily versioned. Configuration checking is done off-line so it
+doesn't require restarting a service that might possibly fail. During
+configuration checks, a number of advanced mistakes may be detected (e.g. a rule
+hiding another one, or stickiness that will not work) and detailed warnings and
+configuration hints are proposed to fix them. Backwards configuration file
+compatibility goes back a long way, with version 1.5 still fully
+supporting configurations for versions 1.1 written 13 years before, and 1.6
+only dropping support for almost unused, obsolete keywords that can be done
+differently. The configuration and software upgrade mechanism is smooth and non
+disruptive in that it allows old and new processes to coexist on the system,
+each handling its own connections. System status, build options and library
+compatibility are reported on startup.
+
+Some advanced features allow an application administrator to smoothly stop a
+server, detect when there's no activity on it anymore, then take it off-line,
+stop it, upgrade it and ensure it doesn't take any traffic while being upgraded,
+then test it again through the normal path without opening it to the public, and
+all of this without touching HAProxy at all. This ensures that even complicated
+production operations may be done during opening hours with all technical
+resources available.
+
+The process tries to save resources as much as possible, uses memory pools to
+save on allocation time and limit memory fragmentation, releases payload buffers
+as soon as their contents are sent, and supports enforcing strong memory limits
+above which connections have to wait for a buffer to become available instead of
+allocating more memory. This system helps guarantee memory usage in certain
+strict environments.
+
+A command line interface (CLI) is available as a UNIX or TCP socket, to perform
+a number of operations and to retrieve troubleshooting information. Everything
+done on this socket doesn't require a configuration change, so it is mostly used
+for temporary changes. Using this interface it is possible to change a server's
+address, weight and status, to consult statistics and clear counters, dump and
+clear stickiness tables, possibly selectively by key criteria, dump and kill
+client-side and server-side connections, dump captured errors with a detailed
+analysis of the exact cause and location of the error, dump, add and remove
+entries from ACLs and maps, update TLS shared secrets, apply connection limits
+and rate limits on the fly to arbitrary frontends (useful in shared hosting
+environments), and disable a specific frontend to release a listening port
+(useful when daytime operations are forbidden and a fix is needed nonetheless).
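+
+For example, with a stats socket declared in the global section, the CLI can be
+consulted using socat (the socket path and server name are examples) :
+
+    global
+        stats socket /var/run/haproxy.sock mode 600 level admin
+
+    $ echo "show info" | socat stdio /var/run/haproxy.sock
+    $ echo "disable server app/s1" | socat stdio /var/run/haproxy.sock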
+
+For environments where SNMP is mandatory, at least two agents exist, one is
+provided with the HAProxy sources and relies on the Net-SNMP perl module.
+Another one is provided with the commercial packages and doesn't require Perl.
+Both are roughly equivalent in terms of coverage.
+
+It is often recommended to install 4 utilities on the machine where HAProxy is
+deployed :
+
+ - socat (in order to connect to the CLI, though certain forks of netcat can
+ also do it to some extent);
+
+ - halog from the latest HAProxy version : this is the log analysis tool, it
+ parses native TCP and HTTP logs extremely fast (1 to 2 GB per second) and
+ extracts useful information and statistics such as requests per URL, per
+ source address, URLs sorted by response time or error rate, termination
+ codes, etc. It was designed to be deployed on the production servers to
+ help troubleshoot live issues so it has to be there ready to be used;
+
+ - tcpdump : this is highly recommended to take the network traces needed to
+ troubleshoot an issue that was made visible in the logs. There is a point
+ where the application's and haproxy's analyses diverge, and the network traces
+ are the only way to tell who's right and who's wrong. It's also fairly common
+ to detect bugs in network stacks and hypervisors thanks to tcpdump;
+
+ - strace : it is tcpdump's companion. It will report what HAProxy really sees
+ and will help sort out the issues the operating system is responsible for
+ from the ones HAProxy is responsible for. Strace is often requested when a
+ bug in HAProxy is suspected;
+
+
+3.4.2. Advanced features : System-specific capabilities
+-------------------------------------------------------
+
+Depending on the operating system HAProxy is deployed on, certain extra features
+may be available or needed. While it is supported on a number of platforms,
+HAProxy is primarily developed on Linux, which explains why some features are
+only available on this platform.
+
+The transparent bind and connect features, the support for binding connections
+to a specific network interface, as well as the ability to bind multiple
+processes to the same IP address and ports are only available on Linux and BSD
+systems, though only Linux performs a kernel-side load balancing of the incoming
+requests between the available processes.
+
+On Linux, there are also a number of extra features and optimizations including
+support for network namespaces (also known as "containers") allowing HAProxy to
+be a gateway between all containers, the ability to set the MSS, Netfilter marks
+and IP TOS field on the client side connection, support for TCP FastOpen on the
+listening side, TCP user timeouts to let the kernel quickly kill connections
+when it detects the client has disappeared before the configured timeouts, TCP
+splicing to let the kernel forward data between the two sides of a connection,
+thus avoiding multiple memory copies, the ability to enable the "defer-accept"
+bind option to only get notified of an incoming connection once data become
+available in the kernel buffers, and the ability to send the request with the
+ACK confirming a connect (sometimes called "piggy-back"), which is enabled with
+the "tcp-smart-connect" option. On Linux, HAProxy also takes great care of
+manipulating the TCP delayed ACKs to save as many packets as possible on the
+network.
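+
+Some of these Linux-specific features are enabled with simple keywords, for
+example (the values below are purely illustrative) :
+
+    frontend fe
+        bind :80 defer-accept mss 1448
+
+    backend be
+        option tcp-smart-connect
+        server s1 192.0.2.11:80 check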
+
+Some systems have an unreliable clock which jumps back and forth in the past
+and in the future. This used to happen with some NUMA systems where multiple
+processors didn't see the exact same time of day, and recently it became more
+common in virtualized environments where the virtual clock has no relation with
+the real clock, resulting in huge time jumps (sometimes up to 30 seconds have
+been observed). This causes a lot of trouble with respect to timeout enforcement
+in general. Due to this flaw of these systems, HAProxy maintains its own
+monotonic clock which is based on the system's clock but where drift is measured
+and compensated for. This ensures that even with a very bad system clock, timers
+remain reasonably accurate and timeouts continue to work. Note that this problem
+affects all the software running on such systems and is not specific to HAProxy.
+The common effects are spurious timeouts or application freezes. Thus if this
+behaviour is detected on a system, it must be fixed, regardless of the fact that
+HAProxy protects itself against it.
+
+
+3.4.3. Advanced features : Scripting
+------------------------------------
+
+HAProxy can be built with support for the Lua embedded language, which opens a
+wide area of new possibilities related to complex manipulation of requests or
+responses, routing decisions, statistics processing and so on. Using Lua it is
+even possible to establish parallel connections to other servers to exchange
+information. This way it becomes possible (though complex) to develop an
+authentication system for example. Please refer to the documentation in the file
+"doc/lua-api/index.rst" for more information on how to use Lua.
+
+
+3.5. Sizing
+-----------
+
+Typical CPU usage figures show 15% of the processing time spent in HAProxy
+versus 85% in the kernel in TCP or HTTP close mode, and about 30% for HAProxy
+versus 70% for the kernel in HTTP keep-alive mode. This means that the operating
+system and its tuning have a strong impact on the global performance.
+
+Usages vary a lot between users, some focusing on bandwidth, others on request
+rate, others on connection concurrency, others on SSL performance. This section
+aims at providing a few elements to help with this task.
+
+It is important to keep in mind that every operation comes with a cost, so each
+individual operation adds its overhead on top of the other ones, which may be
+negligible in certain circumstances, and which may dominate in other cases.
+
+When processing the requests from a connection, we can say that :
+
+ - forwarding data costs less than parsing request or response headers;
+
+ - parsing request or response headers costs less than establishing then closing
+ a connection to a server;
+
+ - establishing and closing a connection costs less than a TLS resume operation;
+
+ - a TLS resume operation costs less than a full TLS handshake with a key
+ computation;
+
+ - an idle connection costs less CPU than a connection whose buffers hold data;
+
+ - a TLS context costs even more memory than a connection with data;
+
+So in practice, it is cheaper to process payload bytes than header bytes, thus
+it is easier to achieve high network bandwidth with large objects (few requests
+per volume unit) than with small objects (many requests per volume unit). This
+explains why maximum bandwidth is always measured with large objects, while
+request rate or connection rates are measured with small objects.
+
+Some operations scale well on multiple processes spread over multiple processors,
+and others don't scale as well. Network bandwidth doesn't scale very far because
+the CPU is rarely the bottleneck for large objects, it's mostly the network
+bandwidth and data buses to reach the network interfaces. The connection rate
+doesn't scale well over multiple processors due to a few locks in the system
+when dealing with the local ports table. The request rate over persistent
+connections scales very well as it doesn't involve much memory or network
+bandwidth and doesn't require access to locked structures. TLS key computation
+scales very well as it's totally CPU-bound. TLS resume scales moderately well,
+but reaches its limits around 4 processes where the overhead of accessing the
+shared table offsets the small gains expected from more power.
+
+The performance numbers one can expect from a very well tuned system are in the
+following range. It is important to take them as orders of magnitude and to
+expect significant variations in any direction based on the processor, IRQ
+setting, memory type, network interface type, operating system tuning and so on.
+
+The following numbers were measured on a Core i7 running at 3.7 GHz, equipped
+with a dual-port 10 Gbps NIC, running Linux kernel 3.10, HAProxy 1.6 and OpenSSL
+1.0.2. HAProxy was running as a single process on a single dedicated CPU core,
+and two extra cores were dedicated to network interrupts :
+
+ - 20 Gbps of maximum network bandwidth in clear text for objects 256 kB or
+ higher, 10 Gbps for 41 kB or higher;
+
+ - 4.6 Gbps of TLS traffic using AES256-GCM cipher with large objects;
+
+ - 83000 TCP connections per second from client to server;
+
+ - 82000 HTTP connections per second from client to server;
+
+ - 97000 HTTP requests per second in server-close mode (keep-alive with the
+ client, close with the server);
+
+ - 243000 HTTP requests per second in end-to-end keep-alive mode;
+
+ - 300000 filtered TCP connections per second (anti-DDoS);
+
+ - 160000 HTTPS requests per second in keep-alive mode over persistent TLS
+ connections;
+
+ - 13100 HTTPS requests per second using TLS resumed connections;
+
+ - 1300 HTTPS connections per second using TLS connections renegotiated with
+ RSA2048;
+
+ - 20000 concurrent saturated connections per GB of RAM, including the memory
+ required for system buffers; it is possible to do better with careful tuning
+ but this level is easy to achieve;
+
+ - about 8000 concurrent TLS connections (client-side only) per GB of RAM,
+ including the memory required for system buffers;
+
+ - about 5000 concurrent end-to-end TLS connections (both sides) per GB of
+ RAM including the memory required for system buffers;
+
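+As a purely illustrative sketch of the memory rule above (roughly 20000
+connections per GB of RAM, or about 8000 with client-side TLS), a hypothetical
+global section for a machine with 2 GB of RAM dedicated to HAProxy might look
+like this; the numbers are assumptions for the example, not recommendations :
+
+```
+global
+    # ~20000 connections per GB of RAM including system buffers;
+    # 2 GB dedicated to HAProxy => about 40000 concurrent connections,
+    # or closer to 16000 if all of them were client-side TLS (~8000/GB).
+    maxconn 40000
+```
+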
+Thus a good rule of thumb to keep in mind is that the request rate is divided
+by 10 between TLS keep-alive and TLS resume, and between TLS resume and TLS
+renegotiation, while it's only divided by 3 between HTTP keep-alive and HTTP
+close. Another good rule of thumb is to remember that a high frequency core
+with AES instructions can do around 5 Gbps of AES-GCM per core.
+
+Having more cores rarely helps (except for TLS) and is even counter-productive
+due to the lower frequency that usually accompanies them. In general, a small
+number of high-frequency cores is better.
+
+Another good rule of thumb is to consider that on the same server, HAProxy will
+be able to saturate :
+
+ - about 5-10 static file servers or caching proxies;
+
+ - about 100 anti-virus proxies;
+
+ - and about 100-1000 application servers depending on the technology in use.
+
+
+3.6. How to get HAProxy
+-----------------------
+
+HAProxy is an opensource project covered by the GPLv2 license, meaning that
+everyone is allowed to redistribute it provided that access to the sources is
+also provided upon request, especially if any modifications were made.
+
+HAProxy evolves as a main development branch called "master" or "mainline", from
+which new branches are derived once the code is considered stable. A lot of web
+sites run some development branches in production on a voluntary basis, either
+to participate in the project or because they need a bleeding-edge feature, and
+their feedback is highly valuable to fix bugs and judge the overall quality and
+stability of the version being developed.
+
+The new branches that are created when the code is stable enough constitute a
+stable version and are generally maintained for several years, so that there is
+no urgency to migrate to a newer branch even when you're not on the latest one.
+Once a stable branch is issued, it may only receive bug fixes, and very rarely
+minor feature updates when that makes users' life easier. All fixes that go into
+a stable branch necessarily come from the master branch. This guarantees that no
+fix will be lost after an upgrade. For this reason, if you fix a bug, please
+make the patch against the master branch, not the stable branch. You may even
+discover it was already fixed. This process also ensures that regressions in a
+stable branch are extremely rare, so there is never any excuse for not upgrading
+to the latest version in your current branch.
+
+Branches are numbered with two digits delimited with a dot, such as "1.6". A
+complete version includes one or two sub-version numbers indicating the level of
+fix. For example, version 1.5.14 is the 14th fix release in branch 1.5 after
+version 1.5.0 was issued. It contains 126 fixes for individual bugs, 24 updates
+on the documentation, and 75 other backported patches, most of which were needed
+to fix the aforementioned 126 bugs. An existing feature may never be modified
+nor removed in a stable branch, in order to guarantee that upgrades within the
+same branch will always be harmless.
+
+HAProxy is available from multiple sources, at different release rhythms :
+
+ - The official community web site : http://www.haproxy.org/ : this site
+ provides the sources of the latest development release, all stable releases,
+ as well as nightly snapshots for each branch. The release cycle is not fast:
+ several months may pass between stable releases or development versions.
+ Very old versions are still supported there. Everything is provided as
+ sources only, so whatever comes from there needs to be rebuilt and/or
+ repackaged;
+
+ - A number of operating systems such as Linux distributions and BSD ports.
+ These systems generally provide long-term maintained versions which do not
+ always contain all the fixes from the official ones, but which at least
+ contain the critical fixes. It often is a good option for most users who do
+ not seek advanced configurations and just want to keep updates easy;
+
+ - Commercial versions from http://www.haproxy.com/ : these are supported
+ professional packages built for various operating systems or provided as
+ appliances, based on the latest stable versions and including a number of
+ features backported from the next release for which there is a strong
+ demand. It is the best option for users seeking the latest features with
+ the reliability of a stable branch, the fastest response time to fix bugs,
+ or simply support contracts on top of an opensource product;
+
+
+In order to ensure that the version you're using is the latest one in your
+branch, you need to proceed this way :
+
+ - verify which HAProxy executable you're running : some systems ship it by
+ default and administrators install their versions somewhere else on the
+ system, so it is important to verify in the startup scripts which one is
+ used;
+
+ - determine which source your HAProxy version comes from. For this, it's
+ generally sufficient to type "haproxy -v". A development version will
+ appear like this, with the "dev" word after the branch number :
+
+ HA-Proxy version 1.6-dev3-385ecc-68 2015/08/18
+
+ A stable version will appear like this, as well as unmodified stable
+ versions provided by operating system vendors :
+
+ HA-Proxy version 1.5.14 2015/07/02
+
+ And a nightly snapshot of a stable version will appear like this, with a
+ hexadecimal sequence after the version, and with the date of the snapshot
+ instead of the date of the release :
+
+ HA-Proxy version 1.5.14-e4766ba 2015/07/29
+
+ Any other format may indicate a system-specific package with its own
+ patch set. For example HAProxy Enterprise versions will appear with the
+ following format (<branch>-<latest commit>-<revision>) :
+
+ HA-Proxy version 1.5.0-994126-357 2015/07/02
+
+ - for system-specific packages, you have to check with your vendor's package
+ repository or update system to ensure that your system is still supported,
+ and that fixes are still provided for your branch. For community versions
+ coming from haproxy.org, just visit the site, verify the status of your
+ branch and compare the latest version with yours to see if you're on the
+ latest one. If not, you can upgrade. If your branch is not maintained
+ anymore, you're definitely very late and will have to consider an upgrade
+ to a more recent branch (carefully read the README when doing so).
+
+HAProxy will have to be updated according to the source it came from. Usually it
+follows the system vendor's way of upgrading a package. If it was taken from
+sources, please read the README file in the sources directory after extracting
+the sources and follow the instructions for your operating system.
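+
+The two verification steps above can be sketched with a couple of shell
+commands; their output will of course depend on the system :
+
+```shell
+# Locate the executable actually found in the PATH; the startup scripts
+# may point to a different binary, so verify those separately.
+command -v haproxy || echo "no haproxy in PATH"
+
+# The version string reveals the origin of the build (dev, stable,
+# snapshot, or a vendor-specific format as described above).
+haproxy -v 2>/dev/null || echo "haproxy not installed in this PATH"
+```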
+
+
+4. Companion products and alternatives
+--------------------------------------
+
+HAProxy integrates fairly well with certain products listed below, which is why
+they are mentioned here even if not directly related to HAProxy.
+
+
+4.1. Apache HTTP server
+-----------------------
+
+Apache is the de-facto standard HTTP server. It's a very complete and modular
+project supporting both file serving and dynamic contents. It can serve as a
+frontend for some application servers. It can even proxy requests and cache
+responses. In all of these use cases, a front load balancer is commonly needed.
+Apache can work in various modes, some being heavier than others. Certain
+modules still require the heavier pre-forked model and will prevent Apache from
+scaling well with a high number of connections. In this case HAProxy can help
+tremendously by enforcing a safe per-server connection limit, which will
+significantly speed up the server and preserve its resources so that they are
+better used by the application.
+
+Apache can extract the client's address from the X-Forwarded-For header by using
+the "mod_rpaf" extension. HAProxy will automatically feed this header when
+"option forwardfor" is specified in its configuration. HAProxy may also offer a
+nice protection to Apache when exposed to the internet, where it will better
+resist a wide range of DoS attack types.
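+
+A minimal configuration sketch combining these two points, "option forwardfor"
+plus a per-server connection limit; all names and addresses here are
+hypothetical :
+
+```
+frontend www
+    bind :80
+    option forwardfor            # adds X-Forwarded-For with the client address
+    default_backend apache_farm
+
+backend apache_farm
+    # keep Apache within a safe concurrency; excess requests are queued
+    server apache1 192.0.2.10:80 maxconn 100
+    server apache2 192.0.2.11:80 maxconn 100
+```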
+
+
+4.2. NGINX
+----------
+
+NGINX is the second de-facto standard HTTP server. Just like Apache, it covers a
+wide range of features. NGINX is built on a similar model as HAProxy so it has
+no problem dealing with tens of thousands of concurrent connections. When used
+as a gateway to some applications (eg: using the included PHP FPM), it can often
+be beneficial to set up some frontend connection limiting to reduce the load
+on the PHP application. HAProxy will clearly be useful there both as a regular
+load balancer and as the traffic regulator to speed up PHP by decongesting
+it. Also since both products use very little CPU thanks to their event-driven
+architecture, it's often easy to install both of them on the same system. NGINX
+implements HAProxy's PROXY protocol, thus it is easy for HAProxy to pass the
+client's connection information to NGINX so that the application gets all the
+relevant information. Some benchmarks have also shown that for large static
+file serving, implementing consistent hash on HAProxy in front of NGINX can be
+beneficial by optimizing the OS' cache hit ratio, which is basically multiplied
+by the number of server nodes.
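+
+Passing the client's connection information to NGINX over the PROXY protocol
+only requires the "send-proxy" parameter on the server lines; the names below
+are hypothetical, and NGINX must also enable "proxy_protocol" on its own
+"listen" directive :
+
+```
+backend nginx_farm
+    # send-proxy prepends the PROXY protocol header carrying the original
+    # client address and port on each connection to NGINX
+    server nginx1 192.0.2.20:80 send-proxy
+    server nginx2 192.0.2.21:80 send-proxy
+```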
+
+
+4.3. Varnish
+------------
+
+Varnish is a smart caching reverse-proxy, probably best described as a web
+application accelerator. Varnish doesn't implement SSL/TLS and wants to dedicate
+all of its CPU cycles to what it does best. Varnish also implements HAProxy's
+PROXY protocol so that HAProxy can very easily be deployed in front of Varnish
+as an SSL offloader as well as a load balancer and pass it all relevant client
+information. Also, Varnish naturally supports decompression from the cache when
+a server has provided a compressed object, but does not compress itself. HAProxy
+can then be used to compress outgoing data when backend servers do not implement
+compression, though it's rarely a good idea to compress on the load balancer
+unless the traffic is low.
+
+When building large caching farms across multiple nodes, HAProxy can make use of
+consistent URL hashing to intelligently distribute the load to the caching nodes
+and avoid cache duplication, resulting in a total cache size which is the sum of
+all caching nodes.
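+
+Consistent URL hashing as described above can be sketched like this, with a
+hypothetical backend; "hash-type consistent" keeps most of the URL-to-node
+mapping stable when a node is added or removed :
+
+```
+backend varnish_farm
+    balance uri                  # the same URL always maps to the same node
+    hash-type consistent         # minimal redistribution when nodes change
+    server varnish1 192.0.2.30:80
+    server varnish2 192.0.2.31:80
+    server varnish3 192.0.2.32:80
+```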
+
+
+4.4. Alternatives
+-----------------
+
+Linux Virtual Server (LVS or IPVS) is the layer 4 load balancer included within
+the Linux kernel. It works at the packet level and handles TCP and UDP. In most
+cases it's more a complement than an alternative since it doesn't have layer 7
+knowledge at all.
+
+Pound is another well-known load balancer. It's much simpler and has far fewer
+features than HAProxy, but for many very basic setups both can be used. Its
+author has always focused on code auditability first and wants to keep the
+feature set small. Its thread-based architecture scales less well with high
+connection counts, but it's a good product.
+
+Pen is quite a light load balancer. It supports SSL and maintains persistence
+using a fixed-size table of its clients' IP addresses. It supports a
+packet-oriented mode allowing it to support direct server return and UDP to
+some extent. It is meant for small loads (the persistence table only has 2048
+entries).
+
+NGINX can do some load balancing to some extent, though it's clearly not its
+primary function. Production traffic is used to detect server failures, the
+load balancing algorithms are more limited, and the stickiness is very limited.
+But it can make sense in some simple deployment scenarios where it is already
+present. The good thing is that since it integrates very well with HAProxy,
+there's nothing wrong with adding HAProxy later once its limits are reached.
+
+Varnish also does some load balancing of its backend servers and does support
+real health checks. It doesn't implement stickiness however, so just like with
+NGINX, as long as stickiness is not needed that can be enough to start with.
+And similarly, since HAProxy and Varnish integrate so well together, it's easy
+to add it later into the mix to complement the feature set.
+
--- /dev/null
+ GNU LESSER GENERAL PUBLIC LICENSE
+ Version 2.1, February 1999
+
+ Copyright (C) 1991, 1999 Free Software Foundation, Inc.
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+[This is the first released version of the Lesser GPL. It also counts
+ as the successor of the GNU Library Public License, version 2, hence
+ the version number 2.1.]
+
+ Preamble
+
+ The licenses for most software are designed to take away your
+freedom to share and change it. By contrast, the GNU General Public
+Licenses are intended to guarantee your freedom to share and change
+free software--to make sure the software is free for all its users.
+
+ This license, the Lesser General Public License, applies to some
+specially designated software packages--typically libraries--of the
+Free Software Foundation and other authors who decide to use it. You
+can use it too, but we suggest you first think carefully about whether
+this license or the ordinary General Public License is the better
+strategy to use in any particular case, based on the explanations below.
+
+ When we speak of free software, we are referring to freedom of use,
+not price. Our General Public Licenses are designed to make sure that
+you have the freedom to distribute copies of free software (and charge
+for this service if you wish); that you receive source code or can get
+it if you want it; that you can change the software and use pieces of
+it in new free programs; and that you are informed that you can do
+these things.
+
+ To protect your rights, we need to make restrictions that forbid
+distributors to deny you these rights or to ask you to surrender these
+rights. These restrictions translate to certain responsibilities for
+you if you distribute copies of the library or if you modify it.
+
+ For example, if you distribute copies of the library, whether gratis
+or for a fee, you must give the recipients all the rights that we gave
+you. You must make sure that they, too, receive or can get the source
+code. If you link other code with the library, you must provide
+complete object files to the recipients, so that they can relink them
+with the library after making changes to the library and recompiling
+it. And you must show them these terms so they know their rights.
+
+ We protect your rights with a two-step method: (1) we copyright the
+library, and (2) we offer you this license, which gives you legal
+permission to copy, distribute and/or modify the library.
+
+ To protect each distributor, we want to make it very clear that
+there is no warranty for the free library. Also, if the library is
+modified by someone else and passed on, the recipients should know
+that what they have is not the original version, so that the original
+author's reputation will not be affected by problems that might be
+introduced by others.
+\f
+ Finally, software patents pose a constant threat to the existence of
+any free program. We wish to make sure that a company cannot
+effectively restrict the users of a free program by obtaining a
+restrictive license from a patent holder. Therefore, we insist that
+any patent license obtained for a version of the library must be
+consistent with the full freedom of use specified in this license.
+
+ Most GNU software, including some libraries, is covered by the
+ordinary GNU General Public License. This license, the GNU Lesser
+General Public License, applies to certain designated libraries, and
+is quite different from the ordinary General Public License. We use
+this license for certain libraries in order to permit linking those
+libraries into non-free programs.
+
+ When a program is linked with a library, whether statically or using
+a shared library, the combination of the two is legally speaking a
+combined work, a derivative of the original library. The ordinary
+General Public License therefore permits such linking only if the
+entire combination fits its criteria of freedom. The Lesser General
+Public License permits more lax criteria for linking other code with
+the library.
+
+ We call this license the "Lesser" General Public License because it
+does Less to protect the user's freedom than the ordinary General
+Public License. It also provides other free software developers Less
+of an advantage over competing non-free programs. These disadvantages
+are the reason we use the ordinary General Public License for many
+libraries. However, the Lesser license provides advantages in certain
+special circumstances.
+
+ For example, on rare occasions, there may be a special need to
+encourage the widest possible use of a certain library, so that it becomes
+a de-facto standard. To achieve this, non-free programs must be
+allowed to use the library. A more frequent case is that a free
+library does the same job as widely used non-free libraries. In this
+case, there is little to gain by limiting the free library to free
+software only, so we use the Lesser General Public License.
+
+ In other cases, permission to use a particular library in non-free
+programs enables a greater number of people to use a large body of
+free software. For example, permission to use the GNU C Library in
+non-free programs enables many more people to use the whole GNU
+operating system, as well as its variant, the GNU/Linux operating
+system.
+
+ Although the Lesser General Public License is Less protective of the
+users' freedom, it does ensure that the user of a program that is
+linked with the Library has the freedom and the wherewithal to run
+that program using a modified version of the Library.
+
+ The precise terms and conditions for copying, distribution and
+modification follow. Pay close attention to the difference between a
+"work based on the library" and a "work that uses the library". The
+former contains code derived from the library, whereas the latter must
+be combined with the library in order to run.
+\f
+ GNU LESSER GENERAL PUBLIC LICENSE
+ TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+ 0. This License Agreement applies to any software library or other
+program which contains a notice placed by the copyright holder or
+other authorized party saying it may be distributed under the terms of
+this Lesser General Public License (also called "this License").
+Each licensee is addressed as "you".
+
+ A "library" means a collection of software functions and/or data
+prepared so as to be conveniently linked with application programs
+(which use some of those functions and data) to form executables.
+
+ The "Library", below, refers to any such software library or work
+which has been distributed under these terms. A "work based on the
+Library" means either the Library or any derivative work under
+copyright law: that is to say, a work containing the Library or a
+portion of it, either verbatim or with modifications and/or translated
+straightforwardly into another language. (Hereinafter, translation is
+included without limitation in the term "modification".)
+
+ "Source code" for a work means the preferred form of the work for
+making modifications to it. For a library, complete source code means
+all the source code for all modules it contains, plus any associated
+interface definition files, plus the scripts used to control compilation
+and installation of the library.
+
+ Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope. The act of
+running a program using the Library is not restricted, and output from
+such a program is covered only if its contents constitute a work based
+on the Library (independent of the use of the Library in a tool for
+writing it). Whether that is true depends on what the Library does
+and what the program that uses the Library does.
+
+ 1. You may copy and distribute verbatim copies of the Library's
+complete source code as you receive it, in any medium, provided that
+you conspicuously and appropriately publish on each copy an
+appropriate copyright notice and disclaimer of warranty; keep intact
+all the notices that refer to this License and to the absence of any
+warranty; and distribute a copy of this License along with the
+Library.
+
+ You may charge a fee for the physical act of transferring a copy,
+and you may at your option offer warranty protection in exchange for a
+fee.
+\f
+ 2. You may modify your copy or copies of the Library or any portion
+of it, thus forming a work based on the Library, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+ a) The modified work must itself be a software library.
+
+ b) You must cause the files modified to carry prominent notices
+ stating that you changed the files and the date of any change.
+
+ c) You must cause the whole of the work to be licensed at no
+ charge to all third parties under the terms of this License.
+
+ d) If a facility in the modified Library refers to a function or a
+ table of data to be supplied by an application program that uses
+ the facility, other than as an argument passed when the facility
+ is invoked, then you must make a good faith effort to ensure that,
+ in the event an application does not supply such function or
+ table, the facility still operates, and performs whatever part of
+ its purpose remains meaningful.
+
+ (For example, a function in a library to compute square roots has
+ a purpose that is entirely well-defined independent of the
+ application. Therefore, Subsection 2d requires that any
+ application-supplied function or table used by this function must
+ be optional: if the application does not supply it, the square
+ root function must still compute square roots.)
+
+These requirements apply to the modified work as a whole. If
+identifiable sections of that work are not derived from the Library,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works. But when you
+distribute the same sections as part of a whole which is a work based
+on the Library, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote
+it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Library.
+
+In addition, mere aggregation of another work not based on the Library
+with the Library (or with a work based on the Library) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+ 3. You may opt to apply the terms of the ordinary GNU General Public
+License instead of this License to a given copy of the Library. To do
+this, you must alter all the notices that refer to this License, so
+that they refer to the ordinary GNU General Public License, version 2,
+instead of to this License. (If a newer version than version 2 of the
+ordinary GNU General Public License has appeared, then you can specify
+that version instead if you wish.) Do not make any other change in
+these notices.
+\f
+ Once this change is made in a given copy, it is irreversible for
+that copy, so the ordinary GNU General Public License applies to all
+subsequent copies and derivative works made from that copy.
+
+ This option is useful when you wish to copy part of the code of
+the Library into a program that is not a library.
+
+ 4. You may copy and distribute the Library (or a portion or
+derivative of it, under Section 2) in object code or executable form
+under the terms of Sections 1 and 2 above provided that you accompany
+it with the complete corresponding machine-readable source code, which
+must be distributed under the terms of Sections 1 and 2 above on a
+medium customarily used for software interchange.
+
+ If distribution of object code is made by offering access to copy
+from a designated place, then offering equivalent access to copy the
+source code from the same place satisfies the requirement to
+distribute the source code, even though third parties are not
+compelled to copy the source along with the object code.
+
+ 5. A program that contains no derivative of any portion of the
+Library, but is designed to work with the Library by being compiled or
+linked with it, is called a "work that uses the Library". Such a
+work, in isolation, is not a derivative work of the Library, and
+therefore falls outside the scope of this License.
+
+ However, linking a "work that uses the Library" with the Library
+creates an executable that is a derivative of the Library (because it
+contains portions of the Library), rather than a "work that uses the
+library". The executable is therefore covered by this License.
+Section 6 states terms for distribution of such executables.
+
+ When a "work that uses the Library" uses material from a header file
+that is part of the Library, the object code for the work may be a
+derivative work of the Library even though the source code is not.
+Whether this is true is especially significant if the work can be
+linked without the Library, or if the work is itself a library. The
+threshold for this to be true is not precisely defined by law.
+
+ If such an object file uses only numerical parameters, data
+structure layouts and accessors, and small macros and small inline
+functions (ten lines or less in length), then the use of the object
+file is unrestricted, regardless of whether it is legally a derivative
+work. (Executables containing this object code plus portions of the
+Library will still fall under Section 6.)
+
+ Otherwise, if the work is a derivative of the Library, you may
+distribute the object code for the work under the terms of Section 6.
+Any executables containing that work also fall under Section 6,
+whether or not they are linked directly with the Library itself.
+\f
+ 6. As an exception to the Sections above, you may also combine or
+link a "work that uses the Library" with the Library to produce a
+work containing portions of the Library, and distribute that work
+under terms of your choice, provided that the terms permit
+modification of the work for the customer's own use and reverse
+engineering for debugging such modifications.
+
+ You must give prominent notice with each copy of the work that the
+Library is used in it and that the Library and its use are covered by
+this License. You must supply a copy of this License. If the work
+during execution displays copyright notices, you must include the
+copyright notice for the Library among them, as well as a reference
+directing the user to the copy of this License. Also, you must do one
+of these things:
+
+ a) Accompany the work with the complete corresponding
+ machine-readable source code for the Library including whatever
+ changes were used in the work (which must be distributed under
+ Sections 1 and 2 above); and, if the work is an executable linked
+ with the Library, with the complete machine-readable "work that
+ uses the Library", as object code and/or source code, so that the
+ user can modify the Library and then relink to produce a modified
+ executable containing the modified Library. (It is understood
+ that the user who changes the contents of definitions files in the
+ Library will not necessarily be able to recompile the application
+ to use the modified definitions.)
+
+ b) Use a suitable shared library mechanism for linking with the
+ Library. A suitable mechanism is one that (1) uses at run time a
+ copy of the library already present on the user's computer system,
+ rather than copying library functions into the executable, and (2)
+ will operate properly with a modified version of the library, if
+ the user installs one, as long as the modified version is
+ interface-compatible with the version that the work was made with.
+
+ c) Accompany the work with a written offer, valid for at
+ least three years, to give the same user the materials
+ specified in Subsection 6a, above, for a charge no more
+ than the cost of performing this distribution.
+
+ d) If distribution of the work is made by offering access to copy
+ from a designated place, offer equivalent access to copy the above
+ specified materials from the same place.
+
+ e) Verify that the user has already received a copy of these
+ materials or that you have already sent this user a copy.
+
+ For an executable, the required form of the "work that uses the
+Library" must include any data and utility programs needed for
+reproducing the executable from it. However, as a special exception,
+the materials to be distributed need not include anything that is
+normally distributed (in either source or binary form) with the major
+components (compiler, kernel, and so on) of the operating system on
+which the executable runs, unless that component itself accompanies
+the executable.
+
+ It may happen that this requirement contradicts the license
+restrictions of other proprietary libraries that do not normally
+accompany the operating system. Such a contradiction means you cannot
+use both them and the Library together in an executable that you
+distribute.
+\f
+ 7. You may place library facilities that are a work based on the
+Library side-by-side in a single library together with other library
+facilities not covered by this License, and distribute such a combined
+library, provided that the separate distribution of the work based on
+the Library and of the other library facilities is otherwise
+permitted, and provided that you do these two things:
+
+ a) Accompany the combined library with a copy of the same work
+ based on the Library, uncombined with any other library
+ facilities. This must be distributed under the terms of the
+ Sections above.
+
+ b) Give prominent notice with the combined library of the fact
+ that part of it is a work based on the Library, and explaining
+ where to find the accompanying uncombined form of the same work.
+
+ 8. You may not copy, modify, sublicense, link with, or distribute
+the Library except as expressly provided under this License. Any
+attempt otherwise to copy, modify, sublicense, link with, or
+distribute the Library is void, and will automatically terminate your
+rights under this License. However, parties who have received copies,
+or rights, from you under this License will not have their licenses
+terminated so long as such parties remain in full compliance.
+
+ 9. You are not required to accept this License, since you have not
+signed it. However, nothing else grants you permission to modify or
+distribute the Library or its derivative works. These actions are
+prohibited by law if you do not accept this License. Therefore, by
+modifying or distributing the Library (or any work based on the
+Library), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Library or works based on it.
+
+ 10. Each time you redistribute the Library (or any work based on the
+Library), the recipient automatically receives a license from the
+original licensor to copy, distribute, link with or modify the Library
+subject to these terms and conditions. You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties with
+this License.
+\f
+ 11. If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Library at all. For example, if a patent
+license would not permit royalty-free redistribution of the Library by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Library.
+
+If any portion of this section is held invalid or unenforceable under any
+particular circumstance, the balance of the section is intended to apply,
+and the section as a whole is intended to apply in other circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system which is
+implemented by public license practices. Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+
+ 12. If the distribution and/or use of the Library is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Library under this License may add
+an explicit geographical distribution limitation excluding those countries,
+so that distribution is permitted only in or among countries not thus
+excluded. In such case, this License incorporates the limitation as if
+written in the body of this License.
+
+ 13. The Free Software Foundation may publish revised and/or new
+versions of the Lesser General Public License from time to time.
+Such new versions will be similar in spirit to the present version,
+but may differ in detail to address new problems or concerns.
+
+Each version is given a distinguishing version number. If the Library
+specifies a version number of this License which applies to it and
+"any later version", you have the option of following the terms and
+conditions either of that version or of any later version published by
+the Free Software Foundation. If the Library does not specify a
+license version number, you may choose any version ever published by
+the Free Software Foundation.
+\f
+ 14. If you wish to incorporate parts of the Library into other free
+programs whose distribution conditions are incompatible with these,
+write to the author to ask for permission. For software which is
+copyrighted by the Free Software Foundation, write to the Free
+Software Foundation; we sometimes make exceptions for this. Our
+decision will be guided by the two goals of preserving the free status
+of all derivatives of our free software and of promoting the sharing
+and reuse of software generally.
+
+ NO WARRANTY
+
+ 15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
+WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
+EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
+OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
+KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
+LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
+THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+ 16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
+WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
+AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
+FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
+CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
+LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
+RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
+FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
+SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
+DAMAGES.
+
+ END OF TERMS AND CONDITIONS
+\f
+ How to Apply These Terms to Your New Libraries
+
+ If you develop a new library, and you want it to be of the greatest
+possible use to the public, we recommend making it free software that
+everyone can redistribute and change. You can do so by permitting
+redistribution under these terms (or, alternatively, under the terms of the
+ordinary General Public License).
+
+ To apply these terms, attach the following notices to the library. It is
+safest to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least the
+"copyright" line and a pointer to where the full notice is found.
+
+ <one line to give the library's name and a brief idea of what it does.>
+ Copyright (C) <year> <name of author>
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation; either
+ version 2.1 of the License, or (at your option) any later version.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+Also add information on how to contact you by electronic and paper mail.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the library, if
+necessary. Here is a sample; alter the names:
+
+ Yoyodyne, Inc., hereby disclaims all copyright interest in the
+ library `Frob' (a library for tweaking knobs) written by James Random Hacker.
+
+ <signature of Ty Coon>, 1 April 1990
+ Ty Coon, President of Vice
+
+That's all there is to it!
+
+
--- /dev/null
+SYN cookie analysis on 3.10
+
+include/net/request_sock.h:
+
+static inline int reqsk_queue_is_full(const struct request_sock_queue *queue)
+{
+ return queue->listen_opt->qlen >> queue->listen_opt->max_qlen_log;
+}
+
+include/net/inet_connection_sock.h:
+
+static inline int inet_csk_reqsk_queue_is_full(const struct sock *sk)
+{
+ return reqsk_queue_is_full(&inet_csk(sk)->icsk_accept_queue);
+}
+
+max_qlen_log is computed to equal log2(min(min(listen_backlog, somaxconn), sysctl_max_syn_backlog)),
+and this value is computed along the following path:
+
+ socket.c:listen(fd, backlog) :
+
+ backlog = min(backlog, somaxconn)
+ => af_inet.c:inet_listen(sock, backlog)
+
+ => inet_connection_sock.c:inet_csk_listen_start(sk, backlog)
+
+ sk_max_ack_backlog = backlog
+ => request_sock.c:reqsk_queue_alloc(sk, backlog (=nr_table_entries))
+
+ nr_table_entries = min_t(u32, nr_table_entries, sysctl_max_syn_backlog);
+ nr_table_entries = max_t(u32, nr_table_entries, 8);
+ nr_table_entries = roundup_pow_of_two(nr_table_entries + 1);
+ for (lopt->max_qlen_log = 3;
+ (1 << lopt->max_qlen_log) < nr_table_entries;
+ lopt->max_qlen_log++);
+
+
+tcp_ipv4.c:tcp_v4_conn_request()
+ - inet_csk_reqsk_queue_is_full() returns true when the listening socket's
+ qlen is larger than 1 << max_qlen_log, so basically qlen >= min(backlog,max_backlog)
+
+ - tcp_syn_flood_action() returns true when sysctl_tcp_syncookies is set. It
+ also emits a warning once per listening socket when activating the feature.
+
+ if (inet_csk_reqsk_queue_is_full(sk) && !isn) {
+ want_cookie = tcp_syn_flood_action(sk, skb, "TCP");
+ if (!want_cookie)
+ goto drop;
+ }
+
+ => when the socket's current backlog is >= min(backlog, max_backlog),
+ either sysctl_tcp_syncookies is set, so we set want_cookie to 1, or we drop.
+
+
+ /* Accept backlog is full. If we have already queued enough
+ * of warm entries in syn queue, drop request. It is better than
+ * clogging syn queue with openreqs with exponentially increasing
+ * timeout.
+ */
+
+sock.h:sk_acceptq_is_full() = sk_ack_backlog > sk_max_ack_backlog
+ = sk_ack_backlog > min(somaxconn, listen_backlog)
+
+ if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1) {
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
+ goto drop;
+ }
+
+====> the following algorithm is applied in the reverse order, but with these
+ priorities:
+
+ 1) IF socket's accept queue >= min(somaxconn, listen_backlog) THEN drop
+
+ 2) IF socket's SYN backlog < min(somaxconn, listen_backlog, tcp_max_syn_backlog) THEN accept
+
+ 3) IF tcp_syncookies THEN send_syn_cookie
+
+ 4) otherwise drop
+
+====> the problem is the accept queue being filled, but it's supposed to be
+ filled only with validated client requests (step 1).
+
+
+
+ req = inet_reqsk_alloc(&tcp_request_sock_ops);
+ if (!req)
+ goto drop;
+
+ ...
+ if (!sysctl_tcp_syncookies &&
+ (sysctl_max_syn_backlog - inet_csk_reqsk_queue_len(sk) <
+ (sysctl_max_syn_backlog >> 2)) &&
+ !tcp_peer_is_proven(req, dst, false)) {
+ /* Without syncookies last quarter of
+ * backlog is filled with destinations,
+ * proven to be alive.
+ * It means that we continue to communicate
+ * to destinations, already remembered
+ * to the moment of synflood.
+ */
+ LIMIT_NETDEBUG(KERN_DEBUG pr_fmt("drop open request from %pI4/%u\n"),
+ &saddr, ntohs(tcp_hdr(skb)->source));
+ goto drop_and_release;
+ }
+
+
--- /dev/null
+# Makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line.
+SPHINXOPTS =
+SPHINXBUILD = sphinx-build
+PAPER =
+BUILDDIR = _build
+
+# Internal variables.
+PAPEROPT_a4 = -D latex_paper_size=a4
+PAPEROPT_letter = -D latex_paper_size=letter
+ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
+# the i18n builder cannot share the environment and doctrees with the others
+I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
+
+.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man texinfo info changes linkcheck doctest gettext
+
+help:
+ @echo "Please use \`make <target>' where <target> is one of"
+ @echo " html to make standalone HTML files"
+ @echo " dirhtml to make HTML files named index.html in directories"
+ @echo " singlehtml to make a single large HTML file"
+ @echo " pickle to make pickle files"
+ @echo " json to make JSON files"
+ @echo " htmlhelp to make HTML files and a HTML help project"
+ @echo " qthelp to make HTML files and a qthelp project"
+ @echo " devhelp to make HTML files and a Devhelp project"
+ @echo " epub to make an epub"
+ @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
+ @echo " latexpdf to make LaTeX files and run them through pdflatex"
+ @echo " text to make text files"
+ @echo " man to make manual pages"
+ @echo " texinfo to make Texinfo files"
+ @echo " info to make Texinfo files and run them through makeinfo"
+ @echo " gettext to make PO message catalogs"
+ @echo " changes to make an overview of all changed/added/deprecated items"
+ @echo " linkcheck to check all external links for integrity"
+ @echo " doctest to run all doctests embedded in the documentation (if enabled)"
+
+clean:
+ -rm -rf $(BUILDDIR)/*
+
+html:
+ $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
+ @echo
+ @echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
+
+dirhtml:
+ $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
+ @echo
+ @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
+
+singlehtml:
+ $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
+ @echo
+ @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
+
+pickle:
+ $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
+ @echo
+ @echo "Build finished; now you can process the pickle files."
+
+json:
+ $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
+ @echo
+ @echo "Build finished; now you can process the JSON files."
+
+htmlhelp:
+ $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
+ @echo
+ @echo "Build finished; now you can run HTML Help Workshop with the" \
+ ".hhp project file in $(BUILDDIR)/htmlhelp."
+
+qthelp:
+ $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
+ @echo
+ @echo "Build finished; now you can run "qcollectiongenerator" with the" \
+ ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
+ @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/haproxy-lua.qhcp"
+ @echo "To view the help file:"
+ @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/haproxy-lua.qhc"
+
+devhelp:
+ $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
+ @echo
+ @echo "Build finished."
+ @echo "To view the help file:"
+ @echo "# mkdir -p $$HOME/.local/share/devhelp/haproxy-lua"
+ @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/haproxy-lua"
+ @echo "# devhelp"
+
+epub:
+ $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
+ @echo
+ @echo "Build finished. The epub file is in $(BUILDDIR)/epub."
+
+latex:
+ $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
+ @echo
+ @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
+ @echo "Run \`make' in that directory to run these through (pdf)latex" \
+ "(use \`make latexpdf' here to do that automatically)."
+
+latexpdf:
+ $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
+ @echo "Running LaTeX files through pdflatex..."
+ $(MAKE) -C $(BUILDDIR)/latex all-pdf
+ @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
+
+text:
+ $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
+ @echo
+ @echo "Build finished. The text files are in $(BUILDDIR)/text."
+
+man:
+ $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
+ @echo
+ @echo "Build finished. The manual pages are in $(BUILDDIR)/man."
+
+texinfo:
+ $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
+ @echo
+ @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
+ @echo "Run \`make' in that directory to run these through makeinfo" \
+ "(use \`make info' here to do that automatically)."
+
+info:
+ $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
+ @echo "Running Texinfo files through makeinfo..."
+	$(MAKE) -C $(BUILDDIR)/texinfo info
+ @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
+
+gettext:
+ $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
+ @echo
+ @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
+
+changes:
+ $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
+ @echo
+ @echo "The overview file is in $(BUILDDIR)/changes."
+
+linkcheck:
+ $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
+ @echo
+ @echo "Link check complete; look for any errors in the above output " \
+ "or in $(BUILDDIR)/linkcheck/output.txt."
+
+doctest:
+ $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
+ @echo "Testing of doctests in the sources finished, look at the " \
+ "results in $(BUILDDIR)/doctest/output.txt."
--- /dev/null
+#FIG 3.2 Produced by xfig version 3.2.5b
+Landscape
+Center
+Metric
+A4
+100.00
+Single
+-2
+1200 2
+1 1 0 1 0 7 50 -1 -1 0.000 1 0.0000 4500 1620 1260 585 4500 1620 5760 2205
+2 3 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 9
+ 1170 1350 1170 1890 2790 1890 2790 2070 3240 1620 2790 1170
+ 2790 1350 1170 1350 1170 1350
+2 3 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 9
+ 5760 1350 5760 1890 7380 1890 7380 2070 7830 1620 7380 1170
+ 7380 1350 5760 1350 5760 1350
+2 1 1 1 0 7 50 -1 -1 1.000 0 0 -1 1 0 2
+ 5 1 1.00 60.00 120.00
+ 6210 540 6210 1440
+2 1 1 1 0 7 50 -1 -1 1.000 0 0 -1 1 0 2
+ 5 1 1.00 60.00 120.00
+ 6210 2340 6210 1800
+2 1 1 1 0 7 50 -1 -1 1.000 0 0 -1 1 0 2
+ 5 1 1.00 60.00 120.00
+ 1350 2520 1350 1800
+2 1 1 1 0 7 50 -1 -1 1.000 0 0 -1 1 0 2
+ 5 1 1.00 60.00 120.00
+ 1350 360 1350 1440
+3 0 1 1 0 7 50 -1 -1 1.000 0 0 1 5
+ 5 1 1.00 60.00 120.00
+ 2970 1665 3105 1125 3330 900 3600 765 3915 720
+ 0.000 1.000 1.000 1.000 0.000
+3 0 1 1 0 7 50 -1 -1 1.000 0 0 1 5
+ 5 1 1.00 60.00 120.00
+ 6030 1665 5895 1125 5670 900 5400 765 5040 720
+ 0.000 1.000 1.000 1.000 0.000
+4 2 0 50 -1 16 12 0.0000 4 195 750 1080 1665 producer\001
+4 1 0 50 -1 16 12 0.0000 4 195 1785 4500 1575 HAProxy processing\001
+4 1 0 50 -1 16 12 0.0000 4 195 1260 4500 1815 (including Lua)\001
+4 0 0 50 -1 16 12 0.0000 4 105 855 7920 1665 consumer\001
+4 0 0 50 -1 12 12 0.0000 4 150 600 1440 2205 set()\001
+4 0 0 50 -1 12 12 0.0000 4 165 960 1440 2400 append()\001
+4 0 0 50 -1 16 12 0.0000 4 150 1260 1260 2700 write functions\001
+4 0 0 50 -1 16 12 0.0000 4 150 1230 1260 315 read functions\001
+4 0 0 50 -1 12 12 0.0000 4 165 600 1440 540 dup()\001
+4 0 0 50 -1 12 12 0.0000 4 165 600 1440 735 get()\001
+4 0 0 50 -1 12 12 0.0000 4 165 1200 1440 930 get_line()\001
+4 0 0 50 -1 12 12 0.0000 4 165 1440 1440 1125 get_in_len()\001
+4 1 0 50 -1 12 12 0.0000 4 150 1080 4500 765 forward()\001
+4 0 0 50 -1 16 12 0.0000 4 150 1260 6120 495 write functions\001
+4 0 0 50 -1 12 12 0.0000 4 150 720 6300 1110 send()\001
+4 0 0 50 -1 12 12 0.0000 4 165 1560 6255 2205 get_out_len()\001
+4 0 0 50 -1 16 12 0.0000 4 150 1230 6120 2520 read functions\001
+4 1 0 50 -1 16 12 0.0000 4 150 1650 4500 540 both side functions\001
--- /dev/null
+# -*- coding: utf-8 -*-
+#
+# haproxy-lua documentation build configuration file, created by
+# sphinx-quickstart on Tue Mar 10 11:15:09 2015.
+#
+# This file is execfile()d with the current directory set to its containing dir.
+#
+# Note that not all possible configuration values are present in this
+# autogenerated file.
+#
+# All configuration values have a default; values that are commented out
+# serve to show the default.
+
+import sys, os
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#sys.path.insert(0, os.path.abspath('.'))
+
+# -- General configuration -----------------------------------------------------
+
+# If your documentation needs a minimal Sphinx version, state it here.
+#needs_sphinx = '1.0'
+
+# Add any Sphinx extension module names here, as strings. They can be extensions
+# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
+extensions = []
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# The suffix of source filenames.
+source_suffix = '.rst'
+
+# The encoding of source files.
+#source_encoding = 'utf-8-sig'
+
+# The master toctree document.
+master_doc = 'index'
+
+# General information about the project.
+project = u'haproxy-lua'
+copyright = u'2015, Thierry FOURNIER'
+
+# The version info for the project you're documenting, acts as replacement for
+# |version| and |release|, also used in various other places throughout the
+# built documents.
+#
+# The short X.Y version.
+version = '1.0'
+# The full version, including alpha/beta/rc tags.
+release = '1.0'
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#language = None
+
+# There are two options for replacing |today|: either, you set today to some
+# non-false value, then it is used:
+#today = ''
+# Else, today_fmt is used as the format for a strftime call.
+#today_fmt = '%B %d, %Y'
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+exclude_patterns = ['_build']
+
+# The reST default role (used for this markup: `text`) to use for all documents.
+#default_role = None
+
+# If true, '()' will be appended to :func: etc. cross-reference text.
+#add_function_parentheses = True
+
+# If true, the current module name will be prepended to all description
+# unit titles (such as .. function::).
+#add_module_names = True
+
+# If true, sectionauthor and moduleauthor directives will be shown in the
+# output. They are ignored by default.
+#show_authors = False
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = 'sphinx'
+
+# A list of ignored prefixes for module index sorting.
+#modindex_common_prefix = []
+
+
+# -- Options for HTML output ---------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+html_theme = 'default'
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further. For a list of options available for each theme, see the
+# documentation.
+#html_theme_options = {}
+
+# Add any paths that contain custom themes here, relative to this directory.
+#html_theme_path = []
+
+# The name for this set of Sphinx documents. If None, it defaults to
+# "<project> v<release> documentation".
+#html_title = None
+
+# A shorter title for the navigation bar. Default is the same as html_title.
+#html_short_title = None
+
+# The name of an image file (relative to this directory) to place at the top
+# of the sidebar.
+#html_logo = None
+
+# The name of an image file (within the static path) to use as favicon of the
+# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
+# pixels large.
+#html_favicon = None
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['_static']
+
+# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
+# using the given strftime format.
+#html_last_updated_fmt = '%b %d, %Y'
+
+# If true, SmartyPants will be used to convert quotes and dashes to
+# typographically correct entities.
+#html_use_smartypants = True
+
+# Custom sidebar templates, maps document names to template names.
+#html_sidebars = {}
+
+# Additional templates that should be rendered to pages, maps page names to
+# template names.
+#html_additional_pages = {}
+
+# If false, no module index is generated.
+#html_domain_indices = True
+
+# If false, no index is generated.
+#html_use_index = True
+
+# If true, the index is split into individual pages for each letter.
+#html_split_index = False
+
+# If true, links to the reST sources are added to the pages.
+#html_show_sourcelink = True
+
+# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
+#html_show_sphinx = True
+
+# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
+#html_show_copyright = True
+
+# If true, an OpenSearch description file will be output, and all pages will
+# contain a <link> tag referring to it. The value of this option must be the
+# base URL from which the finished HTML is served.
+#html_use_opensearch = ''
+
+# This is the file name suffix for HTML files (e.g. ".xhtml").
+#html_file_suffix = None
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = 'haproxy-luadoc'
+
+
+# -- Options for LaTeX output --------------------------------------------------
+
+latex_elements = {
+# The paper size ('letterpaper' or 'a4paper').
+#'papersize': 'letterpaper',
+
+# The font size ('10pt', '11pt' or '12pt').
+#'pointsize': '10pt',
+
+# Additional stuff for the LaTeX preamble.
+#'preamble': '',
+}
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title, author, documentclass [howto/manual]).
+latex_documents = [
+ ('index', 'haproxy-lua.tex', u'haproxy-lua Documentation',
+ u'Thierry FOURNIER', 'manual'),
+]
+
+# The name of an image file (relative to this directory) to place at the top of
+# the title page.
+#latex_logo = None
+
+# For "manual" documents, if this is true, then toplevel headings are parts,
+# not chapters.
+#latex_use_parts = False
+
+# If true, show page references after internal links.
+#latex_show_pagerefs = False
+
+# If true, show URL addresses after external links.
+#latex_show_urls = False
+
+# Documents to append as an appendix to all manuals.
+#latex_appendices = []
+
+# If false, no module index is generated.
+#latex_domain_indices = True
+
+
+# -- Options for manual page output --------------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [
+ ('index', 'haproxy-lua', u'haproxy-lua Documentation',
+ [u'Thierry FOURNIER'], 1)
+]
+
+# If true, show URL addresses after external links.
+#man_show_urls = False
+
+
+# -- Options for Texinfo output ------------------------------------------------
+
+# Grouping the document tree into Texinfo files. List of tuples
+# (source start file, target name, title, author,
+# dir menu entry, description, category)
+texinfo_documents = [
+ ('index', 'haproxy-lua', u'haproxy-lua Documentation',
+ u'Thierry FOURNIER', 'haproxy-lua', 'One line description of project.',
+ 'Miscellaneous'),
+]
+
+# Documents to append as an appendix to all manuals.
+#texinfo_appendices = []
+
+# If false, no module index is generated.
+#texinfo_domain_indices = True
+
+# How to display URL addresses: 'footnote', 'no', or 'inline'.
+#texinfo_show_urls = 'footnote'
--- /dev/null
+.. toctree::
+ :maxdepth: 2
+
+
+How Lua runs in HAProxy
+=======================
+
+HAProxy Lua running contexts
+----------------------------
+
+The Lua code executed in HAProxy can be processed in 2 main modes. The first one
+is the **initialisation mode**, and the second is the **runtime mode**.
+
+* In the **initialisation mode**, we can perform DNS resolution, but we cannot
+ perform socket I/O. In this initialisation mode, HAProxy is still blocked
+ during the execution of the Lua program.
+
+* In the **runtime mode**, we cannot perform DNS resolution, but we can use
+ sockets. The execution of the Lua code is multiplexed with the processing of
+ requests, so the Lua code appears to run in a blocking fashion, but it does
+ not.
+
+The Lua code is loaded from one or more files. These files contain main code
+and functions. Lua has six execution contexts.
+
+1. The Lua file **body context**. It is executed during the loading of the Lua
+ file, declared in the HAProxy `global` section with the `lua-load` directive.
+ It is executed in initialisation mode. This context is used for configuring
+ Lua bindings in HAProxy.
+
+2. The Lua **init context**. It is a Lua function executed just after the
+ HAProxy configuration parsing. The execution is in initialisation mode. In
+ this context the HAProxy environment is already initialized. It is useful for
+ checking the configuration, or for initializing socket connections or tasks.
+ These functions are declared in the body context with the Lua function
+ `core.register_init()`. The prototype of the function is a simple function
+ without return value and without parameters, like this: `function fcn()`.
+
+3. The Lua **task context**. It is a Lua function executed after the start
+ of the HAProxy scheduler, and just after the declaration of the task with the
+ Lua function `core.register_task()`. This context can run concurrently with
+ the traffic processing. It is executed in runtime mode. The prototype of the
+ function is a simple function without return value and without parameters,
+ like this: `function fcn()`.
+
+4. The **action context**. It is a Lua function executed conditionally. These
+ actions are registered with the Lua directive `core.register_action()`. The
+ prototype of the called Lua function is a function that does not return
+ anything and that takes an object of class TXN as argument:
+ `function fcn(txn)`.
+
+5. The **sample-fetch context**. This function takes a TXN object as entry
+ argument and returns a string. These types of function cannot execute any
+ blocking function. They are useful for aggregating some of the original
+ HAProxy sample-fetches and returning the result. The prototype of the
+ function is `function string fcn(txn)`. These functions can be registered
+ with the Lua function `core.register_fetches()`. Each declared sample-fetch
+ is prefixed with the string "lua.".
+
+ **NOTE**: It is possible that this function cannot find the required data in
+ the original HAProxy sample-fetches; in this case, it cannot return the
+ result. This case is not yet supported.
+
+6. The **converter context**. It is a Lua function that takes a string as input
+ and returns another string as output. These types of function are stateless:
+ they cannot access any context, and they cannot execute any blocking
+ function. The call prototype is `function string fcn(string)`. These
+ functions can be registered with the Lua function
+ `core.register_converters()`. Each declared converter is prefixed with the
+ string "lua.".
+
+HAProxy Lua Hello world
+-----------------------
+
+HAProxy configuration file (`hello_world.conf`):
+
+::
+
+ global
+ lua-load hello_world.lua
+
+ listen proxy
+ bind 127.0.0.1:10001
+ tcp-request inspect-delay 1s
+ tcp-request content use-service lua.hello_world
+
+HAProxy Lua file (`hello_world.lua`):
+
+.. code-block:: lua
+
+ core.register_service("hello_world", "tcp", function(applet)
+ applet:send("hello world\n")
+ end)
+
+How to start HAProxy for testing this configuration:
+
+::
+
+ ./haproxy -f hello_world.conf
+
+In another terminal, you can test with telnet:
+
+::
+
+ #:~ telnet 127.0.0.1 10001
+ hello world
+
+Core class
+==========
+
+.. js:class:: core
+
+ The "core" class contains all the HAProxy core functions. These functions
+ are useful for controlling the execution flow, registering hooks,
+ manipulating global maps or ACLs, and so on.
+
+ The "core" class is provided with HAProxy itself. No `require` line is
+ needed to use these functions.
+
+ The "core" class is static: it is not possible to create a new object of
+ this type.
+
+.. js:attribute:: core.emerg
+
+ :returns: integer
+
+ This attribute is an integer containing the value of the loglevel "emergency" (0).
+
+.. js:attribute:: core.alert
+
+ :returns: integer
+
+ This attribute is an integer containing the value of the loglevel "alert" (1).
+
+.. js:attribute:: core.crit
+
+ :returns: integer
+
+ This attribute is an integer containing the value of the loglevel "critical" (2).
+
+.. js:attribute:: core.err
+
+ :returns: integer
+
+ This attribute is an integer containing the value of the loglevel "error" (3).
+
+.. js:attribute:: core.warning
+
+ :returns: integer
+
+ This attribute is an integer containing the value of the loglevel "warning" (4).
+
+.. js:attribute:: core.notice
+
+ :returns: integer
+
+ This attribute is an integer containing the value of the loglevel "notice" (5).
+
+.. js:attribute:: core.info
+
+ :returns: integer
+
+ This attribute is an integer containing the value of the loglevel "info" (6).
+
+.. js:attribute:: core.debug
+
+ :returns: integer
+
+ This attribute is an integer containing the value of the loglevel "debug" (7).
+
+.. js:function:: core.log(loglevel, msg)
+
+ **context**: body, init, task, action, sample-fetch, converter
+
+ This function sends a log. The log is sent, according to the HAProxy
+ configuration file, to the default syslog server if it is configured, and to
+ stderr if it is allowed.
+
+ :param integer loglevel: Is the log level associated with the message. It is
+ a number between 0 and 7.
+ :param string msg: The log content.
+ :see: core.emerg, core.alert, core.crit, core.err, core.warning, core.notice,
+ core.info, core.debug (log level definitions)
+ :see: core.Debug
+ :see: core.Info
+ :see: core.Warning
+ :see: core.Alert
+
+.. js:function:: core.Debug(msg)
+
+ **context**: body, init, task, action, sample-fetch, converter
+
+ :param string msg: The log content.
+ :see: log
+
+ Does the same job as:
+
+.. code-block:: lua
+
+ function Debug(msg)
+ core.log(core.debug, msg)
+ end
+..
+
+.. js:function:: core.Info(msg)
+
+ **context**: body, init, task, action, sample-fetch, converter
+
+ :param string msg: The log content.
+ :see: log
+
+ Does the same job as:
+
+.. code-block:: lua
+
+ function Info(msg)
+ core.log(core.info, msg)
+ end
+..
+
+.. js:function:: core.Warning(msg)
+
+ **context**: body, init, task, action, sample-fetch, converter
+
+ :param string msg: The log content.
+ :see: log
+
+ Does the same job as:
+
+.. code-block:: lua
+
+ function Warning(msg)
+ core.log(core.warning, msg)
+ end
+..
+
+.. js:function:: core.Alert(msg)
+
+ **context**: body, init, task, action, sample-fetch, converter
+
+ :param string msg: The log content.
+ :see: log
+
+ Does the same job as:
+
+.. code-block:: lua
+
+ function Alert(msg)
+ core.log(core.alert, msg)
+ end
+..
+
+.. js:function:: core.add_acl(filename, key)
+
+ **context**: init, task, action, sample-fetch, converter
+
+ Adds the ACL *key* to the ACL list referenced by the file *filename*.
+
+ :param string filename: the filename that references the ACL entries.
+ :param string key: the key which will be added.
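+
+ For example, an entry can be added to an ACL from a task. The file name
+ "/etc/haproxy/blacklist.lst" is only illustrative and must match an ACL file
+ actually referenced in the configuration:
+
+.. code-block:: lua
+
+  core.register_task(function()
+    -- Add an entry to an ACL file already loaded by the configuration.
+    core.add_acl("/etc/haproxy/blacklist.lst", "192.0.2.1")
+  end)
+..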
+
+.. js:function:: core.del_acl(filename, key)
+
+ **context**: init, task, action, sample-fetch, converter
+
+ Deletes the ACL entry referenced by the key *key* in the list of ACLs
+ referenced by *filename*.
+
+ :param string filename: the filename that references the ACL entries.
+ :param string key: the key which will be deleted.
+
+.. js:function:: core.del_map(filename, key)
+
+ **context**: init, task, action, sample-fetch, converter
+
+ Deletes the map entry indexed by the specified key in the list of maps
+ referenced by its filename.
+
+ :param string filename: the filename that references the map entries.
+ :param string key: the key which will be deleted.
+
+.. js:function:: core.msleep(milliseconds)
+
+ **context**: body, init, task, action
+
+ The `core.msleep()` function stops the Lua execution for the specified
+ number of milliseconds.
+
+ :param integer milliseconds: the required milliseconds.
+
+.. js:function:: core.register_action(name, actions, func)
+
+ **context**: body
+
+ Register a Lua function executed as an action. All the registered actions
+ can be used in HAProxy with the prefix "lua.". An action gets a TXN object
+ class as input.
+
+ :param string name: is the name of the action.
+ :param table actions: is a table of strings describing the HAProxy actions
+ to register against. The expected actions are 'tcp-req',
+ 'tcp-res', 'http-req' or 'http-res'.
+ :param function func: is the Lua function called to work as an action.
+
+ The prototype of the Lua function used as argument is:
+
+.. code-block:: lua
+
+ function(txn)
+..
+
+ * **txn** (:ref:`txn_class`): this is a TXN object used for manipulating the
+ current request or TCP stream.
+
+ Here is an example of action registration. The action just sends a
+ 'Hello world' message in the logs.
+
+.. code-block:: lua
+
+ core.register_action("hello-world", { "tcp-req", "http-req" }, function(txn)
+ txn:Info("Hello world")
+ end)
+..
+
+ This example code is used in HAProxy configuration like this:
+
+::
+
+ frontend tcp_frt
+ mode tcp
+ tcp-request content lua.hello-world
+
+ frontend http_frt
+ mode http
+ http-request lua.hello-world
+
+.. js:function:: core.register_converters(name, func)
+
+ **context**: body
+
+ Register a Lua function executed as a converter. All the registered
+ converters can be used in HAProxy with the prefix "lua.". A converter gets a
+ string as input and returns a string as output. The registered function can
+ take up to 9 values as parameters. All the values are strings.
+
+ :param string name: is the name of the converter.
+ :param function func: is the Lua function called to work as converter.
+
+ The prototype of the Lua function used as argument is:
+
+.. code-block:: lua
+
+ function(str, [p1 [, p2 [, ... [, p5]]]])
+..
+
+ * **str** (*string*): this is the input value automatically converted into
+ a string.
+ * **p1** .. **p5** (*string*): this is a list of string arguments declared
+ in the HAProxy configuration file. The number of arguments doesn't exceed
+ 5. The order and the nature of these arguments are conventionally chosen
+ by the developer.
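+
+ Here is an example of converter registration; the converter name
+ "my-reverse" is only illustrative:
+
+.. code-block:: lua
+
+  core.register_converters("my-reverse", function(str)
+    -- Return the input string reversed.
+    return string.reverse(str)
+  end)
+..
+
+ The converter can then be used in a sample expression, for example
+ "%[req.fhdr(host),lua.my-reverse]".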
+
+.. js:function:: core.register_fetches(name, func)
+
+ **context**: body
+
+ Register a Lua function executed as a sample fetch. All the registered
+ sample fetches can be used in HAProxy with the prefix "lua.". A Lua sample
+ fetch returns a string as output. The registered function can take up to 9
+ values as parameters. All the values are strings.
+
+ :param string name: is the name of the sample fetch.
+ :param function func: is the Lua function called to work as a sample fetch.
+
+ The prototype of the Lua function used as argument is:
+
+.. code-block:: lua
+
+ string function(txn, [p1 [, p2 [, ... [, p5]]]])
+..
+
+ * **txn** (:ref:`txn_class`): this is the txn object associated with the
+ current request.
+ * **p1** .. **p5** (*string*): this is a list of string arguments declared
+ in the HAProxy configuration file. The number of arguments doesn't exceed
+ 5. The order and the nature of these arguments are conventionally chosen
+ by the developer.
+ * **Returns**: A string containing some data, or nil if the value cannot be
+ returned now.
+
+ Lua example code:
+
+.. code-block:: lua
+
+ core.register_fetches("hello", function(txn)
+ return "hello"
+ end)
+..
+
+ HAProxy example configuration:
+
+::
+
+ frontend example
+ http-request redirect location /%[lua.hello]
+
+.. js:function:: core.register_service(name, mode, func)
+
+ **context**: body
+
+ Register a Lua function executed as a service. All the registered services
+ can be used in HAProxy with the prefix "lua.". A service gets an object
+ class as input according to the required mode.
+
+ :param string name: is the name of the service.
+ :param string mode: is a string describing the required mode. Only 'tcp' or
+ 'http' are allowed.
+ :param function func: is the Lua function called to work as a service.
+
+ The prototype of the Lua function used as argument is:
+
+.. code-block:: lua
+
+ function(applet)
+..
+
+ * **applet** (*applet*): will be a :ref:`applettcp_class` or a
+ :ref:`applethttp_class`. It depends on the type of the registered applet. An
+ applet registered with the 'http' value for the *mode* parameter will get a
+ :ref:`applethttp_class`. If the *mode* value is 'tcp', the applet will get
+ a :ref:`applettcp_class`.
+
+ **warning**: Applets of type 'http' cannot be called from 'tcp-*'
+ rulesets. Only the 'http-*' rulesets are authorized, which means
+ that it is not possible to call an HTTP applet from a proxy in tcp
+ mode. Applets of type 'tcp' can be called from anywhere.
+
+ Here is an example of service registration. The service just sends a
+ 'Hello world' message as an HTTP response.
+
+.. code-block:: lua
+
+ core.register_service("hello-world", "http", function(applet)
+ local response = "Hello World !"
+ applet:set_status(200)
+ applet:add_header("content-length", string.len(response))
+ applet:add_header("content-type", "text/plain")
+ applet:start_response()
+ applet:send(response)
+ end)
+..
+
+ This example code is used in HAProxy configuration like this:
+
+::
+
+ frontend example
+ http-request use-service lua.hello-world
+
+.. js:function:: core.register_init(func)
+
+ **context**: body
+
+ Register a function executed after the configuration parsing. This is useful
+ to check any parameters.
+
+ :param function func: is the Lua function called to work as initializer.
+
+ The prototype of the Lua function used as argument is:
+
+.. code-block:: lua
+
+ function()
+..
+
+ It takes no input, and no output is expected.
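+
+ Here is an example of an init function; it only sends a log line once the
+ configuration is parsed:
+
+.. code-block:: lua
+
+  core.register_init(function()
+    core.Info("Lua scripts initialized")
+  end)
+..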
+
+.. js:function:: core.register_task(func)
+
+ **context**: body, init, task, action, sample-fetch, converter
+
+ Register and start an independent task. The task is started when the HAProxy
+ main scheduler starts. For example, this type of task can be used to perform
+ complex health checks.
+
+ :param function func: is the Lua function called to work as a task.
+
+ The prototype of the Lua function used as argument is:
+
+.. code-block:: lua
+
+ function()
+..
+
+ It takes no input, and no output is expected.
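+
+ Here is an example of a task which wakes up every 10 seconds; the log
+ message is only illustrative:
+
+.. code-block:: lua
+
+  core.register_task(function()
+    while true do
+      core.Info("periodic task is still running")
+      core.msleep(10000) -- non-blocking sleep, gives the hand back to HAProxy
+    end
+  end)
+..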
+
+.. js:function:: core.set_nice(nice)
+
+ **context**: task, action, sample-fetch, converter
+
+ Changes the nice value of the current task or current session.
+
+ :param integer nice: the nice value, it must be between -1024 and 1024.
+
+.. js:function:: core.set_map(filename, key, value)
+
+ **context**: init, task, action, sample-fetch, converter
+
+ Sets the value *value* associated with the key *key* in the map referenced
+ by *filename*.
+
+ :param string filename: the Map reference
+ :param string key: the key to set or replace
+ :param string value: the associated value
+
+.. js:function:: core.sleep(int seconds)
+
+ **context**: body, init, task, action
+
+ The `core.sleep()` function stops the Lua execution for the specified
+ number of seconds.
+
+ :param integer seconds: the required seconds.
+
+.. js:function:: core.tcp()
+
+ **context**: init, task, action
+
+ This function returns a new object of a *socket* class.
+
+ :returns: A :ref:`socket_class` object.
+
+.. js:function:: core.done(data)
+
+ **context**: body, init, task, action, sample-fetch, converter
+
+ :param any data: Return some data for the caller. It is useful with
+ sample-fetches and sample-converters.
+
+ Immediately stops the current Lua execution and returns to the caller, which
+ may be a sample fetch, a converter or an action, and returns the specified
+ value (ignored for actions). It is used when the Lua process finishes its
+ work and wants to give back control to HAProxy without executing the
+ remaining code. It can be seen as a multi-level "return".
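+
+ For example, a sample fetch can use `core.done()` to return its result and
+ skip the remaining code; the fetch name "early" is only illustrative:
+
+.. code-block:: lua
+
+  core.register_fetches("early", function(txn)
+    core.done("computed-value")
+    -- The code below this point is never executed.
+    core.Info("not reached")
+  end)
+..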
+
+.. js:function:: core.yield()
+
+ **context**: task, action, sample-fetch, converter
+
+ Gives back the hand to the HAProxy scheduler. It is used when the Lua
+ processing consumes a lot of processing time.
+
+.. _fetches_class:
+
+Fetches class
+=============
+
+.. js:class:: Fetches
+
+ This class contains a lot of internal HAProxy sample fetches. See the
+ HAProxy "configuration.txt" documentation for more information about their
+ usage. They are described in chapters 7.3.2 to 7.3.6.
+
+ **warning**: some sample fetches are not available in some contexts. These
+ limitations are specified in this documentation where applicable.
+
+ :see: TXN.f
+ :see: TXN.sf
+
+ Fetches are useful for:
+
+ * getting the system time,
+ * getting environment variables,
+ * getting random numbers,
+ * knowing the backend status, like the number of users in queue or the
+ number of established connections,
+ * getting client information like the source or destination IP,
+ * dealing with stick tables,
+ * getting established SSL information,
+ * getting HTTP information like headers or the method.
+
+.. code-block:: lua
+
+ function action(txn)
+ -- Get source IP
+ local clientip = txn.f:src()
+ end
+..
+
+.. _converters_class:
+
+Converters class
+================
+
+.. js:class:: Converters
+
+ This class contains a lot of internal HAProxy sample converters. See the
+ HAProxy documentation "configuration.txt" for more information about their
+ usage. It is the chapter 7.3.1.
+
+ :see: TXN.c
+ :see: TXN.sc
+
+ Converters provide stateful transformations. They are useful for:
+
+ * converting input to base64,
+ * applying a hash on an input string (djb2, crc32, sdbm, wt6),
+ * formatting dates,
+ * escaping json,
+ * extracting the preferred language by comparing two lists,
+ * turning strings to lower or upper case,
+ * dealing with stick tables.
+
+.. _channel_class:
+
+Channel class
+=============
+
+.. js:class:: Channel
+
+ HAProxy uses two buffers for the processing of the requests. The first one is
+ used with the request data (from the client to the server) and the second is
+ used for the response data (from the server to the client).
+
+ Each buffer contains two types of data. The first type is the incoming data
+ waiting for processing. The second part is the outgoing data already
+ processed. Usually, the incoming data is processed, then it is tagged as
+ outgoing data, and finally it is sent. The following functions provide tools
+ for manipulating these data in a buffer.
+
+ The following diagram shows where the channel class functions are applied.
+
+ **Warning**: It is not possible to read from the response channel in a
+ request action, and it is not possible to read from the request channel in a
+ response action.
+
+.. image:: _static/channel.png
+
+.. js:function:: Channel.dup(channel)
+
+ This function returns a string that contains the entire buffer. The data is
+ not removed from the buffer and can be reprocessed later.
+
+ If the buffer can't receive more data, a 'nil' value is returned.
+
+ :param class_channel channel: The manipulated Channel.
+ :returns: a string containing all the available data, or nil.
+
+.. js:function:: Channel.get(channel)
+
+ This function returns a string that contains the entire buffer. The data is
+ consumed from the buffer.
+
+ If the buffer can't receive more data, a 'nil' value is returned.
+
+ :param class_channel channel: The manipulated Channel.
+ :returns: a string containing all the available data, or nil.
+
+.. js:function:: Channel.getline(channel)
+
+ This function returns a string that contains the first line of the buffer.
+ The data is consumed. If the returned data doesn't contain a final '\n', it
+ is assumed to be the last available data in the buffer.
+
+ If the buffer can't receive more data, a 'nil' value is returned.
+
+ :param class_channel channel: The manipulated Channel.
+ :returns: a string containing the available line, or nil.
+
+.. js:function:: Channel.set(channel, string)
+
+ This function replaces the content of the buffer with the string. The
+ function returns the copied length on success, otherwise it returns -1.
+
+ The data set with this function is not sent immediately. It waits for the
+ end of the HAProxy processing, so the buffer can be full.
+
+ :param class_channel channel: The manipulated Channel.
+ :param string string: The data which will be sent.
+ :returns: an integer containing the amount of bytes copied, or -1.
+
+.. js:function:: Channel.append(channel, string)
+
+ This function appends the string argument to the content of the buffer. The
+ function returns the copied length on success, otherwise it returns -1.
+
+ The data set with this function is not sent immediately. It waits for the
+ end of the HAProxy processing, so the buffer can be full.
+
+ :param class_channel channel: The manipulated Channel.
+ :param string string: The data which will be sent.
+ :returns: an integer containing the amount of bytes copied, or -1.
+
+.. js:function:: Channel.send(channel, string)
+
+ This function requires immediate sending of the data. Unless the connection
+ is closed, the buffer is regularly flushed and all the string can be sent.
+
+ :param class_channel channel: The manipulated Channel.
+ :param string string: The data which will be sent.
+ :returns: an integer containing the amount of bytes copied, or -1.
+
+.. js:function:: Channel.get_in_length(channel)
+
+ This function returns the length of the input part of the buffer.
+
+ :param class_channel channel: The manipulated Channel.
+ :returns: an integer containing the amount of available bytes.
+
+.. js:function:: Channel.get_out_length(channel)
+
+ This function returns the length of the output part of the buffer.
+
+ :param class_channel channel: The manipulated Channel.
+ :returns: an integer containing the amount of available bytes.
+
+.. js:function:: Channel.forward(channel, int)
+
+ This function transfers bytes from the input part of the buffer to the
+ output part.
+
+ :param class_channel channel: The manipulated Channel.
+ :param integer int: The amount of data which will be forwarded.
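+
+ As an example, an action can inspect the request buffer without consuming it
+ by using `Channel.dup()`; the action name "log-buffer" is only illustrative:
+
+.. code-block:: lua
+
+  core.register_action("log-buffer", { "tcp-req" }, function(txn)
+    -- dup() copies the buffer content without removing it.
+    local data = txn.req:dup()
+    if data then
+      txn:Info("request buffer length: " .. string.len(data))
+    end
+  end)
+..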
+
+
+.. _http_class:
+
+HTTP class
+==========
+
+.. js:class:: HTTP
+
+ This class contains all the HTTP manipulation functions.
+
+.. js:function:: HTTP.req_get_headers(http)
+
+ Returns an array containing all the request headers.
+
+ :param class_http http: The related http object.
+ :returns: array of headers.
+ :see: HTTP.res_get_headers()
+
+ This is the form of the returned array:
+
+.. code-block:: lua
+
+ HTTP:req_get_headers()['<header-name>'][<header-index>] = "<header-value>"
+
+ local hdr = HTTP:req_get_headers()
+ hdr["host"][0] = "www.test.com"
+ hdr["accept"][0] = "audio/basic q=1"
+ hdr["accept"][1] = "audio/*, q=0.2"
+ hdr["accept"][2] = "*/*, q=0.1"
+..
+
+.. js:function:: HTTP.res_get_headers(http)
+
+ Returns an array containing all the response headers.
+
+ :param class_http http: The related http object.
+ :returns: array of headers.
+ :see: HTTP.req_get_headers()
+
+ This is the form of the returned array:
+
+.. code-block:: lua
+
+ HTTP:res_get_headers()['<header-name>'][<header-index>] = "<header-value>"
+
+ local hdr = HTTP:res_get_headers()
+ hdr["host"][0] = "www.test.com"
+ hdr["accept"][0] = "audio/basic q=1"
+ hdr["accept"][1] = "audio/*, q=0.2"
+ hdr["accept"][2] = "*/*, q=0.1"
+..
+
+.. js:function:: HTTP.req_add_header(http, name, value)
+
+ Appends an HTTP header field in the request whose name is
+ specified in "name" and whose value is defined in "value".
+
+ :param class_http http: The related http object.
+ :param string name: The header name.
+ :param string value: The header value.
+ :see: HTTP.res_add_header()
+
+.. js:function:: HTTP.res_add_header(http, name, value)
+
+ appends an HTTP header field in the response whose name is
+ specified in "name" and whose value is defined in "value".
+
+ :param class_http http: The related http object.
+ :param string name: The header name.
+ :param string value: The header value.
+ :see: HTTP.req_add_header()
+
+.. js:function:: HTTP.req_del_header(http, name)
+
+ Removes all HTTP header fields in the request whose name is
+ specified in "name".
+
+ :param class_http http: The related http object.
+ :param string name: The header name.
+ :see: HTTP.res_del_header()
+
+.. js:function:: HTTP.res_del_header(http, name)
+
+ Removes all HTTP header fields in the response whose name is
+ specified in "name".
+
+ :param class_http http: The related http object.
+ :param string name: The header name.
+ :see: HTTP.req_del_header()
+
+.. js:function:: HTTP.req_set_header(http, name, value)
+
+ This function replaces all occurrences of the header "name" by a single one
+ containing the "value".
+
+ :param class_http http: The related http object.
+ :param string name: The header name.
+ :param string value: The header value.
+ :see: HTTP.res_set_header()
+
+ This function does the same work as the following code:
+
+.. code-block:: lua
+
+ function fcn(txn)
+ TXN.http:req_del_header("header")
+ TXN.http:req_add_header("header", "value")
+ end
+..
+
+.. js:function:: HTTP.res_set_header(http, name, value)
+
+ This function replaces all occurrences of the header "name" by a single one
+ containing the "value".
+
+ :param class_http http: The related http object.
+ :param string name: The header name.
+ :param string value: The header value.
+ :see: HTTP.req_set_header()
+
+.. js:function:: HTTP.req_rep_header(http, name, regex, replace)
+
+ Matches the regular expression in all occurrences of header field "name"
+ according to "regex", and replaces them with the "replace" argument. The
+ replacement value can contain back references like \1, \2, ... This
+ function works with the request.
+
+ :param class_http http: The related http object.
+ :param string name: The header name.
+ :param string regex: The match regular expression.
+ :param string replace: The replacement value.
+ :see: HTTP.res_rep_header()
+
+.. js:function:: HTTP.res_rep_header(http, name, regex, string)
+
+ Matches the regular expression in all occurrences of header field "name"
+ according to "regex", and replaces them with the "replace" argument. The
+ replacement value can contain back references like \1, \2, ... This
+ function works with the response.
+
+ :param class_http http: The related http object.
+ :param string name: The header name.
+ :param string regex: The match regular expression.
+ :param string replace: The replacement value.
+ :see: HTTP.req_rep_header()
+
+.. js:function:: HTTP.req_replace_value(http, name, regex, replace)
+
+ Works like "HTTP.req_rep_header()" except that it matches the regex
+ against every comma-delimited value of the header field "name" instead of
+ the entire header.
+
+ :param class_http http: The related http object.
+ :param string name: The header name.
+ :param string regex: The match regular expression.
+ :param string replace: The replacement value.
+ :see: HTTP.req_rep_header()
+ :see: HTTP.res_replace_value()
+
+.. js:function:: HTTP.res_replace_value(http, name, regex, replace)
+
+ Works like "HTTP.res_rep_header()" except that it matches the regex
+ against every comma-delimited value of the header field "name" instead of
+ the entire header.
+
+ :param class_http http: The related http object.
+ :param string name: The header name.
+ :param string regex: The match regular expression.
+ :param string replace: The replacement value.
+ :see: HTTP.res_rep_header()
+ :see: HTTP.req_replace_value()
+
+.. js:function:: HTTP.req_set_method(http, method)
+
+ Rewrites the request method with the parameter "method".
+
+ :param class_http http: The related http object.
+ :param string method: The new method.
+
+.. js:function:: HTTP.req_set_path(http, path)
+
+ Rewrites the request path with the "path" parameter.
+
+ :param class_http http: The related http object.
+ :param string path: The new path.
+
+.. js:function:: HTTP.req_set_query(http, query)
+
+ Rewrites the request's query string which appears after the first question
+ mark ("?") with the parameter "query".
+
+ :param class_http http: The related http object.
+ :param string query: The new query.
+
+.. js:function:: HTTP.req_set_uri(http, uri)
+
+ Rewrites the request URI with the parameter "uri".
+
+ :param class_http http: The related http object.
+ :param string uri: The new uri.
+
+.. js:function:: HTTP.res_set_status(http, status)
+
+ Rewrites the response status code with the parameter "status". Note that the
+ reason is automatically adapted to the new code.
+
+ :param class_http http: The related http object.
+ :param integer status: The new response status code.
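+
+ As an example, the HTTP class functions can be combined in an action applied
+ to responses; the header name "x-via" is only illustrative:
+
+.. code-block:: lua
+
+  core.register_action("tag-response", { "http-res" }, function(txn)
+    txn.http:res_add_header("x-via", "haproxy-lua")
+    txn.http:res_del_header("server")
+  end)
+..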
+
+.. _txn_class:
+
+TXN class
+=========
+
+.. js:class:: TXN
+
+ The txn class contains all the functions relative to the http or tcp
+ transaction (note that a tcp stream is the same as a tcp transaction, but
+ an HTTP transaction is not the same as a tcp stream).
+
+ The usage of this class permits retrieving data from the requests, altering
+ it and forwarding it.
+
+ All the functions provided by this class are available in the context
+ **sample-fetches** and **actions**.
+
+.. js:attribute:: TXN.c
+
+ :returns: An :ref:`converters_class`.
+
+ This attribute contains a Converters class object.
+
+.. js:attribute:: TXN.sc
+
+ :returns: An :ref:`converters_class`.
+
+ This attribute contains a Converters class object. The functions of
+ this object always return a string.
+
+.. js:attribute:: TXN.f
+
+ :returns: An :ref:`fetches_class`.
+
+ This attribute contains a Fetches class object.
+
+.. js:attribute:: TXN.sf
+
+ :returns: An :ref:`fetches_class`.
+
+ This attribute contains a Fetches class object. The functions of
+ this object always return a string.
+
+.. js:attribute:: TXN.req
+
+ :returns: An :ref:`channel_class`.
+
+ This attribute contains a channel class object for the request buffer.
+
+.. js:attribute:: TXN.res
+
+ :returns: An :ref:`channel_class`.
+
+ This attribute contains a channel class object for the response buffer.
+
+.. js:attribute:: TXN.http
+
+ :returns: An :ref:`http_class`.
+
+ This attribute contains an HTTP class object. It is available only if the
+ proxy has "mode http" enabled.
+
+.. js:function:: TXN.log(TXN, loglevel, msg)
+
+ This function sends a log. The log is sent, according to the HAProxy
+ configuration file, to the default syslog server if it is configured, and to
+ stderr if it is allowed.
+
+ :param class_txn txn: The class txn object containing the data.
+ :param integer loglevel: Is the log level associated with the message. It is
+ a number between 0 and 7.
+ :param string msg: The log content.
+ :see: core.emerg, core.alert, core.crit, core.err, core.warning, core.notice,
+ core.info, core.debug (log level definitions)
+ :see: TXN.deflog
+ :see: TXN.Debug
+ :see: TXN.Info
+ :see: TXN.Warning
+ :see: TXN.Alert
+
+.. js:function:: TXN.deflog(TXN, msg)
+
+ Sends a log line with the default loglevel for the proxy associated with the
+ transaction.
+
+ :param class_txn txn: The class txn object containing the data.
+ :param string msg: The log content.
+ :see: TXN.log
+
+.. js:function:: TXN.Debug(txn, msg)
+
+ :param class_txn txn: The class txn object containing the data.
+ :param string msg: The log content.
+ :see: TXN.log
+
+ Does the same job as:
+
+.. code-block:: lua
+
+ function Debug(txn, msg)
+ TXN.log(txn, core.debug, msg)
+ end
+..
+
+.. js:function:: TXN.Info(txn, msg)
+
+ :param class_txn txn: The class txn object containing the data.
+ :param string msg: The log content.
+ :see: TXN.log
+
+ Does the same job as:
+
+.. code-block:: lua
+
+ function Info(txn, msg)
+ TXN.log(txn, core.info, msg)
+ end
+..
+
+.. js:function:: TXN.Warning(txn, msg)
+
+ :param class_txn txn: The class txn object containing the data.
+ :param string msg: The log content.
+ :see: TXN.log
+
+ Does the same job as:
+
+.. code-block:: lua
+
+ function Warning(txn, msg)
+ TXN.log(txn, core.warning, msg)
+ end
+..
+
+.. js:function:: TXN.Alert(txn, msg)
+
+ :param class_txn txn: The class txn object containing the data.
+ :param string msg: The log content.
+ :see: TXN.log
+
+ Does the same job as:
+
+.. code-block:: lua
+
+ function Alert(txn, msg)
+ TXN.log(txn, core.alert, msg)
+ end
+..
+
+.. js:function:: TXN.get_priv(txn)
+
+ Returns Lua data stored in the current transaction (with the
+ `TXN.set_priv()` function). If no data is stored, it returns a nil value.
+
+ :param class_txn txn: The class txn object containing the data.
+ :returns: the opaque data previously stored, or nil if nothing is
+ available.
+
+.. js:function:: TXN.set_priv(txn, data)
+
+ Stores any data in the current HAProxy transaction. This action replaces the
+ old stored data.
+
+ :param class_txn txn: The class txn object containing the data.
+ :param opaque data: The data which is stored in the transaction.
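+
+ As an example, an action can store a value at request time and another
+ action can read it back at response time within the same transaction; the
+ action names are only illustrative:
+
+.. code-block:: lua
+
+  core.register_action("note-start", { "http-req" }, function(txn)
+    txn:set_priv(os.time())
+  end)
+
+  core.register_action("log-elapsed", { "http-res" }, function(txn)
+    local start = txn:get_priv()
+    if start then
+      txn:Info("elapsed: " .. (os.time() - start) .. "s")
+    end
+  end)
+..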
+
+.. js:function:: TXN.set_var(TXN, var, value)
+
+ Converts a Lua type into an HAProxy type and stores it in the variable <var>.
+
+ :param class_txn txn: The class txn object containing the data.
+ :param string var: The variable name according to the HAProxy variable syntax.
+ :param opaque value: The data which is stored in the variable.
+
+.. js:function:: TXN.get_var(TXN, var)
+
+ Returns the data stored in the variable <var>, converted to a Lua type.
+
+ :param class_txn txn: The class txn object containing the data.
+ :param string var: The variable name according to the HAProxy variable syntax.
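+
+ As an example, a value can be shared with native HAProxy rules through a
+ transaction-scoped variable; the variable name "txn.my_var" is only
+ illustrative:
+
+.. code-block:: lua
+
+  core.register_action("set-my-var", { "http-req" }, function(txn)
+    txn:set_var("txn.my_var", "some value")
+    txn:Info("my_var contains: " .. txn:get_var("txn.my_var"))
+  end)
+..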
+
+.. js:function:: TXN.get_headers(txn)
+
+ This function returns an array of headers.
+
+ :param class_txn txn: The class txn object containing the data.
+ :returns: an array of headers.
+
+.. js:function:: TXN.done(txn)
+
+ This function terminates processing of the transaction and the associated
+ session. It can be used when a critical error is detected or to terminate
+ processing after some data have been returned to the client (eg: a redirect).
+
+ :param class_txn txn: The class txn object containing the data.
+
+.. js:function:: TXN.set_loglevel(txn, loglevel)
+
+ Is used to change the log level of the current request. The "loglevel" must
+ be an integer between 0 and 7.
+
+ :param class_txn txn: The class txn object containing the data.
+ :param integer loglevel: The required log level. This variable can be one of
+ the core.<loglevel> attributes.
+ :see: core.emerg, core.alert, core.crit, core.err, core.warning,
+ core.notice, core.info, core.debug
+
+.. js:function:: TXN.set_tos(txn, tos)
+
+ Is used to set the TOS or DSCP field value of packets sent to the client to
+ the value passed in "tos" on platforms which support this.
+
+ :param class_txn txn: The class txn object containing the data.
+ :param integer tos: The new TOS or DSCP.
+
+.. js:function:: TXN.set_mark(txn, mark)
+
+ Is used to set the Netfilter MARK on all packets sent to the client to the
+ value passed in "mark" on platforms which support it.
+
+ :param class_txn txn: The class txn object containing the data.
+ :param integer mark: The mark value.
+
+.. _socket_class:
+
+Socket class
+============
+
+.. js:class:: Socket
+
+ This class must be compatible with the Lua Socket class. Only the 'client'
+ functions are available. See the Lua Socket documentation:
+
+ `http://w3.impa.br/~diego/software/luasocket/tcp.html
+ <http://w3.impa.br/~diego/software/luasocket/tcp.html>`_
+
+.. js:function:: Socket.close(socket)
+
+ Closes a TCP object. The internal socket used by the object is closed and the
+ local address to which the object was bound is made available to other
+ applications. No further operations (except for further calls to the close
+ method) are allowed on a closed Socket.
+
+ :param class_socket socket: Is the manipulated Socket.
+
+ Note: It is important to close all used sockets once they are not needed,
+ since, in many systems, each socket uses a file descriptor, which are limited
+ system resources. Garbage-collected objects are automatically closed before
+ destruction, though.
+
+.. js:function:: Socket.connect(socket, address[, port])
+
+ Attempts to connect a socket object to a remote host.
+
+
+ In case of error, the method returns nil followed by a string describing the
+ error. In case of success, the method returns 1.
+
+ :param class_socket socket: Is the manipulated Socket.
+ :param string address: can be an IP address or a host name. See below for more
+ information.
+ :param integer port: must be an integer number in the range [1..64K].
+ :returns: 1 or nil.
+
+ An extension of the address field allows the connect() function to reach
+ streams other than TCP. A plain IPv4 or IPv6 address is the basic expected
+ format; this format requires the port.
+
+ Other accepted formats are a socket path like "/socket/path", which connects
+ to a UNIX socket. Abstract namespaces are supported with the prefix "abns@",
+ and finally a file descriptor can be passed with the prefix "fd@". The
+ prefixes "ipv4@", "ipv6@" and "unix@" are also supported. The port can be
+ embedded in the string: the syntax "127.0.0.1:1234" is valid, and in this
+ case the parameter *port* is ignored.
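+
+ A sketch of the accepted address forms (assuming "s" is a fresh Socket from
+ core.tcp() each time; the calls are shown together only for brevity):
+
+.. code-block:: lua
+
+  s:connect("127.0.0.1", 1234)         -- plain IPv4 address plus port
+  s:connect("127.0.0.1:1234")          -- port embedded in the string
+  s:connect("ipv4@127.0.0.1:1234")     -- explicit address family prefix
+  s:connect("unix@/var/run/app.sock")  -- UNIX socket path
+  s:connect("abns@myservice")          -- abstract namespace socket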
+
+.. js:function:: Socket.connect_ssl(socket, address, port)
+
+ Same behavior as the function socket:connect, but uses SSL.
+
+ :param class_socket socket: Is the manipulated Socket.
+ :returns: 1 or nil.
+
+.. js:function:: Socket.getpeername(socket)
+
+ Returns information about the remote side of a connected client object.
+
+ Returns a string with the IP address of the peer, followed by the port number
+ that peer is using for the connection. In case of error, the method returns
+ nil.
+
+ :param class_socket socket: Is the manipulated Socket.
+ :returns: a string containing the server information.
+
+.. js:function:: Socket.getsockname(socket)
+
+ Returns the local address information associated to the object.
+
+ The method returns a string with local IP address and a number with the port.
+ In case of error, the method returns nil.
+
+ :param class_socket socket: Is the manipulated Socket.
+ :returns: a string containing the client information.
+
+.. js:function:: Socket.receive(socket, [pattern [, prefix]])
+
+ Reads data from a client object, according to the specified read pattern.
+ Patterns follow the Lua file I/O format, and the difference in performance
+ between all patterns is negligible.
+
+ :param class_socket socket: Is the manipulated Socket.
+ :param string|integer pattern: Describe what is required (see below).
+ :param string prefix: A string which will prefix the returned data.
+ :returns: a string containing the required data or nil.
+
+ Pattern can be any of the following:
+
+ * **`*a`**: reads from the socket until the connection is closed. No
+ end-of-line translation is performed;
+
+ * **`*l`**: reads a line of text from the Socket. The line is terminated by a
+ LF character (ASCII 10), optionally preceded by a CR character
+ (ASCII 13). The CR and LF characters are not included in the
+ returned line. In fact, all CR characters are ignored by the
+ pattern. This is the default pattern.
+
+ * **number**: causes the method to read a specified number of bytes from the
+ Socket. Prefix is an optional string to be concatenated to the
+ beginning of any received data before return.
+
+ * **empty**: If the pattern is left empty, the default option is `*l`.
+
+ If successful, the method returns the received pattern. In case of error, the
+ method returns nil followed by an error message which can be the string
+ 'closed' in case the connection was closed before the transmission was
+ completed or the string 'timeout' in case there was a timeout during the
+ operation. Also, after the error message, the function returns the partial
+ result of the transmission.
+
+ Important note: This function was changed severely. It used to support
+ multiple patterns (but I have never seen this feature used) and now it
+ doesn't anymore. Partial results used to be returned in the same way as
+ successful results. This last feature violated the idea that all functions
+ should return nil on error. Thus it was changed too.
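+
+ A sketch of the read patterns in use (assuming "s" is a connected Socket):
+
+.. code-block:: lua
+
+  local line = s:receive("*l")        -- one line, CR/LF stripped
+  local blob = s:receive(128)         -- exactly 128 bytes
+  local pfx  = s:receive(16, "hdr:")  -- 16 bytes returned as "hdr:" .. data
+  local rest = s:receive("*a")        -- everything until the peer closes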
+
+.. js:function:: Socket.send(socket, data [, start [, end ]])
+
+ Sends data through client object.
+
+ :param class_socket socket: Is the manipulated Socket.
+ :param string data: The data that will be sent.
+ :param integer start: The start position in the buffer of the data which will
+ be sent.
+ :param integer end: The end position in the buffer of the data which will
+ be sent.
+ :returns: see below.
+
+ Data is the string to be sent. The optional arguments *start* and *end* work
+ exactly like the standard string.sub Lua function to allow the selection of a
+ substring to be sent.
+
+ If successful, the method returns the index of the last byte within [start,
+ end] that has been sent. Notice that, if start is 1 or absent, this is
+ effectively the total number of bytes sent. In case of error, the method
+ returns nil, followed by an error message, followed by the index of the last
+ byte within [start, end] that has been sent. You might want to try again from
+ the byte following that. The error message can be 'closed' in case the
+ connection was closed before the transmission was completed or the string
+ 'timeout' in case there was a timeout during the operation.
+
+ Note: Output is not buffered. For small strings, it is always better to
+ concatenate them in Lua (with the '..' operator) and send the result in one
+ call instead of calling the method several times.
+
+.. js:function:: Socket.setoption(socket, option [, value])
+
+ Just implemented for compatibility, this call does nothing.
+
+.. js:function:: Socket.settimeout(socket, value [, mode])
+
+ Changes the timeout values for the object. All I/O operations are blocking.
+ That is, any call to the methods send, receive, and accept will block
+ indefinitely, until the operation completes. The settimeout method defines a
+ limit on the amount of time the I/O methods can block. When a timeout time
+ has elapsed, the affected methods give up and fail with an error code.
+
+ The amount of time to wait is specified as the value parameter, in seconds.
+
+ The timeout modes are not implemented; the only settable timeout is the
+ inactivity time waiting for the internal buffer send to complete or for
+ data to be received.
+
+ :param class_socket socket: Is the manipulated Socket.
+ :param integer value: The timeout value.
+
+.. _map_class:
+
+Map class
+=========
+
+.. js:class:: Map
+
+ This class allows lookups in HAProxy maps. The declared maps can be
+ modified at runtime through the HAProxy management socket.
+
+.. code-block:: lua
+
+ default = "usa"
+
+ -- Create and load map
+ geo = Map.new("geo.map", Map.ip);
+
+ -- Create new fetch that returns the user country
+ core.register_fetches("country", function(txn)
+ local src;
+ local loc;
+
+ src = txn.f:fhdr("x-forwarded-for");
+ if (src == nil) then
+ src = txn.f:src()
+ if (src == nil) then
+ return default;
+ end
+ end
+
+ -- Perform lookup
+ loc = geo:lookup(src);
+
+ if (loc == nil) then
+ return default;
+ end
+
+ return loc;
+ end);
+
+.. js:attribute:: Map.int
+
+ See the HAProxy configuration.txt file, chapter "Using ACLs and fetching
+ samples" and subchapter "ACL basics" to understand this pattern matching
+ method.
+
+.. js:attribute:: Map.ip
+
+ See the HAProxy configuration.txt file, chapter "Using ACLs and fetching
+ samples" and subchapter "ACL basics" to understand this pattern matching
+ method.
+
+.. js:attribute:: Map.str
+
+ See the HAProxy configuration.txt file, chapter "Using ACLs and fetching
+ samples" and subchapter "ACL basics" to understand this pattern matching
+ method.
+
+.. js:attribute:: Map.beg
+
+ See the HAProxy configuration.txt file, chapter "Using ACLs and fetching
+ samples" and subchapter "ACL basics" to understand this pattern matching
+ method.
+
+.. js:attribute:: Map.sub
+
+ See the HAProxy configuration.txt file, chapter "Using ACLs and fetching
+ samples" and subchapter "ACL basics" to understand this pattern matching
+ method.
+
+.. js:attribute:: Map.dir
+
+ See the HAProxy configuration.txt file, chapter "Using ACLs and fetching
+ samples" and subchapter "ACL basics" to understand this pattern matching
+ method.
+
+.. js:attribute:: Map.dom
+
+ See the HAProxy configuration.txt file, chapter "Using ACLs and fetching
+ samples" and subchapter "ACL basics" to understand this pattern matching
+ method.
+
+.. js:attribute:: Map.end
+
+ See the HAProxy configuration.txt file, chapter "Using ACLs and fetching
+ samples" and subchapter "ACL basics" to understand this pattern matching
+ method.
+
+.. js:attribute:: Map.reg
+
+ See the HAProxy configuration.txt file, chapter "Using ACLs and fetching
+ samples" and subchapter "ACL basics" to understand this pattern matching
+ method.
+
+
+.. js:function:: Map.new(file, method)
+
+ Creates and loads a map.
+
+ :param string file: Is the file containing the map.
+ :param integer method: Is the map pattern matching method. See the attributes
+ of the Map class.
+ :returns: a class Map object.
+ :see: The Map attributes.
+
+.. js:function:: Map.lookup(map, str)
+
+ Perform a lookup in a map.
+
+ :param class_map map: Is the class Map object.
+ :param string str: Is the string used as key.
+ :returns: a string containing the result or nil if no match.
+
+.. js:function:: Map.slookup(map, str)
+
+ Perform a lookup in a map.
+
+ :param class_map map: Is the class Map object.
+ :param string str: Is the string used as key.
+ :returns: a string containing the result or empty string if no match.
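+
+ The only difference with *Map.lookup()* is the returned value on a miss. A
+ sketch (the map file name and the looked-up address are illustrative):
+
+.. code-block:: lua
+
+  local geo = Map.new("geo.map", Map.ip)
+  local a = geo:lookup("203.0.113.7")   -- nil when there is no match
+  local b = geo:slookup("203.0.113.7")  -- "" (empty string) when no match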
+
+.. _applethttp_class:
+
+AppletHTTP class
+================
+
+.. js:class:: AppletHTTP
+
+ This class is used with applets that require the 'http' mode. The http applet
+ can be registered with the *core.register_service()* function. They are used
+ for processing an HTTP request like a server behind HAProxy.
+
+ This is a "hello world" sample code:
+
+.. code-block:: lua
+
+ core.register_service("hello-world", "http", function(applet)
+ local response = "Hello World !"
+ applet:set_status(200)
+ applet:add_header("content-length", string.len(response))
+ applet:add_header("content-type", "text/plain")
+ applet:start_response()
+ applet:send(response)
+ end)
+
+.. js:attribute:: AppletHTTP.c
+
+ :returns: A :ref:`converters_class`
+
+ This attribute contains a Converters class object.
+
+.. js:attribute:: AppletHTTP.sc
+
+ :returns: A :ref:`converters_class`
+
+ This attribute contains a Converters class object. The
+ functions of this object always return a string.
+
+.. js:attribute:: AppletHTTP.f
+
+ :returns: A :ref:`fetches_class`
+
+ This attribute contains a Fetches class object. Note that the
+ applet execution context cannot access a valid HAProxy core HTTP
+ transaction, so some sample fetches related to HTTP-dependent
+ values (hdr, path, ...) are not available.
+
+.. js:attribute:: AppletHTTP.sf
+
+ :returns: A :ref:`fetches_class`
+
+ This attribute contains a Fetches class object. The functions of
+ this object always return a string. Note that the applet
+ execution context cannot access a valid HAProxy core HTTP
+ transaction, so some sample fetches related to HTTP-dependent
+ values (hdr, path, ...) are not available.
+
+.. js:attribute:: AppletHTTP.method
+
+ :returns: string
+
+ The attribute method returns a string containing the HTTP
+ method.
+
+.. js:attribute:: AppletHTTP.version
+
+ :returns: string
+
+ The attribute version returns a string containing the HTTP
+ request version.
+
+.. js:attribute:: AppletHTTP.path
+
+ :returns: string
+
+ The attribute path returns a string containing the HTTP
+ request path.
+
+.. js:attribute:: AppletHTTP.qs
+
+ :returns: string
+
+ The attribute qs returns a string containing the HTTP
+ request query string.
+
+.. js:attribute:: AppletHTTP.length
+
+ :returns: integer
+
+ The attribute length returns an integer containing the HTTP
+ body length.
+
+.. js:attribute:: AppletHTTP.headers
+
+ :returns: array
+
+ The attribute headers returns an array containing the HTTP
+ headers. The header names are always in lower case. As the header name can be
+ encountered more than once in each request, the value is indexed starting
+ at 0. The array has this form:
+
+.. code-block:: lua
+
+ AppletHTTP.headers['<header-name>'][<header-index>] = "<header-value>"
+
+ AppletHTTP.headers["host"][0] = "www.test.com"
+ AppletHTTP.headers["accept"][0] = "audio/basic q=1"
+ AppletHTTP.headers["accept"][1] = "audio/*, q=0.2"
+ AppletHTTP.headers["accept"][2] = "*/*, q=0.1"
+..
+
+.. js:function:: AppletHTTP.set_status(applet, code)
+
+ This function sets the HTTP status code for the response. The allowed codes
+ are from 100 to 599.
+
+ :param class_AppletHTTP applet: An :ref:`applethttp_class`
+ :param integer code: the status code returned to the client.
+
+.. js:function:: AppletHTTP.add_header(applet, name, value)
+
+ This function adds a header in the response. Duplicated headers are not
+ collapsed. The special header *content-length* is used to determine the
+ response length. If it does not exist, a *transfer-encoding: chunked* is set,
+ and each write from the function *AppletHTTP:send()* becomes a chunk.
+
+ :param class_AppletHTTP applet: An :ref:`applethttp_class`
+ :param string name: the header name
+ :param string value: the header value
+
+.. js:function:: AppletHTTP.start_response(applet)
+
+ This function indicates to the HTTP engine that it can process and send the
+ response headers. After this call we cannot add headers to the response; we
+ cannot use the *AppletHTTP:send()* function if
+ *AppletHTTP:start_response()* has not been called.
+
+ :param class_AppletHTTP applet: An :ref:`applethttp_class`
+
+.. js:function:: AppletHTTP.getline(applet)
+
+ This function returns a string containing one line from the HTTP body. If the
+ returned data doesn't contain a final '\\n', it is assumed to be the last
+ available data before the end of stream.
+
+ :param class_AppletHTTP applet: An :ref:`applethttp_class`
+ :returns: a string. The string can be empty if we reach the end of the stream.
+
+.. js:function:: AppletHTTP.receive(applet, [size])
+
+ Reads data from the HTTP body, according to the specified read *size*. If the
+ *size* is missing, the function tries to read all the content of the stream
+ until the end. If the *size* is bigger than the HTTP body, it returns the
+ amount of data available.
+
+ :param class_AppletHTTP applet: An :ref:`applethttp_class`
+ :param integer size: the required read size.
+ :returns: always returns a string; the string can be empty if the connection
+ is closed.
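+
+ A sketch of a service echoing the request body back to the client; the
+ service name "echo-body" is illustrative:
+
+.. code-block:: lua
+
+  core.register_service("echo-body", "http", function(applet)
+    local body = applet:receive()  -- read the whole request body
+    applet:set_status(200)
+    applet:add_header("content-length", string.len(body))
+    applet:add_header("content-type", "application/octet-stream")
+    applet:start_response()
+    applet:send(body)
+  end)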
+
+.. js:function:: AppletHTTP.send(applet, msg)
+
+ Sends the message *msg* on the HTTP response body.
+
+ :param class_AppletHTTP applet: An :ref:`applethttp_class`
+ :param string msg: the message to send.
+
+.. _applettcp_class:
+
+AppletTCP class
+===============
+
+.. js:class:: AppletTCP
+
+ This class is used with applets that require the 'tcp' mode. The tcp applet
+ can be registered with the *core.register_service()* function. They are used
+ for processing a TCP stream like a server behind HAProxy.
+
+.. js:attribute:: AppletTCP.c
+
+ :returns: A :ref:`converters_class`
+
+ This attribute contains a Converters class object.
+
+.. js:attribute:: AppletTCP.sc
+
+ :returns: A :ref:`converters_class`
+
+ This attribute contains a Converters class object. The
+ functions of this object always return a string.
+
+.. js:attribute:: AppletTCP.f
+
+ :returns: A :ref:`fetches_class`
+
+ This attribute contains a Fetches class object.
+
+.. js:attribute:: AppletTCP.sf
+
+ :returns: A :ref:`fetches_class`
+
+ This attribute contains a Fetches class object.
+
+.. js:function:: AppletTCP.getline(applet)
+
+ This function returns a string containing one line from the stream. If the
+ returned data doesn't contain a final '\\n', it is assumed to be the last
+ available data before the end of stream.
+
+ :param class_AppletTCP applet: An :ref:`applettcp_class`
+ :returns: a string. The string can be empty if we reach the end of the stream.
+
+.. js:function:: AppletTCP.receive(applet, [size])
+
+ Reads data from the TCP stream, according to the specified read *size*. If the
+ *size* is missing, the function tries to read all the content of the stream
+ until the end.
+
+ :param class_AppletTCP applet: An :ref:`applettcp_class`
+ :param integer size: the required read size.
+ :returns: always returns a string; the string can be empty if the connection
+ is closed.
+
+.. js:function:: AppletTCP.send(applet, msg)
+
+ Sends the message *msg* on the stream.
+
+ :param class_AppletTCP applet: An :ref:`applettcp_class`
+ :param string msg: the message to send.
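+
+ A sketch of a line-based TCP echo service built from the functions above;
+ the service name "echo" is illustrative:
+
+.. code-block:: lua
+
+  core.register_service("echo", "tcp", function(applet)
+    while true do
+      local line = applet:getline()
+      if line == "" then break end  -- end of stream reached
+      applet:send(line)
+    end
+  end)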
+
+External Lua libraries
+======================
+
+A lot of useful Lua libraries can be found here:
+
+* `https://lua-toolbox.com/ <https://lua-toolbox.com/>`_
+
+Redis access:
+
+* `https://github.com/nrk/redis-lua <https://github.com/nrk/redis-lua>`_
+
+This is an example of the usage of the Redis library with HAProxy. Note that
+each call of any function of this library can throw an error if the socket
+connection fails.
+
+.. code-block:: lua
+
+ -- load the redis library
+ local redis = require("redis");
+
+ function do_something(txn)
+
+ -- create and connect new tcp socket
+ local tcp = core.tcp();
+ tcp:settimeout(1);
+ tcp:connect("127.0.0.1", 6379);
+
+ -- use the redis library with this new socket
+ local client = redis.connect({socket=tcp});
+ client:ping();
+
+ end
+
+OpenSSL:
+
+* `http://mkottman.github.io/luacrypto/index.html
+ <http://mkottman.github.io/luacrypto/index.html>`_
+
+* `https://github.com/brunoos/luasec/wiki
+ <https://github.com/brunoos/luasec/wiki>`_
+
--- /dev/null
+ Lua: Architecture and first steps
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ version 1.0
+
+ author: Thierry FOURNIER
+ contact: tfournier at arpalert dot org
+
+
+
+HAProxy is a powerful load balancer. It embeds many options and many
+configuration styles in order to give a solution to many load balancing
+problems. However, HAProxy is not universal and some special or specific
+problems do not have a solution with the native software.
+
+This text is not a full explanation of the Lua syntax.
+
+This text is not a replacement of the HAProxy Lua API documentation. The API
+documentation can be found at the project root, in the documentation directory.
+The goal of this text is to discover how Lua is implemented in HAProxy and how
+to use it efficiently.
+
+However, this can be read by Lua beginners. Some examples are detailed.
+
+Why a scripting language in HAProxy
+===================================
+
+HAProxy 1.5 makes it possible to do many things using samples, but some people
+want to go further, combining results of sample fetches with programmed
+conditions and loops, which is not possible. Sometimes people implement these
+functionalities in patches which have no meaning outside their network. These
+people must maintain these patches, or worse, we must integrate them in the
+HAProxy mainstream.
+
+Their need is to have an embedded programming language in order to no longer
+modify the HAProxy source code, but to write their own control code. Lua is
+encountered very often in the software industry, and in some open source
+projects. It is easy to understand, efficient, light without external
+dependencies, and leaves the resource control to the implementation. Its design
+is close to the HAProxy philosophy which uses components for what they do
+perfectly.
+
+The HAProxy control block allows one to take a decision based on the comparison
+between samples and patterns. The samples are extracted using fetch functions
+easily extensible, and are used by actions which are also extensible. It seems
+natural to allow Lua to give samples, modify them, and to be an action target.
+So, Lua uses the same entities as the configuration language. This is the most
+natural and reliable way for the Lua integration. So, the Lua engine allows one
+to add new sample fetch functions, new converter functions and new actions.
+These new entities can access the existing samples fetches and converters
+allowing to extend them without rewriting them.
+
+The writing of the first Lua functions shows that implementing complex concepts
+like protocol analysers is easy and can be extended to full services. It appears
+that these services are not easy to implement with the HAProxy configuration
+model which is based on four steps: fetch, convert, compare and action. HAProxy
+is extended with a notion of services which are a formalisation of the existing
+services like stats, cli and peers. The service is an autonomous entity with a
+behaviour pattern close to that of an external client or server. The Lua engine
+inherits from this new service and offers new possibilities for writing
+services.
+
+This scripting language is useful for testing new features as proof of concept.
+Later, if there is general interest, the proof of concept could be integrated
+with C language in the HAProxy core.
+
+The HAProxy Lua integration also provides a simple way for distributing Lua
+packages. The final user needs only to install the Lua file, load it in HAProxy
+and follow the attached documentation.
+
+Design and technical things
+===========================
+
+Lua is integrated into the HAProxy event driven core. We want to preserve the
+fast processing of HAProxy. To ensure this, we implement some technical concepts
+between HAProxy and the Lua library.
+
+The following paragraph also describes the interactions between Lua and HAProxy
+from a technical point of view.
+
+Prerequisite
+------------
+
+Reading the following documentation links is required to understand the
+current paragraph:
+
+ HAProxy doc: http://cbonte.github.io/haproxy-dconv/configuration-1.6.html
+ Lua API: http://www.lua.org/manual/5.3/
+ HAProxy API: http://www.arpalert.org/src/haproxy-lua-api/1.6/index.html
+ Lua guide: http://www.lua.org/pil/
+
+More about the Lua choice
+-------------------------
+
+Lua language is very simple to extend. It is easy to add new functions written
+in C in the core language. It does not require embedding very intrusive
+libraries, and we do not change the compilation process.
+
+The amount of memory consumed can be controlled, and the issues due to lack of
+memory are perfectly caught. The maximum amount of memory allowed for the Lua
+processes is configurable. If some memory is missing, the current Lua action
+fails, and the HAProxy processing flow continues.
+
+Lua provides a way for implementing event driven design. When the Lua code
+wants to do a blocking action, the action is started, it executes non blocking
+operations, and returns control to the HAProxy scheduler when it needs to wait
+for some external event.
+
+The Lua process can be interrupted after a number of instructions executed. The
+Lua execution will resume later. This is a useful way for controlling the
+execution time. This system also keeps HAProxy responsive. When the Lua
+execution is interrupted, HAProxy accepts some connections or transfers pending
+data. The Lua execution does not block the main HAProxy processing, except in
+some cases which we will see later.
+
+Lua function integration
+------------------------
+
+The Lua actions, sample fetches, converters and services are integrated in
+HAProxy with "register_*" functions. The register system is a choice for
+providing HAProxy Lua packages easily. The register system adds new sample
+fetches, converters, actions or services usable in the HAProxy configuration
+file.
+
+The register system is defined in the "core" functions collection. This
+collection is provided by HAProxy and is always available. Below, the list of
+these functions:
+
+ - core.register_action()
+ - core.register_converters()
+ - core.register_fetches()
+ - core.register_init()
+ - core.register_service()
+ - core.register_task()
+
+These functions are the execution entry points.
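+
+As a small sketch, registering a new sample fetch looks like this (the fetch
+name "my-hostname" and the returned value are illustrative):
+
+   core.register_fetches("my-hostname", function(txn)
+       return "server-1"
+   end)
+
+Once loaded, it is usable from the configuration like any sample fetch, for
+example as "%[lua.my-hostname]" in a log-format or header rule.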
+
+The HTTP action must be used for manipulating HTTP request headers. This
+action cannot manipulate HTTP content. It is dangerous to use the channel
+manipulation object with an HTTP request in an HTTP action. The channel
+manipulation can transform a valid request into an invalid request. In this
+case, the action will never resume and the processing will be frozen. HAProxy
+discards the request after the reception timeout.
+
+Non blocking design
+-------------------
+
+HAProxy is an event driven software, so blocking system calls are absolutely
+forbidden. However, Lua allows blocking actions. When an action blocks,
+HAProxy waits and does nothing, so basic functionalities like accepting
+connections or forwarding data are blocked until the end of the system call.
+In this case HAProxy will be less responsive.
+
+This is very insidious because when the developer tests their Lua code
+with only one stream, HAProxy seems to run fine. When the code is used with
+a production load, HAProxy encounters some slow processing and cannot
+hold the load.
+
+However, during the initialisation state, you can obviously use blocking
+functions. They are typically used for loading files.
+
+The list of prohibited standard Lua functions during the runtime contains all
+those that perform filesystem access:
+
+ - os.remove()
+ - os.rename()
+ - os.tmpname()
+ - package.*()
+ - io.*()
+ - file.*()
+
+Some other functions are prohibited:
+
+ - os.execute(), waits for the end of the required execution blocking HAProxy.
+
+ - os.exit(), is not really dangerous for the process, but it is not the good
+ way for exiting the HAProxy process.
+
+ - print(), writes data on stdout. In some cases these writes are blocking;
+ the best practice is reserving this call for debugging. Prefer using
+ core.log() or TXN.log() for sending messages.
+
+Some HAProxy functions have a blocking behaviour pattern in the Lua code, but
+they are compatible with the non blocking design. These functions are:
+
+ - All the socket class
+ - core.sleep()
+
+Responsive design
+-----------------
+
+HAProxy must accept connections, forward data and process timeouts as soon as
+possible. One might believe that a Lua script with a long execution time would
+impact the expected responsive behaviour.
+
+This is not the case: the Lua script execution is regularly interrupted, and
+HAProxy can process other things. These interruptions are expressed in number
+of Lua instructions. The number of instructions between two interrupts is
+configured with the following "tune" option:
+
+ tune.lua.forced-yield <nb>
+
+The default value is 10 000. For determining it, I ran a benchmark on my
+laptop. I executed a Lua loop during 10 seconds with different values for the
+"tune.lua.forced-yield" option, and I noted the results:
+
+ configured | Number of
+ instructions | loops executed
+ between two | in millions
+ forced yields |
+ ---------------+---------------
+ 10 | 160
+ 500 | 670
+ 1000 | 680
+ 5000 | 700
+ 7000 | 700
+ 8000 | 700
+ 9000 | 710 <- ceil
+ 10000 | 710
+ 100000 | 710
+ 1000000 | 710
+
+The result showed that from 9000 instructions between two interrupts, we reach
+a ceiling, so the default parameter is 10 000.
+
+When HAProxy interrupts the Lua processing, there are two possible states:
+
+ - Lua is resumable, and it returns control to the HAProxy scheduler,
+ - Lua is not resumable, and we just check the execution timeout.
+
+The second case occurs if it is required by the HAProxy core. This state is
+forced if the Lua is processed in a non resumable HAProxy part, like sample
+fetches or converters.
+
+It occurs also if the Lua is non resumable. For example, if some code is
+executed through the Lua pcall() function, the execution is not resumable. This
+is explained later.
+
+So, the Lua code must be fast and simple when it is executed as sample fetches
+and converters; it can be slow and complex when it is executed as actions and
+services.
+
+Execution time
+--------------
+
+The Lua execution time is measured and limited. Each group of functions has its
+own configured timeout. The time measured is the real Lua execution time, and
+not the difference between the end time and the start time. The groups are:
+
+ - main code and init are not submitted to the timeout,
+ - fetches, converters and actions have a default timeout of 4s,
+ - tasks, by default, do not have a timeout,
+ - services have a default timeout of 4s.
+
+The corresponding tune options are:
+
+ - tune.lua.session-timeout (fetches, converters and action)
+ - tune.lua.task-timeout (task)
+ - tune.lua.service-timeout (services)
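+
+These are "global" section keywords; a sketch with illustrative values:
+
+   global
+       tune.lua.session-timeout 4s
+       tune.lua.task-timeout    5s
+       tune.lua.service-timeout 4s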
+
+Tasks do not have a timeout because they run in the background for the whole
+HAProxy process life.
+
+For example, if a Lua script is executed during 1.1s and the script executes a
+sleep of 1 second, the effective measured running time is 0.1s.
+
+This timeout is useful for preventing infinite loops. During the runtime, it
+should never be triggered.
+
+The stack and the coprocess
+---------------------------
+
+The Lua execution is organized around a stack. Each Lua action, even out of the
+effective execution, affects the stack. HAProxy integration uses one main
+stack, which is common to the whole process, and a secondary one used as a
+coprocess. After the initialization, the main stack is no longer used by
+HAProxy, except for global storage. The secondary type of stack is used by all
+the Lua functions called from the different Lua actions declared in HAProxy.
+The main stack stores coroutine pointers and some global variables.
+
+Do you want to see an example of what Lua C development around a stack looks
+like? Some examples follow. This first one is a simple addition:
+
+ lua_pushnumber(L, 1)
+ lua_pushnumber(L, 2)
+ lua_arith(L, LUA_OPADD)
+
+It is easy: we push 1 on the stack, then we push 2, and finally we perform an
+addition. The two top entries of the stack are added, popped, and the result
+is pushed. It is the classic way with a stack.
+
+Now an example for constructing arrays and objects. It is a little bit more
+complicated. The difficulty consists in keeping in mind the state of the stack
+while we write the code. The goal is to create the entity described below.
+Note that the notation "*1" is a metatable reference. The metatable will be
+explained later.
+
+ name*1 = {
+ [0] = <userdata>,
+ }
+
+ *1 = {
+ "__index" = {
+ "method1" = <function>,
+ "method2" = <function>
+ }
+ "__gc" = <function>
+ }
+
+Let's go:
+
+ lua_newtable() // The "name" table
+ lua_newtable() // The metatable *1
+ lua_pushstring("__index")
+ lua_newtable() // The "__index" table
+ lua_pushstring("method1")
+ lua_pushfunction(function)
+ lua_settable(-3) // -3 is an index in the stack. insert method1
+ lua_pushstring("method2")
+ lua_pushfunction(function)
+ lua_settable(-3) // insert method2
+ lua_settable(-3) // insert "__index"
+ lua_pushstring("__gc")
+ lua_pushfunction(function)
+ lua_settable(-3) // insert "__gc"
+ lua_setmetatable(-1) // attach metatable to "name"
+ lua_pushnumber(0)
+ lua_pushuserdata(userdata)
+ lua_settable(-3)
+ lua_setglobal("name")
+
+So, coding for Lua in C is not complex, but it needs some mental gymnastics.
+
+The object concept and the HAProxy format
+-----------------------------------------
+
+Objects do not seem to be a native concept. A Lua object is a table. We can
+note that the table notation accepts three forms:
+
+ 1. mytable["entry"](mytable, "param")
+ 2. mytable.entry(mytable, "param")
+ 3. mytable:entry("param")
+
+These three notations have the same behaviour pattern: a function is executed
+with the table itself as first parameter and the string "param" as second
+parameter.
+The notation with [] is commonly used for storing data in a hash table, and the
+dotted notation is used for objects. The notation with ":" indicates that the
+first parameter is the element at the left of the symbol ":".
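+
+This equivalence can be checked with plain Lua:
+
+   local mytable = {}
+   mytable.entry = function(self, param) return param .. "!" end
+
+   mytable["entry"](mytable, "param")  -- all three calls are strictly
+   mytable.entry(mytable, "param")     -- equivalent and return "param!"
+   mytable:entry("param")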
+
+So, an object is a table, and each entry of the table is a variable. A variable
+can be a function. These are the first concepts of object notation in Lua, but
+there is more.
+
+With objects, we usually expect classes and inheritance. This is the role of
+the metatable. A metatable is a table with predefined entries which modify the
+default behaviour of the table. The simplest example is the "__index" entry:
+if this entry exists, it is consulted when a requested key is not found in the
+table itself. The lookup behaviour is the following:
+
+ 1 - look in the table for the entry; if it exists, return it
+
+ 2 - look whether a metatable exists with an "__index" entry
+
+ 3 - if "__index" is a function, execute it with the key as parameter, and
+ return the result of the function
+
+ 4 - if "__index" is a table, look for the requested entry in it; if it
+ exists, return it
+
+ 5 - if it does not exist, return to step 2 with the "__index" table
+
+The behaviour of step 5 implements inheritance.
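This lookup chain can be observed in plain Lua (a standalone sketch, outside
HAProxy):

```lua
-- A "class" is a plain table holding the methods.
local Base = {}
Base.__index = Base   -- failed lookups on instances fall through to Base

function Base.hello(self)
    return "hello from " .. self.name
end

-- An "instance" is a table whose metatable points to the class.
local obj = setmetatable({ name = "obj1" }, Base)

-- "hello" is not in obj itself (step 1 fails), so Lua follows the
-- metatable's "__index" table (steps 2 and 4) and finds Base.hello.
print(obj:hello())    -- prints "hello from obj1"
```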
+
+In HAProxy, all the provided objects are tables whose entry "[0]" contains
+private data, often userdata or lightuserdata. The metatable is registered in
+the global part of the main Lua stack under the case-sensitive class name.
+Most of these classes must not be instantiated directly, because they require
+an initialisation using HAProxy internal structs.
+
+The HAProxy objects use unified conventions. A Lua object is always a table.
+In most cases, an HAProxy Lua object needs some private data. This data is
+always stored at index [0] of the table. The metatable entry "__tostring"
+returns the object name.
+
+The Lua developer can add entries to an HAProxy object, provided that he works
+carefully and never modifies index [0].
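For example, a script may attach its own bookkeeping entry to an object it
receives, as long as index [0] is left alone (a sketch using the action
registration described later; the entry name "note" is arbitrary):

```lua
core.register_action("tag", { "http-req" }, function(txn)
    txn.note = "seen"    -- adding a new entry to the object is allowed
    -- txn[0] = nil      -- never do this: [0] holds the private data
end)
```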
+
+Common HAProxy objects are:
+
+ - TXN : manipulates the transaction between the client and the server
+ - Channel : manipulates proxified data between the client and the server
+ - HTTP : manipulates HTTP between the client and the server
+ - Map : manipulates HAProxy maps.
+ - Fetches : access to all HAProxy sample fetches
+ - Converters : access to all HAProxy sample converters
+ - AppletTCP : processes client requests like a TCP server
+ - AppletHTTP : processes client requests like an HTTP server
+ - Socket : establishes a TCP connection to a server (IPv4/IPv6/socket/SSL/...)
+
+The garbage collector and the memory allocation
+-----------------------------------------------
+
+Lua doesn't have a global memory limit of its own, but HAProxy implements one.
+This permits controlling the amount of memory dedicated to Lua processing. It
+is especially useful in embedded environments.
+
+When the memory limit is reached, HAProxy refuses to give more memory to the
+Lua scripts. The current Lua execution is terminated with an error, and HAProxy
+continues its processing.
+
+The maximum amount of memory is configured with the option:
+
+ tune.lua.maxmem
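For example, a global section limiting Lua to roughly one megabyte might look
like this (the value is illustrative; a value of zero, the default, means
unlimited):

```
global
    tune.lua.maxmem 1
```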
+
+Like many other scripting languages, Lua uses a garbage collector to reclaim
+its memory, so the Lua developer can work without worrying about memory.
+Usually, the garbage collector is controlled by the Lua core, but sometimes it
+is useful to run it on demand, so it can be invoked from the C part or from
+the Lua part.
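From the Lua side, the collector is driven with the standard collectgarbage()
function (plain Lua; works in any Lua 5.3 interpreter):

```lua
collectgarbage("collect")            -- force a full collection cycle
local kb = collectgarbage("count")   -- memory currently in use, in kilobytes
print(kb)
```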
+
+Sometimes, objects using lightuserdata or userdata require freeing a memory
+block or closing a file descriptor not controlled by Lua. A dedicated
+garbage-collection function is provided through the metatable, referenced by
+the special entry "__gc".
+
+Generally, in HAProxy, the garbage collector does this job without any
+intervention. However, some objects use a large amount of memory that we want
+to release as quickly as possible. The problem is that only the GC knows
+whether an object is still in use. The reason is simple: variables containing
+objects can be shared between coroutines and the main thread, so an object can
+be used anywhere in HAProxy.
+
+The main example is the HAProxy socket. It is explained later; for now, just
+to understand the GC issues, a quick overview of the socket follows. The
+HAProxy socket uses an internal session and stream; these use resources like
+memory and file descriptors, and in some cases keep a socket open while it is
+no longer used by Lua.
+
+If the HAProxy socket is used, we force a garbage-collector cycle after the
+end of each function using it. The reason is simple: if the socket is no
+longer used, we want to close the connection quickly.
+
+A special flag in HAProxy indicates that an HAProxy socket was created. If
+this flag is set, a full GC cycle is started after each Lua action. This is
+not free: we lose about 10% of performance, but it is the only way to close
+sockets quickly.
+
+The yield concept / longjmp issues
+----------------------------------
+
+A "yield" is an action which puts some Lua processing on pause and gives the
+hand back to the HAProxy core. This action is performed when Lua needs to wait
+for data or other things. The most basic example is the sleep() function. In
+event-driven software, the code must not perform blocking system calls, since
+a blocking sleep would stall the whole process for a long time. In HAProxy, a
+Lua sleep does a yield and asks the scheduler to be woken up after the
+requested sleep time. Meanwhile, the HAProxy scheduler does other things, like
+accepting new connections or forwarding data.
+
+A yield is also executed regularly, after a number of Lua instructions have
+been processed. This yield permits controlling the effective execution time
+and also gives the hand back to the HAProxy core. When HAProxy finishes
+processing the pending jobs, the Lua execution continues.
+
+This special "yield" uses the Lua "debug" functions. Lua provides a debug
+hook, "lua_sethook()", which permits interrupting the execution when a
+configured condition is met and calling a function. The conditions used in
+HAProxy are a number of instructions processed and function returns. The hook
+function checks the effective execution time and, if possible, sends a
+"yield".
+
+The yield system is based on a setjmp/longjmp pair. In brief, setjmp() stores
+a stack state, and longjmp() restores the stack to the state it had before
+the last Lua execution.
+
+Lua can immediately stop its execution if an error occurs. This mechanism also
+relies on longjmp. In HAProxy, we try to use this mechanism only for
+unrecoverable errors. Maybe some trivial errors still raise an exception, but
+we try to remove them.
+
+Lua uses the longjmp mechanism to obtain a behaviour like Java's try/catch.
+The function pcall() can be used to execute some code; pcall() performs a
+setjmp(). So, if any error occurs during the Lua code execution, the flow
+immediately returns from pcall() with an error.
+
+The big issue with this behaviour is that we cannot yield across it. So if
+some Lua code executes a library using pcall() to catch errors, HAProxy must
+wait for the end of that execution without processing any accept or any
+stream. The cause is that a yield must jump back to the root of the
+execution, and the intermediate setjmp() prevents this.
+
+
+ HAProxy starts Lua execution
+ + Lua performs a setjmp()
+ + Lua executes code
+ + Some code is executed in a pcall()
+ + pcall() performs a setjmp()
+ + Lua executes code
+ + A yield is required for a sleep function:
+ it cannot jump to the Lua root execution.
+
+
+Another issue with the processing of strong errors is the manipulation of the
+Lua stack outside of a Lua execution. If one of the functions called raises a
+strong error, the default behaviour is an abort(), which is not acceptable
+while HAProxy is running. The Lua documentation proposes using another
+setjmp/longjmp pair to avoid the abort(): put a setjmp() around the Lua stack
+manipulation, and install an alternative "panic" function which jumps to the
+setjmp() in case of error.
+
+All of these behaviours are very dangerous for stability, and the internal
+HAProxy code must be modified with great care.
+
+To preserve a good behaviour of HAProxy, the yield is mandatory.
+Unfortunately, some HAProxy parts are not suited to resuming an execution
+after a yield: the sample fetches and the sample converters. So, Lua code
+written for these parts of HAProxy must execute quickly and cannot perform
+actions which require a yield, like a TCP connection or a simple sleep.
+
+HAProxy socket object
+---------------------
+
+The HAProxy design is optimized for the data transfers between a client and a
+server, and for processing the many errors which can occur during these
+exchanges. HAProxy is not designed for establishing an additional connection
+to a third-party server.
+
+The solution consists in putting the main stream on pause, waiting for the end
+of the exchanges with the third connection. This is accomplished with signals
+between internal tasks. The following graph shows the HAProxy Lua socket:
+
+
+ +--------------------+
+ | Lua processing |
+ ------------------\ | creates socket | ------------------\
+ incoming request > | and puts the | Outgoing request >
+ ------------------/ | current processing | ------------------/
+ | in pause waiting |
+ | for TCP applet |
+ +-----------------+--+
+ ^ |
+ | |
+ | signal | read / write
+ | | data
+ | |
+ +-------------+---------+ v
+ | HAProxy internal +----------------+
+ | applet send signals | |
+ | when data is received | | -------------------\
+ | or some room is | Attached I/O | Client TCP stream >
+ | available | Buffers | -------------------/
+ +--------------------+--+ |
+ | |
+ +-------------------+
+
+
+A more detailed graph is available in the "doc/internals" directory.
+
+The HAProxy Lua socket uses a full HAProxy session / stream for establishing
+the connection. This mechanism provides all the HAProxy facilities and
+features, like the SSL stack, many socket types, and support for namespaces.
+Technically it supports the PROXY protocol, but there is no way to enable it.
+
+How to compile HAProxy with Lua
+===============================
+
+HAProxy 1.6 requires Lua 5.3. Lua 5.3 offers some features which make the
+integration easy. Lua 5.3 is young, and some distros do not ship it. Luckily,
+Lua is a great product because it does not require exotic dependencies, and
+its build process is really easy.
+
+The compilation process for Linux is easy:
+
+ - download the source tarball
+ wget http://www.lua.org/ftp/lua-5.3.1.tar.gz
+
+ - untar it
+ tar xf lua-5.3.1.tar.gz
+
+ - enter the directory
+ cd lua-5.3.1
+
+ - build the library for linux
+ make linux
+
+ - install it:
+ sudo make INSTALL_TOP=/opt/lua-5.3.1 install
+
+HAProxy builds with your favourite options, plus the following options for
+embedding the Lua script language:
+
+ - download the source tarball
+ wget http://www.haproxy.org/download/1.6/src/haproxy-1.6.2.tar.gz
+
+ - untar it
+ tar xf haproxy-1.6.2.tar.gz
+
+ - enter the directory
+ cd haproxy-1.6.2
+
+ - build HAProxy:
+ make TARGET=linux2628 \
+ USE_DL=1 \
+ USE_LUA=1 \
+ LUA_LIB=/opt/lua-5.3.1/lib \
+ LUA_INC=/opt/lua-5.3.1/include
+
+ - install it:
+ sudo make PREFIX=/opt/haproxy-1.6.2 install
+
+First steps with Lua
+====================
+
+Now, it's time to use Lua in HAProxy.
+
+Start point
+-----------
+
+The HAProxy global directive "lua-load <file>" allows loading a Lua file. This
+is the entry point. The load happens during the configuration parsing, and the
+Lua file is immediately executed.
+
+All the register_*() functions must be called at this time, because they are
+used just after the processing of the global section, in the
+frontend/backend/listen sections.
+
+The simplest "Hello world !" is the following line in a loaded Lua file:
+
+ core.Alert("Hello World !");
+
+It displays a log line during HAProxy startup:
+
+ [alert] 285/083533 (14465) : Hello World !
+
+Default path and libraries
+--------------------------
+
+Lua can embed some libraries, which can be included from different paths. It
+seems that Lua doesn't like subdirectories. In the following example, I try to
+load a compiled library: the first line is Lua code, the second line is an
+strace extract proving that the library was opened, and the next lines are the
+associated error.
+
+ require("luac/concat")
+
+ open("./luac/concat.so", O_RDONLY|O_CLOEXEC) = 4
+
+ [ALERT] 293/175822 (22806) : parsing [commonstats.conf:15] : lua runtime
+ error: error loading module 'luac/concat' from file './luac/concat.so':
+ ./luac/concat.so: undefined symbol: luaopen_luac/concat
+
+Lua tries to load the C symbol 'luaopen_luac/concat'. When Lua tries to open a
+library, it tries to execute the function associated with the symbol
+"luaopen_<libname>".
+
+The "<libname>" part is built using the content of the variable
+"package.cpath" and/or "package.path". The default definition of the
+"package.cpath" variable (on my computer) is:
+
+ /usr/local/lib/lua/5.3/?.so;/usr/local/lib/lua/5.3/loadall.so;./?.so
+
+The "<libname>" is the content which replaces the "?" placeholder. In the
+previous example, it is "luac/concat", and obviously the Lua core tries to
+load the function associated with the symbol "luaopen_luac/concat".
+
+My conclusion is that Lua doesn't support subdirectories. So, to load
+libraries from a subdirectory, the search path must contain the name of this
+subdirectory. The ".so" extension must also disappear from the module name,
+otherwise Lua tries to execute the function associated with the symbol
+"luaopen_concat.so". The following syntax is correct:
+
+ package.cpath = package.cpath .. ";./luac/?.so"
+ require("concat")
+
+First useful example
+--------------------
+
+ core.register_fetches("my-hash", function(txn, salt)
+ return txn.sc:sdbm(salt .. txn.sf:req_fhdr("host") .. txn.sf:path() .. txn.sf:src(), 1)
+ end)
+
+You will see that these 3 lines can generate a lot of explanations :)
+
+The function core.register_fetches() is executed during the processing of the
+global section by the HAProxy configuration parser. A new sample fetch is
+declared with the name "my-hash"; this name is always prefixed with "lua.". So
+this newly declared sample fetch is referenced as "lua.my-hash" in the HAProxy
+configuration file.
+
+The second parameter is an inline-declared anonymous function. Note the
+closing parenthesis after the keyword "end", which ends the function call. The
+first parameter of this anonymous function is "txn", an object of class TXN
+which provides access functions. The second parameter is an arbitrary value
+provided by the HAProxy configuration file. This parameter is optional; the
+developer must check whether it is present.
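A guard for this optional argument could look like the following sketch, based
on the example above:

```lua
core.register_fetches("my-hash", function(txn, salt)
    -- "salt" is nil when the configuration does not provide a parameter
    if salt == nil then
        salt = ""
    end
    return txn.sc:sdbm(salt .. txn.sf:req_fhdr("host") .. txn.sf:path(), 1)
end)
```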
+
+The registered anonymous function is executed when the HAProxy backend or
+frontend configuration references the sample fetch "lua.my-hash".
+
+This example can be written in another style, like below:
+
+ function my_hash(txn, salt)
+ return txn.sc:sdbm(salt .. txn.sf:req_fhdr("host") .. txn.sf:path() .. txn.sf:src(), 1)
+ end
+
+ core.register_fetches("my-hash", my_hash)
+
+This second form is clearer, but the first one is more compact.
+
+The operator ".." is the string concatenation. If one of the two operands is
+not a string (or a number), an error occurs and the execution is immediately
+stopped. This is important to keep in mind for what follows.
+
+Now I write the example on more than one line. It is an easier way to comment
+the code:
+
+ function my_hash(txn, salt)
+ local str = ""
+ str = str .. salt
+ str = str .. txn.sf:req_fhdr("host")
+ str = str .. txn.sf:path()
+ str = str .. txn.sf:src()
+ local result = txn.sc:sdbm(str, 1)
+ return result
+ end
+
+ core.register_fetches("my-hash", my_hash)
+
+local
+~~~~~
+
+The first keyword is "local". This is a really important keyword. You must
+understand that the function "my_hash" will be called for each HAProxy request
+using the declared sample fetch. So, this function can be executed many times in
+parallel.
+
+By default, Lua uses global variables. So in this example, if the variable
+"str" were declared without the keyword "local", it would be shared by all the
+parallel executions of the function, and obviously the content of the requests
+would be mixed up.
+
+This warning is very important. I tried to write useful Lua code, like a
+rewrite of the statistics page, and it is very hard to remember to declare
+each variable as "local".
+
+I guess that this behaviour will be the cause of much trouble on the mailing
+list.
+
+str = str ..
+~~~~~~~~~~~~
+
+Now, a parenthesis about the form "str = str ..". This form allows performing
+string concatenations. Remember that Lua uses a garbage collector, so what
+happens when we do "str = str .. 'another string'" ?
+
+ str = str .. "another string"
+ ^ ^ ^ ^
+ 1 2 3 4
+
+Lua first executes the concatenation operator (3): it allocates memory for the
+resulting string and fills this memory with the concatenation of operands 2
+and 4. Next, it rebinds variable 1; the old content of 1 can now be garbage
+collected. Finally, the new content of 1 is the concatenation.
+
+What is the matter? When we do this operation many times, we consume a lot of
+memory, and the string data is duplicated and moved many times. So, this
+practice is expensive in execution time and memory consumption.
+
+There are easy ways to prevent this behaviour. I guess that a C binding for
+chunk-based concatenation will be available ASAP (it is already written). I
+did some benchmarks, comparing the execution time of 1 000 runs of 1 000
+concatenations of 10 bytes each, written in pure Lua and with a C library. The
+result is 10 times faster in C (1s in Lua, 0.1s in C).
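Even in pure Lua, the standard library already offers a cheaper pattern:
accumulate the pieces in a table and join them once with table.concat() (a
standalone sketch):

```lua
local parts = {}
for i = 1, 1000 do
    -- appending to a table does not copy the already accumulated data
    parts[#parts + 1] = "0123456789"
end
local str = table.concat(parts)   -- a single allocation for the result
print(#str)                       -- 10000
```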
+
+txn
+~~~
+
+txn is an HAProxy object of class TXN. The documentation is available in the
+HAProxy Lua API reference. This class allows access to the native HAProxy
+sample fetches and converters. The txn object contains 2 members dedicated to
+the sample fetches and 2 members dedicated to the converters.
+
+The sample-fetch members are "f" (for sample-Fetch) and "sf" (for String
+sample-Fetch). These two members contain exactly the same functions. All the
+native HAProxy sample fetches are available; obviously, the Lua-registered
+sample fetches are not. Unfortunately, HAProxy sample-fetch names are not
+compatible with Lua function names, so they are renamed. The renaming
+convention is simple: all the '.', '+' and '-' are replaced by '_'. The '.' is
+the object member separator, and '-' and '+' are math operators.
+
+Now that I'm writing this article, I know Lua better than when I wrote the
+sample-fetch wrapper. The original HAProxy sample-fetch names could have been
+kept by using the alternative manner of calling an object member, so the
+sample fetch "req.fhdr" (actually renamed "req_fhdr") could be used like this:
+
+ txn.f["req.fhdr"](txn.f, ...)
+
+However, I think that this form is not elegant.
+
+The "f" collection returns data with a type close to the original returned
+type. A string returns a Lua string, an integer returns a Lua integer, and an
+IP address returns a Lua string. Sometimes the data is not (or not yet)
+available; in this case the Lua nil value is returned.
+
+The "sf" collection guarantees that a string is always returned. If the data
+is not available, an empty string is returned. The main usage of this
+collection is to concatenate the returned sample fetches without testing each
+function.
+
+The parameters of the sample fetches are as described in the HAProxy
+documentation.
+
+The converters work in exactly the same manner as the sample fetches. The only
+difference is that the first parameter is the converter's input element. The
+"c" collection returns a precise result, and the "sc" collection always
+returns a string.
+
+The sample fetches used in the example function are "txn.sf:req_fhdr()",
+"txn.sf:path()" and "txn.sf:src()". The converter used is "txn.sc:sdbm()". The
+same function with the "f" collection of sample fetches and the "c" collection
+of converters would be written like this:
+
+ function my_hash(txn, salt)
+ local str = ""
+ str = str .. salt
+ str = str .. tostring(txn.f:req_fhdr("host"))
+ str = str .. tostring(txn.f:path())
+ str = str .. tostring(txn.f:src())
+ local result = tostring(txn.c:sdbm(str, 1))
+ return result
+ end
+
+ core.register_fetches("my-hash", my_hash)
+
+tostring
+~~~~~~~~
+
+The function tostring ensures that its parameter is returned as a string. If
+the parameter is a table, a thread, or anything else that makes no sense as a
+string, a form like the type name followed by a pointer is returned. For
+example:
+
+ t = {}
+ print(tostring(t))
+
+returns:
+
+ table: 0x15facc0
+
+For objects, if the special function __tostring() is registered in the
+attached metatable, it is called with the table itself as first argument. The
+HAProxy objects return their own class name this way.
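A standalone illustration of "__tostring" in plain Lua:

```lua
local t = setmetatable({}, {
    __tostring = function(self)
        return "MyClass"   -- what tostring() reports for this object
    end
})
print(tostring(t))         -- prints "MyClass" instead of "table: 0x..."
```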
+
+About the converters entry point
+--------------------------------
+
+In HAProxy, a converter is a stateless function that takes data as input and
+returns a transformation of this data as output. In Lua, the behaviour is
+exactly the same.
+
+So, the registered Lua function doesn't take any special parameters, just a
+variable as input containing the value to convert, and it must return the
+converted data.
+
+The data required as input by the Lua converter is a string, so HAProxy always
+provides a string as input. If the native sample fetch is not a string, it is
+converted on a best-effort basis.
+
+The returned value may have any type; it is converted to a sample of the
+nearest HAProxy type. The conversion rules from Lua variables to HAProxy
+samples are:
+
+ Lua | HAProxy sample types
+ -----------+---------------------
+ "number" | "sint"
+ "boolean" | "bool"
+ "string" | "str"
+ "userdata" | "bool" (false)
+ "nil" | "bool" (false)
+ "table" | "bool" (false)
+ "function" | "bool" (false)
+ "thread" | "bool" (false)
+
+The function used for registering a converter is:
+
+ core.register_converters()
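As a sketch, here is a converter which masks its input; the name "mask" is
arbitrary, and the configuration would reference it as "lua.mask":

```lua
core.register_converters("mask", function(value)
    -- "value" is always received as a string (see the input rule above);
    -- the returned Lua string is mapped to the HAProxy "str" sample type
    return string.rep("*", #value)
end)
```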
+
+The task entry point
+--------------------
+
+The function "core.register_task(fcn)" executes the function "fcn" once, when
+the scheduler starts. This mechanism is used for executing background tasks.
+For example, you can use this functionality to periodically check the health
+of another service and give the result to each proxy needing it.
+
+The task is started once; if you want periodic actions, you can use
+"core.sleep()" or "core.msleep()" to wait for the next run.
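A periodic background task might therefore be sketched as follows (the logged
message and the 10-second period are illustrative):

```lua
core.register_task(function()
    while true do
        -- do the periodic work here, e.g. poll another service
        core.Alert("periodic check executed")
        core.sleep(10)   -- yield, and ask to be woken up 10 seconds later
    end
end)
```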
+
+Storing Lua variables between functions in the same session
+-----------------------------------------------------------
+
+All the functions registered as actions or sample fetches can share a Lua
+context. This context is a memory zone in the stack. Sample fetches and
+actions use the same stack, so both can access the context.
+
+The context is accessible via the functions get_priv and set_priv provided by
+an object of class TXN. The value given to set_priv replaces the currently
+stored value. This value can be a table, which is useful when a lot of data
+must be shared.
+
+If the stored value is a table, you can add or remove entries from the table
+without storing the table again. Maybe an example will make this clearer:
+
+ local t = {}
+ txn:set_priv(t)
+
+ t["entry1"] = "foo"
+ t["entry2"] = "bar"
+
+ -- this will display "foo"
+ print(txn:get_priv()["entry1"])
+
+HTTP actions
+============
+
+ ... coming soon ...
+
+Lua is fast, but my service requires more execution speed
+=========================================================
+
+We can write C modules for Lua. These modules can run with HAProxy as long as
+they are compliant with the HAProxy Lua version. A simple example is the
+"concat" module.
+
+It is very easy to write and compile a C Lua library; however, I couldn't find
+documentation about this process, so this chapter is a quick howto.
+
+The entry point
+---------------
+
+The entry point is called "luaopen_<name>", where <name> is the name of the
+".so" file. A hello world looks like this:
+
+ #include <stdio.h>
+ #include <lua.h>
+ #include <lauxlib.h>
+
+ int luaopen_mymod(lua_State *L)
+ {
+ printf("Hello world\n");
+ return 0;
+ }
+
+The build
+---------
+
+The compilation of the source file requires the Lua "include" directory.
+Compiling and linking the shared object requires the -fPIC option. That's
+all.
+
+ cc -I/opt/lua/include -fPIC -shared -o mymod.so mymod.c
+
+Usage
+-----
+
+You can load this module with the following Lua syntax:
+
+ require("mymod")
+
+When you start HAProxy, this module just prints "Hello world" when it is
+loaded. Please remember that HAProxy doesn't allow blocking methods, so if you
+write a function doing filesystem access or synchronous network access, the
+whole HAProxy process will stall.
--- /dev/null
+ ------------------------
+ HAProxy Management Guide
+ ------------------------
+ version 1.6
+
+
+This document describes how to start, stop, manage, and troubleshoot HAProxy,
+as well as some known limitations and traps to avoid. It does not describe how
+to configure it (for this please read configuration.txt).
+
+Note to documentation contributors :
+ This document is formatted with 80 columns per line, with even number of
+ spaces for indentation and without tabs. Please follow these rules strictly
+ so that it remains easily printable everywhere. If you add sections, please
+ update the summary below for easier searching.
+
+
+Summary
+-------
+
+1. Prerequisites
+2. Quick reminder about HAProxy's architecture
+3. Starting HAProxy
+4. Stopping and restarting HAProxy
+5. File-descriptor limitations
+6. Memory management
+7. CPU usage
+8. Logging
+9. Statistics and monitoring
+9.1. CSV format
+9.2. Unix Socket commands
+10. Tricks for easier configuration management
+11. Well-known traps to avoid
+12. Debugging and performance issues
+13. Security considerations
+
+
+1. Prerequisites
+----------------
+
+In this document it is assumed that the reader has sufficient administration
+skills on a UNIX-like operating system, uses the shell on a daily basis and is
+familiar with troubleshooting utilities such as strace and tcpdump.
+
+
+2. Quick reminder about HAProxy's architecture
+----------------------------------------------
+
+HAProxy is a single-threaded, event-driven, non-blocking daemon. This means it
+uses event multiplexing to schedule all of its activities instead of relying on
+the system to schedule between multiple activities. Most of the time it runs as
+a single process, so the output of "ps aux" on a system will report only one
+"haproxy" process, unless a soft reload is in progress and an older process is
+finishing its job in parallel to the new one. It is thus always easy to trace
+its activity using the strace utility.
+
+HAProxy is designed to isolate itself into a chroot jail during startup, where
+it cannot perform any file-system access at all. This is also true for the
+libraries it depends on (eg: libc, libssl, etc). The immediate effect is that
+a running process will not be able to reload a configuration file to apply
+changes, instead a new process will be started using the updated configuration
+file. Some other less obvious effects are that some timezone files or resolver
+files the libc might attempt to access at run time will not be found, though
+this should generally not happen as they're not needed after startup. A nice
+consequence of this principle is that the HAProxy process is totally stateless,
+and no cleanup is needed after it's killed, so any killing method that works
+will do the right thing.
+
+HAProxy doesn't write log files, but it relies on the standard syslog protocol
+to send logs to a remote server (which is often located on the same system).
+
+HAProxy uses an internal clock to enforce timeouts; it is derived from the
+system's time, but unexpected drift is corrected. This is done by limiting
+the time spent waiting in poll() for an event, and measuring the time it really
+took. In practice it never waits more than one second. This explains why, when
+running strace over a completely idle process, periodic calls to poll() (or any
+of its variants) surrounded by two gettimeofday() calls are noticed. They are
+normal, completely harmless and so cheap that the load they imply is totally
+undetectable at the system scale, so there's nothing abnormal there. Example :
+
+ 16:35:40.002320 gettimeofday({1442759740, 2605}, NULL) = 0
+ 16:35:40.002942 epoll_wait(0, {}, 200, 1000) = 0
+ 16:35:41.007542 gettimeofday({1442759741, 7641}, NULL) = 0
+ 16:35:41.007998 gettimeofday({1442759741, 8114}, NULL) = 0
+ 16:35:41.008391 epoll_wait(0, {}, 200, 1000) = 0
+ 16:35:42.011313 gettimeofday({1442759742, 11411}, NULL) = 0
+
+HAProxy is a TCP proxy, not a router. It deals with established connections that
+have been validated by the kernel, and not with packets of any form nor with
+sockets in other states (eg: no SYN_RECV nor TIME_WAIT), though their existence
+may prevent it from binding a port. It relies on the system to accept incoming
+connections and to initiate outgoing connections. An immediate effect of this is
+that there is no relation between packets observed on the two sides of a
+forwarded connection, which can be of different size, numbers and even family.
+Since a connection may only be accepted from a socket in LISTEN state, all the
+sockets it is listening to are necessarily visible using the "netstat" utility
+to show listening sockets. Example :
+
+ # netstat -ltnp
+ Active Internet connections (only servers)
+ Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
+ tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1629/sshd
+ tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 2847/haproxy
+ tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 2847/haproxy
+
+
+3. Starting HAProxy
+-------------------
+
+HAProxy is started by invoking the "haproxy" program with a number of arguments
+passed on the command line. The actual syntax is :
+
+ $ haproxy [<options>]*
+
+where [<options>]* is any number of options. An option always starts with '-'
+followed by one or more letters, and possibly followed by one or multiple extra
+arguments. Without any option, HAProxy displays the help page with a reminder
+about supported options. Available options may vary slightly based on the
+operating system. A fair number of these options overlap with an equivalent one
+in the "global" section. In this case, the command line always has precedence
+over the configuration file, so that the command line can be used to quickly
+enforce some settings without touching the configuration files. The current
+list of options is :
+
+ -- <cfgfile>* : all the arguments following "--" are paths to configuration
+ files to be loaded and processed in the declaration order. It is mostly
+ useful when relying on the shell to load many files that are numerically
+ ordered. See also "-f". The difference between "--" and "-f" is that one
+ "-f" must be placed before each file name, while a single "--" is needed
+ before all file names. Both options can be used together, the command line
+ ordering still applies. When more than one file is specified, each file
+ must start on a section boundary, so the first keyword of each file must be
+ one of "global", "defaults", "peers", "listen", "frontend", "backend", and
+ so on. A file cannot contain just a server list for example.
+
+ -f <cfgfile> : adds <cfgfile> to the list of configuration files to be
+ loaded. Configuration files are loaded and processed in their declaration
+ order. This option may be specified multiple times to load multiple files.
+ See also "--". The difference between "--" and "-f" is that one "-f" must
+ be placed before each file name, while a single "--" is needed before all
+ file names. Both options can be used together, the command line ordering
+ still applies. When more than one file is specified, each file must start
+ on a section boundary, so the first keyword of each file must be one of
+ "global", "defaults", "peers", "listen", "frontend", "backend", and so
+ on. A file cannot contain just a server list for example.
+
+ -C <dir> : changes to directory <dir> before loading configuration
+ files. This is useful when using relative paths. Beware when using
+ wildcards after "--", as they are in fact expanded by the shell before
+ haproxy even starts.
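+
+ As a quick illustration of that caveat: the shell, not haproxy, expands
+ the wildcard, and does so in sorted order before the command runs. The
+ sketch below demonstrates this with file names invented for the example :

```shell
# The shell expands the glob itself, before haproxy runs (and thus before
# any chdir performed by "-C"). File names here are made up for the demo.
tmp=$(mktemp -d)
touch "$tmp/10-first.cfg" "$tmp/20-second.cfg"
names=$(cd "$tmp" && echo *.cfg)   # expansion happens here, in sorted order
echo "$names"
rm -r "$tmp"
```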
+
+ -D : start as a daemon. The process detaches from the current terminal after
+ forking, and errors are not reported anymore in the terminal. It is
+ equivalent to the "daemon" keyword in the "global" section of the
+ configuration. It is recommended to always force it in any init script so
+ that a faulty configuration doesn't prevent the system from booting.
+
+ -Ds : work in systemd mode. Only used by the systemd wrapper.
+
+ -L <name> : change the local peer name to <name>, which defaults to the local
+ hostname. This is used only with peers replication.
+
+ -N <limit> : sets the default per-proxy maxconn to <limit> instead of the
+ builtin default value (usually 2000). Only useful for debugging.
+
+ -V : enable verbose mode (disables quiet mode). Reverts the effect of "-q" or
+ "quiet".
+
+ -c : only performs a check of the configuration files and exits before trying
+ to bind. The exit status is zero if everything is OK, or non-zero if an
+ error is encountered.
+
+ -d : enable debug mode. This disables daemon mode, forces the process to stay
+ in foreground and to show incoming and outgoing events. It is equivalent to
+ the "global" section's "debug" keyword. It must never be used in an init
+ script.
+
+ -dG : disable use of getaddrinfo() to resolve host names into addresses. It
+ can be used when suspecting that getaddrinfo() doesn't work as expected.
+ This option was made available because many bogus implementations of
+ getaddrinfo() exist on various systems and cause anomalies that are
+ difficult to troubleshoot.
+
+ -dM[<byte>] : forces memory poisoning, which means that each and every
+ memory region allocated with malloc() or pool_alloc2() will be filled with
+ <byte> before being passed to the caller. When <byte> is not specified, it
+ defaults to 0x50 ('P'). While this slightly slows down operations, it is
+ useful to reliably trigger issues resulting from missing initializations in
+ the code that cause random crashes. Note that -dM0 has the effect of
+ turning any malloc() into a calloc(). In any case if a bug appears or
+ disappears when using this option it means there is a bug in haproxy, so
+ please report it.
+
+ -dS : disable use of the splice() system call. It is equivalent to the
+ "global" section's "nosplice" keyword. This may be used when splice() is
+ suspected to behave improperly or to cause performance issues, or when
+ using strace to see the forwarded data (which do not appear when using
+ splice()).
+
+ -dV : disable SSL verify on the server side. It is equivalent to having
+ "ssl-server-verify none" in the "global" section. This is useful when
+ trying to reproduce production issues out of the production
+ environment. Never use this in an init script as it degrades SSL security
+ to the servers.
+
+ -db : disable background mode and multi-process mode. The process remains in
+ foreground. It is mainly used during development or during small tests, as
+ Ctrl-C is enough to stop the process. Never use it in an init script.
+
+ -de : disable the use of the "epoll" poller. It is equivalent to the "global"
+ section's keyword "noepoll". It is mostly useful when suspecting a bug
+ related to this poller. On systems supporting epoll, the fallback will
+ generally be the "poll" poller.
+
+ -dk : disable the use of the "kqueue" poller. It is equivalent to the
+ "global" section's keyword "nokqueue". It is mostly useful when suspecting
+ a bug related to this poller. On systems supporting kqueue, the fallback
+ will generally be the "poll" poller.
+
+ -dp : disable the use of the "poll" poller. It is equivalent to the "global"
+ section's keyword "nopoll". It is mostly useful when suspecting a bug
+ related to this poller. On systems supporting poll, the fallback will
+ generally be the "select" poller, which cannot be disabled and is limited
+ to 1024 file descriptors.
+
+ -m <limit> : limit the total allocatable memory to <limit> megabytes across
+ all processes. This may cause some connection refusals or some slowdowns
+ depending on the amount of memory needed for normal operations. This is
+ mostly used to force the processes to work in a constrained resource usage
+ scenario. It is important to note that the memory is not shared between
+ processes, so in a multi-process scenario, this value is first divided by
+ global.nbproc before forking.
+
+ -n <limit> : limits the per-process connection limit to <limit>. This is
+ equivalent to the global section's keyword "maxconn". It has precedence
+ over this keyword. This may be used to quickly force lower limits to avoid
+ a service outage on systems where resource limits are too low.
+
+ -p <file> : write all processes' pids into <file> during startup. This is
+ equivalent to the "global" section's keyword "pidfile". The file is opened
+ before entering the chroot jail, and after doing the chdir() implied by
+ "-C". Each pid appears on its own line.
+
+ -q : set "quiet" mode. This disables some messages during the configuration
+ parsing and during startup. It can be used in combination with "-c" to
+ just check if a configuration file is valid or not.
+
+ -sf <pid>* : send the "finish" signal (SIGUSR1) to older processes after boot
+ completion to ask them to finish what they are doing and to leave. <pid>
+ is a list of pids to signal (one per argument). The list ends on any
+ option starting with a "-". It is not a problem if the list of pids is
+ empty, so that it can be built on the fly based on the result of a command
+ like "pidof" or "pgrep".
+
+ -st <pid>* : send the "terminate" signal (SIGTERM) to older processes after
+ boot completion to terminate them immediately without finishing what they
+ were doing. <pid> is a list of pids to signal (one per argument). The list
+ ends on any option starting with a "-". It is not a problem if the list
+ of pids is empty, so that it can be built on the fly based on the result of
+ a command like "pidof" or "pgrep".
+
+ -v : report the version and build date.
+
+ -vv : display the version, build options, libraries versions and usable
+ pollers. This output is systematically requested when filing a bug report.
+
+A safe way to start HAProxy from an init file consists in forcing the daemon
+mode, storing existing pids to a pid file and using this pid file to notify
+older processes to finish before leaving :
+
+ haproxy -f /etc/haproxy.cfg \
+ -D -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
+
+When the configuration is split into a few specific files (eg: tcp vs http),
+it is recommended to use the "-f" option :
+
+ haproxy -f /etc/haproxy/global.cfg -f /etc/haproxy/stats.cfg \
+ -f /etc/haproxy/default-tcp.cfg -f /etc/haproxy/tcp.cfg \
+ -f /etc/haproxy/default-http.cfg -f /etc/haproxy/http.cfg \
+ -D -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
+
+When an unknown number of files is expected, such as customer-specific files,
+it is recommended to assign them a name starting with a fixed-size sequence
+number and to use "--" to load them, possibly after loading some defaults :
+
+ haproxy -f /etc/haproxy/global.cfg -f /etc/haproxy/stats.cfg \
+ -f /etc/haproxy/default-tcp.cfg -f /etc/haproxy/tcp.cfg \
+ -f /etc/haproxy/default-http.cfg -f /etc/haproxy/http.cfg \
+ -D -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid) \
+ -f /etc/haproxy/default-customers.cfg -- /etc/haproxy/customers/*
+
+Sometimes a failure to start may happen for whatever reason. Then it is
+important to verify if the version of HAProxy you are invoking is the expected
+version and if it supports the features you are expecting (eg: SSL, PCRE,
+compression, Lua, etc). This can be verified using "haproxy -vv". Some
+important information such as certain build options, the target system and
+the versions of the libraries being used are reported there. It is also what
+you will systematically be asked for when posting a bug report :
+
+ $ haproxy -vv
+ HA-Proxy version 1.6-dev7-a088d3-4 2015/10/08
+ Copyright 2000-2015 Willy Tarreau <willy@haproxy.org>
+
+ Build options :
+ TARGET = linux2628
+ CPU = generic
+ CC = gcc
+ CFLAGS = -pg -O0 -g -fno-strict-aliasing -Wdeclaration-after-statement \
+ -DBUFSIZE=8030 -DMAXREWRITE=1030 -DSO_MARK=36 -DTCP_REPAIR=19
+ OPTIONS = USE_ZLIB=1 USE_DLMALLOC=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1
+
+ Default settings :
+ maxconn = 2000, bufsize = 8030, maxrewrite = 1030, maxpollevents = 200
+
+ Encrypted password support via crypt(3): yes
+ Built with zlib version : 1.2.6
+ Compression algorithms supported : identity("identity"), deflate("deflate"), \
+ raw-deflate("deflate"), gzip("gzip")
+ Built with OpenSSL version : OpenSSL 1.0.1o 12 Jun 2015
+ Running on OpenSSL version : OpenSSL 1.0.1o 12 Jun 2015
+ OpenSSL library supports TLS extensions : yes
+ OpenSSL library supports SNI : yes
+ OpenSSL library supports prefer-server-ciphers : yes
+ Built with PCRE version : 8.12 2011-01-15
+ PCRE library supports JIT : no (USE_PCRE_JIT not set)
+ Built with Lua version : Lua 5.3.1
+ Built with transparent proxy support using: IP_TRANSPARENT IP_FREEBIND
+
+ Available polling systems :
+ epoll : pref=300, test result OK
+ poll : pref=200, test result OK
+ select : pref=150, test result OK
+ Total: 3 (3 usable), will use epoll.
+
+The relevant information that many non-developer users can verify here is :
+ - the version : 1.6-dev7-a088d3-4 above means the code is currently at commit
+ ID "a088d3" which is the 4th one after official version "1.6-dev7".
+ Version 1.6-dev7 would show as "1.6-dev7-8c1ad7". What matters here is in
+ fact "1.6-dev7". This is the 7th development version of what will become
+ version 1.6 in the future. A development version is not suitable for use in
+ production (unless you know exactly what you are doing). A stable version
+ will show as a 3-numbers version, such as "1.5.14-16f863", indicating the
+ 14th level of fix on top of version 1.5. This is a production-ready version.
+
+ - the release date : 2015/10/08. It is represented in the universal
+ year/month/day format. Here this means October 8th, 2015. Given that stable
+ releases are issued every few months (1-2 months at the beginning, sometimes
+ 6 months once the product becomes very stable), if you're seeing an old date
+ here, it means you're probably affected by a number of bugs or security
+ issues that have since been fixed and that it might be worth checking on the
+ official site.
+
+ - build options : they are relevant to people who build their packages
+ themselves, they can explain why things are not behaving as expected. For
+ example the development version above was built for Linux 2.6.28 or later,
+ targeting a generic CPU (no CPU-specific optimizations), and lacks any
+ code optimization (-O0) so it will perform poorly in terms of performance.
+
+ - libraries versions : zlib version is reported as found in the library
+ itself. In general zlib is considered a very stable product and upgrades
+ are almost never needed. OpenSSL reports two versions, the version used at
+ build time and the one being used, as found on the system. These ones may
+ differ by the last letter but never by the numbers. The build date is also
+ reported because most OpenSSL bugs are security issues and need to be taken
+ seriously, so this library absolutely needs to be kept up to date. Seeing a
+ 4-months old version here is highly suspicious and indeed an update was
+ missed. PCRE provides very fast regular expressions and is highly
+ recommended. Certain of its extensions such as JIT are not present in all
+ versions and are still young, so some people prefer not to build with them,
+ which is why the build status is reported as well. Regarding the Lua
+ scripting language, HAProxy expects version 5.3, which is very young since
+ it was released shortly before HAProxy 1.6. It is important to check
+ on the Lua web site if some fixes are proposed for this branch.
+
+ - Available polling systems will affect the process's scalability when
+ dealing with more than about one thousand concurrent connections. These
+ ones are only available when the correct system was indicated in the TARGET
+ variable during the build. The "epoll" mechanism is highly recommended on
+ Linux, and the kqueue mechanism is highly recommended on BSD. Lacking them
+ will result in poll() or even select() being used, causing a high CPU usage
+ when dealing with a lot of connections.
+
+
+4. Stopping and restarting HAProxy
+----------------------------------
+
+HAProxy supports a graceful and a hard stop. The hard stop is simple, when the
+SIGTERM signal is sent to the haproxy process, it immediately quits and all
+established connections are closed. The graceful stop is triggered when the
+SIGUSR1 signal is sent to the haproxy process. It consists in only unbinding
+from listening ports while continuing to process existing connections until
+they close. Once the last connection is closed, the process leaves.
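+
+The difference between the two stops can be experimented with by hand. The
+sketch below uses a stand-in shell process rather than a real haproxy; its
+trap merely simulates haproxy's SIGUSR1 handling :

```shell
# Stand-in for an haproxy process: on SIGUSR1 it "finishes its job" and
# leaves cleanly. A real setup would signal the pid found in the pid file.
sh -c 'trap "echo graceful stop; exit 0" USR1; while :; do sleep 1; done' &
pid=$!
sleep 1               # give the stand-in time to install its trap
kill -USR1 "$pid"     # graceful stop request (SIGTERM would be the hard stop)
wait "$pid"
status=$?             # 0 here: the process left on its own terms
```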
+
+The hard stop method is used for the "stop" or "restart" actions of the service
+management script. The graceful stop is used for the "reload" action which
+tries to seamlessly reload a new configuration in a new process.
+
+Both of these signals may be sent by the new haproxy process itself during a
+reload or restart, so that they are sent at the latest possible moment and only
+if absolutely required. This is what is performed by the "-st" (hard) and "-sf"
+(graceful) options respectively.
+
+To understand better how these signals are used, it is important to understand
+the whole restart mechanism.
+
+First, an existing haproxy process is running. The administrator uses a system
+specific command such as "/etc/init.d/haproxy reload" to indicate he wants to
+take the new configuration file into effect. What happens then is the following.
+First, the service script (/etc/init.d/haproxy or equivalent) will verify that
+the configuration file parses correctly using "haproxy -c". After that it will
+try to start haproxy with this configuration file, using "-st" or "-sf".
+
+Then HAProxy tries to bind to all listening ports. If some fatal errors happen
+(eg: address not present on the system, permission denied), the process quits
+with an error. If a socket binding fails because a port is already in use, then
+the process will first send a SIGTTOU signal to all the pids specified in the
+"-st" or "-sf" pid list. This is what is called the "pause" signal. It instructs
+all existing haproxy processes to temporarily stop listening to their ports so
+that the new process can try to bind again. During this time, the old process
+continues to process existing connections. If the binding still fails (because
+for example a port is shared with another daemon), then the new process sends a
+SIGTTIN signal to the old processes to instruct them to resume operations just
+as if nothing happened. The old processes will then restart listening to the
+ports and continue to accept connections. Note that this mechanism is system
+dependent and some operating systems may not support it in multi-process mode.
+
+If the new process manages to bind correctly to all ports, then it sends either
+the SIGTERM (hard stop in case of "-st") or the SIGUSR1 (graceful stop in case
+of "-sf") to all processes to notify them that it is now in charge of operations
+and that the old processes will have to leave, either immediately or once they
+have finished their job.
+
+It is important to note that during this timeframe, there are two small windows
+of a few milliseconds each where it is possible that a few connection failures
+will be noticed during high loads. Typically observed failure rates are around
+1 failure during a reload operation every 10000 new connections per second,
+which means that a heavily loaded site running at 30000 new connections per
+second may see about 3 failed connections upon every reload. The two situations
+where this happens are :
+
+ - if the new process fails to bind due to the presence of the old process,
+ it will first have to go through the SIGTTOU+SIGTTIN sequence, which
+ typically lasts about one millisecond for a few tens of frontends, and
+ during which some ports will not be bound to the old process and not yet
+ bound to the new one. HAProxy works around this on systems that support the
+ SO_REUSEPORT socket option, as it allows the new process to bind without
+ first asking the old one to unbind. Most BSD systems have been supporting
+ this almost forever. Linux has been supporting this in version 2.0 and
+ dropped it around 2.2, but some patches were floating around by then. It
+ was reintroduced in kernel 3.9, so if you are observing a connection
+ failure rate above the one mentioned above, please ensure that your kernel
+ is 3.9 or newer, or that relevant patches were backported to your kernel
+ (less likely).
+
+ - when the old processes close the listening ports, the kernel may not always
+ redistribute any pending connection that was remaining in the socket's
+ backlog. Under high loads, a SYN packet may arrive just before the socket
+ is closed, and will lead to an RST packet being sent to the client. In some
+ critical environments where even one drop is not acceptable, these ones are
+ sometimes dealt with using firewall rules to block SYN packets during the
+ reload, forcing the client to retransmit. This is totally system-dependent,
+ as some systems might be able to visit other listening queues and avoid
+ this RST. A second case concerns the ACK from the client on a local socket
+ that was in SYN_RECV state just before the close. This ACK will lead to an
+ RST packet while the haproxy process is still not aware of it. This one is
+ harder to get rid of, though the firewall filtering rules mentioned above
+ will work well if applied one second or so before restarting the process.
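+
+As a sketch only, such a firewall-assisted reload could look like the
+following on Linux. The port number, the reload command and the use of
+iptables itself are assumptions to adapt to the local setup, and these
+commands require root privileges :

```
iptables -I INPUT -p tcp --dport 443 --syn -j DROP   # hold new connections
sleep 1                            # let in-flight handshakes complete
/etc/init.d/haproxy reload         # or the local service manager's action
sleep 1
iptables -D INPUT -p tcp --dport 443 --syn -j DROP   # accept SYNs again
```

+Clients whose SYN was dropped simply retransmit it after a short timeout
+instead of receiving an RST.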
+
+For the vast majority of users, such drops will never ever happen since they
+don't have enough load to trigger the race conditions. And for most high traffic
+users, the failure rate is still fairly within the noise margin provided that at
+least SO_REUSEPORT is properly supported on their systems.
+
+
+5. File-descriptor limitations
+------------------------------
+
+In order to ensure that all incoming connections will successfully be served,
+HAProxy computes at load time the total number of file descriptors that will be
+needed during the process's life. A regular Unix process is generally granted
+1024 file descriptors by default, and a privileged process can raise this limit
+itself. This is one reason for starting HAProxy as root and letting it adjust
+the limit. The default limit of 1024 file descriptors roughly allows about 500
+concurrent connections to be processed. The computation is based on the global
+maxconn parameter which limits the total number of connections per process, the
+number of listeners, the number of servers which have a health check enabled,
+the agent checks, the peers, the loggers and possibly a few other technical
+requirements. A simple rough estimate of this number consists in simply
+doubling the maxconn value and adding a few tens to get the approximate number
+of file descriptors needed.
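+
+The rough estimate above can be sketched as follows; the maxconn value and
+the fixed overhead are assumptions chosen for the example :

```shell
# Rough estimate: one fd per connection on each side (client and server),
# plus a small fixed overhead for listeners, checks, logs and internal use.
maxconn=2000     # assumed global maxconn (the usual built-in default)
overhead=50      # the "few tens" of extra descriptors, assumed here
fd_estimate=$((2 * maxconn + overhead))
echo "needed file descriptors: $fd_estimate"
```

+With the default maxconn of 2000 this gives 4050, well above the usual 1024
+default limit, which is why haproxy raises its own limit at startup.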
+
+Originally HAProxy did not know how to compute this value, and it was necessary
+to pass the value using the "ulimit-n" setting in the global section. This
+explains why even today a lot of configurations are seen with this setting
+present. Unfortunately it was often miscalculated resulting in connection
+failures when approaching maxconn instead of throttling incoming connections
+while waiting for the needed resources. For this reason it is important to
+remove any vestigial "ulimit-n" setting that can remain from very old versions.
+
+Raising the number of file descriptors to accept even moderate loads is
+mandatory but comes with some OS-specific adjustments. First, the select()
+polling system is limited to 1024 file descriptors. In fact on Linux it used
+to be capable of handling more, but since certain OSes ship with excessively
+restrictive SELinux policies forbidding the use of select() with more than
+1024 file descriptors, HAProxy now refuses to start in this case in order to
+avoid any issue at run time. On all supported operating systems, poll() is
+available and will not suffer from this limitation. It is automatically picked
+so there is nothing to do to get a working configuration. But poll becomes
+very slow when the number of file descriptors increases. While HAProxy does its
+best to limit this performance impact (eg: via the use of the internal file
+descriptor cache and batched processing), a good rule of thumb is that using
+poll() with more than a thousand concurrent connections will use a lot of CPU.
+
+For Linux systems based on kernels 2.6 and above, the epoll() system call will
+be used. It's a much more scalable mechanism relying on callbacks in the kernel
+that guarantee a constant wake up time regardless of the number of registered
+monitored file descriptors. It is automatically used where detected, provided
+that HAProxy had been built for one of the Linux flavors. Its presence and
+support can be verified using "haproxy -vv".
+
+For BSD systems which support it, kqueue() is available as an alternative. It
+is much faster than poll() and even slightly faster than epoll() thanks to its
+batched handling of changes. At least FreeBSD and OpenBSD support it. Just like
+with Linux's epoll(), its support and availability are reported in the output
+of "haproxy -vv".
+
+Having a good poller is one thing, but it is mandatory that the process can
+reach the limits. When HAProxy starts, it immediately sets the new process's
+file descriptor limits and verifies if it succeeds. In case of failure, it
+reports it before forking so that the administrator can see the problem. As
+long as the process is started as root, there should be no reason for this
+setting to fail. However, it can fail if the process is started by an
+unprivileged user. If there is a compelling reason for *not* starting haproxy
+as root (eg: started by end users, or by a per-application account), then the
+file descriptor limit can be raised by the system administrator for this
+specific user. The effectiveness of the setting can be verified by issuing
+"ulimit -n" from the user's command line. It should reflect the new limit.
+
+Warning: when an unprivileged user's limits are changed in this user's account,
+it is fairly common that these values are only considered when the user logs in
+and not in scripts run at system boot time or from crontabs. This is
+totally dependent on the operating system, so remember to check "ulimit -n"
+before starting haproxy when running this way. The general advice is never to
+start haproxy as an unprivileged user for production purposes. Another good
+reason is that it prevents haproxy from enabling some security protections.
+
+Once it is certain that the system will allow the haproxy process to use the
+requested number of file descriptors, two new system-specific limits may be
+encountered. The first one is the system-wide file descriptor limit, which is
+the total number of file descriptors opened on the system, covering all
+processes. When this limit is reached, accept() or socket() will typically
+return ENFILE. The second one is the per-process hard limit on the number of
+file descriptors, it prevents setrlimit() from being set higher. Both are very
+dependent on the operating system. On Linux, the system limit is set at boot
+based on the amount of memory. It can be changed with the "fs.file-max" sysctl.
+And the per-process hard limit is set to 1048576 by default, but it can be
+changed using the "fs.nr_open" sysctl.
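+
+On Linux, both limits can be raised persistently with a sysctl snippet such
+as the following; the file name and the values are purely illustrative and
+must be sized against the local memory and workload :

```
# /etc/sysctl.d/90-haproxy.conf (hypothetical file name)
fs.file-max = 2097152    # system-wide limit on open file descriptors
fs.nr_open  = 2097152    # per-process hard limit, the ceiling for setrlimit()
```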
+
+File descriptor limitations may be observed on a running process when they are
+set too low. The strace utility will report that accept() and socket() return
+"-1 EMFILE" when the process's limits have been reached. In this case, simply
+raising the "ulimit-n" value (or removing it) will solve the problem. If these
+system calls return "-1 ENFILE" then it means that the kernel's limits have
+been reached and that something must be done on a system-wide parameter. Such
+troubles must absolutely be addressed, as they result in high CPU usage (when
+accept() fails) and failed connections that are generally visible to the user.
+One solution also consists in lowering the global maxconn value to enforce
+serialization, and possibly to disable HTTP keep-alive to force connections
+to be released and reused faster.
+
+
+6. Memory management
+--------------------
+
+HAProxy uses a simple and fast pool-based memory management. Since it relies on
+a small number of different object types, it's much more efficient to pick new
+objects from a pool which already contains objects of the appropriate size than
+to call malloc() for each different size. The pools are organized as a stack or
+LIFO, so that newly allocated objects are taken from recently released objects
+still hot in the CPU caches. Pools of similar sizes are merged together, in
+order to limit memory fragmentation.
+
+By default, since the focus is set on performance, each released object is put
+back into the pool it came from, and allocated objects are never freed since
+they are expected to be reused very soon.
+
+On the CLI, it is possible to check how memory is being used in pools thanks to
+the "show pools" command :
+
+ > show pools
+ Dumping pools usage. Use SIGQUIT to flush them.
+ - Pool pipe (32 bytes) : 5 allocated (160 bytes), 5 used, 3 users [SHARED]
+ - Pool hlua_com (48 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
+ - Pool vars (64 bytes) : 0 allocated (0 bytes), 0 used, 2 users [SHARED]
+ - Pool task (112 bytes) : 5 allocated (560 bytes), 5 used, 1 users [SHARED]
+ - Pool session (128 bytes) : 1 allocated (128 bytes), 1 used, 2 users [SHARED]
+ - Pool http_txn (272 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
+ - Pool connection (352 bytes) : 2 allocated (704 bytes), 2 used, 1 users [SHARED]
+ - Pool hdr_idx (416 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
+ - Pool stream (864 bytes) : 1 allocated (864 bytes), 1 used, 1 users [SHARED]
+ - Pool requri (1024 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
+ - Pool buffer (8064 bytes) : 3 allocated (24192 bytes), 2 used, 1 users [SHARED]
+ Total: 11 pools, 26608 bytes allocated, 18544 used.
+
+The pool name is only indicative, it's the name of the first object type using
+this pool. The size in parentheses is the object size for objects in this pool.
+Object sizes are always rounded up to the closest multiple of 16 bytes. The
+number of objects currently allocated and the equivalent number of bytes is
+reported so that it is easy to know which pool is responsible for the highest
+memory usage. The number of objects currently in use is reported as well in the
+"used" field. The difference between "allocated" and "used" corresponds to the
+objects that have been freed and are available for immediate use.
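+
+For instance, the figures from the dump above can be checked as follows;
+the numbers are simply copied from that example output :

```shell
# Totals from the example "show pools" dump.
allocated=26608
used=18544
# The difference is freed objects kept in the pools for immediate reuse;
# here it corresponds to one spare 8064-byte buffer (3 allocated, 2 used).
echo "available for reuse: $((allocated - used)) bytes"
```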
+
+It is possible to limit the amount of memory allocated per process using the
+"-m" command line option, followed by a number of megabytes. It covers all of
+the process's addressable space, so that includes memory used by some libraries
+as well as the stack, but it is a reliable limit when building a resource
+constrained system. It works the same way as "ulimit -v" on systems which have
+it, or "ulimit -d" for the other ones.
+
+If a memory allocation fails due to the memory limit being reached or because
+the system doesn't have enough memory, then haproxy will first start to
+free all available objects from all pools before attempting to allocate memory
+again. This mechanism of releasing unused memory can be triggered by sending
+the signal SIGQUIT to the haproxy process. When doing so, the pools state prior
+to the flush will also be reported to stderr when the process runs in
+foreground.
+
+During a reload operation, the old process, once switched to the graceful
+stop state, automatically performs some flushes after releasing any
+connection, so that all possible memory is released to save it for the new
+process.
+
+
+7. CPU usage
+------------
+
+HAProxy normally spends most of its time in the system and a smaller part in
+userland. A finely tuned 3.5 GHz CPU can sustain a rate of about 80000
+end-to-end connection setups and closes per second at 100% CPU on a single
+core. When one core is saturated, typical figures are :
+ - 95% system, 5% user for long TCP connections or large HTTP objects
+ - 85% system and 15% user for short TCP connections or small HTTP objects in
+ close mode
+ - 70% system and 30% user for small HTTP objects in keep-alive mode
+
+The amount of rule processing and regular expression matching will increase
+the userland part. The presence of firewall rules, connection tracking and
+complex routing tables in the system will instead increase the system part.
+
+On most systems, the CPU time observed during network transfers can be cut in 4
+parts :
+ - the interrupt part, which concerns all the processing performed upon I/O
+ receipt, before the target process is even known. Typically Rx packets are
+ accounted for in interrupt. On some systems such as Linux where interrupt
+ processing may be deferred to a dedicated thread, it can appear as softirq,
+ and the thread is called ksoftirqd/0 (for CPU 0). The CPU taking care of
+ this load is generally defined by the hardware settings, though in the case
+ of softirq it is often possible to remap the processing to another CPU.
+ This interrupt part will often be perceived as parasitic since it's not
+ associated with any process, but it actually is some processing being done
+ to prepare the work for the process.
+
+ - the system part, which concerns all the processing done using kernel code
+ called from userland. System calls are accounted as system for example. All
+ synchronously delivered Tx packets will be accounted for as system time. If
+ some packets have to be deferred due to queues filling up, they may then be
+ processed in interrupt context later (eg: upon receipt of an ACK opening a
+ TCP window).
+
+ - the user part, which exclusively runs application code in userland. HAProxy
+ runs exclusively in this part, though it makes heavy use of system calls.
+ Rules processing, regular expressions, compression, encryption all add to
+ the user portion of CPU consumption.
+
+ - the idle part, which is what the CPU does when there is nothing to do. For
+ example HAProxy waits for an incoming connection, or waits for some data to
+ leave, meaning the system is waiting for an ACK from the client to push
+ these data.
+
+In practice regarding HAProxy's activity, it is in general reasonably accurate
+(though not exact) to consider that interrupt/softirq are caused by Rx
+processing in kernel drivers, that user-land is caused by layer 7 processing
+in HAProxy, and that system time is caused by network processing on the Tx
+path.
+
+Since HAProxy runs around an event loop, it waits for new events using poll()
+(or any alternative) and processes all these events as fast as possible before
+going back to poll() waiting for new events. It measures the time spent waiting
+in poll() compared to the time spent processing events. The ratio of
+polling time vs total time is called the "idle" time, it's the amount of time
+spent waiting for something to happen. This ratio is reported in the stats page
+on the "idle" line, or "Idle_pct" on the CLI. When it's close to 100%, it means
+the load is extremely low. When it's close to 0%, it means that there is
+constantly some activity. While it cannot be very accurate on an overloaded
+system due to other processes possibly preempting the CPU from the haproxy
+process, it still provides a good estimate about how HAProxy considers it is
+working : if the load is low and the idle ratio is low as well, it may indicate
+that HAProxy has a lot of work to do, possibly due to very expensive rules that
+have to be processed. Conversely, if HAProxy indicates the idle is close to
+100% while things are slow, it means that it cannot do anything to speed things
+up because it is already waiting for incoming data to process. In the example
+below, haproxy is completely idle :
+
+ $ echo "show info" | socat - /var/run/haproxy.sock | grep ^Idle
+ Idle_pct: 100
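For scripted monitoring, the "Idle_pct" value can be extracted and compared
against a threshold. The sketch below does this with awk ; the sample text
stands in for the output of the socat command above, and the 30% threshold is
an arbitrary example :

```shell
# Sample "show info" output; in production this would come from e.g.:
#   echo "show info" | socat - /var/run/haproxy.sock
sample='Name: HAProxy
Version: 1.5.0
Idle_pct: 37'

# extract the value after "Idle_pct: "
idle=$(printf '%s\n' "$sample" | awk -F': ' '/^Idle_pct/ {print $2}')

if [ "$idle" -lt 30 ]; then
    echo "WARNING: idle ratio is ${idle}%, haproxy is busy"
else
    echo "OK: idle ratio is ${idle}%"
fi
```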
+
+When the idle ratio starts to become very low, it is important to tune the
+system and place processes and interrupts correctly to save the most possible
+CPU resources for all tasks. If a firewall is present, it may be worth trying
+to disable it or to tune it to ensure it is not responsible for a large part
+of the performance limitation. It's worth noting that unloading a stateful
+firewall generally reduces both the amount of interrupt/softirq and of system
+usage since such firewalls act both on the Rx and the Tx paths. On Linux,
+unloading the nf_conntrack and ip_conntrack modules will show whether there is
+anything to gain. If so, then the module runs with default settings and you'll
+have to figure out how to tune it for better performance. In general this
+consists of considerably increasing the hash table size. On FreeBSD,
+"pfctl -d" will
+disable the "pf" firewall and its stateful engine at the same time.
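As an illustration, a conntrack inspection and tuning session on Linux could
look like the following transcript (the paths are the usual ones on recent
kernels and may differ on your system ; the values shown are arbitrary
examples) :

```shell
# how full is the conntrack table compared to its limit ?
$ cat /proc/sys/net/netfilter/nf_conntrack_count
48213
$ cat /proc/sys/net/netfilter/nf_conntrack_max
65536

# considerably increase the hash table size, then the entry limit
$ echo 262144 > /sys/module/nf_conntrack/parameters/hashsize
$ sysctl -w net.netfilter.nf_conntrack_max=1048576

# or unload the modules entirely to measure the potential gain
$ rmmod nf_conntrack
```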
+
+If it is observed that a lot of time is spent in interrupt/softirq, it is
+important to ensure that they don't run on the same CPU. Most systems tend to
+pin the tasks on the CPU where they receive the network traffic because for
+certain workloads it improves things. But with heavily network-bound workloads
+it is the opposite as the haproxy process will have to fight against its kernel
+counterpart. Pinning haproxy to one CPU core and the interrupts to another one,
+all sharing the same L3 cache, tends to noticeably increase network performance
+because in practice the amount of work for haproxy and the network stack are
+quite close, so they can almost fill an entire CPU each. On Linux this is done
+using taskset (for haproxy) or using cpu-map (from the haproxy config), and the
+interrupts are assigned under /proc/irq. Many network interfaces support
+multiple queues and multiple interrupts. In general it helps to spread them
+across a small number of CPU cores provided they all share the same L3 cache.
+Please always stop irqbalance, which tends to do the worst possible thing on
+such workloads.
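As a sketch, such a layout could be set up as follows on Linux (the CPU
numbers and the IRQ number 77 are purely illustrative ; check /proc/interrupts
for your NIC's IRQ numbers) :

```shell
# pin the running haproxy process to CPU core 1
$ taskset -pc 1 $(pidof haproxy)

# pin the NIC's Rx interrupt (here IRQ 77) to CPU core 2 (mask 0x4)
$ echo 4 > /proc/irq/77/smp_affinity

# make sure irqbalance is not running to undo this
$ service irqbalance stop
```

The same pinning can be achieved from the configuration using "cpu-map"
(eg: "cpu-map 1 1" binds process 1 to CPU 1) ; see the configuration manual.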
+
+For CPU-bound workloads consisting in a lot of SSL traffic or a lot of
+compression, it may be worth using multiple processes dedicated to certain
+tasks, though there is no universal rule here and experimentation will have to
+be performed.
+
+In order to increase the CPU capacity, it is possible to make HAProxy run as
+several processes, using the "nbproc" directive in the global section. There
+are some limitations though :
+ - health checks are run per process, so the target servers will get as many
+ checks as there are running processes ;
+ - maxconn values and queues are per-process so the correct value must be set
+ to avoid overloading the servers ;
+  - outgoing connections should avoid using port ranges to avoid conflicts ;
+ - stick-tables are per process and are not shared between processes ;
+ - each peers section may only run on a single process at a time ;
+ - the CLI operations will only act on a single process at a time.
+
+With this in mind, it appears that the easiest setup often consists in having
+one first layer running on multiple processes and in charge of the heavy
+processing, passing the traffic to a second layer running in a single process.
+This mechanism is suited to SSL and compression which are the two CPU-heavy
+features. Instances can easily be chained over UNIX sockets (which are cheaper
+than TCP sockets and which do not waste ports), and the proxy protocol is
+useful to pass client information to the next stage. When doing so, it is
+generally a good idea to bind all the single-process tasks to process number 1
+and extra tasks to next processes, as this will make it easier to generate
+similar configurations for different machines.
+
+On Linux versions 3.9 and above, running HAProxy in multi-process mode is much
+more efficient when each process uses a distinct listening socket on the same
+IP:port ; this will make the kernel evenly distribute the load across all
+processes instead of waking them all up. Please check the "process" option of
+the "bind" keyword lines in the configuration manual for more information.
+
+
+8. Logging
+----------
+
+For logging, HAProxy always relies on a syslog server since it does not perform
+any file-system access. The standard way of using it is to send logs over UDP
+to the log server (by default on port 514). Very commonly this is configured to
+127.0.0.1 where the local syslog daemon is running, but it's also used over the
+network to log to a central server. The central server provides additional
+benefits especially in active-active scenarios where it is desirable to keep
+the logs merged in arrival order. HAProxy may also make use of a UNIX socket to
+send its logs to the local syslog daemon, but it is not recommended at all,
+because if the syslog server is restarted while haproxy runs, the socket will
+be replaced and new logs will be lost. Since HAProxy will be isolated inside a
+chroot jail, it will not have the ability to reconnect to the new socket. It
+has also been observed in the field that the log buffers used on UNIX sockets
+are very small and lead to lost messages even at fairly light loads. This can
+still be fine for testing however.
+
+It is recommended to add the following directive to the "global" section to
+make HAProxy log to the local daemon using facility "local0" :
+
+ log 127.0.0.1:514 local0
+
+and then to add the following one to each "defaults" section or to each frontend
+and backend section :
+
+ log global
+
+This way, all logs will be centralized through the global definition of where
+the log server is.
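Putting the two directives together, a minimal sketch of a logging
configuration looks like this (the "defaults" content beyond "log global" is
just illustrative context) :

```
global
    log 127.0.0.1:514 local0

defaults
    log global
    mode http
    option httplog
```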
+
+Some syslog daemons do not listen to UDP traffic by default, so depending on
+the daemon being used, the syntax to enable this will vary :
+
+ - on sysklogd, you need to pass argument "-r" on the daemon's command line
+ so that it listens to a UDP socket for "remote" logs ; note that there is
+ no way to limit it to address 127.0.0.1 so it will also receive logs from
+ remote systems ;
+
+ - on rsyslogd, the following lines must be added to the configuration file :
+
+ $ModLoad imudp
+ $UDPServerAddress *
+ $UDPServerRun 514
+
+  - on syslog-ng, a new source can be created the following way ; it then
+    needs to be added as a valid source in one of the "log" directives :
+
+ source s_udp {
+ udp(ip(127.0.0.1) port(514));
+ };
+
+Please consult your syslog daemon's manual for more information. If no logs are
+seen in the system's log files, please consider the following tests :
+
+ - restart haproxy. Each frontend and backend logs one line indicating it's
+ starting. If these logs are received, it means logs are working.
+
+ - run "strace -tt -s100 -etrace=sendmsg -p <haproxy's pid>" and perform some
+ activity that you expect to be logged. You should see the log messages
+ being sent using sendmsg() there. If they don't appear, restart using
+ strace on top of haproxy. If you still see no logs, it definitely means
+ that something is wrong in your configuration.
+
+ - run tcpdump to watch for port 514, for example on the loopback interface if
+ the traffic is being sent locally : "tcpdump -As0 -ni lo port 514". If the
+    packets are seen there, it proves they are being sent, and the syslog
+    daemon then needs to be troubleshot.
+
+While traffic logs are sent from the frontends (where the incoming connections
+are accepted), backends also need to be able to send logs in order to report a
+server state change consecutive to a health check. Please consult HAProxy's
+configuration manual for more information regarding all possible log settings.
+
+It is convenient to choose a facility that is not used by other daemons.
+HAProxy examples often suggest "local0" for traffic logs and "local1" for
+admin logs because they're rarely used for anything else. A single facility
+would be enough as well.
+Having separate logs is convenient for log analysis, but it's also important to
+remember that logs may sometimes convey confidential information, and as such
+they must not be mixed with other logs that may accidentally be handed out to
+unauthorized people.
+
+For in-field troubleshooting without impacting the server's capacity too much,
+it is recommended to make use of the "halog" utility provided with HAProxy.
+This is sort of a grep-like utility designed to process HAProxy log files at
+a very fast data rate. Typical figures range between 1 and 2 GB of logs per
+second. It is capable of extracting only certain logs (eg: search for some
+classes of HTTP status codes, connection termination status, search by response
+time ranges, look for errors only), count lines, limit the output to a number
+of lines, and perform some more advanced statistics such as sorting servers
+by response time or error counts, sorting URLs by time or count, sorting client
+addresses by access count, and so on. It is pretty convenient to quickly spot
+anomalies such as a bot looping on the site, and block them.
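For example, a few typical invocations might look like the following (exact
flags vary between versions ; run "halog" without arguments to list the ones
available in yours) :

```shell
# count requests that returned a 5xx status
$ halog -c -hs 500:599 < /var/log/haproxy.log

# per-server report (errors, average response times, ...)
$ halog -srv < /var/log/haproxy.log

# URLs sorted by total time spent serving them
$ halog -ut < /var/log/haproxy.log
```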
+
+
+9. Statistics and monitoring
+----------------------------
+
+It is possible to query HAProxy about its status. The most commonly used
+mechanism is the HTTP statistics page. This page also exposes an alternative
+CSV output format for monitoring tools. The same format is provided on the
+Unix socket.
+
+
+9.1. CSV format
+---------------
+
+The statistics may be consulted either from the unix socket or from the HTTP
+page. Both provide a CSV format whose fields are described below. The first
+line
+begins with a sharp ('#') and has one word per comma-delimited field which
+represents the title of the column. All other lines starting at the second one
+use a classical CSV format using a comma as the delimiter, and the double quote
+('"') as an optional text delimiter, but only if the enclosed text is ambiguous
+(if it contains a quote or a comma). The double-quote character ('"') in the
+text is doubled ('""'), which is the format that most tools recognize. Please
+do not insert any column before these ones in order not to break tools which
+use hard-coded column positions.
+
+In brackets after each field name are the types which may have a value for
+that field. The types are L (Listeners), F (Frontends), B (Backends), and
+S (Servers).
+
+ 0. pxname [LFBS]: proxy name
+ 1. svname [LFBS]: service name (FRONTEND for frontend, BACKEND for backend,
+ any name for server/listener)
+ 2. qcur [..BS]: current queued requests. For the backend this reports the
+ number queued without a server assigned.
+ 3. qmax [..BS]: max value of qcur
+ 4. scur [LFBS]: current sessions
+ 5. smax [LFBS]: max sessions
+ 6. slim [LFBS]: configured session limit
+ 7. stot [LFBS]: cumulative number of connections
+ 8. bin [LFBS]: bytes in
+ 9. bout [LFBS]: bytes out
+ 10. dreq [LFB.]: requests denied because of security concerns.
+ - For tcp this is because of a matched tcp-request content rule.
+ - For http this is because of a matched http-request or tarpit rule.
+ 11. dresp [LFBS]: responses denied because of security concerns.
+ - For http this is because of a matched http-request rule, or
+ "option checkcache".
+ 12. ereq [LF..]: request errors. Some of the possible causes are:
+ - early termination from the client, before the request has been sent.
+ - read error from the client
+ - client timeout
+ - client closed connection
+ - various bad requests from the client.
+ - request was tarpitted.
+ 13. econ [..BS]: number of requests that encountered an error trying to
+ connect to a backend server. The backend stat is the sum of the stat
+ for all servers of that backend, plus any connection errors not
+ associated with a particular server (such as the backend having no
+ active servers).
+ 14. eresp [..BS]: response errors. srv_abrt will be counted here also.
+ Some other errors are:
+ - write error on the client socket (won't be counted for the server stat)
+ - failure applying filters to the response.
+ 15. wretr [..BS]: number of times a connection to a server was retried.
+ 16. wredis [..BS]: number of times a request was redispatched to another
+ server. The server value counts the number of times that server was
+ switched away from.
+ 17. status [LFBS]: status (UP/DOWN/NOLB/MAINT/MAINT(via)...)
+ 18. weight [..BS]: total weight (backend), server weight (server)
+ 19. act [..BS]: number of active servers (backend), server is active (server)
+ 20. bck [..BS]: number of backup servers (backend), server is backup (server)
+ 21. chkfail [...S]: number of failed checks. (Only counts checks failed when
+ the server is up.)
+ 22. chkdown [..BS]: number of UP->DOWN transitions. The backend counter counts
+ transitions to the whole backend being down, rather than the sum of the
+ counters for each server.
+ 23. lastchg [..BS]: number of seconds since the last UP<->DOWN transition
+ 24. downtime [..BS]: total downtime (in seconds). The value for the backend
+ is the downtime for the whole backend, not the sum of the server downtime.
+ 25. qlimit [...S]: configured maxqueue for the server, or nothing if the
+     value is 0 (default, meaning no limit)
+ 26. pid [LFBS]: process id (0 for first instance, 1 for second, ...)
+ 27. iid [LFBS]: unique proxy id
+ 28. sid [L..S]: server id (unique inside a proxy)
+ 29. throttle [...S]: current throttle percentage for the server, when
+ slowstart is active, or no value if not in slowstart.
+ 30. lbtot [..BS]: total number of times a server was selected, either for new
+ sessions, or when re-dispatching. The server counter is the number
+ of times that server was selected.
+ 31. tracked [...S]: id of proxy/server if tracking is enabled.
+ 32. type [LFBS]: (0=frontend, 1=backend, 2=server, 3=socket/listener)
+ 33. rate [.FBS]: number of sessions per second over last elapsed second
+ 34. rate_lim [.F..]: configured limit on new sessions per second
+ 35. rate_max [.FBS]: max number of new sessions per second
+ 36. check_status [...S]: status of last health check, one of:
+ UNK -> unknown
+ INI -> initializing
+ SOCKERR -> socket error
+ L4OK -> check passed on layer 4, no upper layers testing enabled
+ L4TOUT -> layer 1-4 timeout
+ L4CON -> layer 1-4 connection problem, for example
+ "Connection refused" (tcp rst) or "No route to host" (icmp)
+ L6OK -> check passed on layer 6
+ L6TOUT -> layer 6 (SSL) timeout
+ L6RSP -> layer 6 invalid response - protocol error
+ L7OK -> check passed on layer 7
+ L7OKC -> check conditionally passed on layer 7, for example 404 with
+ disable-on-404
+ L7TOUT -> layer 7 (HTTP/SMTP) timeout
+ L7RSP -> layer 7 invalid response - protocol error
+ L7STS -> layer 7 response error, for example HTTP 5xx
+ 37. check_code [...S]: layer5-7 code, if available
+ 38. check_duration [...S]: time in ms taken to finish the last health check
+ 39. hrsp_1xx [.FBS]: http responses with 1xx code
+ 40. hrsp_2xx [.FBS]: http responses with 2xx code
+ 41. hrsp_3xx [.FBS]: http responses with 3xx code
+ 42. hrsp_4xx [.FBS]: http responses with 4xx code
+ 43. hrsp_5xx [.FBS]: http responses with 5xx code
+ 44. hrsp_other [.FBS]: http responses with other codes (protocol error)
+ 45. hanafail [...S]: failed health checks details
+ 46. req_rate [.F..]: HTTP requests per second over last elapsed second
+ 47. req_rate_max [.F..]: max number of HTTP requests per second observed
+ 48. req_tot [.F..]: total number of HTTP requests received
+ 49. cli_abrt [..BS]: number of data transfers aborted by the client
+ 50. srv_abrt [..BS]: number of data transfers aborted by the server
+ (inc. in eresp)
+ 51. comp_in [.FB.]: number of HTTP response bytes fed to the compressor
+ 52. comp_out [.FB.]: number of HTTP response bytes emitted by the compressor
+ 53. comp_byp [.FB.]: number of bytes that bypassed the HTTP compressor
+ (CPU/BW limit)
+ 54. comp_rsp [.FB.]: number of HTTP responses that were compressed
+ 55. lastsess [..BS]: number of seconds since last session assigned to
+ server/backend
+ 56. last_chk [...S]: last health check contents or textual error
+ 57. last_agt [...S]: last agent check contents or textual error
+ 58. qtime [..BS]: the average queue time in ms over the 1024 last requests
+ 59. ctime [..BS]: the average connect time in ms over the 1024 last requests
+ 60. rtime [..BS]: the average response time in ms over the 1024 last requests
+ (0 for TCP)
+ 61. ttime [..BS]: the average total session time in ms over the 1024 last
+ requests
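As an illustration, the sketch below extracts the proxy name, service name
and status (fields 0, 1 and 17 above, i.e. columns 1, 2 and 18 for awk) from
a sample line of this CSV output ; in production the input would come from
issuing "show stat" on the socket :

```shell
# A sample "show stat" line stands in for the socket output, e.g.:
#   echo "show stat" | socat stdio /var/run/haproxy.sock
sample='# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,
bk_app,srv1,0,0,3,10,100,1234,5678,91011,0,0,,0,0,0,0,UP,1,'

# skip the "#"-prefixed title line and print columns 1, 2 and 18
printf '%s\n' "$sample" | awk -F, '!/^#/ {print $1, $2, $18}'
```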
+
+
+9.2. Unix Socket commands
+-------------------------
+
+The stats socket is not enabled by default. In order to enable it, it is
+necessary to add one line in the global section of the haproxy configuration.
+A second line is recommended to set a larger timeout, always appreciated when
+issuing commands by hand :
+
+ global
+ stats socket /var/run/haproxy.sock mode 600 level admin
+ stats timeout 2m
+
+It is also possible to add multiple instances of the stats socket by repeating
+the line, and make them listen to a TCP port instead of a UNIX socket. This is
+never done by default because this is dangerous, but can be handy in some
+situations :
+
+ global
+ stats socket /var/run/haproxy.sock mode 600 level admin
+ stats socket ipv4@192.168.0.1:9999 level admin
+ stats timeout 2m
+
+To access the socket, an external utility such as "socat" is required. Socat is
+a swiss-army knife to connect anything to anything. We use it to connect
+terminals to the socket, or a couple of stdin/stdout pipes to it for scripts.
+The two main syntaxes we'll use are the following :
+
+ # socat /var/run/haproxy.sock stdio
+ # socat /var/run/haproxy.sock readline
+
+The first one is used with scripts. It is possible to send the output of a
+script to haproxy, and pass haproxy's output to another script. That's useful
+for retrieving counters or attack traces for example.
+
+The second one is only useful for issuing commands by hand. It has the benefit
+that the terminal is handled by the readline library which supports line
+editing and history, which is very convenient when issuing repeated commands
+(eg: watch a counter).
+
+The socket supports two operation modes :
+ - interactive
+ - non-interactive
+
+The non-interactive mode is the default when socat connects to the socket. In
+this mode, a single line may be sent. It is processed as a whole, responses are
+sent back, and the connection closes after the end of the response. This is the
+mode that scripts and monitoring tools use. It is possible to send multiple
+commands in this mode, they need to be delimited by a semi-colon (';'). For
+example :
+
+ # echo "show info;show stat;show table" | socat /var/run/haproxy.sock stdio
+
+The interactive mode displays a prompt ('>') and waits for commands to be
+entered on the line, then processes them, and displays the prompt again to wait
+for a new command. This mode is entered via the "prompt" command which must be
+sent on the first line in non-interactive mode. The mode works as a toggle : if
+"prompt" is sent in interactive mode, it is disabled and the connection closes
+after processing the last command of the same line.
+
+For this reason, when debugging by hand, it's quite common to start with the
+"prompt" command :
+
+ # socat /var/run/haproxy.sock readline
+ prompt
+ > show info
+ ...
+ >
+
+Since multiple commands may be issued at once, haproxy uses the empty line as a
+delimiter to mark an end of output for each command, and takes care of ensuring
+that no command can emit an empty line on output. A script can thus easily
+parse the output even when multiple commands were pipelined on a single line.
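A sketch of such parsing using awk's paragraph mode (the printf input stands
in for what two pipelined commands would return over the socket) :

```shell
# Each command's output ends with an empty line; RS="" makes awk treat
# each blank-line-separated block as one record.
printf 'Name: HAProxy\nVersion: 1.5.0\n\n# pxname,svname\nfe,FRONTEND\n' |
awk 'BEGIN { RS="" } { print "=== command " NR " output ==="; print }'
```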
+
+It is important to understand that when multiple haproxy processes are started
+on the same sockets, any process may pick up the request and will output its
+own stats.
+
+The list of commands currently supported on the stats socket is provided below.
+If an unknown command is sent, haproxy displays the usage message listing all
+supported commands. Some commands support a more complex syntax ; when an
+error occurs, haproxy will generally explain which part of the command is
+invalid.
+
+add acl <acl> <pattern>
+ Add an entry into the acl <acl>. <acl> is the #<id> or the <file> returned by
+ "show acl". This command does not verify if the entry already exists. This
+ command cannot be used if the reference <acl> is a file also used with a map.
+ In this case, you must use the command "add map" in place of "add acl".
+
+add map <map> <key> <value>
+ Add an entry into the map <map> to associate the value <value> to the key
+ <key>. This command does not verify if the entry already exists. It is
+  mainly used to fill a map after a clear operation. Note that if the
+  reference <map> is a file shared with an acl, this acl will also contain a
+  new pattern entry.
+
+clear counters
+ Clear the max values of the statistics counters in each proxy (frontend &
+ backend) and in each server. The cumulated counters are not affected. This
+ can be used to get clean counters after an incident, without having to
+ restart nor to clear traffic counters. This command is restricted and can
+ only be issued on sockets configured for levels "operator" or "admin".
+
+clear counters all
+ Clear all statistics counters in each proxy (frontend & backend) and in each
+ server. This has the same effect as restarting. This command is restricted
+ and can only be issued on sockets configured for level "admin".
+
+clear acl <acl>
+ Remove all entries from the acl <acl>. <acl> is the #<id> or the <file>
+  returned by "show acl". Note that if the reference <acl> is a file shared
+  with a map, this map will also be cleared.
+
+clear map <map>
+ Remove all entries from the map <map>. <map> is the #<id> or the <file>
+  returned by "show map". Note that if the reference <map> is a file shared
+  with an acl, this acl will also be cleared.
+
+clear table <table> [ data.<type> <operator> <value> ] | [ key <key> ]
+ Remove entries from the stick-table <table>.
+
+ This is typically used to unblock some users complaining they have been
+ abusively denied access to a service, but this can also be used to clear some
+ stickiness entries matching a server that is going to be replaced (see "show
+ table" below for details). Note that sometimes, removal of an entry will be
+  refused because it is currently tracked by a session. Retrying a few seconds
+  later, once the session ends, is usually enough.
+
+  When no optional arguments are given, all entries are removed.
+
+  When the "data." form is used, entries matching a filter applied using the
+ stored data (see "stick-table" in section 4.2) are removed. A stored data
+ type must be specified in <type>, and this data type must be stored in the
+ table otherwise an error is reported. The data is compared according to
+ <operator> with the 64-bit integer <value>. Operators are the same as with
+ the ACLs :
+
+ - eq : match entries whose data is equal to this value
+ - ne : match entries whose data is not equal to this value
+ - le : match entries whose data is less than or equal to this value
+ - ge : match entries whose data is greater than or equal to this value
+ - lt : match entries whose data is less than this value
+ - gt : match entries whose data is greater than this value
+
+  When the key form is used, the entry <key> is removed. The key must be of the
+ same type as the table, which currently is limited to IPv4, IPv6, integer and
+ string.
+
+ Example :
+ $ echo "show table http_proxy" | socat stdio /tmp/sock1
+ >>> # table: http_proxy, type: ip, size:204800, used:2
+ >>> 0x80e6a4c: key=127.0.0.1 use=0 exp=3594729 gpc0=0 conn_rate(30000)=1 \
+ bytes_out_rate(60000)=187
+ >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
+ bytes_out_rate(60000)=191
+
+ $ echo "clear table http_proxy key 127.0.0.1" | socat stdio /tmp/sock1
+
+ $ echo "show table http_proxy" | socat stdio /tmp/sock1
+ >>> # table: http_proxy, type: ip, size:204800, used:1
+ >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
+ bytes_out_rate(60000)=191
+ $ echo "clear table http_proxy data.gpc0 eq 1" | socat stdio /tmp/sock1
+ $ echo "show table http_proxy" | socat stdio /tmp/sock1
+ >>> # table: http_proxy, type: ip, size:204800, used:1
+
+del acl <acl> [<key>|#<ref>]
+  Delete all the acl entries from the acl <acl> corresponding to the key
+  <key>. <acl> is the #<id> or the <file> returned by "show acl". If <ref> is
+  used, this command deletes only the listed reference. The reference can be
+  found by listing the contents of the acl. Note that if the reference <acl>
+  is a file shared with a map, the entry will also be deleted in the map.
+
+del map <map> [<key>|#<ref>]
+  Delete all the map entries from the map <map> corresponding to the key
+  <key>. <map> is the #<id> or the <file> returned by "show map". If <ref> is
+  used, this command deletes only the listed reference. The reference can be
+  found by listing the contents of the map. Note that if the reference <map>
+  is a file shared with an acl, the entry will also be deleted in the acl.
+
+disable agent <backend>/<server>
+ Mark the auxiliary agent check as temporarily stopped.
+
+  In the case where an agent check is being run as an auxiliary check, due
+  to the agent-check parameter of a server directive, new checks are only
+  initialised when the agent is in the enabled state. Thus, "disable agent"
+  will prevent any new agent checks from being initiated until the agent is
+  re-enabled using "enable agent".
+
+ When an agent is disabled the processing of an auxiliary agent check that
+ was initiated while the agent was set as enabled is as follows: All
+ results that would alter the weight, specifically "drain" or a weight
+  returned by the agent, are ignored. The processing of agent checks is
+  otherwise unchanged.
+
+ The motivation for this feature is to allow the weight changing effects
+ of the agent checks to be paused to allow the weight of a server to be
+ configured using set weight without being overridden by the agent.
+
+ This command is restricted and can only be issued on sockets configured for
+ level "admin".
+
+disable frontend <frontend>
+ Mark the frontend as temporarily stopped. This corresponds to the mode which
+ is used during a soft restart : the frontend releases the port but can be
+ enabled again if needed. This should be used with care as some non-Linux OSes
+  are unable to re-enable it. This is intended to be used in environments
+ where stopping a proxy is not even imaginable but a misconfigured proxy must
+ be fixed. That way it's possible to release the port and bind it into another
+ process to restore operations. The frontend will appear with status "STOP"
+ on the stats page.
+
+ The frontend may be specified either by its name or by its numeric ID,
+ prefixed with a sharp ('#').
+
+ This command is restricted and can only be issued on sockets configured for
+ level "admin".
+
+disable health <backend>/<server>
+ Mark the primary health check as temporarily stopped. This will disable
+ sending of health checks, and the last health check result will be ignored.
+ The server will be in unchecked state and considered UP unless an auxiliary
+ agent check forces it down.
+
+ This command is restricted and can only be issued on sockets configured for
+ level "admin".
+
+disable server <backend>/<server>
+ Mark the server DOWN for maintenance. In this mode, no more checks will be
+ performed on the server until it leaves maintenance.
+ If the server is tracked by other servers, those servers will be set to DOWN
+ during the maintenance.
+
+ In the statistics page, a server DOWN for maintenance will appear with a
+ "MAINT" status, its tracking servers with the "MAINT(via)" one.
+
+ Both the backend and the server may be specified either by their name or by
+ their numeric ID, prefixed with a sharp ('#').
+
+ This command is restricted and can only be issued on sockets configured for
+ level "admin".
+
+enable agent <backend>/<server>
+ Resume auxiliary agent check that was temporarily stopped.
+
+ See "disable agent" for details of the effect of temporarily starting
+ and stopping an auxiliary agent.
+
+ This command is restricted and can only be issued on sockets configured for
+ level "admin".
+
+enable frontend <frontend>
+ Resume a frontend which was temporarily stopped. It is possible that some of
+ the listening ports won't be able to bind anymore (eg: if another process
+ took them since the 'disable frontend' operation). If this happens, an error
+ is displayed. Some operating systems might not be able to resume a frontend
+ which was disabled.
+
+ The frontend may be specified either by its name or by its numeric ID,
+ prefixed with a sharp ('#').
+
+ This command is restricted and can only be issued on sockets configured for
+ level "admin".
+
+enable health <backend>/<server>
+ Resume a primary health check that was temporarily stopped. This will enable
+ sending of health checks again. Please see "disable health" for details.
+
+ This command is restricted and can only be issued on sockets configured for
+ level "admin".
+
+enable server <backend>/<server>
+ If the server was previously marked as DOWN for maintenance, this marks the
+ server UP and checks are re-enabled.
+
+ Both the backend and the server may be specified either by their name or by
+ their numeric ID, prefixed with a sharp ('#').
+
+ This command is restricted and can only be issued on sockets configured for
+ level "admin".
+
+get map <map> <value>
+get acl <acl> <value>
+ Lookup the value <value> in the map <map> or in the ACL <acl>. <map> or <acl>
+ are the #<id> or the <file> returned by "show map" or "show acl". This command
+ returns all the matching patterns associated with this map. This is useful for
+  debugging maps and ACLs. The output consists of one line per matching type.
+  Each line is a space-delimited series of words.
+
+ The first two words are:
+
+ <match method>: The match method applied. It can be "found", "bool",
+ "int", "ip", "bin", "len", "str", "beg", "sub", "dir",
+ "dom", "end" or "reg".
+
+ <match result>: The result. Can be "match" or "no-match".
+
+ The following words are returned only if the pattern matches an entry.
+
+ <index type>: "tree" or "list". The internal lookup algorithm.
+
+ <case>: "case-insensitive" or "case-sensitive". The
+ interpretation of the case.
+
+ <entry matched>: match="<entry>". Return the matched pattern. It is
+ useful with regular expressions.
+
+  The last two words show the returned value and its type. In the "acl" case,
+  there is no associated value.
+
+    return=nothing:      No value is returned because there is no map.
+ return="<value>": The value returned in the string format.
+ return=cannot-display: The value cannot be converted as string.
+
+ type="<type>": The type of the returned sample.
+
+get weight <backend>/<server>
+ Report the current weight and the initial weight of server <server> in
+ backend <backend> or an error if either doesn't exist. The initial weight is
+ the one that appears in the configuration file. Both are normally equal
+ unless the current weight has been changed. Both the backend and the server
+ may be specified either by their name or by their numeric ID, prefixed with a
+ sharp ('#').
+
+help
+ Print the list of known keywords and their basic usage. The same help screen
+ is also displayed for unknown commands.
+
+prompt
+ Toggle the prompt at the beginning of the line and enter or leave interactive
+ mode. In interactive mode, the connection is not closed after a command
+ completes. Instead, the prompt will appear again, indicating to the user that
+ the interpreter is waiting for a new command. The prompt consists of a right
+ angle bracket followed by a space "> ". This mode is particularly convenient
+ when one wants to periodically check information such as stats or errors.
+ It is also a good idea to enter interactive mode before issuing a "help"
+ command.
+
+quit
+ Close the connection when in interactive mode.
+
+set map <map> [<key>|#<ref>] <value>
+ Modify the value corresponding to each key <key> in a map <map>. <map> is the
+ #<id> or <file> returned by "show map". If the <ref> is used in place of
+ <key>, only the entry pointed by <ref> is changed. The new value is <value>.
+
+set maxconn frontend <frontend> <value>
+ Dynamically change the specified frontend's maxconn setting. Any positive
+ value is allowed including zero, but setting values larger than the global
+ maxconn does not make much sense. If the limit is increased and connections
+ were pending, they will immediately be accepted. If it is lowered to a value
+ below the current number of connections, the acceptance of new connections
+ will be delayed until the threshold is reached. The frontend may be specified
+ by either its name or its numeric ID prefixed with a sharp ('#').
+
+set maxconn global <maxconn>
+ Dynamically change the global maxconn setting within the range defined by the
+ initial global maxconn setting. If it is increased and connections were
+ pending, they will immediately be accepted. If it is lowered to a value below
+ the current number of connections, the acceptance of new connections will be
+ delayed until the threshold is reached. A value of zero restores the initial
+ setting.
+
+set rate-limit connections global <value>
+ Change the process-wide connection rate limit, which is set by the global
+ 'maxconnrate' setting. A value of zero disables the limitation. This limit
+ applies to all frontends and the change has an immediate effect. The value
+ is passed in number of connections per second.
+
+set rate-limit http-compression global <value>
+ Change the maximum input compression rate, which is set by the global
+ 'maxcomprate' setting. A value of zero disables the limitation. The value is
+ passed in number of kilobytes per second. The value is reported in the "show
+ info" output on the "CompressBpsRateLim" line, expressed in bytes.
+
+set rate-limit sessions global <value>
+ Change the process-wide session rate limit, which is set by the global
+ 'maxsessrate' setting. A value of zero disables the limitation. This limit
+ applies to all frontends and the change has an immediate effect. The value
+ is passed in number of sessions per second.
+
+set rate-limit ssl-sessions global <value>
+ Change the process-wide SSL session rate limit, which is set by the global
+ 'maxsslrate' setting. A value of zero disables the limitation. This limit
+ applies to all frontends and the change has an immediate effect. The value
+ is passed in number of sessions per second sent to the SSL stack. It applies
+ before the handshake in order to protect the stack against handshake abuses.
+
+set server <backend>/<server> addr <ip4 or ip6 address>
+ Replace the current IP address of a server with the one provided.
+
+set server <backend>/<server> agent [ up | down ]
+ Force a server's agent to a new state. This can be useful to immediately
+ switch a server's state regardless of some slow agent checks for example.
+ Note that the change is propagated to tracking servers if any.
+
+set server <backend>/<server> health [ up | stopping | down ]
+ Force a server's health to a new state. This can be useful to immediately
+ switch a server's state regardless of some slow health checks for example.
+ Note that the change is propagated to tracking servers if any.
+
+set server <backend>/<server> state [ ready | drain | maint ]
+ Force a server's administrative state to a new state. This can be useful to
+ disable load balancing and/or any traffic to a server. Setting the state to
+ "ready" puts the server in normal mode, and the command is the equivalent of
+ the "enable server" command. Setting the state to "maint" disables any traffic
+ to the server as well as any health checks. This is the equivalent of the
+ "disable server" command. Setting the mode to "drain" only removes the server
+ from load balancing but still allows it to be checked and to accept new
+ persistent connections. Changes are propagated to tracking servers if any.
+
+set server <backend>/<server> weight <weight>[%]
+ Change a server's weight to the value passed in argument. This is the exact
+ equivalent of the "set weight" command below.
+
+set ssl ocsp-response <response>
+ This command is used to update an OCSP Response for a certificate (see "crt"
+ on "bind" lines). Same controls are performed as during the initial loading of
+ the response. The <response> must be passed as a base64 encoded string of the
+ DER encoded response from the OCSP server.
+
+ Example:
+ openssl ocsp -issuer issuer.pem -cert server.pem \
+ -host ocsp.issuer.com:80 -respout resp.der
+ echo "set ssl ocsp-response $(base64 -w 10000 resp.der)" | \
+ socat stdio /var/run/haproxy.stat
+
+set ssl tls-key <id> <tlskey>
+ Set the next TLS key for the <id> listener to <tlskey>. This key becomes the
+ ultimate key, while the penultimate one is used for encryption (others just
+ decrypt). The oldest TLS key present is overwritten. <id> is either a numeric
+ #<id> or <file> returned by "show tls-keys". <tlskey> is a base64-encoded
+ 48-byte TLS ticket key (e.g. "openssl rand -base64 48").
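+ A quick sanity check of a candidate key, sketched under the assumption that
+ GNU coreutils is available (the key generated here is random scratch data,
+ not a key to deploy):

```shell
# Generate a random 48-byte ticket key and base64-encode it, equivalent
# to "openssl rand -base64 48" but using coreutils only.
tlskey=$(head -c 48 /dev/urandom | base64 -w0)

# The decoded key must be exactly 48 bytes long to be accepted.
keylen=$(printf '%s' "$tlskey" | base64 -d | wc -c | tr -d ' ')
echo "$keylen"
```

+ The resulting key would then be pushed to a live process with something
+ like: echo "set ssl tls-key <id> $tlskey" | socat stdio /var/run/haproxy.stat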
+
+set table <table> key <key> [data.<data_type> <value>]*
+ Create or update a stick-table entry in the table. If the key is not present,
+ an entry is inserted. See stick-table in section 4.2 to find all possible
+ values for <data_type>. The most likely use consists in dynamically entering
+ entries for source IP addresses, with a flag in gpc0 to dynamically block an
+ IP address or affect its quality of service. It is possible to pass multiple
+ data_types in a single call.
+
+set timeout cli <delay>
+ Change the CLI interface timeout for current connection. This can be useful
+ during long debugging sessions where the user needs to constantly inspect
+ some indicators without being disconnected. The delay is passed in seconds.
+
+set weight <backend>/<server> <weight>[%]
+ Change a server's weight to the value passed in argument. If the value ends
+ with the '%' sign, then the new weight will be relative to the initially
+ configured weight. Absolute weights are permitted between 0 and 256.
+ Relative weights must be positive, and the resulting absolute weight is
+ capped at 256. Servers which are part of a farm running a static
+ load-balancing algorithm have stricter limitations because the weight
+ cannot change once set. Thus for these servers, the only accepted values
+ are 0 and 100% (or 0 and the initial weight). Changes take effect
+ immediately, though certain LB algorithms require a certain number of
+ requests to consider changes. A typical usage of this command is to
+ disable a server during an update by setting its weight to zero, then to
+ enable it again after the update by setting it back to 100%. This command
+ is restricted and can only be issued on sockets configured for level
+ "admin". Both the backend and the server may be specified either by their
+ name or by their numeric ID, prefixed with a sharp ('#').
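+ The relative form can be reasoned about as simple arithmetic. The sketch
+ below (not haproxy code) mirrors the rule stated above, for an assumed
+ initial weight of 64:

```shell
# "set weight bk/srv 150%" with an initial weight of 64: the new weight
# is initial * percent / 100, and the absolute result is capped at 256.
initial=64
percent=150
weight=$(( initial * percent / 100 ))
if [ "$weight" -gt 256 ]; then
    weight=256
fi
echo "$weight"
```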
+
+show errors [<iid>]
+ Dump last known request and response errors collected by frontends and
+ backends. If <iid> is specified, the dump is limited to errors concerning
+ the frontend or backend whose ID is <iid>. This command is restricted
+ and can only be issued on sockets configured for levels "operator" or
+ "admin".
+
+ The errors which may be collected are the last request and response errors
+ caused by protocol violations, often due to invalid characters in header
+ names. The report precisely indicates what exact character violated the
+ protocol. Other important information such as the exact date the error was
+ detected, frontend and backend names, the server name (when known), the
+ internal session ID and the source address which has initiated the session
+ are reported too.
+
+ All characters are returned, and non-printable characters are encoded. The
+ most common ones (\t = 9, \n = 10, \r = 13 and \e = 27) are encoded as one
+ letter following a backslash. The backslash itself is encoded as '\\' to
+ avoid confusion. Other non-printable characters are encoded '\xNN' where
+ NN is the two-digit hexadecimal representation of the character's ASCII
+ code.
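+ The '\xNN' form is plain two-digit hexadecimal. For instance (shell sketch,
+ not haproxy output):

```shell
# Encode a non-printable byte the way "show errors" does: a backslash,
# an 'x', then the two-digit hex value of the byte (here ASCII 1, SOH).
byte=1
encoded=$(printf '\\x%02X' "$byte")
echo "$encoded"
```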
+
+ Lines are prefixed with the position of their first character, starting at 0
+ for the beginning of the buffer. At most one input line is printed per line,
+ and large lines will be broken into multiple consecutive output lines so that
+ the output never goes beyond 79 characters wide. It is easy to detect if a
+ line was broken, because it will not end with '\n' and the next line's offset
+ will be followed by a '+' sign, indicating it is a continuation of the
+ previous line.
+
+ Example :
+ $ echo "show errors" | socat stdio /tmp/sock1
+ >>> [04/Mar/2009:15:46:56.081] backend http-in (#2) : invalid response
+ src 127.0.0.1, session #54, frontend fe-eth0 (#1), server s2 (#1)
+ response length 213 bytes, error at position 23:
+
+ 00000 HTTP/1.0 200 OK\r\n
+ 00017 header/bizarre:blah\r\n
+ 00038 Location: blah\r\n
+ 00054 Long-line: this is a very long line which should b
+ 00104+ e broken into multiple lines on the output buffer,
+ 00154+ otherwise it would be too large to print in a ter
+ 00204+ minal\r\n
+ 00211 \r\n
+
+ In the example above, we see that the backend "http-in" which has internal
+ ID 2 has blocked an invalid response from its server s2 which has internal
+ ID 1. The request was on session 54 initiated by source 127.0.0.1 and
+ received by frontend fe-eth0 whose ID is 1. The total response length was
+ 213 bytes when the error was detected, and the error was at byte 23. This
+ is the slash ('/') in header name "header/bizarre", which is not a valid
+ HTTP character for a header name.
+
+show backend
+ Dump the list of backends available in the running process.
+
+show info
+ Dump info about haproxy status on the current process.
+
+show map [<map>]
+ Dump info about map converters. Without argument, the list of all available
+ maps is returned. If a <map> is specified, its contents are dumped. <map> is
+ the #<id> or <file>. The first column is a unique identifier. It can be used
+ as reference for the operation "del map" and "set map". The second column is
+ the pattern and the third column is the sample if available. The data returned
+ are not directly a list of available maps, but the list of all patterns
+ composing any map. Many of these patterns can be shared with ACLs.
+
+show acl [<acl>]
+ Dump info about acl converters. Without argument, the list of all available
+ acls is returned. If an <acl> is specified, its contents are dumped. <acl> is
+ the #<id> or <file>. The dump format is the same as for maps, even for the
+ sample value. The data returned are not a list of available ACLs, but the
+ list of all patterns composing any ACL. Many of these patterns can be shared
+ with maps.
+
+show pools
+ Dump the status of internal memory pools. This is useful to track memory
+ usage when suspecting a memory leak for example. It does exactly the same
+ as the SIGQUIT when running in foreground except that it does not flush
+ the pools.
+
+show servers state [<backend>]
+ Dump the state of the servers found in the running configuration. A backend
+ name or identifier may be provided to limit the output to this backend only.
+
+ The dump has the following format:
+ - first line contains the format version (1 in this specification);
+ - second line contains the column headers, prefixed by a sharp ('#');
+ - third line and next ones contain data;
+ - each line starting by a sharp ('#') is considered as a comment.
+
+ Since multiple versions of the output may co-exist, below is the list of
+ fields and their order per file format version :
+ 1:
+ be_id: Backend unique id.
+ be_name: Backend label.
+ srv_id: Server unique id (in the backend).
+ srv_name: Server label.
+ srv_addr: Server IP address.
+ srv_op_state: Server operational state (UP/DOWN/...).
+ In source code: SRV_ST_*.
+ srv_admin_state: Server administrative state (MAINT/DRAIN/...).
+ In source code: SRV_ADMF_*.
+ srv_uweight: User visible server's weight.
+ srv_iweight: Server's initial weight.
+ srv_time_since_last_change: Time since last operational change.
+ srv_check_status: Last health check status.
+ srv_check_result: Last check result (FAILED/PASSED/...).
+ In source code: CHK_RES_*.
+ srv_check_health: Checks rise / fall current counter.
+ srv_check_state: State of the check (ENABLED/PAUSED/...).
+ In source code: CHK_ST_*.
+ srv_agent_state: State of the agent check (ENABLED/PAUSED/...).
+ In source code: CHK_ST_*.
+ bk_f_forced_id: Flag to know if the backend ID is forced by
+ configuration.
+ srv_f_forced_id: Flag to know if the server's ID is forced by
+ configuration.
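+ A version-1 dump can be consumed with standard text tools. The sketch below
+ parses a hypothetical dump (made-up backend and server names, shortened
+ header) and prints one summary line per server:

```shell
# Hypothetical version-1 "show servers state" output: version line,
# '#'-prefixed header, then one space-delimited data line per server.
dump='1
# be_id be_name srv_id srv_name srv_addr srv_op_state ...
3 bk_www 1 web1 192.168.22.1 2 0 1 1 12 6 3 4 6 0 0 0'

# Skip the version line and comments; print "backend/server op_state".
summary=$(printf '%s\n' "$dump" | awk 'NR > 1 && $1 !~ /^#/ { print $2 "/" $4 " op_state=" $6 }')
echo "$summary"
```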
+
+show sess
+ Dump all known sessions. Avoid doing this on slow connections as this can
+ be huge. This command is restricted and can only be issued on sockets
+ configured for levels "operator" or "admin".
+
+show sess <id>
+ Display a lot of internal information about the specified session identifier.
+ This identifier is the first field at the beginning of the lines in the dumps
+ of "show sess" (it corresponds to the session pointer). This information is
+ useless to most users but may be used by haproxy developers to troubleshoot a
+ complex bug. The output format is intentionally not documented so that it can
+ freely evolve depending on demands. You may find a description of all fields
+ returned in src/dumpstats.c.
+
+ The special id "all" dumps the states of all sessions, which must be avoided
+ as much as possible as it is highly CPU intensive and can take a lot of time.
+
+show stat [<iid> <type> <sid>]
+ Dump statistics in the CSV format. By passing <iid>, <type> and <sid>, it is
+ possible to dump only selected items :
+ - <iid> is a proxy ID, -1 to dump everything
+ - <type> selects the type of dumpable objects : 1 for frontends, 2 for
+ backends, 4 for servers, -1 for everything. These values can be ORed,
+ for example:
+ 1 + 2 = 3 -> frontend + backend.
+ 1 + 2 + 4 = 7 -> frontend + backend + server.
+ - <sid> is a server ID, -1 to dump everything from the selected proxy.
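+ Since the <type> values are bit flags, combinations are computed by ORing
+ them. For example, to dump frontends and servers while skipping backends:

```shell
# Object type flags for "show stat": frontend=1, backend=2, server=4.
frontends=1
servers=4

# Frontends plus servers, skipping backends: 1 | 4 = 5.
mask=$(( frontends | servers ))
echo "$mask"
```

+ The corresponding command against a live socket would then be, e.g.:
+ echo "show stat -1 5 -1" | socat stdio /tmp/sock1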
+
+ Example :
+ $ echo "show info;show stat" | socat stdio unix-connect:/tmp/sock1
+ >>> Name: HAProxy
+ Version: 1.4-dev2-49
+ Release_date: 2009/09/23
+ Nbproc: 1
+ Process_num: 1
+ (...)
+
+ # pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq, (...)
+ stats,FRONTEND,,,0,0,1000,0,0,0,0,0,0,,,,,OPEN,,,,,,,,,1,1,0, (...)
+ stats,BACKEND,0,0,0,0,1000,0,0,0,0,0,,0,0,0,0,UP,0,0,0,,0,250,(...)
+ (...)
+ www1,BACKEND,0,0,0,0,1000,0,0,0,0,0,,0,0,0,0,UP,1,1,0,,0,250, (...)
+
+ $
+
+ Here, two commands have been issued at once. That way it's easy to find
+ which process the stats apply to in multi-process mode. Notice the empty
+ line after the information output which marks the end of the first block.
+ A similar empty line appears at the end of the second block (stats) so that
+ the reader knows the output has not been truncated.
+
+show stat resolvers [<resolvers section id>]
+ Dump statistics for the given resolvers section, or all resolvers sections
+ if no section is supplied.
+
+ For each name server, the following counters are reported:
+ sent: number of DNS requests sent to this server
+ valid: number of DNS valid responses received from this server
+ update: number of DNS responses used to update the server's IP address
+ cname: number of CNAME responses
+ cname_error: CNAME errors encountered with this server
+ any_err: number of empty responses (i.e. the server does not support
+ the ANY type)
+ nx: number of non-existent domain responses received from this server
+ timeout: number of times this server did not answer in time
+ refused: number of requests refused by this server
+ other: any other DNS errors
+ invalid: number of invalid DNS responses (from a protocol point of view)
+ too_big: number of responses that were too big
+ outdated: number of responses that arrived too late (after another name
+ server)
+
+show table
+ Dump general information on all known stick-tables. Their name is returned
+ (the name of the proxy which holds them), their type (currently zero, always
+ IP), their size in maximum possible number of entries, and the number of
+ entries currently in use.
+
+ Example :
+ $ echo "show table" | socat stdio /tmp/sock1
+ >>> # table: front_pub, type: ip, size:204800, used:171454
+ >>> # table: back_rdp, type: ip, size:204800, used:0
+
+show table <name> [ data.<type> <operator> <value> ] | [ key <key> ]
+ Dump contents of stick-table <name>. In this mode, a first line of generic
+ information about the table is reported as with "show table", then all
+ entries are dumped. Since this can be quite heavy, it is possible to specify
+ a filter in order to select which entries to display.
+
+ When the "data." form is used the filter applies to the stored data (see
+ "stick-table" in section 4.2). A stored data type must be specified
+ in <type>, and this data type must be stored in the table otherwise an
+ error is reported. The data is compared according to <operator> with the
+ 64-bit integer <value>. Operators are the same as with the ACLs :
+
+ - eq : match entries whose data is equal to this value
+ - ne : match entries whose data is not equal to this value
+ - le : match entries whose data is less than or equal to this value
+ - ge : match entries whose data is greater than or equal to this value
+ - lt : match entries whose data is less than this value
+ - gt : match entries whose data is greater than this value
+
+
+ When the key form is used the entry <key> is shown. The key must be of the
+ same type as the table, which currently is limited to IPv4, IPv6, integer,
+ and string.
+
+ Example :
+ $ echo "show table http_proxy" | socat stdio /tmp/sock1
+ >>> # table: http_proxy, type: ip, size:204800, used:2
+ >>> 0x80e6a4c: key=127.0.0.1 use=0 exp=3594729 gpc0=0 conn_rate(30000)=1 \
+ bytes_out_rate(60000)=187
+ >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
+ bytes_out_rate(60000)=191
+
+ $ echo "show table http_proxy data.gpc0 gt 0" | socat stdio /tmp/sock1
+ >>> # table: http_proxy, type: ip, size:204800, used:2
+ >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
+ bytes_out_rate(60000)=191
+
+ $ echo "show table http_proxy data.conn_rate gt 5" | \
+ socat stdio /tmp/sock1
+ >>> # table: http_proxy, type: ip, size:204800, used:2
+ >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
+ bytes_out_rate(60000)=191
+
+ $ echo "show table http_proxy key 127.0.0.2" | \
+ socat stdio /tmp/sock1
+ >>> # table: http_proxy, type: ip, size:204800, used:2
+ >>> 0x80e6a80: key=127.0.0.2 use=0 exp=3594740 gpc0=1 conn_rate(30000)=10 \
+ bytes_out_rate(60000)=191
+
+ When the data criterion applies to a dynamic value dependent on time such as
+ a bytes rate, the value is dynamically computed during the evaluation of the
+ entry in order to decide whether it has to be dumped or not. This means that
+ such a filter could match for some time then not match anymore because as
+ time goes, the average event rate drops.
+
+ It is possible to use this to extract lists of IP addresses abusing the
+ service, in order to monitor them or even blacklist them in a firewall.
+ Example :
+ $ echo "show table http_proxy data.gpc0 gt 0" \
+ | socat stdio /tmp/sock1 \
+ | fgrep 'key=' | cut -d' ' -f2 | cut -d= -f2 > abusers-ip.txt
+ ( or | awk '/key/{ print a[split($2,a,"=")]; }' )
+
+show tls-keys
+ Dump all loaded TLS ticket keys. The TLS ticket key reference ID and the
+ file from which the keys have been loaded is shown. Both of those can be
+ used to update the TLS keys using "set ssl tls-key".
+
+shutdown frontend <frontend>
+ Completely delete the specified frontend. All the ports it was bound to will
+ be released. It will not be possible to enable the frontend anymore after
+ this operation. This is intended to be used in environments where stopping a
+ proxy is not even imaginable but a misconfigured proxy must be fixed. That
+ way it's possible to release the port and bind it into another process to
+ restore operations. The frontend will not appear at all on the stats page
+ once it is terminated.
+
+ The frontend may be specified either by its name or by its numeric ID,
+ prefixed with a sharp ('#').
+
+ This command is restricted and can only be issued on sockets configured for
+ level "admin".
+
+shutdown session <id>
+ Immediately terminate the session matching the specified session identifier.
+ This identifier is the first field at the beginning of the lines in the dumps
+ of "show sess" (it corresponds to the session pointer). This can be used to
+ terminate a long-running session without waiting for a timeout or when an
+ endless transfer is ongoing. Such terminated sessions are reported with a 'K'
+ flag in the logs.
+
+shutdown sessions server <backend>/<server>
+ Immediately terminate all the sessions attached to the specified server. This
+ can be used to terminate long-running sessions after a server is put into
+ maintenance mode, for instance. Such terminated sessions are reported with a
+ 'K' flag in the logs.
+
+
+10. Tricks for easier configuration management
+----------------------------------------------
+
+It is very common that two HAProxy nodes constituting a cluster share exactly
+the same configuration modulo a few addresses. Instead of having to maintain a
+duplicate configuration for each node, which will inevitably diverge, it is
+possible to include environment variables in the configuration. Thus, multiple
+configurations may share the exact same file with only a few different
+system-wide environment variables. This started in version 1.5 where only
+addresses
+were allowed to include environment variables, and 1.6 goes further by
+supporting environment variables everywhere. The syntax is the same as in the
+UNIX shell, a variable starts with a dollar sign ('$'), followed by an opening
+curly brace ('{'), then the variable name followed by the closing brace ('}').
+Except for addresses, environment variables are only interpreted in arguments
+surrounded with double quotes (this was necessary not to break existing setups
+using regular expressions involving the dollar symbol).
+
+Environment variables also make it convenient to write configurations which are
+expected to work on various sites where only the address changes. It can also
+help remove passwords from some configs. In the example below, the file
+"site1.env" is sourced by the init script upon startup :
+
+ $ cat site1.env
+ LISTEN=192.168.1.1
+ CACHE_PFX=192.168.11
+ SERVER_PFX=192.168.22
+ LOGGER=192.168.33.1
+ STATSLP=admin:pa$$w0rd
+ ABUSERS=/etc/haproxy/abuse.lst
+ TIMEOUT=10s
+
+ $ cat haproxy.cfg
+ global
+ log "${LOGGER}:514" local0
+
+ defaults
+ mode http
+ timeout client "${TIMEOUT}"
+ timeout server "${TIMEOUT}"
+ timeout connect 5s
+
+ frontend public
+ bind "${LISTEN}:80"
+ http-request reject if { src -f "${ABUSERS}" }
+ stats uri /stats
+ stats auth "${STATSLP}"
+ use_backend cache if { path_end .jpg .css .ico }
+ default_backend server
+
+ backend cache
+ server cache1 "${CACHE_PFX}.1:18080" check
+ server cache2 "${CACHE_PFX}.2:18080" check
+
+ backend server
+ server cache1 "${SERVER_PFX}.1:8080" check
+ server cache2 "${SERVER_PFX}.2:8080" check
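+Since a variable missing from the environment silently expands to an empty
+string, it can help to verify the environment before starting haproxy. A
+minimal sketch (hypothetical helper script, not shipped with the package):

```shell
# Sample values, as would normally come from sourcing site1.env.
LISTEN=192.168.1.1; CACHE_PFX=192.168.11; SERVER_PFX=192.168.22
LOGGER=192.168.33.1; STATSLP=admin:secret; ABUSERS=/etc/haproxy/abuse.lst
TIMEOUT=10s

# Refuse to start if any variable the configuration relies on is unset.
missing=0
for var in LISTEN CACHE_PFX SERVER_PFX LOGGER STATSLP ABUSERS TIMEOUT; do
    eval "val=\${$var}"
    if [ -z "$val" ]; then
        echo "missing variable: $var" >&2
        missing=1
    fi
done
echo "missing=$missing"
```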
+
+
+11. Well-known traps to avoid
+-----------------------------
+
+Once in a while, someone reports that after a system reboot, the haproxy
+service wasn't started, and that once they start it by hand it works. Most
+often, these people are running a clustered IP address mechanism such as
+keepalived, to assign the service IP address to the master node only, and while
+it used to work when they used to bind haproxy to address 0.0.0.0, it stopped
+working after they bound it to the virtual IP address. What happens here is
+that when the service starts, the virtual IP address is not yet owned by the
+local node, so when HAProxy wants to bind to it, the system rejects this
+because it is not a local IP address. The fix doesn't consist in delaying the
+haproxy service startup (since it wouldn't survive a restart), but in properly
+configuring the system to allow binding to non-local addresses. This is
+easily done on Linux by setting the net.ipv4.ip_nonlocal_bind sysctl to 1. This
+is also needed in order to transparently intercept the IP traffic that passes
+through HAProxy for a specific target address.
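+To make the setting persistent across reboots, it can be dropped into a
+sysctl configuration fragment, for example (the file name is illustrative):

```
# /etc/sysctl.d/90-haproxy.conf (illustrative name)
# Allow haproxy to bind to a floating IP address not currently owned
# by this node (e.g. managed by keepalived or ucarp):
net.ipv4.ip_nonlocal_bind = 1
```

+The same setting takes effect immediately on a running system with
+"sysctl -w net.ipv4.ip_nonlocal_bind=1".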
+
+Multi-process configurations involving source port ranges may apparently seem
+to work but they will cause some random failures under high loads because more
+than one process may try to use the same source port to connect to the same
+server, which is not possible. The system will report an error and a retry will
+happen, picking another port. A high value in the "retries" parameter may hide
+the effect to a certain extent but this also comes with increased CPU usage and
+processing time. Logs will also report a certain number of retries. For this
+reason, port ranges should be avoided in multi-process configurations.
+
+Since HAProxy uses SO_REUSEPORT and supports having multiple independent
+processes bound to the same IP:port, during troubleshooting it can happen that
+an old process was not stopped before a new one was started. This produces
+absurd test results which tend to indicate that any change to the
+configuration is ignored. The reason is that even though the new process is
+started with a new configuration, the old one also gets some incoming
+connections and processes them, returning unexpected results. When in doubt,
+just stop the new process and try again. If it still works, it very likely
+means that an old process remains alive and has to be stopped. Linux's
+"netstat -lntp" is of good help here.
+
+When adding entries to an ACL from the command line (eg: when blacklisting a
+source address), it is important to keep in mind that these entries are not
+synchronized to the file and that if someone reloads the configuration, these
+updates will be lost. While this is often the desired effect (for blacklisting)
+it may not necessarily match expectations when the change was made as a fix for
+a problem. See the "add acl" action of the CLI interface.
+
+
+12. Debugging and performance issues
+------------------------------------
+
+When HAProxy is started with the "-d" option, it will stay in the foreground
+and will print one line per event, such as an incoming connection, the end of a
+connection, and for each request or response header line seen. This debug
+output is emitted before the contents are processed, so it does not reflect
+local modifications. The main use is to show the request and response without
+having to run a network sniffer. The output is less readable when multiple
+connections are handled in parallel, though the "debug2ansi" and "debug2html"
+scripts found in the examples/ directory definitely help here by coloring the
+output.
+
+If a request or response is rejected because HAProxy finds it is malformed, the
+best thing to do is to connect to the CLI and issue "show errors", which will
+report the last captured faulty request and response for each frontend and
+backend, with all the necessary information to indicate precisely the first
+character of the input stream that was rejected. This is sometimes needed to
+prove to customers or to developers that a bug is present in their code. In
+this case it is often possible to relax the checks (but still keep the
+captures) using "option accept-invalid-http-request" or its equivalent for
+responses coming from the server "option accept-invalid-http-response". Please
+see the configuration manual for more details.
+
+Example :
+
+ > show errors
+ Total events captured on [13/Oct/2015:13:43:47.169] : 1
+
+ [13/Oct/2015:13:43:40.918] frontend HAProxyLocalStats (#2): invalid request
+ backend <NONE> (#-1), server <NONE> (#-1), event #0
+ src 127.0.0.1:51981, session #0, session flags 0x00000080
+ HTTP msg state 26, msg flags 0x00000000, tx flags 0x00000000
+ HTTP chunk len 0 bytes, HTTP body len 0 bytes
+ buffer flags 0x00808002, out 0 bytes, total 31 bytes
+ pending 31 bytes, wrapping at 8040, error at position 13:
+
+ 00000 GET /invalid request HTTP/1.1\r\n
+
+
+The output of "show info" on the CLI provides a lot of useful information
+regarding the maximum connection rate ever reached, maximum SSL key rate ever
+reached, and in general all information which can help to explain temporary
+issues regarding CPU or memory usage. Example :
+
+ > show info
+ Name: HAProxy
+ Version: 1.6-dev7-e32d18-17
+ Release_date: 2015/10/12
+ Nbproc: 1
+ Process_num: 1
+ Pid: 7949
+ Uptime: 0d 0h02m39s
+ Uptime_sec: 159
+ Memmax_MB: 0
+ Ulimit-n: 120032
+ Maxsock: 120032
+ Maxconn: 60000
+ Hard_maxconn: 60000
+ CurrConns: 0
+ CumConns: 3
+ CumReq: 3
+ MaxSslConns: 0
+ CurrSslConns: 0
+ CumSslConns: 0
+ Maxpipes: 0
+ PipesUsed: 0
+ PipesFree: 0
+ ConnRate: 0
+ ConnRateLimit: 0
+ MaxConnRate: 1
+ SessRate: 0
+ SessRateLimit: 0
+ MaxSessRate: 1
+ SslRate: 0
+ SslRateLimit: 0
+ MaxSslRate: 0
+ SslFrontendKeyRate: 0
+ SslFrontendMaxKeyRate: 0
+ SslFrontendSessionReuse_pct: 0
+ SslBackendKeyRate: 0
+ SslBackendMaxKeyRate: 0
+ SslCacheLookups: 0
+ SslCacheMisses: 0
+ CompressBpsIn: 0
+ CompressBpsOut: 0
+ CompressBpsRateLim: 0
+ ZlibMemUsage: 0
+ MaxZlibMemUsage: 0
+ Tasks: 5
+ Run_queue: 1
+ Idle_pct: 100
+ node: wtap
+ description:
+
+When an issue seems to randomly appear on a new version of HAProxy (eg: every
+second request is aborted, occasional crash, etc), it is worth trying to enable
+memory poisoning so that each call to malloc() is immediately followed by the
+filling of the memory area with a configurable byte. By default this byte is
+0x50 (ASCII for 'P'), but any other byte can be used, including zero (which
+will have the same effect as a calloc() and which may make issues disappear).
+Memory poisoning is enabled on the command line using the "-dM" option. It
+slightly hurts performance and is not recommended for use in production. If
+an issue happens all the time with it or never happens when poisoning uses
+byte zero, it clearly means you've found a bug and you definitely need to
+report it. Otherwise, if there is no clear change, the problem is likely not
+related to memory management.
+
+When debugging some latency issues, it is important to use both strace and
+tcpdump on the local machine, and another tcpdump on the remote system. The
+reason for this is that there are delays everywhere in the processing chain and
+it is important to know which one is causing latency to know where to act. In
+practice, the local tcpdump will indicate when the input data come in. Strace
+will indicate when haproxy receives these data (using recv/recvfrom). Warning,
+openssl uses read()/write() syscalls instead of recv()/send(). Strace will also
+show when haproxy sends the data, and tcpdump will show when the system sends
+these data to the interface. Then the external tcpdump will show when the data
+sent are really received (since the local one only shows when the packets are
+queued). The benefit of sniffing on the local system is that strace and tcpdump
+will use the same reference clock. Strace should be used with "-tts200" to get
+complete timestamps and report large enough chunks of data to read them.
+Tcpdump should be used with "-nvvttSs0" to report full packets, real sequence
+numbers and complete timestamps.
+
+In practice, incoming data are almost always immediately read by haproxy
+(unless the machine has a saturated CPU or these data are invalid and not
+delivered). If these data are received but not sent, it generally is because
+the output buffer is saturated (ie: recipient doesn't consume the data fast
+enough). This can be confirmed by seeing that the polling doesn't notify of
+the ability to write on the output file descriptor for some time (it's often
+easier to spot in the strace output when the data finally leave and then roll
+back to see when the write event was notified). It generally matches an ACK
+received from the recipient, and detected by tcpdump. Once the data are sent,
+they may spend some time in the system doing nothing. Here again, the TCP
+congestion window may be limited and not allow these data to leave, waiting for
+an ACK to open the window. If the traffic is idle and the data take 40 ms or
+200 ms to leave, it's a different issue (which is not an issue), it's the fact
+that the Nagle algorithm prevents empty packets from leaving immediately, in
+hope that they will be merged with subsequent data. HAProxy automatically
+disables Nagle in pure TCP mode and in tunnels. However it definitely remains
+enabled when forwarding an HTTP body (and this contributes to the performance
+improvement there by reducing the number of packets). Some HTTP non-compliant
+applications may be sensitive to the latency when delivering incomplete HTTP
+response messages. In this case you will have to enable "option http-no-delay"
+to disable Nagle in order to work around their design, keeping in mind that any
+other proxy in the chain may similarly be impacted. If tcpdump reports that data
+leave immediately but the other end doesn't see them quickly, it can mean there
+is a congested WAN link, a congested LAN with flow control enabled and
+preventing the data from leaving, or more commonly that HAProxy is in fact
+running in a virtual machine and that for whatever reason the hypervisor has
+decided that the data didn't need to be sent immediately. In virtualized
+environments, latency issues are almost always caused by the virtualization
+layer, so in order to save time, it's worth first comparing tcpdump in the VM
+and on the external components. Any difference has to be credited to the
+hypervisor and its accompanying drivers.
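
For reference, disabling Nagle as described above is done through the
TCP_NODELAY socket option. A minimal sketch, assuming fd is a TCP socket (the
helper name is illustrative, not HAProxy's API):

```c
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Disable the Nagle algorithm on a TCP socket so that small writes leave
 * immediately instead of waiting to be merged with subsequent data.
 * Returns 0 on success, -1 on error (see errno). */
int disable_nagle(int fd)
{
    int one = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}
```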
+
+When some TCP SACK segments are seen in tcpdump traces (using -vv), it always
+means that the side sending them has got the proof of a lost packet. While not
+seeing them doesn't mean there are no losses, seeing them definitely means the
+network is lossy. Losses are normal on a network, but at a rate where SACKs are
+not noticeable to the naked eye. If they appear a lot in the traces, it is
+worth investigating exactly what happens and where the packets are lost. HTTP
+doesn't cope well with TCP losses, which introduce huge latencies.
+
+The "netstat -i" command will report statistics per interface. An interface
+where the Rx-Ovr counter grows indicates that the system doesn't have enough
+resources to receive all incoming packets and that they're lost before being
+processed by the network driver. Rx-Drp indicates that some received packets
+were lost in the network stack because the application doesn't process them
+fast enough. This can happen during some attacks as well. Tx-Drp means that
+the output queues were full and packets had to be dropped. When using TCP it
+should be very rare, but will possibly indicate a saturated outgoing link.
+
+
+13. Security considerations
+---------------------------
+
+HAProxy is designed to run with very limited privileges. The standard way to
+use it is to isolate it into a chroot jail and to drop its privileges to a
+non-root user without any permissions inside this jail so that if any future
+vulnerability were to be discovered, its compromise would not affect the rest
+of the system.
+
+In order to perform a chroot, it first needs to be started as a root user. It
+is pointless to build hand-made chroots to start the process there: these are
+painful to build, are never properly maintained and always contain way more
+bugs than the main file-system. And in case of compromise, the intruder can use
+the purposely built file-system. Unfortunately many administrators confuse
+"start as root" and "run as root", and change the uid before starting haproxy,
+which reduces the effective security restrictions.
+
+HAProxy will need to be started as root in order to :
+ - adjust the file descriptor limits
+ - bind to privileged port numbers
+ - bind to a specific network interface
+ - transparently listen to a foreign address
+ - isolate itself inside the chroot jail
+ - drop to another non-privileged UID
+
+HAProxy may require to be run as root in order to :
+ - bind to an interface for outgoing connections
+ - bind to privileged source ports for outgoing connections
+ - transparently bind to a foreign address for outgoing connections
+
+Most users will never need the "run as root" case, while the "start as root"
+case covers most usages.
+
+A safe configuration will have :
+
+ - a chroot statement pointing to an empty location without any access
+ permissions. This can be prepared this way on the UNIX command line :
+
+ # mkdir /var/empty && chmod 0 /var/empty || echo "Failed"
+
+ and referenced like this in the HAProxy configuration's global section :
+
+ chroot /var/empty
+
+ - both uid/user and gid/group statements in the global section :
+
+ user haproxy
+ group haproxy
+
+ - a stats socket whose mode, uid and gid are set to match the user and/or
+ group allowed to access the CLI, so that nobody else may access it :
+
+ stats socket /var/run/haproxy.stat uid hatop gid hatop mode 600
+
--- /dev/null
+Linux network namespace support for HAProxy
+===========================================
+
+HAProxy supports proxying between Linux network namespaces. This
+feature can be used, for example, in a multi-tenant networking
+environment to proxy between different networks. HAProxy can also act
+as a front-end proxy for non namespace-aware services.
+
+The proxy protocol has been extended to support transferring the
+namespace information, so the originating namespace information can be
+kept. This is useful when chaining multiple proxies and services.
+
+To enable Linux namespace support, compile HAProxy with the `USE_NS=1`
+make option.
+
+
+## Setting up namespaces on Linux
+
+To create network namespaces, use the 'ip netns' command. See the
+manual page ip-netns(8) for details.
+
+Make sure that the file descriptors representing the network namespace
+are located under `/var/run/netns`.
+
+For example, you can create a network namespace and assign one of the
+networking interfaces to the new namespace:
+
+```
+$ ip netns add netns1
+$ ip link set eth7 netns netns1
+```
+
+
+## Listing namespaces in the configuration file
+
+HAProxy uses namespaces explicitly listed in its configuration file.
+If you are not using namespace information received through the proxy
+protocol, this usually means that you must specify namespaces for
+listeners and servers in the configuration file with the 'namespace'
+keyword.
+
+However, if you're using the namespace information received through
+the proxy protocol to determine the namespace of servers (see
+'namespace *' below), you have to explicitly list all allowed
+namespaces in the namespace_list section of your configuration file:
+
+```
+namespace_list
+ namespace netns1
+ namespace netns2
+```
+
+
+## Namespace information flow
+
+The haproxy process always runs in the namespace it was started in.
+This is the default namespace.
+
+The bind addresses of listeners can have their namespace specified in
+the configuration file. Unless specified, sockets associated with
+listener bind addresses are created in the default namespace. For
+example, this creates a listener in the netns2 namespace:
+
+```
+frontend f_example
+ bind 192.168.1.1:80 namespace netns2
+ default_backend http
+```
+
+Each client connection is associated with its source namespace. By
+default, this is the namespace of the bind socket it arrived on, but
+can be overridden by information received through the proxy protocol.
+Proxy protocol v2 supports transferring namespace information, so if
+it is enabled for the listener, it can override the associated
+namespace of the connection.
+
+Servers can have their namespaces specified in the configuration file
+with the 'namespace' keyword:
+
+```
+backend b_example
+ server s1 192.168.1.100:80 namespace netns2
+```
+
+If no namespace is set for a server, it is assumed that it is in the
+default namespace. When specified, outbound sockets to the server are
+created in the network namespace configured. To create the outbound
+(server) connection in the namespace associated with the client, use
+the '*' namespace. This is especially useful when using the
+destination address and namespace received from the proxy protocol.
+
+```
+frontend f_example
+ bind 192.168.1.1:9990 accept-proxy
+ default_backend b_example
+
+backend b_example
+ mode tcp
+ source 0.0.0.0 usesrc clientip
+ server snodes * namespace *
+```
+
+If HAProxy is configured to send proxy protocol v2 headers to the
+server, the outgoing header will always contain the namespace
+associated with the client connection, not the namespace configured
+for the server.
--- /dev/null
+2015/08/24 Willy Tarreau
+ HAProxy Technologies
+ The PROXY protocol
+ Versions 1 & 2
+
+Abstract
+
+ The PROXY protocol provides a convenient way to safely transport connection
+ information such as a client's address across multiple layers of NAT or TCP
+ proxies. It is designed to require few changes to existing components and
+ to limit the performance impact caused by the processing of the transported
+ information.
+
+
+Revision history
+
+ 2010/10/29 - first version
+ 2011/03/20 - update: implementation and security considerations
+ 2012/06/21 - add support for binary format
+ 2012/11/19 - final review and fixes
+ 2014/05/18 - modify and extend PROXY protocol version 2
+ 2014/06/11 - fix example code to consider ver+cmd merge
+ 2014/06/14 - fix v2 header check in example code, and update Forwarded spec
+ 2014/07/12 - update list of implementations (add Squid)
+ 2015/05/02 - update list of implementations and format of the TLV add-ons
+
+
+1. Background
+
+Relaying TCP connections through proxies generally involves a loss of the
+original TCP connection parameters such as source and destination addresses,
+ports, and so on. Some protocols make it a little bit easier to transfer such
+information. For SMTP, Postfix authors have proposed the XCLIENT protocol [1]
+which received broad adoption and is particularly suited to mail exchanges.
+For HTTP, there is the "Forwarded" extension [2], which aims at replacing the
+omnipresent "X-Forwarded-For" header which carries information about the
+original source address, and the less common X-Original-To which carries
+information about the destination address.
+
+However, both mechanisms require a knowledge of the underlying protocol to be
+implemented in intermediaries.
+
+Then comes a new class of products which we'll call "dumb proxies", not because
+they don't do anything, but because they're processing protocol-agnostic data.
+Both Stunnel[3] and Stud[4] are examples of such "dumb proxies". They talk raw
+TCP on one side, and raw SSL on the other one, and do that reliably, without
+any knowledge of what protocol is transported on top of the connection. Haproxy
+running in pure TCP mode obviously falls into that category as well.
+
+The problem with such a proxy when it is combined with another one such as
+haproxy, is to adapt it to talk the higher level protocol. A patch is available
+for Stunnel to make it capable of inserting an X-Forwarded-For header in the
+first HTTP request of each incoming connection. Haproxy is able not to add
+another one when the connection comes from Stunnel, so that it's possible to
+hide it from the servers.
+
+The typical architecture becomes the following one :
+
+
+ +--------+ HTTP :80 +----------+
+ | client | --------------------------------> | |
+ | | | haproxy, |
+ +--------+ +---------+ | 1 or 2 |
+ / / HTTPS | stunnel | HTTP :81 | listening|
+ <________/ ---------> | (server | ---------> | ports |
+ | mode) | | |
+ +---------+ +----------+
+
+
+The problem appears when haproxy runs with keep-alive on the side towards the
+client. The Stunnel patch will only add the X-Forwarded-For header to the first
+request of each connection and all subsequent requests will not have it. One
+solution could be to improve the patch to make it support keep-alive and parse
+all forwarded data, whether they're announced with a Content-Length or with a
+Transfer-Encoding, taking care of special methods such as HEAD which announce
+data without transferring them, etc. In fact, it would require implementing a
+full HTTP stack in Stunnel. It would then become a lot more complex, a lot less
+reliable and would no longer be the "dumb proxy" that fits every purpose.
+
+In practice, we don't need to add a header for each request because we'll emit
+the exact same information every time : the information related to the client
+side connection. We could then cache that information in haproxy and use it for
+every other request. But that becomes dangerous and is still limited to HTTP
+only.
+
+Another approach consists in prepending each connection with a header reporting
+the characteristics of the other side's connection. This method is simpler to
+implement, does not require any protocol-specific knowledge on either side, and
+completely fits the purpose since what is desired precisely is to know the
+other side's connection endpoints. It is easy to perform for the sender (just
+send a short header once the connection is established) and to parse for the
+receiver (simply perform one read() on the incoming connection to fill in
+addresses after an accept). The protocol used to carry connection information
+across proxies was thus called the PROXY protocol.
+
+
+2. The PROXY protocol header
+
+This document uses a few terms that are worth explaining here :
+ - "connection initiator" is the party requesting a new connection
+ - "connection target" is the party accepting a connection request
+ - "client" is the party for which a connection was requested
+ - "server" is the party to which the client desired to connect
+ - "proxy" is the party intercepting and relaying the connection
+ from the client to the server.
+ - "sender" is the party sending data over a connection.
+ - "receiver" is the party receiving data from the sender.
+ - "header" or "PROXY protocol header" is the block of connection information
+ the connection initiator prepends at the beginning of a connection, which
+ makes it the sender from the protocol point of view.
+
+The PROXY protocol's goal is to fill the server's internal structures with the
+information collected by the proxy that the server would have been able to get
+by itself if the client was connecting directly to the server instead of via a
+proxy. The information carried by the protocol is what the server would
+get using getsockname() and getpeername() :
+ - address family (AF_INET for IPv4, AF_INET6 for IPv6, AF_UNIX)
+ - socket protocol (SOCK_STREAM for TCP, SOCK_DGRAM for UDP)
+ - layer 3 source and destination addresses
+ - layer 4 source and destination ports if any
+
+Unlike the XCLIENT protocol, the PROXY protocol was designed with limited
+extensibility in order to help the receiver parse it very fast. Version 1 was
+focused on keeping it human-readable for better debugging possibilities, which
+is always desirable for early adoption when few implementations exist. Version
+2 adds support for a binary encoding of the header which is much more efficient
+to produce and to parse, especially when dealing with IPv6 addresses that are
+expensive to emit in ASCII form and to parse.
+
+In both cases, the protocol simply consists in an easily parsable header placed
+by the connection initiator at the beginning of each connection. The protocol
+is intentionally stateless in that it does not expect the sender to wait for
+the receiver before sending the header, nor the receiver to send anything back.
+
+This specification supports two header formats, a human-readable format which
+is the only format supported in version 1 of the protocol, and a binary format
+which is only supported in version 2. Both formats were designed to ensure that
+the header cannot be confused with common higher level protocols such as HTTP,
+SSL/TLS, FTP or SMTP, and that both formats are easily distinguishable one from
+each other for the receiver.
+
+Version 1 senders MAY only produce the human-readable header format. Version 2
+senders MAY only produce the binary header format. Version 1 receivers MUST at
+least implement the human-readable header format. Version 2 receivers MUST at
+least implement the binary header format, and it is recommended that they also
+implement the human-readable header format for better interoperability and ease
+of upgrade when facing version 1 senders.
+
+Both formats are designed to fit in the smallest TCP segment that any TCP/IP
+host is required to support (576 - 40 = 536 bytes). This ensures that the whole
+header will always be delivered at once when the socket buffers are still empty
+at the beginning of a connection. The sender must always ensure that the header
+is sent at once, so that the transport layer maintains atomicity along the path
+to the receiver. The receiver may be tolerant to partial headers or may simply
+drop the connection when receiving a partial header. Recommendation is to be
+tolerant, but implementation constraints may not always easily permit this. It
+is important to note that nothing forces any intermediary to forward the whole
+header at once, because TCP is a streaming protocol which may be processed one
+byte at a time if desired, causing the header to be fragmented when reaching
+the receiver. But due to the places where such a protocol is used, the above
+simplification generally is acceptable because the risk of crossing such a
+device handling one byte at a time is close to zero.
+
+The receiver MUST NOT start processing the connection before it receives a
+complete and valid PROXY protocol header. This is particularly important for
+protocols where the receiver is expected to speak first (eg: SMTP, FTP or SSH).
+The receiver may apply a short timeout and decide to abort the connection if
+the protocol header is not seen within a few seconds (at least 3 seconds to
+cover a TCP retransmit).
+
+The receiver MUST be configured to only receive the protocol described in this
+specification and MUST NOT try to guess whether the protocol header is present
+or not. This means that the protocol explicitly prevents port sharing between
+public and private access. Otherwise it would open a major security breach by
+allowing untrusted parties to spoof their connection addresses. The receiver
+SHOULD ensure proper access filtering so that only trusted proxies are allowed
+to use this protocol.
+
+Some proxies are smart enough to understand transported protocols and to reuse
+idle server connections for multiple messages. This typically happens in HTTP
+where requests from multiple clients may be sent over the same connection. Such
+proxies MUST NOT implement this protocol on multiplexed connections because the
+receiver would use the address advertised in the PROXY header as the address of
+all forwarded requests' senders. In fact, such proxies are not dumb proxies,
+and since they do have a complete understanding of the transported protocol,
+they MUST use the facilities provided by this protocol to present the client's
+address.
+
+
+2.1. Human-readable header format (Version 1)
+
+This is the format specified in version 1 of the protocol. It consists in one
+line of ASCII text matching exactly the following block, sent immediately and
+at once upon the connection establishment and prepended before any data flowing
+from the sender to the receiver :
+
+ - a string identifying the protocol : "PROXY" ( \x50 \x52 \x4F \x58 \x59 )
+ Seeing this string indicates that this is version 1 of the protocol.
+
+ - exactly one space : " " ( \x20 )
+
+ - a string indicating the proxied INET protocol and family. As of version 1,
+ only "TCP4" ( \x54 \x43 \x50 \x34 ) for TCP over IPv4, and "TCP6"
+ ( \x54 \x43 \x50 \x36 ) for TCP over IPv6 are allowed. Other, unsupported,
+ or unknown protocols must be reported with the name "UNKNOWN" ( \x55 \x4E
+ \x4B \x4E \x4F \x57 \x4E ). For "UNKNOWN", the rest of the line before the
+ CRLF may be omitted by the sender, and the receiver must ignore anything
+ presented before the CRLF is found. Note that an earlier version of this
+ specification suggested to use this when sending health checks, but this
+ causes issues with servers that reject the "UNKNOWN" keyword. Thus it is
+ now recommended not to send "UNKNOWN" when the connection is expected to
+ be accepted, but only when it is not possible to correctly fill the PROXY
+ line.
+
+ - exactly one space : " " ( \x20 )
+
+ - the layer 3 source address in its canonical format. IPv4 addresses must be
+ indicated as a series of exactly 4 integers in the range [0..255] inclusive
+ written in decimal representation separated by exactly one dot between each
+ other. Leading zeroes are not permitted in front of numbers in order to
+ avoid any possible confusion with octal numbers. IPv6 addresses must be
+ indicated as series of 4 hexadecimal digits (upper or lower case) delimited
+ by colons between each other, with the acceptance of one double colon
+ sequence to replace the largest acceptable range of consecutive zeroes. The
+ total number of decoded bits must exactly be 128. The advertised protocol
+ family dictates what format to use.
+
+ - exactly one space : " " ( \x20 )
+
+ - the layer 3 destination address in its canonical format. It is the same
+ format as the layer 3 source address and matches the same family.
+
+ - exactly one space : " " ( \x20 )
+
+ - the TCP source port represented as a decimal integer in the range
+ [0..65535] inclusive. Leading zeroes are not permitted in front of numbers
+ in order to avoid any possible confusion with octal numbers.
+
+ - exactly one space : " " ( \x20 )
+
+ - the TCP destination port represented as a decimal integer in the range
+ [0..65535] inclusive. Leading zeroes are not permitted in front of numbers
+ in order to avoid any possible confusion with octal numbers.
+
+ - the CRLF sequence ( \x0D \x0A )
+
+
+The maximum line lengths the receiver must support including the CRLF are :
+ - TCP/IPv4 :
+ "PROXY TCP4 255.255.255.255 255.255.255.255 65535 65535\r\n"
+ => 5 + 1 + 4 + 1 + 15 + 1 + 15 + 1 + 5 + 1 + 5 + 2 = 56 chars
+
+ - TCP/IPv6 :
+ "PROXY TCP6 ffff:f...f:ffff ffff:f...f:ffff 65535 65535\r\n"
+ => 5 + 1 + 4 + 1 + 39 + 1 + 39 + 1 + 5 + 1 + 5 + 2 = 104 chars
+
+ - unknown connection (short form) :
+ "PROXY UNKNOWN\r\n"
+ => 5 + 1 + 7 + 2 = 15 chars
+
+ - worst case (optional fields set to 0xff) :
+ "PROXY UNKNOWN ffff:f...f:ffff ffff:f...f:ffff 65535 65535\r\n"
+ => 5 + 1 + 7 + 1 + 39 + 1 + 39 + 1 + 5 + 1 + 5 + 2 = 107 chars
+
+So a 108-byte buffer is always enough to store the whole line and a trailing
+zero for string processing.
+
+The receiver must wait for the CRLF sequence before starting to decode the
+addresses in order to ensure they are complete and properly parsed. If the CRLF
+sequence is not found in the first 107 characters, the receiver should declare
+the line invalid. A receiver may reject an incomplete line which does not
+contain the CRLF sequence in the first atomic read operation. The receiver must
+not tolerate a single CR or LF character to end the line when a complete CRLF
+sequence is expected.
+
+Any sequence which does not exactly match the protocol must be discarded and
+cause the receiver to abort the connection. It is recommended to abort the
+connection as soon as possible so that the sender gets a chance to notice the
+anomaly and log it.
+
+If the announced transport protocol is "UNKNOWN", then the receiver knows that
+the sender speaks the correct PROXY protocol with the appropriate version, and
+SHOULD accept the connection and use the real connection's parameters as if
+there were no PROXY protocol header on the wire. However, senders SHOULD NOT
+use the "UNKNOWN" protocol when they are the initiators of outgoing connections
+because some receivers may reject them. When a load balancing proxy has to send
+health checks to a server, it SHOULD build a valid PROXY line which it will
+fill with a getsockname()/getpeername() pair indicating the addresses used. It
+is important to understand that doing so is not appropriate when some source
+address translation is performed between the sender and the receiver.
+
+An example of such a line before an HTTP request would look like this (CR
+marked as "\r" and LF marked as "\n") :
+
+ PROXY TCP4 192.168.0.1 192.168.0.11 56324 443\r\n
+ GET / HTTP/1.1\r\n
+ Host: 192.168.0.11\r\n
+ \r\n
+
+For the sender, the header line is easy to put into the output buffers once the
+connection is established. Note that since the line is always shorter than an
+MSS, the sender is guaranteed to always be able to emit it at once and should
+not even bother handling partial sends. For the receiver, once the header is
+parsed, it is easy to skip it from the input buffers. Please consult section 9
+for implementation suggestions.
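
As a rough sketch of such a receiver (illustrative only; see section 9 for
complete implementation suggestions), assuming the whole line arrived in a
single read as recommended above; the function name and the omitted field
validation are this sketch's own:

```c
#include <stdio.h>
#include <string.h>

/* Minimal version 1 receiver sketch. Returns the number of header bytes to
 * skip before the payload, or -1 if the connection must be aborted. Strict
 * validation of address formats and port ranges is left out for brevity. */
int parse_v1_header(const char *buf, size_t len,
                    char *src, char *dst, int *sport, int *dport)
{
    const char *end = memchr(buf, '\r', len < 107 ? len : 107);

    if (!end || end + 1 >= buf + len || end[1] != '\n')
        return -1;                    /* no CRLF in the first 107 characters */
    if (memcmp(buf, "PROXY ", 6) != 0)
        return -1;
    if (sscanf(buf + 6, "TCP%*1[46] %39s %39s %d %d",
               src, dst, sport, dport) == 4)
        return (int)(end + 2 - buf);  /* bytes to strip before the payload */
    if (end - buf >= 13 && memcmp(buf + 6, "UNKNOWN", 7) == 0)
        return (int)(end + 2 - buf);  /* keep the real connection parameters */
    return -1;
}
```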
+
+
+2.2. Binary header format (version 2)
+
+Producing human-readable IPv6 addresses and parsing them is very inefficient,
+due to the multiple possible representation formats and the handling of compact
+address format. It was also not possible to specify address families outside
+IPv4/IPv6 nor non-TCP protocols. Another drawback of the human-readable format
+is the fact that implementations need to parse all characters to find the
+trailing CRLF, which makes it harder to read only the exact byte count. Last,
+the UNKNOWN address type has not always been accepted by servers as a valid
+protocol because of its imprecise meaning.
+
+Version 2 of the protocol thus introduces a new binary format which remains
+distinguishable from version 1 and from other commonly used protocols. It was
+specially designed in order to be incompatible with a wide range of protocols
+and to be rejected by a number of common implementations of these protocols
+when unexpectedly presented (please see section 7). Also for better processing
+efficiency, IPv4 and IPv6 addresses are respectively aligned on 4 and 16 bytes
+boundaries.
+
+The binary header format starts with a constant 12 bytes block containing the
+protocol signature :
+
+ \x0D \x0A \x0D \x0A \x00 \x0D \x0A \x51 \x55 \x49 \x54 \x0A
+
+Note that this block contains a null byte at the 5th position, so it must not
+be handled as a null-terminated string.
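
Because of that null byte, the signature must be compared with memcmp() rather
than string functions; a minimal sketch (the helper name is illustrative):

```c
#include <string.h>

/* The 12-byte version 2 signature. Note the null byte at offset 4, which
 * is why this must never be treated as a null-terminated string. */
static const unsigned char v2sig[12] =
    { 0x0D, 0x0A, 0x0D, 0x0A, 0x00, 0x0D,
      0x0A, 0x51, 0x55, 0x49, 0x54, 0x0A };

int has_v2_signature(const void *buf, size_t len)
{
    return len >= 12 && memcmp(buf, v2sig, 12) == 0;
}
```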
+
+The next byte (the 13th one) is the protocol version and command.
+
+The highest four bits contain the version. As of this specification, it must
+always be sent as \x2 and the receiver must only accept this value.
+
+The lowest four bits represent the command :
+ - \x0 : LOCAL : the connection was established on purpose by the proxy
+ without being relayed. The connection endpoints are the sender and the
+ receiver. Such connections exist when the proxy sends health-checks to the
+ server. The receiver must accept this connection as valid and must use the
+ real connection endpoints and discard the protocol block including the
+ family which is ignored.
+
+ - \x1 : PROXY : the connection was established on behalf of another node,
+ and reflects the original connection endpoints. The receiver must then use
+ the information provided in the protocol block to get the original address.
+
+ - other values are unassigned and must not be emitted by senders. Receivers
+ must drop connections presenting unexpected values here.
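
A sketch of decoding the 13th byte per the rules above (the function name is
illustrative): version in the high nibble, command in the low nibble.

```c
/* Decode the version+command byte. Returns 0 and stores the command, or -1
 * for connections that must be dropped. */
int check_ver_cmd(unsigned char b, int *cmd)
{
    if ((b >> 4) != 0x2)
        return -1;              /* only version 2 may be accepted */
    *cmd = b & 0x0F;            /* 0 = LOCAL, 1 = PROXY */
    if (*cmd > 0x1)
        return -1;              /* other commands are unassigned */
    return 0;
}
```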
+
+The 14th byte contains the transport protocol and address family. The highest 4
+bits contain the address family, the lowest 4 bits contain the protocol.
+
+The address family maps to the original socket family without necessarily
+matching the values internally used by the system. It may be one of :
+
+ - 0x0 : AF_UNSPEC : the connection is forwarded for an unknown, unspecified
+ or unsupported protocol. The sender should use this family when sending
+ LOCAL commands or when dealing with unsupported protocol families. The
+ receiver is free to accept the connection anyway and use the real endpoint
+ addresses or to reject it. The receiver should ignore address information.
+
+ - 0x1 : AF_INET : the forwarded connection uses the AF_INET address family
+ (IPv4). The addresses are exactly 4 bytes each in network byte order,
+ followed by transport protocol information (typically ports).
+
+ - 0x2 : AF_INET6 : the forwarded connection uses the AF_INET6 address family
+ (IPv6). The addresses are exactly 16 bytes each in network byte order,
+ followed by transport protocol information (typically ports).
+
+ - 0x3 : AF_UNIX : the forwarded connection uses the AF_UNIX address family
+ (UNIX). The addresses are exactly 108 bytes each.
+
+ - other values are unspecified and must not be emitted in version 2 of this
+ protocol and must be rejected as invalid by receivers.
+
+The transport protocol is specified in the lowest 4 bits of the 14th byte :
+
+ - 0x0 : UNSPEC : the connection is forwarded for an unknown, unspecified
+ or unsupported protocol. The sender should use this family when sending
+ LOCAL commands or when dealing with unsupported protocol families. The
+ receiver is free to accept the connection anyway and use the real endpoint
+ addresses or to reject it. The receiver should ignore address information.
+
+ - 0x1 : STREAM : the forwarded connection uses a SOCK_STREAM protocol (eg:
+ TCP or UNIX_STREAM). When used with AF_INET/AF_INET6 (TCP), the addresses
+ are followed by the source and destination ports represented on 2 bytes
+ each in network byte order.
+
+ - 0x2 : DGRAM : the forwarded connection uses a SOCK_DGRAM protocol (eg:
+ UDP or UNIX_DGRAM). When used with AF_INET/AF_INET6 (UDP), the addresses
+ are followed by the source and destination ports represented on 2 bytes
+ each in network byte order.
+
+ - other values are unspecified and must not be emitted in version 2 of this
+ protocol and must be rejected as invalid by receivers.
+
+In practice, the following protocol bytes are expected :
+
+ - \x00 : UNSPEC : the connection is forwarded for an unknown, unspecified
+ or unsupported protocol. The sender should use this family when sending
+ LOCAL commands or when dealing with unsupported protocol families. When
+ used with a LOCAL command, the receiver must accept the connection and
+ ignore any address information. For other commands, the receiver is free
+ to accept the connection anyway and use the real endpoint addresses or to
+ reject the connection. The receiver should ignore address information.
+
+ - \x11 : TCP over IPv4 : the forwarded connection uses TCP over the AF_INET
+ protocol family. Address length is 2*4 + 2*2 = 12 bytes.
+
+ - \x12 : UDP over IPv4 : the forwarded connection uses UDP over the AF_INET
+ protocol family. Address length is 2*4 + 2*2 = 12 bytes.
+
+ - \x21 : TCP over IPv6 : the forwarded connection uses TCP over the AF_INET6
+ protocol family. Address length is 2*16 + 2*2 = 36 bytes.
+
+ - \x22 : UDP over IPv6 : the forwarded connection uses UDP over the AF_INET6
+ protocol family. Address length is 2*16 + 2*2 = 36 bytes.
+
+ - \x31 : UNIX stream : the forwarded connection uses SOCK_STREAM over the
+ AF_UNIX protocol family. Address length is 2*108 = 216 bytes.
+
+ - \x32 : UNIX datagram : the forwarded connection uses SOCK_DGRAM over the
+ AF_UNIX protocol family. Address length is 2*108 = 216 bytes.
+
+
+Only the UNSPEC protocol byte (\x00) is mandatory. A receiver is not required
+to implement other ones, provided that it automatically falls back to the
+UNSPEC mode for the valid combinations above that it does not support.
+
+The 15th and 16th bytes contain the address length in bytes in network byte
+order.
+It is used so that the receiver knows how many address bytes to skip even when
+it does not implement the presented protocol. Thus the length of the protocol
+header in bytes is always exactly 16 + this value. When a sender presents a
+LOCAL connection, it should not present any address so it sets this field to
+zero. Receivers MUST always consider this field to skip the appropriate number
+of bytes and must not assume zero is presented for LOCAL connections. When a
+receiver accepts an incoming connection showing an UNSPEC address family or
+protocol, it may or may not decide to log the address information if present.
+
+So the 16-byte version 2 header can be described this way :
+
+ struct proxy_hdr_v2 {
+ uint8_t sig[12]; /* hex 0D 0A 0D 0A 00 0D 0A 51 55 49 54 0A */
+ uint8_t ver_cmd; /* protocol version and command */
+ uint8_t fam; /* protocol family and address */
+ uint16_t len; /* number of following bytes part of the header */
+ };
+
+Starting from the 17th byte, addresses are presented in network byte order.
+The address order is always the same :
+ - source layer 3 address in network byte order
+ - destination layer 3 address in network byte order
+ - source layer 4 address if any, in network byte order (port)
+ - destination layer 4 address if any, in network byte order (port)
+
+The address block may directly be sent from or received into the following
+union which makes it easy to cast from/to the relevant socket native structs
+depending on the address type :
+
+ union proxy_addr {
+ struct { /* for TCP/UDP over IPv4, len = 12 */
+ uint32_t src_addr;
+ uint32_t dst_addr;
+ uint16_t src_port;
+ uint16_t dst_port;
+ } ipv4_addr;
+ struct { /* for TCP/UDP over IPv6, len = 36 */
+ uint8_t src_addr[16];
+ uint8_t dst_addr[16];
+ uint16_t src_port;
+ uint16_t dst_port;
+ } ipv6_addr;
+ struct { /* for AF_UNIX sockets, len = 216 */
+ uint8_t src_addr[108];
+ uint8_t dst_addr[108];
+ } unix_addr;
+ };
+
+The sender must ensure that all the protocol header is sent at once. This block
+is always smaller than an MSS, so there is no reason for it to be segmented at
+the beginning of the connection. The receiver should also process the header
+at once. The receiver must not start to parse an address before the whole
+address block is received. The receiver must also reject incoming connections
+containing partial protocol headers.
+
+A receiver may be configured to support both version 1 and version 2 of the
+protocol. Identifying the protocol version is easy :
+
+ - if the incoming byte count is 16 or above and the first 13 bytes match
+ the protocol signature block followed by the protocol version 2 :
+
+ \x0D\x0A\x0D\x0A\x00\x0D\x0A\x51\x55\x49\x54\x0A\x02
+
+ - otherwise, if the incoming byte count is 8 or above, and the first 5
+ characters match the ASCII representation of "PROXY" then the protocol
+ must be parsed as version 1 :
+
+ \x50\x52\x4F\x58\x59
+
+ - otherwise the protocol is not covered by this specification and the
+ connection must be dropped.
+
+If the length specified in the PROXY protocol header indicates that additional
+bytes are part of the header beyond the address information, a receiver may
+choose to skip over and ignore those bytes, or attempt to interpret them.
+
+The information in those bytes will be arranged in Type-Length-Value (TLV)
+vectors in the following format. The first byte is the Type of the vector.
+The next two bytes represent the length in bytes of the value (not including
+the Type and Length bytes), and following the length field are the number of
+bytes specified by the length.
+
+ struct pp2_tlv {
+ uint8_t type;
+ uint8_t length_hi;
+ uint8_t length_lo;
+ uint8_t value[0];
+ };
+
+The following types have already been registered for the <type> field :
+
+ #define PP2_TYPE_ALPN 0x01
+ #define PP2_TYPE_AUTHORITY 0x02
+ #define PP2_TYPE_SSL 0x20
+ #define PP2_SUBTYPE_SSL_VERSION 0x21
+ #define PP2_SUBTYPE_SSL_CN 0x22
+ #define PP2_TYPE_NETNS 0x30
+
+
+2.2.1. The PP2_TYPE_SSL type and subtypes
+
+For the type PP2_TYPE_SSL, the value is itself a structure defined like this :
+
+ struct pp2_tlv_ssl {
+ uint8_t client;
+ uint32_t verify;
+ struct pp2_tlv sub_tlv[0];
+ };
+
+The <verify> field will be zero if the client presented a certificate
+and it was successfully verified, and non-zero otherwise.
+
+The <client> field is made of a bit field from the following values,
+indicating which element is present :
+
+ #define PP2_CLIENT_SSL 0x01
+ #define PP2_CLIENT_CERT_CONN 0x02
+ #define PP2_CLIENT_CERT_SESS 0x04
+
+Note that each of these elements may lead to extra data being appended to
+this TLV using a second level of TLV encapsulation. It is thus possible to
+find multiple TLV values after this field. The total length of the pp2_tlv_ssl
+TLV will reflect this.
+
+The PP2_CLIENT_SSL flag indicates that the client connected over SSL/TLS. When
+this field is present, the string representation of the TLS version is appended
+at the end of the field in the TLV format using the type PP2_SUBTYPE_SSL_VERSION.
+
+PP2_CLIENT_CERT_CONN indicates that the client provided a certificate over the
+current connection. PP2_CLIENT_CERT_SESS indicates that the client provided a
+certificate at least once over the TLS session this connection belongs to.
+
+In all cases, the string representation (in UTF8) of the Common Name field
+(OID: 2.5.4.3) of the client certificate's DistinguishedName is appended
+using the TLV format and the type PP2_SUBTYPE_SSL_CN.
+
+
+2.2.2. The PP2_TYPE_NETNS type
+
+The type PP2_TYPE_NETNS defines the value as the string representation of the
+namespace's name.
+
+
+3. Implementations
+
+Haproxy 1.5 implements version 1 of the PROXY protocol on both sides :
+ - the listening sockets accept the protocol when the "accept-proxy" setting
+ is passed to the "bind" keyword. Connections accepted on such listeners
+ will behave just as if the source really was the one advertised in the
+ protocol. This is true for logging, ACLs, content filtering, transparent
+ proxying, etc...
+
+ - the protocol may be used to connect to servers if the "send-proxy" setting
+ is present on the "server" line. It is enabled on a per-server basis, so it
+ is possible to have it enabled for remote servers only and still have local
+ ones behave differently. If the incoming connection was accepted with the
+ "accept-proxy" setting, then the relayed information is the one advertised
+ in this connection's PROXY line.
+
+ - Haproxy 1.5 also implements version 2 of the PROXY protocol as a sender. In
+ addition, a TLV with limited, optional, SSL information has been added.
+
+Stunnel added support for version 1 of the protocol for outgoing connections in
+version 4.45.
+
+Stud added support for version 1 of the protocol for outgoing connections on
+2011/06/29.
+
+Postfix added support for version 1 of the protocol for incoming connections
+in smtpd and postscreen in version 2.10.
+
+A patch is available for Stud[5] to implement version 1 of the protocol on
+incoming connections.
+
+Support for versions 1 and 2 of the protocol was added to Varnish 4.1 [6].
+
+Exim added support for version 1 and version 2 of the protocol for incoming
+connections on 2014/05/13, and will be released as part of version 4.83.
+
+Squid added support for versions 1 and 2 of the protocol in version 3.5 [7].
+
+Jetty 9.3.0 supports protocol version 1.
+
+The protocol is simple enough that it is expected that other implementations
+will appear, especially in environments such as SMTP, IMAP, FTP, RDP where the
+client's address is an important piece of information for the server and some
+intermediaries. In fact, several proprietary deployments have already done so
+on FTP and SMTP servers.
+
+Proxy developers are encouraged to implement this protocol, because it will
+make their products much more transparent in complex infrastructures, and will
+get rid of a number of issues related to logging and access control.
+
+
+4. Architectural benefits
+4.1. Multiple layers
+
+Using the PROXY protocol instead of transparent proxy provides several benefits
+in multiple-layer infrastructures. The first immediate benefit is that it
+becomes possible to chain multiple layers of proxies and always present the
+original IP address. For instance, let's consider the following 2-layer proxy
+architecture :
+
+ Internet
+ ,---. | client to PX1:
+ ( X ) | native protocol
+ `---' |
+ | V
+ +--+--+ +-----+
+ | FW1 |------| PX1 |
+ +--+--+ +-----+ | PX1 to PX2: PROXY + native
+ | V
+ +--+--+ +-----+
+ | FW2 |------| PX2 |
+ +--+--+ +-----+ | PX2 to SRV: PROXY + native
+ | V
+ +--+--+
+ | SRV |
+ +-----+
+
+Firewall FW1 receives traffic from internet-based clients and forwards it to
+reverse-proxy PX1. PX1 adds a PROXY header then forwards to PX2 via FW2. PX2
+is configured to read the PROXY header and to emit it on output. It then joins
+the origin server SRV and presents the original client's address there. Since
+all TCP connection endpoints are real machines and are not spoofed, there is
+no issue for the return traffic to pass via the firewalls and reverse proxies.
+Using transparent proxy, this would be quite difficult because the firewalls
+would have to deal with the client's address coming from the proxies in the DMZ
+and would have to correctly route the return traffic there instead of using the
+default route.
+
+
+4.2. IPv4 and IPv6 integration
+
+The protocol also eases IPv4 and IPv6 integration : if only the first layer
+(FW1 and PX1) is IPv6-capable, it is still possible to present the original
+client's IPv6 address to the target server even though the whole chain is only
+connected via IPv4.
+
+
+4.3. Multiple return paths
+
+When transparent proxy is used, it is not possible to run multiple proxies
+because the return traffic would follow the default route instead of finding
+the proper proxy. Some tricks are sometimes possible using multiple server
+addresses and policy routing but these are very limited.
+
+Using the PROXY protocol, this problem disappears as the servers don't need
+to route to the client, just to the proxy that forwarded the connection. So
+it is perfectly possible to run a proxy farm in front of a very large server
+farm and have it work effortlessly, even when dealing with multiple sites.
+
+This is particularly important in Cloud-like environments where binding to
+arbitrary addresses is rarely possible and where the lower processing power
+per node generally requires multiple front nodes.
+
+The example below illustrates the following case : virtualized infrastructures
+are deployed in 3 datacenters (DC1..DC3). Each DC uses its own VIP which is
+handled by the hosting provider's layer 3 load balancer. This load balancer
+routes the traffic to a farm of layer 7 SSL/cache offloaders which load balance
+among their local servers. The VIPs are advertised by geolocalised DNS so that
+clients generally stick to a given DC. Since clients are not guaranteed to
+stick to one DC, the L7 load balancing proxies have to know the other DCs'
+servers that may be reached via the hosting provider's LAN or via the internet.
+The L7 proxies use the PROXY protocol to join the servers behind them, so that
+even inter-DC traffic can forward the original client's address and the return
+path is unambiguous. This would not be possible using transparent proxy because
+most often the L7 proxies would not be able to spoof an address, and this would
+never work between datacenters.
+
+ Internet
+
+ DC1 DC2 DC3
+ ,---. ,---. ,---.
+ ( X ) ( X ) ( X )
+ `---' `---' `---'
+ | +-------+ | +-------+ | +-------+
+ +----| L3 LB | +----| L3 LB | +----| L3 LB |
+ | +-------+ | +-------+ | +-------+
+ ------+------- ~ ~ ~ ------+------- ~ ~ ~ ------+-------
+ ||||| |||| ||||| |||| ||||| ||||
+ 50 SRV 4 PX 50 SRV 4 PX 50 SRV 4 PX
+
+
+5. Security considerations
+
+Version 1 of the protocol header (the human-readable format) was designed so as
+to be distinguishable from HTTP. It will not parse as a valid HTTP request and
+an HTTP request will not parse as a valid proxy request. Version 2 adds a
+non-parsable binary signature that makes many products fail on this block. The
+signature was designed to cause immediate failure on HTTP, SSL/TLS, SMTP, FTP,
+and POP. It also causes aborts on LDAP and RDP servers (see section 6). This
+makes it easier to enforce its use on certain connections and, at the same
+time, ensures that improperly configured servers are quickly detected.
+
+Implementers should be very careful about not trying to automatically detect
+whether they have to decode the header or not, but rather they must only rely
+on a configuration parameter. Indeed, if a normal client is given the
+opportunity to use the protocol, it will be able to hide its activities or
+make them appear as coming from someone else. However, accepting the header
+only from a
+number of known sources should be safe.
+
+
+6. Validation
+
+The version 2 protocol signature has been sent to a wide variety of protocols
+and implementations including old ones. The following protocols and products
+have been tested to ensure the best possible behaviour when the signature was
+presented, even with minimal implementations :
+
+ - HTTP :
+ - Apache 1.3.33 : connection abort => pass/optimal
+ - Nginx 0.7.69 : 400 Bad Request + abort => pass/optimal
+ - lighttpd 1.4.20 : 400 Bad Request + abort => pass/optimal
+ - thttpd 2.20c : 400 Bad Request + abort => pass/optimal
+ - mini-httpd-1.19 : 400 Bad Request + abort => pass/optimal
+ - haproxy 1.4.21 : 400 Bad Request + abort => pass/optimal
+ - Squid 3 : 400 Bad Request + abort => pass/optimal
+ - SSL :
+ - stud 0.3.47 : connection abort => pass/optimal
+ - stunnel 4.45 : connection abort => pass/optimal
+ - nginx 0.7.69 : 400 Bad Request + abort => pass/optimal
+ - FTP :
+ - Pure-ftpd 1.0.20 : 3*500 then 221 Goodbye => pass/optimal
+ - vsftpd 2.0.1 : 3*530 then 221 Goodbye => pass/optimal
+ - SMTP :
+ - postfix 2.3 : 3*500 + 221 Bye => pass/optimal
+ - exim 4.69 : 554 + connection abort => pass/optimal
+ - POP :
+ - dovecot 1.0.10 : 3*ERR + Logout => pass/optimal
+ - IMAP :
+ - dovecot 1.0.10 : 5*ERR + hang => pass/non-optimal
+ - LDAP :
+ - openldap 2.3 : abort => pass/optimal
+ - SSH :
+ - openssh 3.9p1 : abort => pass/optimal
+ - RDP :
+ - Windows XP SP3 : abort => pass/optimal
+
+This means that most protocols and implementations will not be confused by an
+incoming connection exhibiting the protocol signature, which avoids issues when
+facing misconfigurations.
+
+
+7. Future developments
+
+It is possible that the protocol may slightly evolve to present other
+information such as the incoming network interface, or the origin addresses in
+case of network address translation happening before the first proxy, but this
+is not identified as a requirement right now. Some deep thinking has been
+spent on this, and it appears that trying to add a few more pieces of
+information would open a Pandora's box of candidates, from MAC addresses to
+SSL client certificates, which would make the protocol much more complex. So
+at this point it is not planned.
+Suggestions on improvements are welcome.
+
+
+8. Contacts and links
+
+Please use w@1wt.eu to send any comments to the author.
+
+The following links were referenced in the document.
+
+[1] http://www.postfix.org/XCLIENT_README.html
+[2] http://tools.ietf.org/html/rfc7239
+[3] http://www.stunnel.org/
+[4] https://github.com/bumptech/stud
+[5] https://github.com/bumptech/stud/pull/81
+[6] https://www.varnish-cache.org/docs/trunk/phk/ssl_again.html
+[7] http://wiki.squid-cache.org/Squid-3.5
+
+
+9. Sample code
+
+The code below is an example of how a receiver may deal with both versions of
+the protocol header for TCP over IPv4 or IPv6. The function is supposed to be
+called upon a read event. Addresses may be directly copied into their final
+memory location since they're transported in network byte order. The sending
+side is even simpler and can easily be deduced from this sample code.
+
+ struct sockaddr_storage from; /* already filled by accept() */
+ struct sockaddr_storage to; /* already filled by getsockname() */
+ const char v2sig[12] = "\x0D\x0A\x0D\x0A\x00\x0D\x0A\x51\x55\x49\x54\x0A";
+
+ /* returns 0 if needs to poll, <0 upon error or >0 if it did the job */
+ int read_evt(int fd)
+ {
+ union {
+ struct {
+ char line[108];
+ } v1;
+ struct {
+ uint8_t sig[12];
+ uint8_t ver_cmd;
+ uint8_t fam;
+ uint16_t len;
+ union {
+ struct { /* for TCP/UDP over IPv4, len = 12 */
+ uint32_t src_addr;
+ uint32_t dst_addr;
+ uint16_t src_port;
+ uint16_t dst_port;
+ } ip4;
+ struct { /* for TCP/UDP over IPv6, len = 36 */
+ uint8_t src_addr[16];
+ uint8_t dst_addr[16];
+ uint16_t src_port;
+ uint16_t dst_port;
+ } ip6;
+ struct { /* for AF_UNIX sockets, len = 216 */
+ uint8_t src_addr[108];
+ uint8_t dst_addr[108];
+ } unx;
+ } addr;
+ } v2;
+ } hdr;
+
+ int size, ret;
+
+ do {
+ ret = recv(fd, &hdr, sizeof(hdr), MSG_PEEK);
+ } while (ret == -1 && errno == EINTR);
+
+ if (ret == -1)
+ return (errno == EAGAIN) ? 0 : -1;
+
+ if (ret >= 16 && memcmp(&hdr.v2, v2sig, 12) == 0 &&
+ (hdr.v2.ver_cmd & 0xF0) == 0x20) {
+ size = 16 + ntohs(hdr.v2.len); /* len is in network byte order */
+ if (ret < size)
+ return -1; /* truncated or too large header */
+
+ switch (hdr.v2.ver_cmd & 0xF) {
+ case 0x01: /* PROXY command */
+ switch (hdr.v2.fam) {
+ case 0x11: /* TCPv4 */
+ ((struct sockaddr_in *)&from)->sin_family = AF_INET;
+ ((struct sockaddr_in *)&from)->sin_addr.s_addr =
+ hdr.v2.addr.ip4.src_addr;
+ ((struct sockaddr_in *)&from)->sin_port =
+ hdr.v2.addr.ip4.src_port;
+ ((struct sockaddr_in *)&to)->sin_family = AF_INET;
+ ((struct sockaddr_in *)&to)->sin_addr.s_addr =
+ hdr.v2.addr.ip4.dst_addr;
+ ((struct sockaddr_in *)&to)->sin_port =
+ hdr.v2.addr.ip4.dst_port;
+ goto done;
+ case 0x21: /* TCPv6 */
+ ((struct sockaddr_in6 *)&from)->sin6_family = AF_INET6;
+ memcpy(&((struct sockaddr_in6 *)&from)->sin6_addr,
+ hdr.v2.addr.ip6.src_addr, 16);
+ ((struct sockaddr_in6 *)&from)->sin6_port =
+ hdr.v2.addr.ip6.src_port;
+ ((struct sockaddr_in6 *)&to)->sin6_family = AF_INET6;
+ memcpy(&((struct sockaddr_in6 *)&to)->sin6_addr,
+ hdr.v2.addr.ip6.dst_addr, 16);
+ ((struct sockaddr_in6 *)&to)->sin6_port =
+ hdr.v2.addr.ip6.dst_port;
+ goto done;
+ }
+ /* unsupported protocol, keep local connection address */
+ break;
+ case 0x00: /* LOCAL command */
+ /* keep local connection address for LOCAL */
+ break;
+ default:
+ return -1; /* not a supported command */
+ }
+ }
+ else if (ret >= 8 && memcmp(hdr.v1.line, "PROXY", 5) == 0) {
+ char *end = memchr(hdr.v1.line, '\r', ret - 1);
+ if (!end || end[1] != '\n')
+ return -1; /* partial or invalid header */
+ *end = '\0'; /* terminate the string to ease parsing */
+ size = end + 2 - hdr.v1.line; /* skip header + CRLF */
+ /* parse the V1 header using favorite address parsers like inet_pton.
+ * return -1 upon error, or simply fall through to accept.
+ */
+ }
+ else {
+ /* Wrong protocol */
+ return -1;
+ }
+
+ done:
+ /* we need to consume the appropriate amount of data from the socket */
+ do {
+ ret = recv(fd, &hdr, size, 0);
+ } while (ret == -1 && errno == EINTR);
+ return (ret >= 0) ? 1 : -1;
+ }
--- /dev/null
+#FIG 3.2
+Portrait
+Center
+Metric
+A4
+100.00
+Single
+-2
+1200 2
+6 900 4770 1575 5220
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 3
+ 900 4770 1125 4995 1125 5220
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 3
+ 1575 4770 1350 4995 1350 5220
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1170 4995 1170 5220
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1215 4995 1215 5220
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1260 4995 1260 5220
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1305 4995 1305 5220
+2 3 0 1 7 7 52 -1 20 0.000 2 0 -1 0 0 7
+ 900 4770 1125 4995 1125 5220 1350 5220 1350 4995 1575 4770
+ 900 4770
+-6
+6 2250 4770 2925 5220
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 3
+ 2250 4770 2475 4995 2475 5220
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 3
+ 2925 4770 2700 4995 2700 5220
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2520 4995 2520 5220
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2565 4995 2565 5220
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2610 4995 2610 5220
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2655 4995 2655 5220
+2 3 0 1 7 7 52 -1 20 0.000 2 0 -1 0 0 7
+ 2250 4770 2475 4995 2475 5220 2700 5220 2700 4995 2925 4770
+ 2250 4770
+-6
+6 1710 3420 2115 3870
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1710 3780 2115 3780
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1710 3825 2115 3825
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1710 3735 2115 3735
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1710 3690 2115 3690
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1710 3645 2115 3645
+2 1 0 1 0 6 51 -1 20 0.000 0 0 -1 0 0 4
+ 1710 3420 1710 3870 2115 3870 2115 3420
+-6
+1 2 0 1 0 7 51 -1 20 0.000 1 0.0000 1935 2182 450 113 1485 2182 2385 2182
+1 2 0 1 0 7 51 -1 20 0.000 0 0.0000 2790 3082 450 113 2340 3082 3240 3082
+1 2 0 1 0 7 51 -1 20 0.000 1 0.0000 1935 1367 450 113 1485 1367 2385 1367
+1 2 0 1 0 7 51 -1 20 0.000 1 0.0000 1035 3082 450 113 585 3082 1485 3082
+2 1 2 1 0 2 53 -1 -1 3.000 0 0 -1 0 0 2
+ 2745 3870 3015 3870
+2 1 2 1 0 2 53 -1 -1 3.000 0 0 -1 0 0 2
+ 2745 4320 3015 4320
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 60.00
+ 2970 5085 2745 5085
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 60.00
+ 2205 5085 2430 5085
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 60.00
+ 1620 5085 1395 5085
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 60.00
+ 855 5085 1080 5085
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 3
+ 0 0 1.00 60.00 60.00
+ 1890 3870 1440 4320 1440 4770
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 3
+ 0 0 1.00 60.00 60.00
+ 1935 3870 2385 4320 2385 4770
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 0 0 1.00 60.00 60.00
+ 2610 4320 2610 4770
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 0 0 1.00 60.00 60.00
+ 2835 3195 2835 4770
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 3
+ 0 0 1.00 60.00 60.00
+ 2745 3195 2610 3330 2610 3870
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 0 0 1.00 60.00 60.00
+ 1935 2295 1935 3420
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 3
+ 0 0 1.00 60.00 60.00
+ 1080 3195 1215 3330 1215 3870
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 3
+ 0 0 1.00 60.00 60.00
+ 1890 2295 1035 2745 1035 2970
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 3
+ 0 0 1.00 60.00 60.00
+ 1980 2295 2790 2745 2790 2970
+2 1 0 1 0 2 50 -1 -1 0.000 0 0 -1 1 0 2
+ 0 0 1.00 60.00 60.00
+ 1935 1485 1935 2070
+2 1 1 1 0 2 50 -1 -1 4.000 0 0 -1 1 0 5
+ 0 0 1.00 60.00 60.00
+ 810 5220 450 5220 450 2160 1080 2160 1485 2160
+2 1 1 1 0 2 50 -1 -1 4.000 0 0 -1 1 0 5
+ 0 0 1.00 60.00 60.00
+ 3060 5220 3375 5220 3375 2160 2655 2160 2385 2160
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 0 0 1.00 60.00 60.00
+ 1215 4320 1215 4770
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 2
+ 0 0 1.00 60.00 60.00
+ 990 3195 990 4770
+2 1 0 1 0 2 50 -1 -1 0.000 0 0 -1 1 0 2
+ 0 0 1.00 60.00 60.00
+ 1935 855 1935 1260
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 3
+ 0 0 1.00 60.00 60.00
+ 1620 1440 900 2025 900 2970
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 1 0 3
+ 0 0 1.00 60.00 60.00
+ 2205 1440 2925 2025 2925 2970
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1125 4230 1350 4230
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1125 4275 1350 4275
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1125 4185 1350 4185
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1125 4140 1350 4140
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1125 4095 1350 4095
+2 1 0 1 0 3 51 -1 20 0.000 0 0 -1 0 0 4
+ 1125 3870 1125 4320 1350 4320 1350 3870
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2700 4230 2475 4230
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2700 4275 2475 4275
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2700 4185 2475 4185
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 2700 4140 2475 4140
+2 1 0 1 0 3 51 -1 20 0.000 0 0 -1 0 0 4
+ 2700 3870 2700 4320 2475 4320 2475 3870
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1125 4050 1350 4050
+2 1 0 1 0 7 50 -1 -1 0.000 0 0 -1 0 0 2
+ 1125 4005 1350 4005
+2 1 0 1 0 2 53 -1 -1 0.000 0 0 -1 1 1 2
+ 0 0 1.00 60.00 60.00
+ 0 0 1.00 60.00 60.00
+ 900 3870 900 4320
+2 1 2 1 0 2 53 -1 -1 3.000 0 0 -1 0 0 2
+ 855 3870 1125 3870
+2 1 2 1 0 2 53 -1 -1 3.000 0 0 -1 0 0 2
+ 855 4320 1125 4320
+2 1 0 1 0 2 53 -1 -1 0.000 0 0 -1 1 1 2
+ 0 0 1.00 60.00 60.00
+ 0 0 1.00 60.00 60.00
+ 2970 3870 2970 4320
+4 0 0 53 -1 16 7 0.0000 4 75 195 1260 3510 Yes\001
+4 2 0 53 -1 16 7 0.0000 4 75 135 945 3510 No\001
+4 2 0 53 -1 16 7 0.0000 4 75 195 2565 3510 Yes\001
+4 0 0 53 -1 16 7 0.0000 4 75 135 2880 3510 No\001
+4 1 0 50 -1 16 6 0.0000 4 75 210 1935 4140 global\001
+4 1 0 50 -1 16 6 0.0000 4 60 225 1935 4230 queue\001
+4 1 0 50 -1 16 8 1.5708 4 120 1005 405 4680 Redispatch on error\001
+4 1 0 53 -1 14 6 1.5708 4 60 480 2205 3645 maxqueue\001
+4 1 0 50 -1 18 8 0.0000 4 90 165 1935 2205 LB\001
+4 1 0 53 -1 16 7 1.5708 4 90 870 2070 2880 server, all are full.\001
+4 1 0 50 -1 18 8 0.0000 4 90 360 1935 1395 cookie\001
+4 1 0 50 -1 16 10 0.0000 4 135 1200 1935 765 Incoming request\001
+4 1 0 53 -1 16 7 1.5708 4 75 480 1890 1755 no cookie\001
+4 1 0 53 -1 16 7 1.5708 4 75 600 1890 2880 no available\001
+4 0 0 53 -1 16 7 5.6200 4 75 735 2340 1530 SRV2 selected\001
+4 1 0 50 -1 16 10 0.0000 4 105 405 1260 5445 SRV1\001
+4 1 0 50 -1 16 10 0.0000 4 105 405 2610 5445 SRV2\001
+4 2 0 53 -1 16 7 0.4712 4 75 735 1665 2385 SRV1 selected\001
+4 0 0 53 -1 16 7 5.8119 4 75 735 2205 2385 SRV2 selected\001
+4 2 0 53 -1 16 7 0.6632 4 75 735 1485 1530 SRV1 selected\001
+4 0 0 53 -1 14 6 0.0000 4 45 420 2880 5040 maxconn\001
+4 1 0 50 -1 16 8 1.5708 4 120 1005 3510 4680 Redispatch on error\001
+4 1 0 50 -1 18 8 0.0000 4 90 615 2790 3105 SRV2 full ?\001
+4 1 0 50 -1 18 8 0.0000 4 90 615 1035 3105 SRV1 full ?\001
+4 1 0 53 -1 14 6 1.5708 4 60 480 855 4095 maxqueue\001
+4 1 0 53 -1 14 6 1.5708 4 60 480 3105 4095 maxqueue\001
--- /dev/null
+ GNU LESSER GENERAL PUBLIC LICENSE
+ Version 2.1, February 1999
+
+ Copyright (C) 1991, 1999 Free Software Foundation, Inc.
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+[This is the first released version of the Lesser GPL. It also counts
+ as the successor of the GNU Library Public License, version 2, hence
+ the version number 2.1.]
+
+ Preamble
+
+ The licenses for most software are designed to take away your
+freedom to share and change it. By contrast, the GNU General Public
+Licenses are intended to guarantee your freedom to share and change
+free software--to make sure the software is free for all its users.
+
+ This license, the Lesser General Public License, applies to some
+specially designated software packages--typically libraries--of the
+Free Software Foundation and other authors who decide to use it. You
+can use it too, but we suggest you first think carefully about whether
+this license or the ordinary General Public License is the better
+strategy to use in any particular case, based on the explanations below.
+
+ When we speak of free software, we are referring to freedom of use,
+not price. Our General Public Licenses are designed to make sure that
+you have the freedom to distribute copies of free software (and charge
+for this service if you wish); that you receive source code or can get
+it if you want it; that you can change the software and use pieces of
+it in new free programs; and that you are informed that you can do
+these things.
+
+ To protect your rights, we need to make restrictions that forbid
+distributors to deny you these rights or to ask you to surrender these
+rights. These restrictions translate to certain responsibilities for
+you if you distribute copies of the library or if you modify it.
+
+ For example, if you distribute copies of the library, whether gratis
+or for a fee, you must give the recipients all the rights that we gave
+you. You must make sure that they, too, receive or can get the source
+code. If you link other code with the library, you must provide
+complete object files to the recipients, so that they can relink them
+with the library after making changes to the library and recompiling
+it. And you must show them these terms so they know their rights.
+
+ We protect your rights with a two-step method: (1) we copyright the
+library, and (2) we offer you this license, which gives you legal
+permission to copy, distribute and/or modify the library.
+
+ To protect each distributor, we want to make it very clear that
+there is no warranty for the free library. Also, if the library is
+modified by someone else and passed on, the recipients should know
+that what they have is not the original version, so that the original
+author's reputation will not be affected by problems that might be
+introduced by others.
+\f
+ Finally, software patents pose a constant threat to the existence of
+any free program. We wish to make sure that a company cannot
+effectively restrict the users of a free program by obtaining a
+restrictive license from a patent holder. Therefore, we insist that
+any patent license obtained for a version of the library must be
+consistent with the full freedom of use specified in this license.
+
+ Most GNU software, including some libraries, is covered by the
+ordinary GNU General Public License. This license, the GNU Lesser
+General Public License, applies to certain designated libraries, and
+is quite different from the ordinary General Public License. We use
+this license for certain libraries in order to permit linking those
+libraries into non-free programs.
+
+ When a program is linked with a library, whether statically or using
+a shared library, the combination of the two is legally speaking a
+combined work, a derivative of the original library. The ordinary
+General Public License therefore permits such linking only if the
+entire combination fits its criteria of freedom. The Lesser General
+Public License permits more lax criteria for linking other code with
+the library.
+
+ We call this license the "Lesser" General Public License because it
+does Less to protect the user's freedom than the ordinary General
+Public License. It also provides other free software developers Less
+of an advantage over competing non-free programs. These disadvantages
+are the reason we use the ordinary General Public License for many
+libraries. However, the Lesser license provides advantages in certain
+special circumstances.
+
+ For example, on rare occasions, there may be a special need to
+encourage the widest possible use of a certain library, so that it becomes
+a de-facto standard. To achieve this, non-free programs must be
+allowed to use the library. A more frequent case is that a free
+library does the same job as widely used non-free libraries. In this
+case, there is little to gain by limiting the free library to free
+software only, so we use the Lesser General Public License.
+
+ In other cases, permission to use a particular library in non-free
+programs enables a greater number of people to use a large body of
+free software. For example, permission to use the GNU C Library in
+non-free programs enables many more people to use the whole GNU
+operating system, as well as its variant, the GNU/Linux operating
+system.
+
+ Although the Lesser General Public License is Less protective of the
+users' freedom, it does ensure that the user of a program that is
+linked with the Library has the freedom and the wherewithal to run
+that program using a modified version of the Library.
+
+ The precise terms and conditions for copying, distribution and
+modification follow. Pay close attention to the difference between a
+"work based on the library" and a "work that uses the library". The
+former contains code derived from the library, whereas the latter must
+be combined with the library in order to run.
+\f
+ GNU LESSER GENERAL PUBLIC LICENSE
+ TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+ 0. This License Agreement applies to any software library or other
+program which contains a notice placed by the copyright holder or
+other authorized party saying it may be distributed under the terms of
+this Lesser General Public License (also called "this License").
+Each licensee is addressed as "you".
+
+ A "library" means a collection of software functions and/or data
+prepared so as to be conveniently linked with application programs
+(which use some of those functions and data) to form executables.
+
+ The "Library", below, refers to any such software library or work
+which has been distributed under these terms. A "work based on the
+Library" means either the Library or any derivative work under
+copyright law: that is to say, a work containing the Library or a
+portion of it, either verbatim or with modifications and/or translated
+straightforwardly into another language. (Hereinafter, translation is
+included without limitation in the term "modification".)
+
+ "Source code" for a work means the preferred form of the work for
+making modifications to it. For a library, complete source code means
+all the source code for all modules it contains, plus any associated
+interface definition files, plus the scripts used to control compilation
+and installation of the library.
+
+ Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope. The act of
+running a program using the Library is not restricted, and output from
+such a program is covered only if its contents constitute a work based
+on the Library (independent of the use of the Library in a tool for
+writing it). Whether that is true depends on what the Library does
+and what the program that uses the Library does.
+
+ 1. You may copy and distribute verbatim copies of the Library's
+complete source code as you receive it, in any medium, provided that
+you conspicuously and appropriately publish on each copy an
+appropriate copyright notice and disclaimer of warranty; keep intact
+all the notices that refer to this License and to the absence of any
+warranty; and distribute a copy of this License along with the
+Library.
+
+ You may charge a fee for the physical act of transferring a copy,
+and you may at your option offer warranty protection in exchange for a
+fee.
+\f
+ 2. You may modify your copy or copies of the Library or any portion
+of it, thus forming a work based on the Library, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+ a) The modified work must itself be a software library.
+
+ b) You must cause the files modified to carry prominent notices
+ stating that you changed the files and the date of any change.
+
+ c) You must cause the whole of the work to be licensed at no
+ charge to all third parties under the terms of this License.
+
+ d) If a facility in the modified Library refers to a function or a
+ table of data to be supplied by an application program that uses
+ the facility, other than as an argument passed when the facility
+ is invoked, then you must make a good faith effort to ensure that,
+ in the event an application does not supply such function or
+ table, the facility still operates, and performs whatever part of
+ its purpose remains meaningful.
+
+ (For example, a function in a library to compute square roots has
+ a purpose that is entirely well-defined independent of the
+ application. Therefore, Subsection 2d requires that any
+ application-supplied function or table used by this function must
+ be optional: if the application does not supply it, the square
+ root function must still compute square roots.)
+
+These requirements apply to the modified work as a whole. If
+identifiable sections of that work are not derived from the Library,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works. But when you
+distribute the same sections as part of a whole which is a work based
+on the Library, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote
+it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Library.
+
+In addition, mere aggregation of another work not based on the Library
+with the Library (or with a work based on the Library) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+ 3. You may opt to apply the terms of the ordinary GNU General Public
+License instead of this License to a given copy of the Library. To do
+this, you must alter all the notices that refer to this License, so
+that they refer to the ordinary GNU General Public License, version 2,
+instead of to this License. (If a newer version than version 2 of the
+ordinary GNU General Public License has appeared, then you can specify
+that version instead if you wish.) Do not make any other change in
+these notices.
+\f
+ Once this change is made in a given copy, it is irreversible for
+that copy, so the ordinary GNU General Public License applies to all
+subsequent copies and derivative works made from that copy.
+
+ This option is useful when you wish to copy part of the code of
+the Library into a program that is not a library.
+
+ 4. You may copy and distribute the Library (or a portion or
+derivative of it, under Section 2) in object code or executable form
+under the terms of Sections 1 and 2 above provided that you accompany
+it with the complete corresponding machine-readable source code, which
+must be distributed under the terms of Sections 1 and 2 above on a
+medium customarily used for software interchange.
+
+ If distribution of object code is made by offering access to copy
+from a designated place, then offering equivalent access to copy the
+source code from the same place satisfies the requirement to
+distribute the source code, even though third parties are not
+compelled to copy the source along with the object code.
+
+ 5. A program that contains no derivative of any portion of the
+Library, but is designed to work with the Library by being compiled or
+linked with it, is called a "work that uses the Library". Such a
+work, in isolation, is not a derivative work of the Library, and
+therefore falls outside the scope of this License.
+
+ However, linking a "work that uses the Library" with the Library
+creates an executable that is a derivative of the Library (because it
+contains portions of the Library), rather than a "work that uses the
+library". The executable is therefore covered by this License.
+Section 6 states terms for distribution of such executables.
+
+ When a "work that uses the Library" uses material from a header file
+that is part of the Library, the object code for the work may be a
+derivative work of the Library even though the source code is not.
+Whether this is true is especially significant if the work can be
+linked without the Library, or if the work is itself a library. The
+threshold for this to be true is not precisely defined by law.
+
+ If such an object file uses only numerical parameters, data
+structure layouts and accessors, and small macros and small inline
+functions (ten lines or less in length), then the use of the object
+file is unrestricted, regardless of whether it is legally a derivative
+work. (Executables containing this object code plus portions of the
+Library will still fall under Section 6.)
+
+ Otherwise, if the work is a derivative of the Library, you may
+distribute the object code for the work under the terms of Section 6.
+Any executables containing that work also fall under Section 6,
+whether or not they are linked directly with the Library itself.
+\f
+ 6. As an exception to the Sections above, you may also combine or
+link a "work that uses the Library" with the Library to produce a
+work containing portions of the Library, and distribute that work
+under terms of your choice, provided that the terms permit
+modification of the work for the customer's own use and reverse
+engineering for debugging such modifications.
+
+ You must give prominent notice with each copy of the work that the
+Library is used in it and that the Library and its use are covered by
+this License. You must supply a copy of this License. If the work
+during execution displays copyright notices, you must include the
+copyright notice for the Library among them, as well as a reference
+directing the user to the copy of this License. Also, you must do one
+of these things:
+
+ a) Accompany the work with the complete corresponding
+ machine-readable source code for the Library including whatever
+ changes were used in the work (which must be distributed under
+ Sections 1 and 2 above); and, if the work is an executable linked
+ with the Library, with the complete machine-readable "work that
+ uses the Library", as object code and/or source code, so that the
+ user can modify the Library and then relink to produce a modified
+ executable containing the modified Library. (It is understood
+ that the user who changes the contents of definitions files in the
+ Library will not necessarily be able to recompile the application
+ to use the modified definitions.)
+
+ b) Use a suitable shared library mechanism for linking with the
+ Library. A suitable mechanism is one that (1) uses at run time a
+ copy of the library already present on the user's computer system,
+ rather than copying library functions into the executable, and (2)
+ will operate properly with a modified version of the library, if
+ the user installs one, as long as the modified version is
+ interface-compatible with the version that the work was made with.
+
+ c) Accompany the work with a written offer, valid for at
+ least three years, to give the same user the materials
+ specified in Subsection 6a, above, for a charge no more
+ than the cost of performing this distribution.
+
+ d) If distribution of the work is made by offering access to copy
+ from a designated place, offer equivalent access to copy the above
+ specified materials from the same place.
+
+ e) Verify that the user has already received a copy of these
+ materials or that you have already sent this user a copy.
+
+ For an executable, the required form of the "work that uses the
+Library" must include any data and utility programs needed for
+reproducing the executable from it. However, as a special exception,
+the materials to be distributed need not include anything that is
+normally distributed (in either source or binary form) with the major
+components (compiler, kernel, and so on) of the operating system on
+which the executable runs, unless that component itself accompanies
+the executable.
+
+ It may happen that this requirement contradicts the license
+restrictions of other proprietary libraries that do not normally
+accompany the operating system. Such a contradiction means you cannot
+use both them and the Library together in an executable that you
+distribute.
+\f
+ 7. You may place library facilities that are a work based on the
+Library side-by-side in a single library together with other library
+facilities not covered by this License, and distribute such a combined
+library, provided that the separate distribution of the work based on
+the Library and of the other library facilities is otherwise
+permitted, and provided that you do these two things:
+
+ a) Accompany the combined library with a copy of the same work
+ based on the Library, uncombined with any other library
+ facilities. This must be distributed under the terms of the
+ Sections above.
+
+ b) Give prominent notice with the combined library of the fact
+ that part of it is a work based on the Library, and explaining
+ where to find the accompanying uncombined form of the same work.
+
+ 8. You may not copy, modify, sublicense, link with, or distribute
+the Library except as expressly provided under this License. Any
+attempt otherwise to copy, modify, sublicense, link with, or
+distribute the Library is void, and will automatically terminate your
+rights under this License. However, parties who have received copies,
+or rights, from you under this License will not have their licenses
+terminated so long as such parties remain in full compliance.
+
+ 9. You are not required to accept this License, since you have not
+signed it. However, nothing else grants you permission to modify or
+distribute the Library or its derivative works. These actions are
+prohibited by law if you do not accept this License. Therefore, by
+modifying or distributing the Library (or any work based on the
+Library), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Library or works based on it.
+
+ 10. Each time you redistribute the Library (or any work based on the
+Library), the recipient automatically receives a license from the
+original licensor to copy, distribute, link with or modify the Library
+subject to these terms and conditions. You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties with
+this License.
+\f
+ 11. If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Library at all. For example, if a patent
+license would not permit royalty-free redistribution of the Library by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Library.
+
+If any portion of this section is held invalid or unenforceable under any
+particular circumstance, the balance of the section is intended to apply,
+and the section as a whole is intended to apply in other circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system which is
+implemented by public license practices. Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+
+ 12. If the distribution and/or use of the Library is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Library under this License may add
+an explicit geographical distribution limitation excluding those countries,
+so that distribution is permitted only in or among countries not thus
+excluded. In such case, this License incorporates the limitation as if
+written in the body of this License.
+
+ 13. The Free Software Foundation may publish revised and/or new
+versions of the Lesser General Public License from time to time.
+Such new versions will be similar in spirit to the present version,
+but may differ in detail to address new problems or concerns.
+
+Each version is given a distinguishing version number. If the Library
+specifies a version number of this License which applies to it and
+"any later version", you have the option of following the terms and
+conditions either of that version or of any later version published by
+the Free Software Foundation. If the Library does not specify a
+license version number, you may choose any version ever published by
+the Free Software Foundation.
+\f
+ 14. If you wish to incorporate parts of the Library into other free
+programs whose distribution conditions are incompatible with these,
+write to the author to ask for permission. For software which is
+copyrighted by the Free Software Foundation, write to the Free
+Software Foundation; we sometimes make exceptions for this. Our
+decision will be guided by the two goals of preserving the free status
+of all derivatives of our free software and of promoting the sharing
+and reuse of software generally.
+
+ NO WARRANTY
+
+ 15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
+WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
+EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
+OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
+KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
+LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
+THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+ 16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
+WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
+AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
+FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
+CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
+LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
+RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
+FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
+SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
+DAMAGES.
+
+ END OF TERMS AND CONDITIONS
+\f
+ How to Apply These Terms to Your New Libraries
+
+ If you develop a new library, and you want it to be of the greatest
+possible use to the public, we recommend making it free software that
+everyone can redistribute and change. You can do so by permitting
+redistribution under these terms (or, alternatively, under the terms of the
+ordinary General Public License).
+
+ To apply these terms, attach the following notices to the library. It is
+safest to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least the
+"copyright" line and a pointer to where the full notice is found.
+
+ <one line to give the library's name and a brief idea of what it does.>
+ Copyright (C) <year> <name of author>
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation; either
+ version 2.1 of the License, or (at your option) any later version.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+Also add information on how to contact you by electronic and paper mail.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the library, if
+necessary. Here is a sample; alter the names:
+
+ Yoyodyne, Inc., hereby disclaims all copyright interest in the
+ library `Frob' (a library for tweaking knobs) written by James Random Hacker.
+
+ <signature of Ty Coon>, 1 April 1990
+ Ty Coon, President of Vice
+
+That's all there is to it!
+
+
--- /dev/null
+/*
+ * ebtree/compiler.h
+ * This file contains some compiler-specific settings.
+ *
+ * Copyright (C) 2000-2009 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include <common/compiler.h>
+
--- /dev/null
+/*
+ * Elastic Binary Trees - exported functions for operations on 32bit nodes.
+ * Version 6.0.6
+ * (C) 2002-2011 - Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/* Consult eb32tree.h for more details about those functions */
+
+#include "eb32tree.h"
+
+REGPRM2 struct eb32_node *eb32_insert(struct eb_root *root, struct eb32_node *new)
+{
+ return __eb32_insert(root, new);
+}
+
+REGPRM2 struct eb32_node *eb32i_insert(struct eb_root *root, struct eb32_node *new)
+{
+ return __eb32i_insert(root, new);
+}
+
+REGPRM2 struct eb32_node *eb32_lookup(struct eb_root *root, u32 x)
+{
+ return __eb32_lookup(root, x);
+}
+
+REGPRM2 struct eb32_node *eb32i_lookup(struct eb_root *root, s32 x)
+{
+ return __eb32i_lookup(root, x);
+}
+
+/*
+ * Find the last occurrence of the highest key in the tree <root>, which is
+ * equal to or less than <x>. NULL is returned if no key matches.
+ */
+REGPRM2 struct eb32_node *eb32_lookup_le(struct eb_root *root, u32 x)
+{
+ struct eb32_node *node;
+ eb_troot_t *troot;
+
+ troot = root->b[EB_LEFT];
+ if (unlikely(troot == NULL))
+ return NULL;
+
+ while (1) {
+ if ((eb_gettag(troot) == EB_LEAF)) {
+		if ((eb_gettag(troot) == EB_LEAF)) {
+			/* We reached a leaf, which means that the whole upper
+			 * parts were common. We will return either the current
+			 * node or its prev one if the former is too large.
+			 */
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct eb32_node, node.branches);
+ if (node->key <= x)
+ return node;
+ /* return prev */
+ troot = node->node.leaf_p;
+ break;
+ }
+ node = container_of(eb_untag(troot, EB_NODE),
+ struct eb32_node, node.branches);
+
+ if (node->node.bit < 0) {
+ /* We're at the top of a dup tree. Either we got a
+ * matching value and we return the rightmost node, or
+ * we don't and we skip the whole subtree to return the
+ * prev node before the subtree. Note that since we're
+ * at the top of the dup tree, we can simply return the
+ * prev node without first trying to escape from the
+ * tree.
+ */
+ if (node->key <= x) {
+ troot = node->node.branches.b[EB_RGHT];
+ while (eb_gettag(troot) != EB_LEAF)
+ troot = (eb_untag(troot, EB_NODE))->b[EB_RGHT];
+ return container_of(eb_untag(troot, EB_LEAF),
+ struct eb32_node, node.branches);
+ }
+ /* return prev */
+ troot = node->node.node_p;
+ break;
+ }
+
+ if (((x ^ node->key) >> node->node.bit) >= EB_NODE_BRANCHES) {
+ /* No more common bits at all. Either this node is too
+ * small and we need to get its highest value, or it is
+ * too large, and we need to get the prev value.
+ */
+ if ((node->key >> node->node.bit) < (x >> node->node.bit)) {
+ troot = node->node.branches.b[EB_RGHT];
+ return eb32_entry(eb_walk_down(troot, EB_RGHT), struct eb32_node, node);
+ }
+
+ /* Further values will be too high here, so return the prev
+ * unique node (if it exists).
+ */
+ troot = node->node.node_p;
+ break;
+ }
+ troot = node->node.branches.b[(x >> node->node.bit) & EB_NODE_BRANCH_MASK];
+ }
+
+ /* If we get here, it means we want to report previous node before the
+ * current one which is not above. <troot> is already initialised to
+ * the parent's branches.
+ */
+ while (eb_gettag(troot) == EB_LEFT) {
+ /* Walking up from left branch. We must ensure that we never
+ * walk beyond root.
+ */
+ if (unlikely(eb_clrtag((eb_untag(troot, EB_LEFT))->b[EB_RGHT]) == NULL))
+ return NULL;
+ troot = (eb_root_to_node(eb_untag(troot, EB_LEFT)))->node_p;
+ }
+ /* Note that <troot> cannot be NULL at this stage */
+ troot = (eb_untag(troot, EB_RGHT))->b[EB_LEFT];
+ node = eb32_entry(eb_walk_down(troot, EB_RGHT), struct eb32_node, node);
+ return node;
+}
+
+/*
+ * Find the first occurrence of the lowest key in the tree <root>, which is
+ * equal to or greater than <x>. NULL is returned if no key matches.
+ */
+REGPRM2 struct eb32_node *eb32_lookup_ge(struct eb_root *root, u32 x)
+{
+ struct eb32_node *node;
+ eb_troot_t *troot;
+
+ troot = root->b[EB_LEFT];
+ if (unlikely(troot == NULL))
+ return NULL;
+
+ while (1) {
+ if ((eb_gettag(troot) == EB_LEAF)) {
+ /* We reached a leaf, which means that the whole upper
+ * parts were common. We will return either the current
+ * node or its next one if the former is too small.
+ */
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct eb32_node, node.branches);
+ if (node->key >= x)
+ return node;
+ /* return next */
+ troot = node->node.leaf_p;
+ break;
+ }
+ node = container_of(eb_untag(troot, EB_NODE),
+ struct eb32_node, node.branches);
+
+ if (node->node.bit < 0) {
+ /* We're at the top of a dup tree. Either we got a
+ * matching value and we return the leftmost node, or
+ * we don't and we skip the whole subtree to return the
+ * next node after the subtree. Note that since we're
+ * at the top of the dup tree, we can simply return the
+ * next node without first trying to escape from the
+ * tree.
+ */
+ if (node->key >= x) {
+ troot = node->node.branches.b[EB_LEFT];
+ while (eb_gettag(troot) != EB_LEAF)
+ troot = (eb_untag(troot, EB_NODE))->b[EB_LEFT];
+ return container_of(eb_untag(troot, EB_LEAF),
+ struct eb32_node, node.branches);
+ }
+ /* return next */
+ troot = node->node.node_p;
+ break;
+ }
+
+ if (((x ^ node->key) >> node->node.bit) >= EB_NODE_BRANCHES) {
+ /* No more common bits at all. Either this node is too
+ * large and we need to get its lowest value, or it is too
+ * small, and we need to get the next value.
+ */
+ if ((node->key >> node->node.bit) > (x >> node->node.bit)) {
+ troot = node->node.branches.b[EB_LEFT];
+ return eb32_entry(eb_walk_down(troot, EB_LEFT), struct eb32_node, node);
+ }
+
+ /* Further values will be too low here, so return the next
+ * unique node (if it exists).
+ */
+ troot = node->node.node_p;
+ break;
+ }
+ troot = node->node.branches.b[(x >> node->node.bit) & EB_NODE_BRANCH_MASK];
+ }
+
+ /* If we get here, it means we want to report next node after the
+ * current one which is not below. <troot> is already initialised
+ * to the parent's branches.
+ */
+ while (eb_gettag(troot) != EB_LEFT)
+ /* Walking up from right branch, so we cannot be below root */
+ troot = (eb_root_to_node(eb_untag(troot, EB_RGHT)))->node_p;
+
+ /* Note that <troot> cannot be NULL at this stage */
+ troot = (eb_untag(troot, EB_LEFT))->b[EB_RGHT];
+ if (eb_clrtag(troot) == NULL)
+ return NULL;
+
+ node = eb32_entry(eb_walk_down(troot, EB_LEFT), struct eb32_node, node);
+ return node;
+}
--- /dev/null
+/*
+ * Elastic Binary Trees - macros and structures for operations on 32bit nodes.
+ * Version 6.0.6
+ * (C) 2002-2011 - Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _EB32TREE_H
+#define _EB32TREE_H
+
+#include "ebtree.h"
+
+
+/* Return the structure of type <type> whose member <member> points to <ptr> */
+#define eb32_entry(ptr, type, member) container_of(ptr, type, member)
+
+#define EB32_ROOT EB_ROOT
+#define EB32_TREE_HEAD EB_TREE_HEAD
+
+/* These types may sometimes already be defined */
+typedef unsigned int u32;
+typedef signed int s32;
+
+/* This structure carries a node, a leaf, and a key. It must start with the
+ * eb_node so that it can be cast into an eb_node. We could also have put some
+ * sort of transparent union here to reduce the indirection level, but the fact
+ * is, the end user is not meant to manipulate internals, so this is pointless.
+ */
+struct eb32_node {
+ struct eb_node node; /* the tree node, must be at the beginning */
+ u32 key;
+};
+
+/*
+ * Exported functions and macros.
+ * Many of them are always inlined because they are extremely small, and
+ * are generally called at most once or twice in a program.
+ */
+
+/* Return leftmost node in the tree, or NULL if none */
+static inline struct eb32_node *eb32_first(struct eb_root *root)
+{
+ return eb32_entry(eb_first(root), struct eb32_node, node);
+}
+
+/* Return rightmost node in the tree, or NULL if none */
+static inline struct eb32_node *eb32_last(struct eb_root *root)
+{
+ return eb32_entry(eb_last(root), struct eb32_node, node);
+}
+
+/* Return next node in the tree, or NULL if none */
+static inline struct eb32_node *eb32_next(struct eb32_node *eb32)
+{
+ return eb32_entry(eb_next(&eb32->node), struct eb32_node, node);
+}
+
+/* Return previous node in the tree, or NULL if none */
+static inline struct eb32_node *eb32_prev(struct eb32_node *eb32)
+{
+ return eb32_entry(eb_prev(&eb32->node), struct eb32_node, node);
+}
+
+/* Return next leaf node within a duplicate sub-tree, or NULL if none. */
+static inline struct eb32_node *eb32_next_dup(struct eb32_node *eb32)
+{
+ return eb32_entry(eb_next_dup(&eb32->node), struct eb32_node, node);
+}
+
+/* Return previous leaf node within a duplicate sub-tree, or NULL if none. */
+static inline struct eb32_node *eb32_prev_dup(struct eb32_node *eb32)
+{
+ return eb32_entry(eb_prev_dup(&eb32->node), struct eb32_node, node);
+}
+
+/* Return next node in the tree, skipping duplicates, or NULL if none */
+static inline struct eb32_node *eb32_next_unique(struct eb32_node *eb32)
+{
+ return eb32_entry(eb_next_unique(&eb32->node), struct eb32_node, node);
+}
+
+/* Return previous node in the tree, skipping duplicates, or NULL if none */
+static inline struct eb32_node *eb32_prev_unique(struct eb32_node *eb32)
+{
+ return eb32_entry(eb_prev_unique(&eb32->node), struct eb32_node, node);
+}
+
+/* Delete node from the tree if it was linked in. Mark the node unused. Note
+ * that this function relies on a non-inlined generic function: eb_delete.
+ */
+static inline void eb32_delete(struct eb32_node *eb32)
+{
+ eb_delete(&eb32->node);
+}
+
+/*
+ * The following functions are not inlined by default. They are declared
+ * in eb32tree.c, which simply relies on their inline version.
+ */
+REGPRM2 struct eb32_node *eb32_lookup(struct eb_root *root, u32 x);
+REGPRM2 struct eb32_node *eb32i_lookup(struct eb_root *root, s32 x);
+REGPRM2 struct eb32_node *eb32_lookup_le(struct eb_root *root, u32 x);
+REGPRM2 struct eb32_node *eb32_lookup_ge(struct eb_root *root, u32 x);
+REGPRM2 struct eb32_node *eb32_insert(struct eb_root *root, struct eb32_node *new);
+REGPRM2 struct eb32_node *eb32i_insert(struct eb_root *root, struct eb32_node *new);
+
+/*
+ * The following functions are less likely to be used directly, because their
+ * code is larger. The non-inlined version is preferred.
+ */
+
+/* Delete node from the tree if it was linked in. Mark the node unused. */
+static forceinline void __eb32_delete(struct eb32_node *eb32)
+{
+ __eb_delete(&eb32->node);
+}
+
+/*
+ * Find the first occurrence of a key in the tree <root>. If none can be
+ * found, return NULL.
+ */
+static forceinline struct eb32_node *__eb32_lookup(struct eb_root *root, u32 x)
+{
+ struct eb32_node *node;
+ eb_troot_t *troot;
+ u32 y;
+ int node_bit;
+
+ troot = root->b[EB_LEFT];
+ if (unlikely(troot == NULL))
+ return NULL;
+
+ while (1) {
+ if ((eb_gettag(troot) == EB_LEAF)) {
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct eb32_node, node.branches);
+ if (node->key == x)
+ return node;
+ else
+ return NULL;
+ }
+ node = container_of(eb_untag(troot, EB_NODE),
+ struct eb32_node, node.branches);
+ node_bit = node->node.bit;
+
+ y = node->key ^ x;
+ if (!y) {
+ /* Either we found the node which holds the key, or
+ * we have a dup tree. In the latter case, we have to
+ * walk it down left to get the first entry.
+ */
+ if (node_bit < 0) {
+ troot = node->node.branches.b[EB_LEFT];
+ while (eb_gettag(troot) != EB_LEAF)
+ troot = (eb_untag(troot, EB_NODE))->b[EB_LEFT];
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct eb32_node, node.branches);
+ }
+ return node;
+ }
+
+ if ((y >> node_bit) >= EB_NODE_BRANCHES)
+ return NULL; /* no more common bits */
+
+ troot = node->node.branches.b[(x >> node_bit) & EB_NODE_BRANCH_MASK];
+ }
+}
+
+/*
+ * Find the first occurrence of a signed key in the tree <root>. If none can
+ * be found, return NULL.
+ */
+static forceinline struct eb32_node *__eb32i_lookup(struct eb_root *root, s32 x)
+{
+ struct eb32_node *node;
+ eb_troot_t *troot;
+ u32 key = x ^ 0x80000000;
+ u32 y;
+ int node_bit;
+
+ troot = root->b[EB_LEFT];
+ if (unlikely(troot == NULL))
+ return NULL;
+
+ while (1) {
+ if ((eb_gettag(troot) == EB_LEAF)) {
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct eb32_node, node.branches);
+ if (node->key == (u32)x)
+ return node;
+ else
+ return NULL;
+ }
+ node = container_of(eb_untag(troot, EB_NODE),
+ struct eb32_node, node.branches);
+ node_bit = node->node.bit;
+
+ y = node->key ^ x;
+ if (!y) {
+ /* Either we found the node which holds the key, or
+ * we have a dup tree. In the latter case, we have to
+ * walk it down left to get the first entry.
+ */
+ if (node_bit < 0) {
+ troot = node->node.branches.b[EB_LEFT];
+ while (eb_gettag(troot) != EB_LEAF)
+ troot = (eb_untag(troot, EB_NODE))->b[EB_LEFT];
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct eb32_node, node.branches);
+ }
+ return node;
+ }
+
+ if ((y >> node_bit) >= EB_NODE_BRANCHES)
+ return NULL; /* no more common bits */
+
+ troot = node->node.branches.b[(key >> node_bit) & EB_NODE_BRANCH_MASK];
+ }
+}
+
+/* Insert eb32_node <new> into subtree starting at node root <root>.
+ * Only new->key needs to be set with the key. The eb32_node is returned.
+ * If root->b[EB_RGHT]==1, the tree may only contain unique keys.
+ */
+static forceinline struct eb32_node *
+__eb32_insert(struct eb_root *root, struct eb32_node *new) {
+ struct eb32_node *old;
+ unsigned int side;
+ eb_troot_t *troot, **up_ptr;
+ u32 newkey; /* caching the key saves approximately one cycle */
+ eb_troot_t *root_right;
+ eb_troot_t *new_left, *new_rght;
+ eb_troot_t *new_leaf;
+ int old_node_bit;
+
+ side = EB_LEFT;
+ troot = root->b[EB_LEFT];
+ root_right = root->b[EB_RGHT];
+ if (unlikely(troot == NULL)) {
+ /* Tree is empty, insert the leaf part below the left branch */
+ root->b[EB_LEFT] = eb_dotag(&new->node.branches, EB_LEAF);
+ new->node.leaf_p = eb_dotag(root, EB_LEFT);
+ new->node.node_p = NULL; /* node part unused */
+ return new;
+ }
+
+	/* The tree descent is fairly easy:
+	 * - first, check if we have reached a leaf node
+	 * - second, check if we have gone too far
+	 * - third, reiterate
+	 * Everywhere, we use <new> for the node we are inserting, <root>
+ * for the node we attach it to, and <old> for the node we are
+ * displacing below <new>. <troot> will always point to the future node
+ * (tagged with its type). <side> carries the side the node <new> is
+ * attached to below its parent, which is also where previous node
+ * was attached. <newkey> carries the key being inserted.
+ */
+ newkey = new->key;
+
+ while (1) {
+ if (eb_gettag(troot) == EB_LEAF) {
+ /* insert above a leaf */
+ old = container_of(eb_untag(troot, EB_LEAF),
+ struct eb32_node, node.branches);
+ new->node.node_p = old->node.leaf_p;
+ up_ptr = &old->node.leaf_p;
+ break;
+ }
+
+ /* OK we're walking down this link */
+ old = container_of(eb_untag(troot, EB_NODE),
+ struct eb32_node, node.branches);
+ old_node_bit = old->node.bit;
+
+ /* Stop going down when we don't have common bits anymore. We
+ * also stop in front of a duplicates tree because it means we
+ * have to insert above.
+ */
+
+ if ((old_node_bit < 0) || /* we're above a duplicate tree, stop here */
+ (((new->key ^ old->key) >> old_node_bit) >= EB_NODE_BRANCHES)) {
+ /* The tree did not contain the key, so we insert <new> before the node
+ * <old>, and set ->bit to designate the lowest bit position in <new>
+ * which applies to ->branches.b[].
+ */
+ new->node.node_p = old->node.node_p;
+ up_ptr = &old->node.node_p;
+ break;
+ }
+
+ /* walk down */
+ root = &old->node.branches;
+ side = (newkey >> old_node_bit) & EB_NODE_BRANCH_MASK;
+ troot = root->b[side];
+ }
+
+ new_left = eb_dotag(&new->node.branches, EB_LEFT);
+ new_rght = eb_dotag(&new->node.branches, EB_RGHT);
+ new_leaf = eb_dotag(&new->node.branches, EB_LEAF);
+
+ /* We need the common higher bits between new->key and old->key.
+ * What differences are there between new->key and the node here ?
+ * NOTE that bit(new) is always < bit(root) because highest
+ * bit of new->key and old->key are identical here (otherwise they
+ * would sit on different branches).
+ */
+
+ // note that if EB_NODE_BITS > 1, we should check that it's still >= 0
+ new->node.bit = flsnz(new->key ^ old->key) - EB_NODE_BITS;
+
+ if (new->key == old->key) {
+ new->node.bit = -1; /* mark as new dup tree, just in case */
+
+ if (likely(eb_gettag(root_right))) {
+ /* we refuse to duplicate this key if the tree is
+ * tagged as containing only unique keys.
+ */
+ return old;
+ }
+
+ if (eb_gettag(troot) != EB_LEAF) {
+ /* there was already a dup tree below */
+ struct eb_node *ret;
+ ret = eb_insert_dup(&old->node, &new->node);
+ return container_of(ret, struct eb32_node, node);
+ }
+ /* otherwise fall through */
+ }
+
+ if (new->key >= old->key) {
+ new->node.branches.b[EB_LEFT] = troot;
+ new->node.branches.b[EB_RGHT] = new_leaf;
+ new->node.leaf_p = new_rght;
+ *up_ptr = new_left;
+ }
+ else {
+ new->node.branches.b[EB_LEFT] = new_leaf;
+ new->node.branches.b[EB_RGHT] = troot;
+ new->node.leaf_p = new_left;
+ *up_ptr = new_rght;
+ }
+
+ /* Ok, now we are inserting <new> between <root> and <old>. <old>'s
+ * parent is already set to <new>, and the <root>'s branch is still in
+ * <side>. Update the root's leaf till we have it. Note that we can also
+ * find the side by checking the side of new->node.node_p.
+ */
+
+ root->b[side] = eb_dotag(&new->node.branches, EB_NODE);
+ return new;
+}
+
+/* Insert eb32_node <new> into subtree starting at node root <root>, using
+ * signed keys. Only new->key needs to be set with the key. The eb32_node
+ * is returned. If root->b[EB_RGHT]==1, the tree may only contain unique keys.
+ */
+static forceinline struct eb32_node *
+__eb32i_insert(struct eb_root *root, struct eb32_node *new) {
+ struct eb32_node *old;
+ unsigned int side;
+ eb_troot_t *troot, **up_ptr;
+ int newkey; /* caching the key saves approximately one cycle */
+ eb_troot_t *root_right;
+ eb_troot_t *new_left, *new_rght;
+ eb_troot_t *new_leaf;
+ int old_node_bit;
+
+ side = EB_LEFT;
+ troot = root->b[EB_LEFT];
+ root_right = root->b[EB_RGHT];
+ if (unlikely(troot == NULL)) {
+ /* Tree is empty, insert the leaf part below the left branch */
+ root->b[EB_LEFT] = eb_dotag(&new->node.branches, EB_LEAF);
+ new->node.leaf_p = eb_dotag(root, EB_LEFT);
+ new->node.node_p = NULL; /* node part unused */
+ return new;
+ }
+
+	/* The tree descent is fairly easy:
+	 * - first, check if we have reached a leaf node
+	 * - second, check if we have gone too far
+	 * - third, reiterate
+	 * Everywhere, we use <new> for the node we are inserting, <root>
+ * for the node we attach it to, and <old> for the node we are
+ * displacing below <new>. <troot> will always point to the future node
+ * (tagged with its type). <side> carries the side the node <new> is
+ * attached to below its parent, which is also where previous node
+ * was attached. <newkey> carries a high bit shift of the key being
+ * inserted in order to have negative keys stored before positive
+ * ones.
+ */
+ newkey = new->key + 0x80000000;
+
+ while (1) {
+ if (eb_gettag(troot) == EB_LEAF) {
+ old = container_of(eb_untag(troot, EB_LEAF),
+ struct eb32_node, node.branches);
+ new->node.node_p = old->node.leaf_p;
+ up_ptr = &old->node.leaf_p;
+ break;
+ }
+
+ /* OK we're walking down this link */
+ old = container_of(eb_untag(troot, EB_NODE),
+ struct eb32_node, node.branches);
+ old_node_bit = old->node.bit;
+
+ /* Stop going down when we don't have common bits anymore. We
+ * also stop in front of a duplicates tree because it means we
+ * have to insert above.
+ */
+
+ if ((old_node_bit < 0) || /* we're above a duplicate tree, stop here */
+ (((new->key ^ old->key) >> old_node_bit) >= EB_NODE_BRANCHES)) {
+ /* The tree did not contain the key, so we insert <new> before the node
+ * <old>, and set ->bit to designate the lowest bit position in <new>
+ * which applies to ->branches.b[].
+ */
+ new->node.node_p = old->node.node_p;
+ up_ptr = &old->node.node_p;
+ break;
+ }
+
+ /* walk down */
+ root = &old->node.branches;
+ side = (newkey >> old_node_bit) & EB_NODE_BRANCH_MASK;
+ troot = root->b[side];
+ }
+
+ new_left = eb_dotag(&new->node.branches, EB_LEFT);
+ new_rght = eb_dotag(&new->node.branches, EB_RGHT);
+ new_leaf = eb_dotag(&new->node.branches, EB_LEAF);
+
+ /* We need the common higher bits between new->key and old->key.
+ * What differences are there between new->key and the node here ?
+ * NOTE that bit(new) is always < bit(root) because highest
+ * bit of new->key and old->key are identical here (otherwise they
+ * would sit on different branches).
+ */
+
+ // note that if EB_NODE_BITS > 1, we should check that it's still >= 0
+ new->node.bit = flsnz(new->key ^ old->key) - EB_NODE_BITS;
+
+ if (new->key == old->key) {
+ new->node.bit = -1; /* mark as new dup tree, just in case */
+
+ if (likely(eb_gettag(root_right))) {
+ /* we refuse to duplicate this key if the tree is
+ * tagged as containing only unique keys.
+ */
+ return old;
+ }
+
+ if (eb_gettag(troot) != EB_LEAF) {
+ /* there was already a dup tree below */
+ struct eb_node *ret;
+ ret = eb_insert_dup(&old->node, &new->node);
+ return container_of(ret, struct eb32_node, node);
+ }
+ /* otherwise fall through */
+ }
+
+ if ((s32)new->key >= (s32)old->key) {
+ new->node.branches.b[EB_LEFT] = troot;
+ new->node.branches.b[EB_RGHT] = new_leaf;
+ new->node.leaf_p = new_rght;
+ *up_ptr = new_left;
+ }
+ else {
+ new->node.branches.b[EB_LEFT] = new_leaf;
+ new->node.branches.b[EB_RGHT] = troot;
+ new->node.leaf_p = new_left;
+ *up_ptr = new_rght;
+ }
+
+ /* Ok, now we are inserting <new> between <root> and <old>. <old>'s
+ * parent is already set to <new>, and the <root>'s branch is still in
+ * <side>. Update the root's leaf till we have it. Note that we can also
+ * find the side by checking the side of new->node.node_p.
+ */
+
+ root->b[side] = eb_dotag(&new->node.branches, EB_NODE);
+ return new;
+}
+
+#endif /* _EB32_TREE_H */
--- /dev/null
+/*
+ * Elastic Binary Trees - exported functions for operations on 64bit nodes.
+ * Version 6.0.6
+ * (C) 2002-2011 - Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/* Consult eb64tree.h for more details about these functions */
+
+#include "eb64tree.h"
+
+REGPRM2 struct eb64_node *eb64_insert(struct eb_root *root, struct eb64_node *new)
+{
+ return __eb64_insert(root, new);
+}
+
+REGPRM2 struct eb64_node *eb64i_insert(struct eb_root *root, struct eb64_node *new)
+{
+ return __eb64i_insert(root, new);
+}
+
+REGPRM2 struct eb64_node *eb64_lookup(struct eb_root *root, u64 x)
+{
+ return __eb64_lookup(root, x);
+}
+
+REGPRM2 struct eb64_node *eb64i_lookup(struct eb_root *root, s64 x)
+{
+ return __eb64i_lookup(root, x);
+}
+
+/*
+ * Find the last occurrence of the highest key in the tree <root>, which is
+ * equal to or less than <x>. NULL is returned if no key matches.
+ */
+REGPRM2 struct eb64_node *eb64_lookup_le(struct eb_root *root, u64 x)
+{
+ struct eb64_node *node;
+ eb_troot_t *troot;
+
+ troot = root->b[EB_LEFT];
+ if (unlikely(troot == NULL))
+ return NULL;
+
+ while (1) {
+ if ((eb_gettag(troot) == EB_LEAF)) {
+ /* We reached a leaf, which means that the whole upper
+ * parts were common. We will return either the current
+ * node or its next one if the former is too small.
+ */
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct eb64_node, node.branches);
+ if (node->key <= x)
+ return node;
+ /* return prev */
+ troot = node->node.leaf_p;
+ break;
+ }
+ node = container_of(eb_untag(troot, EB_NODE),
+ struct eb64_node, node.branches);
+
+ if (node->node.bit < 0) {
+ /* We're at the top of a dup tree. Either we got a
+ * matching value and we return the rightmost node, or
+ * we don't and we skip the whole subtree to return the
+ * prev node before the subtree. Note that since we're
+ * at the top of the dup tree, we can simply return the
+ * prev node without first trying to escape from the
+ * tree.
+ */
+ if (node->key <= x) {
+ troot = node->node.branches.b[EB_RGHT];
+ while (eb_gettag(troot) != EB_LEAF)
+ troot = (eb_untag(troot, EB_NODE))->b[EB_RGHT];
+ return container_of(eb_untag(troot, EB_LEAF),
+ struct eb64_node, node.branches);
+ }
+ /* return prev */
+ troot = node->node.node_p;
+ break;
+ }
+
+ if (((x ^ node->key) >> node->node.bit) >= EB_NODE_BRANCHES) {
+ /* No more common bits at all. Either this node is too
+ * small and we need to get its highest value, or it is
+ * too large, and we need to get the prev value.
+ */
+ if ((node->key >> node->node.bit) < (x >> node->node.bit)) {
+ troot = node->node.branches.b[EB_RGHT];
+ return eb64_entry(eb_walk_down(troot, EB_RGHT), struct eb64_node, node);
+ }
+
+ /* Further values will be too high here, so return the prev
+ * unique node (if it exists).
+ */
+ troot = node->node.node_p;
+ break;
+ }
+ troot = node->node.branches.b[(x >> node->node.bit) & EB_NODE_BRANCH_MASK];
+ }
+
+ /* If we get here, it means we want to report previous node before the
+ * current one which is not above. <troot> is already initialised to
+ * the parent's branches.
+ */
+ while (eb_gettag(troot) == EB_LEFT) {
+ /* Walking up from left branch. We must ensure that we never
+ * walk beyond root.
+ */
+ if (unlikely(eb_clrtag((eb_untag(troot, EB_LEFT))->b[EB_RGHT]) == NULL))
+ return NULL;
+ troot = (eb_root_to_node(eb_untag(troot, EB_LEFT)))->node_p;
+ }
+ /* Note that <troot> cannot be NULL at this stage */
+ troot = (eb_untag(troot, EB_RGHT))->b[EB_LEFT];
+ node = eb64_entry(eb_walk_down(troot, EB_RGHT), struct eb64_node, node);
+ return node;
+}
+
+/*
+ * Find the first occurrence of the lowest key in the tree <root>, which is
+ * equal to or greater than <x>. NULL is returned if no key matches.
+ */
+REGPRM2 struct eb64_node *eb64_lookup_ge(struct eb_root *root, u64 x)
+{
+ struct eb64_node *node;
+ eb_troot_t *troot;
+
+ troot = root->b[EB_LEFT];
+ if (unlikely(troot == NULL))
+ return NULL;
+
+ while (1) {
+ if ((eb_gettag(troot) == EB_LEAF)) {
+ /* We reached a leaf, which means that the whole upper
+ * parts were common. We will return either the current
+ * node or its next one if the former is too small.
+ */
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct eb64_node, node.branches);
+ if (node->key >= x)
+ return node;
+ /* return next */
+ troot = node->node.leaf_p;
+ break;
+ }
+ node = container_of(eb_untag(troot, EB_NODE),
+ struct eb64_node, node.branches);
+
+ if (node->node.bit < 0) {
+ /* We're at the top of a dup tree. Either we got a
+ * matching value and we return the leftmost node, or
+ * we don't and we skip the whole subtree to return the
+ * next node after the subtree. Note that since we're
+ * at the top of the dup tree, we can simply return the
+ * next node without first trying to escape from the
+ * tree.
+ */
+ if (node->key >= x) {
+ troot = node->node.branches.b[EB_LEFT];
+ while (eb_gettag(troot) != EB_LEAF)
+ troot = (eb_untag(troot, EB_NODE))->b[EB_LEFT];
+ return container_of(eb_untag(troot, EB_LEAF),
+ struct eb64_node, node.branches);
+ }
+ /* return next */
+ troot = node->node.node_p;
+ break;
+ }
+
+ if (((x ^ node->key) >> node->node.bit) >= EB_NODE_BRANCHES) {
+ /* No more common bits at all. Either this node is too
+ * large and we need to get its lowest value, or it is too
+ * small, and we need to get the next value.
+ */
+ if ((node->key >> node->node.bit) > (x >> node->node.bit)) {
+ troot = node->node.branches.b[EB_LEFT];
+ return eb64_entry(eb_walk_down(troot, EB_LEFT), struct eb64_node, node);
+ }
+
+ /* Further values will be too low here, so return the next
+ * unique node (if it exists).
+ */
+ troot = node->node.node_p;
+ break;
+ }
+ troot = node->node.branches.b[(x >> node->node.bit) & EB_NODE_BRANCH_MASK];
+ }
+
+ /* If we get here, it means we want to report next node after the
+ * current one which is not below. <troot> is already initialised
+ * to the parent's branches.
+ */
+ while (eb_gettag(troot) != EB_LEFT)
+ /* Walking up from right branch, so we cannot be below root */
+ troot = (eb_root_to_node(eb_untag(troot, EB_RGHT)))->node_p;
+
+ /* Note that <troot> cannot be NULL at this stage */
+ troot = (eb_untag(troot, EB_LEFT))->b[EB_RGHT];
+ if (eb_clrtag(troot) == NULL)
+ return NULL;
+
+ node = eb64_entry(eb_walk_down(troot, EB_LEFT), struct eb64_node, node);
+ return node;
+}
--- /dev/null
+/*
+ * Elastic Binary Trees - macros and structures for operations on 64bit nodes.
+ * Version 6.0.6
+ * (C) 2002-2011 - Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _EB64TREE_H
+#define _EB64TREE_H
+
+#include "ebtree.h"
+
+
+/* Return the structure of type <type> whose member <member> points to <ptr> */
+#define eb64_entry(ptr, type, member) container_of(ptr, type, member)
+
+#define EB64_ROOT EB_ROOT
+#define EB64_TREE_HEAD EB_TREE_HEAD
+
+/* These types may sometimes already be defined */
+typedef unsigned long long u64;
+typedef signed long long s64;
+
+/* This structure carries a node, a leaf, and a key. It must start with the
+ * eb_node so that it can be cast into an eb_node. We could also have put some
+ * sort of transparent union here to reduce the indirection level, but the fact
+ * is, the end user is not meant to manipulate internals, so this is pointless.
+ */
+struct eb64_node {
+ struct eb_node node; /* the tree node, must be at the beginning */
+ u64 key;
+};
+
+/*
+ * Exported functions and macros.
+ * Many of them are always inlined because they are extremely small, and
+ * are generally called at most once or twice in a program.
+ */
+
+/* Return leftmost node in the tree, or NULL if none */
+static inline struct eb64_node *eb64_first(struct eb_root *root)
+{
+ return eb64_entry(eb_first(root), struct eb64_node, node);
+}
+
+/* Return rightmost node in the tree, or NULL if none */
+static inline struct eb64_node *eb64_last(struct eb_root *root)
+{
+ return eb64_entry(eb_last(root), struct eb64_node, node);
+}
+
+/* Return next node in the tree, or NULL if none */
+static inline struct eb64_node *eb64_next(struct eb64_node *eb64)
+{
+ return eb64_entry(eb_next(&eb64->node), struct eb64_node, node);
+}
+
+/* Return previous node in the tree, or NULL if none */
+static inline struct eb64_node *eb64_prev(struct eb64_node *eb64)
+{
+ return eb64_entry(eb_prev(&eb64->node), struct eb64_node, node);
+}
+
+/* Return next leaf node within a duplicate sub-tree, or NULL if none. */
+static inline struct eb64_node *eb64_next_dup(struct eb64_node *eb64)
+{
+ return eb64_entry(eb_next_dup(&eb64->node), struct eb64_node, node);
+}
+
+/* Return previous leaf node within a duplicate sub-tree, or NULL if none. */
+static inline struct eb64_node *eb64_prev_dup(struct eb64_node *eb64)
+{
+ return eb64_entry(eb_prev_dup(&eb64->node), struct eb64_node, node);
+}
+
+/* Return next node in the tree, skipping duplicates, or NULL if none */
+static inline struct eb64_node *eb64_next_unique(struct eb64_node *eb64)
+{
+ return eb64_entry(eb_next_unique(&eb64->node), struct eb64_node, node);
+}
+
+/* Return previous node in the tree, skipping duplicates, or NULL if none */
+static inline struct eb64_node *eb64_prev_unique(struct eb64_node *eb64)
+{
+ return eb64_entry(eb_prev_unique(&eb64->node), struct eb64_node, node);
+}
+
+/* Delete node from the tree if it was linked in. Mark the node unused. Note
+ * that this function relies on a non-inlined generic function: eb_delete.
+ */
+static inline void eb64_delete(struct eb64_node *eb64)
+{
+ eb_delete(&eb64->node);
+}
+
+/*
+ * The following functions are not inlined by default. They are declared
+ * in eb64tree.c, which simply relies on their inline version.
+ */
+REGPRM2 struct eb64_node *eb64_lookup(struct eb_root *root, u64 x);
+REGPRM2 struct eb64_node *eb64i_lookup(struct eb_root *root, s64 x);
+REGPRM2 struct eb64_node *eb64_lookup_le(struct eb_root *root, u64 x);
+REGPRM2 struct eb64_node *eb64_lookup_ge(struct eb_root *root, u64 x);
+REGPRM2 struct eb64_node *eb64_insert(struct eb_root *root, struct eb64_node *new);
+REGPRM2 struct eb64_node *eb64i_insert(struct eb_root *root, struct eb64_node *new);
+
+/*
+ * The following functions are less likely to be used directly, because their
+ * code is larger. The non-inlined version is preferred.
+ */
+
+/* Delete node from the tree if it was linked in. Mark the node unused. */
+static forceinline void __eb64_delete(struct eb64_node *eb64)
+{
+ __eb_delete(&eb64->node);
+}
+
+/*
+ * Find the first occurrence of a key in the tree <root>. If none can be
+ * found, return NULL.
+ */
+static forceinline struct eb64_node *__eb64_lookup(struct eb_root *root, u64 x)
+{
+ struct eb64_node *node;
+ eb_troot_t *troot;
+ u64 y;
+
+ troot = root->b[EB_LEFT];
+ if (unlikely(troot == NULL))
+ return NULL;
+
+ while (1) {
+ if ((eb_gettag(troot) == EB_LEAF)) {
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct eb64_node, node.branches);
+ if (node->key == x)
+ return node;
+ else
+ return NULL;
+ }
+ node = container_of(eb_untag(troot, EB_NODE),
+ struct eb64_node, node.branches);
+
+ y = node->key ^ x;
+ if (!y) {
+ /* Either we found the node which holds the key, or
+			 * we have a dup tree. In the latter case, we have to
+ * walk it down left to get the first entry.
+ */
+ if (node->node.bit < 0) {
+ troot = node->node.branches.b[EB_LEFT];
+ while (eb_gettag(troot) != EB_LEAF)
+ troot = (eb_untag(troot, EB_NODE))->b[EB_LEFT];
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct eb64_node, node.branches);
+ }
+ return node;
+ }
+
+ if ((y >> node->node.bit) >= EB_NODE_BRANCHES)
+ return NULL; /* no more common bits */
+
+ troot = node->node.branches.b[(x >> node->node.bit) & EB_NODE_BRANCH_MASK];
+ }
+}
+
+/*
+ * Find the first occurrence of a signed key in the tree <root>. If none can
+ * be found, return NULL.
+ */
+static forceinline struct eb64_node *__eb64i_lookup(struct eb_root *root, s64 x)
+{
+ struct eb64_node *node;
+ eb_troot_t *troot;
+ u64 key = x ^ (1ULL << 63);
+ u64 y;
+
+ troot = root->b[EB_LEFT];
+ if (unlikely(troot == NULL))
+ return NULL;
+
+ while (1) {
+ if ((eb_gettag(troot) == EB_LEAF)) {
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct eb64_node, node.branches);
+ if (node->key == (u64)x)
+ return node;
+ else
+ return NULL;
+ }
+ node = container_of(eb_untag(troot, EB_NODE),
+ struct eb64_node, node.branches);
+
+ y = node->key ^ x;
+ if (!y) {
+ /* Either we found the node which holds the key, or
+			 * we have a dup tree. In the latter case, we have to
+ * walk it down left to get the first entry.
+ */
+ if (node->node.bit < 0) {
+ troot = node->node.branches.b[EB_LEFT];
+ while (eb_gettag(troot) != EB_LEAF)
+ troot = (eb_untag(troot, EB_NODE))->b[EB_LEFT];
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct eb64_node, node.branches);
+ }
+ return node;
+ }
+
+ if ((y >> node->node.bit) >= EB_NODE_BRANCHES)
+ return NULL; /* no more common bits */
+
+ troot = node->node.branches.b[(key >> node->node.bit) & EB_NODE_BRANCH_MASK];
+ }
+}
+
+/* Insert eb64_node <new> into subtree starting at node root <root>.
+ * Only new->key needs to be set with the key. The eb64_node is returned.
+ * If root->b[EB_RGHT]==1, the tree may only contain unique keys.
+ */
+static forceinline struct eb64_node *
+__eb64_insert(struct eb_root *root, struct eb64_node *new) {
+ struct eb64_node *old;
+ unsigned int side;
+ eb_troot_t *troot;
+ u64 newkey; /* caching the key saves approximately one cycle */
+ eb_troot_t *root_right;
+ int old_node_bit;
+
+ side = EB_LEFT;
+ troot = root->b[EB_LEFT];
+ root_right = root->b[EB_RGHT];
+ if (unlikely(troot == NULL)) {
+ /* Tree is empty, insert the leaf part below the left branch */
+ root->b[EB_LEFT] = eb_dotag(&new->node.branches, EB_LEAF);
+ new->node.leaf_p = eb_dotag(root, EB_LEFT);
+ new->node.node_p = NULL; /* node part unused */
+ return new;
+ }
+
+	/* The tree descent is fairly easy:
+	 * - first, check if we have reached a leaf node
+	 * - second, check if we have gone too far
+	 * - third, reiterate
+	 * Everywhere, we use <new> for the node we are inserting, <root>
+ * for the node we attach it to, and <old> for the node we are
+ * displacing below <new>. <troot> will always point to the future node
+ * (tagged with its type). <side> carries the side the node <new> is
+ * attached to below its parent, which is also where previous node
+ * was attached. <newkey> carries the key being inserted.
+ */
+ newkey = new->key;
+
+ while (1) {
+ if (unlikely(eb_gettag(troot) == EB_LEAF)) {
+ eb_troot_t *new_left, *new_rght;
+ eb_troot_t *new_leaf, *old_leaf;
+
+ old = container_of(eb_untag(troot, EB_LEAF),
+ struct eb64_node, node.branches);
+
+ new_left = eb_dotag(&new->node.branches, EB_LEFT);
+ new_rght = eb_dotag(&new->node.branches, EB_RGHT);
+ new_leaf = eb_dotag(&new->node.branches, EB_LEAF);
+ old_leaf = eb_dotag(&old->node.branches, EB_LEAF);
+
+ new->node.node_p = old->node.leaf_p;
+
+ /* Right here, we have 3 possibilities :
+ - the tree does not contain the key, and we have
+ new->key < old->key. We insert new above old, on
+ the left ;
+
+ - the tree does not contain the key, and we have
+ new->key > old->key. We insert new above old, on
+ the right ;
+
+ - the tree does contain the key, which implies it
+ is alone. We add the new key next to it as a
+ first duplicate.
+
+ The last two cases can easily be partially merged.
+ */
+
+ if (new->key < old->key) {
+ new->node.leaf_p = new_left;
+ old->node.leaf_p = new_rght;
+ new->node.branches.b[EB_LEFT] = new_leaf;
+ new->node.branches.b[EB_RGHT] = old_leaf;
+ } else {
+ /* we may refuse to duplicate this key if the tree is
+ * tagged as containing only unique keys.
+ */
+ if ((new->key == old->key) && eb_gettag(root_right))
+ return old;
+
+				/* new->key >= old->key, new goes to the right */
+ old->node.leaf_p = new_left;
+ new->node.leaf_p = new_rght;
+ new->node.branches.b[EB_LEFT] = old_leaf;
+ new->node.branches.b[EB_RGHT] = new_leaf;
+
+ if (new->key == old->key) {
+ new->node.bit = -1;
+ root->b[side] = eb_dotag(&new->node.branches, EB_NODE);
+ return new;
+ }
+ }
+ break;
+ }
+
+ /* OK we're walking down this link */
+ old = container_of(eb_untag(troot, EB_NODE),
+ struct eb64_node, node.branches);
+ old_node_bit = old->node.bit;
+
+ /* Stop going down when we don't have common bits anymore. We
+ * also stop in front of a duplicates tree because it means we
+ * have to insert above.
+ */
+
+ if ((old_node_bit < 0) || /* we're above a duplicate tree, stop here */
+ (((new->key ^ old->key) >> old_node_bit) >= EB_NODE_BRANCHES)) {
+ /* The tree did not contain the key, so we insert <new> before the node
+ * <old>, and set ->bit to designate the lowest bit position in <new>
+ * which applies to ->branches.b[].
+ */
+ eb_troot_t *new_left, *new_rght;
+ eb_troot_t *new_leaf, *old_node;
+
+ new_left = eb_dotag(&new->node.branches, EB_LEFT);
+ new_rght = eb_dotag(&new->node.branches, EB_RGHT);
+ new_leaf = eb_dotag(&new->node.branches, EB_LEAF);
+ old_node = eb_dotag(&old->node.branches, EB_NODE);
+
+ new->node.node_p = old->node.node_p;
+
+ if (new->key < old->key) {
+ new->node.leaf_p = new_left;
+ old->node.node_p = new_rght;
+ new->node.branches.b[EB_LEFT] = new_leaf;
+ new->node.branches.b[EB_RGHT] = old_node;
+ }
+ else if (new->key > old->key) {
+ old->node.node_p = new_left;
+ new->node.leaf_p = new_rght;
+ new->node.branches.b[EB_LEFT] = old_node;
+ new->node.branches.b[EB_RGHT] = new_leaf;
+ }
+ else {
+ struct eb_node *ret;
+ ret = eb_insert_dup(&old->node, &new->node);
+ return container_of(ret, struct eb64_node, node);
+ }
+ break;
+ }
+
+ /* walk down */
+ root = &old->node.branches;
+#if BITS_PER_LONG >= 64
+ side = (newkey >> old_node_bit) & EB_NODE_BRANCH_MASK;
+#else
+ side = newkey;
+ side >>= old_node_bit;
+ if (old_node_bit >= 32) {
+ side = newkey >> 32;
+ side >>= old_node_bit & 0x1F;
+ }
+ side &= EB_NODE_BRANCH_MASK;
+#endif
+ troot = root->b[side];
+ }
+
+ /* Ok, now we are inserting <new> between <root> and <old>. <old>'s
+ * parent is already set to <new>, and the <root>'s branch is still in
+ * <side>. Update the root's leaf till we have it. Note that we can also
+ * find the side by checking the side of new->node.node_p.
+ */
+
+ /* We need the common higher bits between new->key and old->key.
+ * What differences are there between new->key and the node here ?
+ * NOTE that bit(new) is always < bit(root) because highest
+ * bit of new->key and old->key are identical here (otherwise they
+ * would sit on different branches).
+ */
+ // note that if EB_NODE_BITS > 1, we should check that it's still >= 0
+ new->node.bit = fls64(new->key ^ old->key) - EB_NODE_BITS;
+ root->b[side] = eb_dotag(&new->node.branches, EB_NODE);
+
+ return new;
+}
+
+/* Insert eb64_node <new> into subtree starting at node root <root>, using
+ * signed keys. Only new->key needs to be set with the key. The eb64_node
+ * is returned. If root->b[EB_RGHT]==1, the tree may only contain unique keys.
+ */
+static forceinline struct eb64_node *
+__eb64i_insert(struct eb_root *root, struct eb64_node *new) {
+ struct eb64_node *old;
+ unsigned int side;
+ eb_troot_t *troot;
+ u64 newkey; /* caching the key saves approximately one cycle */
+ eb_troot_t *root_right;
+ int old_node_bit;
+
+ side = EB_LEFT;
+ troot = root->b[EB_LEFT];
+ root_right = root->b[EB_RGHT];
+ if (unlikely(troot == NULL)) {
+ /* Tree is empty, insert the leaf part below the left branch */
+ root->b[EB_LEFT] = eb_dotag(&new->node.branches, EB_LEAF);
+ new->node.leaf_p = eb_dotag(root, EB_LEFT);
+ new->node.node_p = NULL; /* node part unused */
+ return new;
+ }
+
+ /* The tree descent is fairly easy :
+ * - first, check if we have reached a leaf node
+ * - second, check if we have gone too far
+ * - third, reiterate
+ * Everywhere, we use <new> for the node we are inserting, <root>
+ * for the node we attach it to, and <old> for the node we are
+ * displacing below <new>. <troot> will always point to the future node
+ * (tagged with its type). <side> carries the side the node <new> is
+ * attached to below its parent, which is also where previous node
+ * was attached. <newkey> carries a high bit shift of the key being
+ * inserted in order to have negative keys stored before positive
+ * ones.
+ */
+ newkey = new->key ^ (1ULL << 63);
+
+ while (1) {
+ if (unlikely(eb_gettag(troot) == EB_LEAF)) {
+ eb_troot_t *new_left, *new_rght;
+ eb_troot_t *new_leaf, *old_leaf;
+
+ old = container_of(eb_untag(troot, EB_LEAF),
+ struct eb64_node, node.branches);
+
+ new_left = eb_dotag(&new->node.branches, EB_LEFT);
+ new_rght = eb_dotag(&new->node.branches, EB_RGHT);
+ new_leaf = eb_dotag(&new->node.branches, EB_LEAF);
+ old_leaf = eb_dotag(&old->node.branches, EB_LEAF);
+
+ new->node.node_p = old->node.leaf_p;
+
+ /* Right here, we have 3 possibilities :
+ - the tree does not contain the key, and we have
+ new->key < old->key. We insert new above old, on
+ the left ;
+
+ - the tree does not contain the key, and we have
+ new->key > old->key. We insert new above old, on
+ the right ;
+
+ - the tree does contain the key, which implies it
+ is alone. We add the new key next to it as a
+ first duplicate.
+
+ The last two cases can easily be partially merged.
+ */
+
+ if ((s64)new->key < (s64)old->key) {
+ new->node.leaf_p = new_left;
+ old->node.leaf_p = new_rght;
+ new->node.branches.b[EB_LEFT] = new_leaf;
+ new->node.branches.b[EB_RGHT] = old_leaf;
+ } else {
+ /* we may refuse to duplicate this key if the tree is
+ * tagged as containing only unique keys.
+ */
+ if ((new->key == old->key) && eb_gettag(root_right))
+ return old;
+
+ /* new->key >= old->key, new goes to the right */
+ old->node.leaf_p = new_left;
+ new->node.leaf_p = new_rght;
+ new->node.branches.b[EB_LEFT] = old_leaf;
+ new->node.branches.b[EB_RGHT] = new_leaf;
+
+ if (new->key == old->key) {
+ new->node.bit = -1;
+ root->b[side] = eb_dotag(&new->node.branches, EB_NODE);
+ return new;
+ }
+ }
+ break;
+ }
+
+ /* OK we're walking down this link */
+ old = container_of(eb_untag(troot, EB_NODE),
+ struct eb64_node, node.branches);
+ old_node_bit = old->node.bit;
+
+ /* Stop going down when we don't have common bits anymore. We
+ * also stop in front of a duplicates tree because it means we
+ * have to insert above.
+ */
+
+ if ((old_node_bit < 0) || /* we're above a duplicate tree, stop here */
+ (((new->key ^ old->key) >> old_node_bit) >= EB_NODE_BRANCHES)) {
+ /* The tree did not contain the key, so we insert <new> before the node
+ * <old>, and set ->bit to designate the lowest bit position in <new>
+ * which applies to ->branches.b[].
+ */
+ eb_troot_t *new_left, *new_rght;
+ eb_troot_t *new_leaf, *old_node;
+
+ new_left = eb_dotag(&new->node.branches, EB_LEFT);
+ new_rght = eb_dotag(&new->node.branches, EB_RGHT);
+ new_leaf = eb_dotag(&new->node.branches, EB_LEAF);
+ old_node = eb_dotag(&old->node.branches, EB_NODE);
+
+ new->node.node_p = old->node.node_p;
+
+ if ((s64)new->key < (s64)old->key) {
+ new->node.leaf_p = new_left;
+ old->node.node_p = new_rght;
+ new->node.branches.b[EB_LEFT] = new_leaf;
+ new->node.branches.b[EB_RGHT] = old_node;
+ }
+ else if ((s64)new->key > (s64)old->key) {
+ old->node.node_p = new_left;
+ new->node.leaf_p = new_rght;
+ new->node.branches.b[EB_LEFT] = old_node;
+ new->node.branches.b[EB_RGHT] = new_leaf;
+ }
+ else {
+ struct eb_node *ret;
+ ret = eb_insert_dup(&old->node, &new->node);
+ return container_of(ret, struct eb64_node, node);
+ }
+ break;
+ }
+
+ /* walk down */
+ root = &old->node.branches;
+#if BITS_PER_LONG >= 64
+ side = (newkey >> old_node_bit) & EB_NODE_BRANCH_MASK;
+#else
+ side = newkey;
+ side >>= old_node_bit;
+ if (old_node_bit >= 32) {
+ side = newkey >> 32;
+ side >>= old_node_bit & 0x1F;
+ }
+ side &= EB_NODE_BRANCH_MASK;
+#endif
+ troot = root->b[side];
+ }
+
+ /* Ok, now we are inserting <new> between <root> and <old>. <old>'s
+ * parent is already set to <new>, and the <root>'s branch is still in
+ * <side>. Update the root's leaf till we have it. Note that we can also
+ * find the side by checking the side of new->node.node_p.
+ */
+
+ /* We need the common higher bits between new->key and old->key.
+ * What differences are there between new->key and the node here ?
+ * NOTE that bit(new) is always < bit(root) because highest
+ * bit of new->key and old->key are identical here (otherwise they
+ * would sit on different branches).
+ */
+ // note that if EB_NODE_BITS > 1, we should check that it's still >= 0
+ new->node.bit = fls64(new->key ^ old->key) - EB_NODE_BITS;
+ root->b[side] = eb_dotag(&new->node.branches, EB_NODE);
+
+ return new;
+}
+
+#endif /* _EB64_TREE_H */
--- /dev/null
+/*
+ * Elastic Binary Trees - exported functions for Indirect Multi-Byte data nodes.
+ * Version 6.0.6
+ * (C) 2002-2011 - Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/* Consult ebimtree.h for more details about those functions */
+
+#include "ebpttree.h"
+#include "ebimtree.h"
+
+/* Find the first occurrence of a key of <len> bytes in the tree <root>.
+ * If none can be found, return NULL.
+ */
+REGPRM3 struct ebpt_node *
+ebim_lookup(struct eb_root *root, const void *x, unsigned int len)
+{
+ return __ebim_lookup(root, x, len);
+}
+
+/* Insert ebpt_node <new> into subtree starting at node root <root>.
+ * Only new->key needs to be set with the key. The ebpt_node is returned.
+ * If root->b[EB_RGHT]==1, the tree may only contain unique keys. The
+ * len is specified in bytes.
+ */
+REGPRM3 struct ebpt_node *
+ebim_insert(struct eb_root *root, struct ebpt_node *new, unsigned int len)
+{
+ return __ebim_insert(root, new, len);
+}
--- /dev/null
+/*
+ * Elastic Binary Trees - macros for Indirect Multi-Byte data nodes.
+ * Version 6.0.6
+ * (C) 2002-2011 - Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _EBIMTREE_H
+#define _EBIMTREE_H
+
+#include <string.h>
+#include "ebtree.h"
+#include "ebpttree.h"
+
+/* These functions and macros rely on Pointer nodes and use the <key> entry as
+ * a pointer to an indirect key. Most operations are performed using ebpt_*.
+ */
+
+/* The following functions are not inlined by default. They are declared
+ * in ebimtree.c, which simply relies on their inline version.
+ */
+REGPRM3 struct ebpt_node *ebim_lookup(struct eb_root *root, const void *x, unsigned int len);
+REGPRM3 struct ebpt_node *ebim_insert(struct eb_root *root, struct ebpt_node *new, unsigned int len);
+
+/* Find the first occurrence of a key of at least <len> bytes matching <x> in the
+ * tree <root>. The caller is responsible for ensuring that <len> will not exceed
+ * the common parts between the tree's keys and <x>. In case of multiple matches,
+ * the leftmost node is returned. This means that this function can be used to
+ * lookup string keys by prefix if all keys in the tree are zero-terminated. If
+ * no match is found, NULL is returned. Returns first node if <len> is zero.
+ */
+static forceinline struct ebpt_node *
+__ebim_lookup(struct eb_root *root, const void *x, unsigned int len)
+{
+ struct ebpt_node *node;
+ eb_troot_t *troot;
+ int pos, side;
+ int node_bit;
+
+ troot = root->b[EB_LEFT];
+ if (unlikely(troot == NULL))
+ goto ret_null;
+
+ if (unlikely(len == 0))
+ goto walk_down;
+
+ pos = 0;
+ while (1) {
+ if (eb_gettag(troot) == EB_LEAF) {
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct ebpt_node, node.branches);
+ if (memcmp(node->key + pos, x, len) != 0)
+ goto ret_null;
+ else
+ goto ret_node;
+ }
+ node = container_of(eb_untag(troot, EB_NODE),
+ struct ebpt_node, node.branches);
+
+ node_bit = node->node.bit;
+ if (node_bit < 0) {
+ /* We have a dup tree now. Either it's for the same
+ * value, and we walk down left, or it's a different
+ * one and we don't have our key.
+ */
+ if (memcmp(node->key + pos, x, len) != 0)
+ goto ret_null;
+ else
+ goto walk_left;
+ }
+
+ /* OK, normal data node, let's walk down. We check if all full
+ * bytes are equal, and we start from the last one we did not
+ * completely check. We stop as soon as we reach the last byte,
+ * because we must decide to go left/right or abort.
+ */
+ node_bit = ~node_bit + (pos << 3) + 8; // = (pos<<3) + (7 - node_bit)
+ if (node_bit < 0) {
+ /* This surprising construction gives better performance
+ * because gcc does not try to reorder the loop. Tested to
+ * be fine with 2.95 to 4.2.
+ */
+ while (1) {
+ if (*(unsigned char*)(node->key + pos++) ^ *(unsigned char*)(x++))
+ goto ret_null; /* more than one full byte is different */
+ if (--len == 0)
+ goto walk_left; /* return first node if all bytes matched */
+ node_bit += 8;
+ if (node_bit >= 0)
+ break;
+ }
+ }
+
+ /* here we know that only the last byte differs, so node_bit < 8.
+ * We have 2 possibilities :
+ * - more than the last bit differs => return NULL
+ * - walk down on side = (x[pos] >> node_bit) & 1
+ */
+ side = *(unsigned char *)x >> node_bit;
+ if (((*(unsigned char*)(node->key + pos) >> node_bit) ^ side) > 1)
+ goto ret_null;
+ side &= 1;
+ troot = node->node.branches.b[side];
+ }
+ walk_left:
+ troot = node->node.branches.b[EB_LEFT];
+ walk_down:
+ while (eb_gettag(troot) != EB_LEAF)
+ troot = (eb_untag(troot, EB_NODE))->b[EB_LEFT];
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct ebpt_node, node.branches);
+ ret_node:
+ return node;
+ ret_null:
+ return NULL;
+}
+
+/* Insert ebpt_node <new> into subtree starting at node root <root>.
+ * Only new->key needs to be set with the key. The ebpt_node is returned.
+ * If root->b[EB_RGHT]==1, the tree may only contain unique keys. The
+ * len is specified in bytes.
+ */
+static forceinline struct ebpt_node *
+__ebim_insert(struct eb_root *root, struct ebpt_node *new, unsigned int len)
+{
+ struct ebpt_node *old;
+ unsigned int side;
+ eb_troot_t *troot;
+ eb_troot_t *root_right;
+ int diff;
+ int bit;
+ int old_node_bit;
+
+ side = EB_LEFT;
+ troot = root->b[EB_LEFT];
+ root_right = root->b[EB_RGHT];
+ if (unlikely(troot == NULL)) {
+ /* Tree is empty, insert the leaf part below the left branch */
+ root->b[EB_LEFT] = eb_dotag(&new->node.branches, EB_LEAF);
+ new->node.leaf_p = eb_dotag(root, EB_LEFT);
+ new->node.node_p = NULL; /* node part unused */
+ return new;
+ }
+
+ len <<= 3;
+
+ /* The tree descent is fairly easy :
+ * - first, check if we have reached a leaf node
+ * - second, check if we have gone too far
+ * - third, reiterate
+ * Everywhere, we use <new> for the node we are inserting, <root>
+ * for the node we attach it to, and <old> for the node we are
+ * displacing below <new>. <troot> will always point to the future node
+ * (tagged with its type). <side> carries the side the node <new> is
+ * attached to below its parent, which is also where previous node
+ * was attached.
+ */
+
+ bit = 0;
+ while (1) {
+ if (unlikely(eb_gettag(troot) == EB_LEAF)) {
+ eb_troot_t *new_left, *new_rght;
+ eb_troot_t *new_leaf, *old_leaf;
+
+ old = container_of(eb_untag(troot, EB_LEAF),
+ struct ebpt_node, node.branches);
+
+ new_left = eb_dotag(&new->node.branches, EB_LEFT);
+ new_rght = eb_dotag(&new->node.branches, EB_RGHT);
+ new_leaf = eb_dotag(&new->node.branches, EB_LEAF);
+ old_leaf = eb_dotag(&old->node.branches, EB_LEAF);
+
+ new->node.node_p = old->node.leaf_p;
+
+ /* Right here, we have 3 possibilities :
+ * - the tree does not contain the key, and we have
+ * new->key < old->key. We insert new above old, on
+ * the left ;
+ *
+ * - the tree does not contain the key, and we have
+ * new->key > old->key. We insert new above old, on
+ * the right ;
+ *
+ * - the tree does contain the key, which implies it
+ * is alone. We add the new key next to it as a
+ * first duplicate.
+ *
+ * The last two cases can easily be partially merged.
+ */
+ bit = equal_bits(new->key, old->key, bit, len);
+
+ /* Note: we can compare more bits than the current node's because as
+ * long as they are identical, we know we descend along the correct
+ * side. However we don't want to start to compare past the end.
+ */
+ diff = 0;
+ if (((unsigned)bit >> 3) < len)
+ diff = cmp_bits(new->key, old->key, bit);
+
+ if (diff < 0) {
+ new->node.leaf_p = new_left;
+ old->node.leaf_p = new_rght;
+ new->node.branches.b[EB_LEFT] = new_leaf;
+ new->node.branches.b[EB_RGHT] = old_leaf;
+ } else {
+ /* we may refuse to duplicate this key if the tree is
+ * tagged as containing only unique keys.
+ */
+ if (diff == 0 && eb_gettag(root_right))
+ return old;
+
+ /* new->key >= old->key, new goes to the right */
+ old->node.leaf_p = new_left;
+ new->node.leaf_p = new_rght;
+ new->node.branches.b[EB_LEFT] = old_leaf;
+ new->node.branches.b[EB_RGHT] = new_leaf;
+
+ if (diff == 0) {
+ new->node.bit = -1;
+ root->b[side] = eb_dotag(&new->node.branches, EB_NODE);
+ return new;
+ }
+ }
+ break;
+ }
+
+ /* OK we're walking down this link */
+ old = container_of(eb_untag(troot, EB_NODE),
+ struct ebpt_node, node.branches);
+ old_node_bit = old->node.bit;
+
+ /* Stop going down when we don't have common bits anymore. We
+ * also stop in front of a duplicates tree because it means we
+ * have to insert above. Note: we can compare more bits than
+ * the current node's because as long as they are identical, we
+ * know we descend along the correct side.
+ */
+ if (old_node_bit < 0) {
+ /* we're above a duplicate tree, we must compare till the end */
+ bit = equal_bits(new->key, old->key, bit, len);
+ goto dup_tree;
+ }
+ else if (bit < old_node_bit) {
+ bit = equal_bits(new->key, old->key, bit, old_node_bit);
+ }
+
+ if (bit < old_node_bit) { /* we don't have all bits in common */
+ /* The tree did not contain the key, so we insert <new> before the node
+ * <old>, and set ->bit to designate the lowest bit position in <new>
+ * which applies to ->branches.b[].
+ */
+ eb_troot_t *new_left, *new_rght;
+ eb_troot_t *new_leaf, *old_node;
+
+ dup_tree:
+ new_left = eb_dotag(&new->node.branches, EB_LEFT);
+ new_rght = eb_dotag(&new->node.branches, EB_RGHT);
+ new_leaf = eb_dotag(&new->node.branches, EB_LEAF);
+ old_node = eb_dotag(&old->node.branches, EB_NODE);
+
+ new->node.node_p = old->node.node_p;
+
+ /* Note: we can compare more bits than the current node's because as
+ * long as they are identical, we know we descend along the correct
+ * side. However we don't want to start to compare past the end.
+ */
+ diff = 0;
+ if (((unsigned)bit >> 3) < len)
+ diff = cmp_bits(new->key, old->key, bit);
+
+ if (diff < 0) {
+ new->node.leaf_p = new_left;
+ old->node.node_p = new_rght;
+ new->node.branches.b[EB_LEFT] = new_leaf;
+ new->node.branches.b[EB_RGHT] = old_node;
+ }
+ else if (diff > 0) {
+ old->node.node_p = new_left;
+ new->node.leaf_p = new_rght;
+ new->node.branches.b[EB_LEFT] = old_node;
+ new->node.branches.b[EB_RGHT] = new_leaf;
+ }
+ else {
+ struct eb_node *ret;
+ ret = eb_insert_dup(&old->node, &new->node);
+ return container_of(ret, struct ebpt_node, node);
+ }
+ break;
+ }
+
+ /* walk down */
+ root = &old->node.branches;
+ side = (((unsigned char *)new->key)[old_node_bit >> 3] >> (~old_node_bit & 7)) & 1;
+ troot = root->b[side];
+ }
+
+ /* Ok, now we are inserting <new> between <root> and <old>. <old>'s
+ * parent is already set to <new>, and the <root>'s branch is still in
+ * <side>. Update the root's leaf till we have it. Note that we can also
+ * find the side by checking the side of new->node.node_p.
+ */
+
+ /* We need the common higher bits between new->key and old->key.
+ * This number of bits is already in <bit>.
+ */
+ new->node.bit = bit;
+ root->b[side] = eb_dotag(&new->node.branches, EB_NODE);
+ return new;
+}
+
+#endif /* _EBIMTREE_H */
--- /dev/null
+/*
+ * Elastic Binary Trees - exported functions for Indirect String data nodes.
+ * Version 6.0.6
+ * (C) 2002-2011 - Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/* Consult ebistree.h for more details about those functions */
+
+#include "ebistree.h"
+
+/* Find the first occurrence of a zero-terminated string <x> in the tree <root>.
+ * It's the caller's responsibility to use this function only on trees which
+ * only contain zero-terminated strings. If none can be found, return NULL.
+ */
+REGPRM2 struct ebpt_node *ebis_lookup(struct eb_root *root, const char *x)
+{
+ return __ebis_lookup(root, x);
+}
+
+/* Insert ebpt_node <new> into subtree starting at node root <root>. Only
+ * new->key needs to be set with the zero-terminated string key. The ebpt_node is
+ * returned. If root->b[EB_RGHT]==1, the tree may only contain unique keys. The
+ * caller is responsible for properly terminating the key with a zero.
+ */
+REGPRM2 struct ebpt_node *ebis_insert(struct eb_root *root, struct ebpt_node *new)
+{
+ return __ebis_insert(root, new);
+}
--- /dev/null
+/*
+ * Elastic Binary Trees - macros to manipulate Indirect String data nodes.
+ * Version 6.0.6
+ * (C) 2002-2011 - Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/* These functions and macros rely on Multi-Byte nodes */
+
+#ifndef _EBISTREE_H
+#define _EBISTREE_H
+
+#include <string.h>
+#include "ebtree.h"
+#include "ebpttree.h"
+#include "ebimtree.h"
+
+/* These functions and macros rely on Pointer nodes and use the <key> entry as
+ * a pointer to an indirect key. Most operations are performed using ebpt_*.
+ */
+
+/* The following functions are not inlined by default. They are declared
+ * in ebistree.c, which simply relies on their inline version.
+ */
+REGPRM2 struct ebpt_node *ebis_lookup(struct eb_root *root, const char *x);
+REGPRM2 struct ebpt_node *ebis_insert(struct eb_root *root, struct ebpt_node *new);
+
+/* Find the first occurrence of a length <len> string <x> in the tree <root>.
+ * It's the caller's responsibility to use this function only on trees which
+ * only contain zero-terminated strings, and that no null character is present
+ * in string <x> in the first <len> chars. If none can be found, return NULL.
+ */
+static forceinline struct ebpt_node *
+ebis_lookup_len(struct eb_root *root, const char *x, unsigned int len)
+{
+ struct ebpt_node *node;
+
+ node = ebim_lookup(root, x, len);
+ if (!node || ((const char *)node->key)[len] != 0)
+ return NULL;
+ return node;
+}
+
+/* Find the first occurrence of a zero-terminated string <x> in the tree <root>.
+ * It's the caller's responsibility to use this function only on trees which
+ * only contain zero-terminated strings. If none can be found, return NULL.
+ */
+static forceinline struct ebpt_node *__ebis_lookup(struct eb_root *root, const void *x)
+{
+ struct ebpt_node *node;
+ eb_troot_t *troot;
+ int bit;
+ int node_bit;
+
+ troot = root->b[EB_LEFT];
+ if (unlikely(troot == NULL))
+ return NULL;
+
+ bit = 0;
+ while (1) {
+ if ((eb_gettag(troot) == EB_LEAF)) {
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct ebpt_node, node.branches);
+ if (strcmp(node->key, x) == 0)
+ return node;
+ else
+ return NULL;
+ }
+ node = container_of(eb_untag(troot, EB_NODE),
+ struct ebpt_node, node.branches);
+ node_bit = node->node.bit;
+
+ if (node_bit < 0) {
+ /* We have a dup tree now. Either it's for the same
+ * value, and we walk down left, or it's a different
+ * one and we don't have our key.
+ */
+ if (strcmp(node->key, x) != 0)
+ return NULL;
+
+ troot = node->node.branches.b[EB_LEFT];
+ while (eb_gettag(troot) != EB_LEAF)
+ troot = (eb_untag(troot, EB_NODE))->b[EB_LEFT];
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct ebpt_node, node.branches);
+ return node;
+ }
+
+ /* OK, normal data node, let's walk down but don't compare data
+ * if we already reached the end of the key.
+ */
+ if (likely(bit >= 0)) {
+ bit = string_equal_bits(x, node->key, bit);
+ if (likely(bit < node_bit)) {
+ if (bit >= 0)
+ return NULL; /* no more common bits */
+
+ /* bit < 0 : we reached the end of the key. If we
+ * are in a tree with unique keys, we can return
+ * this node. Otherwise we have to walk it down
+ * and stop comparing bits.
+ */
+ if (eb_gettag(root->b[EB_RGHT]))
+ return node;
+ }
+ /* if the bit is larger than the node's, we must bound it
+ * because we might have compared too many bytes with an
+ * inappropriate leaf. For a test, build a tree from "0",
+ * "WW", "W", "S" inserted in this exact sequence and lookup
+ * "W" => "S" is returned without this assignment.
+ */
+ else
+ bit = node_bit;
+ }
+
+ troot = node->node.branches.b[(((unsigned char*)x)[node_bit >> 3] >>
+ (~node_bit & 7)) & 1];
+ }
+}
+
+/* Insert ebpt_node <new> into subtree starting at node root <root>. Only
+ * new->key needs to be set with the zero-terminated string key. The ebpt_node is
+ * returned. If root->b[EB_RGHT]==1, the tree may only contain unique keys. The
+ * caller is responsible for properly terminating the key with a zero.
+ */
+static forceinline struct ebpt_node *
+__ebis_insert(struct eb_root *root, struct ebpt_node *new)
+{
+ struct ebpt_node *old;
+ unsigned int side;
+ eb_troot_t *troot;
+ eb_troot_t *root_right;
+ int diff;
+ int bit;
+ int old_node_bit;
+
+ side = EB_LEFT;
+ troot = root->b[EB_LEFT];
+ root_right = root->b[EB_RGHT];
+ if (unlikely(troot == NULL)) {
+ /* Tree is empty, insert the leaf part below the left branch */
+ root->b[EB_LEFT] = eb_dotag(&new->node.branches, EB_LEAF);
+ new->node.leaf_p = eb_dotag(root, EB_LEFT);
+ new->node.node_p = NULL; /* node part unused */
+ return new;
+ }
+
+ /* The tree descent is fairly easy :
+ * - first, check if we have reached a leaf node
+ * - second, check if we have gone too far
+ * - third, reiterate
+ * Everywhere, we use <new> for the node we are inserting, <root>
+ * for the node we attach it to, and <old> for the node we are
+ * displacing below <new>. <troot> will always point to the future node
+ * (tagged with its type). <side> carries the side the node <new> is
+ * attached to below its parent, which is also where previous node
+ * was attached.
+ */
+
+ bit = 0;
+ while (1) {
+ if (unlikely(eb_gettag(troot) == EB_LEAF)) {
+ eb_troot_t *new_left, *new_rght;
+ eb_troot_t *new_leaf, *old_leaf;
+
+ old = container_of(eb_untag(troot, EB_LEAF),
+ struct ebpt_node, node.branches);
+
+ new_left = eb_dotag(&new->node.branches, EB_LEFT);
+ new_rght = eb_dotag(&new->node.branches, EB_RGHT);
+ new_leaf = eb_dotag(&new->node.branches, EB_LEAF);
+ old_leaf = eb_dotag(&old->node.branches, EB_LEAF);
+
+ new->node.node_p = old->node.leaf_p;
+
+ /* Right here, we have 3 possibilities :
+ * - the tree does not contain the key, and we have
+ * new->key < old->key. We insert new above old, on
+ * the left ;
+ *
+ * - the tree does not contain the key, and we have
+ * new->key > old->key. We insert new above old, on
+ * the right ;
+ *
+ * - the tree does contain the key, which implies it
+ * is alone. We add the new key next to it as a
+ * first duplicate.
+ *
+ * The last two cases can easily be partially merged.
+ */
+ if (bit >= 0)
+ bit = string_equal_bits(new->key, old->key, bit);
+
+ if (bit < 0) {
+ /* key was already there */
+
+ /* we may refuse to duplicate this key if the tree is
+ * tagged as containing only unique keys.
+ */
+ if (eb_gettag(root_right))
+ return old;
+
+ /* new arbitrarily goes to the right and tops the dup tree */
+ old->node.leaf_p = new_left;
+ new->node.leaf_p = new_rght;
+ new->node.branches.b[EB_LEFT] = old_leaf;
+ new->node.branches.b[EB_RGHT] = new_leaf;
+ new->node.bit = -1;
+ root->b[side] = eb_dotag(&new->node.branches, EB_NODE);
+ return new;
+ }
+
+ diff = cmp_bits(new->key, old->key, bit);
+ if (diff < 0) {
+ /* new->key < old->key, new takes the left */
+ new->node.leaf_p = new_left;
+ old->node.leaf_p = new_rght;
+ new->node.branches.b[EB_LEFT] = new_leaf;
+ new->node.branches.b[EB_RGHT] = old_leaf;
+ } else {
+ /* new->key > old->key, new takes the right */
+ old->node.leaf_p = new_left;
+ new->node.leaf_p = new_rght;
+ new->node.branches.b[EB_LEFT] = old_leaf;
+ new->node.branches.b[EB_RGHT] = new_leaf;
+ }
+ break;
+ }
+
+ /* OK we're walking down this link */
+ old = container_of(eb_untag(troot, EB_NODE),
+ struct ebpt_node, node.branches);
+ old_node_bit = old->node.bit;
+
+ /* Stop going down when we don't have common bits anymore. We
+ * also stop in front of a duplicates tree because it means we
+ * have to insert above. Note: we can compare more bits than
+ * the current node's because as long as they are identical, we
+ * know we descend along the correct side.
+ */
+ if (bit >= 0 && (bit < old_node_bit || old_node_bit < 0))
+ bit = string_equal_bits(new->key, old->key, bit);
+
+ if (unlikely(bit < 0)) {
+ /* Perfect match, we must only stop on head of dup tree
+ * or walk down to a leaf.
+ */
+ if (old_node_bit < 0) {
+ /* We know here that string_equal_bits matched all
+ * bits and that we're on top of a dup tree, then
+ * we can perform the dup insertion and return.
+ */
+ struct eb_node *ret;
+ ret = eb_insert_dup(&old->node, &new->node);
+ return container_of(ret, struct ebpt_node, node);
+ }
+ /* OK so let's walk down */
+ }
+ else if (bit < old_node_bit || old_node_bit < 0) {
+ /* The tree did not contain the key, or we stopped on top of a dup
+ * tree, possibly containing the key. In the former case, we insert
+ * <new> before the node <old>, and set ->bit to designate the lowest
+ * bit position in <new> which applies to ->branches.b[]. In the latter
+ * case, we add the key to the existing dup tree. Note that we cannot
+ * enter here if we match an intermediate node's key that is not the
+ * head of a dup tree.
+ */
+ eb_troot_t *new_left, *new_rght;
+ eb_troot_t *new_leaf, *old_node;
+
+ new_left = eb_dotag(&new->node.branches, EB_LEFT);
+ new_rght = eb_dotag(&new->node.branches, EB_RGHT);
+ new_leaf = eb_dotag(&new->node.branches, EB_LEAF);
+ old_node = eb_dotag(&old->node.branches, EB_NODE);
+
+ new->node.node_p = old->node.node_p;
+
+ /* we can never match all bits here */
+ diff = cmp_bits(new->key, old->key, bit);
+ if (diff < 0) {
+ new->node.leaf_p = new_left;
+ old->node.node_p = new_rght;
+ new->node.branches.b[EB_LEFT] = new_leaf;
+ new->node.branches.b[EB_RGHT] = old_node;
+ }
+ else {
+ old->node.node_p = new_left;
+ new->node.leaf_p = new_rght;
+ new->node.branches.b[EB_LEFT] = old_node;
+ new->node.branches.b[EB_RGHT] = new_leaf;
+ }
+ break;
+ }
+
+ /* walk down */
+ root = &old->node.branches;
+ side = (((unsigned char *)new->key)[old_node_bit >> 3] >> (~old_node_bit & 7)) & 1;
+ troot = root->b[side];
+ }
+
+ /* Ok, now we are inserting <new> between <root> and <old>. <old>'s
+ * parent is already set to <new>, and the <root>'s branch is still in
+ * <side>. Update the root's leaf till we have it. Note that we can also
+ * find the side by checking the side of new->node.node_p.
+ */
+
+ /* We need the common higher bits between new->key and old->key.
+ * This number of bits is already in <bit>.
+ * NOTE: we can't get here with bit < 0 since we found a dup!
+ */
+ new->node.bit = bit;
+ root->b[side] = eb_dotag(&new->node.branches, EB_NODE);
+ return new;
+}
+
+#endif /* _EBISTREE_H */
--- /dev/null
+/*
+ * Elastic Binary Trees - exported functions for Multi-Byte data nodes.
+ * Version 6.0.6
+ * (C) 2002-2011 - Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/* Consult ebmbtree.h for more details about those functions */
+
+#include "ebmbtree.h"
+
+/* Find the first occurrence of a key of <len> bytes in the tree <root>.
+ * If none can be found, return NULL.
+ */
+REGPRM3 struct ebmb_node *
+ebmb_lookup(struct eb_root *root, const void *x, unsigned int len)
+{
+ return __ebmb_lookup(root, x, len);
+}
+
+/* Insert ebmb_node <new> into subtree starting at node root <root>.
+ * Only new->key needs to be set with the key. The ebmb_node is returned.
+ * If root->b[EB_RGHT]==1, the tree may only contain unique keys. The
+ * len is specified in bytes.
+ */
+REGPRM3 struct ebmb_node *
+ebmb_insert(struct eb_root *root, struct ebmb_node *new, unsigned int len)
+{
+ return __ebmb_insert(root, new, len);
+}
+
+/* Find the first occurrence of the longest prefix matching a key <x> in the
+ * tree <root>. It's the caller's responsibility to ensure that key <x> is at
+ * least as long as the keys in the tree. If none can be found, return NULL.
+ */
+REGPRM2 struct ebmb_node *
+ebmb_lookup_longest(struct eb_root *root, const void *x)
+{
+ return __ebmb_lookup_longest(root, x);
+}
+
+/* Find the first occurrence of a prefix matching a key <x> of <pfx> BITS in the
+ * tree <root>. If none can be found, return NULL.
+ */
+REGPRM3 struct ebmb_node *
+ebmb_lookup_prefix(struct eb_root *root, const void *x, unsigned int pfx)
+{
+ return __ebmb_lookup_prefix(root, x, pfx);
+}
+
+/* Insert ebmb_node <new> into a prefix subtree starting at node root <root>.
+ * Only new->key and new->pfx need to be set with the key and its prefix length.
+ * Note that bits between <pfx> and <len> are theoretically ignored and should be
+ * zero, as it is not certain yet that they will always be ignored everywhere
+ * (e.g. in bit compare functions).
+ * The ebmb_node is returned.
+ * If root->b[EB_RGHT]==1, the tree may only contain unique keys. The
+ * len is specified in bytes.
+ */
+REGPRM3 struct ebmb_node *
+ebmb_insert_prefix(struct eb_root *root, struct ebmb_node *new, unsigned int len)
+{
+ return __ebmb_insert_prefix(root, new, len);
+}
--- /dev/null
+/*
+ * Elastic Binary Trees - macros and structures for Multi-Byte data nodes.
+ * Version 6.0.6
+ * (C) 2002-2011 - Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _EBMBTREE_H
+#define _EBMBTREE_H
+
+#include <string.h>
+#include "ebtree.h"
+
+/* Return the structure of type <type> whose member <member> points to <ptr> */
+#define ebmb_entry(ptr, type, member) container_of(ptr, type, member)
+
+#define EBMB_ROOT EB_ROOT
+#define EBMB_TREE_HEAD EB_TREE_HEAD
+
+/* This structure carries a node, a leaf, and a key. It must start with the
+ * eb_node so that it can be cast into an eb_node. We could also have put some
+ * sort of transparent union here to reduce the indirection level, but the fact
+ * is, the end user is not meant to manipulate internals, so this is pointless.
+ * The 'node.bit' value here works differently from scalar types, as it contains
+ * the number of identical bits between the two branches.
+ */
+struct ebmb_node {
+ struct eb_node node; /* the tree node, must be at the beginning */
+ unsigned char key[0]; /* the key, its size depends on the application */
+};
+
+/*
+ * Exported functions and macros.
+ * Many of them are always inlined because they are extremely small, and
+ * are generally called at most once or twice in a program.
+ */
+
+/* Return leftmost node in the tree, or NULL if none */
+static forceinline struct ebmb_node *ebmb_first(struct eb_root *root)
+{
+ return ebmb_entry(eb_first(root), struct ebmb_node, node);
+}
+
+/* Return rightmost node in the tree, or NULL if none */
+static forceinline struct ebmb_node *ebmb_last(struct eb_root *root)
+{
+ return ebmb_entry(eb_last(root), struct ebmb_node, node);
+}
+
+/* Return next node in the tree, or NULL if none */
+static forceinline struct ebmb_node *ebmb_next(struct ebmb_node *ebmb)
+{
+ return ebmb_entry(eb_next(&ebmb->node), struct ebmb_node, node);
+}
+
+/* Return previous node in the tree, or NULL if none */
+static forceinline struct ebmb_node *ebmb_prev(struct ebmb_node *ebmb)
+{
+ return ebmb_entry(eb_prev(&ebmb->node), struct ebmb_node, node);
+}
+
+/* Return next leaf node within a duplicate sub-tree, or NULL if none. */
+static inline struct ebmb_node *ebmb_next_dup(struct ebmb_node *ebmb)
+{
+ return ebmb_entry(eb_next_dup(&ebmb->node), struct ebmb_node, node);
+}
+
+/* Return previous leaf node within a duplicate sub-tree, or NULL if none. */
+static inline struct ebmb_node *ebmb_prev_dup(struct ebmb_node *ebmb)
+{
+ return ebmb_entry(eb_prev_dup(&ebmb->node), struct ebmb_node, node);
+}
+
+/* Return next node in the tree, skipping duplicates, or NULL if none */
+static forceinline struct ebmb_node *ebmb_next_unique(struct ebmb_node *ebmb)
+{
+ return ebmb_entry(eb_next_unique(&ebmb->node), struct ebmb_node, node);
+}
+
+/* Return previous node in the tree, skipping duplicates, or NULL if none */
+static forceinline struct ebmb_node *ebmb_prev_unique(struct ebmb_node *ebmb)
+{
+ return ebmb_entry(eb_prev_unique(&ebmb->node), struct ebmb_node, node);
+}
+
+/* Delete node from the tree if it was linked in. Mark the node unused. Note
+ * that this function relies on a non-inlined generic function: eb_delete.
+ */
+static forceinline void ebmb_delete(struct ebmb_node *ebmb)
+{
+ eb_delete(&ebmb->node);
+}
+
+/* The following functions are not inlined by default. They are declared
+ * in ebmbtree.c, which simply relies on their inline version.
+ */
+REGPRM3 struct ebmb_node *ebmb_lookup(struct eb_root *root, const void *x, unsigned int len);
+REGPRM3 struct ebmb_node *ebmb_insert(struct eb_root *root, struct ebmb_node *new, unsigned int len);
+REGPRM2 struct ebmb_node *ebmb_lookup_longest(struct eb_root *root, const void *x);
+REGPRM3 struct ebmb_node *ebmb_lookup_prefix(struct eb_root *root, const void *x, unsigned int pfx);
+REGPRM3 struct ebmb_node *ebmb_insert_prefix(struct eb_root *root, struct ebmb_node *new, unsigned int len);
+
+/* The following functions are less likely to be used directly, because their
+ * code is larger. The non-inlined version is preferred.
+ */
+
+/* Delete node from the tree if it was linked in. Mark the node unused. */
+static forceinline void __ebmb_delete(struct ebmb_node *ebmb)
+{
+ __eb_delete(&ebmb->node);
+}
+
+/* Find the first occurrence of a key of at least <len> bytes matching <x> in the
+ * tree <root>. The caller is responsible for ensuring that <len> will not exceed
+ * the common parts between the tree's keys and <x>. In case of multiple matches,
+ * the leftmost node is returned. This means that this function can be used to
+ * lookup string keys by prefix if all keys in the tree are zero-terminated. If
+ * no match is found, NULL is returned. Returns first node if <len> is zero.
+ */
+static forceinline struct ebmb_node *__ebmb_lookup(struct eb_root *root, const void *x, unsigned int len)
+{
+ struct ebmb_node *node;
+ eb_troot_t *troot;
+ int pos, side;
+ int node_bit;
+
+ troot = root->b[EB_LEFT];
+ if (unlikely(troot == NULL))
+ goto ret_null;
+
+ if (unlikely(len == 0))
+ goto walk_down;
+
+ pos = 0;
+ while (1) {
+ if (eb_gettag(troot) == EB_LEAF) {
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct ebmb_node, node.branches);
+ if (memcmp(node->key + pos, x, len) != 0)
+ goto ret_null;
+ else
+ goto ret_node;
+ }
+ node = container_of(eb_untag(troot, EB_NODE),
+ struct ebmb_node, node.branches);
+
+ node_bit = node->node.bit;
+ if (node_bit < 0) {
+ /* We have a dup tree now. Either it's for the same
+ * value, and we walk down left, or it's a different
+ * one and we don't have our key.
+ */
+ if (memcmp(node->key + pos, x, len) != 0)
+ goto ret_null;
+ else
+ goto walk_left;
+ }
+
+ /* OK, normal data node, let's walk down. We check if all full
+ * bytes are equal, and we start from the last one we did not
+ * completely check. We stop as soon as we reach the last byte,
+ * because we must decide to go left/right or abort.
+ */
+ node_bit = ~node_bit + (pos << 3) + 8; // = (pos<<3) + (7 - node_bit)
+ if (node_bit < 0) {
+ /* This surprising construction gives better performance
+ * because gcc does not try to reorder the loop. Tested to
+ * be fine with 2.95 to 4.2.
+ */
+ while (1) {
+ if (node->key[pos++] ^ *(unsigned char*)(x++))
+ goto ret_null; /* more than one full byte is different */
+ if (--len == 0)
+ goto walk_left; /* return first node if all bytes matched */
+ node_bit += 8;
+ if (node_bit >= 0)
+ break;
+ }
+ }
+
+ /* here we know that only the last byte differs, so 0 <= node_bit <= 7.
+ * We have 2 possibilities :
+ * - more than the last bit differs => return NULL
+ * - walk down on side = (x[pos] >> node_bit) & 1
+ */
+ side = *(unsigned char *)x >> node_bit;
+ if (((node->key[pos] >> node_bit) ^ side) > 1)
+ goto ret_null;
+ side &= 1;
+ troot = node->node.branches.b[side];
+ }
+ walk_left:
+ troot = node->node.branches.b[EB_LEFT];
+ walk_down:
+ while (eb_gettag(troot) != EB_LEAF)
+ troot = (eb_untag(troot, EB_NODE))->b[EB_LEFT];
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct ebmb_node, node.branches);
+ ret_node:
+ return node;
+ ret_null:
+ return NULL;
+}
+
+/* Insert ebmb_node <new> into subtree starting at node root <root>.
+ * Only new->key needs to be set with the key. The ebmb_node is returned.
+ * If root->b[EB_RGHT]==1, the tree may only contain unique keys. The
+ * len is specified in bytes. It is absolutely mandatory that this length
+ * is the same for all keys in the tree. This function cannot be used to
+ * insert strings.
+ */
+static forceinline struct ebmb_node *
+__ebmb_insert(struct eb_root *root, struct ebmb_node *new, unsigned int len)
+{
+ struct ebmb_node *old;
+ unsigned int side;
+ eb_troot_t *troot, **up_ptr;
+ eb_troot_t *root_right;
+ int diff;
+ int bit;
+ eb_troot_t *new_left, *new_rght;
+ eb_troot_t *new_leaf;
+ int old_node_bit;
+
+ side = EB_LEFT;
+ troot = root->b[EB_LEFT];
+ root_right = root->b[EB_RGHT];
+ if (unlikely(troot == NULL)) {
+ /* Tree is empty, insert the leaf part below the left branch */
+ root->b[EB_LEFT] = eb_dotag(&new->node.branches, EB_LEAF);
+ new->node.leaf_p = eb_dotag(root, EB_LEFT);
+ new->node.node_p = NULL; /* node part unused */
+ return new;
+ }
+
+ /* The tree descent is fairly easy :
+ * - first, check if we have reached a leaf node
+ * - second, check if we have gone too far
+ * - third, reiterate
+ * Everywhere, we use <new> for the node we are inserting, <root>
+ * for the node we attach it to, and <old> for the node we are
+ * displacing below <new>. <troot> will always point to the future node
+ * (tagged with its type). <side> carries the side the node <new> is
+ * attached to below its parent, which is also where the previous node
+ * was attached.
+ */
+
+ bit = 0;
+ while (1) {
+ if (unlikely(eb_gettag(troot) == EB_LEAF)) {
+ /* insert above a leaf */
+ old = container_of(eb_untag(troot, EB_LEAF),
+ struct ebmb_node, node.branches);
+ new->node.node_p = old->node.leaf_p;
+ up_ptr = &old->node.leaf_p;
+ goto check_bit_and_break;
+ }
+
+ /* OK we're walking down this link */
+ old = container_of(eb_untag(troot, EB_NODE),
+ struct ebmb_node, node.branches);
+ old_node_bit = old->node.bit;
+
+ if (unlikely(old->node.bit < 0)) {
+ /* We're above a duplicate tree, so we must compare the whole value */
+ new->node.node_p = old->node.node_p;
+ up_ptr = &old->node.node_p;
+ check_bit_and_break:
+ bit = equal_bits(new->key, old->key, bit, len << 3);
+ break;
+ }
+
+ /* Stop going down when we don't have common bits anymore. We
+ * also stop in front of a duplicate tree because it means we
+ * have to insert above. Note: we can compare more bits than
+ * the current node's because as long as they are identical, we
+ * know we descend along the correct side.
+ */
+
+ bit = equal_bits(new->key, old->key, bit, old_node_bit);
+ if (unlikely(bit < old_node_bit)) {
+ /* The tree did not contain the key, so we insert <new> before the
+ * node <old>, and set ->bit to designate the lowest bit position in
+ * <new> which applies to ->branches.b[].
+ */
+ new->node.node_p = old->node.node_p;
+ up_ptr = &old->node.node_p;
+ break;
+ }
+ /* we don't want to skip bits for further comparisons, so we must limit <bit>.
+ * However, since we're going down around <old_node_bit>, we know it will be
+ * properly matched, so we can skip this bit.
+ */
+ bit = old_node_bit + 1;
+
+ /* walk down */
+ root = &old->node.branches;
+ side = old_node_bit & 7;
+ side ^= 7;
+ side = (new->key[old_node_bit >> 3] >> side) & 1;
+ troot = root->b[side];
+ }
+
+ new_left = eb_dotag(&new->node.branches, EB_LEFT);
+ new_rght = eb_dotag(&new->node.branches, EB_RGHT);
+ new_leaf = eb_dotag(&new->node.branches, EB_LEAF);
+
+ new->node.bit = bit;
+
+ /* Note: we can compare more bits than the current node's because as
+ * long as they are identical, we know we descend along the correct
+ * side. However we don't want to start to compare past the end.
+ */
+ diff = 0;
+ if (((unsigned)bit >> 3) < len)
+ diff = cmp_bits(new->key, old->key, bit);
+
+ if (diff == 0) {
+ new->node.bit = -1; /* mark as new dup tree, just in case */
+
+ if (likely(eb_gettag(root_right))) {
+ /* we refuse to duplicate this key if the tree is
+ * tagged as containing only unique keys.
+ */
+ return old;
+ }
+
+ if (eb_gettag(troot) != EB_LEAF) {
+ /* there was already a dup tree below */
+ struct eb_node *ret;
+ ret = eb_insert_dup(&old->node, &new->node);
+ return container_of(ret, struct ebmb_node, node);
+ }
+ /* otherwise fall through */
+ }
+
+ if (diff >= 0) {
+ new->node.branches.b[EB_LEFT] = troot;
+ new->node.branches.b[EB_RGHT] = new_leaf;
+ new->node.leaf_p = new_rght;
+ *up_ptr = new_left;
+ }
+ else if (diff < 0) {
+ new->node.branches.b[EB_LEFT] = new_leaf;
+ new->node.branches.b[EB_RGHT] = troot;
+ new->node.leaf_p = new_left;
+ *up_ptr = new_rght;
+ }
+
+ /* Ok, now we are inserting <new> between <root> and <old>. <old>'s
+ * parent is already set to <new>, and the <root>'s branch is still in
+ * <side>. Update the root's leaf till we have it. Note that we can also
+ * find the side by checking the side of new->node.node_p.
+ */
+
+ root->b[side] = eb_dotag(&new->node.branches, EB_NODE);
+ return new;
+}
+
+
+/* Find the first occurrence of the longest prefix matching a key <x> in the
+ * tree <root>. It's the caller's responsibility to ensure that key <x> is at
+ * least as long as the keys in the tree. Note that this can be ensured by
+ * having a byte at the end of <x> which cannot be part of any prefix, typically
+ * the trailing zero for a string. If none can be found, return NULL.
+ */
+static forceinline struct ebmb_node *__ebmb_lookup_longest(struct eb_root *root, const void *x)
+{
+ struct ebmb_node *node;
+ eb_troot_t *troot, *cover;
+ int pos, side;
+ int node_bit;
+
+ troot = root->b[EB_LEFT];
+ if (unlikely(troot == NULL))
+ return NULL;
+
+ cover = NULL;
+ pos = 0;
+ while (1) {
+ if ((eb_gettag(troot) == EB_LEAF)) {
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct ebmb_node, node.branches);
+ if (check_bits(x - pos, node->key, pos, node->node.pfx))
+ goto not_found;
+
+ return node;
+ }
+ node = container_of(eb_untag(troot, EB_NODE),
+ struct ebmb_node, node.branches);
+
+ node_bit = node->node.bit;
+ if (node_bit < 0) {
+ /* We have a dup tree now. Either it's for the same
+ * value, and we walk down left, or it's a different
+ * one and we don't have our key.
+ */
+ if (check_bits(x - pos, node->key, pos, node->node.pfx))
+ goto not_found;
+
+ troot = node->node.branches.b[EB_LEFT];
+ while (eb_gettag(troot) != EB_LEAF)
+ troot = (eb_untag(troot, EB_NODE))->b[EB_LEFT];
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct ebmb_node, node.branches);
+ return node;
+ }
+
+ node_bit >>= 1; /* strip cover bit */
+ node_bit = ~node_bit + (pos << 3) + 8; // = (pos<<3) + (7 - node_bit)
+ if (node_bit < 0) {
+ /* This uncommon construction gives better performance
+ * because gcc does not try to reorder the loop. Tested to
+ * be fine with 2.95 to 4.2.
+ */
+ while (1) {
+ x++; pos++;
+ if (node->key[pos-1] ^ *(unsigned char*)(x-1))
+ goto not_found; /* more than one full byte is different */
+ node_bit += 8;
+ if (node_bit >= 0)
+ break;
+ }
+ }
+
+ /* here we know that only the last byte differs, so 0 <= node_bit <= 7.
+ * We have 2 possibilities :
+ * - more than the last bit differs => data does not match
+ * - walk down on side = (x[pos] >> node_bit) & 1
+ */
+ side = *(unsigned char *)x >> node_bit;
+ if (((node->key[pos] >> node_bit) ^ side) > 1)
+ goto not_found;
+
+ if (!(node->node.bit & 1)) {
+ /* This is a cover node, let's keep a reference to it
+ * for later. The covering subtree is on the left, and
+ * the covered subtree is on the right, so we have to
+ * walk down right.
+ */
+ cover = node->node.branches.b[EB_LEFT];
+ troot = node->node.branches.b[EB_RGHT];
+ continue;
+ }
+ side &= 1;
+ troot = node->node.branches.b[side];
+ }
+
+ not_found:
+ /* Walk down the last cover tree if it exists. It does not matter if cover is NULL */
+ return ebmb_entry(eb_walk_down(cover, EB_LEFT), struct ebmb_node, node);
+}
+
+
+/* Find the first occurrence of a prefix matching a key <x> of <pfx> BITS in the
+ * tree <root>. It's the caller's responsibility to ensure that key <x> is at
+ * least as long as the keys in the tree. Note that this can be ensured by
+ * having a byte at the end of <x> which cannot be part of any prefix, typically
+ * the trailing zero for a string. If none can be found, return NULL.
+ */
+static forceinline struct ebmb_node *__ebmb_lookup_prefix(struct eb_root *root, const void *x, unsigned int pfx)
+{
+ struct ebmb_node *node;
+ eb_troot_t *troot;
+ int pos, side;
+ int node_bit;
+
+ troot = root->b[EB_LEFT];
+ if (unlikely(troot == NULL))
+ return NULL;
+
+ pos = 0;
+ while (1) {
+ if ((eb_gettag(troot) == EB_LEAF)) {
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct ebmb_node, node.branches);
+ if (node->node.pfx != pfx)
+ return NULL;
+ if (check_bits(x - pos, node->key, pos, node->node.pfx))
+ return NULL;
+ return node;
+ }
+ node = container_of(eb_untag(troot, EB_NODE),
+ struct ebmb_node, node.branches);
+
+ node_bit = node->node.bit;
+ if (node_bit < 0) {
+ /* We have a dup tree now. Either it's for the same
+ * value, and we walk down left, or it's a different
+ * one and we don't have our key.
+ */
+ if (node->node.pfx != pfx)
+ return NULL;
+ if (check_bits(x - pos, node->key, pos, node->node.pfx))
+ return NULL;
+
+ troot = node->node.branches.b[EB_LEFT];
+ while (eb_gettag(troot) != EB_LEAF)
+ troot = (eb_untag(troot, EB_NODE))->b[EB_LEFT];
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct ebmb_node, node.branches);
+ return node;
+ }
+
+ node_bit >>= 1; /* strip cover bit */
+ node_bit = ~node_bit + (pos << 3) + 8; // = (pos<<3) + (7 - node_bit)
+ if (node_bit < 0) {
+ /* This uncommon construction gives better performance
+ * because gcc does not try to reorder the loop. Tested to
+ * be fine with 2.95 to 4.2.
+ */
+ while (1) {
+ x++; pos++;
+ if (node->key[pos-1] ^ *(unsigned char*)(x-1))
+ return NULL; /* more than one full byte is different */
+ node_bit += 8;
+ if (node_bit >= 0)
+ break;
+ }
+ }
+
+ /* here we know that only the last byte differs, so 0 <= node_bit <= 7.
+ * We have 2 possibilities :
+ * - more than the last bit differs => data does not match
+ * - walk down on side = (x[pos] >> node_bit) & 1
+ */
+ side = *(unsigned char *)x >> node_bit;
+ if (((node->key[pos] >> node_bit) ^ side) > 1)
+ return NULL;
+
+ if (!(node->node.bit & 1)) {
+ /* This is a cover node, it may be the entry we're
+ * looking for. We already know that it matches all the
+ * bits, let's compare prefixes and descend the cover
+ * subtree if they match.
+ */
+ if ((unsigned short)node->node.bit >> 1 == pfx)
+ troot = node->node.branches.b[EB_LEFT];
+ else
+ troot = node->node.branches.b[EB_RGHT];
+ continue;
+ }
+ side &= 1;
+ troot = node->node.branches.b[side];
+ }
+}
+
+
+/* Insert ebmb_node <new> into a prefix subtree starting at node root <root>.
+ * Only new->key and new->pfx need to be set with the key and its prefix length.
+ * Note that bits between <pfx> and <len> are theoretically ignored and should be
+ * zero, as it is not certain yet that they will always be ignored everywhere
+ * (e.g. in bit compare functions).
+ * The ebmb_node is returned.
+ * If root->b[EB_RGHT]==1, the tree may only contain unique keys. The
+ * len is specified in bytes.
+ */
+static forceinline struct ebmb_node *
+__ebmb_insert_prefix(struct eb_root *root, struct ebmb_node *new, unsigned int len)
+{
+ struct ebmb_node *old;
+ unsigned int side;
+ eb_troot_t *troot, **up_ptr;
+ eb_troot_t *root_right;
+ int diff;
+ int bit;
+ eb_troot_t *new_left, *new_rght;
+ eb_troot_t *new_leaf;
+ int old_node_bit;
+
+ side = EB_LEFT;
+ troot = root->b[EB_LEFT];
+ root_right = root->b[EB_RGHT];
+ if (unlikely(troot == NULL)) {
+ /* Tree is empty, insert the leaf part below the left branch */
+ root->b[EB_LEFT] = eb_dotag(&new->node.branches, EB_LEAF);
+ new->node.leaf_p = eb_dotag(root, EB_LEFT);
+ new->node.node_p = NULL; /* node part unused */
+ return new;
+ }
+
+ len <<= 3;
+ if (len > new->node.pfx)
+ len = new->node.pfx;
+
+ /* The tree descent is fairly easy :
+ * - first, check if we have reached a leaf node
+ * - second, check if we have gone too far
+ * - third, reiterate
+ * Everywhere, we use <new> for the node we are inserting, <root>
+ * for the node we attach it to, and <old> for the node we are
+ * displacing below <new>. <troot> will always point to the future node
+ * (tagged with its type). <side> carries the side the node <new> is
+ * attached to below its parent, which is also where the previous node
+ * was attached.
+ */
+
+ bit = 0;
+ while (1) {
+ if (unlikely(eb_gettag(troot) == EB_LEAF)) {
+ /* Insert above a leaf. Note that this leaf could very
+ * well be part of a cover node.
+ */
+ old = container_of(eb_untag(troot, EB_LEAF),
+ struct ebmb_node, node.branches);
+ new->node.node_p = old->node.leaf_p;
+ up_ptr = &old->node.leaf_p;
+ goto check_bit_and_break;
+ }
+
+ /* OK we're walking down this link */
+ old = container_of(eb_untag(troot, EB_NODE),
+ struct ebmb_node, node.branches);
+ old_node_bit = old->node.bit;
+ /* Note that old_node_bit can be :
+ * < 0 : dup tree
+ * = 2N : cover node for N bits
+ * = 2N+1 : normal node at N bits
+ */
+
+ if (unlikely(old_node_bit < 0)) {
+ /* We're above a duplicate tree, so we must compare the whole value */
+ new->node.node_p = old->node.node_p;
+ up_ptr = &old->node.node_p;
+ check_bit_and_break:
+ /* No need to compare everything if the leaves are shorter than the new one. */
+ if (len > old->node.pfx)
+ len = old->node.pfx;
+ bit = equal_bits(new->key, old->key, bit, len);
+ break;
+ }
+
+ /* WARNING: for the two blocks below, <bit> is counted in half-bits */
+
+ bit = equal_bits(new->key, old->key, bit, old_node_bit >> 1);
+ bit = (bit << 1) + 1; // assume comparisons with normal nodes
+
+ /* we must always check that our prefix is larger than the nodes
+ * we visit, otherwise we have to stop going down. The following
+ * test is able to stop before both normal and cover nodes.
+ */
+ if (bit >= (new->node.pfx << 1) && (new->node.pfx << 1) < old_node_bit) {
+ /* insert cover node here on the left */
+ new->node.node_p = old->node.node_p;
+ up_ptr = &old->node.node_p;
+ new->node.bit = new->node.pfx << 1;
+ diff = -1;
+ goto insert_above;
+ }
+
+ if (unlikely(bit < old_node_bit)) {
+ /* The tree did not contain the key, so we insert <new> before the
+ * node <old>, and set ->bit to designate the lowest bit position in
+ * <new> which applies to ->branches.b[]. We know that the bit is not
+ * greater than the prefix length thanks to the test above.
+ */
+ new->node.node_p = old->node.node_p;
+ up_ptr = &old->node.node_p;
+ new->node.bit = bit;
+ diff = cmp_bits(new->key, old->key, bit >> 1);
+ goto insert_above;
+ }
+
+ if (!(old_node_bit & 1)) {
+ /* if we encounter a cover node with our exact prefix length, it's
+ * necessarily the same value, so we insert there as a duplicate on
+ * the left. For that, we go down on the left and the leaf detection
+ * code will finish the job.
+ */
+ if ((new->node.pfx << 1) == old_node_bit) {
+ root = &old->node.branches;
+ side = EB_LEFT;
+ troot = root->b[side];
+ continue;
+ }
+
+ /* cover nodes are always walked through on the right */
+ side = EB_RGHT;
+ bit = old_node_bit >> 1; /* recheck that bit */
+ root = &old->node.branches;
+ troot = root->b[side];
+ continue;
+ }
+
+ /* we don't want to skip bits for further comparisons, so we must limit <bit>.
+ * However, since we're going down around <old_node_bit>, we know it will be
+ * properly matched, so we can skip this bit.
+ */
+ old_node_bit >>= 1;
+ bit = old_node_bit + 1;
+
+ /* walk down */
+ root = &old->node.branches;
+ side = old_node_bit & 7;
+ side ^= 7;
+ side = (new->key[old_node_bit >> 3] >> side) & 1;
+ troot = root->b[side];
+ }
+
+ /* Right here, we have 4 possibilities :
+ * - the tree does not contain any leaf matching the
+ * key, and we have new->key < old->key. We insert
+ * new above old, on the left ;
+ *
+ * - the tree does not contain any leaf matching the
+ * key, and we have new->key > old->key. We insert
+ * new above old, on the right ;
+ *
+ * - the tree does contain the key with the same prefix
+ * length. We add the new key next to it as a first
+ * duplicate (since it was alone).
+ *
+ * The last two cases can easily be partially merged.
+ *
+ * - the tree contains a leaf matching the key, we have
+ * to insert above it as a cover node. The leaf with
+ * the shortest prefix becomes the left subtree and
+ * the leaf with the longest prefix becomes the right
+ * one. The cover node gets the min of both prefixes
+ * as its new bit.
+ */
+
+ /* first we want to ensure that we compare the correct bit, which means
+ * the largest common to both nodes.
+ */
+ if (bit > new->node.pfx)
+ bit = new->node.pfx;
+ if (bit > old->node.pfx)
+ bit = old->node.pfx;
+
+ new->node.bit = (bit << 1) + 1; /* assume normal node by default */
+
+ /* if one prefix is included in the second one, we don't compare bits
+ * because they won't necessarily match, we just proceed with a cover
+ * node insertion.
+ */
+ diff = 0;
+ if (bit < old->node.pfx && bit < new->node.pfx)
+ diff = cmp_bits(new->key, old->key, bit);
+
+ if (diff == 0) {
+ /* Both keys match. Either it's a duplicate entry or we have to
+ * put the shortest prefix left and the largest one right below
+ * a new cover node. By default, diff==0 means we'll be inserted
+ * on the right.
+ */
+ new->node.bit--; /* anticipate cover node insertion */
+ if (new->node.pfx == old->node.pfx) {
+ new->node.bit = -1; /* mark as new dup tree, just in case */
+
+ if (unlikely(eb_gettag(root_right))) {
+ /* we refuse to duplicate this key if the tree is
+ * tagged as containing only unique keys.
+ */
+ return old;
+ }
+
+ if (eb_gettag(troot) != EB_LEAF) {
+ /* there was already a dup tree below */
+ struct eb_node *ret;
+ ret = eb_insert_dup(&old->node, &new->node);
+ return container_of(ret, struct ebmb_node, node);
+ }
+ /* otherwise fall through to insert first duplicate */
+ }
+ /* otherwise we just rely on the tests below to select the right side */
+ else if (new->node.pfx < old->node.pfx)
+ diff = -1; /* force insertion to left side */
+ }
+
+ insert_above:
+ new_left = eb_dotag(&new->node.branches, EB_LEFT);
+ new_rght = eb_dotag(&new->node.branches, EB_RGHT);
+ new_leaf = eb_dotag(&new->node.branches, EB_LEAF);
+
+ if (diff >= 0) {
+ new->node.branches.b[EB_LEFT] = troot;
+ new->node.branches.b[EB_RGHT] = new_leaf;
+ new->node.leaf_p = new_rght;
+ *up_ptr = new_left;
+ }
+ else {
+ new->node.branches.b[EB_LEFT] = new_leaf;
+ new->node.branches.b[EB_RGHT] = troot;
+ new->node.leaf_p = new_left;
+ *up_ptr = new_rght;
+ }
+
+ root->b[side] = eb_dotag(&new->node.branches, EB_NODE);
+ return new;
+}
+
+
+
+#endif /* _EBMBTREE_H */
+
--- /dev/null
+/*
+ * Elastic Binary Trees - exported functions for operations on pointer nodes.
+ * Version 6.0.6
+ * (C) 2002-2011 - Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/* Consult ebpttree.h for more details about those functions */
+
+#include "ebpttree.h"
+
+REGPRM2 struct ebpt_node *ebpt_insert(struct eb_root *root, struct ebpt_node *new)
+{
+ return __ebpt_insert(root, new);
+}
+
+REGPRM2 struct ebpt_node *ebpt_lookup(struct eb_root *root, void *x)
+{
+ return __ebpt_lookup(root, x);
+}
+
+/*
+ * Find the last occurrence of the highest key in the tree <root>, which is
+ * equal to or less than <x>. NULL is returned if no key matches.
+ */
+REGPRM2 struct ebpt_node *ebpt_lookup_le(struct eb_root *root, void *x)
+{
+ struct ebpt_node *node;
+ eb_troot_t *troot;
+
+ troot = root->b[EB_LEFT];
+ if (unlikely(troot == NULL))
+ return NULL;
+
+ while (1) {
+ if ((eb_gettag(troot) == EB_LEAF)) {
+ /* We reached a leaf, which means that the whole upper
+ * parts were common. We will return either the current
+ * node or its next one if the former is too small.
+ */
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct ebpt_node, node.branches);
+ if (node->key <= x)
+ return node;
+ /* return prev */
+ troot = node->node.leaf_p;
+ break;
+ }
+ node = container_of(eb_untag(troot, EB_NODE),
+ struct ebpt_node, node.branches);
+
+ if (node->node.bit < 0) {
+ /* We're at the top of a dup tree. Either we got a
+ * matching value and we return the rightmost node, or
+ * we don't and we skip the whole subtree to return the
+ * prev node before the subtree. Note that since we're
+ * at the top of the dup tree, we can simply return the
+ * prev node without first trying to escape from the
+ * tree.
+ */
+ if (node->key <= x) {
+ troot = node->node.branches.b[EB_RGHT];
+ while (eb_gettag(troot) != EB_LEAF)
+ troot = (eb_untag(troot, EB_NODE))->b[EB_RGHT];
+ return container_of(eb_untag(troot, EB_LEAF),
+ struct ebpt_node, node.branches);
+ }
+ /* return prev */
+ troot = node->node.node_p;
+ break;
+ }
+
+ if ((((ptr_t)x ^ (ptr_t)node->key) >> node->node.bit) >= EB_NODE_BRANCHES) {
+ /* No more common bits at all. Either this node is too
+ * small and we need to get its highest value, or it is
+ * too large, and we need to get the prev value.
+ */
+ if (((ptr_t)node->key >> node->node.bit) < ((ptr_t)x >> node->node.bit)) {
+ troot = node->node.branches.b[EB_RGHT];
+ return ebpt_entry(eb_walk_down(troot, EB_RGHT), struct ebpt_node, node);
+ }
+
+ /* Further values will be too high here, so return the prev
+ * unique node (if it exists).
+ */
+ troot = node->node.node_p;
+ break;
+ }
+ troot = node->node.branches.b[((ptr_t)x >> node->node.bit) & EB_NODE_BRANCH_MASK];
+ }
+
+ /* If we get here, it means we want to report previous node before the
+ * current one which is not above. <troot> is already initialised to
+ * the parent's branches.
+ */
+ while (eb_gettag(troot) == EB_LEFT) {
+ /* Walking up from left branch. We must ensure that we never
+ * walk beyond root.
+ */
+ if (unlikely(eb_clrtag((eb_untag(troot, EB_LEFT))->b[EB_RGHT]) == NULL))
+ return NULL;
+ troot = (eb_root_to_node(eb_untag(troot, EB_LEFT)))->node_p;
+ }
+ /* Note that <troot> cannot be NULL at this stage */
+ troot = (eb_untag(troot, EB_RGHT))->b[EB_LEFT];
+ node = ebpt_entry(eb_walk_down(troot, EB_RGHT), struct ebpt_node, node);
+ return node;
+}
+
+/*
+ * Find the first occurrence of the lowest key in the tree <root>, which is
+ * equal to or greater than <x>. NULL is returned if no key matches.
+ */
+REGPRM2 struct ebpt_node *ebpt_lookup_ge(struct eb_root *root, void *x)
+{
+ struct ebpt_node *node;
+ eb_troot_t *troot;
+
+ troot = root->b[EB_LEFT];
+ if (unlikely(troot == NULL))
+ return NULL;
+
+ while (1) {
+ if ((eb_gettag(troot) == EB_LEAF)) {
+ /* We reached a leaf, which means that the whole upper
+ * parts were common. We will return either the current
+ * node or its next one if the former is too small.
+ */
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct ebpt_node, node.branches);
+ if (node->key >= x)
+ return node;
+ /* return next */
+ troot = node->node.leaf_p;
+ break;
+ }
+ node = container_of(eb_untag(troot, EB_NODE),
+ struct ebpt_node, node.branches);
+
+ if (node->node.bit < 0) {
+ /* We're at the top of a dup tree. Either we got a
+ * matching value and we return the leftmost node, or
+ * we don't and we skip the whole subtree to return the
+ * next node after the subtree. Note that since we're
+ * at the top of the dup tree, we can simply return the
+ * next node without first trying to escape from the
+ * tree.
+ */
+ if (node->key >= x) {
+ troot = node->node.branches.b[EB_LEFT];
+ while (eb_gettag(troot) != EB_LEAF)
+ troot = (eb_untag(troot, EB_NODE))->b[EB_LEFT];
+ return container_of(eb_untag(troot, EB_LEAF),
+ struct ebpt_node, node.branches);
+ }
+ /* return next */
+ troot = node->node.node_p;
+ break;
+ }
+
+ if ((((ptr_t)x ^ (ptr_t)node->key) >> node->node.bit) >= EB_NODE_BRANCHES) {
+ /* No more common bits at all. Either this node is too
+ * large and we need to get its lowest value, or it is too
+ * small, and we need to get the next value.
+ */
+ if (((ptr_t)node->key >> node->node.bit) > ((ptr_t)x >> node->node.bit)) {
+ troot = node->node.branches.b[EB_LEFT];
+ return ebpt_entry(eb_walk_down(troot, EB_LEFT), struct ebpt_node, node);
+ }
+
+ /* Further values will be too low here, so return the next
+ * unique node (if it exists).
+ */
+ troot = node->node.node_p;
+ break;
+ }
+ troot = node->node.branches.b[((ptr_t)x >> node->node.bit) & EB_NODE_BRANCH_MASK];
+ }
+
+ /* If we get here, it means we want to report next node after the
+ * current one which is not below. <troot> is already initialised
+ * to the parent's branches.
+ */
+ while (eb_gettag(troot) != EB_LEFT)
+ /* Walking up from right branch, so we cannot be below root */
+ troot = (eb_root_to_node(eb_untag(troot, EB_RGHT)))->node_p;
+
+ /* Note that <troot> cannot be NULL at this stage */
+ troot = (eb_untag(troot, EB_LEFT))->b[EB_RGHT];
+ if (eb_clrtag(troot) == NULL)
+ return NULL;
+
+ node = ebpt_entry(eb_walk_down(troot, EB_LEFT), struct ebpt_node, node);
+ return node;
+}
--- /dev/null
+/*
+ * Elastic Binary Trees - macros and structures for operations on pointer nodes.
+ * Version 6.0.6
+ * (C) 2002-2011 - Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _EBPTTREE_H
+#define _EBPTTREE_H
+
+#include "ebtree.h"
+#include "eb32tree.h"
+#include "eb64tree.h"
+
+
+/* Return the structure of type <type> whose member <member> points to <ptr> */
+#define ebpt_entry(ptr, type, member) container_of(ptr, type, member)
+
+#define EBPT_ROOT EB_ROOT
+#define EBPT_TREE_HEAD EB_TREE_HEAD
+
+/* on *almost* all platforms, a pointer can be cast into a size_t which is unsigned */
+#ifndef PTR_INT_TYPE
+#define PTR_INT_TYPE size_t
+#endif
+
+typedef PTR_INT_TYPE ptr_t;
+
+/* This structure carries a node, a leaf, and a key. It must start with the
+ * eb_node so that it can be cast into an eb_node. We could also have put some
+ * sort of transparent union here to reduce the indirection level, but the fact
+ * is, the end user is not meant to manipulate internals, so this is pointless.
+ * Internally, it is automatically cast as an eb32_node or eb64_node.
+ */
+struct ebpt_node {
+ struct eb_node node; /* the tree node, must be at the beginning */
+ void *key;
+};
+
+/*
+ * Exported functions and macros.
+ * Many of them are always inlined because they are extremely small, and
+ * are generally called at most once or twice in a program.
+ */
+
+/* Return leftmost node in the tree, or NULL if none */
+static forceinline struct ebpt_node *ebpt_first(struct eb_root *root)
+{
+ return ebpt_entry(eb_first(root), struct ebpt_node, node);
+}
+
+/* Return rightmost node in the tree, or NULL if none */
+static forceinline struct ebpt_node *ebpt_last(struct eb_root *root)
+{
+ return ebpt_entry(eb_last(root), struct ebpt_node, node);
+}
+
+/* Return next node in the tree, or NULL if none */
+static forceinline struct ebpt_node *ebpt_next(struct ebpt_node *ebpt)
+{
+ return ebpt_entry(eb_next(&ebpt->node), struct ebpt_node, node);
+}
+
+/* Return previous node in the tree, or NULL if none */
+static forceinline struct ebpt_node *ebpt_prev(struct ebpt_node *ebpt)
+{
+ return ebpt_entry(eb_prev(&ebpt->node), struct ebpt_node, node);
+}
+
+/* Return next leaf node within a duplicate sub-tree, or NULL if none. */
+static inline struct ebpt_node *ebpt_next_dup(struct ebpt_node *ebpt)
+{
+ return ebpt_entry(eb_next_dup(&ebpt->node), struct ebpt_node, node);
+}
+
+/* Return previous leaf node within a duplicate sub-tree, or NULL if none. */
+static inline struct ebpt_node *ebpt_prev_dup(struct ebpt_node *ebpt)
+{
+ return ebpt_entry(eb_prev_dup(&ebpt->node), struct ebpt_node, node);
+}
+
+/* Return next node in the tree, skipping duplicates, or NULL if none */
+static forceinline struct ebpt_node *ebpt_next_unique(struct ebpt_node *ebpt)
+{
+ return ebpt_entry(eb_next_unique(&ebpt->node), struct ebpt_node, node);
+}
+
+/* Return previous node in the tree, skipping duplicates, or NULL if none */
+static forceinline struct ebpt_node *ebpt_prev_unique(struct ebpt_node *ebpt)
+{
+ return ebpt_entry(eb_prev_unique(&ebpt->node), struct ebpt_node, node);
+}
+
+/* Delete node from the tree if it was linked in. Mark the node unused. Note
+ * that this function relies on a non-inlined generic function: eb_delete.
+ */
+static forceinline void ebpt_delete(struct ebpt_node *ebpt)
+{
+ eb_delete(&ebpt->node);
+}
+
+/*
+ * The following functions are inlined but derived from the integer versions.
+ */
+static forceinline struct ebpt_node *ebpt_lookup(struct eb_root *root, void *x)
+{
+ if (sizeof(void *) == 4)
+ return (struct ebpt_node *)eb32_lookup(root, (u32)(PTR_INT_TYPE)x);
+ else
+ return (struct ebpt_node *)eb64_lookup(root, (u64)(PTR_INT_TYPE)x);
+}
+
+static forceinline struct ebpt_node *ebpt_lookup_le(struct eb_root *root, void *x)
+{
+ if (sizeof(void *) == 4)
+ return (struct ebpt_node *)eb32_lookup_le(root, (u32)(PTR_INT_TYPE)x);
+ else
+ return (struct ebpt_node *)eb64_lookup_le(root, (u64)(PTR_INT_TYPE)x);
+}
+
+static forceinline struct ebpt_node *ebpt_lookup_ge(struct eb_root *root, void *x)
+{
+ if (sizeof(void *) == 4)
+ return (struct ebpt_node *)eb32_lookup_ge(root, (u32)(PTR_INT_TYPE)x);
+ else
+ return (struct ebpt_node *)eb64_lookup_ge(root, (u64)(PTR_INT_TYPE)x);
+}
+
+static forceinline struct ebpt_node *ebpt_insert(struct eb_root *root, struct ebpt_node *new)
+{
+ if (sizeof(void *) == 4)
+ return (struct ebpt_node *)eb32_insert(root, (struct eb32_node *)new);
+ else
+ return (struct ebpt_node *)eb64_insert(root, (struct eb64_node *)new);
+}
+
+/*
+ * The following functions are less likely to be used directly, because
+ * their code is larger. The non-inlined version is preferred.
+ */
+
+/* Delete node from the tree if it was linked in. Mark the node unused. */
+static forceinline void __ebpt_delete(struct ebpt_node *ebpt)
+{
+ __eb_delete(&ebpt->node);
+}
+
+static forceinline struct ebpt_node *__ebpt_lookup(struct eb_root *root, void *x)
+{
+ if (sizeof(void *) == 4)
+ return (struct ebpt_node *)__eb32_lookup(root, (u32)(PTR_INT_TYPE)x);
+ else
+ return (struct ebpt_node *)__eb64_lookup(root, (u64)(PTR_INT_TYPE)x);
+}
+
+static forceinline struct ebpt_node *__ebpt_insert(struct eb_root *root, struct ebpt_node *new)
+{
+ if (sizeof(void *) == 4)
+ return (struct ebpt_node *)__eb32_insert(root, (struct eb32_node *)new);
+ else
+ return (struct ebpt_node *)__eb64_insert(root, (struct eb64_node *)new);
+}
+
+#endif /* _EBPTTREE_H */
--- /dev/null
+/*
+ * Elastic Binary Trees - exported functions for String data nodes.
+ * Version 6.0.6
+ * (C) 2002-2011 - Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/* Consult ebsttree.h for more details about those functions */
+
+#include "ebsttree.h"
+
+/* Find the first occurrence of a zero-terminated string <x> in the tree <root>.
+ * It's the caller's responsibility to use this function only on trees which
+ * only contain zero-terminated strings. If none can be found, return NULL.
+ */
+REGPRM2 struct ebmb_node *ebst_lookup(struct eb_root *root, const char *x)
+{
+ return __ebst_lookup(root, x);
+}
+
+/* Insert ebmb_node <new> into subtree starting at node root <root>. Only
+ * new->key needs to be set with the zero-terminated string key. The ebmb_node is
+ * returned. If root->b[EB_RGHT]==1, the tree may only contain unique keys. The
+ * caller is responsible for properly terminating the key with a zero.
+ */
+REGPRM2 struct ebmb_node *ebst_insert(struct eb_root *root, struct ebmb_node *new)
+{
+ return __ebst_insert(root, new);
+}
--- /dev/null
+/*
+ * Elastic Binary Trees - macros to manipulate String data nodes.
+ * Version 6.0.6
+ * (C) 2002-2011 - Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/* These functions and macros rely on Multi-Byte nodes */
+
+#ifndef _EBSTTREE_H
+#define _EBSTTREE_H
+
+#include "ebtree.h"
+#include "ebmbtree.h"
+
+/* The following functions are not inlined by default. They are declared
+ * in ebsttree.c, which simply relies on their inline version.
+ */
+REGPRM2 struct ebmb_node *ebst_lookup(struct eb_root *root, const char *x);
+REGPRM2 struct ebmb_node *ebst_insert(struct eb_root *root, struct ebmb_node *new);
+
+/* Find the first occurrence of a length <len> string <x> in the tree <root>.
+ * It's the caller's responsibility to use this function only on trees which
+ * only contain zero-terminated strings, and only when no null character is
+ * present in the first <len> chars of string <x>. If none can be found, return NULL.
+ */
+static forceinline struct ebmb_node *
+ebst_lookup_len(struct eb_root *root, const char *x, unsigned int len)
+{
+ struct ebmb_node *node;
+
+ node = ebmb_lookup(root, x, len);
+ if (!node || node->key[len] != 0)
+ return NULL;
+ return node;
+}
+
+/* Find the first occurrence of a zero-terminated string <x> in the tree <root>.
+ * It's the caller's responsibility to use this function only on trees which
+ * only contain zero-terminated strings. If none can be found, return NULL.
+ */
+static forceinline struct ebmb_node *__ebst_lookup(struct eb_root *root, const void *x)
+{
+ struct ebmb_node *node;
+ eb_troot_t *troot;
+ int bit;
+ int node_bit;
+
+ troot = root->b[EB_LEFT];
+ if (unlikely(troot == NULL))
+ return NULL;
+
+ bit = 0;
+ while (1) {
+ if ((eb_gettag(troot) == EB_LEAF)) {
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct ebmb_node, node.branches);
+ if (strcmp((char *)node->key, x) == 0)
+ return node;
+ else
+ return NULL;
+ }
+ node = container_of(eb_untag(troot, EB_NODE),
+ struct ebmb_node, node.branches);
+ node_bit = node->node.bit;
+
+ if (node_bit < 0) {
+ /* We have a dup tree now. Either it's for the same
+ * value, and we walk down left, or it's a different
+ * one and we don't have our key.
+ */
+ if (strcmp((char *)node->key, x) != 0)
+ return NULL;
+
+ troot = node->node.branches.b[EB_LEFT];
+ while (eb_gettag(troot) != EB_LEAF)
+ troot = (eb_untag(troot, EB_NODE))->b[EB_LEFT];
+ node = container_of(eb_untag(troot, EB_LEAF),
+ struct ebmb_node, node.branches);
+ return node;
+ }
+
+ /* OK, normal data node, let's walk down but don't compare data
+ * if we already reached the end of the key.
+ */
+ if (likely(bit >= 0)) {
+ bit = string_equal_bits(x, node->key, bit);
+ if (likely(bit < node_bit)) {
+ if (bit >= 0)
+ return NULL; /* no more common bits */
+
+ /* bit < 0 : we reached the end of the key. If we
+ * are in a tree with unique keys, we can return
+ * this node. Otherwise we have to walk it down
+ * and stop comparing bits.
+ */
+ if (eb_gettag(root->b[EB_RGHT]))
+ return node;
+ }
+ /* if the bit is larger than the node's, we must bound it
+ * because we might have compared too many bytes with an
+ * inappropriate leaf. For a test, build a tree from "0",
+ * "WW", "W", "S" inserted in this exact sequence and lookup
+ * "W" => "S" is returned without this assignment.
+ */
+ else
+ bit = node_bit;
+ }
+
+ troot = node->node.branches.b[(((unsigned char*)x)[node_bit >> 3] >>
+ (~node_bit & 7)) & 1];
+ }
+}
+
+/* Insert ebmb_node <new> into subtree starting at node root <root>. Only
+ * new->key needs to be set with the zero-terminated string key. The ebmb_node is
+ * returned. If root->b[EB_RGHT]==1, the tree may only contain unique keys. The
+ * caller is responsible for properly terminating the key with a zero.
+ */
+static forceinline struct ebmb_node *
+__ebst_insert(struct eb_root *root, struct ebmb_node *new)
+{
+ struct ebmb_node *old;
+ unsigned int side;
+ eb_troot_t *troot;
+ eb_troot_t *root_right;
+ int diff;
+ int bit;
+ int old_node_bit;
+
+ side = EB_LEFT;
+ troot = root->b[EB_LEFT];
+ root_right = root->b[EB_RGHT];
+ if (unlikely(troot == NULL)) {
+ /* Tree is empty, insert the leaf part below the left branch */
+ root->b[EB_LEFT] = eb_dotag(&new->node.branches, EB_LEAF);
+ new->node.leaf_p = eb_dotag(root, EB_LEFT);
+ new->node.node_p = NULL; /* node part unused */
+ return new;
+ }
+
+ /* The tree descent is fairly easy :
+ * - first, check if we have reached a leaf node
+ * - second, check if we have gone too far
+ * - third, reiterate
+	 * Everywhere, we use <new> for the node we are inserting, <root>
+ * for the node we attach it to, and <old> for the node we are
+ * displacing below <new>. <troot> will always point to the future node
+ * (tagged with its type). <side> carries the side the node <new> is
+ * attached to below its parent, which is also where previous node
+ * was attached.
+ */
+
+ bit = 0;
+ while (1) {
+ if (unlikely(eb_gettag(troot) == EB_LEAF)) {
+ eb_troot_t *new_left, *new_rght;
+ eb_troot_t *new_leaf, *old_leaf;
+
+ old = container_of(eb_untag(troot, EB_LEAF),
+ struct ebmb_node, node.branches);
+
+ new_left = eb_dotag(&new->node.branches, EB_LEFT);
+ new_rght = eb_dotag(&new->node.branches, EB_RGHT);
+ new_leaf = eb_dotag(&new->node.branches, EB_LEAF);
+ old_leaf = eb_dotag(&old->node.branches, EB_LEAF);
+
+ new->node.node_p = old->node.leaf_p;
+
+ /* Right here, we have 3 possibilities :
+ * - the tree does not contain the key, and we have
+ * new->key < old->key. We insert new above old, on
+ * the left ;
+ *
+ * - the tree does not contain the key, and we have
+ * new->key > old->key. We insert new above old, on
+ * the right ;
+ *
+ * - the tree does contain the key, which implies it
+ * is alone. We add the new key next to it as a
+ * first duplicate.
+ *
+ * The last two cases can easily be partially merged.
+ */
+ if (bit >= 0)
+ bit = string_equal_bits(new->key, old->key, bit);
+
+ if (bit < 0) {
+ /* key was already there */
+
+ /* we may refuse to duplicate this key if the tree is
+ * tagged as containing only unique keys.
+ */
+ if (eb_gettag(root_right))
+ return old;
+
+ /* new arbitrarily goes to the right and tops the dup tree */
+ old->node.leaf_p = new_left;
+ new->node.leaf_p = new_rght;
+ new->node.branches.b[EB_LEFT] = old_leaf;
+ new->node.branches.b[EB_RGHT] = new_leaf;
+ new->node.bit = -1;
+ root->b[side] = eb_dotag(&new->node.branches, EB_NODE);
+ return new;
+ }
+
+ diff = cmp_bits(new->key, old->key, bit);
+ if (diff < 0) {
+ /* new->key < old->key, new takes the left */
+ new->node.leaf_p = new_left;
+ old->node.leaf_p = new_rght;
+ new->node.branches.b[EB_LEFT] = new_leaf;
+ new->node.branches.b[EB_RGHT] = old_leaf;
+ } else {
+ /* new->key > old->key, new takes the right */
+ old->node.leaf_p = new_left;
+ new->node.leaf_p = new_rght;
+ new->node.branches.b[EB_LEFT] = old_leaf;
+ new->node.branches.b[EB_RGHT] = new_leaf;
+ }
+ break;
+ }
+
+ /* OK we're walking down this link */
+ old = container_of(eb_untag(troot, EB_NODE),
+ struct ebmb_node, node.branches);
+ old_node_bit = old->node.bit;
+
+ /* Stop going down when we don't have common bits anymore. We
+ * also stop in front of a duplicates tree because it means we
+ * have to insert above. Note: we can compare more bits than
+ * the current node's because as long as they are identical, we
+ * know we descend along the correct side.
+ */
+ if (bit >= 0 && (bit < old_node_bit || old_node_bit < 0))
+ bit = string_equal_bits(new->key, old->key, bit);
+
+ if (unlikely(bit < 0)) {
+ /* Perfect match, we must only stop on head of dup tree
+ * or walk down to a leaf.
+ */
+ if (old_node_bit < 0) {
+ /* We know here that string_equal_bits matched all
+ * bits and that we're on top of a dup tree, then
+ * we can perform the dup insertion and return.
+ */
+ struct eb_node *ret;
+ ret = eb_insert_dup(&old->node, &new->node);
+ return container_of(ret, struct ebmb_node, node);
+ }
+ /* OK so let's walk down */
+ }
+ else if (bit < old_node_bit || old_node_bit < 0) {
+ /* The tree did not contain the key, or we stopped on top of a dup
+ * tree, possibly containing the key. In the former case, we insert
+ * <new> before the node <old>, and set ->bit to designate the lowest
+			 * bit position in <new> which applies to ->branches.b[]. In the latter
+ * case, we add the key to the existing dup tree. Note that we cannot
+ * enter here if we match an intermediate node's key that is not the
+ * head of a dup tree.
+ */
+ eb_troot_t *new_left, *new_rght;
+ eb_troot_t *new_leaf, *old_node;
+
+ new_left = eb_dotag(&new->node.branches, EB_LEFT);
+ new_rght = eb_dotag(&new->node.branches, EB_RGHT);
+ new_leaf = eb_dotag(&new->node.branches, EB_LEAF);
+ old_node = eb_dotag(&old->node.branches, EB_NODE);
+
+ new->node.node_p = old->node.node_p;
+
+ /* we can never match all bits here */
+ diff = cmp_bits(new->key, old->key, bit);
+ if (diff < 0) {
+ new->node.leaf_p = new_left;
+ old->node.node_p = new_rght;
+ new->node.branches.b[EB_LEFT] = new_leaf;
+ new->node.branches.b[EB_RGHT] = old_node;
+ }
+ else {
+ old->node.node_p = new_left;
+ new->node.leaf_p = new_rght;
+ new->node.branches.b[EB_LEFT] = old_node;
+ new->node.branches.b[EB_RGHT] = new_leaf;
+ }
+ break;
+ }
+
+ /* walk down */
+ root = &old->node.branches;
+ side = (new->key[old_node_bit >> 3] >> (~old_node_bit & 7)) & 1;
+ troot = root->b[side];
+ }
+
+ /* Ok, now we are inserting <new> between <root> and <old>. <old>'s
+ * parent is already set to <new>, and the <root>'s branch is still in
+ * <side>. Update the root's leaf till we have it. Note that we can also
+ * find the side by checking the side of new->node.node_p.
+ */
+
+ /* We need the common higher bits between new->key and old->key.
+ * This number of bits is already in <bit>.
+	 * NOTE: we can't get here with bit < 0 since we would have found a dup !
+ */
+ new->node.bit = bit;
+ root->b[side] = eb_dotag(&new->node.branches, EB_NODE);
+ return new;
+}
+
+#endif /* _EBSTTREE_H */
+
--- /dev/null
+/*
+ * Elastic Binary Trees - exported generic functions
+ * Version 6.0.6
+ * (C) 2002-2011 - Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "ebtree.h"
+
+void eb_delete(struct eb_node *node)
+{
+ __eb_delete(node);
+}
+
+/* used by insertion primitives */
+REGPRM1 struct eb_node *eb_insert_dup(struct eb_node *sub, struct eb_node *new)
+{
+ return __eb_insert_dup(sub, new);
+}
--- /dev/null
+/*
+ * Elastic Binary Trees - generic macros and structures.
+ * Version 6.0.6
+ * (C) 2002-2011 - Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+
+
+/*
+ General idea:
+ -------------
+ In a radix binary tree, we may have up to 2N-1 nodes for N keys if all of
+ them are leaves. If we find a way to differentiate intermediate nodes (later
+ called "nodes") and final nodes (later called "leaves"), and we associate
+ them by two, it is possible to build sort of a self-contained radix tree with
+ intermediate nodes always present. It will not be as cheap as the ultree for
+ optimal cases as shown below, but the optimal case almost never happens :
+
+ Eg, to store 8, 10, 12, 13, 14 :
+
+ ultree this theoretical tree
+
+ 8 8
+ / \ / \
+ 10 12 10 12
+ / \ / \
+ 13 14 12 14
+ / \
+ 12 13
+
+ Note that on real-world tests (with a scheduler), it was verified that the
+ case with data on an intermediate node never happens. This is because the
+ data spectrum is too large for such coincidences to happen. It would require
+ for instance that a task has its expiration time at an exact second, with
+ other tasks sharing that second. This is too rare to try to optimize for it.
+
+ What is interesting is that the node will only be added above the leaf when
+ necessary, which implies that it will always remain somewhere above it. So
+ both the leaf and the node can share the exact value of the leaf, because
+ when going down the node, the bit mask will be applied to comparisons. So we
+ are tempted to have one single key shared between the node and the leaf.
+
+ The bit only serves the nodes, and the dups only serve the leaves. So we can
+ put a lot of information in common. This results in one single entity with
+ two branch pointers and two parent pointers, one for the node part, and one
+ for the leaf part :
+
+ node's leaf's
+ parent parent
+ | |
+ [node] [leaf]
+ / \
+ left right
+ branch branch
+
+ The node may very well refer to its leaf counterpart in one of its branches,
+ indicating that its own leaf is just below it :
+
+ node's
+ parent
+ |
+ [node]
+ / \
+ left [leaf]
+ branch
+
+ Adding keys in such a tree simply consists in inserting nodes between
+ other nodes and/or leaves :
+
+ [root]
+ |
+ [node2]
+ / \
+ [leaf1] [node3]
+ / \
+ [leaf2] [leaf3]
+
+ On this diagram, we notice that [node2] and [leaf2] have been pulled away
+ from each other due to the insertion of [node3], just as if there would be
+ an elastic between both parts. This elastic-like behaviour gave its name to
+ the tree : "Elastic Binary Tree", or "EBtree". The entity which associates a
+ node part and a leaf part will be called an "EB node".
+
+ We also notice on the diagram that there is a root entity required to attach
+ the tree. It only contains two branches and there is nothing above it. This
+ is an "EB root". Some will note that [leaf1] has no [node1]. One property of
+ the EBtree is that all nodes have their branches filled, and that if a node
+ has only one branch, it does not need to exist. Here, [leaf1] was added
+ below [root] and did not need any node.
+
+ An EB node contains :
+ - a pointer to the node's parent (node_p)
+ - a pointer to the leaf's parent (leaf_p)
+ - two branches pointing to lower nodes or leaves (branches)
+ - a bit position (bit)
+ - an optional key.
+
+ The key here is optional because it's used only during insertion, in order
+ to classify the nodes. Nothing else in the tree structure requires knowledge
+ of the key. This makes it possible to write type-agnostic primitives for
+ everything, and type-specific insertion primitives. This has led to consider
+ two types of EB nodes. The type-agnostic ones will serve as a header for the
+ other ones, and will simply be called "struct eb_node". The other ones will
+ have their type indicated in the structure name. Eg: "struct eb32_node" for
+ nodes carrying 32 bit keys.
+
+ We will also note that the two branches in a node serve exactly the same
+ purpose as an EB root. For this reason, a "struct eb_root" will be used as
+ well inside the struct eb_node. In order to ease pointer manipulation and
+ ROOT detection when walking upwards, all the pointers inside an eb_node will
+ point to the eb_root part of the referenced EB nodes, relying on the same
+ principle as the linked lists in Linux.
+
+ Another important point to note, is that when walking inside a tree, it is
+ very convenient to know where a node is attached in its parent, and what
+ type of branch it has below it (leaf or node). In order to simplify the
+ operations and to speed up the processing, it was decided in this specific
+ implementation to use the lowest bit from the pointer to designate the side
+ of the upper pointers (left/right) and the type of a branch (leaf/node).
+ This practice is not mandatory by design, but an implementation-specific
+ optimisation permitted on all platforms on which data must be aligned. All
+ known 32 bit platforms align their integers and pointers to 32 bits, leaving
+ the two lower bits unused. So, we say that the pointers are "tagged". And
+ since they designate pointers to root parts, we simply call them
+ "tagged root pointers", or "eb_troot" in the code.
+
+ Duplicate keys are stored in a special manner. When inserting a key, if
+ the same one is found, then an incremental binary tree is built at this
+ place from these keys. This ensures that no special case has to be written
+ to handle duplicates when walking through the tree or when deleting entries.
+ It also guarantees that duplicates will be walked in the exact same order
+ they were inserted. This is very important when trying to achieve fair
+ processing distribution for instance.
+
+ Algorithmic complexity can be derived from 3 variables :
+ - the number of possible different keys in the tree : P
+ - the number of entries in the tree : N
+ - the number of duplicates for one key : D
+
+ Note that this tree is deliberately NOT balanced. For this reason, the worst
+ case may happen with a small tree (eg: 32 distinct keys of one bit). BUT,
+ the operations required to manage such data are so cheap that they make
+ it worth using it even under such conditions. For instance, a balanced tree
+ may require only 6 levels to store those 32 keys when this tree will
+ require 32. But if per-level operations are 5 times cheaper, it wins.
+
+ Minimal, Maximal and Average times are specified in number of operations.
+ Minimal is given for best condition, Maximal for worst condition, and the
+ average is reported for a tree containing random keys. An operation
+ generally consists in jumping from one node to the other.
+
+ Complexity :
+ - lookup : min=1, max=log(P), avg=log(N)
+ - insertion from root : min=1, max=log(P), avg=log(N)
+ - insertion of dups : min=1, max=log(D), avg=log(D)/2 after lookup
+ - deletion : min=1, max=1, avg=1
+ - prev/next : min=1, max=log(P), avg=2 :
+ N/2 nodes need 1 hop => 1*N/2
+ N/4 nodes need 2 hops => 2*N/4
+ N/8 nodes need 3 hops => 3*N/8
+ ...
+ N/2^k nodes need k hops => k*N/2^k
+ Total cost for all N nodes : sum[k=1..log2(N)](k*N/2^k) =~ 2*N
+ Average cost across N nodes = total / N =~ sum[k>=1](k/2^k) = 2
+
+ This design is currently limited to only two branches per node. Most of the
+ tree descent algorithm would be compatible with more branches (eg: 4, to cut
+ the height in half), but this would probably require more complex operations
+ and the deletion algorithm would be problematic.
+
+ Useful properties :
+ - a node is always added above the leaf it is tied to, and never can get
+ below nor in another branch. This implies that leaves directly attached
+ to the root do not use their node part, which is indicated by a NULL
+ value in node_p. This also enhances the cache efficiency when walking
+ down the tree, because when the leaf is reached, its node part will
+ already have been visited (unless it's the first leaf in the tree).
+
+ - pointers to lower nodes or leaves are stored in "branch" pointers. Only
+ the root node may have a NULL in either branch, it is not possible for
+ other branches. Since the nodes are attached to the left branch of the
+ root, it is not possible to see a NULL left branch when walking up a
+ tree. Thus, an empty tree is immediately identified by a NULL left
+ branch at the root. Conversely, the one and only way to identify the
+ root node is to check that its right branch is NULL. Note that the
+ NULL pointer may have a few low-order bits set.
+
+ - a node connected to its own leaf will have branch[0|1] pointing to
+ itself, and leaf_p pointing to itself.
+
+ - a node can never have node_p pointing to itself.
+
+ - a node is linked in a tree if and only if it has a non-null leaf_p.
+
+ - a node can never have both branches equal, except for the root which can
+ have them both NULL.
+
+ - deletion only applies to leaves. When a leaf is deleted, its parent must
+ be released too (unless it's the root), and its sibling must attach to
+ the grand-parent, replacing the parent. Also, when a leaf is deleted,
+ the node tied to this leaf will be removed and must be released too. If
+ this node is different from the leaf's parent, the freshly released
+ leaf's parent will be used to replace the node which must go. A released
+ node will never be used anymore, so there's no point in tracking it.
+
+ - the bit index in a node indicates the bit position in the key which is
+ represented by the branches. That means that a node with (bit == 0) is
+ just above two leaves. Negative bit values are used to build a duplicate
+ tree. The first node above two identical leaves gets (bit == -1). This
+ value logarithmically decreases as the duplicate tree grows. During
+ duplicate insertion, a node is inserted above the highest bit value (the
+ lowest absolute value) in the tree during the right-sided walk. If bit
+ -1 is not encountered (highest < -1), we insert above the last leaf.
+ Otherwise, we insert above the node with the highest value which was not
+ equal to the one of its parent + 1.
+
+ - the "eb_next" primitive walks from left to right, which means from lower
+ to higher keys. It returns duplicates in the order they were inserted.
+ The "eb_first" primitive returns the left-most entry.
+
+ - the "eb_prev" primitive walks from right to left, which means from
+ higher to lower keys. It returns duplicates in the opposite order they
+ were inserted. The "eb_last" primitive returns the right-most entry.
+
+ - a tree which has 1 in the lower bit of its root's right branch is a
+ tree with unique nodes. This means that a node inserted with a key
+ which already exists will not be inserted, and the previous entry
+ will be returned instead.
+
+ */
+
+#ifndef _EBTREE_H
+#define _EBTREE_H
+
+#include <stdlib.h>
+#include "compiler.h"
+
+static inline int flsnz8_generic(unsigned int x)
+{
+ int ret = 0;
+ if (x >> 4) { x >>= 4; ret += 4; }
+ return ret + ((0xFFFFAA50U >> (x << 1)) & 3) + 1;
+}
+
+/* Note: we never need to run fls on null keys, so we can optimize the fls
+ * function by removing a conditional jump.
+ */
+#if defined(__i386__) || defined(__x86_64__)
+/* this code is similar on 32 and 64 bit */
+static inline int flsnz(int x)
+{
+ int r;
+ __asm__("bsrl %1,%0\n"
+ : "=r" (r) : "rm" (x));
+ return r+1;
+}
+
+static inline int flsnz8(unsigned char x)
+{
+ int r;
+ __asm__("movzbl %%al, %%eax\n"
+ "bsrl %%eax,%0\n"
+ : "=r" (r) : "a" (x));
+ return r+1;
+}
+
+#else
+// returns 1 to 32 for 1<<0 to 1<<31. Undefined for 0.
+#define flsnz(___a) ({ \
+ register int ___x, ___bits = 0; \
+ ___x = (___a); \
+ if (___x & 0xffff0000) { ___x &= 0xffff0000; ___bits += 16;} \
+ if (___x & 0xff00ff00) { ___x &= 0xff00ff00; ___bits += 8;} \
+ if (___x & 0xf0f0f0f0) { ___x &= 0xf0f0f0f0; ___bits += 4;} \
+ if (___x & 0xcccccccc) { ___x &= 0xcccccccc; ___bits += 2;} \
+ if (___x & 0xaaaaaaaa) { ___x &= 0xaaaaaaaa; ___bits += 1;} \
+ ___bits + 1; \
+ })
+
+static inline int flsnz8(unsigned int x)
+{
+ return flsnz8_generic(x);
+}
+
+
+#endif
+
+static inline int fls64(unsigned long long x)
+{
+ unsigned int h;
+ unsigned int bits = 32;
+
+ h = x >> 32;
+ if (!h) {
+ h = x;
+ bits = 0;
+ }
+ return flsnz(h) + bits;
+}
+
+#define fls_auto(x) ((sizeof(x) > 4) ? fls64(x) : flsnz(x))
+
+/* Linux-like "container_of". It returns a pointer to the structure of type
+ * <type> which has its member <name> stored at address <ptr>.
+ */
+#ifndef container_of
+#define container_of(ptr, type, name) ((type *)(((void *)(ptr)) - ((long)&((type *)0)->name)))
+#endif
+
+/* returns a pointer to the structure of type <type> which has its member <name>
+ * stored at address <ptr>, unless <ptr> is 0, in which case 0 is returned.
+ */
+#ifndef container_of_safe
+#define container_of_safe(ptr, type, name) \
+ ({ void *__p = (ptr); \
+ __p ? (type *)(__p - ((long)&((type *)0)->name)) : (type *)0; \
+ })
+#endif
+
+/* Number of bits per node, and number of leaves per node */
+#define EB_NODE_BITS 1
+#define EB_NODE_BRANCHES (1 << EB_NODE_BITS)
+#define EB_NODE_BRANCH_MASK (EB_NODE_BRANCHES - 1)
+
+/* Be careful not to tweak those values. The walking code is optimized for NULL
+ * detection on the assumption that the following values are intact.
+ */
+#define EB_LEFT 0
+#define EB_RGHT 1
+#define EB_LEAF 0
+#define EB_NODE 1
+
+/* Tags to set in root->b[EB_RGHT] :
+ * - EB_NORMAL is a normal tree which stores duplicate keys.
+ * - EB_UNIQUE is a tree which stores unique keys.
+ */
+#define EB_NORMAL 0
+#define EB_UNIQUE 1
+
+/* This is the same as an eb_node pointer, except that the lower bit embeds
+ * a tag. See eb_dotag()/eb_untag()/eb_gettag(). This tag has two meanings :
+ * - 0=left, 1=right to designate the parent's branch for leaf_p/node_p
+ * - 0=link, 1=leaf to designate the branch's type for branch[]
+ */
+typedef void eb_troot_t;
+
+/* The eb_root connects the node which contains it, to two nodes below it, one
+ * of which may be the same node. At the top of the tree, we use an eb_root
+ * too, which always has its right branch NULL (+/1 low-order bits).
+ */
+struct eb_root {
+ eb_troot_t *b[EB_NODE_BRANCHES]; /* left and right branches */
+};
+
+/* The eb_node contains the two parts, one for the leaf, which always exists,
+ * and one for the node, which remains unused in the very first node inserted
+ * into the tree. This structure is 20 bytes per node on 32-bit machines. Do
+ * not change the order, benchmarks have shown that it's optimal this way.
+ */
+struct eb_node {
+ struct eb_root branches; /* branches, must be at the beginning */
+ eb_troot_t *node_p; /* link node's parent */
+ eb_troot_t *leaf_p; /* leaf node's parent */
+ short int bit; /* link's bit position. */
+ short unsigned int pfx; /* data prefix length, always related to leaf */
+} __attribute__((packed));
+
+/* Return the structure of type <type> whose member <member> points to <ptr> */
+#define eb_entry(ptr, type, member) container_of(ptr, type, member)
+
+/* The root of a tree is an eb_root initialized with both pointers NULL.
+ * During its life, only the left pointer will change. The right one will
+ * always remain NULL, which is the way we detect it.
+ */
+#define EB_ROOT \
+ (struct eb_root) { \
+ .b = {[0] = NULL, [1] = NULL }, \
+ }
+
+#define EB_ROOT_UNIQUE \
+ (struct eb_root) { \
+ .b = {[0] = NULL, [1] = (void *)1 }, \
+ }
+
+#define EB_TREE_HEAD(name) \
+ struct eb_root name = EB_ROOT
+
+
+/***************************************\
+ * Private functions. Not for end-user *
+\***************************************/
+
+/* Converts a root pointer to its equivalent eb_troot_t pointer,
+ * ready to be stored in ->branch[], leaf_p or node_p. NULL is not
+ * conserved. To be used with EB_LEAF, EB_NODE, EB_LEFT or EB_RGHT in <tag>.
+ */
+static inline eb_troot_t *eb_dotag(const struct eb_root *root, const int tag)
+{
+ return (eb_troot_t *)((void *)root + tag);
+}
+
+/* Converts an eb_troot_t pointer to its equivalent eb_root pointer,
+ * for use with pointers from ->branch[], leaf_p or node_p. NULL is conserved
+ * as long as the tree is not corrupted. To be used with EB_LEAF, EB_NODE,
+ * EB_LEFT or EB_RGHT in <tag>.
+ */
+static inline struct eb_root *eb_untag(const eb_troot_t *troot, const int tag)
+{
+ return (struct eb_root *)((void *)troot - tag);
+}
+
+/* returns the tag associated with an eb_troot_t pointer */
+static inline int eb_gettag(eb_troot_t *troot)
+{
+ return (unsigned long)troot & 1;
+}
+
+/* Converts a root pointer to its equivalent eb_troot_t pointer and clears the
+ * tag, no matter what its value was.
+ */
+static inline struct eb_root *eb_clrtag(const eb_troot_t *troot)
+{
+ return (struct eb_root *)((unsigned long)troot & ~1UL);
+}
+
+/* Returns a pointer to the eb_node holding <root> */
+static inline struct eb_node *eb_root_to_node(struct eb_root *root)
+{
+ return container_of(root, struct eb_node, branches);
+}
+
+/* Walks down starting at root pointer <start>, and always walking on side
+ * <side>. It either returns the node hosting the first leaf on that side,
+ * or NULL if no leaf is found. <start> may either be NULL or a branch pointer.
+ * The pointer to the leaf (or NULL) is returned.
+ */
+static inline struct eb_node *eb_walk_down(eb_troot_t *start, unsigned int side)
+{
+ /* A NULL pointer on an empty tree root will be returned as-is */
+ while (eb_gettag(start) == EB_NODE)
+ start = (eb_untag(start, EB_NODE))->b[side];
+ /* NULL is left untouched (root==eb_node, EB_LEAF==0) */
+ return eb_root_to_node(eb_untag(start, EB_LEAF));
+}
+
+/* This function is used to build a tree of duplicates by adding a new node to
+ * a subtree of at least 2 entries. It will probably never need to be
+ * inlined, and it is not for the end user.
+ */
+static forceinline struct eb_node *
+__eb_insert_dup(struct eb_node *sub, struct eb_node *new)
+{
+ struct eb_node *head = sub;
+
+ eb_troot_t *new_left = eb_dotag(&new->branches, EB_LEFT);
+ eb_troot_t *new_rght = eb_dotag(&new->branches, EB_RGHT);
+ eb_troot_t *new_leaf = eb_dotag(&new->branches, EB_LEAF);
+
+ /* first, identify the deepest hole on the right branch */
+ while (eb_gettag(head->branches.b[EB_RGHT]) != EB_LEAF) {
+ struct eb_node *last = head;
+ head = container_of(eb_untag(head->branches.b[EB_RGHT], EB_NODE),
+ struct eb_node, branches);
+ if (head->bit > last->bit + 1)
+ sub = head; /* there's a hole here */
+ }
+
+ /* Here we have a leaf attached to (head)->b[EB_RGHT] */
+ if (head->bit < -1) {
+ /* A hole exists just before the leaf, we insert there */
+ new->bit = -1;
+ sub = container_of(eb_untag(head->branches.b[EB_RGHT], EB_LEAF),
+ struct eb_node, branches);
+ head->branches.b[EB_RGHT] = eb_dotag(&new->branches, EB_NODE);
+
+ new->node_p = sub->leaf_p;
+ new->leaf_p = new_rght;
+ sub->leaf_p = new_left;
+ new->branches.b[EB_LEFT] = eb_dotag(&sub->branches, EB_LEAF);
+ new->branches.b[EB_RGHT] = new_leaf;
+ return new;
+ } else {
+ int side;
+ /* No hole was found before a leaf. We have to insert above
+ * <sub>. Note that we cannot be certain that <sub> is attached
+ * to the right of its parent, as this is only true if <sub>
+ * is inside the dup tree, not at the head.
+ */
+ new->bit = sub->bit - 1; /* install at the lowest level */
+ side = eb_gettag(sub->node_p);
+ head = container_of(eb_untag(sub->node_p, side), struct eb_node, branches);
+ head->branches.b[side] = eb_dotag(&new->branches, EB_NODE);
+
+ new->node_p = sub->node_p;
+ new->leaf_p = new_rght;
+ sub->node_p = new_left;
+ new->branches.b[EB_LEFT] = eb_dotag(&sub->branches, EB_NODE);
+ new->branches.b[EB_RGHT] = new_leaf;
+ return new;
+ }
+}
+
+
+/**************************************\
+ * Public functions, for the end-user *
+\**************************************/
+
+/* Return non-zero if the tree is empty, otherwise zero */
+static inline int eb_is_empty(struct eb_root *root)
+{
+ return !root->b[EB_LEFT];
+}
+
+/* Return non-zero if the node is a duplicate, otherwise zero */
+static inline int eb_is_dup(struct eb_node *node)
+{
+ return node->bit < 0;
+}
+
+/* Return the first leaf in the tree starting at <root>, or NULL if none */
+static inline struct eb_node *eb_first(struct eb_root *root)
+{
+ return eb_walk_down(root->b[0], EB_LEFT);
+}
+
+/* Return the last leaf in the tree starting at <root>, or NULL if none */
+static inline struct eb_node *eb_last(struct eb_root *root)
+{
+ return eb_walk_down(root->b[0], EB_RGHT);
+}
+
+/* Return previous leaf node before an existing leaf node, or NULL if none. */
+static inline struct eb_node *eb_prev(struct eb_node *node)
+{
+ eb_troot_t *t = node->leaf_p;
+
+ while (eb_gettag(t) == EB_LEFT) {
+ /* Walking up from left branch. We must ensure that we never
+ * walk beyond root.
+ */
+ if (unlikely(eb_clrtag((eb_untag(t, EB_LEFT))->b[EB_RGHT]) == NULL))
+ return NULL;
+ t = (eb_root_to_node(eb_untag(t, EB_LEFT)))->node_p;
+ }
+ /* Note that <t> cannot be NULL at this stage */
+ t = (eb_untag(t, EB_RGHT))->b[EB_LEFT];
+ return eb_walk_down(t, EB_RGHT);
+}
+
+/* Return next leaf node after an existing leaf node, or NULL if none. */
+static inline struct eb_node *eb_next(struct eb_node *node)
+{
+ eb_troot_t *t = node->leaf_p;
+
+ while (eb_gettag(t) != EB_LEFT)
+ /* Walking up from right branch, so we cannot be below root */
+ t = (eb_root_to_node(eb_untag(t, EB_RGHT)))->node_p;
+
+ /* Note that <t> cannot be NULL at this stage */
+ t = (eb_untag(t, EB_LEFT))->b[EB_RGHT];
+ if (eb_clrtag(t) == NULL)
+ return NULL;
+ return eb_walk_down(t, EB_LEFT);
+}
+
+/* Return previous leaf node within a duplicate sub-tree, or NULL if none. */
+static inline struct eb_node *eb_prev_dup(struct eb_node *node)
+{
+ eb_troot_t *t = node->leaf_p;
+
+ while (eb_gettag(t) == EB_LEFT) {
+ /* Walking up from left branch. We must ensure that we never
+ * walk beyond root.
+ */
+ if (unlikely(eb_clrtag((eb_untag(t, EB_LEFT))->b[EB_RGHT]) == NULL))
+ return NULL;
+ /* if the current node leaves a dup tree, quit */
+ if ((eb_root_to_node(eb_untag(t, EB_LEFT)))->bit >= 0)
+ return NULL;
+ t = (eb_root_to_node(eb_untag(t, EB_LEFT)))->node_p;
+ }
+ /* Note that <t> cannot be NULL at this stage */
+ if ((eb_root_to_node(eb_untag(t, EB_RGHT)))->bit >= 0)
+ return NULL;
+ t = (eb_untag(t, EB_RGHT))->b[EB_LEFT];
+ return eb_walk_down(t, EB_RGHT);
+}
+
+/* Return next leaf node within a duplicate sub-tree, or NULL if none. */
+static inline struct eb_node *eb_next_dup(struct eb_node *node)
+{
+ eb_troot_t *t = node->leaf_p;
+
+ while (eb_gettag(t) != EB_LEFT) {
+ /* Walking up from right branch, so we cannot be below root */
+ /* if the current node leaves a dup tree, quit */
+ if ((eb_root_to_node(eb_untag(t, EB_RGHT)))->bit >= 0)
+ return NULL;
+ t = (eb_root_to_node(eb_untag(t, EB_RGHT)))->node_p;
+ }
+
+ /* Note that <t> cannot be NULL at this stage */
+ if ((eb_root_to_node(eb_untag(t, EB_LEFT)))->bit >= 0)
+ return NULL;
+ t = (eb_untag(t, EB_LEFT))->b[EB_RGHT];
+ if (eb_clrtag(t) == NULL)
+ return NULL;
+ return eb_walk_down(t, EB_LEFT);
+}
+
+/* Return previous leaf node before an existing leaf node, skipping duplicates,
+ * or NULL if none. */
+static inline struct eb_node *eb_prev_unique(struct eb_node *node)
+{
+ eb_troot_t *t = node->leaf_p;
+
+ while (1) {
+ if (eb_gettag(t) != EB_LEFT) {
+ node = eb_root_to_node(eb_untag(t, EB_RGHT));
+ /* if we're right and not in duplicates, stop here */
+ if (node->bit >= 0)
+ break;
+ t = node->node_p;
+ }
+ else {
+ /* Walking up from left branch. We must ensure that we never
+ * walk beyond root.
+ */
+ if (unlikely(eb_clrtag((eb_untag(t, EB_LEFT))->b[EB_RGHT]) == NULL))
+ return NULL;
+ t = (eb_root_to_node(eb_untag(t, EB_LEFT)))->node_p;
+ }
+ }
+ /* Note that <t> cannot be NULL at this stage */
+ t = (eb_untag(t, EB_RGHT))->b[EB_LEFT];
+ return eb_walk_down(t, EB_RGHT);
+}
+
+/* Return next leaf node after an existing leaf node, skipping duplicates, or
+ * NULL if none.
+ */
+static inline struct eb_node *eb_next_unique(struct eb_node *node)
+{
+ eb_troot_t *t = node->leaf_p;
+
+ while (1) {
+ if (eb_gettag(t) == EB_LEFT) {
+ if (unlikely(eb_clrtag((eb_untag(t, EB_LEFT))->b[EB_RGHT]) == NULL))
+ return NULL; /* we reached root */
+ node = eb_root_to_node(eb_untag(t, EB_LEFT));
+ /* if we're left and not in duplicates, stop here */
+ if (node->bit >= 0)
+ break;
+ t = node->node_p;
+ }
+ else {
+ /* Walking up from right branch, so we cannot be below root */
+ t = (eb_root_to_node(eb_untag(t, EB_RGHT)))->node_p;
+ }
+ }
+
+ /* Note that <t> cannot be NULL at this stage */
+ t = (eb_untag(t, EB_LEFT))->b[EB_RGHT];
+ if (eb_clrtag(t) == NULL)
+ return NULL;
+ return eb_walk_down(t, EB_LEFT);
+}
+
+
+/* Removes a leaf node from the tree if it was still in it. Marks the node
+ * as unlinked.
+ */
+static forceinline void __eb_delete(struct eb_node *node)
+{
+ __label__ delete_unlink;
+ unsigned int pside, gpside, sibtype;
+ struct eb_node *parent;
+ struct eb_root *gparent;
+
+ if (!node->leaf_p)
+ return;
+
+ /* we need the parent, our side, and the grand parent */
+ pside = eb_gettag(node->leaf_p);
+ parent = eb_root_to_node(eb_untag(node->leaf_p, pside));
+
+ /* We likely have to release the parent link, unless it's the root,
+ * in which case we only set our branch to NULL. Note that we can
+ * only be attached to the root by its left branch.
+ */
+
+ if (eb_clrtag(parent->branches.b[EB_RGHT]) == NULL) {
+ /* we're just below the root, it's trivial. */
+ parent->branches.b[EB_LEFT] = NULL;
+ goto delete_unlink;
+ }
+
+ /* To release our parent, we have to identify our sibling, and reparent
+ * it directly to/from the grand parent. Note that the sibling can
+ * either be a link or a leaf.
+ */
+
+ gpside = eb_gettag(parent->node_p);
+ gparent = eb_untag(parent->node_p, gpside);
+
+ gparent->b[gpside] = parent->branches.b[!pside];
+ sibtype = eb_gettag(gparent->b[gpside]);
+
+ if (sibtype == EB_LEAF) {
+ eb_root_to_node(eb_untag(gparent->b[gpside], EB_LEAF))->leaf_p =
+ eb_dotag(gparent, gpside);
+ } else {
+ eb_root_to_node(eb_untag(gparent->b[gpside], EB_NODE))->node_p =
+ eb_dotag(gparent, gpside);
+ }
+ /* Mark the parent unused. Note that we do not check if the parent is
+ * our own node, but that's not a problem because if it is, it will be
+ * marked unused at the same time, which we'll use below to know we can
+ * safely remove it.
+ */
+ parent->node_p = NULL;
+
+ /* The parent node has been detached, and is currently unused. It may
+ * belong to another node, so we cannot remove it that way. Also, our
+ * own node part might still be used, so we can use this spare node
+ * to replace ours if needed.
+ */
+
+ /* If our link part is unused, we can safely exit now */
+ if (!node->node_p)
+ goto delete_unlink;
+
+ /* From now on, <node> and <parent> are necessarily different, and the
+ * <node>'s node part is in use. By definition, <parent> is at least
+ * below <node>, so keeping its key for the bit string is OK.
+ */
+
+ parent->node_p = node->node_p;
+ parent->branches = node->branches;
+ parent->bit = node->bit;
+
+ /* We must now update the new node's parent... */
+ gpside = eb_gettag(parent->node_p);
+ gparent = eb_untag(parent->node_p, gpside);
+ gparent->b[gpside] = eb_dotag(&parent->branches, EB_NODE);
+
+ /* ... and its branches */
+ for (pside = 0; pside <= 1; pside++) {
+ if (eb_gettag(parent->branches.b[pside]) == EB_NODE) {
+ eb_root_to_node(eb_untag(parent->branches.b[pside], EB_NODE))->node_p =
+ eb_dotag(&parent->branches, pside);
+ } else {
+ eb_root_to_node(eb_untag(parent->branches.b[pside], EB_LEAF))->leaf_p =
+ eb_dotag(&parent->branches, pside);
+ }
+ }
+ delete_unlink:
+ /* Now the node has been completely unlinked */
+ node->leaf_p = NULL;
+ return; /* tree is not empty yet */
+}
+
+/* Compare blocks <a> and <b> byte-to-byte, from bit <ignore> to bit <len-1>.
+ * Return the number of equal bits between strings, assuming that the first
+ * <ignore> bits are already identical. It is possible to return slightly more
+ * than <len> bits if <len> does not stop on a byte boundary and we find exact
+ * bytes. Note that parts or all of <ignore> bits may be rechecked. It is only
+ * passed here as a hint to speed up the check.
+ */
+static forceinline int equal_bits(const unsigned char *a,
+ const unsigned char *b,
+ int ignore, int len)
+{
+ for (ignore >>= 3, a += ignore, b += ignore, ignore <<= 3;
+ ignore < len; ) {
+ unsigned char c;
+
+ a++; b++;
+ ignore += 8;
+ c = b[-1] ^ a[-1];
+
+ if (c) {
+ /* OK now we know that old and new differ at byte <ptr> and that <c> holds
+ * the bit differences. We have to find what bit is differing and report
+ * it as the number of identical bits. Note that low bit numbers are
+ * assigned to high positions in the byte, as we compare them as strings.
+ */
+ ignore -= flsnz8(c);
+ break;
+ }
+ }
+ return ignore;
+}
+
+/* Check that the two blocks <a> and <b> are equal on <len> bits. If they are
+ * already known to be equal on some leading bytes, the number of bytes to skip may
+ * be passed in <skip>. It returns 0 if they match, otherwise non-zero.
+ */
+static forceinline int check_bits(const unsigned char *a,
+ const unsigned char *b,
+ int skip,
+ int len)
+{
+ int bit, ret;
+
+ /* This uncommon construction gives the best performance on x86 because
+ * it makes heavy use of multiple-index addressing and parallel instructions,
+ * and it prevents gcc from reordering the loop since it is already
+ * properly oriented. Tested to be fine with 2.95 to 4.2.
+ */
+ bit = ~len + (skip << 3) + 9; // = (skip << 3) + (8 - len)
+ ret = a[skip] ^ b[skip];
+ if (unlikely(bit >= 0))
+ return ret >> bit;
+ while (1) {
+ skip++;
+ if (ret)
+ return ret;
+ ret = a[skip] ^ b[skip];
+ bit += 8;
+ if (bit >= 0)
+ return ret >> bit;
+ }
+}
+
+
+/* Compare strings <a> and <b> byte-to-byte, from bit <ignore> to the last 0.
+ * Return the number of equal bits between strings, assuming that the first
+ * <ignore> bits are already identical. Note that parts or all of <ignore> bits
+ * may be rechecked. It is only passed here as a hint to speed up the check.
+ * The caller is responsible for not passing an <ignore> value larger than any
+ * of the two strings. However, referencing any bit from the trailing zero is
+ * permitted. Equal strings are reported as a negative number of bits, which
+ * indicates the end was reached.
+ */
+static forceinline int string_equal_bits(const unsigned char *a,
+ const unsigned char *b,
+ int ignore)
+{
+ int beg;
+ unsigned char c;
+
+ beg = ignore >> 3;
+
+ /* skip known and identical bits. We stop at the first different byte
+ * or at the first zero we encounter on either side.
+ */
+ while (1) {
+ unsigned char d;
+
+ c = a[beg];
+ d = b[beg];
+ beg++;
+
+ c ^= d;
+ if (c)
+ break;
+ if (!d)
+ return -1;
+ }
+ /* OK now we know that a and b differ at byte <beg>, or that both are zero.
+ * We have to find what bit is differing and report it as the number of
+ * identical bits. Note that low bit numbers are assigned to high positions
+ * in the byte, as we compare them as strings.
+ */
+ return (beg << 3) - flsnz8(c);
+}
+
+static forceinline int cmp_bits(const unsigned char *a, const unsigned char *b, unsigned int pos)
+{
+ unsigned int ofs;
+ unsigned char bit_a, bit_b;
+
+ ofs = pos >> 3;
+ pos = ~pos & 7;
+
+ bit_a = (a[ofs] >> pos) & 1;
+ bit_b = (b[ofs] >> pos) & 1;
+
+ return bit_a - bit_b; /* -1: a<b; 0: a=b; 1: a>b */
+}
+
+static forceinline int get_bit(const unsigned char *a, unsigned int pos)
+{
+ unsigned int ofs;
+
+ ofs = pos >> 3;
+ pos = ~pos & 7;
+ return (a[ofs] >> pos) & 1;
+}
+
+/* These functions are implemented in ebtree.c */
+void eb_delete(struct eb_node *node);
+REGPRM1 struct eb_node *eb_insert_dup(struct eb_node *sub, struct eb_node *new);
+
+#endif /* _EBTREE_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+# This sample configuration makes extensive use of the ACLs. It requires
+# HAProxy version 1.3.12 minimum.
+
+global
+ log loghost local0
+ log localhost local0 err
+ maxconn 250
+ uid 71
+ gid 71
+ chroot /var/empty
+ pidfile /var/run/haproxy.pid
+ daemon
+ quiet
+
+frontend http-in
+ bind :80
+ mode http
+ log global
+ clitimeout 30000
+ option httplog
+ option dontlognull
+ #option logasap
+ option httpclose
+ maxconn 100
+
+ capture request header Host len 20
+ capture request header User-Agent len 16
+ capture request header Content-Length len 10
+ capture request header Referer len 20
+ capture response header Content-Length len 10
+
+ # block any unwanted source IP addresses or networks
+ acl forbidden_src src 0.0.0.0/7 224.0.0.0/3
+ acl forbidden_src src_port 0:1023
+ block if forbidden_src
+
+ # block requests beginning with http:// on wrong domains
+ acl dangerous_pfx url_beg -i http://
+ acl valid_pfx url_reg -i ^http://[^/]*1wt\.eu/
+ block if dangerous_pfx !valid_pfx
+
+ # block apache chunk exploit, ...
+ acl forbidden_hdrs hdr_sub(transfer-encoding) -i chunked
+ acl forbidden_hdrs hdr_beg(host) -i apache- localhost
+
+ # ... some HTTP content smuggling and various other things
+ acl forbidden_hdrs hdr_cnt(host) gt 1
+ acl forbidden_hdrs hdr_cnt(content-length) gt 1
+ acl forbidden_hdrs hdr_val(content-length) lt 0
+ acl forbidden_hdrs hdr_cnt(proxy-authorization) gt 0
+ block if forbidden_hdrs
+
+ # block annoying worms that fill the logs...
+ acl forbidden_uris url_reg -i .*(\.|%2e)(\.|%2e)(%2f|%5c|/|\\\\)
+ acl forbidden_uris url_sub -i %00 <script xmlrpc.php
+ acl forbidden_uris path_end -i /root.exe /cmd.exe /default.ida /awstats.pl .asp .dll
+
+ # block other common attacks (awstats, manual discovery...)
+ acl forbidden_uris path_dir -i chat main.php read_dump.php viewtopic.php phpbb sumthin horde _vti_bin MSOffice
+ acl forbidden_uris url_reg -i (\.php\?temppath=|\.php\?setmodules=|[=:]http://)
+ block if forbidden_uris
+
+ # we rewrite the "options" request so that it only tries '*', and we
+ # only report GET, HEAD, POST and OPTIONS as valid methods
+ reqirep ^OPTIONS\ /.*HTTP/1\.[01]$ OPTIONS\ \\*\ HTTP/1.0
+ rspirep ^Allow:\ .* Allow:\ GET,\ HEAD,\ POST,\ OPTIONS
+
+ acl host_demo hdr_beg(host) -i demo.
+ acl host_www2 hdr_beg(host) -i www2.
+
+ use_backend demo if host_demo
+ use_backend www2 if host_www2
+ default_backend www
+
+backend www
+ mode http
+ source 192.168.21.2:0
+ balance roundrobin
+ cookie SERVERID
+ server www1 192.168.12.2:80 check inter 30000 rise 2 fall 3 maxconn 10
+ server back 192.168.11.2:80 check inter 30000 rise 2 fall 5 backup cookie back maxconn 8
+
+ # long timeout to support connection queueing
+ contimeout 20000
+ srvtimeout 20000
+ fullconn 100
+ redispatch
+ retries 3
+
+ option httpchk HEAD /
+ option forwardfor
+ option checkcache
+ option httpclose
+
+ # allow other syntactically valid requests, and block any other method
+ acl valid_method method GET HEAD POST OPTIONS
+ block if !valid_method
+ block if HTTP_URL_STAR !METH_OPTIONS
+ block if !HTTP_URL_SLASH !HTTP_URL_STAR !HTTP_URL_ABS
+
+ # remove unnecessary detail from the server version. Let's say
+ # it's an apache under Unix on the Formilux Distro.
+ rspidel ^Server:\
+ rspadd Server:\ Apache\ (Unix;\ Formilux/0.1.8)
+
+defaults non_standard_bck
+ mode http
+ source 192.168.21.2:0
+ option forwardfor
+ option httpclose
+ balance roundrobin
+ fullconn 100
+ contimeout 20000
+ srvtimeout 20000
+ retries 2
+
+backend www2
+ server www2 192.168.22.2:80 maxconn 10
+
+# end of defaults
+defaults none
+
+backend demo
+ mode http
+ balance roundrobin
+ stats enable
+ stats uri /
+ stats scope http-in
+ stats scope www
+ stats scope demo
--- /dev/null
+global
+# chroot /var/empty/
+# uid 451
+# gid 451
+ log 192.168.131.214:8514 local4 debug
+ maxconn 8192
+
+defaults
+ timeout connect 3500
+ timeout queue 11000
+ timeout tarpit 12000
+ timeout client 30000
+ timeout http-request 40000
+ timeout http-keep-alive 5000
+ timeout server 40000
+ timeout check 7000
+
+ option contstats
+ option log-health-checks
+
+################################
+userlist customer1
+ group adm users tiger,xdb
+ group dev users scott,tiger
+ group uat users boss,xdb,tiger
+ user scott insecure-password cat
+ user tiger insecure-password dog
+ user xdb insecure-password hello
+ user boss password $6$k6y3o.eP$JlKBx9za966ud67qe45NSQYf8Nw.XFuk8QVRevoLh1XPCQDCBPjcU2JtGBSS0MOQW2PFxHSwRv6J.C0/D7cV91
+
+userlist customer1alt
+ group adm
+ group dev
+ group uat
+ user scott insecure-password cat groups dev
+ user tiger insecure-password dog groups adm,dev,uat
+ user xdb insecure-password hello groups adm,uat
+ user boss password $6$k6y3o.eP$JlKBx9za966ud67qe45NSQYf8Nw.XFuk8QVRevoLh1XPCQDCBPjcU2JtGBSS0MOQW2PFxHSwRv6J.C0/D7cV91 groups uat
+
+# Both the customer1 and customer1alt userlists are functionally identical
+
+frontend c1
+ bind 127.101.128.1:8080
+ log global
+ mode http
+
+ acl host_stats hdr_beg(host) -i stats.local
+ acl host_dev hdr_beg(host) -i dev.local
+ acl host_uat hdr_beg(host) -i uat.local
+
+ acl auth_uat http_auth_group(customer1) uat
+
+ # auth for host_uat checked in frontend, use realm "uat"
+ http-request auth realm uat if host_uat !auth_uat
+
+ use_backend c1stats if host_stats
+ use_backend c1dev if host_dev
+ use_backend c1uat if host_uat
+
+backend c1uat
+ mode http
+ log global
+
+ server s6 192.168.152.206:80
+ server s7 192.168.152.207:80
+
+backend c1dev
+ mode http
+ log global
+
+ # require users from customer1 assigned to group dev
+ acl auth_ok http_auth_group(customer1) dev
+
+ # auth checked in backend, use default realm (c1dev)
+ http-request auth if !auth_ok
+
+ server s6 192.168.152.206:80
+ server s7 192.168.152.207:80
+
+backend c1stats
+ mode http
+ log global
+
+ # stats auth checked in backend, use default realm (Stats)
+ acl nagios src 192.168.126.31
+ acl guests src 192.168.162.0/24
+ acl auth_ok http_auth_group(customer1) adm
+
+ stats enable
+ stats refresh 60
+ stats uri /
+ stats scope c1
+ stats scope c1stats
+
+ # unconditionally deny guests, without checking auth or asking for a username/password
+ stats http-request deny if guests
+
+ # allow nagios without password, allow authenticated users
+ stats http-request allow if nagios
+ stats http-request allow if auth_ok
+
+ # ask for a username/password
+ stats http-request auth realm Stats
+
+
+################################
+userlist customer2
+ user peter insecure-password peter
+ user monica insecure-password monica
+
+frontend c2
+ bind 127.201.128.1:8080
+ log global
+ mode http
+
+ acl auth_ok http_auth(customer2)
+ acl host_b1 hdr(host) -i b1.local
+
+ http-request auth unless auth_ok
+
+ use_backend c2b1 if host_b1
+ default_backend c2b0
+
+backend c2b1
+ mode http
+ log global
+
+ server s1 192.168.152.201:80
+
+backend c2b0
+ mode http
+ log global
+
+ server s1 192.168.152.201:80
--- /dev/null
+#!/usr/bin/perl
+###################################################################################################################
+# $Id:: check 20 2007-02-23 14:26:44Z fabrice $
+# $Revision:: 20 $
+###################################################################################################################
+# Authors : Fabrice Dulaunoy <fabrice@dulaunoy.com>
+#
+# Copyright (C) 2006-2007 Fabrice Dulaunoy <fabrice@dulaunoy.com>
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms of the GNU General Public License as published by the
+# Free Software Foundation; either version 2 of the License, or (at your
+# option) any later version. See <http://www.fsf.org/copyleft/gpl.txt>.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+# for more details.
+###################################################################################################################
+#
+###################################################################################################################
+
+use strict;
+
+package MyPackage;
+use Config::General;
+use Getopt::Std;
+use LWP::UserAgent;
+use URI;
+use File::Basename;
+
+# CVS VERSION
+#my $VERSION = do { my @rev = ( q$Revision: 20 $ =~ /\d+/g ); sprintf "%d." . "%d" x $#rev, @rev };
+# SVN VERSION
+my $VERSION = sprintf "1.%02d", '$Revision: 20 $ ' =~ /(\d+)/;
+
+my %option;
+
+getopts( "vHhc:", \%option );
+
+if ( $option{ h } )
+{
+ print "Usage: $0 [options ...]\n\n";
+ print "Where options include:\n";
+ print "\t -h \t\t\tthis help (what else ?)\n";
+ print "\t -H \t\t\tshow a sample config file\n";
+ print "\t -v \t\t\tprint version and exit\n";
+ print "\t -c file \t\tuse config file (default /etc/check.conf)\n";
+ print "\n\t This is a small program that parses the config file \n";
+ print "\t and checks one or more conditions on one or more servers\n";
+ print "\t these conditions can be \n";
+ print "\t\t an HTTP return code list (with optional Host header and optional Basic Authentication) \n";
+ print "\t\t a regex over an HTTP GET (with optional Host header and optional Basic Authentication)\n";
+ print "\t\t a regex over an FTP GET (with optional Basic Authentication)\n";
+ print "\t\t an open TCP port\n";
+ print "\t the result state is an AND over all tests \n";
+ print "\t this result can be \n";
+ print "\t\t a simple HTTP return status (\"200 OK\" or \"503 Service Unavailable\") \n";
+ print "\t\t an HTML page with a status OK or NOK for each test\n";
+ print "\t\t an HTML page with a status OK or NOK for each test in a row of a TABLE\n";
+ print "\n\t The natural complement of this tool is the poll_check tool\n";
+ print "\t The return code of this tool is designed to fit the HAProxy requirements (a check on a port not related to the web server)\n";
+}
+
+if ( $option{ H } )
+{
+ print "\t A sample config file could be:\n";
+ print <<'EOF';
+
+ ###########################################################
+ # listening port ( default 9898 )
+ port 9899
+
+ # on which IP to bind (default 127.0.0.1 ) * = all IP
+ host 10.2.1.1
+
+ # which client addr is allowed ( default 127.0.0.0/8 )
+ #cidr_allow = 0.0.0.0/0
+
+ # verbosity from 0 to 4 (default 0 = no log )
+ log_level = 1
+
+ # daemonize (default 0 = no )
+ daemon = 1
+
+ # content put a HTML content after header
+ # (default 0 = no content 1 = html 2 = table )
+ content = 2
+
+ # reparse the config file at each request ( default 0 = no;
+ # otherwise only SIGHUP rereads the config file )
+ reparse = 1
+
+ # pid_file (default /var/run/check.pid )
+ # $$$ = basename of config file
+ # $$ = PID
+ pid_file=/var/run/CHECK_$$$.pid
+
+ # log_file (default /var/log/check.log )
+ # $$$ = basename of config file
+ # $$ = PID
+ log_file=/var/log/CHECK_$$$.log
+
+ # number of servers to keep running (default = 5)
+ min_servers = 2
+
+ # number of servers to have waiting for requests (default = 2)
+ min_spare_servers = 1
+
+ # maximum number of servers to have waiting for requests (default = 10)
+ max_spare_servers =1
+
+ # number of servers (default = 50)
+ max_servers = 2
+
+
+ ###########################################################
+ # a server to check
+ # type could be get , regex or tcp
+ #
+ # get = do an HTTP or FTP GET and check the result code against
+ # the comma-separated list provided ( default = 200,201 )
+ # hostheader is optional and sent to the server if provided
+ #
+ # regex = do an HTTP or FTP GET and check the content
+ # against the regex provided
+ # hostheader is optional and sent to the server if provided
+ #
+ # tcp = test if the tcp port provided is open
+ #
+ ###########################################################
+
+ <realserver>
+ url=http://127.0.0.1:80/apache2-default/index.html
+ type = get
+ code=200,201
+ hostheader = www.test.com
+ </realserver>
+
+
+ <realserver>
+ url=http://127.0.0.1:82/apache2-default/index.html
+ type = get
+ code=200,201
+ hostheader = www.myhost.com
+ </realserver>
+
+ <realserver>
+ url= http://10.2.2.1
+ type = regex
+ regex= /qdAbm/
+ </realserver>
+
+ <realserver>
+ type = tcp
+ url = 10.2.2.1
+ port =80
+ </realserver>
+
+ <realserver>
+ type = get
+ url = ftp://USER:PASSWORD@10.2.3.1
+ code=200,201
+ </realserver>
+ ###########################################################
+
+
+
+EOF
+
+}
+
+if ( $option{ h } || $option{ H } )
+{
+ exit;
+}
+
+if ( $option{ v } ) { print "$VERSION\n"; exit; }
+
+use vars qw(@ISA);
+use Net::Server::PreFork;
+@ISA = qw(Net::Server::PreFork);
+
+my $port;
+my $host;
+my $reparse;
+my $cidr_allow;
+my $log_level;
+my $log_file;
+my $pid_file;
+my $daemon;
+my $min_servers;
+my $min_spare_servers;
+my $max_spare_servers;
+my $max_servers;
+my $html_content;
+
+my $conf_file = $option{ c } || "/etc/check.conf";
+my $pwd = $ENV{ PWD };
+$conf_file =~ s/^\./$pwd/;
+$conf_file =~ s/^([^\/])/$pwd\/$1/;
+my $basename = basename( $conf_file, ( '.conf' ) );
+my $CONF = parse_conf( $conf_file );
+
+my $reparse_one = 0;
+
+$SIG{ HUP } = sub { $reparse_one = 1; };
+
+my @TEST;
+my $test_list = $CONF->{ realserver };
+if ( ref( $test_list ) eq "ARRAY" )
+{
+ @TEST = @{ $test_list };
+}
+else
+{
+ @TEST = ( $test_list );
+}
+
+my $server = MyPackage->new(
+ {
+ port => $port,
+ host => $host,
+ cidr_allow => $cidr_allow,
+ log_level => $log_level,
+ child_communication => 1,
+ setsid => $daemon,
+ log_file => $log_file,
+ pid_file => $pid_file,
+ min_servers => $min_servers,
+ min_spare_servers => $min_spare_servers,
+ max_spare_servers => $max_spare_servers,
+ max_servers => $max_servers,
+ }
+);
+
+$server->run();
+exit;
+
+sub process_request
+{
+ my $self = shift;
+ if ( $reparse || $reparse_one )
+ {
+ $CONF = parse_conf( $conf_file );
+ }
+ my $result = 0;
+ my @TEST;
+ my $test_list = $CONF->{ realserver };
+
+ if ( ref( $test_list ) eq "ARRAY" )
+ {
+ @TEST = @{ $test_list };
+ }
+ else
+ {
+ @TEST = ( $test_list );
+ }
+
+ my $test_item = 0;
+ my $html_data = '';
+ foreach my $test ( @TEST )
+ {
+ my $uri;
+ my $authority;
+ my $URL = $test->{ url };
+ $uri = URI->new( $URL );
+ $authority = $uri->authority;
+
+ if ( exists $test->{ type } )
+ {
+ if ( $test->{ type } =~ /get/i )
+ {
+ my $allow_code = $test->{ code } || '200,201';
+ $test_item++;
+ my $host = $test->{ hostheader } || $authority;
+ my $res = get( $URL, $allow_code, $host );
+ if ( $html_content == 1 )
+ {
+ if ( $res )
+ {
+ $html_data .= "GET OK $URL<br>\r\n";
+ }
+ else
+ {
+ $html_data .= "GET NOK $URL<br>\r\n";
+ }
+ }
+ if ( $html_content == 2 )
+ {
+ if ( $res )
+ {
+ $html_data .= "<tr><td>GET</td><td>OK</td><td>$URL</td></tr>\r\n";
+ }
+ else
+ {
+ $html_data .= "<tr><td>GET</td><td>NOK</td><td>$URL</td></tr>\r\n";
+ }
+ }
+ $result += $res;
+ }
+ if ( $test->{ type } =~ /regex/i )
+ {
+ my $regex = $test->{ regex };
+ $test_item++;
+ my $host = $test->{ hostheader } || $authority;
+ my $res = regex( $URL, $regex, $host );
+ if ( $html_content == 1 )
+ {
+ if ( $res )
+ {
+ $html_data .= "REGEX OK $URL<br>\r\n";
+ }
+ else
+ {
+ $html_data .= "REGEX NOK $URL<br>\r\n";
+ }
+ }
+ if ( $html_content == 2 )
+ {
+ if ( $res )
+ {
+ $html_data .= "<tr><td>REGEX</td><td>OK</td><td>$URL</td></tr>\r\n";
+ }
+ else
+ {
+ $html_data .= "<tr><td>REGEX</td><td>NOK</td><td>$URL</td></tr>\r\n";
+ }
+ }
+ $result += $res;
+ }
+ if ( $test->{ type } =~ /tcp/i )
+ {
+ $test_item++;
+ my $PORT = $test->{ port } || 80;
+ my $res = TCP( $URL, $PORT );
+ if ( $html_content == 1 )
+ {
+ if ( $res )
+ {
+ $html_data .= "TCP OK $URL<br>\r\n";
+ }
+ else
+ {
+ $html_data .= "TCP NOK $URL<br>\r\n";
+ }
+ }
+ if ( $html_content == 2 )
+ {
+ if ( $res )
+ {
+ $html_data .= "<tr><td>TCP</td><td>OK</td><td>$URL</td></tr>\r\n";
+ }
+ else
+ {
+ $html_data .= "<tr><td>TCP</td><td>NOK</td><td>$URL</td></tr>\r\n";
+ }
+ }
+ $result += $res;
+ }
+ }
+ }
+
+ my $len;
+ if ( $html_content == 1 )
+ {
+ $html_data = "\r\n<html><body>\r\n$html_data</body></html>\r\n";
+ $len = ( length( $html_data ) ) - 2;
+ }
+ if ( $html_content == 2 )
+ {
+ $html_data = "\r\n<table align='center' border='1' >\r\n$html_data</table>\r\n";
+ $len = ( length( $html_data ) ) - 2;
+ }
+
+ if ( $result != $test_item )
+ {
+ my $header = "HTTP/1.0 503 Service Unavailable\r\n";
+ if ( $html_content )
+ {
+ $header .= "Content-Length: $len\r\nContent-Type: text/html; charset=iso-8859-1\r\n";
+ }
+ print $header . $html_data;
+ return;
+ }
+ my $header = "HTTP/1.0 200 OK\r\n";
+ if ( $html_content )
+ {
+ $header .= "Content-Length: $len\r\nContent-Type: text/html; charset=iso-8859-1\r\n";
+ }
+ print $header. $html_data;
+}
+
+1;
+
+##########################################################
+##########################################################
+# function to run a REGEX over a GET on an URL
+# arg: url
+# regex to test (with an optional flag as in perl, e.g. /\bweb\d{2,3}/i )
+# host header to send
+# ret: 0 if no reply or no match
+# 1 if the regex matches
+##########################################################
+##########################################################
+sub regex
+{
+ my $url = shift;
+ my $regex = shift;
+ my $host = shift;
+
+ $regex =~ /\/(.*)\/(.*)/;
+ my $reg = $1;
+ my $ext = $2;
+ my %options;
+ $options{ 'agent' } = "LB_REGEX_PROBE/$VERSION";
+ $options{ 'timeout' } = 10;
+ my $ua = LWP::UserAgent->new( %options );
+ my $response = $ua->get( $url, "Host" => $host );
+ if ( $response->is_success )
+ {
+ my $html = $response->content;
+ if ( $ext =~ /i/ )
+ {
+ if ( $html =~ /$reg/si )
+ {
+ return 1;
+ }
+ }
+ else
+ {
+ if ( $html =~ /$reg/s )
+ {
+ return 1;
+ }
+ }
+ }
+ return 0;
+}
+
+##########################################################
+##########################################################
+# function to GET an URL (HTTP or FTP, e.g. ftp://user:password@host)
+# arg: url
+# allowed codes (comma-separated)
+# host header to send
+# ret: 0 if the returned code is not in the allowed list
+# 1 if the expected code is returned
+##########################################################
+##########################################################
+sub get
+{
+ my $url = shift;
+ my $code = shift;
+ my $host = shift;
+
+ $code =~ s/\s*//g;
+ my %codes = map { $_ => $_ } split /,/, $code;
+ my %options;
+ $options{ 'agent' } = "LB_HTTP_PROBE/$VERSION";
+ $options{ 'timeout' } = 10;
+ my $ua = LWP::UserAgent->new( %options );
+ my $response = $ua->get( $url, "Host" => $host );
+ if ( $response->is_success )
+ {
+ my $rc = $response->code; # use the public accessor rather than the _rc internal
+ if ( defined $codes{ $rc } )
+ {
+ return 1;
+ }
+ }
+ return 0;
+}
+
+##########################################################
+##########################################################
+# function to test a port on a host
+# arg: hostip
+# port
+# timeout
+# ret: 0 if not open
+# 1 if open
+##########################################################
+##########################################################
+sub TCP
+{
+ use IO::Socket::PortState qw(check_ports);
+ my $remote_host = shift;
+ my $remote_port = shift;
+ my $timeout = shift || 5; # callers above pass no timeout; default to 5 seconds
+
+ my %porthash = ( tcp => { $remote_port => { name => 'to_test', } } );
+ check_ports( $remote_host, $timeout, \%porthash );
+ return $porthash{ tcp }{ $remote_port }{ open };
+}
+
+##############################################
+# parse config file
+# IN: File PATH
+# Out: Ref to a hash with config data
+##############################################
+sub parse_conf
+{
+ my $file = shift;
+
+ my $conf = new Config::General(
+ -ConfigFile => $file,
+ -ExtendedAccess => 1,
+ -AllowMultiOptions => "yes"
+ );
+ my %config = $conf->getall;
+ $port = $config{ port } || 9898;
+ $host = $config{ host } || '127.0.0.1';
+ $reparse = $config{ reparse } || 0;
+ $cidr_allow = $config{ cidr_allow } || '127.0.0.0/8';
+ $log_level = $config{ log_level } || 0;
+ $log_file = $config{ log_file } || "/var/log/check.log";
+ $pid_file = $config{ pid_file } || "/var/run/check.pid";
+ $daemon = $config{ daemon } || 0;
+ $min_servers = $config{ min_servers } || 5;
+ $min_spare_servers = $config{ min_spare_servers } || 2;
+ $max_spare_servers = $config{ max_spare_servers } || 10;
+ $max_servers = $config{ max_servers } || 50;
+ $html_content = $config{ content } || 0;
+
+ $pid_file =~ s/\$\$\$/$basename/g;
+ $pid_file =~ s/\$\$/$$/g;
+ $log_file =~ s/\$\$\$/$basename/g;
+ $log_file =~ s/\$\$/$$/g;
+
+ # realserver may be a hash ref (single entry) or an array ref (several)
+ if ( !$config{ realserver } )
+ {
+ die "No farm to test\n";
+ }
+ return ( \%config );
+}
+
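
The health decision in process_request() above is an AND over all configured tests: each passing test adds 1 to $result, and only $result == $test_item yields a 200. A minimal shell sketch of that decision (the counts are illustrative):

```shell
# Sketch of the all-tests-must-pass decision made by process_request:
test_item=3   # number of tests configured
result=2      # number of tests that passed
if [ "$result" -eq "$test_item" ]; then
    echo 'HTTP/1.0 200 OK'
else
    echo 'HTTP/1.0 503 Service Unavailable'
fi
```

This is why HAProxy can treat the check daemon's port as a single aggregate health check for a server running several services.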
--- /dev/null
+
+# listening port ( default 9898 )
+port 9899
+
+# on which IP to bind (default 127.0.0.1 ) * = all IP
+#host 10.2.1.1
+
+# which client addr is allowed ( default 127.0.0.0/8 )
+#cidr_allow = 0.0.0.0/0
+
+# verbosity from 0 to 4 (default 0 = no log )
+log_level = 1
+
+# daemonize (default 0 = no )
+daemon = 1
+
+# content put a HTML content after header
+# (default 0 = no content 1 = html 2 = table )
+content = 2
+
+# reparse the config file at each request ( default 0 = no;
+# otherwise only SIGHUP rereads the config file )
+reparse = 1
+
+# pid_file (default /var/run/check.pid )
+# $$$ = basename of config file
+# $$ = PID
+pid_file=/var/run/CHECK_$$$.pid
+
+# log_file (default /var/log/check.log )
+# $$$ = basename of config file
+# $$ = PID
+log_file=/var/log/CHECK_$$$.log
+
+# number of servers to keep running (default = 5)
+min_servers = 2
+
+# number of servers to have waiting for requests (default = 2)
+min_spare_servers = 1
+
+# maximum number of servers to have waiting for requests (default = 10)
+max_spare_servers =1
+
+# number of servers (default = 50)
+max_servers = 2
+
+
+###########################################################
+# a server to check
+# type can be get, regex or tcp
+
+# get = do an HTTP or FTP GET and check the result code against
+# the comma-separated list provided ( default = 200,201 )
+# hostheader is optional and sent to the server if provided
+
+# regex = do an HTTP or FTP GET and check the content
+# against the regex provided
+# hostheader is optional and sent to the server if provided
+
+# tcp = test if the tcp port provided is open
+
+#<realserver>
+# url=http://127.0.0.1:80/apache2-default/index.html
+# type = get
+# code=200,201
+# hostheader = www.test.com
+#</realserver>
+
+
+#<realserver>
+# url=http://127.0.0.1:82/apache2-default/index.html
+# type = get
+# code=200,201
+# hostheader = www.myhost.com
+#</realserver>
+
+<realserver>
+ url= http://10.2.2.1
+ type = regex
+ regex= /qdAbm/
+</realserver>
+
+<realserver>
+ type = tcp
+ url = 10.2.2.1
+ port =80
+</realserver>
+
+#<realserver>
+# type = get
+# url = ftp://FTPuser:FTPpassword@10.2.3.1
+# code=200,201
+#</realserver>
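
The $$$ and $$ placeholders in pid_file and log_file above are expanded by the check script at startup: $$$ becomes the config file's basename, $$ the daemon's PID, with $$$ handled first. A sketch of the substitution with illustrative values:

```shell
# Expand the placeholders the way parse_conf does, $$$ first, then $$
# (basename and pid values here are made up for illustration):
basename=check
pid=12345
template='/var/run/CHECK_$$$.pid'
expanded=$(printf '%s' "$template" | sed -e 's/\$\$\$/'"$basename"'/g' -e 's/\$\$/'"$pid"'/g')
printf '%s\n' "$expanded"
```

So the pid_file above becomes e.g. /var/run/CHECK_check.pid when the daemon is started with /etc/check.conf.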
--- /dev/null
+#
+# This is a sample configuration. It illustrates how to separate static objects
+# traffic from dynamic traffic, and how to dynamically regulate the server load.
+#
+# It listens on 192.168.1.10:80, and directs all requests for Host 'img' or
+# URIs starting with /img or /css to a dedicated group of servers. URIs
+# starting with /admin/stats deliver the stats page.
+#
+
+global
+ maxconn 10000
+ stats socket /var/run/haproxy.stat mode 600 level admin
+ log 127.0.0.1 local0
+ uid 200
+ gid 200
+ chroot /var/empty
+ daemon
+
+# The public 'www' address in the DMZ
+frontend public
+ bind 192.168.1.10:80 name clear
+ #bind 192.168.1.10:443 ssl crt /etc/haproxy/haproxy.pem
+ mode http
+ log global
+ option httplog
+ option dontlognull
+ monitor-uri /monitoruri
+ maxconn 8000
+ timeout client 30s
+
+ stats uri /admin/stats
+ use_backend static if { hdr_beg(host) -i img }
+ use_backend static if { path_beg /img /css }
+ default_backend dynamic
+
+# The static backend for 'Host: img', /img and /css.
+backend static
+ mode http
+ balance roundrobin
+ option prefer-last-server
+ retries 2
+ option redispatch
+ timeout connect 5s
+ timeout server 5s
+ option httpchk HEAD /favicon.ico
+ server statsrv1 192.168.1.8:80 check inter 1000
+ server statsrv2 192.168.1.9:80 check inter 1000
+
+# the application servers go here
+backend dynamic
+ mode http
+ balance roundrobin
+ retries 2
+ option redispatch
+ timeout connect 5s
+ timeout server 30s
+ timeout queue 30s
+ option httpchk HEAD /login.php
+ cookie DYNSRV insert indirect nocache
+ fullconn 4000 # the servers will be used at full load above this number of connections
+ server dynsrv1 192.168.1.1:80 minconn 50 maxconn 500 cookie s1 check inter 1000
+ server dynsrv2 192.168.1.2:80 minconn 50 maxconn 500 cookie s2 check inter 1000
+ server dynsrv3 192.168.1.3:80 minconn 50 maxconn 500 cookie s3 check inter 1000
+ server dynsrv4 192.168.1.4:80 minconn 50 maxconn 500 cookie s4 check inter 1000
+
--- /dev/null
+#!/bin/sh
+tr -d '\015' | sed -e 's,\(: Cookie:.*$\),'$'\e''\[35m\1'$'\e''\[0m,gi' -e 's,\(: Set-Cookie:.*$\),'$'\e''\[31m\1'$'\e''\[0m,gi' -e 's,\(^[^:]*:[^:]*srvhdr.*\)$,'$'\e''\[32m\1'$'\e''\[0m,i' -e 's,\(^[^:]*:[^:]*clihdr.*\)$,'$'\e''\[34m\1'$'\e''\[0m,i'
--- /dev/null
+#!/bin/sh
+(echo '<html><body><pre>'; tr -d '\015' | sed -e 's,\(: Cookie:.*$\),<font color="#e000c0">\1</font>,gi' -e 's,\(: Set-Cookie:.*$\),<font color="#e0a000">\1</font>,gi' -e 's,\(^[^:]*:[^:]*srvhdr.*\)$,<font color="#00a000">\1</font>,i' -e 's,\(^[^:]*:[^:]*clihdr.*\)$,<font color="#0000c0">\1</font>,i' -e 's,\(^.*\)$,<tt>\1</tt>,' ; echo '</pre></body></html>')
--- /dev/null
+#!/bin/bash
+if [ $# -lt 2 ]; then
+ echo "Usage: $0 regex debug_file > extracted_file"
+ exit 1
+fi
+word=$1
+file=$2
+exec grep $(for i in $(grep "$word" "$file" | cut -f1 -d: | sort -u); do echo -n '\('$i':\)\|'; done; echo '^$') "$file"
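
The script builds one grep alternation per session id found by the inner grep, then re-greps the whole file so that every line of a matching session is kept, not just the lines containing the word. The same idea spelled out step by step (the capture file and session ids below are made up):

```shell
# Extract whole sessions whose lines match a word, as the script does:
file=$(mktemp)
cat > "$file" <<'EOF'
0001: clihdr Host: a.example
0002: clihdr Host: b.example
0001: srvhdr Server: nginx
EOF
# session ids that contain the word:
ids=$(grep 'a.example' "$file" | cut -f1 -d: | sort -u)
# every line belonging to those sessions:
for i in $ids; do grep "^$i:" "$file"; done
rm -f "$file"
```

Here session 0001 is selected, so both its clihdr and srvhdr lines are printed while session 0002 is dropped.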
--- /dev/null
+HTTP/1.0 400 Bad request\r
+Cache-Control: no-cache\r
+Connection: close\r
+Content-Type: text/html\r
+\r
+<html><body><h1>400 Bad request</h1>
+Your browser sent an invalid request.
+</body></html>
+
--- /dev/null
+HTTP/1.0 403 Forbidden\r
+Cache-Control: no-cache\r
+Connection: close\r
+Content-Type: text/html\r
+\r
+<html><body><h1>403 Forbidden</h1>
+Request forbidden by administrative rules.
+</body></html>
+
--- /dev/null
+HTTP/1.0 408 Request Time-out\r
+Cache-Control: no-cache\r
+Connection: close\r
+Content-Type: text/html\r
+\r
+<html><body><h1>408 Request Time-out</h1>
+Your browser didn't send a complete request in time.
+</body></html>
+
--- /dev/null
+HTTP/1.0 500 Server Error\r
+Cache-Control: no-cache\r
+Connection: close\r
+Content-Type: text/html\r
+\r
+<html><body><h1>500 Server Error</h1>
+An internal server error occurred.
+</body></html>
+
--- /dev/null
+HTTP/1.0 502 Bad Gateway\r
+Cache-Control: no-cache\r
+Connection: close\r
+Content-Type: text/html\r
+\r
+<html><body><h1>502 Bad Gateway</h1>
+The server returned an invalid or incomplete response.
+</body></html>
+
--- /dev/null
+HTTP/1.0 503 Service Unavailable\r
+Cache-Control: no-cache\r
+Connection: close\r
+Content-Type: text/html\r
+\r
+<html><body><h1>503 Service Unavailable</h1>
+No server is available to handle this request.
+</body></html>
+
--- /dev/null
+HTTP/1.0 504 Gateway Time-out\r
+Cache-Control: no-cache\r
+Connection: close\r
+Content-Type: text/html\r
+\r
+<html><body><h1>504 Gateway Time-out</h1>
+The server didn't respond in time.
+</body></html>
+
--- /dev/null
+These files are default error files that can be customized
+if necessary. They are complete HTTP responses, so that
+everything is possible, including using redirects or setting
+special headers.
+
+They can be used with the 'errorfile' keyword like this:
+
+ errorfile 503 /etc/haproxy/errors/503.http
+
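
Because these files are served verbatim, a malformed status line or a missing blank line after the headers will confuse clients. A quick sanity check one can run on a custom errorfile (the file written here is a minimal illustrative example, not one of the shipped files):

```shell
# Write a minimal custom errorfile and verify it starts with a valid
# HTTP/1.0 status line; HAProxy sends the file as-is, so this must be right.
f=$(mktemp)
printf 'HTTP/1.0 503 Service Unavailable\r\nConnection: close\r\n\r\n<html></html>\n' > "$f"
head -n1 "$f" | grep -q '^HTTP/1\.[01] [0-9][0-9][0-9] ' && echo 'status line ok'
rm -f "$f"
```

The same grep can be pointed at /etc/haproxy/errors/*.http after editing them.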
--- /dev/null
+#!/bin/sh
+#
+# chkconfig: - 85 15
+# description: HA-Proxy is a TCP/HTTP reverse proxy which is particularly suited \
+# for high availability environments.
+# processname: haproxy
+# config: /etc/haproxy/haproxy.cfg
+# pidfile: /var/run/haproxy.pid
+
+# Script Author: Simon Matter <simon.matter@invoca.ch>
+# Version: 2004060600
+
+# Source function library.
+if [ -f /etc/init.d/functions ]; then
+ . /etc/init.d/functions
+elif [ -f /etc/rc.d/init.d/functions ] ; then
+ . /etc/rc.d/init.d/functions
+else
+ exit 0
+fi
+
+# Source networking configuration.
+. /etc/sysconfig/network
+
+# Check that networking is up.
+[ "${NETWORKING}" = "no" ] && exit 0
+
+# This is our service name
+BASENAME=`basename $0`
+if [ -L $0 ]; then
+ BASENAME=`find $0 -name $BASENAME -printf %l`
+ BASENAME=`basename $BASENAME`
+fi
+
+BIN=/usr/sbin/$BASENAME
+
+CFG=/etc/$BASENAME/$BASENAME.cfg
+[ -f $CFG ] || exit 1
+
+PIDFILE=/var/run/$BASENAME.pid
+LOCKFILE=/var/lock/subsys/$BASENAME
+
+RETVAL=0
+
+start() {
+ quiet_check
+ if [ $? -ne 0 ]; then
+ echo "Errors found in configuration file, check it with '$BASENAME check'."
+ return 1
+ fi
+
+ echo -n "Starting $BASENAME: "
+ daemon $BIN -D -f $CFG -p $PIDFILE
+ RETVAL=$?
+ echo
+ [ $RETVAL -eq 0 ] && touch $LOCKFILE
+ return $RETVAL
+}
+
+stop() {
+ echo -n "Shutting down $BASENAME: "
+ killproc $BASENAME -USR1
+ RETVAL=$?
+ echo
+ [ $RETVAL -eq 0 ] && rm -f $LOCKFILE
+ [ $RETVAL -eq 0 ] && rm -f $PIDFILE
+ return $RETVAL
+}
+
+restart() {
+ quiet_check
+ if [ $? -ne 0 ]; then
+ echo "Errors found in configuration file, check it with '$BASENAME check'."
+ return 1
+ fi
+ stop
+ start
+}
+
+reload() {
+ if ! [ -s $PIDFILE ]; then
+ return 0
+ fi
+
+ quiet_check
+ if [ $? -ne 0 ]; then
+ echo "Errors found in configuration file, check it with '$BASENAME check'."
+ return 1
+ fi
+ $BIN -D -f $CFG -p $PIDFILE -sf $(cat $PIDFILE)
+}
+
+check() {
+ $BIN -c -q -V -f $CFG
+}
+
+quiet_check() {
+ $BIN -c -q -f $CFG
+}
+
+rhstatus() {
+ status $BASENAME
+}
+
+condrestart() {
+ [ -e $LOCKFILE ] && restart || :
+}
+
+# See how we were called.
+case "$1" in
+ start)
+ start
+ ;;
+ stop)
+ stop
+ ;;
+ restart)
+ restart
+ ;;
+ reload)
+ reload
+ ;;
+ condrestart)
+ condrestart
+ ;;
+ status)
+ rhstatus
+ ;;
+ check)
+ check
+ ;;
+ *)
+ echo $"Usage: $BASENAME {start|stop|restart|reload|condrestart|status|check}"
+ exit 1
+esac
+
+exit $?
--- /dev/null
+Summary: HA-Proxy is a TCP/HTTP reverse proxy for high availability environments
+Name: haproxy
+Version: 1.6.3
+Release: 1
+License: GPL
+Group: System Environment/Daemons
+URL: http://haproxy.1wt.eu/
+Source0: http://haproxy.1wt.eu/download/1.6/src/%{name}-%{version}.tar.gz
+BuildRoot: %{_tmppath}/%{name}-%{version}-root
+BuildRequires: pcre-devel
+Requires: /sbin/chkconfig, /sbin/service
+
+%description
+HA-Proxy is a TCP/HTTP reverse proxy which is particularly suited for high
+availability environments. Indeed, it can:
+- route HTTP requests depending on statically assigned cookies
+- spread the load among several servers while assuring server persistence
+ through the use of HTTP cookies
+- switch to backup servers in the event a main one fails
+- accept connections to special ports dedicated to service monitoring
+- stop accepting connections without breaking existing ones
+- add/modify/delete HTTP headers both ways
+- block requests matching a particular pattern
+
+It requires very few resources. Its event-driven architecture allows it to
+easily handle thousands of simultaneous connections on hundreds of instances
+without risking the system's stability.
+
+%prep
+%setup -q
+
+# We don't want any perl dependencies in this RPM:
+%define __perl_requires /bin/true
+
+%build
+%{__make} USE_PCRE=1 DEBUG="" ARCH=%{_target_cpu} TARGET=linux26
+
+%install
+[ "%{buildroot}" != "/" ] && %{__rm} -rf %{buildroot}
+
+%{__install} -d %{buildroot}%{_sbindir}
+%{__install} -d %{buildroot}%{_sysconfdir}/rc.d/init.d
+%{__install} -d %{buildroot}%{_sysconfdir}/%{name}
+%{__install} -d %{buildroot}%{_mandir}/man1/
+
+%{__install} -s %{name} %{buildroot}%{_sbindir}/
+%{__install} -c -m 644 examples/%{name}.cfg %{buildroot}%{_sysconfdir}/%{name}/
+%{__install} -c -m 755 examples/%{name}.init %{buildroot}%{_sysconfdir}/rc.d/init.d/%{name}
+%{__install} -c -m 755 doc/%{name}.1 %{buildroot}%{_mandir}/man1/
+
+%clean
+[ "%{buildroot}" != "/" ] && %{__rm} -rf %{buildroot}
+
+%post
+/sbin/chkconfig --add %{name}
+
+%preun
+if [ $1 = 0 ]; then
+ /sbin/service %{name} stop >/dev/null 2>&1 || :
+ /sbin/chkconfig --del %{name}
+fi
+
+%postun
+if [ "$1" -ge "1" ]; then
+ /sbin/service %{name} condrestart >/dev/null 2>&1 || :
+fi
+
+%files
+%defattr(-,root,root)
+%doc CHANGELOG README examples/*.cfg doc/architecture.txt doc/configuration.txt doc/intro.txt doc/management.txt doc/proxy-protocol.txt
+%doc %{_mandir}/man1/%{name}.1*
+
+%attr(0755,root,root) %{_sbindir}/%{name}
+%dir %{_sysconfdir}/%{name}
+%attr(0644,root,root) %config(noreplace) %{_sysconfdir}/%{name}/%{name}.cfg
+%attr(0755,root,root) %config %{_sysconfdir}/rc.d/init.d/%{name}
+
+%changelog
+* Sun Dec 27 2015 Willy Tarreau <w@1wt.eu>
+- updated to 1.6.3
+
+* Tue Nov 3 2015 Willy Tarreau <w@1wt.eu>
+- updated to 1.6.2
+
+* Tue Oct 20 2015 Willy Tarreau <w@1wt.eu>
+- updated to 1.6.1
+
+* Tue Oct 13 2015 Willy Tarreau <w@1wt.eu>
+- updated to 1.6.0
+
+* Tue Oct 6 2015 Willy Tarreau <w@1wt.eu>
+- updated to 1.6-dev7
+
+* Mon Sep 28 2015 Willy Tarreau <w@1wt.eu>
+- updated to 1.6-dev6
+
+* Mon Sep 14 2015 Willy Tarreau <w@1wt.eu>
+- updated to 1.6-dev5
+
+* Sun Aug 30 2015 Willy Tarreau <w@1wt.eu>
+- updated to 1.6-dev4
+
+* Wed Jul 22 2015 Willy Tarreau <w@1wt.eu>
+- updated to 1.6-dev3
+
+* Wed Jun 17 2015 Willy Tarreau <w@1wt.eu>
+- updated to 1.6-dev2
+
+* Wed Mar 11 2015 Willy Tarreau <w@1wt.eu>
+- updated to 1.6-dev1
+
+* Thu Jun 19 2014 Willy Tarreau <w@1wt.eu>
+- updated to 1.6-dev0
+
+* Thu Jun 19 2014 Willy Tarreau <w@1wt.eu>
+- updated to 1.5.0
+
+* Wed May 28 2014 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev26
+
+* Sat May 10 2014 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev25
+
+* Sat Apr 26 2014 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev24
+
+* Wed Apr 23 2014 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev23
+
+* Mon Feb 3 2014 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev22
+
+* Tue Dec 17 2013 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev21
+
+* Mon Dec 16 2013 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev20
+
+* Mon Jun 17 2013 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev19
+
+* Wed Apr 3 2013 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev18
+
+* Fri Dec 28 2012 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev17
+
+* Mon Dec 24 2012 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev16
+
+* Wed Dec 12 2012 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev15
+
+* Mon Nov 26 2012 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev14
+
+* Thu Nov 22 2012 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev13
+
+* Mon Sep 10 2012 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev12
+
+* Mon Jun 4 2012 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev11
+
+* Mon May 14 2012 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev10
+
+* Tue May 8 2012 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev9
+
+* Mon Mar 26 2012 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev8
+
+* Sat Sep 10 2011 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev7
+
+* Fri Apr 8 2011 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev6
+
+* Tue Mar 29 2011 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev5
+
+* Sun Mar 13 2011 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev4
+
+* Thu Nov 11 2010 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev3
+
+* Sat Aug 28 2010 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev2
+
+* Wed Aug 25 2010 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev1
+
+* Sun May 23 2010 Willy Tarreau <w@1wt.eu>
+- updated to 1.5-dev0
+
+* Sun May 16 2010 Willy Tarreau <w@1wt.eu>
+- updated to 1.4.6
+
+* Thu May 13 2010 Willy Tarreau <w@1wt.eu>
+- updated to 1.4.5
+
+* Wed Apr 7 2010 Willy Tarreau <w@1wt.eu>
+- updated to 1.4.4
+
+* Tue Mar 30 2010 Willy Tarreau <w@1wt.eu>
+- updated to 1.4.3
+
+* Wed Mar 17 2010 Willy Tarreau <w@1wt.eu>
+- updated to 1.4.2
+
+* Thu Mar 4 2010 Willy Tarreau <w@1wt.eu>
+- updated to 1.4.1
+
+* Fri Feb 26 2010 Willy Tarreau <w@1wt.eu>
+- updated to 1.4.0
+
+* Tue Feb 2 2010 Willy Tarreau <w@1wt.eu>
+- updated to 1.4-rc1
+
+* Mon Jan 25 2010 Willy Tarreau <w@1wt.eu>
+- updated to 1.4-dev8
+
+* Mon Jan 25 2010 Willy Tarreau <w@1wt.eu>
+- updated to 1.4-dev7
+
+* Fri Jan 8 2010 Willy Tarreau <w@1wt.eu>
+- updated to 1.4-dev6
+
+* Sun Jan 3 2010 Willy Tarreau <w@1wt.eu>
+- updated to 1.4-dev5
+
+* Mon Oct 12 2009 Willy Tarreau <w@1wt.eu>
+- updated to 1.4-dev4
+
+* Thu Sep 24 2009 Willy Tarreau <w@1wt.eu>
+- updated to 1.4-dev3
+
+* Sun Aug 9 2009 Willy Tarreau <w@1wt.eu>
+- updated to 1.4-dev2
+
+* Wed Jul 29 2009 Willy Tarreau <w@1wt.eu>
+- updated to 1.4-dev1
+
+* Tue Jun 09 2009 Willy Tarreau <w@1wt.eu>
+- updated to 1.4-dev0
+
+* Sun May 10 2009 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.18
+
+* Sun Mar 29 2009 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.17
+
+* Sun Mar 22 2009 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.16
+
+* Sat Apr 19 2008 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.15
+
+* Wed Dec 5 2007 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.14
+
+* Thu Oct 18 2007 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.13
+
+* Sun Jun 17 2007 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.12
+
+* Sun Jun 3 2007 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.11.4
+
+* Mon May 14 2007 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.11.3
+
+* Mon May 14 2007 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.11.2
+
+* Mon May 14 2007 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.11.1
+
+* Mon May 14 2007 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.11
+
+* Thu May 10 2007 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.10.2
+
+* Tue May 09 2007 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.10.1
+
+* Tue May 08 2007 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.10
+
+* Sun Apr 15 2007 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.9
+
+* Tue Apr 03 2007 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.8.2
+
+* Sun Apr 01 2007 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.8.1
+
+* Sun Mar 25 2007 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.8
+
+* Wed Jan 26 2007 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.7
+
+* Wed Jan 22 2007 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.6
+
+* Wed Jan 07 2007 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.5
+
+* Wed Jan 02 2007 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.4
+
+* Wed Oct 15 2006 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.3
+
+* Wed Sep 03 2006 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.2
+
+* Wed Jul 09 2006 Willy Tarreau <w@1wt.eu>
+- updated to 1.3.1
+
+* Wed May 21 2006 Willy Tarreau <willy@w.ods.org>
+- updated to 1.2.14
+
+* Wed May 01 2006 Willy Tarreau <willy@w.ods.org>
+- updated to 1.2.13
+
+* Wed Apr 15 2006 Willy Tarreau <willy@w.ods.org>
+- updated to 1.2.12
+
+* Wed Mar 30 2006 Willy Tarreau <willy@w.ods.org>
+- updated to 1.2.11.1
+
+* Wed Mar 19 2006 Willy Tarreau <willy@w.ods.org>
+- updated to 1.2.10
+
+* Wed Mar 15 2006 Willy Tarreau <willy@w.ods.org>
+- updated to 1.2.9
+
+* Sat Jan 22 2005 Willy Tarreau <willy@w.ods.org>
+- updated to 1.2.3 (1.1.30)
+
+* Sun Nov 14 2004 Willy Tarreau <w@w.ods.org>
+- updated to 1.1.29
+- fixed path to config and init files
+- statically linked PCRE to increase portability to non-pcre systems
+
+* Sun Jun 6 2004 Willy Tarreau <willy@w.ods.org>
+- updated to 1.1.28
+- added config check support to the init script
+
+* Tue Oct 28 2003 Simon Matter <simon.matter@invoca.ch>
+- updated to 1.1.27
+- added pid support to the init script
+
+* Wed Oct 22 2003 Simon Matter <simon.matter@invoca.ch>
+- updated to 1.1.26
+
+* Thu Oct 16 2003 Simon Matter <simon.matter@invoca.ch>
+- initial build
--- /dev/null
+" Vim syntax file
+" Language: HAProxy
+" Maintainer: Bruno Michel <brmichel@free.fr>
+" Last Change: Mar 30, 2007
+" Version: 0.3
+" URL: http://haproxy.1wt.eu/
+" URL: http://vim.sourceforge.net/scripts/script.php?script_id=1845
+
+" It is suggested to add the following line to $HOME/.vimrc:
+" au BufRead,BufNewFile haproxy* set ft=haproxy
+
+" For version 5.x: Clear all syntax items
+" For version 6.x: Quit when a syntax file was already loaded
+if version < 600
+ syntax clear
+elseif exists("b:current_syntax")
+ finish
+endif
+
+if version >= 600
+ setlocal iskeyword=_,-,a-z,A-Z,48-57
+else
+ set iskeyword=_,-,a-z,A-Z,48-57
+endif
+
+
+" Escaped chars
+syn match hapEscape +\\\(\\\| \|n\|r\|t\|#\|x\x\x\)+
+
+" Comments
+syn match hapComment /#.*$/ contains=hapTodo
+syn keyword hapTodo contained TODO FIXME XXX
+syn case ignore
+
+" Sections
+syn match hapSection /^\s*\(global\|defaults\)/
+syn match hapSection /^\s*\(listen\|frontend\|backend\|ruleset\)/ skipwhite nextgroup=hapSectLabel
+syn match hapSectLabel /\S\+/ skipwhite nextgroup=hapIp1 contained
+syn match hapIp1 /\(\d\{1,3}\.\d\{1,3}\.\d\{1,3}\.\d\{1,3}\)\?:\d\{1,5}/ nextgroup=hapIp2 contained
+syn match hapIp2 /,\(\d\{1,3}\.\d\{1,3}\.\d\{1,3}\.\d\{1,3}\)\?:\d\{1,5}/hs=s+1 nextgroup=hapIp2 contained
+
+" Parameters
+syn keyword hapParam chroot cliexp clitimeout contimeout
+syn keyword hapParam daemon debug disabled
+syn keyword hapParam enabled
+syn keyword hapParam fullconn
+syn keyword hapParam gid grace group
+syn keyword hapParam maxconn monitor-uri
+syn keyword hapParam nbproc noepoll nopoll
+syn keyword hapParam pidfile
+syn keyword hapParam quiet
+syn keyword hapParam redispatch retries
+syn keyword hapParam reqallow reqdel reqdeny reqpass reqtarpit skipwhite nextgroup=hapRegexp
+syn keyword hapParam reqiallow reqidel reqideny reqipass reqitarpit skipwhite nextgroup=hapRegexp
+syn keyword hapParam rspdel rspdeny skipwhite nextgroup=hapRegexp
+syn keyword hapParam rspidel rspideny skipwhite nextgroup=hapRegexp
+syn keyword hapParam reqsetbe reqisetbe skipwhite nextgroup=hapRegexp2
+syn keyword hapParam reqadd reqiadd rspadd rspiadd
+syn keyword hapParam server source srvexp srvtimeout
+syn keyword hapParam uid ulimit-n user
+syn keyword hapParam reqrep reqirep rsprep rspirep skipwhite nextgroup=hapRegexp
+syn keyword hapParam errorloc errorloc302 errorloc303 skipwhite nextgroup=hapStatus
+syn keyword hapParam default_backend skipwhite nextgroup=hapSectLabel
+syn keyword hapParam appsession skipwhite nextgroup=hapAppSess
+syn keyword hapParam bind skipwhite nextgroup=hapIp1
+syn keyword hapParam balance skipwhite nextgroup=hapBalance
+syn keyword hapParam cookie skipwhite nextgroup=hapCookieNam
+syn keyword hapParam capture skipwhite nextgroup=hapCapture
+syn keyword hapParam dispatch skipwhite nextgroup=hapIpPort
+syn keyword hapParam source skipwhite nextgroup=hapIpPort
+syn keyword hapParam mode skipwhite nextgroup=hapMode
+syn keyword hapParam monitor-net skipwhite nextgroup=hapIPv4Mask
+syn keyword hapParam option skipwhite nextgroup=hapOption
+syn keyword hapParam stats skipwhite nextgroup=hapStats
+syn keyword hapParam server skipwhite nextgroup=hapServerN
+syn keyword hapParam source skipwhite nextgroup=hapServerEOL
+syn keyword hapParam log skipwhite nextgroup=hapGLog,hapLogIp
+
+" Options and additional parameters
+syn keyword hapAppSess contained len timeout
+syn keyword hapBalance contained roundrobin source
+syn keyword hapLen contained len
+syn keyword hapGLog contained global
+syn keyword hapMode contained http tcp health
+syn keyword hapOption contained abortonclose allbackups checkcache clitcpka dontlognull forceclose forwardfor
+syn keyword hapOption contained httpchk httpclose httplog keepalive logasap persist srvtcpka ssl-hello-chk
+syn keyword hapOption contained tcplog tcpka tcpsplice
+syn keyword hapOption contained except skipwhite nextgroup=hapIPv4Mask
+syn keyword hapStats contained uri realm auth scope enable
+syn keyword hapLogFac contained kern user mail daemon auth syslog lpr news nextgroup=hapLogLvl skipwhite
+syn keyword hapLogFac contained uucp cron auth2 ftp ntp audit alert cron2 nextgroup=hapLogLvl skipwhite
+syn keyword hapLogFac contained local0 local1 local2 local3 local4 local5 local6 local7 nextgroup=hapLogLvl skipwhite
+syn keyword hapLogLvl contained emerg alert crit err warning notice info debug
+syn keyword hapCookieKey contained rewrite insert nocache postonly indirect prefix nextgroup=hapCookieKey skipwhite
+syn keyword hapCapture contained cookie nextgroup=hapNameLen skipwhite
+syn keyword hapCapture contained request response nextgroup=hapHeader skipwhite
+syn keyword hapHeader contained header nextgroup=hapNameLen skipwhite
+syn keyword hapSrvKey contained backup cookie check inter rise fall port source minconn maxconn weight usesrc
+syn match hapStatus contained /\d\{3}/
+syn match hapIPv4Mask contained /\d\{1,3}\.\d\{1,3}\.\d\{1,3}\.\d\{1,3}\(\/\d\{1,2}\)\?/
+syn match hapLogIp contained /\d\{1,3}\.\d\{1,3}\.\d\{1,3}\.\d\{1,3}/ nextgroup=hapLogFac skipwhite
+syn match hapIpPort contained /\d\{1,3}\.\d\{1,3}\.\d\{1,3}\.\d\{1,3}:\d\{1,5}/
+syn match hapServerAd contained /\d\{1,3}\.\d\{1,3}\.\d\{1,3}\.\d\{1,3}\(:[+-]\?\d\{1,5}\)\?/ nextgroup=hapSrvEOL skipwhite
+syn match hapNameLen contained /\S\+/ nextgroup=hapLen skipwhite
+syn match hapCookieNam contained /\S\+/ nextgroup=hapCookieKey skipwhite
+syn match hapServerN contained /\S\+/ nextgroup=hapServerAd skipwhite
+syn region hapSrvEOL contained start=/\S/ end=/$/ contains=hapSrvKey
+syn region hapRegexp contained start=/\S/ end=/\(\s\|$\)/ skip=/\\ / nextgroup=hapRegRepl skipwhite
+syn region hapRegRepl contained start=/\S/ end=/$/ contains=hapComment,hapEscape,hapBackRef
+syn region hapRegexp2 contained start=/\S/ end=/\(\s\|$\)/ skip=/\\ / nextgroup=hapSectLabel skipwhite
+syn match hapBackRef contained /\\\d/
+
+
+" Transparent is a Vim keyword, so we need a regexp to match it
+syn match hapParam +transparent+
+syn match hapOption +transparent+ contained
+
+
+" Define the default highlighting.
+" For version 5.7 and earlier: only when not done already
+" For version 5.8 and later: only when an item doesn't have highlighting yet
+if version < 508
+ command -nargs=+ HiLink hi link <args>
+else
+ command -nargs=+ HiLink hi def link <args>
+endif
+
+HiLink hapEscape SpecialChar
+HiLink hapBackRef Special
+HiLink hapComment Comment
+HiLink hapTodo Todo
+HiLink hapSection Constant
+HiLink hapSectLabel Identifier
+HiLink hapParam Keyword
+
+HiLink hapRegexp String
+HiLink hapRegexp2 hapRegexp
+HiLink hapIp1 Number
+HiLink hapIp2 hapIp1
+HiLink hapLogIp hapIp1
+HiLink hapIpPort hapIp1
+HiLink hapIPv4Mask hapIp1
+HiLink hapServerAd hapIp1
+HiLink hapStatus Number
+
+HiLink hapOption Operator
+HiLink hapAppSess hapOption
+HiLink hapBalance hapOption
+HiLink hapCapture hapOption
+HiLink hapCookieKey hapOption
+HiLink hapHeader hapOption
+HiLink hapGLog hapOption
+HiLink hapLogFac hapOption
+HiLink hapLogLvl hapOption
+HiLink hapMode hapOption
+HiLink hapStats hapOption
+HiLink hapLen hapOption
+HiLink hapSrvKey hapOption
+
+
+delcommand HiLink
+
+let b:current_syntax = "haproxy"
+" vim: ts=8
--- /dev/null
+#!/bin/sh
+#
+# config.rc sample with defaults :
+# service haproxy
+# config /etc/haproxy/haproxy.cfg
+# maxconn 1024
+#
+config="/etc/haproxy/haproxy.cfg"
+maxconn=1024
+
+bin=/usr/sbin/haproxy
+cmdline='$bin -D'
+
+. $ROOT/sbin/init.d/default
+
+if [ -e "$config" ]; then
+ maintfd=`grep '^\([^#]*\)\(listen\|server\)' $config|wc -l`
+else
+ maintfd=0
+fi
+
+maxfd=$(($maxconn*2 + $maintfd))
+if [ $maxfd -lt 100 ]; then
+ maxfd=100;
+fi
+cmdline="$cmdline -n $maxconn -f $config"
+ulimit -n $maxfd
+
+# to get a core when needed, uncomment the following :
+# cd /var/tmp
+# ulimit -c unlimited
+
+# soft stop
+do_stop() {
+ pids=`pidof -o $$ -- $PNAME`
+ if [ ! -z "$pids" ]; then
+ echo "Asking $PNAME to terminate gracefully..."
+ kill -USR1 $pids
+ echo "(use kill $pids to stop immediately)."
+ fi
+}
+
+# dump status
+do_status() {
+ pids=`pidof -o $$ -- $PNAME`
+ if [ ! -z "$pids" ]; then
+ echo "Dumping $PNAME status in logs."
+ kill -HUP $pids
+ else
+ echo "Process $PNAME is not running."
+ fi
+}
+
+main "$@"
+
--- /dev/null
+#
+# demo config for Proxy mode
+#
+
+global
+ maxconn 20000
+ ulimit-n 16384
+ log 127.0.0.1 local0
+ uid 200
+ gid 200
+ chroot /var/empty
+ nbproc 4
+ daemon
+
+frontend test-proxy
+ bind 192.168.200.10:8080
+ mode http
+ log global
+ option httplog
+ option dontlognull
+ option nolinger
+ option http_proxy
+ maxconn 8000
+ timeout client 30s
+
+ # layer3: Valid users
+ acl allow_host src 192.168.200.150/32
+ http-request deny if !allow_host
+
+ # layer7: prevent private network relaying
+ acl forbidden_dst url_ip 192.168.0.0/24
+ acl forbidden_dst url_ip 172.16.0.0/12
+ acl forbidden_dst url_ip 10.0.0.0/8
+ http-request deny if forbidden_dst
+
+ default_backend test-proxy-srv
+
+
+backend test-proxy-srv
+ mode http
+ timeout connect 5s
+ timeout server 5s
+ retries 2
+ option nolinger
+ option http_proxy
+
+ # layer7: Only GET method is valid
+ acl valid_method method GET
+ http-request deny if !valid_method
+
+ # layer7: protect bad reply
+ http-response deny if { res.hdr(content-type) audio/mp3 }
--- /dev/null
+Reloading HAProxy without impacting server states
+=================================================
+
+To fully understand the information below, please consult
+doc/configuration.txt for an explanation of how each HAProxy directive works.
+
+In short, we update HAProxy's configuration to tell it where to
+retrieve the last known trusted server states.
+Then, before reloading HAProxy, we simply dump the server states from the
+running process into the locations pointed to by the configuration.
+And voilà :)
+
+
+Using one file for all backends
+-------------------------------
+
+HAProxy configuration
+*********************
+
+ global
+ [...]
+ stats socket /var/run/haproxy/socket
+ server-state-file global
+ server-state-base /var/state/haproxy/
+
+ defaults
+ [...]
+ load-server-state-from-file global
+
+HAProxy init script
+*******************
+
+Run the following command BEFORE reloading:
+
+ socat /var/run/haproxy/socket - <<< "show servers state" > /var/state/haproxy/global
+
+
+Using one state file per backend
+--------------------------------
+
+HAProxy configuration
+*********************
+
+ global
+ [...]
+ stats socket /var/run/haproxy/socket
+ server-state-base /var/state/haproxy/
+
+ defaults
+ [...]
+ load-server-state-from-file local
+
+HAProxy init script
+*******************
+
+Run the following command BEFORE reloading:
+
+ for b in $(socat /var/run/haproxy/socket - <<< "show backend" | fgrep -v '#')
+ do
+ socat /var/run/haproxy/socket - <<< "show servers state $b" > /var/state/haproxy/$b
+ done
+
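The dump-then-reload sequence above is typically wired into the init script's reload action. Below is a minimal sketch of such a pre-reload hook, assuming socat and the socket/state paths from the example configuration; the `query` and `dump_states` helper names are hypothetical, not part of any shipped script:

```shell
#!/bin/sh
# Hypothetical pre-reload hook: dump one state file per backend so the
# new HAProxy process can load them via load-server-state-from-file.
# SOCKET and STATEDIR are assumptions matching the example configuration.
SOCKET=${SOCKET:-/var/run/haproxy/socket}
STATEDIR=${STATEDIR:-/var/state/haproxy}

# query CMD: send CMD to the stats socket and print the reply
query() {
    echo "$1" | socat "unix-connect:$SOCKET" stdio
}

# dump_states: write one state file per backend into $STATEDIR
dump_states() {
    for b in $(query "show backend" | grep -v '^#'); do
        query "show servers state $b" > "$STATEDIR/$b"
    done
}
```

Call `dump_states` immediately before sending the reload signal so the new process finds fresh state files.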
--- /dev/null
+# This configuration is a simplified example of how to use SSL on the
+# frontend and backends, with additional certificates loaded from a
+# directory for SNI-capable clients.
+
+global
+ maxconn 100
+
+defaults
+ mode http
+ timeout connect 5s
+ timeout client 5s
+ timeout server 5s
+
+frontend myfrontend
+ # primary cert is /etc/cert/server.pem
+ # /etc/cert/certdir/ contains additional certificates for SNI clients
+ bind :443 ssl crt /etc/cert/server.pem crt /etc/cert/certdir/
+ bind :80
+ default_backend mybackend
+
+backend mybackend
+ # an HTTP backend
+ server s3 10.0.0.3:80
+ # an HTTPS backend
+ server s4 10.0.0.3:443 ssl verify none
+
--- /dev/null
+#!/bin/bash
+
+## contrib by prizee.com
+
+socket='/var/run/haproxy.stat'
+
+if ! type socat >/dev/null 2>&1 ; then
+ echo "can't find socat in PATH" 1>&2
+ exit 1
+fi
+
+printUsage ()
+{
+ echo -e "Usage : $(basename $0) [options] -s section
+--section -s section\t: section to use (--list format)
+Options :
+--socket -S [socket]\t: socket to use (default: /var/run/haproxy.stat)
+--list -l\t\t: print available sections
+--help -h\t\t: print this message"
+}
+
+getRawStat ()
+{
+ if [ ! -S $socket ] ; then
+ echo "$socket socket unavailable" 1>&2
+ exit 1
+ fi
+
+ if ! printf "show stat\n" | socat unix-connect:${socket} stdio | grep -v "^#" ; then
+ echo "cannot read $socket" 1>&2
+ exit 1
+ fi
+}
+
+getStat ()
+{
+ stats=$(getRawStat | grep $1 | awk -F "," '{print $5" "$8}')
+ export cumul=$(echo $stats | cut -d " " -f2)
+ export current=$(echo $stats | cut -d " " -f1)
+}
+
+showList ()
+{
+ getRawStat | awk -F "," '{print $1","$2}'
+}
+
+set -- `getopt -u -l socket:,section:,list,help -- s:S:lh "$@"`
+
+while true ; do
+ case $1 in
+ --socket|-S) socket=$2 ; shift 2 ;;
+ --section|-s) section=$2 ; shift 2 ;;
+ --help|-h) printUsage ; exit 0 ;;
+ --list|-l) showList ; exit 0 ;;
+ --) break ;;
+ esac
+done
+
+if [ "$section" = "" ] ; then
+ echo "section not specified, run '$(basename $0) --list' to know available sections" 1>&2
+ printUsage
+ exit 1
+fi
+
+cpt=0
+totalrate=0
+while true ; do
+ getStat $section
+ if [ "$cpt" -gt "0" ] ; then
+ sessionrate=$(($cumul-$oldcumul))
+ totalrate=$(($totalrate+$sessionrate))
+ averagerate=$(($totalrate/$cpt))
+ printf "$sessionrate sessions/s (avg: $averagerate )\t$current concurrent sessions\n"
+ fi
+ oldcumul=$cumul
+ sleep 1
+ cpt=$(($cpt+1))
+done
--- /dev/null
+#
+# This is an example of how to configure HAProxy to be used as a 'full transparent proxy' for a single backend server.
+#
+# Note that to actually make this work, extra firewall/NAT rules are required.
+# HAProxy also needs to be compiled with support for this; since HAProxy 1.5-dev19 you can check whether this is the case with "haproxy -vv".
+#
+
+global
+defaults
+ timeout client 30s
+ timeout server 30s
+ timeout connect 30s
+
+frontend MyFrontend
+ bind 192.168.1.22:80
+ default_backend TransparentBack_http
+
+backend TransparentBack_http
+ mode http
+ source 0.0.0.0 usesrc client
+ server MyWebServer 192.168.0.40:80
+
+#
+# To create the NAT rules, perform the following:
+#
+# ### (FreeBSD 8) ###
+# --- Step 1 ---
+# ipfw is needed to get 'reply traffic' back to the HAProxy process; this can be achieved by configuring a rule like this:
+# fwd localhost tcp from 192.168.0.40 80 to any in recv em0
+#
+# The following would be even better, but it did not seem to work on the pfSense 2.1 distribution of FreeBSD 8.3:
+# fwd 127.0.0.1:80 tcp from any 80 to any in recv ${outside_iface} uid ${proxy_uid}
+#
+# If only 'pf' is currently used, some additional steps are needed to load and configure ipfw:
+# You need to configure this to always run on startup:
+#
+# /sbin/kldload ipfw
+# /sbin/sysctl net.inet.ip.pfil.inbound="pf" net.inet6.ip6.pfil.inbound="pf" net.inet.ip.pfil.outbound="pf" net.inet6.ip6.pfil.outbound="pf"
+# /sbin/sysctl net.link.ether.ipfw=1
+# ipfw add 10 fwd localhost tcp from 192.168.0.40 80 to any in recv em0
+#
+# the above does the following:
+# - load the ipfw kernel module
+# - set pf as the outer firewall to keep control of routing packets, for example to route them to a non-default gateway
+# - enable ipfw
+# - set a rule that catches reply traffic on em0 coming from the webserver
+#
+# --- Step 2 ---
+# To also make the client connection transparent, it is possible to redirect incoming requests to HAProxy with a pf rule:
+# rdr on em1 proto tcp from any to 192.168.0.40 port 80 -> 192.168.1.22
+# Here em1 is the interface that faces the clients; traffic that is originally sent straight to the webserver is redirected to HAProxy.
+#
+# ### (FreeBSD 9) (OpenBSD 4.4) ###
+# pf supports "divert-reply", which is probably better suited for the job above than ipfw.
+#
--- /dev/null
+/*
+ * include/common/accept4.h
+ * Definition of the accept4 system call for older Linux libc.
+ *
+ * Copyright 2000-2012 Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ *
+ */
+
+#ifndef _COMMON_ACCEPT4_H
+#define _COMMON_ACCEPT4_H
+
+#if defined (__linux__) && defined(USE_ACCEPT4)
+
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <sys/syscall.h>
+#include <errno.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <common/syscall.h>
+
+/* On recent Linux kernels, the accept4() syscall may be used to avoid an fcntl()
+ * call to set O_NONBLOCK on the resulting socket. It was introduced in Linux
+ * 2.6.28 and is not present in older libcs.
+ */
+#ifndef SOCK_NONBLOCK
+#define SOCK_NONBLOCK O_NONBLOCK
+#endif
+
+#if defined(USE_MY_ACCEPT4) || (!defined(SYS_ACCEPT4) && !defined(__NR_accept4))
+#if defined(CONFIG_HAP_LINUX_VSYSCALL) && defined(__linux__) && defined(__i386__)
+/* The syscall is redefined somewhere else */
+extern int accept4(int sockfd, struct sockaddr *addr, socklen_t *addrlen, int flags);
+#elif ACCEPT4_USE_SOCKETCALL
+static inline _syscall2(int, socketcall, int, call, unsigned long *, args);
+static int accept4(int sockfd, struct sockaddr *addr, socklen_t *addrlen, int flags)
+{
+ unsigned long args[4];
+
+ args[0] = (unsigned long)sockfd;
+ args[1] = (unsigned long)addr;
+ args[2] = (unsigned long)addrlen;
+ args[3] = (unsigned long)flags;
+ return socketcall(SYS_ACCEPT4, args);
+}
+#else
+static inline _syscall4(int, accept4, int, sockfd, struct sockaddr *, addr, socklen_t *, addrlen, int, flags);
+#endif /* VSYSCALL etc... */
+#endif /* USE_MY_ACCEPT4 */
+#endif /* __linux__ && USE_ACCEPT4 */
+#endif /* _COMMON_ACCEPT4_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/common/base64.h
+ * Ascii to Base64 conversion as described in RFC1421.
+ *
+ * Copyright 2006-2010 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#ifndef _COMMON_BASE64_H
+#define _COMMON_BASE64_H
+
+#include <common/config.h>
+
+int a2base64(char *in, int ilen, char *out, int olen);
+int base64dec(const char *in, size_t ilen, char *out, size_t olen);
+const char *s30tob64(int in, char *out);
+int b64tos30(const char *in);
+
+extern const char base64tab[];
+
+#endif /* _COMMON_BASE64_H */
--- /dev/null
+/*
+ * include/common/buffer.h
+ * Buffer management definitions, macros and inline functions.
+ *
+ * Copyright (C) 2000-2012 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _COMMON_BUFFER_H
+#define _COMMON_BUFFER_H
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <common/chunk.h>
+#include <common/config.h>
+#include <common/memory.h>
+
+
+struct buffer {
+ char *p; /* buffer's start pointer, separates in and out data */
+ unsigned int size; /* buffer size in bytes */
+ unsigned int i; /* number of input bytes pending for analysis in the buffer */
+ unsigned int o; /* number of out bytes the sender can consume from this buffer */
+ char data[0]; /* <size> bytes */
+};
+
+extern struct pool_head *pool2_buffer;
+extern struct buffer buf_empty;
+extern struct buffer buf_wanted;
+
+int init_buffer();
+int buffer_replace2(struct buffer *b, char *pos, char *end, const char *str, int len);
+int buffer_insert_line2(struct buffer *b, char *pos, const char *str, int len);
+void buffer_dump(FILE *o, struct buffer *b, int from, int to);
+void buffer_slow_realign(struct buffer *buf);
+void buffer_bounce_realign(struct buffer *buf);
+
+/*****************************************************************/
+/* These functions are used to compute various buffer area sizes */
+/*****************************************************************/
+
+/* Returns an absolute pointer for a position relative to the current buffer's
+ * pointer. It is written so that it is optimal when <ofs> is a const. It is
+ * written as a macro instead of an inline function so that the compiler knows
+ * when it can optimize out the sign test on <ofs> when passed an unsigned int.
+ * Note that callers MUST cast <ofs> to int if they expect negative values.
+ */
+#define b_ptr(b, ofs) \
+ ({ \
+ char *__ret = (b)->p + (ofs); \
+ if ((ofs) > 0 && __ret >= (b)->data + (b)->size) \
+ __ret -= (b)->size; \
+ else if ((ofs) < 0 && __ret < (b)->data) \
+ __ret += (b)->size; \
+ __ret; \
+ })
+
+/* Advances the buffer by <adv> bytes, which means that the buffer
+ * pointer advances, and that as many bytes from in are transferred
+ * to out. The caller is responsible for ensuring that adv is always
+ * smaller than or equal to b->i.
+ */
+static inline void b_adv(struct buffer *b, unsigned int adv)
+{
+ b->i -= adv;
+ b->o += adv;
+ b->p = b_ptr(b, adv);
+}
+
+/* Rewinds the buffer by <adv> bytes, which means that the buffer pointer goes
+ * backwards, and that as many bytes from out are moved to in. The caller is
+ * responsible for ensuring that adv is always smaller than or equal to b->o.
+ */
+static inline void b_rew(struct buffer *b, unsigned int adv)
+{
+ b->i += adv;
+ b->o -= adv;
+ b->p = b_ptr(b, (int)-adv);
+}
+
+/* Returns the start of the input data in a buffer */
+static inline char *bi_ptr(const struct buffer *b)
+{
+ return b->p;
+}
+
+/* Returns the end of the input data in a buffer (pointer to next
+ * insertion point).
+ */
+static inline char *bi_end(const struct buffer *b)
+{
+ char *ret = b->p + b->i;
+
+ if (ret >= b->data + b->size)
+ ret -= b->size;
+ return ret;
+}
+
+/* Returns the amount of input data that can contiguously be read at once */
+static inline int bi_contig_data(const struct buffer *b)
+{
+ int data = b->data + b->size - b->p;
+
+ if (data > b->i)
+ data = b->i;
+ return data;
+}
+
+/* Returns the start of the output data in a buffer */
+static inline char *bo_ptr(const struct buffer *b)
+{
+ char *ret = b->p - b->o;
+
+ if (ret < b->data)
+ ret += b->size;
+ return ret;
+}
+
+/* Returns the end of the output data in a buffer */
+static inline char *bo_end(const struct buffer *b)
+{
+ return b->p;
+}
+
+/* Returns the amount of output data that can contiguously be read at once */
+static inline int bo_contig_data(const struct buffer *b)
+{
+ char *beg = b->p - b->o;
+
+ if (beg < b->data)
+ return b->data - beg;
+ return b->o;
+}
+
+/* Return the buffer's length in bytes by summing the input and the output */
+static inline int buffer_len(const struct buffer *buf)
+{
+ return buf->i + buf->o;
+}
+
+/* Return non-zero only if the buffer is not empty */
+static inline int buffer_not_empty(const struct buffer *buf)
+{
+ return buf->i | buf->o;
+}
+
+/* Return non-zero only if the buffer is empty */
+static inline int buffer_empty(const struct buffer *buf)
+{
+ return !buffer_not_empty(buf);
+}
+
+/* Returns non-zero if the buffer's INPUT is considered full, which means that
+ * it holds at least as much INPUT data as (size - reserve). This also means
+ * that data that are scheduled for output are considered as potential free
+ * space, and that the reserved space is always considered as not usable. This
+ * information alone cannot be used as a general purpose free space indicator.
+ * However it accurately indicates that too much data was fed into the buffer
+ * for an analyzer for instance. See the channel_may_recv() function for a more
+ * generic function taking everything into account.
+ */
+static inline int buffer_full(const struct buffer *b, unsigned int reserve)
+{
+ if (b == &buf_empty)
+ return 0;
+
+ return (b->i + reserve >= b->size);
+}
+
+/* Normalizes a pointer after a subtract */
+static inline char *buffer_wrap_sub(const struct buffer *buf, char *ptr)
+{
+ if (ptr < buf->data)
+ ptr += buf->size;
+ return ptr;
+}
+
+/* Normalizes a pointer after an addition */
+static inline char *buffer_wrap_add(const struct buffer *buf, char *ptr)
+{
+ if (ptr - buf->size >= buf->data)
+ ptr -= buf->size;
+ return ptr;
+}
+
+/* Return the maximum amount of bytes that can be written into the buffer,
+ * including reserved space which may be overwritten.
+ */
+static inline int buffer_total_space(const struct buffer *buf)
+{
+ return buf->size - buffer_len(buf);
+}
+
+/* Returns the number of contiguous bytes between <start> and <start>+<count>,
+ * and enforces a limit on buf->data + buf->size. <start> must be within the
+ * buffer.
+ */
+static inline int buffer_contig_area(const struct buffer *buf, const char *start, int count)
+{
+ if (count > buf->data - start + buf->size)
+ count = buf->data - start + buf->size;
+ return count;
+}
+
+/* Return the amount of bytes that can be written into the buffer at once,
+ * including reserved space which may be overwritten.
+ */
+static inline int buffer_contig_space(const struct buffer *buf)
+{
+ const char *left, *right;
+
+ if (buf->data + buf->o <= buf->p)
+ right = buf->data + buf->size;
+ else
+ right = buf->p + buf->size - buf->o;
+
+ left = buffer_wrap_add(buf, buf->p + buf->i);
+ return right - left;
+}
+
+/* Returns the amount of bytes that can be written starting from <p> into the
+ * input buffer at once, including reserved space which may be overwritten.
+ * This is used by Lua to insert data in the input side just before the other
+ * data using buffer_replace(). The goal is to transfer these new data into the
+ * output buffer.
+ */
+static inline int bi_space_for_replace(const struct buffer *buf)
+{
+ const char *end;
+
+ /* If the input side data overflows, we cannot insert data contiguously. */
+ if (buf->p + buf->i >= buf->data + buf->size)
+ return 0;
+
+ /* Check the last byte used in the buffer; it may be a byte of the output
+ * side if the buffer wraps, or it is the end of the buffer.
+ */
+ end = buffer_wrap_sub(buf, buf->p - buf->o);
+ if (end <= buf->p)
+ end = buf->data + buf->size;
+
+ /* Compute the amount of bytes which can be written. */
+ return end - (buf->p + buf->i);
+}
+
+
+/* Normalizes a pointer which is supposed to be relative to the beginning of a
+ * buffer, so that wrapping is correctly handled. The intent is to use this
+ * when increasing a pointer. Note that the wrapping test is only performed
+ * once, so the original pointer must be between ->data-size and ->data+2*size-1,
+ * otherwise an invalid pointer might be returned.
+ */
+static inline const char *buffer_pointer(const struct buffer *buf, const char *ptr)
+{
+ if (ptr < buf->data)
+ ptr += buf->size;
+ else if (ptr - buf->size >= buf->data)
+ ptr -= buf->size;
+ return ptr;
+}
+
+/* Returns the distance between two pointers, taking into account the ability
+ * to wrap around the buffer's end.
+ */
+static inline int buffer_count(const struct buffer *buf, const char *from, const char *to)
+{
+ int count = to - from;
+
+ count += count < 0 ? buf->size : 0;
+ return count;
+}
+
+/* returns the amount of pending bytes in the buffer. It is the amount of bytes
+ * that is not scheduled to be sent.
+ */
+static inline int buffer_pending(const struct buffer *buf)
+{
+ return buf->i;
+}
+
+/* Returns the size of the working area which the caller knows ends at <end>.
+ * If <end> equals buf->r (modulo size), then it means that the free area which
+ * follows is part of the working area. Otherwise, the working area stops at
+ * <end>. It always starts at buf->p. The work area includes the
+ * reserved area.
+ */
+static inline int buffer_work_area(const struct buffer *buf, const char *end)
+{
+ end = buffer_pointer(buf, end);
+ if (end == buffer_wrap_add(buf, buf->p + buf->i))
+ /* pointer exactly at end, let's push forward */
+ end = buffer_wrap_sub(buf, buf->p - buf->o);
+ return buffer_count(buf, buf->p, end);
+}
+
+/* Return 1 if the buffer has less than 1/4 of its capacity free, otherwise 0 */
+static inline int buffer_almost_full(const struct buffer *buf)
+{
+ if (buf == &buf_empty)
+ return 0;
+
+ if (!buf->size || buffer_total_space(buf) < buf->size / 4)
+ return 1;
+ return 0;
+}
+
+/* Cut the first <n> pending bytes in a contiguous buffer. It is illegal to
+ * call this function with remaining data waiting to be sent (o > 0). The
+ * caller must ensure that <n> is smaller than the actual buffer's length.
+ * This is mainly used to remove empty lines at the beginning of a request
+ * or a response.
+ */
+static inline void bi_fast_delete(struct buffer *buf, int n)
+{
+ buf->i -= n;
+ buf->p += n;
+}
+
+/*
+ * Tries to realign the given buffer, and returns how many bytes can be written
+ * there at once without overwriting anything.
+ */
+static inline int buffer_realign(struct buffer *buf)
+{
+ if (!(buf->i | buf->o)) {
+ /* let's realign the buffer to optimize I/O */
+ buf->p = buf->data;
+ }
+ return buffer_contig_space(buf);
+}
+
+/* Schedule all remaining buffer data to be sent. ->o is not touched if it
+ * already covers those data. That permits doing a flush even after a forward,
+ * although not recommended.
+ */
+static inline void buffer_flush(struct buffer *buf)
+{
+ buf->p = buffer_wrap_add(buf, buf->p + buf->i);
+ buf->o += buf->i;
+ buf->i = 0;
+}
+
+/* This function writes the string <str> at position <pos> which must be in
+ * buffer <b>, and moves <end> just after the end of <str>. <b>'s parameters
+ * (l, r, lr) are updated to be valid after the shift. the shift value
+ * (positive or negative) is returned. If there's no space left, the move is
+ * not done. The function does not adjust ->o because it does not make sense
+ * to use it on data scheduled to be sent.
+ */
+static inline int buffer_replace(struct buffer *b, char *pos, char *end, const char *str)
+{
+ return buffer_replace2(b, pos, end, str, strlen(str));
+}
+
+/* Tries to write char <c> into output data at buffer <b>. Supports wrapping.
+ * Data are truncated if buffer is full.
+ */
+static inline void bo_putchr(struct buffer *b, char c)
+{
+ if (buffer_len(b) == b->size)
+ return;
+ *b->p = c;
+ b->p = b_ptr(b, 1);
+ b->o++;
+}
+
+/* Tries to copy block <blk> into output data at buffer <b>. Supports wrapping.
+ * Data are truncated if buffer is too short. It returns the number of bytes
+ * copied.
+ */
+static inline int bo_putblk(struct buffer *b, const char *blk, int len)
+{
+ int cur_len = buffer_len(b);
+ int half;
+
+ if (len > b->size - cur_len)
+ len = (b->size - cur_len);
+ if (!len)
+ return 0;
+
+ half = buffer_contig_space(b);
+ if (half > len)
+ half = len;
+
+ memcpy(b->p, blk, half);
+ b->p = b_ptr(b, half);
+ if (len > half) {
+ memcpy(b->p, blk + half, len - half);
+ b->p = b_ptr(b, len - half);
+ }
+ b->o += len;
+ return len;
+}
+
+/* Tries to copy string <str> into output data at buffer <b>. Supports wrapping.
+ * Data are truncated if buffer is too short. It returns the number of bytes
+ * copied.
+ */
+static inline int bo_putstr(struct buffer *b, const char *str)
+{
+ return bo_putblk(b, str, strlen(str));
+}
+
+/* Tries to copy chunk <chk> into output data at buffer <b>. Supports wrapping.
+ * Data are truncated if buffer is too short. It returns the number of bytes
+ * copied.
+ */
+static inline int bo_putchk(struct buffer *b, const struct chunk *chk)
+{
+ return bo_putblk(b, chk->str, chk->len);
+}
+
+/* Resets a buffer. The size is not touched. */
+static inline void b_reset(struct buffer *buf)
+{
+ buf->o = 0;
+ buf->i = 0;
+ buf->p = buf->data;
+}
+
+/* Allocates a buffer and replaces *buf with this buffer. If no memory is
+ * available, &buf_wanted is used instead. No check is made on whether *buf
+ * already points to another buffer. The allocated buffer is returned, or
+ * NULL in case no memory is available.
+ */
+static inline struct buffer *b_alloc(struct buffer **buf)
+{
+ struct buffer *b;
+
+ *buf = &buf_wanted;
+ b = pool_alloc_dirty(pool2_buffer);
+ if (likely(b)) {
+ b->size = pool2_buffer->size - sizeof(struct buffer);
+ b_reset(b);
+ *buf = b;
+ }
+ return b;
+}
+
+/* Allocates a buffer and replaces *buf with this buffer. If no memory is
+ * available, &buf_wanted is used instead. No check is made on whether *buf
+ * already points to another buffer. The allocated buffer is returned, or
+ * NULL in case no memory is available. The difference with b_alloc() is that
+ * this function only picks from the pool and never calls malloc(), so it can
+ * fail even if some memory is available.
+ */
+static inline struct buffer *b_alloc_fast(struct buffer **buf)
+{
+ struct buffer *b;
+
+ *buf = &buf_wanted;
+ b = pool_get_first(pool2_buffer);
+ if (likely(b)) {
+ b->size = pool2_buffer->size - sizeof(struct buffer);
+ b_reset(b);
+ *buf = b;
+ }
+ return b;
+}
+
+/* Releases buffer *buf (no check of emptiness) */
+static inline void __b_drop(struct buffer **buf)
+{
+ pool_free2(pool2_buffer, *buf);
+}
+
+/* Releases buffer *buf if allocated. */
+static inline void b_drop(struct buffer **buf)
+{
+ if (!(*buf)->size)
+ return;
+ __b_drop(buf);
+}
+
+/* Releases buffer *buf if allocated, and replaces it with &buf_empty. */
+static inline void b_free(struct buffer **buf)
+{
+ b_drop(buf);
+ *buf = &buf_empty;
+}
+
+/* Ensures that <buf> is allocated. If an allocation is needed, it ensures that
+ * there are still at least <margin> buffers available in the pool after this
+ * allocation so that we don't leave the pool in a condition where a session or
+ * a response buffer could not be allocated anymore, resulting in a deadlock.
+ * This means that we sometimes need to try to allocate extra entries even if
+ * only one buffer is needed.
+ */
+static inline struct buffer *b_alloc_margin(struct buffer **buf, int margin)
+{
+ struct buffer *next;
+
+ if ((*buf)->size)
+ return *buf;
+
+ /* fast path */
+ if ((pool2_buffer->allocated - pool2_buffer->used) > margin)
+ return b_alloc_fast(buf);
+
+ next = pool_refill_alloc(pool2_buffer, margin);
+ if (!next)
+ return next;
+
+ next->size = pool2_buffer->size - sizeof(struct buffer);
+ b_reset(next);
+ *buf = next;
+ return next;
+}
+
+#endif /* _COMMON_BUFFER_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/common/cfgparse.h
+ * Configuration parsing functions.
+ *
+ * Copyright (C) 2000-2012 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _COMMON_CFGPARSE_H
+#define _COMMON_CFGPARSE_H
+
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/mini-clist.h>
+
+#include <proto/log.h>
+#include <proto/proxy.h>
+
+/* configuration sections */
+#define CFG_NONE 0
+#define CFG_GLOBAL 1
+#define CFG_LISTEN 2
+#define CFG_USERLIST 3
+#define CFG_PEERS 4
+
+struct cfg_keyword {
+ int section; /* section type for this keyword */
+ const char *kw; /* the keyword itself */
+ int (*parse)( /* 0=OK, <0=Alert, >0=Warning */
+ char **args, /* command line and arguments */
+ int section_type, /* current section CFG_{GLOBAL|LISTEN} */
+ struct proxy *curpx, /* current proxy (NULL in GLOBAL) */
+ struct proxy *defpx, /* default proxy (NULL in GLOBAL) */
+ const char *file, /* config file name */
+ int line, /* config file line number */
+ char **err); /* error or warning message output pointer */
+};
+
+/* A keyword list. It is a NULL-terminated array of keywords. It embeds a
+ * struct list in order to be linked to other lists, allowing it to easily
+ * be declared where it is needed, and linked without duplicating data nor
+ * allocating memory.
+ */
+struct cfg_kw_list {
+ struct list list;
+ struct cfg_keyword kw[VAR_ARRAY];
+};
+
+
+extern int cfg_maxpconn;
+extern int cfg_maxconn;
+
+int cfg_parse_global(const char *file, int linenum, char **args, int inv);
+int cfg_parse_listen(const char *file, int linenum, char **args, int inv);
+int readcfgfile(const char *file);
+void cfg_register_keywords(struct cfg_kw_list *kwl);
+void cfg_unregister_keywords(struct cfg_kw_list *kwl);
+void init_default_instance(void);
+int check_config_validity(void);
+int str2listener(char *str, struct proxy *curproxy, struct bind_conf *bind_conf, const char *file, int line, char **err);
+int cfg_register_section(char *section_name,
+ int (*section_parser)(const char *, int, char **, int));
+void cfg_unregister_sections(void);
+int warnif_misplaced_tcp_conn(struct proxy *proxy, const char *file, int line, const char *arg);
+int warnif_misplaced_tcp_cont(struct proxy *proxy, const char *file, int line, const char *arg);
+
+/*
+ * Sends a warning if proxy <proxy> does not have at least one of the
+ * capabilities in <cap>. An optional <hint> may be added at the end
+ * of the warning to help the user. Returns 1 if a warning was emitted
+ * or 0 if the condition is valid.
+ */
+static inline int warnifnotcap(struct proxy *proxy, int cap, const char *file, int line, const char *arg, const char *hint)
+{
+ char *msg;
+
+ switch (cap) {
+ case PR_CAP_BE: msg = "no backend"; break;
+ case PR_CAP_FE: msg = "no frontend"; break;
+ case PR_CAP_RS: msg = "no ruleset"; break;
+ case PR_CAP_BE|PR_CAP_FE: msg = "neither frontend nor backend"; break;
+ default: msg = "not enough"; break;
+ }
+
+ if (!(proxy->cap & cap)) {
+ Warning("parsing [%s:%d] : '%s' ignored because %s '%s' has %s capability.%s\n",
+ file, line, arg, proxy_type_str(proxy), proxy->id, msg, hint ? hint : "");
+ return 1;
+ }
+ return 0;
+}
+
+#endif /* _COMMON_CFGPARSE_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/common/chunk.h
+ * Chunk management definitions, macros and inline functions.
+ *
+ * Copyright (C) 2000-2012 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_CHUNK_H
+#define _TYPES_CHUNK_H
+
+#include <stdlib.h>
+#include <string.h>
+
+#include <common/config.h>
+
+
+/* describes a chunk of string */
+struct chunk {
+ char *str; /* beginning of the string itself. Might not be 0-terminated */
+ int size; /* total size of the buffer, 0 if the *str is read-only */
+ int len; /* current size of the string from first to last char. <0 = uninit. */
+};
+
+/* function prototypes */
+
+int chunk_printf(struct chunk *chk, const char *fmt, ...)
+ __attribute__ ((format(printf, 2, 3)));
+
+int chunk_appendf(struct chunk *chk, const char *fmt, ...)
+ __attribute__ ((format(printf, 2, 3)));
+
+int chunk_htmlencode(struct chunk *dst, struct chunk *src);
+int chunk_asciiencode(struct chunk *dst, struct chunk *src, char qc);
+int chunk_strcmp(const struct chunk *chk, const char *str);
+int chunk_strcasecmp(const struct chunk *chk, const char *str);
+int alloc_trash_buffers(int bufsize);
+void free_trash_buffers(void);
+struct chunk *get_trash_chunk(void);
+
+static inline void chunk_reset(struct chunk *chk)
+{
+ chk->len = 0;
+}
+
+static inline void chunk_init(struct chunk *chk, char *str, size_t size)
+{
+ chk->str = str;
+ chk->len = 0;
+ chk->size = size;
+}
+
+/* Reports 0 in case of error, 1 if OK. */
+static inline int chunk_initlen(struct chunk *chk, char *str, size_t size, int len)
+{
+
+ if (size && len > size)
+ return 0;
+
+ chk->str = str;
+ chk->len = len;
+ chk->size = size;
+
+ return 1;
+}
+
+static inline void chunk_initstr(struct chunk *chk, char *str)
+{
+ chk->str = str;
+ chk->len = strlen(str);
+ chk->size = 0; /* mark it read-only */
+}
+
+static inline int chunk_strcpy(struct chunk *chk, const char *str)
+{
+ size_t len;
+
+ len = strlen(str);
+
+ if (unlikely(len > chk->size))
+ return 0;
+
+ chk->len = len;
+ memcpy(chk->str, str, len);
+
+ return 1;
+}
+
+static inline void chunk_drop(struct chunk *chk)
+{
+ chk->str = NULL;
+ chk->len = -1;
+ chk->size = 0;
+}
+
+static inline void chunk_destroy(struct chunk *chk)
+{
+ if (!chk->size)
+ return;
+
+ free(chk->str);
+ chunk_drop(chk);
+}
+
+/*
+ * Frees the destination chunk if already allocated, allocates a new string,
+ * and copies the source into it. The pointer to the destination string is
+ * returned, or NULL if the allocation fails or if any pointer is NULL.
+ */
+static inline char *chunk_dup(struct chunk *dst, const struct chunk *src)
+{
+ if (!dst || !src || !src->str)
+ return NULL;
+ if (dst->str)
+ free(dst->str);
+ dst->len = src->len;
+ dst->str = (char *)malloc(dst->len);
+ if (!dst->str) {
+ dst->len = 0;
+ return NULL;
+ }
+ memcpy(dst->str, src->str, dst->len);
+ return dst->str;
+}
+
+#endif /* _TYPES_CHUNK_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/common/compat.h
+ * Operating system compatibility interface.
+ *
+ * Copyright (C) 2000-2010 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _COMMON_COMPAT_H
+#define _COMMON_COMPAT_H
+
+#include <limits.h>
+/* This is needed on Linux for Netfilter includes */
+#include <sys/param.h>
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <arpa/inet.h>
+#include <netinet/in.h>
+
+#ifndef BITS_PER_INT
+#define BITS_PER_INT (8*sizeof(int))
+#endif
+
+/* this is for libc5 for example */
+#ifndef TCP_NODELAY
+#define TCP_NODELAY 1
+#endif
+
+#ifndef SHUT_RD
+#define SHUT_RD 0
+#endif
+
+#ifndef SHUT_WR
+#define SHUT_WR 1
+#endif
+
+/* only Linux defines it */
+#ifndef MSG_NOSIGNAL
+#define MSG_NOSIGNAL 0
+#endif
+
+/* AIX does not define MSG_DONTWAIT. We'll define it to zero, and test it
+ * wherever appropriate.
+ */
+#ifndef MSG_DONTWAIT
+#define MSG_DONTWAIT 0
+#endif
+
+/* Only Linux defines MSG_MORE */
+#ifndef MSG_MORE
+#define MSG_MORE 0
+#endif
+
+/* On Linux 2.4 and above, MSG_TRUNC can be used on TCP sockets to drop any
+ * pending data. Let's rely on NETFILTER to detect if this is supported.
+ */
+#ifdef NETFILTER
+#define MSG_TRUNC_CLEARS_INPUT
+#endif
+
+/* Maximum path length, OS-dependent */
+#ifndef MAXPATHLEN
+#define MAXPATHLEN 128
+#endif
+
+/* On Linux, allows pipes to be resized */
+#ifndef F_SETPIPE_SZ
+#define F_SETPIPE_SZ (1024 + 7)
+#endif
+
+#if defined(TPROXY) && defined(NETFILTER)
+#include <linux/types.h>
+#include <linux/netfilter_ipv6.h>
+#include <linux/netfilter_ipv4.h>
+#endif
+
+/* On Linux, IP_TRANSPARENT and/or IP_FREEBIND generally require a kernel patch */
+#if defined(CONFIG_HAP_LINUX_TPROXY)
+#if !defined(IP_FREEBIND)
+#define IP_FREEBIND 15
+#endif /* !IP_FREEBIND */
+#if !defined(IP_TRANSPARENT)
+#define IP_TRANSPARENT 19
+#endif /* !IP_TRANSPARENT */
+#if !defined(IPV6_TRANSPARENT)
+#define IPV6_TRANSPARENT 75
+#endif /* !IPV6_TRANSPARENT */
+#endif /* CONFIG_HAP_LINUX_TPROXY */
+
+#if defined(IP_FREEBIND) \
+ || defined(IP_BINDANY) \
+ || defined(IPV6_BINDANY) \
+ || defined(SO_BINDANY) \
+ || defined(IP_TRANSPARENT) \
+ || defined(IPV6_TRANSPARENT)
+#define CONFIG_HAP_TRANSPARENT
+#endif
+
+/* We'll try to enable SO_REUSEPORT on Linux 2.4 and 2.6 if not defined.
+ * There are two families of values depending on the architecture. Those
+ * are at least valid on Linux 2.4 and 2.6, reason why we'll rely on the
+ * NETFILTER define.
+ */
+#if !defined(SO_REUSEPORT) && defined(NETFILTER)
+#if (SO_REUSEADDR == 2)
+#define SO_REUSEPORT 15
+#elif (SO_REUSEADDR == 0x0004)
+#define SO_REUSEPORT 0x0200
+#endif /* SO_REUSEADDR */
+#endif /* SO_REUSEPORT */
+
+/* only Linux defines TCP_FASTOPEN */
+#ifdef USE_TFO
+#ifndef TCP_FASTOPEN
+#define TCP_FASTOPEN 23
+#endif
+#endif
+
+/* FreeBSD doesn't define SOL_IP and prefers IPPROTO_IP */
+#ifndef SOL_IP
+#define SOL_IP IPPROTO_IP
+#endif
+
+/* If IPv6 is supported, define IN6_IS_ADDR_V4MAPPED() if missing. */
+#if defined(IPV6_TCLASS) && !defined(IN6_IS_ADDR_V4MAPPED)
+#define IN6_IS_ADDR_V4MAPPED(a) \
+((((const uint32_t *) (a))[0] == 0) \
+&& (((const uint32_t *) (a))[1] == 0) \
+&& (((const uint32_t *) (a))[2] == htonl (0xffff)))
+#endif
+
+#if defined(__dietlibc__)
+#include <strings.h>
+#endif
+
+#endif /* _COMMON_COMPAT_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/common/compiler.h
+ * This file contains some compiler-specific settings.
+ *
+ * Copyright (C) 2000-2009 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _COMMON_COMPILER_H
+#define _COMMON_COMPILER_H
+
+
+/*
+ * Gcc before 3.0 needs [0] to declare a variable-size array
+ */
+#ifndef VAR_ARRAY
+#if __GNUC__ < 3
+#define VAR_ARRAY 0
+#else
+#define VAR_ARRAY
+#endif
+#endif
+
+
+/* Support passing function parameters in registers. For this, the
+ * CONFIG_REGPARM macro has to be set to the maximal number of registers
+ * allowed. Some functions have intentionally received a regparm lower than
+ * their parameter count, it is in order to avoid register clobbering where
+ * they are called.
+ */
+#ifndef REGPRM1
+#if CONFIG_REGPARM >= 1 && __GNUC__ >= 3
+#define REGPRM1 __attribute__((regparm(1)))
+#else
+#define REGPRM1
+#endif
+#endif
+
+#ifndef REGPRM2
+#if CONFIG_REGPARM >= 2 && __GNUC__ >= 3
+#define REGPRM2 __attribute__((regparm(2)))
+#else
+#define REGPRM2 REGPRM1
+#endif
+#endif
+
+#ifndef REGPRM3
+#if CONFIG_REGPARM >= 3 && __GNUC__ >= 3
+#define REGPRM3 __attribute__((regparm(3)))
+#else
+#define REGPRM3 REGPRM2
+#endif
+#endif
+
+
+/* By default, gcc does not inline large chunks of code, but we want it to
+ * respect our choices.
+ */
+#if !defined(forceinline)
+#if __GNUC__ < 3
+#define forceinline inline
+#else
+#define forceinline inline __attribute__((always_inline))
+#endif
+#endif
+
+
+/*
+ * Gcc >= 3 provides the ability for the program to give hints to the
+ * compiler about what branch of an if is most likely to be taken. This
+ * helps the compiler produce the most compact critical paths, which is
+ * generally better for the cache and to reduce the number of jumps.
+ */
+#if !defined(likely)
+#if __GNUC__ < 3
+#define __builtin_expect(x,y) (x)
+#define likely(x) (x)
+#define unlikely(x) (x)
+#elif __GNUC__ < 4
+/* gcc 3.x does the best job at this */
+#define likely(x) (__builtin_expect((x) != 0, 1))
+#define unlikely(x) (__builtin_expect((x) != 0, 0))
+#else
+/* GCC 4.x is stupid, it performs the comparison then compares it to 1,
+ * so we cheat in a dirty way to prevent it from doing this. This will
+ * only work with ints and booleans though.
+ */
+#define likely(x) (x)
+#define unlikely(x) (__builtin_expect((unsigned long)(x), 0))
+#endif
+#endif
+
+
+#endif /* _COMMON_COMPILER_H */
--- /dev/null
+/*
+ * include/common/config.h
+ * This file contains most of the user-configurable settings.
+ *
+ * Copyright (C) 2000-2009 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _COMMON_CONFIG_H
+#define _COMMON_CONFIG_H
+
+#include <common/compiler.h>
+#include <common/compat.h>
+#include <common/defaults.h>
+
+/* this reduces the number of calls to select() by choosing an appropriate
+ * scheduler precision in milliseconds. It should be near the minimum
+ * time that is needed by select() to collect all events. All timeouts
+ * are rounded up by adding this value prior to passing it to select().
+ */
+#define SCHEDULER_RESOLUTION 9
+
+/* CONFIG_HAP_MEM_OPTIM
+ * This enables use of memory pools instead of malloc()/free(). There
+ * is no reason to disable it, except perhaps for rare debugging.
+ */
+#ifndef CONFIG_HAP_NO_MEM_OPTIM
+# define CONFIG_HAP_MEM_OPTIM
+#endif /* CONFIG_HAP_NO_MEM_OPTIM */
+
+/* CONFIG_HAP_MALLOC / CONFIG_HAP_CALLOC / CONFIG_HAP_FREE
+ * These macros allow the allocation functions to be replaced with others.
+ */
+#ifdef CONFIG_HAP_MALLOC
+#define MALLOC CONFIG_HAP_MALLOC
+#else
+#define MALLOC malloc
+#endif
+
+#ifdef CONFIG_HAP_CALLOC
+#define CALLOC CONFIG_HAP_CALLOC
+#else
+#define CALLOC calloc
+#endif
+
+#ifdef CONFIG_HAP_FREE
+#define FREE CONFIG_HAP_FREE
+#else
+#define FREE free
+#endif
+
+
+/* CONFIG_HAP_INLINE_FD_SET
+ * This makes use of inline FD_* macros instead of calling equivalent
+ * functions. Benchmarks on a Pentium-M show that using functions is
+ * generally twice as fast. So it's better to keep this option unset.
+ */
+//#undef CONFIG_HAP_INLINE_FD_SET
+
+#endif /* _COMMON_CONFIG_H */
--- /dev/null
+/*
+ include/common/debug.h
+ This file contains some macros to help debugging.
+
+ Copyright (C) 2000-2006 Willy Tarreau - w@1wt.eu
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation, version 2.1
+ exclusively.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+*/
+
+#ifndef _COMMON_DEBUG_H
+#define _COMMON_DEBUG_H
+
+#include <common/config.h>
+#include <common/memory.h>
+
+#ifdef DEBUG_FULL
+#define DPRINTF(x...) fprintf(x)
+#else
+#define DPRINTF(x...)
+#endif
+
+#ifdef DEBUG_FSM
+#define FSM_PRINTF(x...) fprintf(x)
+#else
+#define FSM_PRINTF(x...)
+#endif
+
+/* This abort is more efficient than abort() because it does not mangle the
+ * stack and stops at the exact location we need.
+ */
+#define ABORT_NOW() (*(int*)0=0)
+
+/* this one is provided for easy code tracing.
+ * Usage: TRACE(strm||0, fmt, args...);
+ * TRACE(strm, "");
+ */
+#define TRACE(strm, fmt, args...) do { \
+ fprintf(stderr, \
+ "%d.%06d [%s:%d %s] [strm %p(%x)] " fmt "\n", \
+ (int)now.tv_sec, (int)now.tv_usec, \
+ __FILE__, __LINE__, __FUNCTION__, \
+ strm, strm?((struct stream *)strm)->uniq_id:~0U, \
+ ##args); \
+ } while (0)
+
+/* This one is useful to automatically apply poisoning on an area returned
+ * by malloc(). Only "p_" is required to make it work, and to define a poison
+ * byte using -dM.
+ */
+static inline void *p_malloc(size_t size)
+{
+ void *ret = malloc(size);
+ if (mem_poison_byte >= 0 && ret)
+ memset(ret, mem_poison_byte, size);
+ return ret;
+}
+
+#endif /* _COMMON_DEBUG_H */
--- /dev/null
+/*
+ * include/common/defaults.h
+ * Miscellaneous default values.
+ *
+ * Copyright (C) 2000-2010 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _COMMON_DEFAULTS_H
+#define _COMMON_DEFAULTS_H
+
+/*
+ * BUFSIZE defines the size of a read and write buffer. It is the maximum
+ * amount of bytes which can be stored by the proxy for each stream. However,
+ * when reading HTTP headers, the proxy needs some spare space to add or rewrite
+ * headers if needed. The size of this spare is defined with MAXREWRITE. So it
+ * is not possible to process headers longer than BUFSIZE-MAXREWRITE bytes. By
+ * default, BUFSIZE=16384 bytes and MAXREWRITE=min(1024,BUFSIZE/2), so the
+ * maximum length of headers accepted is 15360 bytes.
+ */
+#ifndef BUFSIZE
+#define BUFSIZE 16384
+#endif
+
+/* certain buffers may only be allocated for responses in order to avoid
+ * deadlocks caused by request queuing. 2 buffers is the absolute minimum
+ * acceptable to ensure that a request gaining access to a server can get
+ * a response buffer even if it doesn't completely flush the request buffer.
+ * The worst case is an applet making use of a request buffer that cannot
+ * completely be sent while the server starts to respond, and all unreserved
+ * buffers are allocated by request buffers from pending connections in the
+ * queue waiting for this one to flush. Both reserved buffers may
+ * thus be used at the same time.
+ */
+#ifndef RESERVED_BUFS
+#define RESERVED_BUFS 2
+#endif
+
+// reserved buffer space for header rewriting
+#ifndef MAXREWRITE
+#define MAXREWRITE 1024
+#endif
+
+#ifndef REQURI_LEN
+#define REQURI_LEN 1024
+#endif
+
+#ifndef CAPTURE_LEN
+#define CAPTURE_LEN 64
+#endif
+
+#ifndef MAX_SYSLOG_LEN
+#define MAX_SYSLOG_LEN 1024
+#endif
+
+// maximum line size when parsing config
+#ifndef LINESIZE
+#define LINESIZE 2048
+#endif
+
+// max # args on a configuration line
+#define MAX_LINE_ARGS 64
+
+// max # args on a stats socket
+// This should cover at least 5 + twice the # of data_types
+#define MAX_STATS_ARGS 64
+
+// max # of matches per regexp
+#define MAX_MATCH 10
+
+// max # of headers in one HTTP request or response
+// By default, about 100 headers (+1 for the first line)
+#ifndef MAX_HTTP_HDR
+#define MAX_HTTP_HDR 101
+#endif
+
+// max # of headers in history when looking for header #-X
+#ifndef MAX_HDR_HISTORY
+#define MAX_HDR_HISTORY 10
+#endif
+
+// max # of stick counters per session (at least 3 for sc0..sc2)
+#ifndef MAX_SESS_STKCTR
+#define MAX_SESS_STKCTR 3
+#endif
+
+// max # of extra stick-table data types that can be registered at runtime
+#ifndef STKTABLE_EXTRA_DATA_TYPES
+#define STKTABLE_EXTRA_DATA_TYPES 0
+#endif
+
+// max # of loops we can perform around a read() which succeeds.
+// It's very frequent that the system returns a few TCP segments at a time.
+#ifndef MAX_READ_POLL_LOOPS
+#define MAX_READ_POLL_LOOPS 4
+#endif
+
+// minimum number of bytes read at once above which we don't try to read
+// more, in order not to risk facing an EAGAIN. Most often, if we read
+// at least 10 kB, we can consider that the system has tried to read a
+// full buffer, got multiple segments (>1 MSS for jumbo frames, >7 MSS
+// for normal frames), and did not bother truncating the last segment.
+#ifndef MIN_RECV_AT_ONCE_ENOUGH
+#define MIN_RECV_AT_ONCE_ENOUGH (7*1448)
+#endif
+
+// The minimum number of bytes to be forwarded that is worth trying to splice.
+// Below 4kB, it's not worth allocating pipes nor pretending to zero-copy.
+#ifndef MIN_SPLICE_FORWARD
+#define MIN_SPLICE_FORWARD 4096
+#endif
+
+// the max number of events returned in one call to poll/epoll. Too small a
+// value will cause lots of calls, and too high a value may cause high latency.
+#ifndef MAX_POLL_EVENTS
+#define MAX_POLL_EVENTS 200
+#endif
+
+// cookie delimiter in "prefix" mode. This character is inserted between the
+// persistence cookie and the original value. The '~' is allowed by RFC2965,
+// and should not be too common in server names.
+#ifndef COOKIE_DELIM
+#define COOKIE_DELIM '~'
+#endif
+
+// this delimiter is used between a server's name and a last visit date in
+// cookies exchanged with the client.
+#ifndef COOKIE_DELIM_DATE
+#define COOKIE_DELIM_DATE '|'
+#endif
+
+#define CONN_RETRIES 3
+
+#define CHK_CONNTIME 2000
+#define DEF_CHKINTR 2000
+#define DEF_FALLTIME 3
+#define DEF_RISETIME 2
+#define DEF_AGENT_FALLTIME 1
+#define DEF_AGENT_RISETIME 1
+#define DEF_CHECK_REQ "OPTIONS / HTTP/1.0\r\n"
+#define DEF_CHECK_PATH ""
+#define DEF_SMTP_CHECK_REQ "HELO localhost\r\n"
+#define DEF_LDAP_CHECK_REQ "\x30\x0c\x02\x01\x01\x60\x07\x02\x01\x03\x04\x00\x80\x00"
+#define DEF_REDIS_CHECK_REQ "*1\r\n$4\r\nPING\r\n"
+
+#define DEF_HANA_ONERR HANA_ONERR_FAILCHK
+#define DEF_HANA_ERRLIMIT 10
+
+// X-Forwarded-For header default
+#define DEF_XFORWARDFOR_HDR "X-Forwarded-For"
+
+// X-Original-To header default
+#define DEF_XORIGINALTO_HDR "X-Original-To"
+
+/* Default connections limit.
+ *
+ * A system limit can be enforced at build time in order to avoid using haproxy
+ * beyond reasonable system limits. For this, just define SYSTEM_MAXCONN to the
+ * absolute limit accepted by the system. If the configuration specifies a
+ * higher value, it will be capped to SYSTEM_MAXCONN and a warning will be
+ * emitted. The only way to override this limit will be to set it via the
+ * command-line '-n' argument.
+ */
+#ifndef SYSTEM_MAXCONN
+#ifndef DEFAULT_MAXCONN
+#define DEFAULT_MAXCONN 2000
+#endif
+#else
+#undef DEFAULT_MAXCONN
+#define DEFAULT_MAXCONN SYSTEM_MAXCONN
+#endif
+
+/* Minimum check interval for spread health checks. Servers with intervals
+ * greater than or equal to this value will have their checks spread apart
+ * and will be considered when searching the minimal interval.
+ * Others will be ignored for the minimal interval and will have their checks
+ * scheduled on a different basis.
+ */
+#ifndef SRV_CHK_INTER_THRES
+#define SRV_CHK_INTER_THRES 1000
+#endif
+
+/* Specifies the string used to report the version and release date on the
+ * statistics page. May be defined to the empty string ("") to permanently
+ * disable the feature.
+ */
+#ifndef STATS_VERSION_STRING
+#define STATS_VERSION_STRING " version " HAPROXY_VERSION ", released " HAPROXY_DATE
+#endif
+
+/* Maximum signal queue size, and also number of different signals we can
+ * handle.
+ */
+#ifndef MAX_SIGNAL
+#define MAX_SIGNAL 256
+#endif
+
+/* Maximum host name length */
+#ifndef MAX_HOSTNAME_LEN
+#if MAXHOSTNAMELEN
+#define MAX_HOSTNAME_LEN MAXHOSTNAMELEN
+#else
+#define MAX_HOSTNAME_LEN 64
+#endif // MAXHOSTNAMELEN
+#endif // MAX_HOSTNAME_LEN
+
+/* Maximum health check description length */
+#ifndef HCHK_DESC_LEN
+#define HCHK_DESC_LEN 128
+#endif
+
+/* ciphers used as defaults on connect */
+#ifndef CONNECT_DEFAULT_CIPHERS
+#define CONNECT_DEFAULT_CIPHERS NULL
+#endif
+
+/* ciphers used as defaults on listeners */
+#ifndef LISTEN_DEFAULT_CIPHERS
+#define LISTEN_DEFAULT_CIPHERS NULL
+#endif
+
+/* named curve used as defaults for ECDHE ciphers */
+#ifndef ECDHE_DEFAULT_CURVE
+#define ECDHE_DEFAULT_CURVE "prime256v1"
+#endif
+
+/* ssl cache size */
+#ifndef SSLCACHESIZE
+#define SSLCACHESIZE 20000
+#endif
+
+/* ssl max dh param size */
+#ifndef SSL_DEFAULT_DH_PARAM
+#define SSL_DEFAULT_DH_PARAM 0
+#endif
+
+/* max memory cost per SSL session */
+#ifndef SSL_SESSION_MAX_COST
+#define SSL_SESSION_MAX_COST (16*1024) // measured
+#endif
+
+/* max memory cost per SSL handshake (on top of session) */
+#ifndef SSL_HANDSHAKE_MAX_COST
+#define SSL_HANDSHAKE_MAX_COST (76*1024) // measured
+#endif
+
+#ifndef DEFAULT_SSL_CTX_CACHE
+#define DEFAULT_SSL_CTX_CACHE 1000
+#endif
+
+/* approximate stream size (for maxconn estimate) */
+#ifndef STREAM_MAX_COST
+#define STREAM_MAX_COST (sizeof(struct stream) + \
+ 2 * sizeof(struct channel) + \
+ 2 * sizeof(struct connection) + \
+ REQURI_LEN + \
+ 2 * global.tune.cookie_len)
+#endif
+
+/* available memory estimate : count about 3% of overhead in various structures */
+#ifndef MEM_USABLE_RATIO
+#define MEM_USABLE_RATIO 0.97
+#endif
+
+/* Number of samples used to compute the times reported in stats. A power of
+ * two is highly recommended, and this value multiplied by the largest response
+ * time must not overflow an unsigned int. See freq_ctr.h for more information.
+ * We consider that values are accurate to 95% with two batches of samples below,
+ * so in order to advertise accurate times across 1k samples, we effectively
+ * measure over 512.
+ */
+#ifndef TIME_STATS_SAMPLES
+#define TIME_STATS_SAMPLES 512
+#endif
+
+/* max ocsp cert id asn1 encoded length */
+#ifndef OCSP_MAX_CERTID_ASN1_LENGTH
+#define OCSP_MAX_CERTID_ASN1_LENGTH 128
+#endif
+
+#ifndef OCSP_MAX_RESPONSE_TIME_SKEW
+#define OCSP_MAX_RESPONSE_TIME_SKEW 300
+#endif
+
+/* Number of TLS tickets to check, used for rotation */
+#ifndef TLS_TICKETS_NO
+#define TLS_TICKETS_NO 3
+#endif
+
+/* pattern lookup default cache size, in number of entries :
+ * 10k entries at 10k req/s mean 1% risk of a collision after 60 years, that's
+ * already much less than the memory's reliability in most machines and more
+ * durable than most admin's life expectancy. A collision will result in a
+ * valid result to be returned for a different entry from the same list.
+ */
+#ifndef DEFAULT_PAT_LRU_SIZE
+#define DEFAULT_PAT_LRU_SIZE 10000
+#endif
+
+#endif /* _COMMON_DEFAULTS_H */
--- /dev/null
+/*
+ * include/common/epoll.h
+ * epoll definitions for older libc.
+ *
+ * Copyright (C) 2000-2012 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/*
+ * Those constants were found both in glibc and in the Linux kernel.
+ * They are provided here because the epoll() syscall is featured in
+ * some kernels but not always included in glibc, so it needs
+ * just a basic definition.
+ */
+
+#ifndef _COMMON_EPOLL_H
+#define _COMMON_EPOLL_H
+
+#if defined (__linux__) && defined(ENABLE_EPOLL)
+
+#ifndef USE_MY_EPOLL
+#include <sys/epoll.h>
+#else
+
+#include <errno.h>
+#include <sys/types.h>
+#include <linux/unistd.h>
+#include <sys/syscall.h>
+#include <common/config.h>
+#include <common/syscall.h>
+
+/* epoll_ctl() commands */
+#ifndef EPOLL_CTL_ADD
+#define EPOLL_CTL_ADD 1
+#define EPOLL_CTL_DEL 2
+#define EPOLL_CTL_MOD 3
+#endif
+
+/* events types (bit fields) */
+#ifndef EPOLLIN
+#define EPOLLIN 1
+#define EPOLLPRI 2
+#define EPOLLOUT 4
+#define EPOLLERR 8
+#define EPOLLHUP 16
+#define EPOLLONESHOT (1 << 30)
+#define EPOLLET (1 << 31)
+#endif
+
+struct epoll_event {
+ uint32_t events;
+ union {
+ void *ptr;
+ int fd;
+ uint32_t u32;
+ uint64_t u64;
+ } data;
+};
+
+#if defined(CONFIG_HAP_LINUX_VSYSCALL) && defined(__linux__) && defined(__i386__)
+/* Those are our self-defined functions */
+extern int epoll_create(int size);
+extern int epoll_ctl(int epfd, int op, int fd, struct epoll_event * event);
+extern int epoll_wait(int epfd, struct epoll_event * events, int maxevents, int timeout);
+#else
+
+/* We'll define the syscalls ourselves, so for this we need the __NR_epoll_*
+ * numbers. They should have been provided by syscall.h.
+ */
+#if !defined(__NR_epoll_ctl)
+#warning unsupported architecture, guessing __NR_epoll_create=254 like x86...
+#define __NR_epoll_create 254
+#define __NR_epoll_ctl 255
+#define __NR_epoll_wait 256
+#endif /* __NR_epoll_ctl */
+
+static inline _syscall1 (int, epoll_create, int, size);
+static inline _syscall4 (int, epoll_ctl, int, epfd, int, op, int, fd, struct epoll_event *, event);
+static inline _syscall4 (int, epoll_wait, int, epfd, struct epoll_event *, events, int, maxevents, int, timeout);
+#endif /* VSYSCALL */
+
+#endif /* USE_MY_EPOLL */
+
+#endif /* __linux__ && ENABLE_EPOLL */
+
+#endif /* _COMMON_EPOLL_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/common/errors.h
+ * Global error macros and constants
+ *
+ * Copyright (C) 2000-2010 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _COMMON_ERRORS_H
+#define _COMMON_ERRORS_H
+
+/* These flags may be used in various functions which are called from within
+ * loops (eg: to start all listeners from all proxies). They provide enough
+ * information to let the caller decide what to do. ERR_WARN and ERR_ALERT
+ * do not indicate any error, just that a message has been put in a shared
+ * buffer in order to be displayed by the caller.
+ */
+#define ERR_NONE 0x00 /* no error, no message returned */
+#define ERR_RETRYABLE 0x01 /* retryable error, may be cumulated */
+#define ERR_FATAL 0x02 /* fatal error, may be cumulated */
+#define ERR_ABORT 0x04 /* it's preferable to end any possible loop */
+#define ERR_WARN 0x08 /* a warning message has been returned */
+#define ERR_ALERT 0x10 /* an alert message has been returned */
+
+#define ERR_CODE (ERR_RETRYABLE|ERR_FATAL|ERR_ABORT) /* mask */
+
+
+/* These codes may be used by config parsing functions which detect errors and
+ * which need to inform the upper layer about them. They are all prefixed with
+ * "PE_" for "Parse Error". These codes will probably be extended, and functions
+ * making use of them should be documented as such. Only code PE_NONE (zero) may
+ * indicate a valid condition, all other ones must be caught as errors, even if
+ * unknown by the caller. This must not be used to forward warnings.
+ */
+enum {
+ PE_NONE = 0, /* no error */
+ PE_ENUM_OOR, /* enum data out of allowed range */
+ PE_EXIST, /* trying to create something which already exists */
+ PE_ARG_MISSING, /* mandatory argument not provided */
+ PE_ARG_NOT_USED, /* argument provided cannot be used */
+ PE_ARG_INVC, /* invalid char in argument (pointer not provided) */
+ PE_ARG_INVC_PTR, /* invalid char in argument (pointer provided) */
+ PE_ARG_NOT_FOUND, /* argument references something not found */
+};
+
+#endif /* _COMMON_ERRORS_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/common/hash.h
+ * Macros for different hashing functions.
+ *
+ * Copyright (C) 2000-2011 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _COMMON_HASH_H_
+#define _COMMON_HASH_H_
+
+unsigned int hash_djb2(const char *key, int len);
+unsigned int hash_wt6(const char *key, int len);
+unsigned int hash_sdbm(const char *key, int len);
+unsigned int hash_crc32(const char *key, int len);
+
+#endif /* _COMMON_HASH_H_ */
--- /dev/null
+/*
+ * include/common/memory.h
+ * Memory management definitions.
+ *
+ * Copyright (C) 2000-2014 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _COMMON_MEMORY_H
+#define _COMMON_MEMORY_H
+
+#include <stdlib.h>
+#include <string.h>
+
+#include <common/config.h>
+#include <common/mini-clist.h>
+
+#define MEM_F_SHARED 0x1
+
+struct pool_head {
+ void **free_list;
+ struct list list; /* list of all known pools */
+ unsigned int used; /* how many chunks are currently in use */
+ unsigned int allocated; /* how many chunks have been allocated */
+ unsigned int limit; /* hard limit on the number of chunks */
+ unsigned int minavail; /* how many chunks are expected to be used */
+ unsigned int size; /* chunk size */
+ unsigned int flags; /* MEM_F_* */
+ unsigned int users; /* number of pools sharing this zone */
+ char name[12]; /* name of the pool */
+};
+
+/* poison each newly allocated area with this byte if >= 0 */
+extern int mem_poison_byte;
+
+/*
+ * This function destroys a pool by freeing it completely.
+ * This should be called only under extreme circumstances.
+ */
+static inline void pool_destroy(void **pool)
+{
+ void *temp, *next;
+ next = pool;
+ while (next) {
+ temp = next;
+ next = *(void **)temp;
+ free(temp);
+ }
+}
+
+/* Allocates new entries for pool <pool> until there are at least <avail> + 1
+ * available, then returns the last one for immediate use, so that at least
+ * <avail> are left available in the pool upon return. NULL is returned if the
+ * last entry could not be allocated. It's important to note that at least one
+ * allocation is always performed even if there are enough entries in the pool.
+ * A call to the garbage collector is performed at most once in case malloc()
+ * returns an error, before returning NULL.
+ */
+void *pool_refill_alloc(struct pool_head *pool, unsigned int avail);
+
+/* Try to find an existing shared pool with the same characteristics and
+ * returns it, otherwise creates this one. NULL is returned if no memory
+ * is available for a new creation.
+ */
+struct pool_head *create_pool(char *name, unsigned int size, unsigned int flags);
+
+/* Dump statistics on pools usage.
+ */
+void dump_pools_to_trash(void);
+void dump_pools(void);
+
+/*
+ * This function frees whatever can be freed in pool <pool>.
+ */
+void pool_flush2(struct pool_head *pool);
+
+/*
+ * This function frees whatever can be freed in all pools, but respecting
+ * the minimum thresholds imposed by owners.
+ */
+void pool_gc2(void);
+
+/*
+ * This function destroys a pool by freeing it completely.
+ * This should be called only under extreme circumstances.
+ */
+void *pool_destroy2(struct pool_head *pool);
+
+/*
+ * Returns a pointer to type <type> taken from the pool <pool_type> if
+ * available, otherwise returns NULL. No malloc() is attempted, and poisoning
+ * is never performed. The purpose is to get the fastest possible allocation.
+ */
+static inline void *pool_get_first(struct pool_head *pool)
+{
+ void *p;
+
+ if ((p = pool->free_list) != NULL) {
+ pool->free_list = *(void **)pool->free_list;
+ pool->used++;
+ }
+ return p;
+}
+
+/*
+ * Returns a pointer to type <type> taken from the pool <pool_type> or
+ * dynamically allocated. In the first case, <pool_type> is updated to point to
+ * the next element in the list. No memory poisoning is ever performed on the
+ * returned area.
+ */
+static inline void *pool_alloc_dirty(struct pool_head *pool)
+{
+ void *p;
+
+ if ((p = pool_get_first(pool)) == NULL)
+ p = pool_refill_alloc(pool, 0);
+
+ return p;
+}
+
+/*
+ * Returns a pointer to type <type> taken from the pool <pool_type> or
+ * dynamically allocated. In the first case, <pool_type> is updated to point to
+ * the next element in the list. Memory poisoning is performed if enabled.
+ */
+static inline void *pool_alloc2(struct pool_head *pool)
+{
+ void *p;
+
+ p = pool_alloc_dirty(pool);
+ if (p && mem_poison_byte >= 0)
+ memset(p, mem_poison_byte, pool->size);
+ return p;
+}
+
+/*
+ * Puts a memory area back to the corresponding pool.
+ * Items are chained directly through a pointer that
+ * is written in the beginning of the memory area, so
+ * there's no need for any carrier cell. This implies
+ * that each memory area is at least as big as one
+ * pointer. Just like with the libc's free(), nothing
+ * is done if <ptr> is NULL.
+ */
+static inline void pool_free2(struct pool_head *pool, void *ptr)
+{
+ if (likely(ptr != NULL)) {
+ *(void **)ptr = (void *)pool->free_list;
+ pool->free_list = (void *)ptr;
+ pool->used--;
+ }
+}
+
+
+#endif /* _COMMON_MEMORY_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/common/mini-clist.h
+ * Circular list manipulation macros and structures.
+ *
+ * Copyright (C) 2002-2014 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _COMMON_MINI_CLIST_H
+#define _COMMON_MINI_CLIST_H
+
+#include <common/config.h>
+
+/* these are circular or bidirectional lists only. Each list pointer points to
+ * another list pointer in a structure, and not the structure itself. The
+ * pointer to the next element MUST be the first one so that the list is easily
+ * cast as a single linked list or pointer.
+ */
+struct list {
+ struct list *n; /* next */
+ struct list *p; /* prev */
+};
+
+/* a back-ref is a pointer to a target list entry. It is used to detect when an
+ * element being deleted is currently being tracked by another user. The best
+ * example is a user dumping the session table. The table does not fit in the
+ * output buffer so we have to set a mark on a session and go on later. But if
+ * that marked session gets deleted, we don't want the user's pointer to go in
+ * the wild. So we can simply link this user's request to the list of this
+ * session's users, and put a pointer to the list element in ref, that will be
+ * used as the mark for next iteration.
+ */
+struct bref {
+ struct list users;
+ struct list *ref; /* pointer to the target's list entry */
+};
+
+/* a word list is a generic list with a pointer to a string in each element. */
+struct wordlist {
+ struct list list;
+ char *s;
+};
+
+/* this is the same as above with an additional pointer to a condition. */
+struct cond_wordlist {
+ struct list list;
+ void *cond;
+ char *s;
+};
+
+/* First undefine some macros which happen to also be defined on OpenBSD,
+ * in sys/queue.h, used by sys/event.h
+ */
+#undef LIST_HEAD
+#undef LIST_INIT
+#undef LIST_NEXT
+
+/* ILH = Initialized List Head : used to prevent gcc from moving an empty
+ * list to BSS. Some older versions tend to trim the whole array and cause
+ * corruption.
+ */
+#define ILH { .n = (struct list *)1, .p = (struct list *)2 }
+
+#define LIST_HEAD(a) ((void *)(&(a)))
+
+#define LIST_INIT(l) ((l)->n = (l)->p = (l))
+
+#define LIST_HEAD_INIT(l) { &l, &l }
+
+/* adds an element at the beginning of a list ; returns the element */
+#define LIST_ADD(lh, el) ({ (el)->n = (lh)->n; (el)->n->p = (lh)->n = (el); (el)->p = (lh); (el); })
+
+/* adds an element at the end of a list ; returns the element */
+#define LIST_ADDQ(lh, el) ({ (el)->p = (lh)->p; (el)->p->n = (lh)->p = (el); (el)->n = (lh); (el); })
+
+/* removes an element from a list and returns it */
+#define LIST_DEL(el) ({ typeof(el) __ret = (el); (el)->n->p = (el)->p; (el)->p->n = (el)->n; (__ret); })
+
+/* returns a pointer of type <pt> to a structure containing a list head called
+ * <el> at address <lh>. Note that <lh> can be the result of a function or macro
+ * since it's used only once.
+ * Example: LIST_ELEM(cur_node->args.next, struct node *, args)
+ */
+#define LIST_ELEM(lh, pt, el) ((pt)(((void *)(lh)) - ((void *)&((pt)NULL)->el)))
+
+/* checks if the list head <lh> is empty or not */
+#define LIST_ISEMPTY(lh) ((lh)->n == (lh))
+
+/* returns a pointer of type <pt> to a structure following the element
+ * which contains list head <lh>, which is known as element <el> in
+ * struct pt.
+ * Example: LIST_NEXT(args, struct node *, list)
+ */
+#define LIST_NEXT(lh, pt, el) (LIST_ELEM((lh)->n, pt, el))
+
+
+/* returns a pointer of type <pt> to a structure preceding the element
+ * which contains list head <lh>, which is known as element <el> in
+ * struct pt.
+ */
+#undef LIST_PREV
+#define LIST_PREV(lh, pt, el) (LIST_ELEM((lh)->p, pt, el))
+
+/*
+ * Simpler FOREACH_ITEM macro inspired by the Linux sources.
+ * Iterates <item> through a list of items of type "typeof(*item)" which are
+ * linked via a "struct list" member named <member>. A pointer to the head of
+ * the list is passed in <list_head>. No temporary variable is needed. Note
+ * that <item> must not be modified during the loop.
+ * Example: list_for_each_entry(cur_acl, known_acl, list) { ... };
+ */
+#define list_for_each_entry(item, list_head, member) \
+ for (item = LIST_ELEM((list_head)->n, typeof(item), member); \
+ &item->member != (list_head); \
+ item = LIST_ELEM(item->member.n, typeof(item), member))
+
+/*
+ * Simpler FOREACH_ITEM_SAFE macro inspired by the Linux sources.
+ * Iterates <item> through a list of items of type "typeof(*item)" which are
+ * linked via a "struct list" member named <member>. A pointer to the head of
+ * the list is passed in <list_head>. A temporary variable <back> of same type
+ * as <item> is needed so that <item> may safely be deleted if needed.
+ * Example: list_for_each_entry_safe(cur_acl, tmp, known_acl, list) { ... };
+ */
+#define list_for_each_entry_safe(item, back, list_head, member) \
+ for (item = LIST_ELEM((list_head)->n, typeof(item), member), \
+ back = LIST_ELEM(item->member.n, typeof(item), member); \
+ &item->member != (list_head); \
+ item = back, back = LIST_ELEM(back->member.n, typeof(back), member))
+
+
+#endif /* _COMMON_MINI_CLIST_H */
--- /dev/null
+#ifndef _NAMESPACE_H
+#define _NAMESPACE_H
+
+#include <stdlib.h>
+#include <ebistree.h>
+
+struct netns_entry;
+int my_socketat(const struct netns_entry *ns, int domain, int type, int protocol);
+
+#ifdef CONFIG_HAP_NS
+
+struct netns_entry
+{
+ struct ebpt_node node;
+ size_t name_len;
+ int fd;
+};
+
+struct netns_entry* netns_store_insert(const char *ns_name);
+const struct netns_entry* netns_store_lookup(const char *ns_name, size_t ns_name_len);
+
+int netns_init(void);
+#endif /* CONFIG_HAP_NS */
+
+#endif /* _NAMESPACE_H */
--- /dev/null
+/*
+ Red Black Trees
+ (C) 1999 Andrea Arcangeli <andrea@suse.de>
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+ linux/include/linux/rbtree.h
+
+  To use rbtrees you'll have to implement your own insert and search cores.
+  This avoids the need for callbacks, which would otherwise hurt performance
+  dramatically. It's not the cleanest way, but it is how you get both
+  performance and genericity in C (as opposed to C++)...
+
+  Some examples of insert and search follow here. The search is a plain
+  normal search over an ordered tree. The insert instead must be implemented
+  in two steps: first, the code must insert the element in order as a red
+  leaf in the tree, then the support library function rb_insert_color()
+  must be called. That function will do the non-trivial work of rebalancing
+  the rbtree if necessary.
+
+-----------------------------------------------------------------------
+static inline struct page * rb_search_page_cache(struct inode * inode,
+ unsigned long offset)
+{
+ struct rb_node * n = inode->i_rb_page_cache.rb_node;
+ struct page * page;
+
+ while (n)
+ {
+ page = rb_entry(n, struct page, rb_page_cache);
+
+ if (offset < page->offset)
+ n = n->rb_left;
+ else if (offset > page->offset)
+ n = n->rb_right;
+ else
+ return page;
+ }
+ return NULL;
+}
+
+static inline struct page * __rb_insert_page_cache(struct inode * inode,
+ unsigned long offset,
+ struct rb_node * node)
+{
+ struct rb_node ** p = &inode->i_rb_page_cache.rb_node;
+ struct rb_node * parent = NULL;
+ struct page * page;
+
+ while (*p)
+ {
+ parent = *p;
+ page = rb_entry(parent, struct page, rb_page_cache);
+
+ if (offset < page->offset)
+ p = &(*p)->rb_left;
+ else if (offset > page->offset)
+ p = &(*p)->rb_right;
+ else
+ return page;
+ }
+
+ rb_link_node(node, parent, p);
+
+ return NULL;
+}
+
+static inline struct page * rb_insert_page_cache(struct inode * inode,
+ unsigned long offset,
+ struct rb_node * node)
+{
+ struct page * ret;
+ if ((ret = __rb_insert_page_cache(inode, offset, node)))
+ goto out;
+ rb_insert_color(node, &inode->i_rb_page_cache);
+ out:
+ return ret;
+}
+-----------------------------------------------------------------------
+*/
+
+#ifndef _LINUX_RBTREE_H
+#define _LINUX_RBTREE_H
+
+/*
+#include <linux/kernel.h>
+#include <linux/stddef.h>
+*/
+
+struct rb_node
+{
+ struct rb_node *rb_parent;
+ int rb_color;
+#define RB_RED 0
+#define RB_BLACK 1
+ struct rb_node *rb_right;
+ struct rb_node *rb_left;
+};
+
+struct rb_root
+{
+ struct rb_node *rb_node;
+};
+
+// Copied from the Linux kernel 2.6 sources (kernel.h, stddef.h)
+#define container_of(ptr, type, member) ({ \
+ const typeof( ((type *)0)->member ) *__mptr = (ptr); \
+ (type *)( (char *)__mptr - offsetof(type,member) );})
+#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)
+
+
+#define RB_ROOT (struct rb_root) { NULL, }
+#define rb_entry(ptr, type, member) container_of(ptr, type, member)
+
+extern void rb_insert_color(struct rb_node *, struct rb_root *);
+extern void rb_erase(struct rb_node *, struct rb_root *);
+
+/* Find logical next and previous nodes in a tree */
+extern struct rb_node *rb_next(struct rb_node *);
+extern struct rb_node *rb_prev(struct rb_node *);
+extern struct rb_node *rb_first(struct rb_root *);
+extern struct rb_node *rb_last(struct rb_root *);
+
+/* Fast replacement of a single node without remove/rebalance/add/rebalance */
+extern void rb_replace_node(struct rb_node *victim, struct rb_node *new,
+ struct rb_root *root);
+
+static inline void rb_link_node(struct rb_node * node, struct rb_node * parent,
+ struct rb_node ** rb_link)
+{
+ node->rb_parent = parent;
+ node->rb_color = RB_RED;
+ node->rb_left = node->rb_right = NULL;
+
+ *rb_link = node;
+}
+
+#endif /* _LINUX_RBTREE_H */
--- /dev/null
+/*
+ * include/common/regex.h
+ * This file defines everything related to regular expressions.
+ *
+ * Copyright (C) 2000-2010 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _COMMON_REGEX_H
+#define _COMMON_REGEX_H
+
+#include <stdlib.h>
+#include <string.h>
+
+#include <common/config.h>
+
+#ifdef USE_PCRE
+#include <pcre.h>
+#include <pcreposix.h>
+
+/* For pre-8.20 PCRE compatibility */
+#ifndef PCRE_STUDY_JIT_COMPILE
+#define PCRE_STUDY_JIT_COMPILE 0
+#endif
+
+#else /* no PCRE */
+#include <regex.h>
+#endif
+
+struct my_regex {
+#ifdef USE_PCRE
+ pcre *reg;
+ pcre_extra *extra;
+#ifdef USE_PCRE_JIT
+#ifndef PCRE_CONFIG_JIT
+#error "The PCRE lib doesn't support JIT. Change your lib, or remove the option USE_PCRE_JIT."
+#endif
+#endif
+#else /* no PCRE */
+ regex_t regex;
+#endif
+};
+
+/* what to do when a header matches a regex */
+#define ACT_ALLOW 0 /* allow the request */
+#define ACT_REPLACE 1 /* replace the matching header */
+#define ACT_REMOVE 2 /* remove the matching header */
+#define ACT_DENY 3 /* deny the request */
+#define ACT_PASS 4 /* pass this header without allowing or denying the request */
+#define ACT_TARPIT 5 /* tarpit the connection matching this request */
+
+struct hdr_exp {
+ struct hdr_exp *next;
+ struct my_regex *preg; /* expression to look for */
+ int action; /* ACT_ALLOW, ACT_REPLACE, ACT_REMOVE, ACT_DENY */
+ const char *replace; /* expression to set instead */
+ void *cond; /* a possible condition or NULL */
+};
+
+extern regmatch_t pmatch[MAX_MATCH];
+
+/* "str" is the string that contains the regex to compile.
+ * "regex" is preallocated memory. After this function executes, this
+ * struct contains the compiled regex.
+ * "cs" is the case-sensitive flag. If cs is true, case sensitivity is enabled.
+ * "cap" is the capture flag. If cap is true, the regex can capture strings
+ * into parentheses.
+ * "err" is the standard error message pointer.
+ *
+ * The function returns 1 on success, otherwise it returns 0 and err is filled.
+ */
+int regex_comp(const char *str, struct my_regex *regex, int cs, int cap, char **err);
+int exp_replace(char *dst, unsigned int dst_size, char *src, const char *str, const regmatch_t *matches);
+const char *check_replace_string(const char *str);
+const char *chain_regex(struct hdr_exp **head, struct my_regex *preg,
+ int action, const char *replace, void *cond);
+
+/* If the function doesn't match, it returns false, else it returns true.
+ */
+static inline int regex_exec(const struct my_regex *preg, char *subject) {
+#if defined(USE_PCRE) || defined(USE_PCRE_JIT)
+ if (pcre_exec(preg->reg, preg->extra, subject, strlen(subject), 0, 0, NULL, 0) < 0)
+ return 0;
+ return 1;
+#else
+ int match;
+ match = regexec(&preg->regex, subject, 0, NULL, 0);
+ if (match == REG_NOMATCH)
+ return 0;
+ return 1;
+#endif
+}
+
+/* Note that <subject> MUST be at least <length+1> characters long and must
+ * be writable because the function will temporarily force a zero past the
+ * last character.
+ *
+ * If the function doesn't match, it returns false, else it returns true.
+ */
+static inline int regex_exec2(const struct my_regex *preg, char *subject, int length) {
+#if defined(USE_PCRE) || defined(USE_PCRE_JIT)
+ if (pcre_exec(preg->reg, preg->extra, subject, length, 0, 0, NULL, 0) < 0)
+ return 0;
+ return 1;
+#else
+ int match;
+ char old_char = subject[length];
+ subject[length] = 0;
+ match = regexec(&preg->regex, subject, 0, NULL, 0);
+ subject[length] = old_char;
+ if (match == REG_NOMATCH)
+ return 0;
+ return 1;
+#endif
+}
+
+int regex_exec_match(const struct my_regex *preg, const char *subject,
+ size_t nmatch, regmatch_t pmatch[], int flags);
+int regex_exec_match2(const struct my_regex *preg, char *subject, int length,
+ size_t nmatch, regmatch_t pmatch[], int flags);
+
+static inline void regex_free(struct my_regex *preg) {
+#if defined(USE_PCRE) || defined(USE_PCRE_JIT)
+ pcre_free(preg->reg);
+/* PCRE < 8.20 requires pcre_free() while >= 8.20 requires pcre_free_study(),
+ * which is easily detected using PCRE_CONFIG_JIT.
+ */
+#ifdef PCRE_CONFIG_JIT
+ pcre_free_study(preg->extra);
+#else /* PCRE_CONFIG_JIT */
+ pcre_free(preg->extra);
+#endif /* PCRE_CONFIG_JIT */
+#else
+ regfree(&preg->regex);
+#endif
+}
+
+#endif /* _COMMON_REGEX_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/common/splice.h
+ * Splice definition for older Linux libc.
+ *
+ * Copyright 2000-2011 Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ *
+ */
+
+#ifndef _COMMON_SPLICE_H
+#define _COMMON_SPLICE_H
+
+#if defined (__linux__) && defined(CONFIG_HAP_LINUX_SPLICE)
+
+#include <errno.h>
+#include <unistd.h>
+#include <sys/syscall.h>
+#include <common/syscall.h>
+
+/* On recent Linux kernels, the splice() syscall may be used for faster data copy.
+ * But it's not always defined on some OS versions, and it even happens that some
+ * definitions are wrong with some glibc due to an offset bug in syscall().
+ */
+
+#ifndef SPLICE_F_MOVE
+#define SPLICE_F_MOVE 0x1
+#endif
+
+#ifndef SPLICE_F_NONBLOCK
+#define SPLICE_F_NONBLOCK 0x2
+#endif
+
+#ifndef SPLICE_F_MORE
+#define SPLICE_F_MORE 0x4
+#endif
+
+#if defined(USE_MY_SPLICE)
+
+#if defined(CONFIG_HAP_LINUX_VSYSCALL) && defined(__linux__) && defined(__i386__)
+/* The syscall is redefined somewhere else */
+extern int splice(int fdin, loff_t *off_in, int fdout, loff_t *off_out, size_t len, unsigned long flags);
+#else
+
+/* We'll define a syscall, so for this we need __NR_splice. It should have
+ * been provided by syscall.h.
+ */
+#ifndef __NR_splice
+#warning unsupported architecture, guessing __NR_splice=313 like x86...
+#define __NR_splice 313
+#endif /* __NR_splice */
+
+static inline _syscall6(int, splice, int, fdin, loff_t *, off_in, int, fdout, loff_t *, off_out, size_t, len, unsigned long, flags);
+#endif /* VSYSCALL */
+
+#else
+/* use the system's definition */
+#include <fcntl.h>
+
+#endif /* USE_MY_SPLICE */
+
+#endif /* __linux__ && CONFIG_HAP_LINUX_SPLICE */
+
+#endif /* _COMMON_SPLICE_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/common/standard.h
+ * This file contains some general purpose functions and macros.
+ *
+ * Copyright (C) 2000-2010 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _COMMON_STANDARD_H
+#define _COMMON_STANDARD_H
+
+#include <limits.h>
+#include <string.h>
+#include <time.h>
+#include <sys/time.h>
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <sys/un.h>
+#include <netinet/in.h>
+#include <arpa/inet.h>
+#include <common/chunk.h>
+#include <common/config.h>
+#include <eb32tree.h>
+
+#ifndef LLONG_MAX
+# define LLONG_MAX 9223372036854775807LL
+# define LLONG_MIN (-LLONG_MAX - 1LL)
+#endif
+
+#ifndef ULLONG_MAX
+# define ULLONG_MAX (LLONG_MAX * 2ULL + 1)
+#endif
+
+#ifndef LONGBITS
+#define LONGBITS ((unsigned int)sizeof(long) * 8)
+#endif
+
+/* size used for max length of decimal representation of long long int. */
+#define NB_LLMAX_STR (sizeof("-9223372036854775807")-1)
+
+/* number of itoa_str entries */
+#define NB_ITOA_STR 10
+
+/* maximum quoted string length (truncated above) */
+#define QSTR_SIZE 200
+#define NB_QSTR 10
+
+/****** string-specific macros and functions ******/
+/* if a > max, then bound <a> to <max>. The macro returns the new <a> */
+#define UBOUND(a, max) ({ typeof(a) b = (max); if ((a) > b) (a) = b; (a); })
+
+/* if a < min, then bound <a> to <min>. The macro returns the new <a> */
+#define LBOUND(a, min) ({ typeof(a) b = (min); if ((a) < b) (a) = b; (a); })
+
+/* returns 1 only if only zero or one bit is set in X, which means that X is a
+ * power of 2, and 0 otherwise */
+#define POWEROF2(x) (((x) & ((x)-1)) == 0)
+
+/* operators to compare values. They're ordered that way so that the lowest bit
+ * serves as a negation for the test and contains all tests that are not equal.
+ */
+enum {
+ STD_OP_LE = 0, STD_OP_GT = 1,
+ STD_OP_EQ = 2, STD_OP_NE = 3,
+ STD_OP_GE = 4, STD_OP_LT = 5,
+};
+
+enum http_scheme {
+ SCH_HTTP,
+ SCH_HTTPS,
+};
+
+struct split_url {
+ enum http_scheme scheme;
+ const char *host;
+ int host_len;
+};
+
+extern int itoa_idx; /* index of next itoa_str to use */
+
+/*
+ * copies at most <size-1> chars from <src> to <dst>. Last char is always
+ * set to 0, unless <size> is 0. The number of chars copied is returned
+ * (excluding the terminating zero).
+ * This code has been optimized for size and speed : on x86, it's 45 bytes
+ * long, uses only registers, and consumes only 4 cycles per char.
+ */
+extern int strlcpy2(char *dst, const char *src, int size);
+
+/*
+ * This function simply returns a locally allocated string containing
+ * the ascii representation for number 'n' in decimal.
+ */
+extern char itoa_str[][171];
+extern char *ultoa_r(unsigned long n, char *buffer, int size);
+extern char *lltoa_r(long long int n, char *buffer, int size);
+extern char *sltoa_r(long n, char *buffer, int size);
+extern const char *ulltoh_r(unsigned long long n, char *buffer, int size);
+static inline const char *ultoa(unsigned long n)
+{
+ return ultoa_r(n, itoa_str[0], sizeof(itoa_str[0]));
+}
+
+/*
+ * unsigned long long ASCII representation
+ *
+ * Returns a pointer to the trailing '\0', or NULL if there is not enough
+ * space in dst.
+ */
+char *ulltoa(unsigned long long n, char *dst, size_t size);
+
+
+/*
+ * unsigned long ASCII representation
+ *
+ * Returns a pointer to the trailing '\0', or NULL if there is not enough
+ * space in dst.
+ */
+char *ultoa_o(unsigned long n, char *dst, size_t size);
+
+/*
+ * signed long ASCII representation
+ *
+ * Returns a pointer to the trailing '\0', or NULL if there is not enough
+ * space in dst.
+ */
+char *ltoa_o(long int n, char *dst, size_t size);
+
+/*
+ * signed long long ASCII representation
+ *
+ * Returns a pointer to the trailing '\0', or NULL if there is not enough
+ * space in dst.
+ */
+char *lltoa(long long n, char *dst, size_t size);
+
+/*
+ * Writes an ASCII representation of an unsigned int into dst and
+ * returns a pointer to the last character.
+ * Pads the ASCII representation with '0', according to size.
+ */
+char *utoa_pad(unsigned int n, char *dst, size_t size);
+
+/*
+ * This function simply returns a locally allocated string containing the ascii
+ * representation for number 'n' in decimal, unless n is 0 in which case it
+ * returns the alternate string (or an empty string if the alternate string is
+ * NULL). Its use is intended for limits displayed in reports, where it's
+ * desirable not to display anything if there is no limit. Warning! it shares
+ * the same vector as ultoa_r().
+ */
+extern const char *limit_r(unsigned long n, char *buffer, int size, const char *alt);
+
+/* returns a locally allocated string containing the ASCII representation of
+ * the number 'n' in decimal. Up to NB_ITOA_STR calls may be used in the same
+ * function call (eg: printf), shared with the other similar functions making
+ * use of itoa_str[].
+ */
+static inline const char *U2A(unsigned long n)
+{
+ const char *ret = ultoa_r(n, itoa_str[itoa_idx], sizeof(itoa_str[0]));
+ if (++itoa_idx >= NB_ITOA_STR)
+ itoa_idx = 0;
+ return ret;
+}
+
+/* returns a locally allocated string containing the HTML representation of
+ * the number 'n' in decimal. Up to NB_ITOA_STR calls may be used in the same
+ * function call (eg: printf), shared with the other similar functions making
+ * use of itoa_str[].
+ */
+static inline const char *U2H(unsigned long long n)
+{
+ const char *ret = ulltoh_r(n, itoa_str[itoa_idx], sizeof(itoa_str[0]));
+ if (++itoa_idx >= NB_ITOA_STR)
+ itoa_idx = 0;
+ return ret;
+}
+
+/* returns a locally allocated string containing the ASCII representation of
+ * the number 'n' in decimal, or the alternate string <alt> if n is zero. Up
+ * to NB_ITOA_STR calls may be used in the same function call (eg: printf),
+ * shared with the other similar functions making use of itoa_str[].
+ */
+static inline const char *LIM2A(unsigned long n, const char *alt)
+{
+ const char *ret = limit_r(n, itoa_str[itoa_idx], sizeof(itoa_str[0]), alt);
+ if (++itoa_idx >= NB_ITOA_STR)
+ itoa_idx = 0;
+ return ret;
+}
+
+/* returns a locally allocated string containing the quoted encoding of the
+ * input string. The output may be truncated to QSTR_SIZE chars, but it is
+ * guaranteed that the string will always be properly terminated. Quotes are
+ * encoded by doubling them as is commonly done in CSV files. QSTR_SIZE must
+ * always be at least 4 chars.
+ */
+const char *qstr(const char *str);
+
+/* returns <str> or its quote-encoded equivalent if it contains at least one
+ * quote or a comma. This is aimed at building CSV-compatible strings.
+ */
+static inline const char *cstr(const char *str)
+{
+ const char *p = str;
+
+ while (*p) {
+ if (*p == ',' || *p == '"')
+ return qstr(str);
+ p++;
+ }
+ return str;
+}
+
+/*
+ * Returns non-zero if character <s> is a hex digit (0-9, a-f, A-F), else zero.
+ */
+extern int ishex(char s);
+
+/*
+ * Return integer equivalent of character <c> for a hex digit (0-9, a-f, A-F),
+ * otherwise -1. This compact form helps gcc produce efficient code.
+ */
+static inline int hex2i(int c)
+{
+ if (unlikely((unsigned char)(c -= '0') > 9)) {
+ if (likely((unsigned char)(c -= 'A' - '0') > 5 &&
+ (unsigned char)(c -= 'a' - 'A') > 5))
+ c = -11;
+ c += 10;
+ }
+ return c;
+}
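The branchless logic above is easy to mis-read, so a standalone copy (illustrative only; the likely/unlikely compiler hints are stubbed out here since they are defined elsewhere in the tree) can be exercised directly:

```c
#include <assert.h>

/* no-op stand-ins for the branch-prediction hints used by the header */
#define likely(x)   (x)
#define unlikely(x) (x)

/* standalone copy of hex2i() for demonstration: returns 0-15 for a hex
 * digit, -1 otherwise */
static inline int hex2i(int c)
{
	if (unlikely((unsigned char)(c -= '0') > 9)) {
		if (likely((unsigned char)(c -= 'A' - '0') > 5 &&
			   (unsigned char)(c -= 'a' - 'A') > 5))
			c = -11;
		c += 10;
	}
	return c;
}
```

The cascaded subtractions shift each candidate range down to 0..5 or 0..9 so that a single unsigned comparison per range suffices.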
+
+/* rounds <i> down to the closest value having max 2 digits */
+unsigned int round_2dig(unsigned int i);
+
+/*
+ * Checks <name> for invalid characters. Valid chars are [A-Za-z0-9_:.-]. If an
+ * invalid character is found, a pointer to it is returned. If everything is
+ * fine, NULL is returned.
+ */
+extern const char *invalid_char(const char *name);
+
+/*
+ * Checks <domainname> for invalid characters. Valid chars are [A-Za-z0-9_.-].
+ * If an invalid character is found, a pointer to it is returned.
+ * If everything is fine, NULL is returned.
+ */
+extern const char *invalid_domainchar(const char *name);
+
+/*
+ * converts <str> to a locally allocated struct sockaddr_storage *, and a
+ * port range consisting of two integers. The low and high end are always set
+ * even if the port is unspecified, in which case (0,0) is returned. The low
+ * port is set in the sockaddr. Thus, it is enough to check the size of the
+ * returned range to know if an array must be allocated or not. The format is
+ * "addr[:[port[-port]]]", where "addr" can be a dotted IPv4 address, an IPv6
+ * address, a host name, or empty or "*" to indicate INADDR_ANY. If an IPv6
+ * address wants to ignore port, it must be terminated by a trailing colon (':').
+ * The IPv6 '::' address is IN6ADDR_ANY, so in order to bind to a given port on
+ * IPv6, use ":::port". NULL is returned if the host part cannot be resolved.
+ * If <pfx> is non-null, it is used as a string prefix before any path-based
+ * address (typically the path to a unix socket). If use_dns is not true,
+ * the function refuses to perform DNS resolution.
+ */
+struct sockaddr_storage *str2sa_range(const char *str, int *low, int *high, char **err, const char *pfx, char **fqdn, int use_dns);
+
+/* converts <str> to a struct in_addr containing a network mask. It can be
+ * passed in dotted form (255.255.255.0) or in CIDR form (24). It returns 1
+ * if the conversion succeeds, otherwise zero.
+ */
+int str2mask(const char *str, struct in_addr *mask);
+
+/* convert <cidr> to struct in_addr <mask>. It returns 1 if the conversion
+ * succeeds, otherwise zero.
+ */
+int cidr2dotted(int cidr, struct in_addr *mask);
+
+/*
+ * converts <str> to two struct in_addr* which must be pre-allocated.
+ * The format is "addr[/mask]", where "addr" cannot be empty, and mask
+ * is optional and either in the dotted or CIDR notation.
+ * Note: "addr" can also be a hostname. Returns 1 if OK, 0 if error.
+ */
+int str2net(const char *str, int resolve, struct in_addr *addr, struct in_addr *mask);
+
+/* str2ip and str2ip2:
+ *
+ * converts <str> to a struct sockaddr_storage* provided by the caller. The
+ * caller must have zeroed <sa> first, and may have set sa->ss_family to force
+ * parse a specific address format. If the ss_family is 0 or AF_UNSPEC, then
+ * the function tries to guess the address family from the syntax. If the
+ * family is forced and the format doesn't match, an error is returned. The
+ * string is assumed to contain only an address, no port. The address can be a
+ * dotted IPv4 address, an IPv6 address, a host name, or empty or "*" to
+ * indicate INADDR_ANY. NULL is returned if the host part cannot be resolved.
+ * The return address will only have the address family and the address set,
+ * all other fields remain zero. The string is not supposed to be modified.
+ * The IPv6 '::' address is IN6ADDR_ANY.
+ *
+ * str2ip2:
+ *
+ * If <resolve> is set, this function tries to resolve DNS names, otherwise it
+ * returns NULL.
+ */
+struct sockaddr_storage *str2ip2(const char *str, struct sockaddr_storage *sa, int resolve);
+static inline struct sockaddr_storage *str2ip(const char *str, struct sockaddr_storage *sa)
+{
+ return str2ip2(str, sa, 1);
+}
+
+/*
+ * converts <str> to two struct in6_addr* which must be pre-allocated.
+ * The format is "addr[/mask]", where "addr" cannot be empty, and mask
+ * is an optional number of bits (128 being the default).
+ * Returns 1 if OK, 0 if error.
+ */
+int str62net(const char *str, struct in6_addr *addr, unsigned char *mask);
+
+/*
+ * Parse IP address found in url.
+ */
+int url2ipv4(const char *addr, struct in_addr *dst);
+
+/*
+ * Resolve destination server from URL. Convert <url> to a sockaddr_storage*.
+ */
+int url2sa(const char *url, int ulen, struct sockaddr_storage *addr, struct split_url *out);
+
+/* Tries to convert a sockaddr_storage address to text form. Upon success, the
+ * address family is returned so that it's easy for the caller to adapt to the
+ * output format. Zero is returned if the address family is not supported. -1
+ * is returned upon error, with errno set. AF_INET, AF_INET6 and AF_UNIX are
+ * supported.
+ */
+int addr_to_str(struct sockaddr_storage *addr, char *str, int size);
+
+/* Tries to convert a sockaddr_storage port to text form. Upon success, the
+ * address family is returned so that it's easy for the caller to adapt to the
+ * output format. Zero is returned if the address family is not supported. -1
+ * is returned upon error, with errno set. AF_INET, AF_INET6 and AF_UNIX are
+ * supported.
+ */
+int port_to_str(struct sockaddr_storage *addr, char *str, int size);
+
+/* will try to encode the string <string> replacing all characters tagged in
+ * <map> with the hexadecimal representation of their ASCII-code (2 digits)
+ * prefixed by <escape>, and will store the result between <start> (included)
+ * and <stop> (excluded), and will always terminate the string with a '\0'
+ * before <stop>. The position of the '\0' is returned if the conversion
+ * completes. If bytes are missing between <start> and <stop>, then the
+ * conversion will be incomplete and truncated. If <stop> <= <start>, the '\0'
+ * cannot even be stored so we return <start> without writing the 0.
+ * The input string must also be zero-terminated.
+ */
+extern const char hextab[];
+char *encode_string(char *start, char *stop,
+ const char escape, const fd_set *map,
+ const char *string);
+
+/*
+ * Same behavior, except that it encodes chunk <chunk> instead of a string.
+ */
+char *encode_chunk(char *start, char *stop,
+ const char escape, const fd_set *map,
+ const struct chunk *chunk);
+
+
+/* Check a string for using it in a CSV output format. If the string contains
+ * one of the following four char <">, <,>, CR or LF, the string is
+ * encapsulated between <"> and the <"> are escaped by a <""> sequence.
+ * <str> is the input string to be escaped. The function assumes that
+ * the input string is null-terminated.
+ *
+ * If <quote> is 0, the result is returned escaped but without double quote.
+ * It is useful when the escaped string is placed between double quotes in the
+ * format string, for example :
+ *
+ *    printf("..., \"%s\", ...\r\n", csv_enc(str, 0, &output));
+ *
+ * If <quote> is 1, the converter adds the quotes only if any character needs
+ * to be escaped. If <quote> is 2, the converter always adds the quotes.
+ *
+ * <output> is a struct chunk used for storing the output string if any
+ * change needs to be made.
+ *
+ * The function returns the converted string on this output. If an error
+ * occurs, the function returns an empty string. This type of output is useful
+ * for using the function directly as a printf() argument.
+ *
+ * If the output buffer is too short to contain the input string, the result
+ * is truncated.
+ */
+const char *csv_enc(const char *str, int quote, struct chunk *output);
+
+/* Decode an URL-encoded string in-place. The resulting string might
+ * be shorter. If some forbidden characters are found, the conversion is
+ * aborted, the string is truncated before the issue and zero is returned,
+ * otherwise the operation returns non-zero indicating success.
+ */
+int url_decode(char *string);
+
+/* This one is 6 times faster than strtoul() on athlon, but does
+ * no check at all.
+ */
+static inline unsigned int __str2ui(const char *s)
+{
+ unsigned int i = 0;
+ while (*s) {
+ i = i * 10 - '0';
+ i += (unsigned char)*s++;
+ }
+ return i;
+}
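The trick in __str2ui() is that the '0' offset is subtracted before the raw character is added, so the digit conversion is folded into the accumulation. A standalone copy (for illustration; on well-formed input it behaves like atoi()) makes this testable:

```c
#include <assert.h>

/* standalone copy of __str2ui(): fast decimal parser, no validity check */
static inline unsigned int __str2ui(const char *s)
{
	unsigned int i = 0;
	while (*s) {
		i = i * 10 - '0';          /* pre-subtract the digit offset */
		i += (unsigned char)*s++;  /* then add the raw character    */
	}
	return i;
}
```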
+
+/* This one is 5 times faster than strtoul() on athlon with checks.
+ * It returns the value of the number composed of all valid digits read.
+ */
+static inline unsigned int __str2uic(const char *s)
+{
+ unsigned int i = 0;
+ unsigned int j;
+ while (1) {
+ j = (*s++) - '0';
+ if (j > 9)
+ break;
+ i *= 10;
+ i += j;
+ }
+ return i;
+}
+
+/* This one is 28 times faster than strtoul() on athlon, but does
+ * no check at all!
+ */
+static inline unsigned int __strl2ui(const char *s, int len)
+{
+ unsigned int i = 0;
+ while (len-- > 0) {
+ i = i * 10 - '0';
+ i += (unsigned char)*s++;
+ }
+ return i;
+}
+
+/* This one is 7 times faster than strtoul() on athlon with checks.
+ * It returns the value of the number composed of all valid digits read.
+ */
+static inline unsigned int __strl2uic(const char *s, int len)
+{
+ unsigned int i = 0;
+ unsigned int j, k;
+
+ while (len-- > 0) {
+ j = (*s++) - '0';
+ k = i * 10;
+ if (j > 9)
+ break;
+ i = k + j;
+ }
+ return i;
+}
+
+/* This function reads an unsigned integer from the string pointed to by <s>
+ * and returns it. The <s> pointer is adjusted to point to the first unread
+ * char. The function automatically stops at <end>.
+ */
+static inline unsigned int __read_uint(const char **s, const char *end)
+{
+ const char *ptr = *s;
+ unsigned int i = 0;
+ unsigned int j, k;
+
+ while (ptr < end) {
+ j = *ptr - '0';
+ k = i * 10;
+ if (j > 9)
+ break;
+ i = k + j;
+ ptr++;
+ }
+ *s = ptr;
+ return i;
+}
+
+unsigned long long int read_uint64(const char **s, const char *end);
+long long int read_int64(const char **s, const char *end);
+
+extern unsigned int str2ui(const char *s);
+extern unsigned int str2uic(const char *s);
+extern unsigned int strl2ui(const char *s, int len);
+extern unsigned int strl2uic(const char *s, int len);
+extern int strl2ic(const char *s, int len);
+extern int strl2irc(const char *s, int len, int *ret);
+extern int strl2llrc(const char *s, int len, long long *ret);
+extern int strl2llrc_dotted(const char *text, int len, long long *ret);
+extern unsigned int read_uint(const char **s, const char *end);
+unsigned int inetaddr_host(const char *text);
+unsigned int inetaddr_host_lim(const char *text, const char *stop);
+unsigned int inetaddr_host_lim_ret(char *text, char *stop, char **ret);
+
+static inline char *cut_crlf(char *s) {
+
+ while (*s != '\r' && *s != '\n') {
+ char *p = s++;
+
+ if (!*p)
+ return p;
+ }
+
+ *s++ = '\0';
+
+ return s;
+}
+
+static inline char *ltrim(char *s, char c) {
+
+ if (c)
+ while (*s == c)
+ s++;
+
+ return s;
+}
+
+static inline char *rtrim(char *s, char c) {
+
+ char *p = s + strlen(s);
+
+ while (p-- > s)
+ if (*p == c)
+ *p = '\0';
+ else
+ break;
+
+ return s;
+}
+
+static inline char *alltrim(char *s, char c) {
+
+ rtrim(s, c);
+
+ return ltrim(s, c);
+}
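As a quick illustration of the three trim helpers above, standalone copies (not the header's own build) show how alltrim() strips a padding character from both ends, modifying the string in place:

```c
#include <assert.h>
#include <string.h>

/* standalone copies of the trim helpers for demonstration */
static inline char *ltrim(char *s, char c)
{
	if (c)
		while (*s == c)
			s++;
	return s;
}

static inline char *rtrim(char *s, char c)
{
	char *p = s + strlen(s);

	while (p-- > s)
		if (*p == c)
			*p = '\0';	/* truncate trailing padding in place */
		else
			break;
	return s;
}

static inline char *alltrim(char *s, char c)
{
	rtrim(s, c);
	return ltrim(s, c);	/* returned pointer may differ from s */
}
```

Note that ltrim() returns a pointer into the original buffer, so the caller must keep the original pointer around for free().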
+
+/* This function converts the time_t value <now> into a broken out struct tm
+ * which must be allocated by the caller. It is highly recommended to use this
+ * function instead of localtime() because that one requires a time_t* which
+ * is not always compatible with tv_sec depending on OS/hardware combinations.
+ */
+static inline void get_localtime(const time_t now, struct tm *tm)
+{
+ localtime_r(&now, tm);
+}
+
+/* This function converts the time_t value <now> into a broken out struct tm
+ * which must be allocated by the caller. It is highly recommended to use this
+ * function instead of gmtime() because that one requires a time_t* which
+ * is not always compatible with tv_sec depending on OS/hardware combinations.
+ */
+static inline void get_gmtime(const time_t now, struct tm *tm)
+{
+ gmtime_r(&now, tm);
+}
+
+/* This function parses a time value optionally followed by a unit suffix among
+ * "d", "h", "m", "s", "ms" or "us". It converts the value into the unit
+ * expected by the caller. The computation does its best to avoid overflows.
+ * The value is returned in <ret> if everything is fine, and a NULL is returned
+ * by the function. In case of error, a pointer to the error is returned and
+ * <ret> is left untouched.
+ */
+extern const char *parse_time_err(const char *text, unsigned *ret, unsigned unit_flags);
+extern const char *parse_size_err(const char *text, unsigned *ret);
+
+/* unit flags to pass to parse_time_err */
+#define TIME_UNIT_US 0x0000
+#define TIME_UNIT_MS 0x0001
+#define TIME_UNIT_S 0x0002
+#define TIME_UNIT_MIN 0x0003
+#define TIME_UNIT_HOUR 0x0004
+#define TIME_UNIT_DAY 0x0005
+#define TIME_UNIT_MASK 0x0007
+
+#define SEC 1
+#define MINUTE (60 * SEC)
+#define HOUR (60 * MINUTE)
+#define DAY (24 * HOUR)
+
+/* Multiply the two 32-bit operands and shift the 64-bit result right 32 bits.
+ * This is used to compute fixed ratios by setting one of the operands to
+ * (2^32*ratio).
+ */
+static inline unsigned int mul32hi(unsigned int a, unsigned int b)
+{
+ return ((unsigned long long)a * b) >> 32;
+}
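For example, setting the second operand to (2^32 * ratio) makes mul32hi() compute a fixed ratio of the first operand. A standalone copy (illustrative only):

```c
#include <assert.h>

/* standalone copy of mul32hi(): multiply two 32-bit values and keep the
 * upper 32 bits of the 64-bit product */
static inline unsigned int mul32hi(unsigned int a, unsigned int b)
{
	return ((unsigned long long)a * b) >> 32;
}
```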
+
+/* gcc does not know when it can safely divide 64 bits by 32 bits. Use this
+ * function when you know for sure that the result fits in 32 bits, because
+ * it is optimal on x86 and on 64bit processors.
+ */
+static inline unsigned int div64_32(unsigned long long o1, unsigned int o2)
+{
+ unsigned int result;
+#ifdef __i386__
+ asm("divl %2"
+ : "=a" (result)
+ : "A"(o1), "rm"(o2));
+#else
+ result = o1 / o2;
+#endif
+ return result;
+}
+
+/* Simple popcountl implementation. It returns the number of ones in a word */
+static inline unsigned int my_popcountl(unsigned long a)
+{
+ unsigned int cnt;
+ for (cnt = 0; a; a >>= 1) {
+ if (a & 1)
+ cnt++;
+ }
+ return cnt;
+}
+
+/* Build a word with the <bits> lower bits set (reverse of my_popcountl) */
+static inline unsigned long nbits(int bits)
+{
+ if (--bits < 0)
+ return 0;
+ else
+ return (2UL << bits) - 1;
+}
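Standalone copies of the two bit helpers (for illustration) show the round trip between a bit count and a mask; note how nbits() avoids undefined behaviour for a full-width shift by pre-decrementing and shifting 2UL instead of 1UL:

```c
#include <assert.h>

/* standalone copy of my_popcountl(): counts the ones in a word */
static inline unsigned int my_popcountl(unsigned long a)
{
	unsigned int cnt;
	for (cnt = 0; a; a >>= 1) {
		if (a & 1)
			cnt++;
	}
	return cnt;
}

/* standalone copy of nbits(): builds a word with the <bits> lower bits set */
static inline unsigned long nbits(int bits)
{
	if (--bits < 0)
		return 0;
	else
		return (2UL << bits) - 1;
}
```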
+
+/*
+ * Parse binary string written in hexadecimal (source) and store the decoded
+ * result into binstr and sets binstrlen to the length of binstr. Memory for
+ * binstr is allocated by the function. In case of error, returns 0 with an
+ * error message in err.
+ */
+int parse_binary(const char *source, char **binstr, int *binstrlen, char **err);
+
+/* copies at most <n> characters from <src> and always terminates with '\0' */
+char *my_strndup(const char *src, int n);
+
+/*
+ * search needle in haystack
+ * returns the pointer if found, returns NULL otherwise
+ */
+const void *my_memmem(const void *, size_t, const void *, size_t);
+
+/* This function returns the first unused key greater than or equal to <key> in
+ * ID tree <root>. Zero is returned if no place is found.
+ */
+unsigned int get_next_id(struct eb_root *root, unsigned int key);
+
+/* This function compares a sample word possibly followed by blanks to another
+ * clean word. The compare is case-insensitive. 1 is returned if both are equal,
+ * otherwise zero. This is intended for use when checking HTTP headers for some
+ * values.
+ */
+int word_match(const char *sample, int slen, const char *word, int wlen);
+
+/* Convert a fixed-length string to an IP address. Returns 0 in case of error,
+ * or the number of chars read in case of success.
+ */
+int buf2ip(const char *buf, size_t len, struct in_addr *dst);
+int buf2ip6(const char *buf, size_t len, struct in6_addr *dst);
+
+/* To be used to quote config arg positions. Returns the string at <ptr>
+ * surrounded by simple quotes if <ptr> is valid and non-empty, or "end of line"
+ * if ptr is NULL or empty. The string is locally allocated.
+ */
+const char *quote_arg(const char *ptr);
+
+/* returns an operator among STD_OP_* for string <str> or < 0 if unknown */
+int get_std_op(const char *str);
+
+/* hash a 32-bit integer to another 32-bit integer */
+extern unsigned int full_hash(unsigned int a);
+static inline unsigned int __full_hash(unsigned int a)
+{
+ /* This function is one of Bob Jenkins' full avalanche hashing
+ * functions, which provides quite a good distribution for small
+ * input variations. The result is quite suited to fit over a 32-bit
+ * space with enough variations so that a randomly picked number falls
+ * equally before any server position.
+ * Check http://burtleburtle.net/bob/hash/integer.html for more info.
+ */
+ a = (a+0x7ed55d16) + (a<<12);
+ a = (a^0xc761c23c) ^ (a>>19);
+ a = (a+0x165667b1) + (a<<5);
+ a = (a+0xd3a2646c) ^ (a<<9);
+ a = (a+0xfd7046c5) + (a<<3);
+ a = (a^0xb55a4f09) ^ (a>>16);
+
+ /* ensure values are better spread all around the tree by multiplying
+ * by a large prime close to 3/4 of the tree.
+ */
+ return a * 3221225473U;
+}
+
+/* sets the address family to AF_UNSPEC so that is_addr() does not match */
+static inline void clear_addr(struct sockaddr_storage *addr)
+{
+ addr->ss_family = AF_UNSPEC;
+}
+
+/* returns non-zero if addr has a valid and non-null IPv4 or IPv6 address,
+ * otherwise zero.
+ */
+static inline int is_inet_addr(const struct sockaddr_storage *addr)
+{
+ int i;
+
+ switch (addr->ss_family) {
+ case AF_INET:
+ return *(int *)&((struct sockaddr_in *)addr)->sin_addr;
+ case AF_INET6:
+ for (i = 0; i < sizeof(struct in6_addr) / sizeof(int); i++)
+ if (((int *)&((struct sockaddr_in6 *)addr)->sin6_addr)[i] != 0)
+ return ((int *)&((struct sockaddr_in6 *)addr)->sin6_addr)[i];
+ }
+ return 0;
+}
+
+/* returns non-zero if addr has a valid and non-null IPv4 or IPv6 address,
+ * or is a unix address, otherwise returns zero.
+ */
+static inline int is_addr(const struct sockaddr_storage *addr)
+{
+ if (addr->ss_family == AF_UNIX)
+ return 1;
+ else
+ return is_inet_addr(addr);
+}
+
+/* returns port in network byte order */
+static inline int get_net_port(struct sockaddr_storage *addr)
+{
+ switch (addr->ss_family) {
+ case AF_INET:
+ return ((struct sockaddr_in *)addr)->sin_port;
+ case AF_INET6:
+ return ((struct sockaddr_in6 *)addr)->sin6_port;
+ }
+ return 0;
+}
+
+/* returns port in host byte order */
+static inline int get_host_port(struct sockaddr_storage *addr)
+{
+ switch (addr->ss_family) {
+ case AF_INET:
+ return ntohs(((struct sockaddr_in *)addr)->sin_port);
+ case AF_INET6:
+ return ntohs(((struct sockaddr_in6 *)addr)->sin6_port);
+ }
+ return 0;
+}
+
+/* returns address len for <addr>'s family, 0 for unknown families */
+static inline int get_addr_len(const struct sockaddr_storage *addr)
+{
+ switch (addr->ss_family) {
+ case AF_INET:
+ return sizeof(struct sockaddr_in);
+ case AF_INET6:
+ return sizeof(struct sockaddr_in6);
+ case AF_UNIX:
+ return sizeof(struct sockaddr_un);
+ }
+ return 0;
+}
+
+/* sets the port, which must already be in network byte order */
+static inline int set_net_port(struct sockaddr_storage *addr, int port)
+{
+	switch (addr->ss_family) {
+	case AF_INET:
+		((struct sockaddr_in *)addr)->sin_port = port;
+		break;
+	case AF_INET6:
+		((struct sockaddr_in6 *)addr)->sin6_port = port;
+		break;
+	}
+	return 0;
+}
+
+/* sets the port, converting <port> from host to network byte order */
+static inline int set_host_port(struct sockaddr_storage *addr, int port)
+{
+	switch (addr->ss_family) {
+	case AF_INET:
+		((struct sockaddr_in *)addr)->sin_port = htons(port);
+		break;
+	case AF_INET6:
+		((struct sockaddr_in6 *)addr)->sin6_port = htons(port);
+		break;
+	}
+	return 0;
+}
+
+/* Return true if IPv4 address is part of the network */
+extern int in_net_ipv4(struct in_addr *addr, struct in_addr *mask, struct in_addr *net);
+
+/* Return true if IPv6 address is part of the network */
+extern int in_net_ipv6(struct in6_addr *addr, struct in6_addr *mask, struct in6_addr *net);
+
+/* Map IPv4 address onto IPv6 address, as specified in RFC 3513. */
+extern void v4tov6(struct in6_addr *sin6_addr, struct in_addr *sin_addr);
+
+/* Map IPv6 address onto IPv4 address, as specified in RFC 3513.
+ * Return true if conversion is possible and false otherwise.
+ */
+extern int v6tov4(struct in_addr *sin_addr, struct in6_addr *sin6_addr);
+
+char *human_time(int t, short hz_div);
+
+extern const char *monthname[];
+
+/* numeric timezone (that is, the hour and minute offset from UTC) */
+extern char localtimezone[6];
+
+/* date2str_log: write a date in the format :
+ * sprintf(str, "%02d/%s/%04d:%02d:%02d:%02d.%03d",
+ * tm.tm_mday, monthname[tm.tm_mon], tm.tm_year+1900,
+ * tm.tm_hour, tm.tm_min, tm.tm_sec, (int)date.tv_usec/1000);
+ *
+ * without using sprintf. Returns a pointer to the last char written ('\0') or
+ * NULL if there isn't enough space.
+ */
+char *date2str_log(char *dest, struct tm *tm, struct timeval *date, size_t size);
+
+/* gmt2str_log: write a date in the format :
+ * "%02d/%s/%04d:%02d:%02d:%02d +0000" without using snprintf.
+ * Returns a pointer to the last char written ('\0') or
+ * NULL if there isn't enough space.
+ */
+char *gmt2str_log(char *dst, struct tm *tm, size_t size);
+
+/* localdate2str_log: write a date in the format :
+ * "%02d/%s/%04d:%02d:%02d:%02d +0000(local timezone)" without using snprintf.
+ * Returns a pointer to the last char written ('\0') or
+ * NULL if there isn't enough space.
+ */
+char *localdate2str_log(char *dst, struct tm *tm, size_t size);
+
+/* Dynamically allocates a string of the proper length to hold the formatted
+ * output. NULL is returned on error. The caller is responsible for freeing the
+ * memory area using free(). The resulting string is returned in <out> if the
+ * pointer is not NULL. A previous version of <out> might be used to build the
+ * new string, and it will be freed before returning if it is not NULL, which
+ * makes it possible to build complex strings from iterative calls without
+ * having to care about freeing intermediate values, as in the example below :
+ *
+ *    memprintf(&err, "invalid argument: '%s'", arg);
+ *    ...
+ *    memprintf(&err, "parser said : <%s>\n", err);
+ *    ...
+ *    free(err);
+ *
+ * This means that <err> must be initialized to NULL before first invocation.
+ * The return value also holds the allocated string, which eases error checking
+ * and immediate consumption. If the output pointer is not used, NULL must be
+ * passed instead and it will be ignored. The returned message will then also
+ * be NULL so that the caller does not have to bother with freeing anything.
+ *
+ * It is also convenient to use it without any free except the last one :
+ *    err = NULL;
+ *    if (!fct1(&err)) report(err);
+ *    if (!fct2(&err)) report(err);
+ *    if (!fct3(&err)) report(err);
+ *    free(err);
+ */
+char *memprintf(char **out, const char *format, ...)
+ __attribute__ ((format(printf, 2, 3)));
+
+/* Used to add <level> spaces before each line of <out>, unless there is only one line.
+ * The input argument is automatically freed and reassigned. The result will have to be
+ * freed by the caller.
+ * Example of use :
+ * parse(cmd, &err); (callee: memprintf(&err, ...))
+ *    fprintf(stderr, "Parser said: %s\n", indent_msg(&err, 2));
+ * free(err);
+ */
+char *indent_msg(char **out, int level);
+
+/* Convert occurrences of environment variables in the input string to their
+ * corresponding value. A variable is identified as a series of alphanumeric
+ * characters or underscores following a '$' sign. The <in> string must be
+ * free()able. A NULL input returns NULL. The resulting string might be reallocated if
+ * some expansion is made.
+ */
+char *env_expand(char *in);
+
+/* debugging macro to emit messages using write() on fd #-1 so that strace sees
+ * them.
+ */
+#define fddebug(msg...) do { char *_m = NULL; memprintf(&_m, ##msg); if (_m) write(-1, _m, strlen(_m)); free(_m); } while (0)
+
+/* used from everywhere just to drain results we don't want to read and which
+ * recent versions of gcc increasingly and annoyingly complain about.
+ */
+extern int shut_your_big_mouth_gcc_int;
+
+/* used from everywhere just to drain results we don't want to read and which
+ * recent versions of gcc increasingly and annoyingly complain about.
+ */
+static inline void shut_your_big_mouth_gcc(int r)
+{
+ shut_your_big_mouth_gcc_int = r;
+}
+
+/* same as strstr() but case-insensitive and bounded by the given lengths */
+const char *strnistr(const char *str1, int len_str1, const char *str2, int len_str2);
+
+
+/************************* Composite address manipulation *********************
+ * Composite addresses are simply unsigned long data in which the higher bits
+ * represent a pointer, and the two lower bits are flags. There are several
+ * places where we just want to associate one or two flags to a pointer (eg,
+ * to type it), and these functions permit this. The pointer is necessarily a
+ * 32-bit aligned pointer, as its two lower bits will be cleared and replaced
+ * with the flags.
+ *****************************************************************************/
+
+/* Masks the two lower bits of a composite address and converts it to a
+ * pointer. This is used to mix some bits with some aligned pointers to
+ * structs and to retrieve the original (32-bit aligned) pointer.
+ */
+static inline void *caddr_to_ptr(unsigned long caddr)
+{
+ return (void *)(caddr & ~3UL);
+}
+
+/* Only retrieves the two lower bits of a composite address. This is used to mix
+ * some bits with some aligned pointers to structs and to retrieve the original
+ * data (2 bits).
+ */
+static inline unsigned int caddr_to_data(unsigned long caddr)
+{
+ return (caddr & 3UL);
+}
+
+/* Combines the aligned pointer whose 2 lower bits will be masked with the bits
+ * from <data> to form a composite address. This is used to mix some bits with
+ * some aligned pointers to structs and to retrieve the original (32-bit aligned)
+ * pointer.
+ */
+static inline unsigned long caddr_from_ptr(void *ptr, unsigned int data)
+{
+ return (((unsigned long)ptr) & ~3UL) + (data & 3);
+}
+
+/* sets the 2 bits of <data> in the <caddr> composite address */
+static inline unsigned long caddr_set_flags(unsigned long caddr, unsigned int data)
+{
+ return caddr | (data & 3);
+}
+
+/* clears the 2 bits of <data> in the <caddr> composite address */
+static inline unsigned long caddr_clr_flags(unsigned long caddr, unsigned int data)
+{
+ return caddr & ~(unsigned long)(data & 3);
+}
+
+/* UTF-8 decoder status */
+#define UTF8_CODE_OK 0x00
+#define UTF8_CODE_OVERLONG 0x10
+#define UTF8_CODE_INVRANGE 0x20
+#define UTF8_CODE_BADSEQ 0x40
+
+unsigned char utf8_next(const char *s, int len, unsigned int *c);
+
+static inline unsigned char utf8_return_code(unsigned int code)
+{
+ return code & 0xf0;
+}
+
+static inline unsigned char utf8_return_length(unsigned char code)
+{
+ return code & 0x0f;
+}
+
+/* Turns 64-bit value <a> from host byte order to network byte order.
+ * The principle consists in letting the compiler detect we're playing
+ * with a union and simplify most or all operations. The asm-optimized
+ * htonl() version involving bswap (x86) / rev (arm) / other is a single
+ * operation on little endian, or a NOP on big-endian. In both cases,
+ * this lets the compiler "see" that we're rebuilding a 64-bit word from
+ * two 32-bit quantities that fit into a 32-bit register. In big endian,
+ * the whole code is optimized out. In little endian, with a decent compiler,
+ * a few bswap and 2 shifts are left, which is the minimum acceptable.
+ */
+#ifndef htonll
+static inline unsigned long long htonll(unsigned long long a)
+{
+ union {
+ struct {
+ unsigned int w1;
+ unsigned int w2;
+ } by32;
+ unsigned long long by64;
+ } w = { .by64 = a };
+ return ((unsigned long long)htonl(w.by32.w1) << 32) | htonl(w.by32.w2);
+}
+#endif
+
+/* Turns 64-bit value <a> from network byte order to host byte order. */
+#ifndef ntohll
+static inline unsigned long long ntohll(unsigned long long a)
+{
+ return htonll(a);
+}
+#endif
+
+/* returns a 64-bit timestamp with the finest resolution available. The
+ * unit is intentionally not specified. It's mostly used to compare dates.
+ */
+#if defined(__i386__) || defined(__x86_64__)
+static inline unsigned long long rdtsc()
+{
+ unsigned int a, d;
+ asm volatile("rdtsc" : "=a" (a), "=d" (d));
+ return a + ((unsigned long long)d << 32);
+}
+#else
+static inline unsigned long long rdtsc()
+{
+ struct timeval tv;
+ gettimeofday(&tv, NULL);
+ return tv.tv_sec * 1000000ULL + tv.tv_usec;
+}
+#endif
+
+#endif /* _COMMON_STANDARD_H */
--- /dev/null
+/*
+ * include/common/syscall.h
+ * Redefinition of some missing OS-specific system calls.
+ *
+ * Copyright 2000-2012 Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ *
+ */
+
+
+#ifndef _COMMON_SYSCALL_H
+#define _COMMON_SYSCALL_H
+
+#ifdef __linux__
+
+#include <errno.h>
+#include <unistd.h>
+#include <sys/syscall.h>
+
+/* On Linux, _syscall macros were removed after 2.6.18, but we still prefer
+ * them because syscall() is buggy on old libcs. If _syscall is not defined,
+ * we're on a recent kernel with a recent libc and we should be safe, so we
+ * emulate it using syscall().
+ */
+#ifndef _syscall1
+#define _syscall1(tr, nr, t1, n1) \
+ tr nr(t1 n1) { \
+ return syscall(__NR_##nr, n1); \
+ }
+#endif
+
+#ifndef _syscall2
+#define _syscall2(tr, nr, t1, n1, t2, n2) \
+ tr nr(t1 n1, t2 n2) { \
+ return syscall(__NR_##nr, n1, n2); \
+ }
+#endif
+
+#ifndef _syscall3
+#define _syscall3(tr, nr, t1, n1, t2, n2, t3, n3) \
+ tr nr(t1 n1, t2 n2, t3 n3) { \
+ return syscall(__NR_##nr, n1, n2, n3); \
+ }
+#endif
+
+#ifndef _syscall4
+#define _syscall4(tr, nr, t1, n1, t2, n2, t3, n3, t4, n4) \
+ tr nr(t1 n1, t2 n2, t3 n3, t4 n4) { \
+ return syscall(__NR_##nr, n1, n2, n3, n4); \
+ }
+#endif
+
+#ifndef _syscall5
+#define _syscall5(tr, nr, t1, n1, t2, n2, t3, n3, t4, n4, t5, n5) \
+ tr nr(t1 n1, t2 n2, t3 n3, t4 n4, t5 n5) { \
+ return syscall(__NR_##nr, n1, n2, n3, n4, n5); \
+ }
+#endif
+
+#ifndef _syscall6
+#define _syscall6(tr, nr, t1, n1, t2, n2, t3, n3, t4, n4, t5, n5, t6, n6) \
+ tr nr(t1 n1, t2 n2, t3 n3, t4 n4, t5 n5, t6 n6) { \
+ return syscall(__NR_##nr, n1, n2, n3, n4, n5, n6); \
+ }
+#endif
+
+
+/* Define some syscall numbers that are sometimes needed */
+
+/* Epoll was provided as a patch for 2.4 for a long time and was not always
+ * exported as a known syscall number by libc.
+ */
+#if !defined(__NR_epoll_ctl)
+#if defined(__powerpc__) || defined(__powerpc64__)
+#define __NR_epoll_create 236
+#define __NR_epoll_ctl 237
+#define __NR_epoll_wait 238
+#elif defined(__sparc__) || defined(__sparc64__)
+#define __NR_epoll_create 193
+#define __NR_epoll_ctl 194
+#define __NR_epoll_wait 195
+#elif defined(__x86_64__)
+#define __NR_epoll_create 213
+#define __NR_epoll_ctl 233
+#define __NR_epoll_wait 232
+#elif defined(__alpha__)
+#define __NR_epoll_create 407
+#define __NR_epoll_ctl 408
+#define __NR_epoll_wait 409
+#elif defined (__i386__)
+#define __NR_epoll_create 254
+#define __NR_epoll_ctl 255
+#define __NR_epoll_wait 256
+#elif defined (__s390__) || defined(__s390x__)
+#define __NR_epoll_create 249
+#define __NR_epoll_ctl 250
+#define __NR_epoll_wait 251
+#endif /* $arch */
+#endif /* __NR_epoll_ctl */
+
+/* splice is even more recent than epoll. It appeared around 2.6.18 but was
+ * not in libc for a while.
+ */
+#ifndef __NR_splice
+#if defined(__powerpc__) || defined(__powerpc64__)
+#define __NR_splice 283
+#elif defined(__sparc__) || defined(__sparc64__)
+#define __NR_splice 232
+#elif defined(__x86_64__)
+#define __NR_splice 275
+#elif defined(__alpha__)
+#define __NR_splice 468
+#elif defined (__i386__)
+#define __NR_splice 313
+#elif defined(__s390__) || defined(__s390x__)
+#define __NR_splice 306
+#endif /* $arch */
+#endif /* __NR_splice */
+
+/* accept4() appeared in Linux 2.6.28, but it might not be in all libcs. Some
+ * archs have it as a native syscall, while others use socketcall() instead.
+ */
+#ifndef __NR_accept4
+#if defined(__x86_64__)
+#define __NR_accept4 288
+#elif defined(__sparc__) || defined(__sparc64__)
+#define __NR_accept4 323
+#elif defined(__arm__) || defined(__thumb__)
+#define __NR_accept4 (__NR_SYSCALL_BASE+366)
+#else
+#define ACCEPT4_USE_SOCKETCALL 1
+#ifndef SYS_ACCEPT4
+#define SYS_ACCEPT4 18
+#endif /* SYS_ACCEPT4 */
+#endif /* $arch */
+#endif /* __NR_accept4 */
+
+#endif /* __linux__ */
+#endif /* _COMMON_SYSCALL_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ include/common/template.h
+ This file serves as a template for future include files.
+
+ Copyright (C) 2000-2006 Willy Tarreau - w@1wt.eu
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation, version 2.1
+ exclusively.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+*/
+
+#ifndef _COMMON_TEMPLATE_H
+#define _COMMON_TEMPLATE_H
+
+#include <common/config.h>
+
+#endif /* _COMMON_TEMPLATE_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ include/common/ticks.h
+ Functions and macros for manipulation of expiration timers
+
+ Copyright (C) 2000-2009 Willy Tarreau - w@1wt.eu
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation, version 2.1
+ exclusively.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+*/
+
+/*
+ * Using a mix of milliseconds and timeval for internal timers is expensive and
+ * overkill, because we don't need such precision to compute timeouts.
+ * So we're converting them to "ticks".
+ *
+ * A tick is a representation of a date relative to another one, and is
+ * measured in milliseconds. The natural usage is to represent an absolute date
+ * relative to the current date. Since it is not practical to update all values
+ * each time the current date changes, instead we use the absolute date rounded
+ * down to fit in a tick. We then have to compare a tick to the current date to
+ * know whether it is in the future or in the past. If a tick is below the
+ * current date, it is in the past. If it is above, it is in the future. The
+ * values will wrap so we can't compare that easily, instead we check the sign
+ * of the difference between a tick and the current date.
+ *
+ * Proceeding like this allows us to manipulate dates that are stored in
+ * scalars with enough precision and range. For this reason, we store ticks in
+ * 32-bit integers. This is enough to handle dates that are between 24.85 days
+ * in the past and as much in the future.
+ *
+ * We must both support absolute dates (well in fact, dates relative to now+/-
+ * 24 days), and intervals (for timeouts). Both types need an "eternity" magic
+ * value. For optimal code generation, we'll use zero as the magic value
+ * indicating that an expiration timer or a timeout is not set. We have to
+ * check that we don't return this value when adding timeouts to <now>. If a
+ * computation returns 0, we must increase it to 1 (which will push the timeout
+ * 1 ms further). For this reason, timeouts must not be added by hand but via
+ * the dedicated tick_add() function.
+ */
+
+#ifndef _COMMON_TICKS_H
+#define _COMMON_TICKS_H
+
+#include <common/config.h>
+#include <common/standard.h>
+
+#define TICK_ETERNITY 0
+
+/* right now, ticks are milliseconds. Both negative ms and negative ticks
+ * indicate eternity.
+ */
+#define MS_TO_TICKS(ms) (ms)
+#define TICKS_TO_MS(tk) (tk)
+
+/* return 1 if tick is set, otherwise 0 */
+static inline int tick_isset(int expire)
+{
+ return expire != 0;
+}
+
+/* Add <timeout> to <now>, and return the resulting expiration date.
+ * <timeout> will not be checked for null values.
+ */
+static inline int tick_add(int now, int timeout)
+{
+ now += timeout;
+ if (unlikely(!now))
+ now++; /* unfortunate value */
+ return now;
+}
+
+/* add <timeout> to <now> if it is set, otherwise set it to eternity.
+ * Return the resulting expiration date.
+ */
+static inline int tick_add_ifset(int now, int timeout)
+{
+ if (!timeout)
+ return TICK_ETERNITY;
+ return tick_add(now, timeout);
+}
+
+/* return 1 if timer <t1> is before <t2>, neither of which may be infinite. */
+static inline int tick_is_lt(int t1, int t2)
+{
+ return (t1 - t2) < 0;
+}
+
+/* return 1 if timer <t1> is before or equal to <t2>, neither of which may be infinite. */
+static inline int tick_is_le(int t1, int t2)
+{
+ return (t1 - t2) <= 0;
+}
+
+/* return 1 if timer <timer> is expired at date <now>, otherwise zero */
+static inline int tick_is_expired(int timer, int now)
+{
+ if (unlikely(!tick_isset(timer)))
+ return 0;
+ if (unlikely((timer - now) <= 0))
+ return 1;
+ return 0;
+}
+
+/* return the first one of the two timers, both of which may be infinite */
+static inline int tick_first(int t1, int t2)
+{
+ if (!tick_isset(t1))
+ return t2;
+ if (!tick_isset(t2))
+ return t1;
+ if ((t1 - t2) <= 0)
+ return t1;
+ else
+ return t2;
+}
+
+/* return the first one of the two timers, where only the first one may be infinite */
+static inline int tick_first_2nz(int t1, int t2)
+{
+ if (!tick_isset(t1))
+ return t2;
+ if ((t1 - t2) <= 0)
+ return t1;
+ else
+ return t2;
+}
+
+/* return the number of ticks remaining from <now> to <exp>, or zero if expired */
+static inline int tick_remain(int now, int exp)
+{
+ if (tick_is_expired(exp, now))
+ return 0;
+ return exp - now;
+}
+
+#endif /* _COMMON_TICKS_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/common/time.h
+ * Time calculation functions and macros.
+ *
+ * Copyright (C) 2000-2011 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _COMMON_TIME_H
+#define _COMMON_TIME_H
+
+#include <stdlib.h>
+#include <sys/time.h>
+#include <common/config.h>
+#include <common/standard.h>
+
+/* eternity when expressed in timeval */
+#ifndef TV_ETERNITY
+#define TV_ETERNITY (~0UL)
+#endif
+
+/* eternity when expressed in ms */
+#ifndef TV_ETERNITY_MS
+#define TV_ETERNITY_MS (-1)
+#endif
+
+#define TIME_ETERNITY (TV_ETERNITY_MS)
+
+/* we want to be able to detect time jumps. Fix the maximum wait time to a low
+ * value so that we know the time has changed if we wait longer.
+ */
+#define MAX_DELAY_MS 1000
+
+
+/* returns the lowest delay amongst <old> and <new>, and respects TIME_ETERNITY */
+#define MINTIME(old, new) (((new)<0)?(old):(((old)<0||(new)<(old))?(new):(old)))
+#define SETNOW(a) (*a=now)
+
+extern unsigned int curr_sec_ms; /* millisecond of current second (0..999) */
+extern unsigned int ms_left_scaled; /* milliseconds left for current second (0..2^32-1) */
+extern unsigned int curr_sec_ms_scaled; /* millisecond of current second (0..2^32-1) */
+extern unsigned int now_ms; /* internal date in milliseconds (may wrap) */
+extern unsigned int samp_time; /* total elapsed time over current sample */
+extern unsigned int idle_time; /* total idle time over current sample */
+extern unsigned int idle_pct; /* idle to total ratio over last sample (percent) */
+extern struct timeval now; /* internal date is a monotonic function of real clock */
+extern struct timeval date; /* the real current date */
+extern struct timeval start_date; /* the process's start date */
+extern struct timeval before_poll; /* system date before calling poll() */
+extern struct timeval after_poll; /* system date after leaving poll() */
+
+
+/**** exported functions *************************************************/
+/*
+ * adds <ms> ms to <from>, sets the result in <tv> and returns a pointer to <tv>
+ */
+REGPRM3 struct timeval *tv_ms_add(struct timeval *tv, const struct timeval *from, int ms);
+
+/*
+ * compares <tv1> and <tv2> modulo 1ms: returns 0 if equal, -1 if tv1 < tv2, 1 if tv1 > tv2
+ * Must not be used when either argument is eternity. Use tv_ms_cmp2() for that.
+ */
+REGPRM2 int tv_ms_cmp(const struct timeval *tv1, const struct timeval *tv2);
+
+/*
+ * compares <tv1> and <tv2> modulo 1 ms: returns 0 if equal, -1 if tv1 < tv2, 1 if tv1 > tv2,
+ * assuming that TV_ETERNITY is greater than everything.
+ */
+REGPRM2 int tv_ms_cmp2(const struct timeval *tv1, const struct timeval *tv2);
+
+/**** general purpose functions and macros *******************************/
+
+
+/* tv_now: sets <tv> to the current time */
+REGPRM1 static inline struct timeval *tv_now(struct timeval *tv)
+{
+ gettimeofday(tv, NULL);
+ return tv;
+}
+
+/* tv_update_date: sets <date> to system time, and sets <now> to something as
+ * close as possible to real time, following a monotonic function. The main
+ * principle consists in detecting backwards and forwards time jumps and adjust
+ * an offset to correct them. This function should be called only once after
+ * each poll. The poll's timeout should be passed in <max_wait>, and the return
+ * value in <interrupted> (a non-zero value means that we have not expired the
+ * timeout).
+ */
+REGPRM2 void tv_update_date(int max_wait, int interrupted);
+
+/*
+ * Sets a struct timeval to its highest value so that it can never be reached.
+ * Note that checking tv_usec alone is enough to detect it, since a tv_usec
+ * greater than 999999 is normally not possible.
+ */
+REGPRM1 static inline struct timeval *tv_eternity(struct timeval *tv)
+{
+ tv->tv_sec = (typeof(tv->tv_sec))TV_ETERNITY;
+ tv->tv_usec = (typeof(tv->tv_usec))TV_ETERNITY;
+ return tv;
+}
+
+/*
+ * sets a struct timeval to 0
+ *
+ */
+REGPRM1 static inline struct timeval *tv_zero(struct timeval *tv) {
+ tv->tv_sec = tv->tv_usec = 0;
+ return tv;
+}
+
+/*
+ * returns non-zero if tv is [eternity], otherwise 0.
+ */
+#define tv_iseternity(tv) ((tv)->tv_usec == (typeof((tv)->tv_usec))TV_ETERNITY)
+
+/*
+ * returns 0 if tv is [eternity], otherwise non-zero.
+ */
+#define tv_isset(tv) ((tv)->tv_usec != (typeof((tv)->tv_usec))TV_ETERNITY)
+
+/*
+ * returns non-zero if tv is [0], otherwise 0.
+ */
+#define tv_iszero(tv) (((tv)->tv_sec | (tv)->tv_usec) == 0)
+
+/*
+ * Converts a struct timeval to a number of milliseconds.
+ */
+REGPRM1 static inline unsigned long __tv_to_ms(const struct timeval *tv)
+{
+ unsigned long ret;
+
+ ret = tv->tv_sec * 1000;
+ ret += tv->tv_usec / 1000;
+ return ret;
+}
+
+/*
+ * Converts a number of milliseconds to a struct timeval.
+ */
+REGPRM2 static inline struct timeval * __tv_from_ms(struct timeval *tv, unsigned long ms)
+{
+ tv->tv_sec = ms / 1000;
+ tv->tv_usec = (ms % 1000) * 1000;
+ return tv;
+}
+
+/* Return a number of 1024Hz ticks between 0 and 1023 for an input number of
+ * usecs between 0 and 999999. This function avoids any divide, and the
+ * multiply by a constant is reduced to shifts and adds by the compiler on
+ * CPUs which don't have a fast multiply. Its avg error rate is 305 ppm,
+ * which is almost twice as low as a direct usec to ms conversion. This version
+ * also has the benefit of returning 1024 for 1000000.
+ */
+ */
+REGPRM1 static inline unsigned int __usec_to_1024th(unsigned int usec)
+{
+ return (usec * 1073 + 742516) >> 20;
+}
+
+
+/**** comparison functions and macros ***********************************/
+
+
+/* tv_cmp: compares <tv1> and <tv2> : returns 0 if equal, -1 if tv1 < tv2, 1 if tv1 > tv2. */
+REGPRM2 static inline int __tv_cmp(const struct timeval *tv1, const struct timeval *tv2)
+{
+ if ((unsigned)tv1->tv_sec < (unsigned)tv2->tv_sec)
+ return -1;
+ else if ((unsigned)tv1->tv_sec > (unsigned)tv2->tv_sec)
+ return 1;
+ else if ((unsigned)tv1->tv_usec < (unsigned)tv2->tv_usec)
+ return -1;
+ else if ((unsigned)tv1->tv_usec > (unsigned)tv2->tv_usec)
+ return 1;
+ else
+ return 0;
+}
+
+/* tv_iseq: compares <tv1> and <tv2> : returns 1 if tv1 == tv2, otherwise 0 */
+#define tv_iseq __tv_iseq
+REGPRM2 static inline int __tv_iseq(const struct timeval *tv1, const struct timeval *tv2)
+{
+ return ((unsigned)tv1->tv_sec == (unsigned)tv2->tv_sec) &&
+ ((unsigned)tv1->tv_usec == (unsigned)tv2->tv_usec);
+}
+
+/* tv_isgt: compares <tv1> and <tv2> : returns 1 if tv1 > tv2, otherwise 0 */
+#define tv_isgt _tv_isgt
+REGPRM2 int _tv_isgt(const struct timeval *tv1, const struct timeval *tv2);
+REGPRM2 static inline int __tv_isgt(const struct timeval *tv1, const struct timeval *tv2)
+{
+ return
+ ((unsigned)tv1->tv_sec == (unsigned)tv2->tv_sec) ?
+ ((unsigned)tv1->tv_usec > (unsigned)tv2->tv_usec) :
+ ((unsigned)tv1->tv_sec > (unsigned)tv2->tv_sec);
+}
+
+/* tv_isge: compares <tv1> and <tv2> : returns 1 if tv1 >= tv2, otherwise 0 */
+#define tv_isge __tv_isge
+REGPRM2 static inline int __tv_isge(const struct timeval *tv1, const struct timeval *tv2)
+{
+ return
+ ((unsigned)tv1->tv_sec == (unsigned)tv2->tv_sec) ?
+ ((unsigned)tv1->tv_usec >= (unsigned)tv2->tv_usec) :
+ ((unsigned)tv1->tv_sec > (unsigned)tv2->tv_sec);
+}
+
+/* tv_islt: compares <tv1> and <tv2> : returns 1 if tv1 < tv2, otherwise 0 */
+#define tv_islt __tv_islt
+REGPRM2 static inline int __tv_islt(const struct timeval *tv1, const struct timeval *tv2)
+{
+ return
+ ((unsigned)tv1->tv_sec == (unsigned)tv2->tv_sec) ?
+ ((unsigned)tv1->tv_usec < (unsigned)tv2->tv_usec) :
+ ((unsigned)tv1->tv_sec < (unsigned)tv2->tv_sec);
+}
+
+/* tv_isle: compares <tv1> and <tv2> : returns 1 if tv1 <= tv2, otherwise 0 */
+#define tv_isle _tv_isle
+REGPRM2 int _tv_isle(const struct timeval *tv1, const struct timeval *tv2);
+REGPRM2 static inline int __tv_isle(const struct timeval *tv1, const struct timeval *tv2)
+{
+ return
+ ((unsigned)tv1->tv_sec == (unsigned)tv2->tv_sec) ?
+ ((unsigned)tv1->tv_usec <= (unsigned)tv2->tv_usec) :
+ ((unsigned)tv1->tv_sec < (unsigned)tv2->tv_sec);
+}
+
+/*
+ * compares <tv1> and <tv2> modulo 1ms: returns 0 if equal, -1 if tv1 < tv2, 1 if tv1 > tv2
+ * Must not be used when either argument is eternity. Use tv_ms_cmp2() for that.
+ */
+#define tv_ms_cmp _tv_ms_cmp
+REGPRM2 int _tv_ms_cmp(const struct timeval *tv1, const struct timeval *tv2);
+REGPRM2 static inline int __tv_ms_cmp(const struct timeval *tv1, const struct timeval *tv2)
+{
+ if ((unsigned)tv1->tv_sec == (unsigned)tv2->tv_sec) {
+ if ((unsigned)tv2->tv_usec >= (unsigned)tv1->tv_usec + 1000)
+ return -1;
+ else if ((unsigned)tv1->tv_usec >= (unsigned)tv2->tv_usec + 1000)
+ return 1;
+ else
+ return 0;
+ }
+ else if (((unsigned)tv2->tv_sec > (unsigned)tv1->tv_sec + 1) ||
+ (((unsigned)tv2->tv_sec == (unsigned)tv1->tv_sec + 1) &&
+ ((unsigned)tv2->tv_usec + 1000000 >= (unsigned)tv1->tv_usec + 1000)))
+ return -1;
+ else if (((unsigned)tv1->tv_sec > (unsigned)tv2->tv_sec + 1) ||
+ (((unsigned)tv1->tv_sec == (unsigned)tv2->tv_sec + 1) &&
+ ((unsigned)tv1->tv_usec + 1000000 >= (unsigned)tv2->tv_usec + 1000)))
+ return 1;
+ else
+ return 0;
+}
+
+/*
+ * compares <tv1> and <tv2> modulo 1 ms: returns 0 if equal, -1 if tv1 < tv2, 1 if tv1 > tv2,
+ * assuming that TV_ETERNITY is greater than everything.
+ */
+#define tv_ms_cmp2 _tv_ms_cmp2
+REGPRM2 int _tv_ms_cmp2(const struct timeval *tv1, const struct timeval *tv2);
+REGPRM2 static inline int __tv_ms_cmp2(const struct timeval *tv1, const struct timeval *tv2)
+{
+ if (tv_iseternity(tv1))
+ if (tv_iseternity(tv2))
+ return 0; /* same */
+ else
+ return 1; /* tv1 later than tv2 */
+ else if (tv_iseternity(tv2))
+ return -1; /* tv2 later than tv1 */
+ return tv_ms_cmp(tv1, tv2);
+}
+
+/*
+ * compares <tv1> and <tv2> modulo 1 ms: returns 1 if tv1 <= tv2, 0 if tv1 > tv2,
+ * assuming that TV_ETERNITY is greater than everything. Returns 0 if tv1 is
+ * TV_ETERNITY, and always assumes that tv2 != TV_ETERNITY. Designed to replace
+ * occurrences of (tv_ms_cmp2(tv,now) <= 0).
+ */
+#define tv_ms_le2 _tv_ms_le2
+REGPRM2 int _tv_ms_le2(const struct timeval *tv1, const struct timeval *tv2);
+REGPRM2 static inline int __tv_ms_le2(const struct timeval *tv1, const struct timeval *tv2)
+{
+ if (likely((unsigned)tv1->tv_sec > (unsigned)tv2->tv_sec + 1))
+ return 0;
+
+ if (likely((unsigned)tv1->tv_sec < (unsigned)tv2->tv_sec))
+ return 1;
+
+ if (likely((unsigned)tv1->tv_sec == (unsigned)tv2->tv_sec)) {
+ if ((unsigned)tv2->tv_usec >= (unsigned)tv1->tv_usec + 1000)
+ return 1;
+ else
+ return 0;
+ }
+
+ if (unlikely(((unsigned)tv1->tv_sec == (unsigned)tv2->tv_sec + 1) &&
+ ((unsigned)tv1->tv_usec + 1000000 >= (unsigned)tv2->tv_usec + 1000)))
+ return 0;
+ else
+ return 1;
+}
+
+
+/**** operators **********************************************************/
+
+
+/*
+ * Returns the time in ms elapsed between tv1 and tv2, assuming that tv1<=tv2.
+ * Must not be used when either argument is eternity.
+ */
+#define tv_ms_elapsed __tv_ms_elapsed
+REGPRM2 unsigned long _tv_ms_elapsed(const struct timeval *tv1, const struct timeval *tv2);
+REGPRM2 static inline unsigned long __tv_ms_elapsed(const struct timeval *tv1, const struct timeval *tv2)
+{
+ unsigned long ret;
+
+ ret = ((signed long)(tv2->tv_sec - tv1->tv_sec)) * 1000;
+ ret += ((signed long)(tv2->tv_usec - tv1->tv_usec)) / 1000;
+ return ret;
+}
+
+/*
+ * returns the remaining time between tv1=now and event=tv2.
+ * if tv2 has already passed, 0 is returned.
+ * Must not be used when either argument is eternity.
+ */
+
+#define tv_ms_remain __tv_ms_remain
+REGPRM2 unsigned long _tv_ms_remain(const struct timeval *tv1, const struct timeval *tv2);
+REGPRM2 static inline unsigned long __tv_ms_remain(const struct timeval *tv1, const struct timeval *tv2)
+{
+ if (tv_ms_cmp(tv1, tv2) >= 0)
+ return 0; /* event elapsed */
+
+ return __tv_ms_elapsed(tv1, tv2);
+}
+
+/*
+ * returns the remaining time between tv1=now and event=tv2.
+ * if tv2 has already passed, 0 is returned.
+ * Returns TIME_ETERNITY if tv2 is eternity.
+ */
+#define tv_ms_remain2 _tv_ms_remain2
+REGPRM2 unsigned long _tv_ms_remain2(const struct timeval *tv1, const struct timeval *tv2);
+REGPRM2 static inline unsigned long __tv_ms_remain2(const struct timeval *tv1, const struct timeval *tv2)
+{
+ if (tv_iseternity(tv2))
+ return TIME_ETERNITY;
+
+ return tv_ms_remain(tv1, tv2);
+}
+
+/*
+ * adds <inc> to <from>, sets the result in <tv> and returns a pointer to <tv>
+ */
+#define tv_add _tv_add
+REGPRM3 struct timeval *_tv_add(struct timeval *tv, const struct timeval *from, const struct timeval *inc);
+REGPRM3 static inline struct timeval *__tv_add(struct timeval *tv, const struct timeval *from, const struct timeval *inc)
+{
+ tv->tv_usec = from->tv_usec + inc->tv_usec;
+ tv->tv_sec = from->tv_sec + inc->tv_sec;
+ if (tv->tv_usec >= 1000000) {
+ tv->tv_usec -= 1000000;
+ tv->tv_sec++;
+ }
+ return tv;
+}
+
+
+/*
+ * If <inc> is set, then add it to <from> and set the result to <tv>, then
+ * return 1, otherwise return 0. It is meant to be used in if conditions.
+ */
+#define tv_add_ifset _tv_add_ifset
+REGPRM3 int _tv_add_ifset(struct timeval *tv, const struct timeval *from, const struct timeval *inc);
+REGPRM3 static inline int __tv_add_ifset(struct timeval *tv, const struct timeval *from, const struct timeval *inc)
+{
+ if (tv_iseternity(inc))
+ return 0;
+ tv->tv_usec = from->tv_usec + inc->tv_usec;
+ tv->tv_sec = from->tv_sec + inc->tv_sec;
+ if (tv->tv_usec >= 1000000) {
+ tv->tv_usec -= 1000000;
+ tv->tv_sec++;
+ }
+ return 1;
+}
+
+/*
+ * adds <inc> to <tv> and returns a pointer to <tv>
+ */
+REGPRM2 static inline struct timeval *__tv_add2(struct timeval *tv, const struct timeval *inc)
+{
+ tv->tv_usec += inc->tv_usec;
+ tv->tv_sec += inc->tv_sec;
+ if (tv->tv_usec >= 1000000) {
+ tv->tv_usec -= 1000000;
+ tv->tv_sec++;
+ }
+ return tv;
+}
+
+
+/*
+ * Computes the remaining time between tv1=now and event=tv2. If tv2 has
+ * already passed, 0 is returned. The result is stored into tv.
+ */
+#define tv_remain _tv_remain
+REGPRM3 struct timeval *_tv_remain(const struct timeval *tv1, const struct timeval *tv2, struct timeval *tv);
+REGPRM3 static inline struct timeval *__tv_remain(const struct timeval *tv1, const struct timeval *tv2, struct timeval *tv)
+{
+ tv->tv_usec = tv2->tv_usec - tv1->tv_usec;
+ tv->tv_sec = tv2->tv_sec - tv1->tv_sec;
+ if ((signed)tv->tv_sec > 0) {
+ if ((signed)tv->tv_usec < 0) {
+ tv->tv_usec += 1000000;
+ tv->tv_sec--;
+ }
+ } else if (tv->tv_sec == 0) {
+ if ((signed)tv->tv_usec < 0)
+ tv->tv_usec = 0;
+ } else {
+ tv->tv_sec = 0;
+ tv->tv_usec = 0;
+ }
+ return tv;
+}
+
+
+/*
+ * Computes the remaining time between tv1=now and event=tv2. If tv2 has
+ * already passed, 0 is returned. The result is stored into tv. Returns
+ * eternity if tv2 is eternity.
+ */
+#define tv_remain2 _tv_remain2
+REGPRM3 struct timeval *_tv_remain2(const struct timeval *tv1, const struct timeval *tv2, struct timeval *tv);
+REGPRM3 static inline struct timeval *__tv_remain2(const struct timeval *tv1, const struct timeval *tv2, struct timeval *tv)
+{
+ if (tv_iseternity(tv2))
+ return tv_eternity(tv);
+ return __tv_remain(tv1, tv2, tv);
+}
+
+
+/*
+ * adds <ms> ms to <from>, sets the result in <tv> and returns a pointer to <tv>
+ */
+#define tv_ms_add _tv_ms_add
+REGPRM3 struct timeval *_tv_ms_add(struct timeval *tv, const struct timeval *from, int ms);
+REGPRM3 static inline struct timeval *__tv_ms_add(struct timeval *tv, const struct timeval *from, int ms)
+{
+ tv->tv_usec = from->tv_usec + (ms % 1000) * 1000;
+ tv->tv_sec = from->tv_sec + (ms / 1000);
+ while (tv->tv_usec >= 1000000) {
+ tv->tv_usec -= 1000000;
+ tv->tv_sec++;
+ }
+ return tv;
+}
+
+
+/*
+ * compares <tv1> and <tv2> : returns 1 if <tv1> is before <tv2>, otherwise 0.
+ * This should be very fast because it's used in schedulers.
+ * It has been optimized for the case where <tv1> is before <tv2>, so it is
+ * best called in a loop which continues as long as tv1<=tv2.
+ */
+
+#define tv_isbefore(tv1, tv2) \
+ (unlikely((unsigned)(tv1)->tv_sec < (unsigned)(tv2)->tv_sec) ? 1 : \
+ (unlikely((unsigned)(tv1)->tv_sec > (unsigned)(tv2)->tv_sec) ? 0 : \
+ unlikely((unsigned)(tv1)->tv_usec < (unsigned)(tv2)->tv_usec)))
+
+/*
+ * stores the earlier of <tv1> and <tv2> into <tvmin> and returns <tvmin>.
+ * If <tvmin> is known to be the same as <tv1> or <tv2>, it is recommended
+ * to use tv_bound instead.
+ */
+#define tv_min(tvmin, tv1, tv2) ({ \
+ if (tv_isbefore(tv1, tv2)) { \
+ *tvmin = *tv1; \
+ } \
+ else { \
+ *tvmin = *tv2; \
+ } \
+ tvmin; \
+})
+
+/*
+ * stores the earlier of <tv1> and <tv2> into <tv1> and returns <tv1>. This is
+ * the in-place variant of tv_min: tv_bound(a, b) is equivalent to
+ * tv_min(a, a, b).
+ */
+#define tv_bound(tv1, tv2) ({ \
+ if (tv_isbefore(tv2, tv1)) \
+ *tv1 = *tv2; \
+ tv1; \
+})
+
+/* Updates the idle time value twice a second. To be called right after
+ * tv_update_date() when returning from poll(). It relies on <before_poll>
+ * having been set to the system time before poll() was called.
+ */
+static inline void measure_idle()
+{
+ /* Let's compute the idle to work ratio. We worked between after_poll
+ * and before_poll, and slept between before_poll and date. The idle_pct
+ * is updated at most twice every second. Note that the current second
+ * rarely changes so we avoid a multiply when not needed.
+ */
+ int delta;
+
+ if ((delta = date.tv_sec - before_poll.tv_sec))
+ delta *= 1000000;
+ idle_time += delta + (date.tv_usec - before_poll.tv_usec);
+
+ if ((delta = date.tv_sec - after_poll.tv_sec))
+ delta *= 1000000;
+ samp_time += delta + (date.tv_usec - after_poll.tv_usec);
+
+ after_poll.tv_sec = date.tv_sec; after_poll.tv_usec = date.tv_usec;
+ if (samp_time < 500000)
+ return;
+
+ idle_pct = (100 * idle_time + samp_time / 2) / samp_time;
+ idle_time = samp_time = 0;
+}
+
+#endif /* _COMMON_TIME_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/common/tools.h
+ * Trivial macros needed everywhere.
+ *
+ * Copyright (C) 2000-2011 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+*/
+
+#ifndef _COMMON_TOOLS_H
+#define _COMMON_TOOLS_H
+
+#include <sys/param.h>
+#include <common/config.h>
+
+#ifndef MIN
+#define MIN(a, b) (((a) < (b)) ? (a) : (b))
+#endif
+
+#ifndef MAX
+#define MAX(a, b) (((a) > (b)) ? (a) : (b))
+#endif
+
+/* return an integer of type <ret> with only the highest bit set. <ret> may be
+ * both a variable or a type.
+ */
+#define MID_RANGE(ret) ((typeof(ret))1 << (8*sizeof(ret) - 1))
+
+/* return the largest possible integer of type <ret>, with all bits set */
+#define MAX_RANGE(ret) (~(typeof(ret))0)
+
+#endif /* _COMMON_TOOLS_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * URI-based user authentication using the HTTP basic method.
+ *
+ * Copyright 2006-2011 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#ifndef _COMMON_URI_AUTH_H
+#define _COMMON_URI_AUTH_H
+
+#include <common/config.h>
+
+#include <types/auth.h>
+
+/* This is a list of proxies we are allowed to see. Later, it should go in the
+ * user list, but before this we need to support de/re-authentication.
+ */
+struct stat_scope {
+ struct stat_scope *next; /* next entry, NULL if none */
+ int px_len; /* proxy name length */
+ char *px_id; /* proxy id */
+};
+
+#define ST_HIDEVER 0x00000001 /* do not report the version and reldate */
+#define ST_SHNODE 0x00000002 /* show node name */
+#define ST_SHDESC 0x00000004 /* show description */
+#define ST_SHLGNDS 0x00000008 /* show legends */
+#define ST_CONVDONE 0x00000010 /* req_acl conversion done */
+
+/* later we may link them to support multiple URI matching */
+struct uri_auth {
+ int uri_len; /* the prefix length */
+ char *uri_prefix; /* the prefix we want to match */
+ char *auth_realm; /* the realm reported to the client */
+ char *node, *desc; /* node name & description reported on the stats page */
+ int refresh; /* refresh interval for the browser (in seconds) */
+ int flags; /* some flags describing the statistics page */
+ struct stat_scope *scope; /* linked list of authorized proxies */
+ struct userlist *userlist; /* private userlist to emulate legacy "stats auth user:password" */
+ struct list http_req_rules; /* stats http-request rules : allow/deny/auth */
+ struct list admin_rules; /* 'stats admin' rules (chained) */
+ struct uri_auth *next; /* Used at deinit() to build a list of unique elements */
+};
+
+/* This is the default statistics URI */
+#ifdef CONFIG_STATS_DEFAULT_URI
+#define STATS_DEFAULT_URI CONFIG_STATS_DEFAULT_URI
+#else
+#define STATS_DEFAULT_URI "/haproxy?stats"
+#endif
+
+/* This is the default statistics realm */
+#ifdef CONFIG_STATS_DEFAULT_REALM
+#define STATS_DEFAULT_REALM CONFIG_STATS_DEFAULT_REALM
+#else
+#define STATS_DEFAULT_REALM "HAProxy Statistics"
+#endif
+
+
+struct stats_admin_rule {
+ struct list list; /* list linked to from the proxy */
+ struct acl_cond *cond; /* acl condition to meet */
+};
+
+
+/* Various functions used to set the fields during the configuration parsing.
+ * Please note that all these functions can initialize the root entry, so the
+ * user is not forced to respect a particular order in the configuration file.
+ *
+ * Default values are used during initialization. Check STATS_DEFAULT_* for
+ * more information.
+ */
+struct uri_auth *stats_check_init_uri_auth(struct uri_auth **root);
+struct uri_auth *stats_set_uri(struct uri_auth **root, char *uri);
+struct uri_auth *stats_set_realm(struct uri_auth **root, char *realm);
+struct uri_auth *stats_set_refresh(struct uri_auth **root, int interval);
+struct uri_auth *stats_set_flag(struct uri_auth **root, int flag);
+struct uri_auth *stats_add_auth(struct uri_auth **root, char *user);
+struct uri_auth *stats_add_scope(struct uri_auth **root, char *scope);
+struct uri_auth *stats_set_node(struct uri_auth **root, char *name);
+struct uri_auth *stats_set_desc(struct uri_auth **root, char *desc);
+
+#endif /* _COMMON_URI_AUTH_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/common/version.h
+ * Product name, branch, version and URL definitions.
+ *
+ * Copyright (C) 2000-2010 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _COMMON_VERSION_H
+#define _COMMON_VERSION_H
+
+#include <common/config.h>
+
+#ifdef CONFIG_PRODUCT_NAME
+#define PRODUCT_NAME CONFIG_PRODUCT_NAME
+#else
+#define PRODUCT_NAME "HAProxy"
+#endif
+
+#ifdef CONFIG_PRODUCT_BRANCH
+#define PRODUCT_BRANCH CONFIG_PRODUCT_BRANCH
+#else
+#define PRODUCT_BRANCH "1.5"
+#endif
+
+#ifdef CONFIG_PRODUCT_URL
+#define PRODUCT_URL CONFIG_PRODUCT_URL
+#else
+#define PRODUCT_URL "http://www.haproxy.org/"
+#endif
+
+#ifdef CONFIG_PRODUCT_URL_UPD
+#define PRODUCT_URL_UPD CONFIG_PRODUCT_URL_UPD
+#else
+#define PRODUCT_URL_UPD "http://www.haproxy.org/#down"
+#endif
+
+#ifdef CONFIG_PRODUCT_URL_DOC
+#define PRODUCT_URL_DOC CONFIG_PRODUCT_URL_DOC
+#else
+#define PRODUCT_URL_DOC "http://www.haproxy.org/#docs"
+#endif
+
+#ifdef CONFIG_HAPROXY_VERSION
+#define HAPROXY_VERSION CONFIG_HAPROXY_VERSION
+#else
+#error "Must define CONFIG_HAPROXY_VERSION"
+#endif
+
+#ifdef CONFIG_HAPROXY_DATE
+#define HAPROXY_DATE CONFIG_HAPROXY_DATE
+#else
+#error "Must define CONFIG_HAPROXY_DATE"
+#endif
+
+#endif /* _COMMON_VERSION_H */
+
--- /dev/null
+#ifndef _IMPORT_51D_H
+#define _IMPORT_51D_H
+
+#include <51Degrees.h>
+
+int init_51degrees(void);
+void deinit_51degrees(void);
+
+#endif
--- /dev/null
+#ifndef _IMPORT_DA_H
+#define _IMPORT_DA_H
+#ifdef USE_DEVICEATLAS
+
+#include <types/global.h>
+#include <dac.h>
+
+int init_deviceatlas(void);
+void deinit_deviceatlas(void);
+#endif
+#endif
--- /dev/null
+/*
+ * Copyright (C) 2015 Willy Tarreau <w@1wt.eu>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining
+ * a copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sublicense, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include <eb64tree.h>
+
+/* The LRU supports a global cache shared between multiple domains and multiple
+ * versions of their datasets. The purpose is not to have to flush the whole
+ * LRU once a key is updated and not valid anymore (eg: ACL files), as well as
+ * to reliably support concurrent accesses and handle conflicts gracefully. For
+ * each key a pointer to a dataset and its internal data revision are stored.
+ * All lookups verify that these elements match those passed by the caller and
+ * only return a valid entry upon matching. Otherwise the entry is either
+ * allocated or recycled and considered new. New entries are always initialized
+ * with a NULL domain pointer which is used by the caller to detect that the
+ * entry is new and must be populated. Such entries never expire and are
+ * protected from the risk of being recycled. It's then the caller's
+ * responsibility to perform the operation and commit the entry with its latest
+ * result. This domain thus serves as a lock to protect the entry during all
+ * the computation needed to update it. In a simple use case where the cache is
+ * dedicated, it is recommended to pass the LRU head as the domain pointer and
+ * for example zero as the revision. The most common use case for the caller
+ * consists in simply checking that the return is not null and that the domain
+ * is not null, then to use the result. The get() function returns null if it
+ * cannot allocate a node (memory or key being currently updated).
+ */
+struct lru64_list {
+ struct lru64_list *n;
+ struct lru64_list *p;
+};
+
+struct lru64_head {
+ struct lru64_list list;
+ struct eb_root keys;
+ struct lru64 *spare;
+ int cache_size;
+ int cache_usage;
+};
+
+struct lru64 {
+ struct eb64_node node; /* indexing key, typically a hash64 */
+ struct lru64_list lru; /* LRU list */
+ void *domain; /* who this data belongs to */
+ unsigned long long revision; /* data revision (to avoid use-after-free) */
+ void *data; /* returned value, user decides how to use this */
+ void (*free)(void *data); /* function to release data, if needed */
+};
+
+
+struct lru64 *lru64_lookup(unsigned long long key, struct lru64_head *lru, void *domain, unsigned long long revision);
+struct lru64 *lru64_get(unsigned long long key, struct lru64_head *lru, void *domain, unsigned long long revision);
+void lru64_commit(struct lru64 *elem, void *data, void *domain, unsigned long long revision, void (*free)(void *));
+struct lru64_head *lru64_new(int size);
+int lru64_destroy(struct lru64_head *lru);
--- /dev/null
+/*
+ xxHash - Extremely Fast Hash algorithm
+ Header File
+ Copyright (C) 2012-2014, Yann Collet.
+ BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are
+ met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+ copyright notice, this list of conditions and the following disclaimer
+ in the documentation and/or other materials provided with the
+ distribution.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ You can contact the author at :
+ - xxHash source repository : http://code.google.com/p/xxhash/
+*/
+
+/* Notice extracted from xxHash homepage :
+
+xxHash is an extremely fast Hash algorithm, running at RAM speed limits.
+It also successfully passes all tests from the SMHasher suite.
+
+Comparison (single thread, Windows Seven 32 bits, using SMHasher on a Core 2 Duo @3GHz)
+
+Name Speed Q.Score Author
+xxHash 5.4 GB/s 10
+CrapWow 3.2 GB/s 2 Andrew
+MurmurHash 3a 2.7 GB/s 10 Austin Appleby
+SpookyHash 2.0 GB/s 10 Bob Jenkins
+SBox 1.4 GB/s 9 Bret Mulvey
+Lookup3 1.2 GB/s 9 Bob Jenkins
+SuperFastHash 1.2 GB/s 1 Paul Hsieh
+CityHash64 1.05 GB/s 10 Pike & Alakuijala
+FNV 0.55 GB/s 5 Fowler, Noll, Vo
+CRC32 0.43 GB/s 9
+MD5-32 0.33 GB/s 10 Ronald L. Rivest
+SHA1-32 0.28 GB/s 10
+
+Q.Score is a measure of quality of the hash function.
+It depends on successfully passing SMHasher test set.
+10 is a perfect score.
+*/
+
+#pragma once
+
+#if defined (__cplusplus)
+extern "C" {
+#endif
+
+
+/*****************************
+ Includes
+*****************************/
+#include <stddef.h> /* size_t */
+
+
+/*****************************
+ Type
+*****************************/
+typedef enum { XXH_OK=0, XXH_ERROR } XXH_errorcode;
+
+
+
+/*****************************
+ Simple Hash Functions
+*****************************/
+
+unsigned int XXH32 (const void* input, size_t length, unsigned seed);
+unsigned long long XXH64 (const void* input, size_t length, unsigned long long seed);
+
+/*
+XXH32() :
+ Calculate the 32-bit hash of the sequence of "length" bytes stored at memory address "input".
+ The memory between input & input+length must be valid (allocated and read-accessible).
+ "seed" can be used to alter the result predictably.
+ This function successfully passes all SMHasher tests.
+ Speed on Core 2 Duo @ 3 GHz (single thread, SMHasher benchmark) : 5.4 GB/s
+XXH64() :
+ Calculate the 64-bit hash of the sequence of "length" bytes stored at memory address "input".
+*/
+
+
+
+/*****************************
+ Advanced Hash Functions
+*****************************/
+typedef struct { long long ll[ 6]; } XXH32_state_t;
+typedef struct { long long ll[11]; } XXH64_state_t;
+
+/*
+These structures allow static allocation of XXH states.
+States must then be initialized using XXHnn_reset() before first use.
+
+If you prefer dynamic allocation, please refer to functions below.
+*/
+
+XXH32_state_t* XXH32_createState(void);
+XXH_errorcode XXH32_freeState(XXH32_state_t* statePtr);
+
+XXH64_state_t* XXH64_createState(void);
+XXH_errorcode XXH64_freeState(XXH64_state_t* statePtr);
+
+/*
+These functions create and release memory for XXH state.
+States must then be initialized using XXHnn_reset() before first use.
+*/
+
+
+XXH_errorcode XXH32_reset (XXH32_state_t* statePtr, unsigned seed);
+XXH_errorcode XXH32_update (XXH32_state_t* statePtr, const void* input, size_t length);
+unsigned int XXH32_digest (const XXH32_state_t* statePtr);
+
+XXH_errorcode XXH64_reset (XXH64_state_t* statePtr, unsigned long long seed);
+XXH_errorcode XXH64_update (XXH64_state_t* statePtr, const void* input, size_t length);
+unsigned long long XXH64_digest (const XXH64_state_t* statePtr);
+
+/*
+These functions calculate the xxHash of an input provided in multiple smaller packets,
+as opposed to an input provided as a single block.
+
+XXH state space must first be allocated, using either static or dynamic method provided above.
+
+Start a new hash by initializing state with a seed, using XXHnn_reset().
+
+Then, feed the hash state by calling XXHnn_update() as many times as necessary.
+Obviously, input must be valid, meaning allocated and read accessible.
+The function returns an error code, with 0 meaning OK, and any other value meaning there is an error.
+
+Finally, you can produce a hash anytime, by using XXHnn_digest().
+This function returns the final nn-bits hash.
+You can nonetheless continue feeding the hash state with more input,
+and therefore get new hashes, by calling XXHnn_digest() again.
+
+When you are done, don't forget to free the XXH state space, typically using XXHnn_freeState().
+*/
+
+
+#if defined (__cplusplus)
+}
+#endif
--- /dev/null
+/*
+ * include/proto/acl.h
+ * This file provides interface definitions for ACL manipulation.
+ *
+ * Copyright (C) 2000-2013 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_ACL_H
+#define _PROTO_ACL_H
+
+#include <common/config.h>
+#include <types/acl.h>
+#include <proto/sample.h>
+
+/*
+ * FIXME: we need destructor functions too !
+ */
+
+/* Negate an acl result. This turns (ACL_TEST_FAIL, ACL_TEST_MISS,
+ * ACL_TEST_PASS) into (ACL_TEST_PASS, ACL_TEST_MISS, ACL_TEST_FAIL).
+ */
+static inline enum acl_test_res acl_neg(enum acl_test_res res)
+{
+ return (3 >> res);
+}
+
+/* Convert an acl result to a boolean. Only ACL_MATCH_PASS returns 1. */
+static inline int acl_pass(enum acl_test_res res)
+{
+ return (res >> 1);
+}
+
+/* Return a pointer to the ACL <name> within the list starting at <head>, or
+ * NULL if not found.
+ */
+struct acl *find_acl_by_name(const char *name, struct list *head);
+
+/* Return a pointer to the ACL keyword <kw> within the registered keyword
+ * lists, or NULL if not found. Note that if <kw> contains an opening
+ * parenthesis, only the left part of it is checked.
+ */
+struct acl_keyword *find_acl_kw(const char *kw);
+
+/* Parse an ACL expression starting at <args>[0], and return it.
+ * Right now, the only accepted syntax is :
+ * <subject> [<value>...]
+ */
+struct acl_expr *parse_acl_expr(const char **args, char **err, struct arg_list *al, const char *file, int line);
+
+/* Purge everything in the acl <acl>, then return <acl>. */
+struct acl *prune_acl(struct acl *acl);
+
+/* Parse an ACL with the name starting at <args>[0], and with a list of already
+ * known ACLs in <acl>. If the ACL was not in the list, it will be added.
+ * A pointer to that ACL is returned.
+ *
+ * args syntax: <aclname> <acl_expr>
+ */
+struct acl *parse_acl(const char **args, struct list *known_acl, char **err, struct arg_list *al, const char *file, int line);
+
+/* Purge everything in the acl_cond <cond>, then return <cond>. */
+struct acl_cond *prune_acl_cond(struct acl_cond *cond);
+
+/* Parse an ACL condition starting at <args>[0], relying on a list of already
+ * known ACLs passed in <known_acl>. The new condition is returned (or NULL in
+ * case of low memory). Supports multiple conditions separated by "or".
+ */
+struct acl_cond *parse_acl_cond(const char **args, struct list *known_acl,
+ enum acl_cond_pol pol, char **err, struct arg_list *al,
+ const char *file, int line);
+
+/* Builds an ACL condition starting at the if/unless keyword. The complete
+ * condition is returned. NULL is returned in case of error or if the first
+ * word is neither "if" nor "unless". It automatically sets the file name and
+ * the line number in the condition for better error reporting, and sets the
+ * HTTP initialization requirements in the proxy. If <err> is not NULL, it will
+ * be set to an error message upon errors, that the caller will have to free.
+ */
+struct acl_cond *build_acl_cond(const char *file, int line, struct proxy *px, const char **args, char **err);
+
+/* Execute condition <cond> and return either ACL_TEST_FAIL, ACL_TEST_MISS or
+ * ACL_TEST_PASS depending on the test results. ACL_TEST_MISS may only be
+ * returned if <opt> does not contain SMP_OPT_FINAL, indicating that incomplete
+ * data is being examined. The function automatically sets SMP_OPT_ITERATE. This
+ * function only computes the condition, it does not apply the polarity required
+ * by IF/UNLESS, it's up to the caller to do this.
+ */
+enum acl_test_res acl_exec_cond(struct acl_cond *cond, struct proxy *px, struct session *sess, struct stream *strm, unsigned int opt);
+
+/* Returns a pointer to the first ACL conflicting with usage at place <where>
+ * which is one of the SMP_VAL_* bits indicating a check place, or NULL if
+ * no conflict is found. Only full conflicts are detected (ACL is not usable).
+ * Use the next function to check for useless keywords.
+ */
+const struct acl *acl_cond_conflicts(const struct acl_cond *cond, unsigned int where);
+
+/* Looks for the first ACL and its first keyword to conflict with usage at
+ * place <where>, which is one of the SMP_VAL_* bits indicating a check place.
+ * Returns true if a conflict is found, with <acl> and <kw> set (if non null),
+ * or false if no conflict is found. The first useless keyword is returned.
+ */
+int acl_cond_kw_conflicts(const struct acl_cond *cond, unsigned int where, struct acl const **acl, char const **kw);
+
+/*
+ * Find targets for userlist and groups in acl. Function returns the number
+ * of errors or OK if everything is fine.
+ */
+int acl_find_targets(struct proxy *p);
+
+/*
+ * Registers the ACL keyword list <kwl> as a list of valid keywords for next
+ * parsing sessions.
+ */
+void acl_register_keywords(struct acl_kw_list *kwl);
+
+/*
+ * Unregisters the ACL keyword list <kwl> from the list of valid keywords.
+ */
+void acl_unregister_keywords(struct acl_kw_list *kwl);
+
+/* initializes ACLs by resolving the sample fetch names they rely upon.
+ * Returns 0 on success, otherwise an error.
+ */
+int init_acl();
+
+
+#endif /* _PROTO_ACL_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/action.h
+ * This file contains actions prototypes.
+ *
+ * Copyright (C) 2000-2010 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_ACTION_H
+#define _PROTO_ACTION_H
+
+#include <types/action.h>
+
+static inline struct action_kw *action_lookup(struct list *keywords, const char *kw)
+{
+ struct action_kw_list *kw_list;
+ int i;
+
+ if (LIST_ISEMPTY(keywords))
+ return NULL;
+
+ list_for_each_entry(kw_list, keywords, list) {
+ for (i = 0; kw_list->kw[i].kw != NULL; i++) {
+ if (kw_list->kw[i].match_pfx &&
+ strncmp(kw, kw_list->kw[i].kw, strlen(kw_list->kw[i].kw)) == 0)
+ return &kw_list->kw[i];
+ if (!strcmp(kw, kw_list->kw[i].kw))
+ return &kw_list->kw[i];
+ }
+ }
+ return NULL;
+}
+
+static inline void action_build_list(struct list *keywords, struct chunk *chk)
+{
+ struct action_kw_list *kw_list;
+ int i;
+ char *p;
+ char *end;
+ int l;
+
+ p = chk->str;
+ end = p + chk->size - 1;
+ list_for_each_entry(kw_list, keywords, list) {
+ for (i = 0; kw_list->kw[i].kw != NULL; i++) {
+ l = snprintf(p, end - p, "'%s%s', ", kw_list->kw[i].kw, kw_list->kw[i].match_pfx ? "(*)" : "");
+ if (l >= end - p) /* truncated output, skip this keyword */
+ continue;
+ p += l;
+ }
+ }
+ if (p > chk->str)
+ *(p-2) = '\0';
+ else
+ *p = '\0';
+}
+
+#endif /* _PROTO_ACTION_H */
--- /dev/null
+/*
+ * include/proto/applet.h
+ * This file contains applet function prototypes
+ *
+ * Copyright (C) 2000-2015 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_APPLET_H
+#define _PROTO_APPLET_H
+
+#include <stdlib.h>
+
+#include <common/config.h>
+#include <common/mini-clist.h>
+#include <types/applet.h>
+#include <proto/connection.h>
+
+extern struct list applet_active_queue;
+
+void applet_run_active();
+
+/* Initializes all required fields for a new appctx. Note that it does the
+ * minimum acceptable initialization for an appctx. This means only the
+ * 3 integer states st0, st1, st2 are zeroed.
+ */
+static inline void appctx_init(struct appctx *appctx)
+{
+ appctx->st0 = appctx->st1 = appctx->st2 = 0;
+}
+
+/* Tries to allocate a new appctx and initialize its main fields. The appctx
+ * is returned on success, NULL on failure. The appctx must be released using
+ * pool_free2(pool2_connection, appctx) or appctx_free(), since it's allocated
+ * from the connection pool. <applet> is assigned as the applet, but it can be
+ * NULL.
+ */
+static inline struct appctx *appctx_new(struct applet *applet)
+{
+ struct appctx *appctx;
+
+ appctx = pool_alloc2(pool2_connection);
+ if (likely(appctx != NULL)) {
+ appctx->obj_type = OBJ_TYPE_APPCTX;
+ appctx->applet = applet;
+ appctx_init(appctx);
+ LIST_INIT(&appctx->runq);
+ }
+ return appctx;
+}
+
+/* Releases an appctx previously allocated by appctx_new(). Note that
+ * we share the connection pool.
+ */
+static inline void appctx_free(struct appctx *appctx)
+{
+ if (!LIST_ISEMPTY(&appctx->runq))
+ LIST_DEL(&appctx->runq);
+ pool_free2(pool2_connection, appctx);
+}
+
+/* wakes up an applet when conditions have changed */
+static inline void appctx_wakeup(struct appctx *appctx)
+{
+ if (LIST_ISEMPTY(&appctx->runq))
+ LIST_ADDQ(&applet_active_queue, &appctx->runq);
+}
+
+/* removes an applet from the list of active applets */
+static inline void appctx_pause(struct appctx *appctx)
+{
+ if (!LIST_ISEMPTY(&appctx->runq)) {
+ LIST_DEL(&appctx->runq);
+ LIST_INIT(&appctx->runq);
+ }
+}
+
+#endif /* _PROTO_APPLET_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/arg.h
+ * This file contains functions and macros declarations for generic argument parsing.
+ *
+ * Copyright 2012 Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_ARG_H
+#define _PROTO_ARG_H
+
+#include <types/arg.h>
+
+/* Some macros used to build some arg list. We can declare various argument
+ * combinations from 0 to 7 args using a single 32-bit integer. The first
+ * argument of these macros is always the mandatory number of arguments, and
+ * remaining ones are optional args. Note: ARGM() may also be used to return
+ * the number of mandatory arguments in a mask.
+ */
+#define ARGM(m) \
+ (m & ARGM_MASK)
+
+#define ARG1(m, t1) \
+ (ARGM(m) + (ARGT_##t1 << (ARGM_BITS)))
+
+#define ARG2(m, t1, t2) \
+ (ARG1(m, t1) + (ARGT_##t2 << (ARGM_BITS + ARGT_BITS)))
+
+#define ARG3(m, t1, t2, t3) \
+ (ARG2(m, t1, t2) + (ARGT_##t3 << (ARGM_BITS + ARGT_BITS * 2)))
+
+#define ARG4(m, t1, t2, t3, t4) \
+ (ARG3(m, t1, t2, t3) + (ARGT_##t4 << (ARGM_BITS + ARGT_BITS * 3)))
+
+#define ARG5(m, t1, t2, t3, t4, t5) \
+ (ARG4(m, t1, t2, t3, t4) + (ARGT_##t5 << (ARGM_BITS + ARGT_BITS * 4)))
+
+/* Mapping between argument number and literal description. */
+extern const char *arg_type_names[];
+
+/* This dummy arg list may be used by default when no arg is found; it helps
+ * parsers by removing the need for pointer checks.
+ */
+extern struct arg empty_arg_list[ARGM_NBARGS];
+
+struct arg_list *arg_list_clone(const struct arg_list *orig);
+struct arg_list *arg_list_add(struct arg_list *orig, struct arg *arg, int pos);
+int make_arg_list(const char *in, int len, unsigned int mask, struct arg **argp,
+ char **err_msg, const char **err_ptr, int *err_arg,
+ struct arg_list *al);
+
+#endif /* _PROTO_ARG_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * User authentication & authorization.
+ *
+ * Copyright 2010 Krzysztof Piotr Oledzki <ole@ans.pl>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#ifndef _PROTO_AUTH_H
+#define _PROTO_AUTH_H
+
+#include <common/config.h>
+#include <types/auth.h>
+
+extern struct userlist *userlist;
+
+struct userlist *auth_find_userlist(char *name);
+unsigned int auth_resolve_groups(struct userlist *l, char *groups);
+int userlist_postinit();
+void userlist_free(struct userlist *ul);
+struct pattern *pat_match_auth(struct sample *smp, struct pattern_expr *expr, int fill);
+int check_user(struct userlist *ul, const char *user, const char *pass);
+int check_group(struct userlist *ul, char *name);
+
+#endif /* _PROTO_AUTH_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
+
--- /dev/null
+/*
+ * include/proto/backend.h
+ * Functions prototypes for the backend.
+ *
+ * Copyright (C) 2000-2012 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_BACKEND_H
+#define _PROTO_BACKEND_H
+
+#include <common/config.h>
+#include <common/time.h>
+
+#include <types/backend.h>
+#include <types/proxy.h>
+#include <types/server.h>
+#include <types/stream.h>
+
+int assign_server(struct stream *s);
+int assign_server_address(struct stream *s);
+int assign_server_and_queue(struct stream *s);
+int connect_server(struct stream *s);
+int srv_redispatch_connect(struct stream *t);
+const char *backend_lb_algo_str(int algo);
+int backend_parse_balance(const char **args, char **err, struct proxy *curproxy);
+int tcp_persist_rdp_cookie(struct stream *s, struct channel *req, int an_bit);
+
+int be_downtime(struct proxy *px);
+void recount_servers(struct proxy *px);
+void update_backend_weight(struct proxy *px);
+struct server *get_server_sh(struct proxy *px, const char *addr, int len);
+struct server *get_server_uh(struct proxy *px, char *uri, int uri_len);
+int be_lastsession(const struct proxy *be);
+
+/* set the time of last session on the backend */
+static inline void be_set_sess_last(struct proxy *be)
+{
+ be->be_counters.last_sess = now.tv_sec;
+}
+
+/* This function returns non-zero if the designated server is usable for LB
+ * according to its current weight and current state. Otherwise it returns 0.
+ */
+static inline int srv_is_usable(const struct server *srv)
+{
+ enum srv_state state = srv->state;
+
+ if (!srv->eweight)
+ return 0;
+ if (srv->admin & SRV_ADMF_MAINT)
+ return 0;
+ if (srv->admin & SRV_ADMF_DRAIN)
+ return 0;
+ switch (state) {
+ case SRV_ST_STARTING:
+ case SRV_ST_RUNNING:
+ return 1;
+ case SRV_ST_STOPPING:
+ case SRV_ST_STOPPED:
+ return 0;
+ }
+ return 0;
+}
+
+/* This function returns non-zero if the designated server was usable for LB
+ * according to its current weight and previous state. Otherwise it returns 0.
+ */
+static inline int srv_was_usable(const struct server *srv)
+{
+ enum srv_state state = srv->prev_state;
+
+ if (!srv->prev_eweight)
+ return 0;
+ if (srv->prev_admin & SRV_ADMF_MAINT)
+ return 0;
+ if (srv->prev_admin & SRV_ADMF_DRAIN)
+ return 0;
+ switch (state) {
+ case SRV_ST_STARTING:
+ case SRV_ST_RUNNING:
+ return 1;
+ case SRV_ST_STOPPING:
+ case SRV_ST_STOPPED:
+ return 0;
+ }
+ return 0;
+}
+
+/* This function commits the current server state and weight onto the previous
+ * ones in order to detect future changes.
+ */
+static inline void srv_lb_commit_status(struct server *srv)
+{
+ srv->prev_state = srv->state;
+ srv->prev_admin = srv->admin;
+ srv->prev_eweight = srv->eweight;
+}
+
+/* This function returns true when a server has experienced a change since last
+ * commit on its state or weight, otherwise zero.
+ */
+static inline int srv_lb_status_changed(const struct server *srv)
+{
+ return (srv->state != srv->prev_state ||
+ srv->admin != srv->prev_admin ||
+ srv->eweight != srv->prev_eweight);
+}
+
+/* sends a log message when a backend goes down, and also sets last
+ * change date.
+ */
+void set_backend_down(struct proxy *be);
+
+#endif /* _PROTO_BACKEND_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/channel.h
+ * Channel management definitions, macros and inline functions.
+ *
+ * Copyright (C) 2000-2014 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_CHANNEL_H
+#define _PROTO_CHANNEL_H
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <common/config.h>
+#include <common/chunk.h>
+#include <common/ticks.h>
+#include <common/time.h>
+
+#include <types/channel.h>
+#include <types/global.h>
+#include <types/stream.h>
+#include <types/stream_interface.h>
+
+/* perform minimal initializations, report 0 in case of error, 1 if OK. */
+int init_channel();
+
+unsigned long long __channel_forward(struct channel *chn, unsigned long long bytes);
+
+/* SI-to-channel functions working with buffers */
+int bi_putblk(struct channel *chn, const char *str, int len);
+struct buffer *bi_swpbuf(struct channel *chn, struct buffer *buf);
+int bi_putchr(struct channel *chn, char c);
+int bi_getline_nc(struct channel *chn, char **blk1, int *len1, char **blk2, int *len2);
+int bi_getblk_nc(struct channel *chn, char **blk1, int *len1, char **blk2, int *len2);
+int bo_inject(struct channel *chn, const char *msg, int len);
+int bo_getline(struct channel *chn, char *str, int len);
+int bo_getblk(struct channel *chn, char *blk, int len, int offset);
+int bo_getline_nc(struct channel *chn, char **blk1, int *len1, char **blk2, int *len2);
+int bo_getblk_nc(struct channel *chn, char **blk1, int *len1, char **blk2, int *len2);
+
+
+/* returns a pointer to the stream the channel belongs to */
+static inline struct stream *chn_strm(const struct channel *chn)
+{
+ if (chn->flags & CF_ISRESP)
+ return LIST_ELEM(chn, struct stream *, res);
+ else
+ return LIST_ELEM(chn, struct stream *, req);
+}
+
+/* returns a pointer to the stream interface feeding the channel (producer) */
+static inline struct stream_interface *chn_prod(const struct channel *chn)
+{
+ if (chn->flags & CF_ISRESP)
+ return &LIST_ELEM(chn, struct stream *, res)->si[1];
+ else
+ return &LIST_ELEM(chn, struct stream *, req)->si[0];
+}
+
+/* returns a pointer to the stream interface consuming the channel (consumer) */
+static inline struct stream_interface *chn_cons(const struct channel *chn)
+{
+ if (chn->flags & CF_ISRESP)
+ return &LIST_ELEM(chn, struct stream *, res)->si[0];
+ else
+ return &LIST_ELEM(chn, struct stream *, req)->si[1];
+}
+
+/* Initialize all fields in the channel. */
+static inline void channel_init(struct channel *chn)
+{
+ chn->buf = &buf_empty;
+ chn->to_forward = 0;
+ chn->last_read = now_ms;
+ chn->xfer_small = chn->xfer_large = 0;
+ chn->total = 0;
+ chn->pipe = NULL;
+ chn->analysers = 0;
+ chn->flags = 0;
+}
+
+/* Schedule up to <bytes> more bytes to be forwarded via the channel without
+ * notifying the owner task. Any data pending in the buffer are scheduled to be
+ * sent as well, in the limit of the number of bytes to forward. This must be
+ * the only method to use to schedule bytes to be forwarded. If the requested
+ * number is too large, it is automatically adjusted. The number of bytes taken
+ * into account is returned. Directly touching ->to_forward will cause lockups
+ * when buf->o goes down to zero if nobody is ready to push the remaining data.
+ */
+static inline unsigned long long channel_forward(struct channel *chn, unsigned long long bytes)
+{
+ /* hint: avoid comparisons on long long for the fast case, since if the
+	 * length does not fit in an unsigned int, it will never be forwarded at
+ * once anyway.
+ */
+ if (bytes <= ~0U) {
+ unsigned int bytes32 = bytes;
+
+ if (bytes32 <= chn->buf->i) {
+ /* OK this amount of bytes might be forwarded at once */
+ b_adv(chn->buf, bytes32);
+ return bytes;
+ }
+ }
+ return __channel_forward(chn, bytes);
+}
+
+/*********************************************************************/
+/* These functions are used to compute various channel content sizes */
+/*********************************************************************/
+
+/* Reports non-zero if the channel is empty, which means both its
+ * buffer and pipe are empty. The construct looks strange but is
+ * jump-less and much more efficient on both 32 and 64-bit than
+ * the boolean test.
+ */
+static inline unsigned int channel_is_empty(struct channel *c)
+{
+ return !(c->buf->o | (long)c->pipe);
+}
+
+/* Returns non-zero if the channel is rewritable, which means that the buffer
+ * it is attached to has at least <maxrewrite> bytes immediately available.
+ * This is used to decide when a request or response may be parsed when some
+ * data from a previous exchange might still be present.
+ */
+static inline int channel_is_rewritable(const struct channel *chn)
+{
+ int rem = chn->buf->size;
+
+ rem -= chn->buf->o;
+ rem -= chn->buf->i;
+ rem -= global.tune.maxrewrite;
+ return rem >= 0;
+}
+
+/* Tells whether data are likely to leave the buffer. This is used to know when
+ * we can safely ignore the reserve since we know we cannot retry a connection.
+ * It returns zero if data are blocked, non-zero otherwise.
+ */
+static inline int channel_may_send(const struct channel *chn)
+{
+ return chn_cons(chn)->state == SI_ST_EST;
+}
+
+/* Returns the amount of bytes from the channel that are already scheduled for
+ * leaving (buf->o) or that are still part of the input and expected to be sent
+ * soon as covered by to_forward. This is useful to know by how much we can
+ * shrink the rewrite reserve during forwards. Buffer data are not considered
+ * in transit until the channel is connected, so that the reserve remains
+ * protected.
+ */
+static inline int channel_in_transit(const struct channel *chn)
+{
+ int ret;
+
+ if (!channel_may_send(chn))
+ return 0;
+
+ /* below, this is min(i, to_forward) optimized for the fast case */
+ if (chn->to_forward >= chn->buf->i ||
+ (CHN_INFINITE_FORWARD < MAX_RANGE(typeof(chn->buf->i)) &&
+ chn->to_forward == CHN_INFINITE_FORWARD))
+ ret = chn->buf->i;
+ else
+ ret = chn->to_forward;
+
+ ret += chn->buf->o;
+ return ret;
+}
+
+/* Returns non-zero if the channel can still receive data. This is used to
+ * decide when to stop reading into a buffer when we want to ensure that we
+ * leave the reserve untouched after all pending outgoing data are forwarded.
+ * The reserved space is taken into account if ->to_forward indicates that an
+ * end of transfer is close to happen. Note that both ->buf->o and ->to_forward
+ * are considered as available since they're supposed to leave the buffer. The
+ * test is optimized to avoid as many operations as possible for the fast case
+ * and to be used as an "if" condition.
+ */
+static inline int channel_may_recv(const struct channel *chn)
+{
+ int rem = chn->buf->size;
+
+ if (chn->buf == &buf_empty)
+ return 1;
+
+ rem -= chn->buf->o;
+ rem -= chn->buf->i;
+ if (!rem)
+ return 0; /* buffer already full */
+
+ /* now we know there's some room left, verify if we're touching
+ * the reserve with some permanent input data.
+ */
+ if (chn->to_forward >= chn->buf->i ||
+ (CHN_INFINITE_FORWARD < MAX_RANGE(typeof(chn->buf->i)) && // just there to ensure gcc
+ chn->to_forward == CHN_INFINITE_FORWARD)) // avoids the useless second
+ return 1; // test whenever possible
+
+ rem -= global.tune.maxrewrite;
+ rem += chn->buf->o;
+ rem += chn->to_forward;
+ return rem > 0;
+}
+
+/* Returns true if the channel's input is already closed */
+static inline int channel_input_closed(struct channel *chn)
+{
+ return ((chn->flags & CF_SHUTR) != 0);
+}
+
+/* Returns true if the channel's output is already closed */
+static inline int channel_output_closed(struct channel *chn)
+{
+ return ((chn->flags & CF_SHUTW) != 0);
+}
+
+/* Check channel timeouts, and set the corresponding flags. The likely/unlikely
+ * have been optimized for fastest normal path. The read/write timeouts are not
+ * set if there was activity on the channel. That way, we don't have to update
+ * the timeout on every I/O. Note that the analyser timeout is always checked.
+ */
+static inline void channel_check_timeouts(struct channel *chn)
+{
+ if (likely(!(chn->flags & (CF_SHUTR|CF_READ_TIMEOUT|CF_READ_ACTIVITY|CF_READ_NOEXP))) &&
+ unlikely(tick_is_expired(chn->rex, now_ms)))
+ chn->flags |= CF_READ_TIMEOUT;
+
+ if (likely(!(chn->flags & (CF_SHUTW|CF_WRITE_TIMEOUT|CF_WRITE_ACTIVITY))) &&
+ unlikely(tick_is_expired(chn->wex, now_ms)))
+ chn->flags |= CF_WRITE_TIMEOUT;
+
+ if (likely(!(chn->flags & CF_ANA_TIMEOUT)) &&
+ unlikely(tick_is_expired(chn->analyse_exp, now_ms)))
+ chn->flags |= CF_ANA_TIMEOUT;
+}
+
+/* Erase any content from channel <chn> and adjust flags accordingly. Note
+ * that any spliced data is not affected since we may not have any access to
+ * it.
+ */
+static inline void channel_erase(struct channel *chn)
+{
+ chn->to_forward = 0;
+ b_reset(chn->buf);
+}
+
+/* marks the channel as "shutdown" ASAP for reads */
+static inline void channel_shutr_now(struct channel *chn)
+{
+ chn->flags |= CF_SHUTR_NOW;
+}
+
+/* marks the channel as "shutdown" ASAP for writes */
+static inline void channel_shutw_now(struct channel *chn)
+{
+ chn->flags |= CF_SHUTW_NOW;
+}
+
+/* marks the channel as "shutdown" ASAP in both directions */
+static inline void channel_abort(struct channel *chn)
+{
+ chn->flags |= CF_SHUTR_NOW | CF_SHUTW_NOW;
+ chn->flags &= ~CF_AUTO_CONNECT;
+}
+
+/* allow the consumer to try to establish a new connection. */
+static inline void channel_auto_connect(struct channel *chn)
+{
+ chn->flags |= CF_AUTO_CONNECT;
+}
+
+/* prevent the consumer from trying to establish a new connection, and also
+ * disable auto shutdown forwarding.
+ */
+static inline void channel_dont_connect(struct channel *chn)
+{
+ chn->flags &= ~(CF_AUTO_CONNECT|CF_AUTO_CLOSE);
+}
+
+/* allow the producer to forward shutdown requests */
+static inline void channel_auto_close(struct channel *chn)
+{
+ chn->flags |= CF_AUTO_CLOSE;
+}
+
+/* prevent the producer from forwarding shutdown requests */
+static inline void channel_dont_close(struct channel *chn)
+{
+ chn->flags &= ~CF_AUTO_CLOSE;
+}
+
+/* allow the producer to read / poll the input */
+static inline void channel_auto_read(struct channel *chn)
+{
+ chn->flags &= ~CF_DONT_READ;
+}
+
+/* prevent the producer from reading / polling the input */
+static inline void channel_dont_read(struct channel *chn)
+{
+ chn->flags |= CF_DONT_READ;
+}
+
+
+/*************************************************/
+/* Buffer operations in the context of a channel */
+/*************************************************/
+
+
+/* Return the number of reserved bytes in the channel's visible
+ * buffer, which ensures that once all pending data are forwarded, the
+ * buffer still has global.tune.maxrewrite bytes free. The result is
+ * between 0 and global.tune.maxrewrite, which is itself smaller than
+ * any chn->buf->size. Special care is taken to avoid any possible integer
+ * overflow in the operations.
+ */
+static inline int channel_reserved(const struct channel *chn)
+{
+ int reserved;
+
+ reserved = global.tune.maxrewrite - channel_in_transit(chn);
+ if (reserved < 0)
+ reserved = 0;
+ return reserved;
+}
+
+/* Return the max number of bytes the buffer can contain so that once all the
+ * data in transit are forwarded, the buffer still has global.tune.maxrewrite
+ * bytes free. The result sits between chn->buf->size - maxrewrite and
+ * chn->buf->size.
+ */
+static inline int channel_recv_limit(const struct channel *chn)
+{
+ return chn->buf->size - channel_reserved(chn);
+}
+
+/* Returns the amount of space available at the input of the buffer, taking the
+ * reserved space into account if ->to_forward indicates that an end of transfer
+ * is close to happen. The test is optimized to avoid as many operations as
+ * possible for the fast case.
+ */
+static inline int channel_recv_max(const struct channel *chn)
+{
+ int ret;
+
+ ret = channel_recv_limit(chn) - chn->buf->i - chn->buf->o;
+ if (ret < 0)
+ ret = 0;
+ return ret;
+}
+
+/* Truncate any unread data in the channel's buffer, and disable forwarding.
+ * Outgoing data are left intact. This is mainly to be used to send error
+ * messages after existing data.
+ */
+static inline void channel_truncate(struct channel *chn)
+{
+ if (!chn->buf->o)
+ return channel_erase(chn);
+
+ chn->to_forward = 0;
+ if (!chn->buf->i)
+ return;
+
+ chn->buf->i = 0;
+}
+
+/*
+ * Advance the channel buffer's read pointer by <len> bytes. This is useful
+ * when data have been read directly from the buffer. It is illegal to call
+ * this function with <len> causing a wrapping at the end of the buffer. It's
+ * the caller's responsibility to ensure that <len> is never larger than
+ * chn->o. Channel flag WRITE_PARTIAL is set.
+ */
+static inline void bo_skip(struct channel *chn, int len)
+{
+ chn->buf->o -= len;
+
+ if (buffer_empty(chn->buf))
+ chn->buf->p = chn->buf->data;
+
+ /* notify that some data was written to the SI from the buffer */
+ chn->flags |= CF_WRITE_PARTIAL;
+}
+
+/* Tries to copy chunk <chunk> into the channel's buffer after length controls.
+ * The chn->o and to_forward pointers are updated. If the channel's input is
+ * closed, -2 is returned. If the block is too large for this buffer, -3 is
+ * returned. If there is not enough room left in the buffer, -1 is returned.
+ * Otherwise the number of bytes copied is returned (0 being a valid number).
+ * Channel flag READ_PARTIAL is updated if some data can be transferred. The
+ * chunk's length is updated with the number of bytes sent.
+ */
+static inline int bi_putchk(struct channel *chn, struct chunk *chunk)
+{
+ int ret;
+
+ ret = bi_putblk(chn, chunk->str, chunk->len);
+ if (ret > 0)
+ chunk->len -= ret;
+ return ret;
+}
+
+/* Tries to copy string <str> at once into the channel's buffer after length
+ * controls. The chn->o and to_forward pointers are updated. If the channel's
+ * input is closed, -2 is returned. If the block is too large for this buffer,
+ * -3 is returned. If there is not enough room left in the buffer, -1 is
+ * returned. Otherwise the number of bytes copied is returned (0 being a valid
+ * number). Channel flag READ_PARTIAL is updated if some data can be
+ * transferred.
+ */
+static inline int bi_putstr(struct channel *chn, const char *str)
+{
+ return bi_putblk(chn, str, strlen(str));
+}
+
+/*
+ * Return one char from the channel's buffer. If the buffer is empty and the
+ * channel is closed, return -2. If the buffer is just empty, return -1. The
+ * buffer's pointer is not advanced, it's up to the caller to call bo_skip(buf,
+ * 1) when it has consumed the char. Also note that this function respects the
+ * chn->o limit.
+ */
+static inline int bo_getchr(struct channel *chn)
+{
+ /* closed or empty + imminent close = -2; empty = -1 */
+ if (unlikely((chn->flags & CF_SHUTW) || channel_is_empty(chn))) {
+ if (chn->flags & (CF_SHUTW|CF_SHUTW_NOW))
+ return -2;
+ return -1;
+ }
+ return *buffer_wrap_sub(chn->buf, chn->buf->p - chn->buf->o);
+}
+
+
+#endif /* _PROTO_CHANNEL_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ include/proto/checks.h
+ Functions prototypes for the checks.
+
+ Copyright (C) 2000-2009 Willy Tarreau - w@1wt.eu
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation, version 2.1
+ exclusively.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+*/
+
+#ifndef _PROTO_CHECKS_H
+#define _PROTO_CHECKS_H
+
+#include <types/task.h>
+#include <common/config.h>
+
+const char *get_check_status_description(short check_status);
+const char *get_check_status_info(short check_status);
+int start_checks();
+void __health_adjust(struct server *s, short status);
+int trigger_resolution(struct server *s);
+
+extern struct data_cb check_conn_cb;
+
+/* Use this one only. This inline version only ensures that we don't
+ * call the function when the observe mode is disabled.
+ */
+static inline void health_adjust(struct server *s, short status)
+{
+	/* return now if neither observing nor health checks are enabled */
+ if (!s->observe || !s->check.task)
+ return;
+
+ return __health_adjust(s, status);
+}
+
+const char *init_check(struct check *check, int type);
+void free_check(struct check *check);
+
+void send_email_alert(struct server *s, int priority, const char *format, ...)
+ __attribute__ ((format(printf, 3, 4)));
+#endif /* _PROTO_CHECKS_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/compression.h
+ * This file defines function prototypes for compression.
+ *
+ * Copyright 2012 (C) Exceliance, David Du Colombier <dducolombier@exceliance.fr>
+ * William Lallemand <wlallemand@exceliance.fr>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_COMP_H
+#define _PROTO_COMP_H
+
+#include <types/compression.h>
+
+extern unsigned int compress_min_idle;
+
+int comp_append_type(struct comp *comp, const char *type);
+int comp_append_algo(struct comp *comp, const char *algo);
+
+int http_emit_chunk_size(char *end, unsigned int chksz);
+int http_compression_buffer_init(struct stream *s, struct buffer *in, struct buffer *out);
+int http_compression_buffer_add_data(struct stream *s, struct buffer *in, struct buffer *out);
+int http_compression_buffer_end(struct stream *s, struct buffer **in, struct buffer **out, int end);
+
+#ifdef USE_ZLIB
+extern long zlib_used_memory;
+#endif /* USE_ZLIB */
+
+#endif /* _PROTO_COMP_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/connection.h
+ * This file contains connection function prototypes
+ *
+ * Copyright (C) 2000-2012 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_CONNECTION_H
+#define _PROTO_CONNECTION_H
+
+#include <common/config.h>
+#include <common/memory.h>
+#include <types/connection.h>
+#include <types/listener.h>
+#include <proto/fd.h>
+#include <proto/obj_type.h>
+
+extern struct pool_head *pool2_connection;
+
+/* perform minimal initializations, report 0 in case of error, 1 if OK. */
+int init_connection();
+
+/* I/O callback for fd-based connections. It calls the read/write handlers
+ * provided by the connection's sock_ops. Returns 0.
+ */
+int conn_fd_handler(int fd);
+
+/* receive a PROXY protocol header over a connection */
+int conn_recv_proxy(struct connection *conn, int flag);
+int make_proxy_line(char *buf, int buf_len, struct server *srv, struct connection *remote);
+int make_proxy_line_v1(char *buf, int buf_len, struct sockaddr_storage *src, struct sockaddr_storage *dst);
+int make_proxy_line_v2(char *buf, int buf_len, struct server *srv, struct connection *remote);
+
+/* raw send() directly on the socket */
+int conn_sock_send(struct connection *conn, const void *buf, int len, int flags);
+
+/* drains any pending bytes from the socket */
+int conn_sock_drain(struct connection *conn);
+
+/* returns true if the transport layer is ready */
+static inline int conn_xprt_ready(const struct connection *conn)
+{
+ return (conn->flags & CO_FL_XPRT_READY);
+}
+
+/* returns true if the control layer is ready */
+static inline int conn_ctrl_ready(const struct connection *conn)
+{
+ return (conn->flags & CO_FL_CTRL_READY);
+}
+
+/* Calls the init() function of the transport layer if any and if not done yet,
+ * and sets the CO_FL_XPRT_READY flag to indicate it was properly initialized.
+ * Returns <0 in case of error.
+ */
+static inline int conn_xprt_init(struct connection *conn)
+{
+ int ret = 0;
+
+ if (!conn_xprt_ready(conn) && conn->xprt && conn->xprt->init)
+ ret = conn->xprt->init(conn);
+
+ if (ret >= 0)
+ conn->flags |= CO_FL_XPRT_READY;
+
+ return ret;
+}
+
+/* Calls the close() function of the transport layer if any and if not done
+ * yet, and clears the CO_FL_XPRT_READY flag. However this is not done if the
+ * CO_FL_XPRT_TRACKED flag is set, which allows logs to take data from the
+ * transport layer very late if needed.
+ */
+static inline void conn_xprt_close(struct connection *conn)
+{
+ if ((conn->flags & (CO_FL_XPRT_READY|CO_FL_XPRT_TRACKED)) == CO_FL_XPRT_READY) {
+ if (conn->xprt->close)
+ conn->xprt->close(conn);
+ conn->flags &= ~CO_FL_XPRT_READY;
+ }
+}
+
+/* Initializes the connection's control layer which essentially consists in
+ * registering the file descriptor for polling and setting the CO_FL_CTRL_READY
+ * flag. The caller is responsible for ensuring that the control layer is
+ * already assigned to the connection prior to the call.
+ */
+static inline void conn_ctrl_init(struct connection *conn)
+{
+ if (!conn_ctrl_ready(conn)) {
+ int fd = conn->t.sock.fd;
+
+ fd_insert(fd);
+ /* mark the fd as ready so as not to needlessly poll at the beginning */
+ fd_may_recv(fd);
+ fd_may_send(fd);
+ fdtab[fd].owner = conn;
+ fdtab[fd].iocb = conn_fd_handler;
+ conn->flags |= CO_FL_CTRL_READY;
+ }
+}
+
+/* Deletes the FD if the transport layer is already gone. Once done,
+ * it then removes the CO_FL_CTRL_READY flag.
+ */
+static inline void conn_ctrl_close(struct connection *conn)
+{
+ if ((conn->flags & (CO_FL_XPRT_READY|CO_FL_CTRL_READY)) == CO_FL_CTRL_READY) {
+ fd_delete(conn->t.sock.fd);
+ conn->flags &= ~CO_FL_CTRL_READY;
+ }
+}
+
+/* If the connection still has a transport layer, then call its close() function
+ * if any, and delete the file descriptor if a control layer is set. This is
+ * used to close everything at once and atomically. However this is not done if
+ * the CO_FL_XPRT_TRACKED flag is set, which allows logs to take data from the
+ * transport layer very late if needed.
+ */
+static inline void conn_full_close(struct connection *conn)
+{
+ conn_xprt_close(conn);
+ conn_ctrl_close(conn);
+}
+
+/* Force to close the connection whatever the tracking state. This is mainly
+ * used on the error path where the tracking does not make sense, or to kill
+ * an idle connection we want to abort immediately.
+ */
+static inline void conn_force_close(struct connection *conn)
+{
+ if (conn_xprt_ready(conn) && conn->xprt->close)
+ conn->xprt->close(conn);
+
+ if (conn_ctrl_ready(conn))
+ fd_delete(conn->t.sock.fd);
+
+ conn->flags &= ~(CO_FL_XPRT_READY|CO_FL_CTRL_READY);
+}
+
+/* Update polling on connection <c>'s file descriptor depending on its current
+ * state as reported in the connection's CO_FL_CURR_* flags, reports of EAGAIN
+ * in CO_FL_WAIT_*, and the sock layer expectations indicated by CO_FL_SOCK_*.
+ * The connection flags are updated with the new flags at the end of the
+ * operation. Polling is totally disabled if an error was reported.
+ */
+void conn_update_sock_polling(struct connection *c);
+
+/* Update polling on connection <c>'s file descriptor depending on its current
+ * state as reported in the connection's CO_FL_CURR_* flags, reports of EAGAIN
+ * in CO_FL_WAIT_*, and the data layer expectations indicated by CO_FL_DATA_*.
+ * The connection flags are updated with the new flags at the end of the
+ * operation. Polling is totally disabled if an error was reported.
+ */
+void conn_update_data_polling(struct connection *c);
+
+/* Refresh the connection's polling flags from its file descriptor status.
+ * This should be called at the beginning of a connection handler.
+ */
+static inline void conn_refresh_polling_flags(struct connection *conn)
+{
+ conn->flags &= ~(CO_FL_WAIT_ROOM | CO_FL_WAIT_DATA);
+
+ if (conn_ctrl_ready(conn)) {
+ unsigned int flags = conn->flags & ~(CO_FL_CURR_RD_ENA | CO_FL_CURR_WR_ENA);
+
+ if (fd_recv_active(conn->t.sock.fd))
+ flags |= CO_FL_CURR_RD_ENA;
+ if (fd_send_active(conn->t.sock.fd))
+ flags |= CO_FL_CURR_WR_ENA;
+ conn->flags = flags;
+ }
+}
+
+/* inspects c->flags and returns non-zero if DATA ENA changes from the CURR ENA
+ * or if the WAIT flags are set with their respective ENA flags. Additionally,
+ * non-zero is also returned if an error was reported on the connection. This
+ * function is used quite often and is inlined. In order to proceed optimally
+ * with very little code and CPU cycles, the bits are arranged so that a change
+ * can be detected by a few left shifts, a xor, and a mask. These operations
+ * detect when W&D are both enabled for either direction, when C&D differ for
+ * either direction and when Error is set. The trick consists in first keeping
+ * only the bits we're interested in, since they don't collide when shifted,
+ * and to perform the AND at the end. In practice, the compiler is able to
+ * replace the last AND with a TEST in boolean conditions. This results in
+ * checks that are done in 4-6 cycles and less than 30 bytes.
+ */
+static inline unsigned int conn_data_polling_changes(const struct connection *c)
+{
+ unsigned int f = c->flags;
+ f &= CO_FL_DATA_WR_ENA | CO_FL_DATA_RD_ENA | CO_FL_CURR_WR_ENA |
+ CO_FL_CURR_RD_ENA | CO_FL_ERROR;
+
+ f = (f ^ (f << 1)) & (CO_FL_CURR_WR_ENA|CO_FL_CURR_RD_ENA); /* test C ^ D */
+ return f & (CO_FL_CURR_WR_ENA | CO_FL_CURR_RD_ENA | CO_FL_ERROR);
+}
+
+/* inspects c->flags and returns non-zero if SOCK ENA changes from the CURR ENA
+ * or if the WAIT flags are set with their respective ENA flags. Additionally,
+ * non-zero is also returned if an error was reported on the connection. This
+ * function is used quite often and is inlined. In order to proceed optimally
+ * with very little code and CPU cycles, the bits are arranged so that a change
+ * can be detected by a few left shifts, a xor, and a mask. These operations
+ * detect when W&S are both enabled for either direction, when C&S differ for
+ * either direction and when Error is set. The trick consists in first keeping
+ * only the bits we're interested in, since they don't collide when shifted,
+ * and to perform the AND at the end. In practice, the compiler is able to
+ * replace the last AND with a TEST in boolean conditions. This results in
+ * checks that are done in 4-6 cycles and less than 30 bytes.
+ */
+static inline unsigned int conn_sock_polling_changes(const struct connection *c)
+{
+ unsigned int f = c->flags;
+ f &= CO_FL_SOCK_WR_ENA | CO_FL_SOCK_RD_ENA | CO_FL_CURR_WR_ENA |
+ CO_FL_CURR_RD_ENA | CO_FL_ERROR;
+
+ f = (f ^ (f << 2)) & (CO_FL_CURR_WR_ENA|CO_FL_CURR_RD_ENA); /* test C ^ S */
+ return f & (CO_FL_CURR_WR_ENA | CO_FL_CURR_RD_ENA | CO_FL_ERROR);
+}
+
+/* Automatically updates polling on connection <c> depending on the DATA flags
+ * if no handshake is in progress.
+ */
+static inline void conn_cond_update_data_polling(struct connection *c)
+{
+ if (!(c->flags & CO_FL_POLL_SOCK) && conn_data_polling_changes(c))
+ conn_update_data_polling(c);
+}
+
+/* Automatically updates polling on connection <c> depending on the SOCK flags
+ * if a handshake is in progress.
+ */
+static inline void conn_cond_update_sock_polling(struct connection *c)
+{
+ if ((c->flags & CO_FL_POLL_SOCK) && conn_sock_polling_changes(c))
+ conn_update_sock_polling(c);
+}
+
+/* Stop all polling on the fd. This might be used when an error is encountered
+ * for example.
+ */
+static inline void conn_stop_polling(struct connection *c)
+{
+ c->flags &= ~(CO_FL_CURR_RD_ENA | CO_FL_CURR_WR_ENA |
+ CO_FL_SOCK_RD_ENA | CO_FL_SOCK_WR_ENA |
+ CO_FL_DATA_RD_ENA | CO_FL_DATA_WR_ENA);
+ fd_stop_both(c->t.sock.fd);
+}
+
+/* Automatically update polling on connection <c> depending on the DATA and
+ * SOCK flags, and on whether a handshake is in progress or not. This may be
+ * called at any moment when there is a doubt about the effectiveness of the
+ * polling state, for instance when entering or leaving the handshake state.
+ */
+static inline void conn_cond_update_polling(struct connection *c)
+{
+ if (unlikely(c->flags & CO_FL_ERROR))
+ conn_stop_polling(c);
+ else if (!(c->flags & CO_FL_POLL_SOCK) && conn_data_polling_changes(c))
+ conn_update_data_polling(c);
+ else if ((c->flags & CO_FL_POLL_SOCK) && conn_sock_polling_changes(c))
+ conn_update_sock_polling(c);
+}
+
+/***** Event manipulation primitives for use by DATA I/O callbacks *****/
+/* The __conn_* versions do not propagate to lower layers and are only meant
+ * to be used by handlers called by the connection handler. The other ones
+ * may be used anywhere.
+ */
+static inline void __conn_data_want_recv(struct connection *c)
+{
+ c->flags |= CO_FL_DATA_RD_ENA;
+}
+
+static inline void __conn_data_stop_recv(struct connection *c)
+{
+ c->flags &= ~CO_FL_DATA_RD_ENA;
+}
+
+static inline void __conn_data_want_send(struct connection *c)
+{
+ c->flags |= CO_FL_DATA_WR_ENA;
+}
+
+static inline void __conn_data_stop_send(struct connection *c)
+{
+ c->flags &= ~CO_FL_DATA_WR_ENA;
+}
+
+static inline void __conn_data_stop_both(struct connection *c)
+{
+ c->flags &= ~(CO_FL_DATA_WR_ENA | CO_FL_DATA_RD_ENA);
+}
+
+static inline void conn_data_want_recv(struct connection *c)
+{
+ __conn_data_want_recv(c);
+ conn_cond_update_data_polling(c);
+}
+
+static inline void conn_data_stop_recv(struct connection *c)
+{
+ __conn_data_stop_recv(c);
+ conn_cond_update_data_polling(c);
+}
+
+static inline void conn_data_want_send(struct connection *c)
+{
+ __conn_data_want_send(c);
+ conn_cond_update_data_polling(c);
+}
+
+static inline void conn_data_stop_send(struct connection *c)
+{
+ __conn_data_stop_send(c);
+ conn_cond_update_data_polling(c);
+}
+
+static inline void conn_data_stop_both(struct connection *c)
+{
+ __conn_data_stop_both(c);
+ conn_cond_update_data_polling(c);
+}
+
+/***** Event manipulation primitives for use by handshake I/O callbacks *****/
+/* The __conn_* versions do not propagate to lower layers and are only meant
+ * to be used by handlers called by the connection handler. The other ones
+ * may be used anywhere.
+ */
+static inline void __conn_sock_want_recv(struct connection *c)
+{
+ c->flags |= CO_FL_SOCK_RD_ENA;
+}
+
+static inline void __conn_sock_stop_recv(struct connection *c)
+{
+ c->flags &= ~CO_FL_SOCK_RD_ENA;
+}
+
+static inline void __conn_sock_want_send(struct connection *c)
+{
+ c->flags |= CO_FL_SOCK_WR_ENA;
+}
+
+static inline void __conn_sock_stop_send(struct connection *c)
+{
+ c->flags &= ~CO_FL_SOCK_WR_ENA;
+}
+
+static inline void __conn_sock_stop_both(struct connection *c)
+{
+ c->flags &= ~(CO_FL_SOCK_WR_ENA | CO_FL_SOCK_RD_ENA);
+}
+
+static inline void conn_sock_want_recv(struct connection *c)
+{
+ __conn_sock_want_recv(c);
+ conn_cond_update_sock_polling(c);
+}
+
+static inline void conn_sock_stop_recv(struct connection *c)
+{
+ __conn_sock_stop_recv(c);
+ conn_cond_update_sock_polling(c);
+}
+
+static inline void conn_sock_want_send(struct connection *c)
+{
+ __conn_sock_want_send(c);
+ conn_cond_update_sock_polling(c);
+}
+
+static inline void conn_sock_stop_send(struct connection *c)
+{
+ __conn_sock_stop_send(c);
+ conn_cond_update_sock_polling(c);
+}
+
+static inline void conn_sock_stop_both(struct connection *c)
+{
+ __conn_sock_stop_both(c);
+ conn_cond_update_sock_polling(c);
+}
+
+/* shutdown management */
+static inline void conn_sock_read0(struct connection *c)
+{
+ c->flags |= CO_FL_SOCK_RD_SH;
+ __conn_sock_stop_recv(c);
+ /* we don't risk keeping ports unusable if we received the
+ * zero-length read (read0) from the other side.
+ */
+ if (conn_ctrl_ready(c))
+ fdtab[c->t.sock.fd].linger_risk = 0;
+}
+
+static inline void conn_data_read0(struct connection *c)
+{
+ c->flags |= CO_FL_DATA_RD_SH;
+ __conn_data_stop_recv(c);
+}
+
+static inline void conn_sock_shutw(struct connection *c)
+{
+ c->flags |= CO_FL_SOCK_WR_SH;
+ __conn_sock_stop_send(c);
+ if (conn_ctrl_ready(c))
+ shutdown(c->t.sock.fd, SHUT_WR);
+}
+
+static inline void conn_data_shutw(struct connection *c)
+{
+ c->flags |= CO_FL_DATA_WR_SH;
+ __conn_data_stop_send(c);
+
+ /* clean data-layer shutdown */
+ if (c->xprt && c->xprt->shutw)
+ c->xprt->shutw(c, 1);
+}
+
+static inline void conn_data_shutw_hard(struct connection *c)
+{
+ c->flags |= CO_FL_DATA_WR_SH;
+ __conn_data_stop_send(c);
+
+ /* unclean data-layer shutdown */
+ if (c->xprt && c->xprt->shutw)
+ c->xprt->shutw(c, 0);
+}
+
+/* detect sock->data read0 transition */
+static inline int conn_data_read0_pending(struct connection *c)
+{
+ return (c->flags & (CO_FL_DATA_RD_SH | CO_FL_SOCK_RD_SH)) == CO_FL_SOCK_RD_SH;
+}
+
+/* detect data->sock shutw transition */
+static inline int conn_sock_shutw_pending(struct connection *c)
+{
+ return (c->flags & (CO_FL_DATA_WR_SH | CO_FL_SOCK_WR_SH)) == CO_FL_DATA_WR_SH;
+}
+
+/* prepares a connection to work with protocol <proto> and transport <xprt>.
+ * The transport's context is initialized as well.
+ */
+static inline void conn_prepare(struct connection *conn, const struct protocol *proto, const struct xprt_ops *xprt)
+{
+ conn->ctrl = proto;
+ conn->xprt = xprt;
+ conn->xprt_st = 0;
+ conn->xprt_ctx = NULL;
+}
+
+/* Initializes all required fields for a new connection. Note that it does the
+ * minimum acceptable initialization for a connection that already exists and
+ * is about to be reused. It also leaves the addresses untouched, which makes
+ * it usable across connection retries to reset a connection to a known state.
+ */
+static inline void conn_init(struct connection *conn)
+{
+ conn->obj_type = OBJ_TYPE_CONN;
+ conn->flags = CO_FL_NONE;
+ conn->data = NULL;
+ conn->owner = NULL;
+ conn->send_proxy_ofs = 0;
+ conn->t.sock.fd = -1; /* just to help with debugging */
+ conn->err_code = CO_ER_NONE;
+ conn->target = NULL;
+ conn->proxy_netns = NULL;
+ LIST_INIT(&conn->list);
+}
+
+/* Tries to allocate a new connection and initializes its main fields. The
+ * connection is returned on success, NULL on failure. The connection must
+ * be released using pool_free2() or conn_free().
+ */
+static inline struct connection *conn_new()
+{
+ struct connection *conn;
+
+ conn = pool_alloc2(pool2_connection);
+ if (likely(conn != NULL))
+ conn_init(conn);
+ return conn;
+}
+
+/* Releases a connection previously allocated by conn_new() */
+static inline void conn_free(struct connection *conn)
+{
+ pool_free2(pool2_connection, conn);
+}
+
+
+/* Retrieves the connection's source address */
+static inline void conn_get_from_addr(struct connection *conn)
+{
+ if (conn->flags & CO_FL_ADDR_FROM_SET)
+ return;
+
+ if (!conn_ctrl_ready(conn) || !conn->ctrl->get_src)
+ return;
+
+ if (conn->ctrl->get_src(conn->t.sock.fd, (struct sockaddr *)&conn->addr.from,
+ sizeof(conn->addr.from),
+ obj_type(conn->target) != OBJ_TYPE_LISTENER) == -1)
+ return;
+ conn->flags |= CO_FL_ADDR_FROM_SET;
+}
+
+/* Retrieves the connection's original destination address */
+static inline void conn_get_to_addr(struct connection *conn)
+{
+ if (conn->flags & CO_FL_ADDR_TO_SET)
+ return;
+
+ if (!conn_ctrl_ready(conn) || !conn->ctrl->get_dst)
+ return;
+
+ if (conn->ctrl->get_dst(conn->t.sock.fd, (struct sockaddr *)&conn->addr.to,
+ sizeof(conn->addr.to),
+ obj_type(conn->target) != OBJ_TYPE_LISTENER) == -1)
+ return;
+ conn->flags |= CO_FL_ADDR_TO_SET;
+}
+
+/* Attaches a connection to an owner and assigns a data layer */
+static inline void conn_attach(struct connection *conn, void *owner, const struct data_cb *data)
+{
+ conn->data = data;
+ conn->owner = owner;
+}
+
+/* returns a human-readable error message for conn->err_code, or NULL if the code
+ * is unknown.
+ */
+static inline const char *conn_err_code_str(struct connection *c)
+{
+ switch (c->err_code) {
+ case CO_ER_NONE: return "Success";
+
+ case CO_ER_CONF_FDLIM: return "Reached configured maxconn value";
+ case CO_ER_PROC_FDLIM: return "Too many sockets on the process";
+ case CO_ER_SYS_FDLIM: return "Too many sockets on the system";
+ case CO_ER_SYS_MEMLIM: return "Out of system buffers";
+ case CO_ER_NOPROTO: return "Protocol or address family not supported";
+ case CO_ER_SOCK_ERR: return "General socket error";
+ case CO_ER_PORT_RANGE: return "Source port range exhausted";
+ case CO_ER_CANT_BIND: return "Can't bind to source address";
+ case CO_ER_FREE_PORTS: return "Out of local source ports on the system";
+ case CO_ER_ADDR_INUSE: return "Local source address already in use";
+
+ case CO_ER_PRX_EMPTY: return "Connection closed while waiting for PROXY protocol header";
+ case CO_ER_PRX_ABORT: return "Connection error while waiting for PROXY protocol header";
+ case CO_ER_PRX_TIMEOUT: return "Timeout while waiting for PROXY protocol header";
+ case CO_ER_PRX_TRUNCATED: return "Truncated PROXY protocol header received";
+ case CO_ER_PRX_NOT_HDR: return "Received something which does not look like a PROXY protocol header";
+ case CO_ER_PRX_BAD_HDR: return "Received an invalid PROXY protocol header";
+ case CO_ER_PRX_BAD_PROTO: return "Received an unhandled protocol in the PROXY protocol header";
+ case CO_ER_SSL_EMPTY: return "Connection closed during SSL handshake";
+ case CO_ER_SSL_ABORT: return "Connection error during SSL handshake";
+ case CO_ER_SSL_TIMEOUT: return "Timeout during SSL handshake";
+ case CO_ER_SSL_TOO_MANY: return "Too many SSL connections";
+ case CO_ER_SSL_NO_MEM: return "Out of memory when initializing an SSL connection";
+ case CO_ER_SSL_RENEG: return "Rejected a client-initiated SSL renegotiation attempt";
+ case CO_ER_SSL_CA_FAIL: return "SSL client CA chain cannot be verified";
+ case CO_ER_SSL_CRT_FAIL: return "SSL client certificate not trusted";
+ case CO_ER_SSL_HANDSHAKE: return "SSL handshake failure";
+ case CO_ER_SSL_HANDSHAKE_HB: return "SSL handshake failure after heartbeat";
+ case CO_ER_SSL_KILLED_HB: return "Stopped a TLSv1 heartbeat attack (CVE-2014-0160)";
+ case CO_ER_SSL_NO_TARGET: return "Attempt to use SSL on an unknown target (internal error)";
+ }
+ return NULL;
+}
+
+#endif /* _PROTO_CONNECTION_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/dns.h
+ * This file provides functions related to the DNS protocol
+ *
+ * Copyright (C) 2014 Baptiste Assmann <bedis9@gmail.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_DNS_H
+#define _PROTO_DNS_H
+
+#include <types/dns.h>
+#include <types/proto_udp.h>
+
+char *dns_str_to_dn_label(const char *string, char *dn, int dn_len);
+int dns_str_to_dn_label_len(const char *string);
+int dns_hostname_validation(const char *string, char **err);
+int dns_build_query(int query_id, int query_type, char *hostname_dn, int hostname_dn_len, char *buf, int bufsize);
+struct task *dns_process_resolve(struct task *t);
+int dns_init_resolvers(void);
+uint16_t dns_rnd16(void);
+int dns_validate_dns_response(unsigned char *resp, unsigned char *bufend, char *dn_name, int dn_name_len);
+int dns_get_ip_from_response(unsigned char *resp, unsigned char *resp_end,
+ char *dn_name, int dn_name_len, void *currentip,
+ short currentip_sin_family, int family_priority,
+ void **newip, short *newip_sin_family);
+void dns_resolve_send(struct dgram_conn *dgram);
+void dns_resolve_recv(struct dgram_conn *dgram);
+int dns_send_query(struct dns_resolution *resolution);
+void dns_print_current_resolutions(struct dns_resolvers *resolvers);
+void dns_update_resolvers_timeout(struct dns_resolvers *resolvers);
+void dns_reset_resolution(struct dns_resolution *resolution);
+int dns_check_resolution_queue(struct dns_resolvers *resolvers);
+int dns_response_get_query_id(unsigned char *resp);
+
+#endif // _PROTO_DNS_H
--- /dev/null
+/*
+ * include/proto/dumpstats.h
+ * This file contains definitions of some primitives dedicated to
+ * statistics output.
+ *
+ * Copyright (C) 2000-2011 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_DUMPSTATS_H
+#define _PROTO_DUMPSTATS_H
+
+#include <common/config.h>
+#include <types/applet.h>
+#include <types/stream_interface.h>
+
+/* Flags for applet.ctx.stats.flags */
+#define STAT_FMT_HTML 0x00000001 /* dump the stats in HTML format */
+#define STAT_HIDE_DOWN 0x00000008 /* hide 'down' servers in the stats page */
+#define STAT_NO_REFRESH 0x00000010 /* do not automatically refresh the stats page */
+#define STAT_ADMIN 0x00000020 /* indicate a stats admin level */
+#define STAT_CHUNKED 0x00000040 /* use chunked encoding (HTTP/1.1) */
+#define STAT_BOUND 0x00800000 /* bound statistics to selected proxies/types/services */
+
+#define STATS_TYPE_FE 0
+#define STATS_TYPE_BE 1
+#define STATS_TYPE_SV 2
+#define STATS_TYPE_SO 3
+
+/* HTTP stats : applet.st0 */
+enum {
+ STAT_HTTP_DONE = 0, /* finished */
+ STAT_HTTP_HEAD, /* send headers before dump */
+ STAT_HTTP_DUMP, /* dumping stats */
+ STAT_HTTP_POST, /* waiting for POST data */
+ STAT_HTTP_LAST, /* sending last chunk of response */
+};
+
+/* HTML form to limit output scope */
+#define STAT_SCOPE_TXT_MAXLEN 20 /* max len for scope substring */
+#define STAT_SCOPE_INPUT_NAME "scope" /* name of the scope <input> in the HTML form */
+#define STAT_SCOPE_PATTERN "?" STAT_SCOPE_INPUT_NAME "="
+
+extern struct applet http_stats_applet;
+
+void stats_io_handler(struct stream_interface *si);
+
+
+#endif /* _PROTO_DUMPSTATS_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/fd.h
+ * File descriptors states.
+ *
+ * Copyright (C) 2000-2014 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_FD_H
+#define _PROTO_FD_H
+
+#include <stdio.h>
+#include <sys/time.h>
+#include <sys/types.h>
+#include <unistd.h>
+
+#include <common/config.h>
+#include <types/fd.h>
+
+/* public variables */
+extern unsigned int *fd_cache; // FD events cache
+extern unsigned int *fd_updt; // FD updates list
+extern int fd_cache_num; // number of events in the cache
+extern int fd_nbupdt; // number of updates in the list
+
+/* Deletes an FD from the fdsets, and recomputes the maxfd limit.
+ * The file descriptor is also closed.
+ */
+void fd_delete(int fd);
+
+/* disable the specified poller */
+void disable_poller(const char *poller_name);
+
+/*
+ * Initializes the pollers until the best one is found.
+ * Returns 0 if none works, otherwise 1.
+ * The pollers register themselves just before main() is called.
+ */
+int init_pollers();
+
+/*
+ * Deinitialize the pollers.
+ */
+void deinit_pollers();
+
+/*
+ * Some pollers may lose their connection after a fork(). It may be necessary
+ * to re-initialize part of them again. Returns 0 in case of failure,
+ * otherwise 1. The fork() function may be NULL if unused. In case of error,
+ * the current poller is destroyed and the caller is responsible for trying
+ * another one by calling init_pollers() again.
+ */
+int fork_poller();
+
+/*
+ * Lists the known pollers on <out>.
+ * Should be performed only before initialization.
+ */
+int list_pollers(FILE *out);
+
+/*
+ * Runs the polling loop
+ */
+void run_poller();
+
+/* Scan and process the cached events. This should be called right after
+ * the poller.
+ */
+void fd_process_cached_events();
+
+/* Mark fd <fd> as updated for polling and allocate an entry in the update list
+ * for this if it was not already there. This can be done at any time.
+ */
+static inline void updt_fd_polling(const int fd)
+{
+ if (fdtab[fd].updated)
+ /* already scheduled for update */
+ return;
+ fdtab[fd].updated = 1;
+ fd_updt[fd_nbupdt++] = fd;
+}
+
+
+/* Allocates a cache entry for a file descriptor if it does not yet have one.
+ * This can be done at any time.
+ */
+static inline void fd_alloc_cache_entry(const int fd)
+{
+ if (fdtab[fd].cache)
+ return;
+ fd_cache_num++;
+ fdtab[fd].cache = fd_cache_num;
+ fd_cache[fd_cache_num-1] = fd;
+}
+
+/* Removes entry used by fd <fd> from the FD cache and replaces it with the
+ * last one. The fdtab.cache is adjusted to match the back reference if needed.
+ * If the fd has no entry assigned, return immediately.
+ */
+static inline void fd_release_cache_entry(int fd)
+{
+ unsigned int pos;
+
+ pos = fdtab[fd].cache;
+ if (!pos)
+ return;
+ fdtab[fd].cache = 0;
+ fd_cache_num--;
+ if (likely(pos <= fd_cache_num)) {
+ /* was not the last entry */
+ fd = fd_cache[fd_cache_num];
+ fd_cache[pos - 1] = fd;
+ fdtab[fd].cache = pos;
+ }
+}
+
+/* Computes the new polled status based on the active and ready statuses, for
+ * each direction. This is meant to be used by pollers while processing updates.
+ */
+static inline int fd_compute_new_polled_status(int state)
+{
+ if (state & FD_EV_ACTIVE_R) {
+ if (!(state & FD_EV_READY_R))
+ state |= FD_EV_POLLED_R;
+ }
+ else
+ state &= ~FD_EV_POLLED_R;
+
+ if (state & FD_EV_ACTIVE_W) {
+ if (!(state & FD_EV_READY_W))
+ state |= FD_EV_POLLED_W;
+ }
+ else
+ state &= ~FD_EV_POLLED_W;
+
+ return state;
+}
+
+/* This function automatically enables/disables caching for an entry depending
+ * on its state, and also possibly creates an update entry so that the poller
+ * does its job as well. It is only called on state changes.
+ */
+static inline void fd_update_cache(int fd)
+{
+ /* 3 states for each direction require a polling update */
+ if ((fdtab[fd].state & (FD_EV_POLLED_R | FD_EV_ACTIVE_R)) == FD_EV_POLLED_R ||
+ (fdtab[fd].state & (FD_EV_POLLED_R | FD_EV_READY_R | FD_EV_ACTIVE_R)) == FD_EV_ACTIVE_R ||
+ (fdtab[fd].state & (FD_EV_POLLED_W | FD_EV_ACTIVE_W)) == FD_EV_POLLED_W ||
+ (fdtab[fd].state & (FD_EV_POLLED_W | FD_EV_READY_W | FD_EV_ACTIVE_W)) == FD_EV_ACTIVE_W)
+ updt_fd_polling(fd);
+
+ /* only READY and ACTIVE states (the two with both flags set) require a cache entry */
+ if (((fdtab[fd].state & (FD_EV_READY_R | FD_EV_ACTIVE_R)) == (FD_EV_READY_R | FD_EV_ACTIVE_R)) ||
+ ((fdtab[fd].state & (FD_EV_READY_W | FD_EV_ACTIVE_W)) == (FD_EV_READY_W | FD_EV_ACTIVE_W))) {
+ fd_alloc_cache_entry(fd);
+ }
+ else {
+ fd_release_cache_entry(fd);
+ }
+}
+
+/*
+ * returns the FD's recv state (FD_EV_*)
+ */
+static inline int fd_recv_state(const int fd)
+{
+ return ((unsigned)fdtab[fd].state >> (4 * DIR_RD)) & FD_EV_STATUS;
+}
+
+/*
+ * returns true if the FD is active for recv
+ */
+static inline int fd_recv_active(const int fd)
+{
+ return (unsigned)fdtab[fd].state & FD_EV_ACTIVE_R;
+}
+
+/*
+ * returns true if the FD is ready for recv
+ */
+static inline int fd_recv_ready(const int fd)
+{
+ return (unsigned)fdtab[fd].state & FD_EV_READY_R;
+}
+
+/*
+ * returns true if the FD is polled for recv
+ */
+static inline int fd_recv_polled(const int fd)
+{
+ return (unsigned)fdtab[fd].state & FD_EV_POLLED_R;
+}
+
+/*
+ * returns the FD's send state (FD_EV_*)
+ */
+static inline int fd_send_state(const int fd)
+{
+ return ((unsigned)fdtab[fd].state >> (4 * DIR_WR)) & FD_EV_STATUS;
+}
+
+/*
+ * returns true if the FD is active for send
+ */
+static inline int fd_send_active(const int fd)
+{
+ return (unsigned)fdtab[fd].state & FD_EV_ACTIVE_W;
+}
+
+/*
+ * returns true if the FD is ready for send
+ */
+static inline int fd_send_ready(const int fd)
+{
+ return (unsigned)fdtab[fd].state & FD_EV_READY_W;
+}
+
+/*
+ * returns true if the FD is polled for send
+ */
+static inline int fd_send_polled(const int fd)
+{
+ return (unsigned)fdtab[fd].state & FD_EV_POLLED_W;
+}
+
+/* Disable processing recv events on fd <fd> */
+static inline void fd_stop_recv(int fd)
+{
+ if (!((unsigned int)fdtab[fd].state & FD_EV_ACTIVE_R))
+ return; /* already disabled */
+ fdtab[fd].state &= ~FD_EV_ACTIVE_R;
+ fd_update_cache(fd); /* need an update entry to change the state */
+}
+
+/* Disable processing send events on fd <fd> */
+static inline void fd_stop_send(int fd)
+{
+ if (!((unsigned int)fdtab[fd].state & FD_EV_ACTIVE_W))
+ return; /* already disabled */
+ fdtab[fd].state &= ~FD_EV_ACTIVE_W;
+ fd_update_cache(fd); /* need an update entry to change the state */
+}
+
+/* Disable processing of events on fd <fd> for both directions. */
+static inline void fd_stop_both(int fd)
+{
+ if (!((unsigned int)fdtab[fd].state & FD_EV_ACTIVE_RW))
+ return; /* already disabled */
+ fdtab[fd].state &= ~FD_EV_ACTIVE_RW;
+ fd_update_cache(fd); /* need an update entry to change the state */
+}
+
+/* Report that FD <fd> cannot receive anymore without polling (EAGAIN detected). */
+static inline void fd_cant_recv(const int fd)
+{
+ if (!(((unsigned int)fdtab[fd].state) & FD_EV_READY_R))
+ return; /* already marked as blocked */
+ fdtab[fd].state &= ~FD_EV_READY_R;
+ fd_update_cache(fd);
+}
+
+/* Report that FD <fd> can receive again without polling. */
+static inline void fd_may_recv(const int fd)
+{
+ if (((unsigned int)fdtab[fd].state) & FD_EV_READY_R)
+ return; /* already marked as ready */
+ fdtab[fd].state |= FD_EV_READY_R;
+ fd_update_cache(fd);
+}
+
+/* Disable readiness when polled. This is useful to interrupt reading when it
+ * is suspected that the end of data might have been reached (eg: short read).
+ * This can only be done using level-triggered pollers, so if any edge-triggered
+ * is ever implemented, a test will have to be added here.
+ */
+static inline void fd_done_recv(const int fd)
+{
+ if (fd_recv_polled(fd))
+ fd_cant_recv(fd);
+}
+
+/* Report that FD <fd> cannot send anymore without polling (EAGAIN detected). */
+static inline void fd_cant_send(const int fd)
+{
+ if (!(((unsigned int)fdtab[fd].state) & FD_EV_READY_W))
+ return; /* already marked as blocked */
+ fdtab[fd].state &= ~FD_EV_READY_W;
+ fd_update_cache(fd);
+}
+
+/* Report that FD <fd> can send again without polling. */
+static inline void fd_may_send(const int fd)
+{
+ if (((unsigned int)fdtab[fd].state) & FD_EV_READY_W)
+ return; /* already marked as ready */
+ fdtab[fd].state |= FD_EV_READY_W;
+ fd_update_cache(fd);
+}
+
+/* Prepare FD <fd> to try to receive */
+static inline void fd_want_recv(int fd)
+{
+ if (((unsigned int)fdtab[fd].state & FD_EV_ACTIVE_R))
+ return; /* already enabled */
+ fdtab[fd].state |= FD_EV_ACTIVE_R;
+ fd_update_cache(fd); /* need an update entry to change the state */
+}
+
+/* Prepare FD <fd> to try to send */
+static inline void fd_want_send(int fd)
+{
+ if (((unsigned int)fdtab[fd].state & FD_EV_ACTIVE_W))
+ return; /* already enabled */
+ fdtab[fd].state |= FD_EV_ACTIVE_W;
+ fd_update_cache(fd); /* need an update entry to change the state */
+}
+
+/* Prepares <fd> for being polled */
+static inline void fd_insert(int fd)
+{
+ fdtab[fd].ev = 0;
+ fdtab[fd].new = 1;
+ fdtab[fd].linger_risk = 0;
+ fdtab[fd].cloned = 0;
+ if (fd + 1 > maxfd)
+ maxfd = fd + 1;
+}
+
+
+#endif /* _PROTO_FD_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/freq_ctr.h
+ * This file contains macros and inline functions for frequency counters.
+ *
+ * Copyright (C) 2000-2014 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_FREQ_CTR_H
+#define _PROTO_FREQ_CTR_H
+
+#include <common/config.h>
+#include <common/time.h>
+#include <types/freq_ctr.h>
+
+/* Rotate a frequency counter when current period is over. Must not be called
+ * during a valid period. It is important that it correctly initializes a null
+ * area.
+ */
+static inline void rotate_freq_ctr(struct freq_ctr *ctr)
+{
+ ctr->prev_ctr = ctr->curr_ctr;
+ if (likely(now.tv_sec - ctr->curr_sec != 1)) {
+ /* we missed more than one second */
+ ctr->prev_ctr = 0;
+ }
+ ctr->curr_sec = now.tv_sec;
+ ctr->curr_ctr = 0; /* leave it at the end to help gcc optimize it away */
+}
+
+/* Update a frequency counter by <inc> incremental units. It is automatically
+ * rotated if the period is over. It is important that it correctly initializes
+ * a null area.
+ */
+static inline void update_freq_ctr(struct freq_ctr *ctr, unsigned int inc)
+{
+ if (likely(ctr->curr_sec == now.tv_sec)) {
+ ctr->curr_ctr += inc;
+ return;
+ }
+ rotate_freq_ctr(ctr);
+ ctr->curr_ctr = inc;
+ /* Note: later we may want to propagate the update to other counters */
+}
+
+/* Rotate a frequency counter when current period is over. Must not be called
+ * during a valid period. It is important that it correctly initializes a null
+ * area. This one works on frequency counters which have a period different
+ * from one second.
+ */
+static inline void rotate_freq_ctr_period(struct freq_ctr_period *ctr,
+ unsigned int period)
+{
+ ctr->prev_ctr = ctr->curr_ctr;
+ ctr->curr_tick += period;
+ if (likely(now_ms - ctr->curr_tick >= period)) {
+ /* we missed at least two periods */
+ ctr->prev_ctr = 0;
+ ctr->curr_tick = now_ms;
+ }
+ ctr->curr_ctr = 0; /* leave it at the end to help gcc optimize it away */
+}
+
+/* Update a frequency counter by <inc> incremental units. It is automatically
+ * rotated if the period is over. It is important that it correctly initializes
+ * a null area. This one works on frequency counters which have a period
+ * different from one second.
+ */
+static inline void update_freq_ctr_period(struct freq_ctr_period *ctr,
+ unsigned int period, unsigned int inc)
+{
+ if (likely(now_ms - ctr->curr_tick < period)) {
+ ctr->curr_ctr += inc;
+ return;
+ }
+ rotate_freq_ctr_period(ctr, period);
+ ctr->curr_ctr = inc;
+ /* Note: later we may want to propagate the update to other counters */
+}
+
+/* Read a frequency counter taking history into account for missing time in
+ * current period.
+ */
+unsigned int read_freq_ctr(struct freq_ctr *ctr);
+
+/* returns the number of remaining events that can occur on this freq counter
+ * while respecting <freq> and taking into account that <pend> events are
+ * already known to be pending. Returns 0 if limit was reached.
+ */
+unsigned int freq_ctr_remain(struct freq_ctr *ctr, unsigned int freq, unsigned int pend);
+
+/* return the expected wait time in ms before the next event may occur,
+ * respecting frequency <freq>, and assuming there may already be some pending
+ * events. It returns zero if we can proceed immediately, otherwise the wait
+ * time, which will be rounded down to the nearest 1ms for better accuracy,
+ * with a minimum of one ms.
+ */
+unsigned int next_event_delay(struct freq_ctr *ctr, unsigned int freq, unsigned int pend);
+
+/* process freq counters over configurable periods */
+unsigned int read_freq_ctr_period(struct freq_ctr_period *ctr, unsigned int period);
+unsigned int freq_ctr_remain_period(struct freq_ctr_period *ctr, unsigned int period,
+ unsigned int freq, unsigned int pend);
+
+/* While the functions above report average event counts per period, we are
+ * also interested in average values per event. For this we use a different
+ * method. The principle is to rely on a long tail which sums the new value
+ * with a fraction of the previous value, resulting in a sliding window of
+ * infinite length depending on the precision we're interested in.
+ *
+ * The idea is that we always keep (N-1)/N of the sum and add the new sampled
+ * value. The sum over N values can be computed with a simple program for a
+ * constant value 1 at each iteration :
+ *
+ * N
+ * ,---
+ * \ N - 1 e - 1
+ * > ( --------- )^x ~= N * -----
+ * / N e
+ * '---
+ * x = 1
+ *
+ * Note: I'm not sure how to demonstrate this but at least this is easily
+ * verified with a simple program: the sum equals N * 0.632120 for any N
+ * moderately large (tens to hundreds).
+ *
+ * Inserting a constant sample value V here simply results in :
+ *
+ * sum = V * N * (e - 1) / e
+ *
+ * But we don't want to integrate over a small period, but infinitely. Let's
+ * cut the infinity in P periods of N values. Each period M is exactly the same
+ * as period M-1 with a factor of ((N-1)/N)^N applied. A test shows that given a
+ * large N :
+ *
+ * N - 1 1
+ * ( ------- )^N ~= ---
+ * N e
+ *
+ * Our sum is now a sum of each factor times :
+ *
+ * N*P P
+ * ,--- ,---
+ * \ N - 1 e - 1 \ 1
+ * > v ( --------- )^x ~= VN * ----- * > ---
+ * / N e / e^x
+ * '--- '---
+ * x = 1 x = 0
+ *
+ * For P "large enough", in tests we get this :
+ *
+ * P
+ * ,---
+ * \ 1 e
+ * > --- ~= -----
+ * / e^x e - 1
+ * '---
+ * x = 0
+ *
+ * This simplifies the sum above :
+ *
+ * N*P
+ * ,---
+ * \ N - 1
+ * > v ( --------- )^x = VN
+ * / N
+ * '---
+ * x = 1
+ *
+ * So basically, by summing the values while applying an (N-1)/N factor at
+ * each step, we simply get N times the value over the long term, so we can
+ * recover the constant value V by dividing by N.
+ *
+ * A value added at the entry of the sliding window of N values will thus be
+ * reduced to 1/e or 36.7% after N terms have been added. After a second batch,
+ * it will only be 1/e^2, or 13.5%, and so on. So practically speaking, each
+ * old period of N values represents only a quickly fading ratio of the global
+ * sum :
+ *
+ * period ratio
+ * 1 36.7%
+ * 2 13.5%
+ * 3 4.98%
+ * 4 1.83%
+ * 5 0.67%
+ * 6 0.25%
+ * 7 0.09%
+ * 8 0.033%
+ * 9 0.012%
+ * 10 0.0045%
+ *
+ * So after 10N samples, the initial value has already faded out by a factor of
+ * 22026, which is quite fast. If the sliding window is 1024 samples wide, it
+ * means that a sample will only count for 1/22k of its initial value after 10k
+ * samples went after it, which results in half of the value it would represent
+ * using an arithmetic mean. The benefit of this method is that it's very cheap
+ * in terms of computations when N is a power of two. This is very well suited
+ * to record response times as large values will fade out faster than with an
+ * arithmetic mean and will depend on sample count and not time.
+ *
+ * Demonstrating all the above assumptions with maths instead of a program is
+ * left as an exercise for the reader.
+ */
+
+/* Adds sample value <v> to sliding window sum <sum> configured for <n> samples.
+ * The new sum is returned. Better if <n> is a power of two.
+ */
+static inline unsigned int swrate_add(unsigned int *sum, unsigned int n, unsigned int v)
+{
+ return *sum = *sum * (n - 1) / n + v;
+}
+
+/* Returns the average sample value for the sum <sum> over a sliding window of
+ * <n> samples. Better if <n> is a power of two. It must be the same <n> as the
+ * one used above in all additions.
+ */
+static inline unsigned int swrate_avg(unsigned int sum, unsigned int n)
+{
+ return (sum + n - 1) / n;
+}
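As a quick sanity check of the two helpers above: feeding a constant sample value converges to a fixed point, and the upward rounding in swrate_avg() exactly compensates the truncation in swrate_add(), so the constant is recovered exactly. The standalone copies below (the demo_ names are ours) illustrate this:

```c
/* Standalone copies of the swrate helpers above, so the fixed-point
 * behaviour can be checked in isolation.
 */
static unsigned int demo_swrate_add(unsigned int *sum, unsigned int n, unsigned int v)
{
	return *sum = *sum * (n - 1) / n + v;
}

static unsigned int demo_swrate_avg(unsigned int sum, unsigned int n)
{
	return (sum + n - 1) / n;
}

/* Feeds <iters> samples of constant value <v> into a fresh sum and
 * returns the resulting average over a window of <n> samples.
 */
static unsigned int demo_constant_avg(unsigned int n, unsigned int v, unsigned int iters)
{
	unsigned int sum = 0;
	unsigned int i;

	for (i = 0; i < iters; i++)
		demo_swrate_add(&sum, n, v);
	return demo_swrate_avg(sum, n);
}
```

For example, 10N constant samples of 100 over a window of N = 1024 yield an average of exactly 100.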
+
+#endif /* _PROTO_FREQ_CTR_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/frontend.h
+ * This file declares frontend-specific functions.
+ *
+ * Copyright (C) 2000-2011 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_FRONTEND_H
+#define _PROTO_FRONTEND_H
+
+#include <common/config.h>
+#include <types/stream.h>
+
+int frontend_accept(struct stream *s);
+
+
+#endif /* _PROTO_FRONTEND_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/hdr_idx.h
+ * This file defines function prototypes for fast header indexation.
+ *
+ * Copyright (C) 2000-2011 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_HDR_IDX_H
+#define _PROTO_HDR_IDX_H
+
+#include <common/config.h>
+#include <types/hdr_idx.h>
+
+extern struct pool_head *pool2_hdr_idx;
+
+/*
+ * Initialize the list pointers.
+ * list->size must already be set. If list->size is set and list->v is
+ * non-null, list->v is also initialized.
+ */
+static inline void hdr_idx_init(struct hdr_idx *list)
+{
+ if (list->size && list->v) {
+ register struct hdr_idx_elem e = { .len=0, .cr=0, .next=0};
+ list->v[0] = e;
+ }
+ list->tail = 0;
+ list->used = list->last = 1;
+}
+
+/*
+ * Return index of the first entry in the list. Usually, it means the index of
+ * the first header just after the request or response. If zero is returned, it
+ * means that the list is empty.
+ */
+static inline int hdr_idx_first_idx(struct hdr_idx *list)
+{
+ return list->v[0].next;
+}
+
+/*
+ * Return position of the first entry in the list. Usually, it means the
+ * position of the first header just after the request, but it can also be the
+ * end of the headers if the request has no header. hdr_idx_first_idx() should
+ * be checked before to ensure there is a valid header.
+ */
+static inline int hdr_idx_first_pos(struct hdr_idx *list)
+{
+ return list->v[0].len + list->v[0].cr + 1;
+}
+
+/*
+ * Sets the information about the start line. Its length and the presence of
+ * the CR are registered so that hdr_idx_first_pos() knows exactly where to
+ * find the first header.
+ */
+static inline void hdr_idx_set_start(struct hdr_idx *list, int len, int cr)
+{
+ list->v[0].len = len;
+ list->v[0].cr = cr;
+}
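The start-line bookkeeping above is simple enough to show standalone: with a start line of <len> bytes, an optional CR and the terminating LF, the first header begins at offset len + cr + 1. The demo_ type below is a simplified stand-in for the one in types/hdr_idx.h (an assumption, since that header is not shown here):

```c
/* Simplified stand-in for the element type from types/hdr_idx.h, reduced
 * to the fields used by the inline functions above.
 */
struct demo_hdr_idx_elem {
	unsigned int len;
	unsigned int cr;
	unsigned int next;
};

/* Mirrors hdr_idx_set_start() followed by hdr_idx_first_pos(): records the
 * start line's length and CR flag, then returns where the first header
 * starts (just past the start line, its optional CR and the LF).
 */
static int demo_first_pos(int start_len, int cr)
{
	struct demo_hdr_idx_elem v0;

	v0.len = start_len;
	v0.cr = cr;
	v0.next = 0;
	return v0.len + v0.cr + 1;
}
```

For "GET / HTTP/1.0\r\n", the start line is 14 bytes with cr=1, so the first header would start at offset 16.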
+
+/*
+ * Add a header entry to <list> after element <after>. <after> is ignored when
+ * the list is empty or full. Common usage is to set <after> to list->tail.
+ *
+ * Returns the position of the new entry in the list (from 1 to size-1), or 0
+ * if the array is already full. An effort is made to fill the array linearly,
+ * but once the last entry has been used, we have to search for unused blocks,
+ * which takes much more time. For this reason, it's important to size it
+ * appropriately.
+ */
+int hdr_idx_add(int len, int cr, struct hdr_idx *list, int after);
+
+#endif /* _PROTO_HDR_IDX_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+#ifndef _PROTO_HLUA_H
+#define _PROTO_HLUA_H
+
+#ifdef USE_LUA
+
+#include <lua.h>
+
+#include <types/hlua.h>
+
+/* The following macros are used to set flags. */
+#define HLUA_SET_RUN(__hlua) do {(__hlua)->flags |= HLUA_RUN;} while(0)
+#define HLUA_CLR_RUN(__hlua) do {(__hlua)->flags &= ~HLUA_RUN;} while(0)
+#define HLUA_IS_RUNNING(__hlua) ((__hlua)->flags & HLUA_RUN)
+#define HLUA_SET_CTRLYIELD(__hlua) do {(__hlua)->flags |= HLUA_CTRLYIELD;} while(0)
+#define HLUA_CLR_CTRLYIELD(__hlua) do {(__hlua)->flags &= ~HLUA_CTRLYIELD;} while(0)
+#define HLUA_IS_CTRLYIELDING(__hlua) ((__hlua)->flags & HLUA_CTRLYIELD)
+#define HLUA_SET_WAKERESWR(__hlua) do {(__hlua)->flags |= HLUA_WAKERESWR;} while(0)
+#define HLUA_CLR_WAKERESWR(__hlua) do {(__hlua)->flags &= ~HLUA_WAKERESWR;} while(0)
+#define HLUA_IS_WAKERESWR(__hlua) ((__hlua)->flags & HLUA_WAKERESWR)
+#define HLUA_SET_WAKEREQWR(__hlua) do {(__hlua)->flags |= HLUA_WAKEREQWR;} while(0)
+#define HLUA_CLR_WAKEREQWR(__hlua) do {(__hlua)->flags &= ~HLUA_WAKEREQWR;} while(0)
+#define HLUA_IS_WAKEREQWR(__hlua) ((__hlua)->flags & HLUA_WAKEREQWR)
+
+#define HLUA_INIT(__hlua) do { (__hlua)->T = 0; } while(0)
+
+/* Lua HAProxy integration functions. */
+void hlua_ctx_destroy(struct hlua *lua);
+void hlua_init();
+int hlua_post_init();
+
+#else /* USE_LUA */
+
+#define HLUA_IS_RUNNING(__hlua) 0
+
+#define HLUA_INIT(__hlua)
+
+/* Empty function for compilation without Lua. */
+static inline void hlua_init() { }
+static inline int hlua_post_init() { return 1; }
+static inline void hlua_ctx_destroy(struct hlua *lua) { }
+
+#endif /* USE_LUA */
+
+#endif /* _PROTO_HLUA_H */
--- /dev/null
+/*
+ * include/proto/lb_chash.h
+ * Function declarations for Consistent Hash LB algorithm.
+ *
+ * Copyright (C) 2000-2009 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_LB_CHASH_H
+#define _PROTO_LB_CHASH_H
+
+#include <common/config.h>
+#include <types/proxy.h>
+#include <types/server.h>
+
+void chash_init_server_tree(struct proxy *p);
+struct server *chash_get_next_server(struct proxy *p, struct server *srvtoavoid);
+struct server *chash_get_server_hash(struct proxy *p, unsigned int hash);
+
+#endif /* _PROTO_LB_CHASH_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/lb_fas.h
+ * First Available Server load balancing algorithm.
+ *
+ * Copyright (C) 2000-2009 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_LB_FAS_H
+#define _PROTO_LB_FAS_H
+
+#include <common/config.h>
+#include <types/proxy.h>
+#include <types/server.h>
+
+struct server *fas_get_next_server(struct proxy *p, struct server *srvtoavoid);
+void fas_init_server_tree(struct proxy *p);
+
+#endif /* _PROTO_LB_FAS_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/lb_fwlc.h
+ * Fast Weighted Least Connection load balancing algorithm.
+ *
+ * Copyright (C) 2000-2009 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_LB_FWLC_H
+#define _PROTO_LB_FWLC_H
+
+#include <common/config.h>
+#include <types/proxy.h>
+#include <types/server.h>
+
+struct server *fwlc_get_next_server(struct proxy *p, struct server *srvtoavoid);
+void fwlc_init_server_tree(struct proxy *p);
+
+#endif /* _PROTO_LB_FWLC_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/lb_fwrr.h
+ * Fast Weighted Round Robin load balancing algorithm.
+ *
+ * Copyright (C) 2000-2009 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_LB_FWRR_H
+#define _PROTO_LB_FWRR_H
+
+#include <common/config.h>
+#include <types/proxy.h>
+#include <types/server.h>
+
+void fwrr_init_server_groups(struct proxy *p);
+struct server *fwrr_get_next_server(struct proxy *p, struct server *srvtoavoid);
+
+#endif /* _PROTO_LB_FWRR_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/lb_map.h
+ * Map-based load-balancing (RR and HASH)
+ *
+ * Copyright (C) 2000-2009 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_LB_MAP_H
+#define _PROTO_LB_MAP_H
+
+#include <common/config.h>
+#include <types/proxy.h>
+#include <types/server.h>
+
+void map_set_server_status_down(struct server *srv);
+void map_set_server_status_up(struct server *srv);
+void recalc_server_map(struct proxy *px);
+void init_server_map(struct proxy *p);
+struct server *map_get_server_rr(struct proxy *px, struct server *srvtoavoid);
+struct server *map_get_server_hash(struct proxy *px, unsigned int hash);
+
+#endif /* _PROTO_LB_MAP_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/listener.h
+ * This file declares listener management primitives.
+ *
+ * Copyright (C) 2000-2012 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_LISTENER_H
+#define _PROTO_LISTENER_H
+
+#include <string.h>
+
+#include <types/listener.h>
+
+/* This function adds the specified listener's file descriptor to the polling
+ * lists if it is in the LI_LISTEN state. The listener enters LI_READY or
+ * LI_FULL state depending on its number of connections.
+ */
+void enable_listener(struct listener *listener);
+
+/* This function removes the specified listener's file descriptor from the
+ * polling lists if it is in the LI_READY or in the LI_FULL state. The listener
+ * enters LI_LISTEN.
+ */
+void disable_listener(struct listener *listener);
+
+/* This function tries to temporarily disable a listener, depending on the OS
+ * capabilities. Linux unbinds the listen socket after a SHUT_RD, and ignores
+ * SHUT_WR. Solaris refuses both shutdowns. OpenBSD ignores SHUT_RD but
+ * closes upon SHUT_WR and refuses to rebind. So a common validation path
+ * involves SHUT_WR && listen && SHUT_RD. In case of success, the FD's polling
+ * is disabled. It normally returns non-zero, unless an error is reported.
+ */
+int pause_listener(struct listener *l);
+
+/* This function tries to resume a temporarily disabled listener.
+ * The resulting state will either be LI_READY or LI_FULL. 0 is returned
+ * in case of failure to resume (eg: dead socket).
+ */
+int resume_listener(struct listener *l);
+
+/* Marks a ready listener as full so that the session code tries to re-enable
+ * it upon next close() using resume_listener().
+ */
+void listener_full(struct listener *l);
+
+/* This function adds all of the protocol's listener's file descriptors to the
+ * polling lists when they are in the LI_LISTEN state. It is intended to be
+ * used as a protocol's generic enable_all() primitive, for use after the
+ * fork(). It puts the listeners into LI_READY or LI_FULL states depending on
+ * their number of connections. It always returns ERR_NONE.
+ */
+int enable_all_listeners(struct protocol *proto);
+
+/* This function removes all of the protocol's listener's file descriptors from
+ * the polling lists when they are in the LI_READY or LI_FULL states. It is
+ * intended to be used as a protocol's generic disable_all() primitive. It puts
+ * the listeners into LI_LISTEN, and always returns ERR_NONE.
+ */
+int disable_all_listeners(struct protocol *proto);
+
+/* Marks a ready listener as limited so that we only try to re-enable it when
+ * resources are free again. It will be queued into the specified queue.
+ */
+void limit_listener(struct listener *l, struct list *list);
+
+/* Dequeues all of the listeners waiting for a resource in wait queue <list>. */
+void dequeue_all_listeners(struct list *list);
+
+/* This function closes the listening socket for the specified listener,
+ * provided that it's already in a listening state. The listener enters the
+ * LI_ASSIGNED state. It always returns ERR_NONE. This function is intended
+ * to be used as a generic function for standard protocols.
+ */
+int unbind_listener(struct listener *listener);
+
+/* This function closes all listening sockets bound to the protocol <proto>,
+ * and the listeners end in LI_ASSIGNED state if they were higher. It does not
+ * detach them from the protocol. It always returns ERR_NONE.
+ */
+int unbind_all_listeners(struct protocol *proto);
+
+/* Delete a listener from its protocol's list of listeners. The listener's
+ * state is automatically updated from LI_ASSIGNED to LI_INIT. The protocol's
+ * number of listeners is updated. Note that the listener must have previously
+ * been unbound. This is the generic function to use to remove a listener.
+ */
+void delete_listener(struct listener *listener);
+
+/* This function is called on a read event from a listening socket, corresponding
+ * to an accept. It tries to accept as many connections as possible, and for each
+ * calls the listener's accept handler (generally the frontend's accept handler).
+ */
+int listener_accept(int fd);
+
+/*
+ * Registers the bind keyword list <kwl> as a list of valid keywords for next
+ * parsing sessions.
+ */
+void bind_register_keywords(struct bind_kw_list *kwl);
+
+/* Return a pointer to the bind keyword <kw>, or NULL if not found. */
+struct bind_kw *bind_find_kw(const char *kw);
+
+/* Dumps all registered "bind" keywords to the <out> string pointer. */
+void bind_dump_kws(char **out);
+
+/* Allocates a bind_conf struct for a bind line, and chains it to list head <lh>.
+ * If <arg> is not NULL, it is duplicated into ->arg to store useful config
+ * information for error reporting.
+ */
+static inline struct bind_conf *bind_conf_alloc(struct list *lh, const char *file, int line, const char *arg)
+{
+ struct bind_conf *bind_conf = (void *)calloc(1, sizeof(struct bind_conf));
+
+ if (!bind_conf)
+ return NULL;
+ bind_conf->file = strdup(file);
+ bind_conf->line = line;
+ if (lh)
+ LIST_ADDQ(lh, &bind_conf->by_fe);
+ if (arg)
+ bind_conf->arg = strdup(arg);
+
+ bind_conf->ux.uid = -1;
+ bind_conf->ux.gid = -1;
+ bind_conf->ux.mode = 0;
+
+ LIST_INIT(&bind_conf->listeners);
+ return bind_conf;
+}
+
+#endif /* _PROTO_LISTENER_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ include/proto/log.h
+ This file contains definitions of log-related functions, structures,
+ and macros.
+
+ Copyright (C) 2000-2008 Willy Tarreau - w@1wt.eu
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation, version 2.1
+ exclusively.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+*/
+
+#ifndef _PROTO_LOG_H
+#define _PROTO_LOG_H
+
+#include <stdio.h>
+#include <syslog.h>
+
+#include <common/config.h>
+#include <common/memory.h>
+#include <types/log.h>
+#include <types/proxy.h>
+#include <types/stream.h>
+
+extern struct pool_head *pool2_requri;
+extern struct pool_head *pool2_uniqueid;
+
+extern char *log_format;
+extern char default_tcp_log_format[];
+extern char default_http_log_format[];
+extern char clf_http_log_format[];
+
+extern char default_rfc5424_sd_log_format[];
+
+extern char *logheader;
+extern char *logheader_rfc5424;
+extern char *logline;
+extern char *logline_rfc5424;
+
+
+int build_logline(struct stream *s, char *dst, size_t maxsize, struct list *list_format);
+
+/*
+ * send a log for the stream when we have enough info about it.
+ * Will not log if the frontend has no log defined.
+ */
+void strm_log(struct stream *s);
+
+/*
+ * Parse args in a logformat_var
+ */
+int parse_logformat_var_args(char *args, struct logformat_node *node);
+
+/*
+ * Parse a variable '%varname' or '%{args}varname' in log-format
+ *
+ */
+int parse_logformat_var(char *arg, int arg_len, char *var, int var_len, struct proxy *curproxy, struct list *list_format, int *defoptions);
+
+/*
+ * add to the logformat linked list
+ */
+void add_to_logformat_list(char *start, char *end, int type, struct list *list_format);
+
+/*
+ * Parse the log_format string and fill a linked list.
+ * Variable names are preceded by '%' and composed of characters [a-zA-Z0-9]*: %varname
+ * Arguments can be set using braces: %{many arguments}varname
+ */
+void parse_logformat_string(const char *str, struct proxy *curproxy, struct list *list_format, int options, int cap, const char *file, int line);
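For illustration, a log-format string combining both forms could look like this in a configuration (the variable names follow HAProxy's log-format documentation; treat the exact set as an assumption here, and note that literal spaces must be backslash-escaped):

```
# custom log format: client addr, date, frontend, backend/server,
# status, bytes, and the quoted request line (%{+Q}r)
log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %ST\ %B\ %{+Q}r
```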
+/*
+ * Displays the message on stderr with the date and pid. Overrides the quiet
+ * mode during startup.
+ */
+void Alert(const char *fmt, ...)
+ __attribute__ ((format(printf, 1, 2)));
+
+/*
+ * Displays the message on stderr with the date and pid.
+ */
+void Warning(const char *fmt, ...)
+ __attribute__ ((format(printf, 1, 2)));
+
+/*
+ * Displays the message on <out> only if quiet mode is not set.
+ */
+void qfprintf(FILE *out, const char *fmt, ...)
+ __attribute__ ((format(printf, 2, 3)));
+
+/*
+ * This function adds a header to the message and sends the syslog message
+ * using a printf format string
+ */
+void send_log(struct proxy *p, int level, const char *format, ...)
+ __attribute__ ((format(printf, 3, 4)));
+
+/*
+ * This function sends a syslog message to both log servers of a proxy,
+ * or to global log servers if the proxy is NULL.
+ * It also tries not to waste too much time computing the message header.
+ * It doesn't care about errors nor does it report them.
+ */
+
+void __send_log(struct proxy *p, int level, char *message, size_t size, char *sd, size_t sd_size);
+
+/*
+ * returns log format for <fmt> or -1 if not found.
+ */
+int get_log_format(const char *fmt);
+
+/*
+ * returns log level for <lev> or -1 if not found.
+ */
+int get_log_level(const char *lev);
+
+/*
+ * returns log facility for <fac> or -1 if not found.
+ */
+int get_log_facility(const char *fac);
+
+/*
+ * Write a string into the log string.
+ * Takes care of quote options.
+ *
+ * Returns the address of the \0 character, or NULL on error
+ */
+char *lf_text_len(char *dst, const char *src, size_t len, size_t size, struct logformat_node *node);
+
+/*
+ * Write an IP address to the log string.
+ * The +X option writes it in hexadecimal notation, most significant byte on the left.
+ */
+char *lf_ip(char *dst, struct sockaddr *sockaddr, size_t size, struct logformat_node *node);
+
+/*
+ * Write a port to the log string.
+ * The +X option writes it in hexadecimal notation, most significant byte on the left.
+ */
+char *lf_port(char *dst, struct sockaddr *sockaddr, size_t size, struct logformat_node *node);
+
+
+#endif /* _PROTO_LOG_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/map.h
+ * This file provides structures and types for pattern matching.
+ *
+ * Copyright (C) 2000-2013 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_MAP_H
+#define _PROTO_MAP_H
+
+#include <types/map.h>
+
+/* maps output sample parser */
+int map_parse_ip(const char *text, struct sample_data *data);
+int map_parse_ip6(const char *text, struct sample_data *data);
+int map_parse_str(const char *text, struct sample_data *data);
+int map_parse_int(const char *text, struct sample_data *data);
+
+struct map_reference *map_get_reference(const char *reference);
+
+int sample_load_map(struct arg *arg, struct sample_conv *conv,
+ const char *file, int line, char **err);
+
+#endif /* _PROTO_MAP_H */
--- /dev/null
+/*
+ * include/proto/obj_type.h
+ * This file contains function prototypes to manipulate object types
+ *
+ * Copyright (C) 2000-2013 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_OBJ_TYPE_H
+#define _PROTO_OBJ_TYPE_H
+
+#include <common/config.h>
+#include <common/memory.h>
+#include <types/applet.h>
+#include <types/connection.h>
+#include <types/listener.h>
+#include <types/obj_type.h>
+#include <types/proxy.h>
+#include <types/server.h>
+#include <types/stream_interface.h>
+
+static inline enum obj_type obj_type(enum obj_type *t)
+{
+ if (!t || *t >= OBJ_TYPE_ENTRIES)
+ return OBJ_TYPE_NONE;
+ return *t;
+}
+
+static inline const char *obj_type_name(enum obj_type *t)
+{
+ switch (obj_type(t)) {
+ case OBJ_TYPE_LISTENER: return "LISTENER";
+ case OBJ_TYPE_PROXY: return "PROXY";
+ case OBJ_TYPE_SERVER: return "SERVER";
+ case OBJ_TYPE_APPLET: return "APPLET";
+ case OBJ_TYPE_APPCTX: return "APPCTX";
+ case OBJ_TYPE_CONN: return "CONN";
+ default: return "NONE";
+ }
+}
+
+/* Note: for convenience, we provide two versions of each function :
+ * - __objt_<type> : converts the pointer without any control of its
+ * value nor type.
+ * - objt_<type> : same as above except that if the pointer is NULL
+ * or points to a non-matching type, NULL is returned instead.
+ */
+
+static inline struct listener *__objt_listener(enum obj_type *t)
+{
+ return container_of(t, struct listener, obj_type);
+}
+
+static inline struct listener *objt_listener(enum obj_type *t)
+{
+ if (!t || *t != OBJ_TYPE_LISTENER)
+ return NULL;
+ return __objt_listener(t);
+}
+
+static inline struct proxy *__objt_proxy(enum obj_type *t)
+{
+ return container_of(t, struct proxy, obj_type);
+}
+
+static inline struct proxy *objt_proxy(enum obj_type *t)
+{
+ if (!t || *t != OBJ_TYPE_PROXY)
+ return NULL;
+ return __objt_proxy(t);
+}
+
+static inline struct server *__objt_server(enum obj_type *t)
+{
+ return container_of(t, struct server, obj_type);
+}
+
+static inline struct server *objt_server(enum obj_type *t)
+{
+ if (!t || *t != OBJ_TYPE_SERVER)
+ return NULL;
+ return __objt_server(t);
+}
+
+static inline struct applet *__objt_applet(enum obj_type *t)
+{
+ return container_of(t, struct applet, obj_type);
+}
+
+static inline struct applet *objt_applet(enum obj_type *t)
+{
+ if (!t || *t != OBJ_TYPE_APPLET)
+ return NULL;
+ return __objt_applet(t);
+}
+
+static inline struct appctx *__objt_appctx(enum obj_type *t)
+{
+ return container_of(t, struct appctx, obj_type);
+}
+
+static inline struct appctx *objt_appctx(enum obj_type *t)
+{
+ if (!t || *t != OBJ_TYPE_APPCTX)
+ return NULL;
+ return __objt_appctx(t);
+}
+
+static inline struct connection *__objt_conn(enum obj_type *t)
+{
+ return container_of(t, struct connection, obj_type);
+}
+
+static inline struct connection *objt_conn(enum obj_type *t)
+{
+ if (!t || *t != OBJ_TYPE_CONN)
+ return NULL;
+ return __objt_conn(t);
+}
+
+static inline void *obj_base_ptr(enum obj_type *t)
+{
+ switch (obj_type(t)) {
+ case OBJ_TYPE_LISTENER: return __objt_listener(t);
+ case OBJ_TYPE_PROXY: return __objt_proxy(t);
+ case OBJ_TYPE_SERVER: return __objt_server(t);
+ case OBJ_TYPE_APPLET: return __objt_applet(t);
+ case OBJ_TYPE_APPCTX: return __objt_appctx(t);
+ case OBJ_TYPE_CONN: return __objt_conn(t);
+ default: return NULL;
+ }
+}
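The tagged-member pattern used throughout this file can be shown standalone: an enum tag embedded in each structure lets a generic pointer be safely converted back to its enclosing object with container_of()-style arithmetic. Everything below (the demo_ names and the local macro) is illustrative, not HAProxy's actual definitions:

```c
#include <stddef.h>

/* mirrors the container_of() macro used by the __objt_* converters above */
#define demo_container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

enum demo_obj_type {
	DEMO_OBJ_TYPE_NONE = 0,
	DEMO_OBJ_TYPE_SERVER,
	DEMO_OBJ_TYPE_ENTRIES
};

struct demo_server {
	int id;
	enum demo_obj_type obj_type;
};

/* checked conversion: returns NULL unless <t> really tags a demo_server */
static struct demo_server *demo_objt_server(enum demo_obj_type *t)
{
	if (!t || *t != DEMO_OBJ_TYPE_SERVER)
		return NULL;
	return demo_container_of(t, struct demo_server, obj_type);
}

/* embeds a tag in a server, recovers the server from the tag pointer */
static int demo_roundtrip(void)
{
	struct demo_server srv = { .id = 42, .obj_type = DEMO_OBJ_TYPE_SERVER };
	struct demo_server *back = demo_objt_server(&srv.obj_type);

	return back == &srv ? back->id : -1;
}
```

The unchecked __objt_* variants correspond to calling demo_container_of() directly, skipping the tag test.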
+
+#endif /* _PROTO_OBJ_TYPE_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/pattern.h
+ * This file provides structures and types for pattern matching.
+ *
+ * Copyright (C) 2000-2013 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_PATTERN_H
+#define _PROTO_PATTERN_H
+
+#include <string.h>
+
+#include <common/config.h>
+#include <common/standard.h>
+#include <types/pattern.h>
+
+/* pattern management function arrays */
+extern char *pat_match_names[PAT_MATCH_NUM];
+extern int (*pat_parse_fcts[PAT_MATCH_NUM])(const char *, struct pattern *, int, char **);
+extern int (*pat_index_fcts[PAT_MATCH_NUM])(struct pattern_expr *, struct pattern *, char **);
+extern void (*pat_delete_fcts[PAT_MATCH_NUM])(struct pattern_expr *, struct pat_ref_elt *);
+extern void (*pat_prune_fcts[PAT_MATCH_NUM])(struct pattern_expr *);
+extern struct pattern *(*pat_match_fcts[PAT_MATCH_NUM])(struct sample *, struct pattern_expr *, int);
+extern int pat_match_types[PAT_MATCH_NUM];
+
+void pattern_finalize_config(void);
+
+/* return the PAT_MATCH_* index for match name "name", or < 0 if not found */
+static inline int pat_find_match_name(const char *name)
+{
+ int i;
+
+ for (i = 0; i < PAT_MATCH_NUM; i++)
+ if (strcmp(name, pat_match_names[i]) == 0)
+ return i;
+ return -1;
+}
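The parallel-array scheme above (one shared index into `pat_match_names`, `pat_parse_fcts`, and the other function arrays) can be sketched standalone. The names and table size below are illustrative stand-ins, not HAProxy's real match list:

```c
#include <string.h>

/* Sketch of the parallel-array dispatch used above: match names and
 * their handlers share an index, so looking up a name yields the slot
 * to use in every companion array. Illustrative names only. */
#define DEMO_MATCH_NUM 3

static const char *demo_match_names[DEMO_MATCH_NUM] = { "str", "int", "ip" };

static int demo_find_match_name(const char *name)
{
	int i;

	for (i = 0; i < DEMO_MATCH_NUM; i++)
		if (strcmp(name, demo_match_names[i]) == 0)
			return i;
	return -1;
}
```

The returned index would then select the matching entry in each of the function arrays, mirroring how `pat_find_match_name()` feeds `pat_parse_fcts[]` and friends.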
+
+/* This function executes a pattern match on a sample. It applies pattern <expr>
+ * to sample <smp>. It returns NULL if the sample does not match and a non-NULL
+ * pointer if it does. If <fill> is true and the sample matches, the function
+ * returns the matched pattern. In many cases, this pattern can be a
+ * static buffer.
+ */
+struct pattern *pattern_exec_match(struct pattern_head *head, struct sample *smp, int fill);
+
+/*
+ * The following functions take <pattern>, duplicate it and index it in <expr>.
+ */
+int pat_idx_list_val(struct pattern_expr *expr, struct pattern *pat, char **err);
+int pat_idx_list_ptr(struct pattern_expr *expr, struct pattern *pat, char **err);
+int pat_idx_list_str(struct pattern_expr *expr, struct pattern *pat, char **err);
+int pat_idx_list_reg(struct pattern_expr *expr, struct pattern *pat, char **err);
+int pat_idx_tree_ip(struct pattern_expr *expr, struct pattern *pat, char **err);
+int pat_idx_tree_str(struct pattern_expr *expr, struct pattern *pat, char **err);
+int pat_idx_tree_pfx(struct pattern_expr *expr, struct pattern *pat, char **err);
+
+/*
+ * The following functions search for pattern <pattern> in the pattern
+ * expression <expr> and delete it if found. These functions never fail.
+ */
+void pat_del_list_val(struct pattern_expr *expr, struct pat_ref_elt *ref);
+void pat_del_tree_ip(struct pattern_expr *expr, struct pat_ref_elt *ref);
+void pat_del_list_ptr(struct pattern_expr *expr, struct pat_ref_elt *ref);
+void pat_del_tree_str(struct pattern_expr *expr, struct pat_ref_elt *ref);
+void pat_del_list_reg(struct pattern_expr *expr, struct pat_ref_elt *ref);
+
+/*
+ * The following functions clean all entries of a pattern expression and
+ * reset the tree and list roots.
+ */
+void pat_prune_val(struct pattern_expr *expr);
+void pat_prune_ptr(struct pattern_expr *expr);
+void pat_prune_reg(struct pattern_expr *expr);
+
+/*
+ * The following functions are general-purpose pattern matching functions.
+ */
+
+
+/* ignore the current line */
+int pat_parse_nothing(const char *text, struct pattern *pattern, int mflags, char **err);
+
+/* Parse an integer. It is put both in min and max. */
+int pat_parse_int(const char *text, struct pattern *pattern, int mflags, char **err);
+
+/* Parse a dotted version. It is put both in min and max. */
+int pat_parse_dotted_ver(const char *text, struct pattern *pattern, int mflags, char **err);
+
+/* Parse a range of integers delimited by either ':' or '-'. If only one
+ * integer is read, it is set as both min and max.
+ */
+int pat_parse_range(const char *text, struct pattern *pattern, int mflags, char **err);
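The range syntax described above (two integers delimited by ':' or '-', or a single integer used as both bounds) can be illustrated with a minimal parser. This is a hypothetical sketch, not HAProxy's actual `pat_parse_range()` implementation:

```c
#include <stdio.h>

/* Hypothetical sketch of the range syntax described above: parse
 * "min:max", "min-max", or a single integer (used as both bounds).
 * Returns 1 on success, 0 on failure. Not HAProxy's real parser. */
static int sketch_parse_range(const char *text, long *min, long *max)
{
	char delim;

	if (sscanf(text, "%ld%c%ld", min, &delim, max) == 3)
		return (delim == ':' || delim == '-');
	if (sscanf(text, "%ld", min) == 1) {
		*max = *min;
		return 1;
	}
	return 0;
}
```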
+
+/* Parse a string. It is allocated and duplicated. */
+int pat_parse_str(const char *text, struct pattern *pattern, int mflags, char **err);
+
+/* Parse a hexadecimal binary definition. It is allocated and duplicated. */
+int pat_parse_bin(const char *text, struct pattern *pattern, int mflags, char **err);
+
+/* Parse a regex. It is allocated. */
+int pat_parse_reg(const char *text, struct pattern *pattern, int mflags, char **err);
+
+/* Parse an IP address and an optional mask in the form addr[/mask].
+ * The addr may either be an IPv4 address or a hostname. The mask
+ * may either be a dotted mask or a number of bits. Returns 1 if OK,
+ * otherwise 0.
+ */
+int pat_parse_ip(const char *text, struct pattern *pattern, int mflags, char **err);
+
+/* NB: For two strings to be identical, it is required that their lengths match */
+struct pattern *pat_match_str(struct sample *smp, struct pattern_expr *expr, int fill);
+
+/* NB: For two binary buffers to be identical, it is required that their lengths match */
+struct pattern *pat_match_bin(struct sample *smp, struct pattern_expr *expr, int fill);
+
+/* Checks that the length of the pattern in <test> is included between min and max */
+struct pattern *pat_match_len(struct sample *smp, struct pattern_expr *expr, int fill);
+
+/* Checks that the integer in <test> is included between min and max */
+struct pattern *pat_match_int(struct sample *smp, struct pattern_expr *expr, int fill);
+
+/* always return false */
+struct pattern *pat_match_nothing(struct sample *smp, struct pattern_expr *expr, int fill);
+
+/* Checks that the pattern matches the end of the tested string. */
+struct pattern *pat_match_end(struct sample *smp, struct pattern_expr *expr, int fill);
+
+/* Checks that the pattern matches the beginning of the tested string. */
+struct pattern *pat_match_beg(struct sample *smp, struct pattern_expr *expr, int fill);
+
+/* Checks that the pattern is included inside the tested string. */
+struct pattern *pat_match_sub(struct sample *smp, struct pattern_expr *expr, int fill);
+
+/* Checks that the pattern is included inside the tested string, but enclosed
+ * between slashes or at the beginning or end of the string. Slashes at the
+ * beginning or end of the pattern are ignored.
+ */
+struct pattern *pat_match_dir(struct sample *smp, struct pattern_expr *expr, int fill);
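The "enclosed between slashes or at the string boundaries" rule described above can be sketched with a simple boundary check around `strstr()`. This is a hypothetical illustration of the matching rule, not HAProxy's `pat_match_dir()` implementation:

```c
#include <string.h>

/* Hypothetical sketch of the "dir" matching rule described above:
 * <pat> must appear in <str> delimited by slashes or by the string
 * boundaries. Returns 1 on a match, 0 otherwise. */
static int demo_match_dir(const char *str, const char *pat)
{
	const char *p = str;
	size_t plen = strlen(pat);

	while ((p = strstr(p, pat)) != NULL) {
		int ok_left  = (p == str) || (p[-1] == '/');
		int ok_right = (p[plen] == '\0') || (p[plen] == '/');

		if (ok_left && ok_right)
			return 1;
		p++;
	}
	return 0;
}
```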
+
+/* Checks that the pattern is included inside the tested string, but enclosed
+ * between dots or at the beginning or end of the string. Dots at the beginning
+ * or end of the pattern are ignored.
+ */
+struct pattern *pat_match_dom(struct sample *smp, struct pattern_expr *expr, int fill);
+
+/* Check that the IPv4 address in <test> matches the IP/mask in pattern */
+struct pattern *pat_match_ip(struct sample *smp, struct pattern_expr *expr, int fill);
+
+/* Executes a regex. It temporarily changes the data to add a trailing zero,
+ * and restores the previous character when leaving.
+ */
+struct pattern *pat_match_reg(struct sample *smp, struct pattern_expr *expr, int fill);
+
+/*
+ * pattern_ref manipulation.
+ */
+struct pat_ref *pat_ref_lookup(const char *reference);
+struct pat_ref *pat_ref_lookupid(int unique_id);
+struct pat_ref *pat_ref_new(const char *reference, const char *display, unsigned int flags);
+struct pat_ref *pat_ref_newid(int unique_id, const char *display, unsigned int flags);
+struct pat_ref_elt *pat_ref_find_elt(struct pat_ref *ref, const char *key);
+int pat_ref_append(struct pat_ref *ref, char *pattern, char *sample, int line);
+int pat_ref_add(struct pat_ref *ref, const char *pattern, const char *sample, char **err);
+int pat_ref_set(struct pat_ref *ref, const char *pattern, const char *sample, char **err);
+int pat_ref_set_by_id(struct pat_ref *ref, struct pat_ref_elt *refelt, const char *value, char **err);
+int pat_ref_delete(struct pat_ref *ref, const char *key);
+int pat_ref_delete_by_id(struct pat_ref *ref, struct pat_ref_elt *refelt);
+void pat_ref_prune(struct pat_ref *ref);
+int pat_ref_load(struct pat_ref *ref, struct pattern_expr *expr, int patflags, int soe, char **err);
+void pat_ref_reload(struct pat_ref *ref, struct pat_ref *replace);
+
+
+/*
+ * pattern_head manipulation.
+ */
+void pattern_init_head(struct pattern_head *head);
+void pattern_prune(struct pattern_head *head);
+int pattern_read_from_file(struct pattern_head *head, unsigned int refflags, const char *filename, int patflags, int load_smp, char **err, const char *file, int line);
+
+/*
+ * pattern_expr manipulation.
+ */
+void pattern_init_expr(struct pattern_expr *expr);
+struct pattern_expr *pattern_lookup_expr(struct pattern_head *head, struct pat_ref *ref);
+struct pattern_expr *pattern_new_expr(struct pattern_head *head, struct pat_ref *ref,
+ char **err, int *reuse);
+struct sample_data **pattern_find_smp(struct pattern_expr *expr, struct pat_ref_elt *elt);
+int pattern_delete(struct pattern_expr *expr, struct pat_ref_elt *ref);
+
+
+#endif
--- /dev/null
+/*
+ * include/proto/payload.h
+ * Definitions for payload-based sample fetches and ACLs
+ *
+ * Copyright (C) 2000-2013 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_PROTO_PAYLOAD_H
+#define _PROTO_PROTO_PAYLOAD_H
+
+#include <common/config.h>
+#include <types/sample.h>
+#include <types/stream.h>
+
+int fetch_rdp_cookie_name(struct stream *s, struct sample *smp, const char *cname, int clen);
+int val_payload_lv(struct arg *arg, char **err_msg);
+
+#endif /* _PROTO_PROTO_PAYLOAD_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/peers.h
+ * This file defines function prototypes for peers management.
+ *
+ * Copyright 2010 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_PEERS_H
+#define _PROTO_PEERS_H
+
+#include <common/config.h>
+#include <common/ticks.h>
+#include <common/time.h>
+#include <types/stream.h>
+#include <types/peers.h>
+
+void peers_init_sync(struct peers *peers);
+void peers_register_table(struct peers *, struct stktable *table);
+void peers_setup_frontend(struct proxy *fe);
+
+#endif /* _PROTO_PEERS_H */
+
--- /dev/null
+/*
+ include/proto/pipe.h
+ Pipe management
+
+ Copyright (C) 2000-2009 Willy Tarreau - w@1wt.eu
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation, version 2.1
+ exclusively.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+*/
+
+#ifndef _PROTO_PIPE_H
+#define _PROTO_PIPE_H
+
+#include <common/config.h>
+#include <types/pipe.h>
+
+extern int pipes_used; /* # of pipes in use (2 fds each) */
+extern int pipes_free; /* # of pipes unused (2 fds each) */
+
+/* return a pre-allocated empty pipe. Try to allocate one if there isn't any
+ * left. NULL is returned if a pipe could not be allocated.
+ */
+struct pipe *get_pipe(void);
+
+/* destroy a pipe, possibly because an error was encountered on it. Its FDs
+ * will be closed and it will not be reinjected into the live pool.
+ */
+void kill_pipe(struct pipe *p);
+
+/* put back an unused pipe into the live pool. If it still has data in it, it is
+ * closed and not reinjected into the live pool. The caller is not allowed to
+ * use it once released.
+ */
+void put_pipe(struct pipe *p);
+
+#endif /* _PROTO_PIPE_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ include/proto/port_range.h
+ This file defines everything needed to manage port ranges
+
+ Copyright (C) 2000-2009 Willy Tarreau - w@1wt.eu
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation, version 2.1
+ exclusively.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+*/
+
+#ifndef _PROTO_PORT_RANGE_H
+#define _PROTO_PORT_RANGE_H
+
+#include <types/port_range.h>
+
+/* return an available port from range <range>, or zero if none is left */
+static inline int port_range_alloc_port(struct port_range *range)
+{
+ int ret;
+
+ if (!range->avail)
+ return 0;
+ ret = range->ports[range->get];
+ range->get++;
+ if (range->get >= range->size)
+ range->get = 0;
+ range->avail--;
+ return ret;
+}
+
+/* release port <port> into port range <range>. Does nothing if <port> is zero
+ * or if <range> is NULL. The caller is responsible for marking the port
+ * unused by either setting the port to zero or the range to NULL.
+ */
+static inline void port_range_release_port(struct port_range *range, int port)
+{
+ if (!port || !range)
+ return;
+
+ range->ports[range->put] = port;
+ range->avail++;
+ range->put++;
+ if (range->put >= range->size)
+ range->put = 0;
+}
+
+/* return a new initialized port range of N ports, or NULL on allocation
+ * failure. The ports are not filled in; it's up to the caller to do it.
+ */
+static inline struct port_range *port_range_alloc_range(int n)
+{
+ struct port_range *ret;
+ ret = calloc(1, sizeof(struct port_range) +
+ n * sizeof(((struct port_range *)0)->ports[0]));
+ if (!ret)
+ return NULL;
+ ret->size = ret->avail = n;
+ return ret;
+}
+
+#endif /* _PROTO_PORT_RANGE_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
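The ring-buffer behavior of the port range helpers above can be demonstrated with a self-contained stand-in. The struct layout below only mirrors the fields the inline functions rely on; the real definition lives in `types/port_range.h`:

```c
#include <stdlib.h>

/* Self-contained sketch of the ring-buffer logic used by the port
 * range helpers: <get> is the next slot to hand out, <put> the next
 * slot to refill, <avail> the number of ports currently available. */
struct demo_port_range {
	int size, get, put, avail;
	int ports[];            /* flexible array, sized at allocation */
};

static struct demo_port_range *demo_range_new(int n)
{
	struct demo_port_range *r;

	r = calloc(1, sizeof(*r) + n * sizeof(r->ports[0]));
	if (r)
		r->size = r->avail = n;
	return r;
}

/* hand out the next available port, or 0 if none is left */
static int demo_alloc_port(struct demo_port_range *r)
{
	int p;

	if (!r->avail)
		return 0;
	p = r->ports[r->get++];
	if (r->get >= r->size)
		r->get = 0;
	r->avail--;
	return p;
}

/* put a port back into the pool; ignores port 0 or a NULL range */
static void demo_release_port(struct demo_port_range *r, int port)
{
	if (!port || !r)
		return;
	r->ports[r->put++] = port;
	r->avail++;
	if (r->put >= r->size)
		r->put = 0;
}
```

Allocation and release advance independent cursors, so released ports are reused in FIFO order once the range wraps around.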
--- /dev/null
+/*
+ * include/proto/proto_http.h
+ * This file contains HTTP protocol definitions.
+ *
+ * Copyright (C) 2000-2011 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_PROTO_HTTP_H
+#define _PROTO_PROTO_HTTP_H
+
+#include <common/config.h>
+#include <types/action.h>
+#include <types/proto_http.h>
+#include <types/stream.h>
+#include <types/task.h>
+
+/*
+ * some macros used for the request parsing.
+ * from RFC2616:
+ * CTL = <any US-ASCII control character (octets 0 - 31) and DEL (127)>
+ * SEP = one of the 17 defined separators or SP or HT
+ * LWS = CR, LF, SP or HT
+ * SPHT = SP or HT. Use this macro and not a boolean expression for best speed.
+ * CRLF = CR or LF. Use this macro and not a boolean expression for best speed.
+ * token = any CHAR except CTL or SEP. Use this macro and not a boolean expression for best speed.
+ *
+ * added for ease of use:
+ * ver_token = 'H', 'P', 'T', '/', '.', and digits.
+ */
+
+extern const char http_is_ctl[256];
+extern const char http_is_sep[256];
+extern const char http_is_lws[256];
+extern const char http_is_spht[256];
+extern const char http_is_crlf[256];
+extern const char http_is_token[256];
+extern const char http_is_ver_token[256];
+
+extern const int http_err_codes[HTTP_ERR_SIZE];
+extern struct chunk http_err_chunks[HTTP_ERR_SIZE];
+extern const char *HTTP_302;
+extern const char *HTTP_303;
+extern char *get_http_auth_buff;
+
+#define HTTP_IS_CTL(x) (http_is_ctl[(unsigned char)(x)])
+#define HTTP_IS_SEP(x) (http_is_sep[(unsigned char)(x)])
+#define HTTP_IS_LWS(x) (http_is_lws[(unsigned char)(x)])
+#define HTTP_IS_SPHT(x) (http_is_spht[(unsigned char)(x)])
+#define HTTP_IS_CRLF(x) (http_is_crlf[(unsigned char)(x)])
+#define HTTP_IS_TOKEN(x) (http_is_token[(unsigned char)(x)])
+#define HTTP_IS_VER_TOKEN(x) (http_is_ver_token[(unsigned char)(x)])
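The `HTTP_IS_*` macros above rely on precomputed 256-entry lookup tables, so each character-class test is a single array access instead of a chain of comparisons. A minimal sketch of the technique, using a simplified stand-in class rather than HAProxy's exact token definition:

```c
/* Sketch of the lookup-table character classes used above: one table
 * entry per possible byte value, filled once at startup. The "digit"
 * class is an illustrative stand-in, not one of HAProxy's classes. */
static char demo_is_digit[256];

static void demo_init_classes(void)
{
	int c;

	for (c = '0'; c <= '9'; c++)
		demo_is_digit[c] = 1;
}

/* cast to unsigned char so negative chars index the table safely */
#define DEMO_IS_DIGIT(x) (demo_is_digit[(unsigned char)(x)])
```

The unsigned-char cast matters: on platforms where `char` is signed, bytes above 127 would otherwise index the table with a negative value.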
+
+int process_cli(struct stream *s);
+int process_srv_data(struct stream *s);
+int process_srv_conn(struct stream *s);
+int http_wait_for_request(struct stream *s, struct channel *req, int an_bit);
+int http_process_req_common(struct stream *s, struct channel *req, int an_bit, struct proxy *px);
+int http_process_request(struct stream *s, struct channel *req, int an_bit);
+int http_process_tarpit(struct stream *s, struct channel *req, int an_bit);
+int http_wait_for_request_body(struct stream *s, struct channel *req, int an_bit);
+int http_send_name_header(struct http_txn *txn, struct proxy* be, const char* svr_name);
+int http_wait_for_response(struct stream *s, struct channel *rep, int an_bit);
+int http_process_res_common(struct stream *s, struct channel *rep, int an_bit, struct proxy *px);
+int http_request_forward_body(struct stream *s, struct channel *req, int an_bit);
+int http_response_forward_body(struct stream *s, struct channel *res, int an_bit);
+void http_msg_analyzer(struct http_msg *msg, struct hdr_idx *idx);
+void http_txn_reset_req(struct http_txn *txn);
+void http_txn_reset_res(struct http_txn *txn);
+
+void debug_hdr(const char *dir, struct stream *s, const char *start, const char *end);
+int apply_filter_to_req_headers(struct stream *s, struct channel *req, struct hdr_exp *exp);
+int apply_filter_to_req_line(struct stream *s, struct channel *req, struct hdr_exp *exp);
+int apply_filters_to_request(struct stream *s, struct channel *req, struct proxy *px);
+int apply_filters_to_response(struct stream *s, struct channel *rtr, struct proxy *px);
+void manage_client_side_cookies(struct stream *s, struct channel *req);
+void manage_server_side_cookies(struct stream *s, struct channel *rtr);
+void check_response_for_cacheability(struct stream *s, struct channel *rtr);
+int stats_check_uri(struct stream_interface *si, struct http_txn *txn, struct proxy *backend);
+void init_proto_http(void);
+int http_find_full_header2(const char *name, int len,
+ char *sol, struct hdr_idx *idx,
+ struct hdr_ctx *ctx);
+int http_find_header2(const char *name, int len,
+ char *sol, struct hdr_idx *idx,
+ struct hdr_ctx *ctx);
+int http_find_next_header(char *sol, struct hdr_idx *idx,
+ struct hdr_ctx *ctx);
+char *find_hdr_value_end(char *s, const char *e);
+char *extract_cookie_value(char *hdr, const char *hdr_end, char *cookie_name,
+ size_t cookie_name_l, int list, char **value, int *value_l);
+int http_header_match2(const char *hdr, const char *end, const char *name, int len);
+int http_remove_header2(struct http_msg *msg, struct hdr_idx *idx, struct hdr_ctx *ctx);
+int http_header_add_tail2(struct http_msg *msg, struct hdr_idx *hdr_idx, const char *text, int len);
+int http_replace_req_line(int action, const char *replace, int len, struct proxy *px, struct stream *s);
+void http_set_status(unsigned int status, struct stream *s);
+int http_transform_header_str(struct stream* s, struct http_msg *msg, const char* name,
+ unsigned int name_len, const char *str, struct my_regex *re,
+ int action);
+void inet_set_tos(int fd, struct sockaddr_storage from, int tos);
+void http_perform_server_redirect(struct stream *s, struct stream_interface *si);
+void http_return_srv_error(struct stream *s, struct stream_interface *si);
+void http_capture_bad_message(struct error_snapshot *es, struct stream *s,
+ struct http_msg *msg,
+ enum ht_state state, struct proxy *other_end);
+unsigned int http_get_hdr(const struct http_msg *msg, const char *hname, int hlen,
+ struct hdr_idx *idx, int occ,
+ struct hdr_ctx *ctx, char **vptr, int *vlen);
+char *http_get_path(struct http_txn *txn);
+const char *get_reason(unsigned int status);
+
+struct http_txn *http_alloc_txn(struct stream *s);
+void http_init_txn(struct stream *s);
+void http_end_txn(struct stream *s);
+void http_reset_txn(struct stream *s);
+void http_adjust_conn_mode(struct stream *s, struct http_txn *txn, struct http_msg *msg);
+
+struct act_rule *parse_http_req_cond(const char **args, const char *file, int linenum, struct proxy *proxy);
+struct act_rule *parse_http_res_cond(const char **args, const char *file, int linenum, struct proxy *proxy);
+void free_http_req_rules(struct list *r);
+void free_http_res_rules(struct list *r);
+struct chunk *http_error_message(struct stream *s, int msgnum);
+struct redirect_rule *http_parse_redirect_rule(const char *file, int linenum, struct proxy *curproxy,
+ const char **args, char **errmsg, int use_fmt, int dir);
+int smp_fetch_cookie(const struct arg *args, struct sample *smp, const char *kw, void *private);
+int smp_fetch_base32(const struct arg *args, struct sample *smp, const char *kw, void *private);
+
+enum http_meth_t find_http_meth(const char *str, const int len);
+
+struct action_kw *action_http_req_custom(const char *kw);
+struct action_kw *action_http_res_custom(const char *kw);
+int val_hdr(struct arg *arg, char **err_msg);
+
+int smp_prefetch_http(struct proxy *px, struct stream *s, unsigned int opt,
+ const struct arg *args, struct sample *smp, int req_vol);
+
+enum act_return http_action_req_capture_by_id(struct act_rule *rule, struct proxy *px,
+ struct session *sess, struct stream *s, int flags);
+enum act_return http_action_res_capture_by_id(struct act_rule *rule, struct proxy *px,
+ struct session *sess, struct stream *s, int flags);
+
+/* Note: these functions *do* modify the sample. Even in case of success, at
+ * least the type and uint value are modified.
+ */
+#define CHECK_HTTP_MESSAGE_FIRST() \
+ do { int r = smp_prefetch_http(smp->px, smp->strm, smp->opt, args, smp, 1); if (r <= 0) return r; } while (0)
+
+#define CHECK_HTTP_MESSAGE_FIRST_PERM() \
+ do { int r = smp_prefetch_http(smp->px, smp->strm, smp->opt, args, smp, 0); if (r <= 0) return r; } while (0)
+
+static inline void http_req_keywords_register(struct action_kw_list *kw_list)
+{
+ LIST_ADDQ(&http_req_keywords.list, &kw_list->list);
+}
+
+static inline void http_res_keywords_register(struct action_kw_list *kw_list)
+{
+ LIST_ADDQ(&http_res_keywords.list, &kw_list->list);
+}
+
+
+/* to be used when contents change in an HTTP message */
+#define http_msg_move_end(msg, bytes) do { \
+ unsigned int _bytes = (bytes); \
+ (msg)->next += (_bytes); \
+ (msg)->sov += (_bytes); \
+ (msg)->eoh += (_bytes); \
+ } while (0)
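The bookkeeping performed by `http_msg_move_end` shifts three offsets by the same amount whenever message contents change. A minimal stand-in (the real `struct http_msg` is defined in `types/proto_http.h`) shows the effect:

```c
/* Minimal stand-in for the three offsets the macro adjusts; the real
 * struct http_msg is defined in types/proto_http.h. */
struct demo_msg {
	unsigned int next, sov, eoh;
};

/* mirror of http_msg_move_end: shift all offsets past the change */
#define demo_msg_move_end(msg, bytes) do {	\
		unsigned int _b = (bytes);	\
		(msg)->next += _b;		\
		(msg)->sov  += _b;		\
		(msg)->eoh  += _b;		\
	} while (0)
```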
+
+
+/* Return the amount of bytes that need to be rewound before buf->p to access
+ * the current message's headers. The purpose is to be able to easily fetch
+ * the message's beginning before headers are forwarded, as well as after.
+ * The principle is that msg->eoh and msg->eol are immutable while msg->sov
+ * equals the sum of the two before forwarding and is zero after forwarding,
+ * so the difference cancels the rewinding.
+ */
+static inline int http_hdr_rewind(const struct http_msg *msg)
+{
+ return msg->eoh + msg->eol - msg->sov;
+}
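The invariant described above can be checked numerically: before forwarding, `sov` equals `eoh + eol` so the rewind is zero; after forwarding, `sov` drops to zero and the rewind equals the full header size. A self-contained sketch with a minimal stand-in structure (the real `struct http_msg` lives in `types/proto_http.h`):

```c
/* Stand-in for the three fields the rewind arithmetic uses. */
struct demo_http_msg {
	int eoh; /* end of headers, immutable while parsing */
	int eol; /* length of the last CRLF, immutable */
	int sov; /* eoh + eol before forwarding, 0 after */
};

/* mirror of http_hdr_rewind: bytes to rewind before buf->p to reach
 * the start of the headers, valid before and after forwarding */
static int demo_hdr_rewind(const struct demo_http_msg *msg)
{
	return msg->eoh + msg->eol - msg->sov;
}
```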
+
+/* Return the amount of bytes that need to be rewound before buf->p to access
+ * the current message's URI. The purpose is to be able to easily fetch
+ * the message's beginning before headers are forwarded, as well as after.
+ */
+static inline int http_uri_rewind(const struct http_msg *msg)
+{
+ return http_hdr_rewind(msg) - msg->sl.rq.u;
+}
+
+/* Return the amount of bytes that need to be rewound before buf->p to access
+ * the current message's BODY. The purpose is to be able to easily fetch
+ * the message's beginning before headers are forwarded, as well as after.
+ */
+static inline int http_body_rewind(const struct http_msg *msg)
+{
+ return http_hdr_rewind(msg) - msg->eoh - msg->eol;
+}
+
+/* Return the amount of bytes that need to be rewound before buf->p to access
+ * the current message's DATA. The difference with the function above is that
+ * if a chunk is present and has already been parsed, its size is skipped so
+ * that the byte pointed to is the first byte of actual data. The function is
+ * safe for use in state HTTP_MSG_DATA regardless of whether the headers were
+ * already forwarded or not.
+ */
+static inline int http_data_rewind(const struct http_msg *msg)
+{
+ return http_body_rewind(msg) - msg->sol;
+}
+
+/* Return the maximum amount of bytes that may be read after the beginning of
+ * the message body, according to the advertised length. The function is safe
+ * for use between HTTP_MSG_BODY and HTTP_MSG_DATA regardless of whether the
+ * headers were already forwarded or not.
+ */
+static inline int http_body_bytes(const struct http_msg *msg)
+{
+ int len;
+
+ len = msg->chn->buf->i - msg->sov - msg->sol;
+ if (len > msg->body_len)
+ len = msg->body_len;
+ return len;
+}
+
+/* for an http-request action ACT_HTTP_REQ_TRK_*, return a tracking index
+ * starting at zero for SC0. Unknown actions also return zero.
+ */
+static inline int http_req_trk_idx(int trk_action)
+{
+ return trk_action - ACT_ACTION_TRK_SC0;
+}
+
+/* for debugging, reports the HTTP message state name */
+static inline const char *http_msg_state_str(int msg_state)
+{
+ switch (msg_state) {
+ case HTTP_MSG_RQBEFORE: return "MSG_RQBEFORE";
+ case HTTP_MSG_RQBEFORE_CR: return "MSG_RQBEFORE_CR";
+ case HTTP_MSG_RQMETH: return "MSG_RQMETH";
+ case HTTP_MSG_RQMETH_SP: return "MSG_RQMETH_SP";
+ case HTTP_MSG_RQURI: return "MSG_RQURI";
+ case HTTP_MSG_RQURI_SP: return "MSG_RQURI_SP";
+ case HTTP_MSG_RQVER: return "MSG_RQVER";
+ case HTTP_MSG_RQLINE_END: return "MSG_RQLINE_END";
+ case HTTP_MSG_RPBEFORE: return "MSG_RPBEFORE";
+ case HTTP_MSG_RPBEFORE_CR: return "MSG_RPBEFORE_CR";
+ case HTTP_MSG_RPVER: return "MSG_RPVER";
+ case HTTP_MSG_RPVER_SP: return "MSG_RPVER_SP";
+ case HTTP_MSG_RPCODE: return "MSG_RPCODE";
+ case HTTP_MSG_RPCODE_SP: return "MSG_RPCODE_SP";
+ case HTTP_MSG_RPREASON: return "MSG_RPREASON";
+ case HTTP_MSG_RPLINE_END: return "MSG_RPLINE_END";
+ case HTTP_MSG_HDR_FIRST: return "MSG_HDR_FIRST";
+ case HTTP_MSG_HDR_NAME: return "MSG_HDR_NAME";
+ case HTTP_MSG_HDR_COL: return "MSG_HDR_COL";
+ case HTTP_MSG_HDR_L1_SP: return "MSG_HDR_L1_SP";
+ case HTTP_MSG_HDR_L1_LF: return "MSG_HDR_L1_LF";
+ case HTTP_MSG_HDR_L1_LWS: return "MSG_HDR_L1_LWS";
+ case HTTP_MSG_HDR_VAL: return "MSG_HDR_VAL";
+ case HTTP_MSG_HDR_L2_LF: return "MSG_HDR_L2_LF";
+ case HTTP_MSG_HDR_L2_LWS: return "MSG_HDR_L2_LWS";
+ case HTTP_MSG_LAST_LF: return "MSG_LAST_LF";
+ case HTTP_MSG_ERROR: return "MSG_ERROR";
+ case HTTP_MSG_BODY: return "MSG_BODY";
+ case HTTP_MSG_100_SENT: return "MSG_100_SENT";
+ case HTTP_MSG_CHUNK_SIZE: return "MSG_CHUNK_SIZE";
+ case HTTP_MSG_DATA: return "MSG_DATA";
+ case HTTP_MSG_CHUNK_CRLF: return "MSG_CHUNK_CRLF";
+ case HTTP_MSG_TRAILERS: return "MSG_TRAILERS";
+ case HTTP_MSG_DONE: return "MSG_DONE";
+ case HTTP_MSG_CLOSING: return "MSG_CLOSING";
+ case HTTP_MSG_CLOSED: return "MSG_CLOSED";
+ case HTTP_MSG_TUNNEL: return "MSG_TUNNEL";
+ default: return "MSG_??????";
+ }
+}
+
+#endif /* _PROTO_PROTO_HTTP_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/proto_tcp.h
+ * This file contains TCP socket protocol definitions.
+ *
+ * Copyright (C) 2000-2013 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_PROTO_TCP_H
+#define _PROTO_PROTO_TCP_H
+
+#include <common/config.h>
+#include <types/action.h>
+#include <types/task.h>
+#include <proto/stick_table.h>
+
+int tcp_bind_socket(int fd, int flags, struct sockaddr_storage *local, struct sockaddr_storage *remote);
+void tcpv4_add_listener(struct listener *listener);
+void tcpv6_add_listener(struct listener *listener);
+int tcp_pause_listener(struct listener *l);
+int tcp_connect_server(struct connection *conn, int data, int delack);
+int tcp_connect_probe(struct connection *conn);
+int tcp_get_src(int fd, struct sockaddr *sa, socklen_t salen, int dir);
+int tcp_get_dst(int fd, struct sockaddr *sa, socklen_t salen, int dir);
+int tcp_drain(int fd);
+int tcp_inspect_request(struct stream *s, struct channel *req, int an_bit);
+int tcp_inspect_response(struct stream *s, struct channel *rep, int an_bit);
+int tcp_exec_req_rules(struct session *sess);
+
+/* TCP keywords. */
+void tcp_req_conn_keywords_register(struct action_kw_list *kw_list);
+void tcp_req_cont_keywords_register(struct action_kw_list *kw_list);
+void tcp_res_cont_keywords_register(struct action_kw_list *kw_list);
+
+/* Export some samples. */
+int smp_fetch_src(const struct arg *args, struct sample *smp, const char *kw, void *private);
+
+
+/* for a tcp-request action ACT_TCP_TRK_*, return a tracking index starting at
+ * zero for SC0. Unknown actions also return zero.
+ */
+static inline int tcp_trk_idx(int trk_action)
+{
+ return trk_action - ACT_ACTION_TRK_SC0;
+}
+
+#endif /* _PROTO_PROTO_TCP_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/proto_udp.h
+ * This file provides functions related to UDP protocol.
+ *
+ * Copyright (C) 2014 Baptiste Assmann <bedis9@gmail.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_PROTO_UDP_H
+#define _PROTO_PROTO_UDP_H
+
+int dgram_fd_handler(int);
+
+#endif /* _PROTO_PROTO_UDP_H */
--- /dev/null
+/*
+ * include/proto/proto_uxst.h
+ * This file contains UNIX-stream socket protocol definitions.
+ *
+ * Copyright (C) 2000-2010 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_PROTO_UXST_H
+#define _PROTO_PROTO_UXST_H
+
+#include <common/config.h>
+#include <types/stream.h>
+#include <types/task.h>
+
+void uxst_add_listener(struct listener *listener);
+int uxst_pause_listener(struct listener *l);
+int uxst_get_src(int fd, struct sockaddr *sa, socklen_t salen, int dir);
+int uxst_get_dst(int fd, struct sockaddr *sa, socklen_t salen, int dir);
+
+#endif /* _PROTO_PROTO_UXST_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/protocol.h
+ * This file declares generic protocol management primitives.
+ *
+ * Copyright (C) 2000-2012 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_PROTOCOL_H
+#define _PROTO_PROTOCOL_H
+
+#include <sys/socket.h>
+#include <types/protocol.h>
+
+extern struct protocol *__protocol_by_family[AF_MAX];
+
+/* Registers the protocol <proto> */
+void protocol_register(struct protocol *proto);
+
+/* Unregisters the protocol <proto>. Note that all listeners must have
+ * previously been unbound.
+ */
+void protocol_unregister(struct protocol *proto);
+
+/* binds all listeners of all registered protocols. Returns a composition
+ * of ERR_NONE, ERR_RETRYABLE, ERR_FATAL.
+ */
+int protocol_bind_all(char *errmsg, int errlen);
+
+/* unbinds all listeners of all registered protocols. They are also closed.
+ * This must be performed before calling exit() in order to get a chance to
+ * remove file-system based sockets and pipes.
+ * Returns a composition of ERR_NONE, ERR_RETRYABLE, ERR_FATAL.
+ */
+int protocol_unbind_all(void);
+
+/* enables all listeners of all registered protocols. This is intended to be
+ * used after a fork() to enable reading on all file descriptors. Returns a
+ * composition of ERR_NONE, ERR_RETRYABLE, ERR_FATAL.
+ */
+int protocol_enable_all(void);
+
+/* returns the protocol associated to family <family> or NULL if not found */
+static inline struct protocol *protocol_by_family(int family)
+{
+ if (family >= 0 && family < AF_MAX)
+ return __protocol_by_family[family];
+ return NULL;
+}
+
+#endif /* _PROTO_PROTOCOL_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/proxy.h
+ * This file defines function prototypes for proxy management.
+ *
+ * Copyright (C) 2000-2011 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_PROXY_H
+#define _PROTO_PROXY_H
+
+#include <common/config.h>
+#include <common/ticks.h>
+#include <common/time.h>
+#include <types/global.h>
+#include <types/proxy.h>
+#include <types/listener.h>
+#include <proto/freq_ctr.h>
+
+extern struct proxy *proxy;
+extern struct eb_root used_proxy_id; /* list of proxy IDs in use */
+extern unsigned int error_snapshot_id; /* global ID assigned to each error then incremented */
+extern struct eb_root proxy_by_name; /* tree of proxies sorted by name */
+
+int start_proxies(int verbose);
+struct task *manage_proxy(struct task *t);
+void soft_stop(void);
+int pause_proxy(struct proxy *p);
+int resume_proxy(struct proxy *p);
+void stop_proxy(struct proxy *p);
+void pause_proxies(void);
+void resume_proxies(void);
+int stream_set_backend(struct stream *s, struct proxy *be);
+
+const char *proxy_cap_str(int cap);
+const char *proxy_mode_str(int mode);
+void proxy_store_name(struct proxy *px);
+struct proxy *proxy_find_by_id(int id, int cap, int table);
+struct proxy *proxy_find_by_name(const char *name, int cap, int table);
+struct proxy *proxy_find_best_match(int cap, const char *name, int id, int *diff);
+struct server *findserver(const struct proxy *px, const char *name);
+int proxy_cfg_ensure_no_http(struct proxy *curproxy);
+void init_new_proxy(struct proxy *p);
+int get_backend_server(const char *bk_name, const char *sv_name,
+ struct proxy **bk, struct server **sv);
+
+/*
+ * This function returns a string describing the proxy's type, derived from
+ * its capabilities, in a format suitable for error messages.
+ */
+static inline const char *proxy_type_str(struct proxy *proxy)
+{
+ return proxy_cap_str(proxy->cap);
+}
+
+/* Find the frontend having name <name>. The name may also start with a '#' to
+ * reference a numeric id. NULL is returned if not found.
+ */
+static inline struct proxy *proxy_fe_by_name(const char *name)
+{
+ return proxy_find_by_name(name, PR_CAP_FE, 0);
+}
+
+/* Find the backend having name <name>. The name may also start with a '#' to
+ * reference a numeric id. NULL is returned if not found.
+ */
+static inline struct proxy *proxy_be_by_name(const char *name)
+{
+ return proxy_find_by_name(name, PR_CAP_BE, 0);
+}
+
+/* Find the table having name <name>. The name may also start with a '#' to
+ * reference a numeric id. NULL is returned if not found.
+ */
+static inline struct proxy *proxy_tbl_by_name(const char *name)
+{
+ return proxy_find_by_name(name, 0, 1);
+}
+
+/* this function initializes all timeouts for proxy p */
+static inline void proxy_reset_timeouts(struct proxy *proxy)
+{
+ proxy->timeout.client = TICK_ETERNITY;
+ proxy->timeout.tarpit = TICK_ETERNITY;
+ proxy->timeout.queue = TICK_ETERNITY;
+ proxy->timeout.connect = TICK_ETERNITY;
+ proxy->timeout.server = TICK_ETERNITY;
+ proxy->timeout.httpreq = TICK_ETERNITY;
+ proxy->timeout.check = TICK_ETERNITY;
+ proxy->timeout.tunnel = TICK_ETERNITY;
+}
+
+/* increase the number of cumulated connections received on the designated frontend */
+static inline void proxy_inc_fe_conn_ctr(struct listener *l, struct proxy *fe)
+{
+ fe->fe_counters.cum_conn++;
+ if (l->counters)
+ l->counters->cum_conn++;
+
+ update_freq_ctr(&fe->fe_conn_per_sec, 1);
+ if (fe->fe_conn_per_sec.curr_ctr > fe->fe_counters.cps_max)
+ fe->fe_counters.cps_max = fe->fe_conn_per_sec.curr_ctr;
+}
+
+/* increase the number of cumulated connections accepted by the designated frontend */
+static inline void proxy_inc_fe_sess_ctr(struct listener *l, struct proxy *fe)
+{
+ fe->fe_counters.cum_sess++;
+ if (l->counters)
+ l->counters->cum_sess++;
+ update_freq_ctr(&fe->fe_sess_per_sec, 1);
+ if (fe->fe_sess_per_sec.curr_ctr > fe->fe_counters.sps_max)
+ fe->fe_counters.sps_max = fe->fe_sess_per_sec.curr_ctr;
+}
+
+/* increase the number of cumulated connections on the designated backend */
+static inline void proxy_inc_be_ctr(struct proxy *be)
+{
+ be->be_counters.cum_conn++;
+ update_freq_ctr(&be->be_sess_per_sec, 1);
+ if (be->be_sess_per_sec.curr_ctr > be->be_counters.sps_max)
+ be->be_counters.sps_max = be->be_sess_per_sec.curr_ctr;
+}
+
+/* increase the number of cumulated requests on the designated frontend */
+static inline void proxy_inc_fe_req_ctr(struct proxy *fe)
+{
+ fe->fe_counters.p.http.cum_req++;
+ update_freq_ctr(&fe->fe_req_per_sec, 1);
+ if (fe->fe_req_per_sec.curr_ctr > fe->fe_counters.p.http.rps_max)
+ fe->fe_counters.p.http.rps_max = fe->fe_req_per_sec.curr_ctr;
+}
+
+#endif /* _PROTO_PROXY_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/queue.h
+ * This file defines everything related to queues.
+ *
+ * Copyright (C) 2000-2012 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_QUEUE_H
+#define _PROTO_QUEUE_H
+
+#include <common/config.h>
+#include <common/memory.h>
+#include <common/mini-clist.h>
+
+#include <types/proxy.h>
+#include <types/queue.h>
+#include <types/stream.h>
+#include <types/server.h>
+#include <types/task.h>
+
+#include <proto/backend.h>
+
+extern struct pool_head *pool2_pendconn;
+
+int init_pendconn();
+struct stream *pendconn_get_next_strm(struct server *srv, struct proxy *px);
+struct pendconn *pendconn_add(struct stream *strm);
+void pendconn_free(struct pendconn *p);
+void process_srv_queue(struct server *s);
+unsigned int srv_dynamic_maxconn(const struct server *s);
+int pendconn_redistribute(struct server *s);
+int pendconn_grab_from_px(struct server *s);
+
+
+/* Returns the first pending connection for server <s>, which may be NULL if
+ * nothing is pending.
+ */
+static inline struct pendconn *pendconn_from_srv(const struct server *s) {
+ if (!s->nbpend)
+ return NULL;
+
+ return LIST_ELEM(s->pendconns.n, struct pendconn *, list);
+}
+
+/* Returns the first pending connection for proxy <px>, which may be NULL if
+ * nothing is pending.
+ */
+static inline struct pendconn *pendconn_from_px(const struct proxy *px) {
+ if (!px->nbpend)
+ return NULL;
+
+ return LIST_ELEM(px->pendconns.n, struct pendconn *, list);
+}
+
+/* Returns 0 if all slots are full on a server, or 1 if there are slots available. */
+static inline int server_has_room(const struct server *s) {
+ return !s->maxconn || s->cur_sess < srv_dynamic_maxconn(s);
+}
+
+/* returns 0 if nothing has to be done for server <s> regarding queued connections,
+ * and non-zero otherwise. If the server is down, we only check its own queue. Suited
+ * for an if/else usage.
+ */
+static inline int may_dequeue_tasks(const struct server *s, const struct proxy *p) {
+ return (s && (s->nbpend || (p->nbpend && srv_is_usable(s))) &&
+ (!s->maxconn || s->cur_sess < srv_dynamic_maxconn(s)));
+}
+
+#endif /* _PROTO_QUEUE_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/raw_sock.h
+ * This file contains definition for raw stream socket operations
+ *
+ * Copyright (C) 2000-2012 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_RAW_SOCK_H
+#define _PROTO_RAW_SOCK_H
+
+#include <types/stream_interface.h>
+
+extern struct xprt_ops raw_sock;
+
+#endif /* _PROTO_RAW_SOCK_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/sample.h
+ * Functions for samples management.
+ *
+ * Copyright (C) 2009-2010 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+ * Copyright (C) 2012 Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_SAMPLE_H
+#define _PROTO_SAMPLE_H
+
+#include <types/sample.h>
+#include <types/stick_table.h>
+
+extern const char *smp_to_type[SMP_TYPES];
+
+struct sample_expr *sample_parse_expr(char **str, int *idx, const char *file, int line, char **err, struct arg_list *al);
+struct sample_conv *find_sample_conv(const char *kw, int len);
+struct sample *sample_process(struct proxy *px, struct session *sess,
+ struct stream *strm, unsigned int opt,
+ struct sample_expr *expr, struct sample *p);
+struct sample *sample_fetch_as_type(struct proxy *px, struct session *sess,
+ struct stream *strm, unsigned int opt,
+ struct sample_expr *expr, int smp_type);
+void sample_register_fetches(struct sample_fetch_kw_list *psl);
+void sample_register_convs(struct sample_conv_kw_list *psl);
+const char *sample_src_names(unsigned int use);
+const char *sample_ckp_names(unsigned int use);
+struct sample_fetch *find_sample_fetch(const char *kw, int len);
+struct sample_fetch *sample_fetch_getnext(struct sample_fetch *current, int *idx);
+struct sample_conv *sample_conv_getnext(struct sample_conv *current, int *idx);
+int smp_resolve_args(struct proxy *p);
+int smp_expr_output_type(struct sample_expr *expr);
+int c_none(struct sample *smp);
+int smp_dup(struct sample *smp);
+
+/*
+ * This function just applies a cast to a sample. It returns 0 if the cast is
+ * not available or if the cast fails, otherwise it returns 1. It does not
+ * modify the input sample on failure.
+ */
+static inline
+int sample_convert(struct sample *sample, int req_type)
+{
+ if (!sample_casts[sample->data.type][req_type])
+ return 0;
+ if (sample_casts[sample->data.type][req_type] == c_none)
+ return 1;
+ return sample_casts[sample->data.type][req_type](sample);
+}
+
+#endif /* _PROTO_SAMPLE_H */
--- /dev/null
+/*
+ * include/proto/server.h
+ * This file defines everything related to servers.
+ *
+ * Copyright (C) 2000-2009 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_SERVER_H
+#define _PROTO_SERVER_H
+
+#include <unistd.h>
+
+#include <common/config.h>
+#include <common/time.h>
+#include <types/dns.h>
+#include <types/proxy.h>
+#include <types/queue.h>
+#include <types/server.h>
+
+#include <proto/queue.h>
+#include <proto/log.h>
+#include <proto/freq_ctr.h>
+
+int srv_downtime(const struct server *s);
+int srv_lastsession(const struct server *s);
+int srv_getinter(const struct check *check);
+int parse_server(const char *file, int linenum, char **args, struct proxy *curproxy, struct proxy *defproxy);
+int update_server_addr(struct server *s, void *ip, int ip_sin_family, char *updater);
+struct server *server_find_by_id(struct proxy *bk, int id);
+struct server *server_find_by_name(struct proxy *bk, const char *name);
+struct server *server_find_best_match(struct proxy *bk, char *name, int id, int *diff);
+void apply_server_state(void);
+
+/* functions related to server name resolution */
+int snr_update_srv_status(struct server *s);
+int snr_resolution_cb(struct dns_resolution *resolution, struct dns_nameserver *nameserver, unsigned char *response, int response_len);
+int snr_resolution_error_cb(struct dns_resolution *resolution, int error_code);
+
+/* increase the number of cumulated connections on the designated server */
+static inline void srv_inc_sess_ctr(struct server *s)
+{
+ s->counters.cum_sess++;
+ update_freq_ctr(&s->sess_per_sec, 1);
+ if (s->sess_per_sec.curr_ctr > s->counters.sps_max)
+ s->counters.sps_max = s->sess_per_sec.curr_ctr;
+}
+
+/* set the time of last session on the designated server */
+static inline void srv_set_sess_last(struct server *s)
+{
+ s->counters.last_sess = now.tv_sec;
+}
+
+/*
+ * Registers the server keyword list <kwl> as a list of valid keywords for next
+ * parsing sessions.
+ */
+void srv_register_keywords(struct srv_kw_list *kwl);
+
+/* Return a pointer to the server keyword <kw>, or NULL if not found. */
+struct srv_kw *srv_find_kw(const char *kw);
+
+/* Dumps all registered "server" keywords to the <out> string pointer. */
+void srv_dump_kws(char **out);
+
+/* Recomputes the server's eweight based on its state, uweight, the current time,
+ * and the proxy's algorithm. To be used after updating sv->uweight. The warmup
+ * state is automatically disabled once its time has elapsed.
+ */
+void server_recalc_eweight(struct server *sv);
+
+/* returns the current server throttle rate between 0 and 100% */
+static inline unsigned int server_throttle_rate(struct server *sv)
+{
+ struct proxy *px = sv->proxy;
+
+ /* when uweight is 0, we're in soft-stop so that cannot be a slowstart,
+ * thus the throttle is 100%.
+ */
+ if (!sv->uweight)
+ return 100;
+
+ return (100U * px->lbprm.wmult * sv->eweight + px->lbprm.wdiv - 1) / (px->lbprm.wdiv * sv->uweight);
+}
+
+/*
+ * Parses weight_str and configures sv accordingly.
+ * Returns NULL on success, error message string otherwise.
+ */
+const char *server_parse_weight_change_request(struct server *sv,
+ const char *weight_str);
+
+/*
+ * Parses addr_str and configures sv accordingly.
+ * Returns NULL on success, error message string otherwise.
+ */
+const char *server_parse_addr_change_request(struct server *sv,
+ const char *addr_str);
+
+/*
+ * Return true if the server has a zero user-weight, meaning it's in draining
+ * mode (ie: not taking new non-persistent connections).
+ */
+static inline int server_is_draining(const struct server *s)
+{
+ return !s->uweight || (s->admin & SRV_ADMF_DRAIN);
+}
+
+/* Shutdown all connections of a server. The caller must pass a termination
+ * code in <why>, which must be one of SF_ERR_* indicating the reason for the
+ * shutdown.
+ */
+void srv_shutdown_sessions(struct server *srv, int why);
+
+/* Shutdown all connections of all backup servers of a proxy. The caller must
+ * pass a termination code in <why>, which must be one of SF_ERR_* indicating
+ * the reason for the shutdown.
+ */
+void srv_shutdown_backup_sessions(struct proxy *px, int why);
+
+/* Appends some information to a message string related to a server going UP or
+ * DOWN. If both <forced> and <reason> are null and the server tracks another
+ * one, a "via" indication will be provided to know where the status came from.
+ * If <reason> is non-null, the entire string will be appended after a comma and
+ * a space (eg: to report some information from the check that changed the state).
+ * If <xferred> is non-negative, some information about requeued sessions is
+ * provided.
+ */
+void srv_append_status(struct chunk *msg, struct server *s, const char *reason, int xferred, int forced);
+
+/* Marks server <s> down, regardless of its checks' statuses, notifies by all
+ * available means, recounts the remaining servers on the proxy and transfers
+ * queued sessions whenever possible to other servers. It automatically
+ * recomputes the number of servers, but not the map. Maintenance servers are
+ * ignored. It reports <reason> if non-null as the reason for going down. Note
+ * that it makes use of the trash to build the log strings, so <reason> must
+ * not be placed there.
+ */
+void srv_set_stopped(struct server *s, const char *reason);
+
+/* Marks server <s> up regardless of its checks' statuses and provided it isn't
+ * in maintenance. Notifies by all available means, recounts the remaining
+ * servers on the proxy and tries to grab requests from the proxy. It
+ * automatically recomputes the number of servers, but not the map. Maintenance
+ * servers are ignored. It reports <reason> if non-null as the reason for going
+ * up. Note that it makes use of the trash to build the log strings, so <reason>
+ * must not be placed there.
+ */
+void srv_set_running(struct server *s, const char *reason);
+
+/* Marks server <s> stopping regardless of its checks' statuses and provided it
+ * isn't in maintenance. Notifies by all available means, recounts the remaining
+ * servers on the proxy and tries to grab requests from the proxy. It
+ * automatically recomputes the number of servers, but not the map. Maintenance
+ * servers are ignored. It reports <reason> if non-null as the reason for
+ * stopping. Note that it makes use of the trash to build the log strings, so
+ * <reason> must not be placed there.
+ */
+void srv_set_stopping(struct server *s, const char *reason);
+
+/* Enables admin flag <mode> (among SRV_ADMF_*) on server <s>. This is used to
+ * enforce either maint mode or drain mode. It is not allowed to set more than
+ * one flag at once. The equivalent "inherited" flag is propagated to all
+ * tracking servers. Maintenance mode disables health checks (but not agent
+ * checks). When either the flag is already set or no flag is passed, nothing
+ * is done.
+ */
+void srv_set_admin_flag(struct server *s, enum srv_admin mode);
+
+/* Disables admin flag <mode> (among SRV_ADMF_*) on server <s>. This is used to
+ * stop enforcing either maint mode or drain mode. It is not allowed to set more
+ * than one flag at once. The equivalent "inherited" flag is propagated to all
+ * tracking servers. Leaving maintenance mode re-enables health checks. When
+ * either the flag is already cleared or no flag is passed, nothing is done.
+ */
+void srv_clr_admin_flag(struct server *s, enum srv_admin mode);
+
+/* Puts server <s> into maintenance mode, and propagate that status down to all
+ * tracking servers.
+ */
+static inline void srv_adm_set_maint(struct server *s)
+{
+ srv_set_admin_flag(s, SRV_ADMF_FMAINT);
+ srv_clr_admin_flag(s, SRV_ADMF_FDRAIN);
+}
+
+/* Puts server <s> into drain mode, and propagate that status down to all
+ * tracking servers.
+ */
+static inline void srv_adm_set_drain(struct server *s)
+{
+ srv_set_admin_flag(s, SRV_ADMF_FDRAIN);
+ srv_clr_admin_flag(s, SRV_ADMF_FMAINT);
+}
+
+/* Puts server <s> into ready mode, and propagate that status down to all
+ * tracking servers.
+ */
+static inline void srv_adm_set_ready(struct server *s)
+{
+ srv_clr_admin_flag(s, SRV_ADMF_FDRAIN);
+ srv_clr_admin_flag(s, SRV_ADMF_FMAINT);
+}
+
+#endif /* _PROTO_SERVER_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/session.h
+ * This file defines everything related to sessions.
+ *
+ * Copyright (C) 2000-2015 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_SESSION_H
+#define _PROTO_SESSION_H
+
+#include <common/config.h>
+#include <common/buffer.h>
+#include <common/debug.h>
+#include <common/memory.h>
+
+#include <types/global.h>
+#include <types/session.h>
+
+#include <proto/stick_table.h>
+
+extern struct pool_head *pool2_session;
+struct session *session_new(struct proxy *fe, struct listener *li, enum obj_type *origin);
+void session_free(struct session *sess);
+int init_session();
+int session_accept_fd(struct listener *l, int cfd, struct sockaddr_storage *addr);
+
+/* Removes the session's refcount on the tracked counters, and clears the
+ * pointers to ensure this is only performed once. The caller is responsible
+ * for ensuring that the pointers are valid first.
+ */
+static inline void session_store_counters(struct session *sess)
+{
+ void *ptr;
+ int i;
+
+ for (i = 0; i < MAX_SESS_STKCTR; i++) {
+ struct stkctr *stkctr = &sess->stkctr[i];
+
+ if (!stkctr_entry(stkctr))
+ continue;
+
+ ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_CONN_CUR);
+ if (ptr)
+ stktable_data_cast(ptr, conn_cur)--;
+ stkctr_entry(stkctr)->ref_cnt--;
+ stksess_kill_if_expired(stkctr->table, stkctr_entry(stkctr));
+ stkctr_set_entry(stkctr, NULL);
+ }
+}
+
+
+#endif /* _PROTO_SESSION_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * shctx.h - shared context management functions for SSL
+ *
+ * Copyright (C) 2011-2012 EXCELIANCE
+ *
+ * Author: Emeric Brun - emeric@exceliance.fr
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef SHCTX_H
+#define SHCTX_H
+#include <openssl/ssl.h>
+#include <stdint.h>
+
+#ifndef SHSESS_BLOCK_MIN_SIZE
+#define SHSESS_BLOCK_MIN_SIZE 128
+#endif
+
+#ifndef SHSESS_MAX_DATA_LEN
+#define SHSESS_MAX_DATA_LEN 4096
+#endif
+
+#ifndef SHCTX_APPNAME
+#define SHCTX_APPNAME "haproxy"
+#endif
+
+#define SHCTX_E_ALLOC_CACHE -1
+#define SHCTX_E_INIT_LOCK -2
+
+/* Allocates the shared memory context.
+ * <size> is the number of blocks allocated in the cache; each block is large
+ * enough to contain a classic session (without client cert) and is at least
+ * SHSESS_BLOCK_MIN_SIZE bytes (128 by default). If <size> is less than or
+ * equal to 0, the SSL cache is disabled.
+ * Set <use_shared_memory> to 1 to use mapped shared memory instead of private
+ * memory (ignored if compiled with USE_PRIVATE_CACHE=1).
+ * Returns: -1 on allocation failure, <size> if it performed the context
+ * allocation, and 0 if the cache was already allocated.
+ */
+int shared_context_init(int size, int use_shared_memory);
+
+/* Sets the shared cache callbacks on an SSL context, sets the session cache
+ * mode to server and disables the OpenSSL internal cache.
+ * The shared context MUST be initialized first. */
+void shared_context_set_cache(SSL_CTX *ctx);
+
+#endif /* SHCTX_H */
+
--- /dev/null
+/*
+ * include/proto/signal.h
+ * Asynchronous signal delivery functions.
+ *
+ * Copyright 2000-2010 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#ifndef _PROTO_SIGNAL_H
+#define _PROTO_SIGNAL_H
+
+#include <signal.h>
+#include <common/standard.h>
+#include <types/signal.h>
+#include <types/task.h>
+
+extern int signal_queue_len;
+extern struct signal_descriptor signal_state[];
+extern struct pool_head *pool2_sig_handlers;
+
+void signal_handler(int sig);
+void __signal_process_queue();
+int signal_init();
+void deinit_signals();
+struct sig_handler *signal_register_fct(int sig, void (*fct)(struct sig_handler *), int arg);
+struct sig_handler *signal_register_task(int sig, struct task *task, int reason);
+void signal_unregister_handler(struct sig_handler *handler);
+void signal_unregister_target(int sig, void *target);
+
+static inline void signal_process_queue()
+{
+ if (unlikely(signal_queue_len > 0))
+ __signal_process_queue();
+}
+
+#endif /* _PROTO_SIGNAL_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/ssl_sock.h
+ * This file contains definition for ssl stream socket operations
+ *
+ * Copyright (C) 2012 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_SSL_SOCK_H
+#define _PROTO_SSL_SOCK_H
+#include <openssl/ssl.h>
+
+#include <types/connection.h>
+#include <types/listener.h>
+#include <types/proxy.h>
+#include <types/stream_interface.h>
+
+extern struct xprt_ops ssl_sock;
+extern int sslconns;
+extern int totalsslconns;
+
+/* boolean, returns true if connection is over SSL */
+static inline
+int ssl_sock_is_ssl(struct connection *conn)
+{
+ if (!conn || conn->xprt != &ssl_sock || !conn->xprt_ctx)
+ return 0;
+ else
+ return 1;
+}
+
+int ssl_sock_handshake(struct connection *conn, unsigned int flag);
+int ssl_sock_prepare_ctx(struct bind_conf *bind_conf, SSL_CTX *ctx, struct proxy *proxy);
+int ssl_sock_prepare_all_ctx(struct bind_conf *bind_conf, struct proxy *px);
+int ssl_sock_prepare_srv_ctx(struct server *srv, struct proxy *px);
+void ssl_sock_free_srv_ctx(struct server *srv);
+void ssl_sock_free_all_ctx(struct bind_conf *bind_conf);
+int ssl_sock_load_ca(struct bind_conf *bind_conf, struct proxy *px);
+void ssl_sock_free_ca(struct bind_conf *bind_conf);
+const char *ssl_sock_get_cipher_name(struct connection *conn);
+const char *ssl_sock_get_proto_version(struct connection *conn);
+char *ssl_sock_get_version(struct connection *conn);
+void ssl_sock_set_servername(struct connection *conn, const char *hostname);
+int ssl_sock_get_cert_used_sess(struct connection *conn);
+int ssl_sock_get_cert_used_conn(struct connection *conn);
+int ssl_sock_get_remote_common_name(struct connection *conn, struct chunk *out);
+unsigned int ssl_sock_get_verify_result(struct connection *conn);
+#if (defined SSL_CTRL_SET_TLSEXT_STATUS_REQ_CB && !defined OPENSSL_NO_OCSP)
+int ssl_sock_update_ocsp_response(struct chunk *ocsp_response, char **err);
+#endif
+#if (defined SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB && TLS_TICKETS_NO > 0)
+int ssl_sock_update_tlskey(char *filename, struct chunk *tlskey, char **err);
+struct tls_keys_ref *tlskeys_ref_lookup(const char *filename);
+struct tls_keys_ref *tlskeys_ref_lookupid(int unique_id);
+void tlskeys_finalize_config(void);
+#endif
+#ifndef OPENSSL_NO_DH
+int ssl_sock_load_global_dh_param_from_file(const char *filename);
+#endif
+
+SSL_CTX *ssl_sock_create_cert(struct connection *conn, const char *servername, unsigned int serial);
+SSL_CTX *ssl_sock_get_generated_cert(unsigned int serial, struct bind_conf *bind_conf);
+int ssl_sock_set_generated_cert(SSL_CTX *ctx, unsigned int serial, struct bind_conf *bind_conf);
+unsigned int ssl_sock_generated_cert_serial(const void *data, size_t len);
+
+#endif /* _PROTO_SSL_SOCK_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/stick_table.h
+ * Functions for stick tables management.
+ *
+ * Copyright (C) 2009-2010 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+ * Copyright (C) 2010 Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_STICK_TABLE_H
+#define _PROTO_STICK_TABLE_H
+
+#include <common/errors.h>
+#include <common/ticks.h>
+#include <common/time.h>
+#include <types/stick_table.h>
+
+#define stktable_data_size(type) (sizeof(((union stktable_data*)0)->type))
+#define stktable_data_cast(ptr, type) ((union stktable_data*)(ptr))->type
+
+extern struct stktable_key *static_table_key;
+
+struct stksess *stksess_new(struct stktable *t, struct stktable_key *key);
+void stksess_setkey(struct stktable *t, struct stksess *ts, struct stktable_key *key);
+void stksess_free(struct stktable *t, struct stksess *ts);
+void stksess_kill(struct stktable *t, struct stksess *ts);
+
+int stktable_init(struct stktable *t);
+int stktable_parse_type(char **args, int *idx, unsigned long *type, size_t *key_size);
+struct stksess *stktable_get_entry(struct stktable *table, struct stktable_key *key);
+struct stksess *stktable_store(struct stktable *t, struct stksess *ts, int local);
+struct stksess *stktable_touch(struct stktable *t, struct stksess *ts, int local);
+struct stksess *stktable_lookup(struct stktable *t, struct stksess *ts);
+struct stksess *stktable_lookup_key(struct stktable *t, struct stktable_key *key);
+struct stksess *stktable_update_key(struct stktable *table, struct stktable_key *key);
+struct stktable_key *smp_to_stkey(struct sample *smp, struct stktable *t);
+struct stktable_key *stktable_fetch_key(struct stktable *t, struct proxy *px, struct session *sess,
+ struct stream *strm, unsigned int opt,
+ struct sample_expr *expr, struct sample *smp);
+int stktable_compatible_sample(struct sample_expr *expr, unsigned long table_type);
+int stktable_register_data_store(int idx, const char *name, int std_type, int arg_type);
+int stktable_get_data_type(char *name);
+int stktable_trash_oldest(struct stktable *t, int to_batch);
+
+/* return allocation size for standard data type <type> */
+static inline int stktable_type_size(int type)
+{
+ switch(type) {
+ case STD_T_SINT:
+ case STD_T_UINT:
+ return sizeof(int);
+ case STD_T_ULL:
+ return sizeof(unsigned long long);
+ case STD_T_FRQP:
+ return sizeof(struct freq_ctr_period);
+ }
+ return 0;
+}
+
+/* Reserve some space for data type <type>, and associate argument <sa> with it
+ * if not NULL. Returns PE_NONE (0) if OK or an error code among:
+ * - PE_ENUM_OOR if <type> does not exist
+ * - PE_EXIST if <type> is already registered
+ * - PE_ARG_NOT_USED if <sa> was provided but not expected
+ * - PE_ARG_MISSING if <sa> was expected but not provided
+ */
+static inline int stktable_alloc_data_type(struct stktable *t, int type, const char *sa)
+{
+ if (type >= STKTABLE_DATA_TYPES)
+ return PE_ENUM_OOR;
+
+ if (t->data_ofs[type])
+ /* already allocated */
+ return PE_EXIST;
+
+ switch (stktable_data_types[type].arg_type) {
+ case ARG_T_NONE:
+ if (sa)
+ return PE_ARG_NOT_USED;
+ break;
+ case ARG_T_INT:
+ if (!sa)
+ return PE_ARG_MISSING;
+ t->data_arg[type].i = atoi(sa);
+ break;
+ case ARG_T_DELAY:
+ if (!sa)
+ return PE_ARG_MISSING;
+ sa = parse_time_err(sa, &t->data_arg[type].u, TIME_UNIT_MS);
+ if (sa)
+ return PE_ARG_INVC; /* invalid char */
+ break;
+ }
+
+ t->data_size += stktable_type_size(stktable_data_types[type].std_type);
+ t->data_ofs[type] = -t->data_size;
+ return PE_NONE;
+}
+
+/* return pointer for data type <type> in sticky session <ts> of table <t>, or
+ * NULL if either <ts> is NULL or the type is not stored.
+ */
+static inline void *stktable_data_ptr(struct stktable *t, struct stksess *ts, int type)
+{
+ if (type >= STKTABLE_DATA_TYPES)
+ return NULL;
+
+ if (!t->data_ofs[type]) /* type not stored */
+ return NULL;
+
+ if (!ts)
+ return NULL;
+
+ return (void *)ts + t->data_ofs[type];
+}
+
+/* kill an entry if it's expired and its ref_cnt is zero */
+static inline void stksess_kill_if_expired(struct stktable *t, struct stksess *ts)
+{
+ if (t->expire != TICK_ETERNITY && tick_is_expired(ts->expire, now_ms))
+ stksess_kill(t, ts);
+}
+
+/* sets the stick counter's entry pointer */
+static inline void stkctr_set_entry(struct stkctr *stkctr, struct stksess *entry)
+{
+ stkctr->entry = caddr_from_ptr(entry, 0);
+}
+
+/* returns the entry pointer from a stick counter */
+static inline struct stksess *stkctr_entry(struct stkctr *stkctr)
+{
+ return caddr_to_ptr(stkctr->entry);
+}
+
+/* returns the two flags from a stick counter */
+static inline unsigned int stkctr_flags(struct stkctr *stkctr)
+{
+ return caddr_to_data(stkctr->entry);
+}
+
+/* sets up to two flags at a time on a composite address */
+static inline void stkctr_set_flags(struct stkctr *stkctr, unsigned int flags)
+{
+ stkctr->entry = caddr_set_flags(stkctr->entry, flags);
+}
+
+/* clears up to two flags at a time from a composite address */
+static inline void stkctr_clr_flags(struct stkctr *stkctr, unsigned int flags)
+{
+ stkctr->entry = caddr_clr_flags(stkctr->entry, flags);
+}
+
+#endif /* _PROTO_STICK_TABLE_H */
--- /dev/null
+/*
+ * include/proto/stream.h
+ * This file defines everything related to streams.
+ *
+ * Copyright (C) 2000-2010 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_STREAM_H
+#define _PROTO_STREAM_H
+
+#include <common/config.h>
+#include <common/memory.h>
+#include <types/stream.h>
+#include <proto/fd.h>
+#include <proto/freq_ctr.h>
+#include <proto/stick_table.h>
+#include <proto/task.h>
+
+extern struct pool_head *pool2_stream;
+extern struct list streams;
+extern struct list buffer_wq;
+
+extern struct data_cb sess_conn_cb;
+
+struct stream *stream_new(struct session *sess, struct task *t, enum obj_type *origin);
+
+/* perform minimal initializations ; returns 0 in case of error, 1 if OK. */
+int init_stream(void);
+
+/* kill a stream and set the termination flags to <why> (one of SF_ERR_*) */
+void stream_shutdown(struct stream *stream, int why);
+
+void stream_process_counters(struct stream *s);
+void sess_change_server(struct stream *sess, struct server *newsrv);
+struct task *process_stream(struct task *t);
+void default_srv_error(struct stream *s, struct stream_interface *si);
+struct stkctr *smp_fetch_sc_stkctr(struct session *sess, struct stream *strm, const struct arg *args, const char *kw);
+int parse_track_counters(char **args, int *arg,
+ int section_type, struct proxy *curpx,
+ struct track_ctr_prm *prm,
+ struct proxy *defpx, char **err);
+
+/* Update the stream's backend and server time stats */
+void stream_update_time_stats(struct stream *s);
+void __stream_offer_buffers(int rqlimit);
+static inline void stream_offer_buffers(void);
+int stream_alloc_work_buffer(struct stream *s);
+void stream_release_buffers(struct stream *s);
+int stream_alloc_recv_buffer(struct channel *chn);
+
+/* returns the session this stream belongs to */
+static inline struct session *strm_sess(const struct stream *strm)
+{
+ return strm->sess;
+}
+
+/* returns the frontend this stream was initiated from */
+static inline struct proxy *strm_fe(const struct stream *strm)
+{
+ return strm->sess->fe;
+}
+
+/* returns the listener this stream was initiated from */
+static inline struct listener *strm_li(const struct stream *strm)
+{
+ return strm->sess->listener;
+}
+
+/* returns a pointer to the origin of the session which created this stream */
+static inline enum obj_type *strm_orig(const struct stream *strm)
+{
+ return strm->sess->origin;
+}
+
+/* Remove the refcount from the stream to the tracked counters, and clear the
+ * pointer to ensure this is only performed once. The caller is responsible for
+ * ensuring that the pointer is valid first. We must be extremely careful not
+ * to touch the entries we inherited from the session.
+ */
+static inline void stream_store_counters(struct stream *s)
+{
+ void *ptr;
+ int i;
+
+ for (i = 0; i < MAX_SESS_STKCTR; i++) {
+ if (!stkctr_entry(&s->stkctr[i]))
+ continue;
+
+ if (stkctr_entry(&s->sess->stkctr[i]))
+ continue;
+
+ ptr = stktable_data_ptr(s->stkctr[i].table, stkctr_entry(&s->stkctr[i]), STKTABLE_DT_CONN_CUR);
+ if (ptr)
+ stktable_data_cast(ptr, conn_cur)--;
+ stkctr_entry(&s->stkctr[i])->ref_cnt--;
+ stksess_kill_if_expired(s->stkctr[i].table, stkctr_entry(&s->stkctr[i]));
+ stkctr_set_entry(&s->stkctr[i], NULL);
+ }
+}
+
+/* Remove the refcount from the stream counters tracked at the content level if
+ * any, and clear the pointer to ensure this is only performed once. The caller
+ * is responsible for ensuring that the pointer is valid first. We must be
+ * extremely careful not to touch the entries we inherited from the session.
+ */
+static inline void stream_stop_content_counters(struct stream *s)
+{
+ void *ptr;
+ int i;
+
+ for (i = 0; i < MAX_SESS_STKCTR; i++) {
+ if (!stkctr_entry(&s->stkctr[i]))
+ continue;
+
+ if (stkctr_entry(&s->sess->stkctr[i]))
+ continue;
+
+ if (!(stkctr_flags(&s->stkctr[i]) & STKCTR_TRACK_CONTENT))
+ continue;
+
+ ptr = stktable_data_ptr(s->stkctr[i].table, stkctr_entry(&s->stkctr[i]), STKTABLE_DT_CONN_CUR);
+ if (ptr)
+ stktable_data_cast(ptr, conn_cur)--;
+ stkctr_entry(&s->stkctr[i])->ref_cnt--;
+ stksess_kill_if_expired(s->stkctr[i].table, stkctr_entry(&s->stkctr[i]));
+ stkctr_set_entry(&s->stkctr[i], NULL);
+ }
+}
+
+/* Increase total and concurrent connection count for stick entry <ts> of table
+ * <t>. The caller is responsible for ensuring that <t> and <ts> are valid
+ * pointers, and for calling this only once per connection.
+ */
+static inline void stream_start_counters(struct stktable *t, struct stksess *ts)
+{
+ void *ptr;
+
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_CONN_CUR);
+ if (ptr)
+ stktable_data_cast(ptr, conn_cur)++;
+
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_CONN_CNT);
+ if (ptr)
+ stktable_data_cast(ptr, conn_cnt)++;
+
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_CONN_RATE);
+ if (ptr)
+ update_freq_ctr_period(&stktable_data_cast(ptr, conn_rate),
+ t->data_arg[STKTABLE_DT_CONN_RATE].u, 1);
+ if (tick_isset(t->expire))
+ ts->expire = tick_add(now_ms, MS_TO_TICKS(t->expire));
+}
+
+/* Enable tracking of stream counters on stksess <ts> through counter <ctr>.
+ * The caller is responsible for ensuring that <t> and <ts> are valid pointers.
+ * Some controls are performed to ensure the state can still change.
+ */
+static inline void stream_track_stkctr(struct stkctr *ctr, struct stktable *t, struct stksess *ts)
+{
+ if (stkctr_entry(ctr))
+ return;
+
+ ts->ref_cnt++;
+ ctr->table = t;
+ stkctr_set_entry(ctr, ts);
+ stream_start_counters(t, ts);
+}
+
+/* Increase the number of cumulated HTTP requests in the tracked counters */
+static inline void stream_inc_http_req_ctr(struct stream *s)
+{
+ void *ptr;
+ int i;
+
+ for (i = 0; i < MAX_SESS_STKCTR; i++) {
+ struct stkctr *stkctr = &s->stkctr[i];
+
+ if (!stkctr_entry(stkctr)) {
+ stkctr = &s->sess->stkctr[i];
+ if (!stkctr_entry(stkctr))
+ continue;
+ }
+
+ ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_HTTP_REQ_CNT);
+ if (ptr)
+ stktable_data_cast(ptr, http_req_cnt)++;
+
+ ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_HTTP_REQ_RATE);
+ if (ptr)
+ update_freq_ctr_period(&stktable_data_cast(ptr, http_req_rate),
+ stkctr->table->data_arg[STKTABLE_DT_HTTP_REQ_RATE].u, 1);
+ }
+}
+
+/* Increase the number of cumulated HTTP requests in the backend's tracked
+ * counters. We don't look up the session since it cannot happen in the backend.
+ */
+static inline void stream_inc_be_http_req_ctr(struct stream *s)
+{
+ void *ptr;
+ int i;
+
+ for (i = 0; i < MAX_SESS_STKCTR; i++) {
+ struct stkctr *stkctr = &s->stkctr[i];
+
+ if (!stkctr_entry(stkctr))
+ continue;
+
+ if (!(stkctr_flags(&s->stkctr[i]) & STKCTR_TRACK_BACKEND))
+ continue;
+
+ ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_HTTP_REQ_CNT);
+ if (ptr)
+ stktable_data_cast(ptr, http_req_cnt)++;
+
+ ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_HTTP_REQ_RATE);
+ if (ptr)
+ update_freq_ctr_period(&stktable_data_cast(ptr, http_req_rate),
+ stkctr->table->data_arg[STKTABLE_DT_HTTP_REQ_RATE].u, 1);
+ }
+}
+
+/* Increase the number of cumulated failed HTTP requests in the tracked
+ * counters. Only 4xx requests should be counted here so that we can
+ * distinguish between errors caused by client behaviour and other ones.
+ * Note that even 404 are interesting because they're generally caused by
+ * vulnerability scans.
+ */
+static inline void stream_inc_http_err_ctr(struct stream *s)
+{
+ void *ptr;
+ int i;
+
+ for (i = 0; i < MAX_SESS_STKCTR; i++) {
+ struct stkctr *stkctr = &s->stkctr[i];
+
+ if (!stkctr_entry(stkctr)) {
+ stkctr = &s->sess->stkctr[i];
+ if (!stkctr_entry(stkctr))
+ continue;
+ }
+
+ ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_HTTP_ERR_CNT);
+ if (ptr)
+ stktable_data_cast(ptr, http_err_cnt)++;
+
+ ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_HTTP_ERR_RATE);
+ if (ptr)
+ update_freq_ctr_period(&stktable_data_cast(ptr, http_err_rate),
+ stkctr->table->data_arg[STKTABLE_DT_HTTP_ERR_RATE].u, 1);
+ }
+}
+
+static inline void stream_add_srv_conn(struct stream *sess, struct server *srv)
+{
+ sess->srv_conn = srv;
+ LIST_ADD(&srv->actconns, &sess->by_srv);
+}
+
+static inline void stream_del_srv_conn(struct stream *sess)
+{
+ if (!sess->srv_conn)
+ return;
+
+ sess->srv_conn = NULL;
+ LIST_DEL(&sess->by_srv);
+}
+
+static inline void stream_init_srv_conn(struct stream *sess)
+{
+ sess->srv_conn = NULL;
+ LIST_INIT(&sess->by_srv);
+}
+
+static inline void stream_offer_buffers(void)
+{
+ int avail;
+
+ if (LIST_ISEMPTY(&buffer_wq))
+ return;
+
+ /* all streams will need 1 buffer, so we can stop waking up streams
+ * once we have enough of them to eat all the buffers. Note that we
+ * don't really know if they are streams or just other tasks, but
+ * that's a rough estimate. Similarly, for each cached event we'll need
+ * 1 buffer. If no buffer is currently used, always wake up the number
+ * of tasks we can offer a buffer based on what is allocated, and in
+ * any case at least one task per two reserved buffers.
+ */
+ avail = pool2_buffer->allocated - pool2_buffer->used - global.tune.reserved_bufs / 2;
+
+ if (avail > (int)run_queue)
+ __stream_offer_buffers(avail);
+}
+
+void service_keywords_register(struct action_kw_list *kw_list);
+
+#endif /* _PROTO_STREAM_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/stream_interface.h
+ * This file contains stream_interface function prototypes
+ *
+ * Copyright (C) 2000-2014 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_STREAM_INTERFACE_H
+#define _PROTO_STREAM_INTERFACE_H
+
+#include <stdlib.h>
+
+#include <common/config.h>
+#include <types/server.h>
+#include <types/stream.h>
+#include <types/stream_interface.h>
+#include <proto/applet.h>
+#include <proto/channel.h>
+#include <proto/connection.h>
+
+
+/* main event functions used to move data between sockets and buffers */
+int stream_int_check_timeouts(struct stream_interface *si);
+void stream_int_report_error(struct stream_interface *si);
+void stream_int_retnclose(struct stream_interface *si, const struct chunk *msg);
+int conn_si_send_proxy(struct connection *conn, unsigned int flag);
+void stream_sock_read0(struct stream_interface *si);
+
+extern struct si_ops si_embedded_ops;
+extern struct si_ops si_conn_ops;
+extern struct si_ops si_applet_ops;
+extern struct data_cb si_conn_cb;
+extern struct data_cb si_idle_conn_cb;
+
+struct appctx *stream_int_register_handler(struct stream_interface *si, struct applet *app);
+void si_applet_wake_cb(struct stream_interface *si);
+void stream_int_update(struct stream_interface *si);
+void stream_int_update_conn(struct stream_interface *si);
+void stream_int_update_applet(struct stream_interface *si);
+void stream_int_notify(struct stream_interface *si);
+
+/* returns the channel which receives data from this stream interface (input channel) */
+static inline struct channel *si_ic(struct stream_interface *si)
+{
+ if (si->flags & SI_FL_ISBACK)
+ return &LIST_ELEM(si, struct stream *, si[1])->res;
+ else
+ return &LIST_ELEM(si, struct stream *, si[0])->req;
+}
+
+/* returns the channel which feeds data to this stream interface (output channel) */
+static inline struct channel *si_oc(struct stream_interface *si)
+{
+ if (si->flags & SI_FL_ISBACK)
+ return &LIST_ELEM(si, struct stream *, si[1])->req;
+ else
+ return &LIST_ELEM(si, struct stream *, si[0])->res;
+}
+
+/* returns the buffer which receives data from this stream interface (input channel's buffer) */
+static inline struct buffer *si_ib(struct stream_interface *si)
+{
+ return si_ic(si)->buf;
+}
+
+/* returns the buffer which feeds data to this stream interface (output channel's buffer) */
+static inline struct buffer *si_ob(struct stream_interface *si)
+{
+ return si_oc(si)->buf;
+}
+
+/* returns the stream associated to a stream interface */
+static inline struct stream *si_strm(struct stream_interface *si)
+{
+ if (si->flags & SI_FL_ISBACK)
+ return LIST_ELEM(si, struct stream *, si[1]);
+ else
+ return LIST_ELEM(si, struct stream *, si[0]);
+}
+
+/* returns the task associated to this stream interface */
+static inline struct task *si_task(struct stream_interface *si)
+{
+ if (si->flags & SI_FL_ISBACK)
+ return LIST_ELEM(si, struct stream *, si[1])->task;
+ else
+ return LIST_ELEM(si, struct stream *, si[0])->task;
+}
+
+/* returns the stream interface on the other side. Used during forwarding. */
+static inline struct stream_interface *si_opposite(struct stream_interface *si)
+{
+ if (si->flags & SI_FL_ISBACK)
+ return &LIST_ELEM(si, struct stream *, si[1])->si[0];
+ else
+ return &LIST_ELEM(si, struct stream *, si[0])->si[1];
+}
+
+/* initializes a stream interface in the SI_ST_INI state. It's detached from
+ * any endpoint and only keeps its side which is expected to have already been
+ * set.
+ */
+static inline void si_reset(struct stream_interface *si)
+{
+ si->err_type = SI_ET_NONE;
+ si->conn_retries = 0; /* used for logging too */
+ si->exp = TICK_ETERNITY;
+ si->flags &= SI_FL_ISBACK;
+ si->end = NULL;
+ si->state = si->prev_state = SI_ST_INI;
+ si->ops = &si_embedded_ops;
+}
+
+/* sets the current and previous state of a stream interface to <state>. This
+ * is mainly used to create one in the established state on incoming
+ * connections.
+ */
+static inline void si_set_state(struct stream_interface *si, int state)
+{
+ si->state = si->prev_state = state;
+}
+
+/* only detaches the endpoint from the SI, which means that it's set to
+ * NULL and that ->ops is mapped to si_embedded_ops. The previous endpoint
+ * is returned.
+ */
+static inline enum obj_type *si_detach_endpoint(struct stream_interface *si)
+{
+ enum obj_type *prev = si->end;
+
+ si->end = NULL;
+ si->ops = &si_embedded_ops;
+ return prev;
+}
+
+/* Release the endpoint if it's a connection or an applet, then nullify it.
+ * Note: released connections are closed then freed.
+ */
+static inline void si_release_endpoint(struct stream_interface *si)
+{
+ struct connection *conn;
+ struct appctx *appctx;
+
+ if (!si->end)
+ return;
+
+ if ((conn = objt_conn(si->end))) {
+ LIST_DEL(&conn->list);
+ conn_force_close(conn);
+ conn_free(conn);
+ }
+ else if ((appctx = objt_appctx(si->end))) {
+ if (appctx->applet->release && si->state < SI_ST_DIS)
+ appctx->applet->release(appctx);
+ appctx_free(appctx); /* we share the connection pool */
+ }
+ si_detach_endpoint(si);
+}
+
+/* Turn an existing connection endpoint of stream interface <si> to idle mode,
+ * which means that the connection will be polled for incoming events and might
+ * be killed by the underlying I/O handler. If <pool> is not null, the
+ * connection will also be added at the head of this list. This connection
+ * remains assigned to the stream interface it is currently attached to.
+ */
+static inline void si_idle_conn(struct stream_interface *si, struct list *pool)
+{
+ struct connection *conn = __objt_conn(si->end);
+
+ if (pool)
+ LIST_ADD(pool, &conn->list);
+
+ conn_attach(conn, si, &si_idle_conn_cb);
+ conn_data_want_recv(conn);
+}
+
+/* Attach connection <conn> to the stream interface <si>. The stream interface
+ * is configured to work with a connection and the connection is configured
+ * with a stream interface data layer.
+ */
+static inline void si_attach_conn(struct stream_interface *si, struct connection *conn)
+{
+ si->ops = &si_conn_ops;
+ si->end = &conn->obj_type;
+ conn_attach(conn, si, &si_conn_cb);
+}
+
+/* Returns true if a connection is attached to the stream interface <si> and
+ * if this connection is ready.
+ */
+static inline int si_conn_ready(struct stream_interface *si)
+{
+ struct connection *conn = objt_conn(si->end);
+
+ return conn && conn_ctrl_ready(conn) && conn_xprt_ready(conn);
+}
+
+/* Attach appctx <appctx> to the stream interface <si>. The stream interface
+ * is configured to work with an applet context.
+ */
+static inline void si_attach_appctx(struct stream_interface *si, struct appctx *appctx)
+{
+ si->ops = &si_applet_ops;
+ si->end = &appctx->obj_type;
+ appctx->owner = si;
+}
+
+/* returns a pointer to the appctx being run in the SI or NULL if none */
+static inline struct appctx *si_appctx(struct stream_interface *si)
+{
+ return objt_appctx(si->end);
+}
+
+/* call the applet's release function if any. Needs to be called upon close() */
+static inline void si_applet_release(struct stream_interface *si)
+{
+ struct appctx *appctx;
+
+ appctx = si_appctx(si);
+ if (appctx && appctx->applet->release && si->state < SI_ST_DIS)
+ appctx->applet->release(appctx);
+}
+
+/* let an applet indicate that it wants to put some data into the input buffer */
+static inline void si_applet_want_put(struct stream_interface *si)
+{
+ si->flags |= SI_FL_WANT_PUT;
+}
+
+/* let an applet indicate that it wanted to put some data into the input buffer
+ * but it couldn't.
+ */
+static inline void si_applet_cant_put(struct stream_interface *si)
+{
+ si->flags |= SI_FL_WANT_PUT | SI_FL_WAIT_ROOM;
+}
+
+/* let an applet indicate that it doesn't want to put data into the input buffer */
+static inline void si_applet_stop_put(struct stream_interface *si)
+{
+ si->flags &= ~SI_FL_WANT_PUT;
+}
+
+/* let an applet indicate that it wants to get some data from the output buffer */
+static inline void si_applet_want_get(struct stream_interface *si)
+{
+ si->flags |= SI_FL_WANT_GET;
+}
+
+/* let an applet indicate that it wanted to get some data from the output buffer
+ * but it couldn't.
+ */
+static inline void si_applet_cant_get(struct stream_interface *si)
+{
+ si->flags |= SI_FL_WANT_GET | SI_FL_WAIT_DATA;
+}
+
+/* let an applet indicate that it doesn't want to get data from the input buffer */
+static inline void si_applet_stop_get(struct stream_interface *si)
+{
+ si->flags &= ~SI_FL_WANT_GET;
+}
+
+/* Try to allocate a new connection and assign it to the interface. If
+ * an endpoint was previously allocated, it is released first. The newly
+ * allocated connection is initialized, assigned to the stream interface,
+ * and returned.
+ */
+static inline struct connection *si_alloc_conn(struct stream_interface *si)
+{
+ struct connection *conn;
+
+ si_release_endpoint(si);
+
+ conn = conn_new();
+ if (conn)
+ si_attach_conn(si, conn);
+
+ return conn;
+}
+
+/* Release the interface's existing endpoint (connection or appctx) and
+ * allocate then initialize a new appctx which is assigned to the interface
+ * and returned. NULL may be returned upon memory shortage. Applet <applet>
+ * is assigned to the appctx, but it may be NULL.
+ */
+static inline struct appctx *si_alloc_appctx(struct stream_interface *si, struct applet *applet)
+{
+ struct appctx *appctx;
+
+ si_release_endpoint(si);
+ appctx = appctx_new(applet);
+ if (appctx)
+ si_attach_appctx(si, appctx);
+
+ return appctx;
+}
+
+/* Sends a shutr to the connection using the data layer */
+static inline void si_shutr(struct stream_interface *si)
+{
+ si->ops->shutr(si);
+}
+
+/* Sends a shutw to the connection using the data layer */
+static inline void si_shutw(struct stream_interface *si)
+{
+ si->ops->shutw(si);
+}
+
+/* Updates the stream interface and timers, then updates the data layer below */
+static inline void si_update(struct stream_interface *si)
+{
+ stream_int_update(si);
+ if (si->ops->update)
+ si->ops->update(si);
+}
+
+/* Calls chk_rcv on the connection using the data layer */
+static inline void si_chk_rcv(struct stream_interface *si)
+{
+ si->ops->chk_rcv(si);
+}
+
+/* Calls chk_snd on the connection using the data layer */
+static inline void si_chk_snd(struct stream_interface *si)
+{
+ si->ops->chk_snd(si);
+}
+
+/* Tries to establish the connection using the ctrl layer, or reuses it if already established */
+static inline int si_connect(struct stream_interface *si)
+{
+ struct connection *conn = objt_conn(si->end);
+ int ret = SF_ERR_NONE;
+
+ if (unlikely(!conn || !conn->ctrl || !conn->ctrl->connect))
+ return SF_ERR_INTERNAL;
+
+ if (!conn_ctrl_ready(conn) || !conn_xprt_ready(conn)) {
+ ret = conn->ctrl->connect(conn, !channel_is_empty(si_oc(si)), 0);
+ if (ret != SF_ERR_NONE)
+ return ret;
+
+ /* we need to be notified about connection establishment */
+ conn->flags |= CO_FL_WAKE_DATA;
+
+ /* we're in the process of establishing a connection */
+ si->state = SI_ST_CON;
+ }
+ else if (!channel_is_empty(si_oc(si))) {
+ /* reuse the existing connection, we'll have to send a
+ * request there.
+ */
+ conn_data_want_send(conn);
+
+ /* the connection is established */
+ si->state = SI_ST_EST;
+ }
+
+ /* needs src ip/port for logging */
+ if (si->flags & SI_FL_SRC_ADDR)
+ conn_get_from_addr(conn);
+
+ return ret;
+}
+
+/* for debugging, reports the stream interface state name */
+static inline const char *si_state_str(int state)
+{
+ switch (state) {
+ case SI_ST_INI: return "INI";
+ case SI_ST_REQ: return "REQ";
+ case SI_ST_QUE: return "QUE";
+ case SI_ST_TAR: return "TAR";
+ case SI_ST_ASS: return "ASS";
+ case SI_ST_CON: return "CON";
+ case SI_ST_CER: return "CER";
+ case SI_ST_EST: return "EST";
+ case SI_ST_DIS: return "DIS";
+ case SI_ST_CLO: return "CLO";
+ default: return "???";
+ }
+}
+
+#endif /* _PROTO_STREAM_INTERFACE_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/proto/task.h
+ * Functions for task management.
+ *
+ * Copyright (C) 2000-2010 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _PROTO_TASK_H
+#define _PROTO_TASK_H
+
+
+#include <sys/time.h>
+
+#include <common/config.h>
+#include <common/memory.h>
+#include <common/mini-clist.h>
+#include <common/standard.h>
+#include <common/ticks.h>
+#include <eb32tree.h>
+
+#include <types/global.h>
+#include <types/task.h>
+
+/* Principle of the wait queue.
+ *
+ * We want to be able to tell whether an expiration date is before or after the
+ * current time <now>. We KNOW that expiration dates are never too far apart,
+ * because they are measured in ticks (milliseconds). We also know that almost
+ * all dates will be in the future, and that a very small part of them will be
+ * in the past, they are the ones which have expired since last time we checked
+ * them. Using ticks, we know if a date is in the future or in the past, but we
+ * cannot use that to store sorted information because that reference changes
+ * all the time.
+ *
+ * We'll use the fact that the time wraps to sort timers. Timers above <now>
+ * are in the future, timers below <now> are in the past. Here, "above" and
+ * "below" are to be considered modulo 2^31.
+ *
+ * Timers are stored sorted in an ebtree. We use the new ability for ebtrees to
+ * lookup values starting from X to only expire tasks between <now> - 2^31 and
+ * <now>. If the end of the tree is reached while walking over it, we simply
+ * loop back to the beginning. That way, we have no problem keeping sorted
+ * wrapping timers in a tree, between (now - 24 days) and (now + 24 days). The
+ * keys in the tree always reflect their real position, none can be infinite.
+ * This reduces the number of checks to be performed.
+ *
+ * Another nice optimisation is to allow a timer to stay at an old place in the
+ * queue as long as it's not further than the real expiration date. That way,
+ * we use the tree as a placeholder for a lower bound on the real expiration
+ * date. Since we have a very low chance of hitting a timeout anyway, we can
+ * bounce the nodes to their right place when we scan the tree if we encounter
+ * a misplaced node once in a while. This even allows us not to remove the
+ * infinite timers from the wait queue.
+ *
+ * So, to summarize, we have :
+ * - node->key always defines current position in the wait queue
+ * - timer is the real expiration date (possibly infinite)
+ * - node->key is always before or equal to timer
+ *
+ * The run queue works similarly to the wait queue except that the current date
+ * is replaced by an insertion counter which can also wrap without any problem.
+ */
+
+/* The farthest we can look back in a timer tree */
+#define TIMER_LOOK_BACK (1U << 31)
+
+/* a few exported variables */
+extern unsigned int nb_tasks; /* total number of tasks */
+extern unsigned int run_queue; /* run queue size */
+extern unsigned int run_queue_cur;
+extern unsigned int nb_tasks_cur;
+extern unsigned int niced_tasks; /* number of niced tasks in the run queue */
+extern struct pool_head *pool2_task;
+extern struct eb32_node *last_timer; /* optimization: last queued timer */
+extern struct eb32_node *rq_next; /* optimization: next task except if delete/insert */
+
+/* returns non-zero if the task is in the run queue, otherwise zero */
+static inline int task_in_rq(struct task *t)
+{
+ return t->rq.node.leaf_p != NULL;
+}
+
+/* returns non-zero if the task is in the wait queue, otherwise zero */
+static inline int task_in_wq(struct task *t)
+{
+ return t->wq.node.leaf_p != NULL;
+}
+
+/* puts the task <t> in run queue with reason flags <f>, and returns <t> */
+struct task *__task_wakeup(struct task *t);
+static inline struct task *task_wakeup(struct task *t, unsigned int f)
+{
+ if (likely(!task_in_rq(t)))
+ __task_wakeup(t);
+ t->state |= f;
+ return t;
+}
+
+/*
+ * Unlink the task from the wait queue, and possibly update the last_timer
+ * pointer. A pointer to the task itself is returned. The task *must* already
+ * be in the wait queue before calling this function. If unsure, use the safer
+ * task_unlink_wq() function.
+ */
+static inline struct task *__task_unlink_wq(struct task *t)
+{
+ eb32_delete(&t->wq);
+ if (last_timer == &t->wq)
+ last_timer = NULL;
+ return t;
+}
+
+static inline struct task *task_unlink_wq(struct task *t)
+{
+ if (likely(task_in_wq(t)))
+ __task_unlink_wq(t);
+ return t;
+}
+
+/*
+ * Unlink the task from the run queue. The run_queue size and number of niced
+ * tasks are updated too. A pointer to the task itself is returned. The task
+ * *must* already be in the run queue before calling this function. If unsure,
+ * use the safer task_unlink_rq() function. Note that the pointer to the next
+ * run queue entry is neither checked nor updated.
+ */
+static inline struct task *__task_unlink_rq(struct task *t)
+{
+ eb32_delete(&t->rq);
+ run_queue--;
+ if (likely(t->nice))
+ niced_tasks--;
+ return t;
+}
+
+/* This function unlinks task <t> from the run queue if it is in it. It also
+ * takes care of updating the next run queue task if it was this task.
+ */
+static inline struct task *task_unlink_rq(struct task *t)
+{
+ if (likely(task_in_rq(t))) {
+ if (&t->rq == rq_next)
+ rq_next = eb32_next(rq_next);
+ __task_unlink_rq(t);
+ }
+ return t;
+}
+
+/*
+ * Unlinks the task and adjusts run queue stats.
+ * A pointer to the task itself is returned.
+ */
+static inline struct task *task_delete(struct task *t)
+{
+ task_unlink_wq(t);
+ task_unlink_rq(t);
+ return t;
+}
+
+/*
+ * Initialize a new task. The bare minimum is performed (queue pointers and
+ * state). The task is returned. This function should not be used outside of
+ * task_new().
+ */
+static inline struct task *task_init(struct task *t)
+{
+ t->wq.node.leaf_p = NULL;
+ t->rq.node.leaf_p = NULL;
+ t->state = TASK_SLEEPING;
+ t->nice = 0;
+ t->calls = 0;
+ return t;
+}
+
+/*
+ * Allocate and initialise a new task. The new task is returned, or NULL in
+ * case of lack of memory. The task count is incremented. Tasks should only
+ * be allocated this way, and must be freed using task_free().
+ */
+static inline struct task *task_new(void)
+{
+ struct task *t = pool_alloc2(pool2_task);
+ if (t) {
+ nb_tasks++;
+ task_init(t);
+ }
+ return t;
+}
+
+/*
+ * Free a task. Its context must have been freed since it will be lost.
+ * The task count is decremented.
+ */
+static inline void task_free(struct task *t)
+{
+ pool_free2(pool2_task, t);
+ if (unlikely(stopping))
+ pool_flush2(pool2_task);
+ nb_tasks--;
+}
+
+/* Place <task> into the wait queue, where it may already be. If the expiration
+ * timer is infinite, do nothing and rely on wake_expired_tasks() to clean up.
+ */
+void __task_queue(struct task *task);
+static inline void task_queue(struct task *task)
+{
+ /* If we already have a place in the wait queue no later than the
+ * timeout we're trying to set, we'll stay there, because it is very
+ * unlikely that we will reach the timeout anyway. If the timeout
+ * has been disabled, it's useless to leave the queue as well. We'll
+ * rely on wake_expired_tasks() to catch the node and move it to the
+ * proper place should it ever happen. Finally we only add the task
+ * to the queue if it was not there or if it was further than what
+ * we want.
+ */
+ if (!tick_isset(task->expire))
+ return;
+
+ if (!task_in_wq(task) || tick_is_lt(task->expire, task->wq.key))
+ __task_queue(task);
+}
+
+/* Ensure <task> will be woken up at most at <when>. If the task is already in
+ * the run queue (but not running), nothing is done. It may be used that way
+ * with a delay: task_schedule(task, tick_add(now_ms, delay));
+ */
+static inline void task_schedule(struct task *task, int when)
+{
+ if (task_in_rq(task))
+ return;
+
+ if (task_in_wq(task))
+ when = tick_first(when, task->expire);
+
+ task->expire = when;
+ if (!task_in_wq(task) || tick_is_lt(task->expire, task->wq.key))
+ __task_queue(task);
+}
+
+/*
+ * This does 3 things :
+ * - wake up all expired tasks
+ * - call all runnable tasks
+ * - return the date of next event in <next> or eternity.
+ */
+
+void process_runnable_tasks();
+
+/*
+ * Extracts all expired timers from the timer queue and wakes up all
+ * associated tasks. Returns the date of the next event (or eternity).
+ */
+int wake_expired_tasks();
+
+/* Performs minimal initializations; returns 0 in case of error, 1 if OK. */
+int init_task();
+
+#endif /* _PROTO_TASK_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ include/proto/template.h
+ This file serves as a template for future include files.
+
+ Copyright (C) 2000-2006 Willy Tarreau - w@1wt.eu
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation, version 2.1
+ exclusively.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+*/
+
+#ifndef _PROTO_TEMPLATE_H
+#define _PROTO_TEMPLATE_H
+
+#include <common/config.h>
+#include <types/template.h>
+
+
+#endif /* _PROTO_TEMPLATE_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+#ifndef _PROTO_VARS_H
+#define _PROTO_VARS_H
+
+#include <types/vars.h>
+
+void vars_init(struct vars *vars, enum vars_scope scope);
+void vars_prune(struct vars *vars, struct stream *strm);
+void vars_prune_per_sess(struct vars *vars);
+int vars_get_by_name(const char *name, size_t len, struct stream *strm, struct sample *smp);
+void vars_set_by_name(const char *name, size_t len, struct stream *strm, struct sample *smp);
+int vars_get_by_desc(const struct var_desc *var_desc, struct stream *strm, struct sample *smp);
+int vars_check_arg(struct arg *arg, char **err);
+
+#endif
--- /dev/null
+/*
+ * include/types/acl.h
+ * This file provides structures and types for ACLs.
+ *
+ * Copyright (C) 2000-2012 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_ACL_H
+#define _TYPES_ACL_H
+
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/mini-clist.h>
+
+#include <types/arg.h>
+#include <types/auth.h>
+#include <types/pattern.h>
+#include <types/proxy.h>
+#include <types/server.h>
+
+#include <ebmbtree.h>
+
+/* ACL test result.
+ *
+ * We're using a 3-state matching system :
+ * - PASS : at least one pattern already matches
+ * - MISS : some data is missing to decide if some rules may finally match.
+ * - FAIL : no pattern may ever match
+ *
+ * We assign values 0, 1 and 3 to FAIL, MISS and PASS respectively, so that we
+ * can make use of standard arithmetic for the truth tables below:
+ *
+ * x | !x x&y | F(0) | M(1) | P(3) x|y | F(0) | M(1) | P(3)
+ * ------+----- -----+------+------+----- -----+------+------+-----
+ * F(0) | P(3) F(0)| F(0) | F(0) | F(0) F(0)| F(0) | M(1) | P(3)
+ * M(1) | M(1) M(1)| F(0) | M(1) | M(1) M(1)| M(1) | M(1) | P(3)
+ * P(3) | F(0) P(3)| F(0) | M(1) | P(3) P(3)| P(3) | P(3) | P(3)
+ *
+ * neg(x) = (3 >> x) and(x,y) = (x & y) or(x,y) = (x | y)
+ *
+ * For efficiency, the ACL return flags are directly mapped from the pattern
+ * match flags. See include/pattern.h for existing values.
+ */
+enum acl_test_res {
+ ACL_TEST_FAIL = 0, /* test failed */
+ ACL_TEST_MISS = 1, /* test may pass with more info */
+ ACL_TEST_PASS = 3, /* test passed */
+};
+
+/* Condition polarity. It makes it easier for any option to choose between
+ * IF/UNLESS if it can store that information within the condition itself.
+ * Those should be interpreted as "IF/UNLESS result == PASS".
+ */
+enum acl_cond_pol {
+ ACL_COND_NONE, /* no polarity set yet */
+ ACL_COND_IF, /* positive condition (after 'if') */
+ ACL_COND_UNLESS, /* negative condition (after 'unless') */
+};
+
+/* some dummy declarations to silence the compiler */
+struct proxy;
+struct stream;
+
+/*
+ * ACL keyword: associates a keyword with its parser, the method used to retrieve the value, and its testers.
+ */
+/*
+ * NOTE:
+ * The 'parse' function is called to parse words in the configuration. It must
+ * return the number of valid words read. 0 = error. The 'opaque' argument may
+ * be used by functions which need to maintain a context between consecutive
+ * values. It is initialized to zero before the first call, and passed along
+ * successive calls.
+ */
+
+struct acl_expr;
+struct acl_keyword {
+ const char *kw;
+ char *fetch_kw;
+ int match_type; /* contains one of PAT_MATCH_* */
+ int (*parse)(const char *text, struct pattern *pattern, int flags, char **err);
+ int (*index)(struct pattern_expr *expr, struct pattern *pattern, char **err);
+ void (*delete)(struct pattern_expr *expr, struct pat_ref_elt *);
+ void (*prune)(struct pattern_expr *expr);
+ struct pattern *(*match)(struct sample *smp, struct pattern_expr *expr, int fill);
+ /* must be after the config params */
+ struct sample_fetch *smp; /* the sample fetch we depend on */
+};
+
+/*
+ * A keyword list. It is a NULL-terminated array of keywords. It embeds a
+ * struct list in order to be linked to other lists, allowing it to easily
+ * be declared where it is needed, and linked without duplicating data nor
+ * allocating memory.
+ */
+struct acl_kw_list {
+ struct list list;
+ struct acl_keyword kw[VAR_ARRAY];
+};
+
+/*
+ * Description of an ACL expression.
+ * The expression is part of a list. It contains pointers to the keyword, the
+ * sample fetch descriptor which defaults to the keyword's, and the associated
+ * pattern matching. The structure is organized so that the hot parts are
+ * grouped together in order to optimize caching.
+ */
+struct acl_expr {
+ struct sample_expr *smp; /* the sample expression we depend on */
+ struct pattern_head pat; /* the pattern matching expression */
+ struct list list; /* chaining */
+ const char *kw; /* points to the ACL kw's name or fetch's name (must not be freed) */
+};
+
+/* The acl will be linked to from the proxy where it is declared */
+struct acl {
+ struct list list; /* chaining */
+ char *name; /* acl name */
+ struct list expr; /* list of acl_exprs */
+ int cache_idx; /* ACL index in cache */
+ unsigned int use; /* or'ed bit mask of all acl_expr's SMP_USE_* */
+ unsigned int val; /* or'ed bit mask of all acl_expr's SMP_VAL_* */
+};
+
+/* the condition will be linked to from an action in a proxy */
+struct acl_term {
+ struct list list; /* chaining */
+ struct acl *acl; /* acl pointed to by this term */
+ int neg; /* 1 if the ACL result must be negated */
+};
+
+struct acl_term_suite {
+ struct list list; /* chaining of term suites */
+ struct list terms; /* list of acl_terms */
+};
+
+struct acl_cond {
+ struct list list; /* Some specific tests may use multiple conditions */
+ struct list suites; /* list of acl_term_suites */
+ enum acl_cond_pol pol; /* polarity: ACL_COND_IF / ACL_COND_UNLESS */
+ unsigned int use; /* or'ed bit mask of all suites' SMP_USE_* */
+ unsigned int val; /* or'ed bit mask of all suites' SMP_VAL_* */
+ const char *file; /* config file where the condition is declared */
+ int line; /* line in the config file where the condition is declared */
+};
+
+#endif /* _TYPES_ACL_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/action.h
+ * This file contains actions definitions.
+ *
+ * Copyright (C) 2000-2010 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_ACTION_H
+#define _TYPES_ACTION_H
+
+#include <common/regex.h>
+
+#include <types/applet.h>
+#include <types/stick_table.h>
+
+enum act_from {
+ ACT_F_TCP_REQ_CON, /* tcp-request connection */
+ ACT_F_TCP_REQ_CNT, /* tcp-request content */
+ ACT_F_TCP_RES_CNT, /* tcp-response content */
+ ACT_F_HTTP_REQ, /* http-request */
+ ACT_F_HTTP_RES, /* http-response */
+};
+
+enum act_return {
+ ACT_RET_CONT, /* continue processing. */
+ ACT_RET_STOP, /* stop processing. */
+ ACT_RET_YIELD, /* call me again. */
+ ACT_RET_ERR, /* processing error. */
+};
+
+enum act_parse_ret {
+ ACT_RET_PRS_OK, /* continue processing. */
+ ACT_RET_PRS_ERR, /* abort processing. */
+};
+
+/* flags passed to custom actions */
+enum act_flag {
+ ACT_FLAG_NONE = 0x00000000, /* no flag */
+ ACT_FLAG_FINAL = 0x00000001, /* last call, cannot yield */
+ ACT_FLAG_FIRST = 0x00000002, /* first call for this action */
+};
+
+enum act_name {
+ ACT_CUSTOM = 0,
+
+ /* common action */
+ ACT_ACTION_ALLOW,
+ ACT_ACTION_DENY,
+
+ /* common HTTP actions. */
+ ACT_HTTP_ADD_HDR,
+ ACT_HTTP_REPLACE_HDR,
+ ACT_HTTP_REPLACE_VAL,
+ ACT_HTTP_SET_HDR,
+ ACT_HTTP_DEL_HDR,
+ ACT_HTTP_REDIR,
+ ACT_HTTP_SET_NICE,
+ ACT_HTTP_SET_LOGL,
+ ACT_HTTP_SET_TOS,
+ ACT_HTTP_SET_MARK,
+ ACT_HTTP_ADD_ACL,
+ ACT_HTTP_DEL_ACL,
+ ACT_HTTP_DEL_MAP,
+ ACT_HTTP_SET_MAP,
+
+ /* http request actions. */
+ ACT_HTTP_REQ_TARPIT,
+ ACT_HTTP_REQ_AUTH,
+ ACT_HTTP_REQ_SET_SRC,
+
+ /* tcp actions */
+ ACT_TCP_EXPECT_PX,
+ ACT_TCP_CLOSE, /* close at the sender's */
+ ACT_TCP_CAPTURE, /* capture a fetched sample */
+
+ /* track stick counters */
+ ACT_ACTION_TRK_SC0,
+ /* SC1, SC2, ... SCn */
+ ACT_ACTION_TRK_SCMAX = ACT_ACTION_TRK_SC0 + MAX_SESS_STKCTR - 1,
+};
+
+struct act_rule {
+ struct list list;
+ struct acl_cond *cond; /* acl condition to meet */
+ enum act_name action; /* ACT_ACTION_* */
+ enum act_from from; /* ACT_F_* */
+ short deny_status; /* HTTP status to return to user when denying */
+ enum act_return (*action_ptr)(struct act_rule *rule, struct proxy *px, /* ptr to custom action */
+ struct session *sess, struct stream *s, int flags);
+ struct action_kw *kw;
+ struct applet applet; /* used for the applet registration. */
+ union {
+ struct {
+ char *realm;
+ } auth; /* arg used by "auth" */
+ struct {
+ char *name; /* header name */
+ int name_len; /* header name's length */
+ struct list fmt; /* log-format compatible expression */
+ struct my_regex re; /* used by replace-header and replace-value */
+ } hdr_add; /* args used by "add-header" and "set-header" */
+ struct redirect_rule *redir; /* redirect rule or "http-request redirect" */
+ int nice; /* nice value for ACT_HTTP_SET_NICE */
+ int loglevel; /* log-level value for ACT_HTTP_SET_LOGL */
+ int tos; /* tos value for ACT_HTTP_SET_TOS */
+ int mark; /* nfmark value for ACT_HTTP_SET_MARK */
+ struct {
+ char *ref; /* MAP or ACL file name to update */
+ struct list key; /* pattern to retrieve MAP or ACL key */
+ struct list value; /* pattern to retrieve MAP value */
+ } map;
+ struct sample_expr *expr;
+ struct {
+ struct list logfmt;
+ int action;
+ } http;
+ struct {
+ struct sample_expr *expr; /* expression used as the key */
+ struct cap_hdr *hdr; /* the capture storage */
+ } cap;
+ struct {
+ unsigned int code; /* HTTP status code */
+ } status;
+ struct {
+ struct sample_expr *expr;
+ int idx;
+ } capid;
+ struct hlua_rule *hlua_rule;
+ struct {
+ struct sample_expr *expr;
+ const char *name;
+ enum vars_scope scope;
+ } vars;
+ struct {
+ int sc;
+ } gpc;
+ struct {
+ int sc;
+ long long int value;
+ } gpt;
+ struct track_ctr_prm trk_ctr;
+ struct {
+ void *p[4];
+ } act; /* generic pointers to be used by custom actions */
+ } arg; /* arguments used by some actions */
+};
+
+struct action_kw {
+ const char *kw;
+ enum act_parse_ret (*parse)(const char **args, int *cur_arg, struct proxy *px,
+ struct act_rule *rule, char **err);
+ int match_pfx;
+ void *private;
+};
+
+struct action_kw_list {
+ struct list list;
+ struct action_kw kw[VAR_ARRAY];
+};
+
+#endif /* _TYPES_ACTION_H */
--- /dev/null
+/*
+ * include/types/applet.h
+ * This file describes the applet struct and associated constants.
+ *
+ * Copyright (C) 2000-2015 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_APPLET_H
+#define _TYPES_APPLET_H
+
+#include <types/hlua.h>
+#include <types/obj_type.h>
+#include <types/proxy.h>
+#include <types/stream.h>
+#include <common/chunk.h>
+#include <common/config.h>
+
+struct appctx;
+
+/* Applet descriptor */
+struct applet {
+ enum obj_type obj_type; /* object type = OBJ_TYPE_APPLET */
+ /* 3 unused bytes here */
+ char *name; /* applet's name to report in logs */
+ int (*init)(struct appctx *, struct proxy *px, struct stream *strm); /* callback to init resources, may be NULL.
+ returns 1 if OK, 0 if an error occurs, -1 if data is missing. */
+ void (*fct)(struct appctx *); /* internal I/O handler, may never be NULL */
+ void (*release)(struct appctx *); /* callback to release resources, may be NULL */
+ unsigned int timeout; /* execution timeout. */
+};
+
+/* Context of a running applet. */
+struct appctx {
+ struct list runq; /* chaining in the applet run queue */
+ enum obj_type obj_type; /* OBJ_TYPE_APPCTX */
+ /* 3 unused bytes here */
+ unsigned int st0; /* CLI state for stats, session state for peers */
+ unsigned int st1; /* prompt for stats, session error for peers */
+ unsigned int st2; /* output state for stats, unused by peers */
+ struct applet *applet; /* applet this context refers to */
+ void *owner; /* pointer to upper layer's entity (eg: stream interface) */
+ struct act_rule *rule; /* rule associated with the applet. */
+
+ union {
+ struct {
+ struct proxy *px;
+ struct server *sv;
+ void *l;
+ int scope_str; /* limit scope to a frontend/backend substring */
+ int scope_len; /* length of the string above in the buffer */
+ int px_st; /* STAT_PX_ST* */
+ unsigned int flags; /* STAT_* */
+ int iid, type, sid; /* proxy id, type and service id if bounding of stats is enabled */
+ int st_code; /* the status code returned by an action */
+ } stats;
+ struct {
+ struct bref bref; /* back-reference from the session being dumped */
+ void *target; /* session we want to dump, or NULL for all */
+ unsigned int uid; /* if non-null, the uniq_id of the session being dumped */
+ int section; /* section of the session being dumped */
+ int pos; /* last position of the current session's buffer */
+ } sess;
+ struct {
+ int iid; /* if >= 0, ID of the proxy to filter on */
+ struct proxy *px; /* current proxy being dumped, NULL = not started yet. */
+ unsigned int buf; /* buffer being dumped, 0 = req, 1 = rep */
+ unsigned int sid; /* session ID of error being dumped */
+ int ptr; /* <0: headers, >=0 : text pointer to restart from */
+ int bol; /* pointer to beginning of current line */
+ } errors;
+ struct {
+ void *target; /* table we want to dump, or NULL for all */
+ struct proxy *proxy; /* table being currently dumped (first if NULL) */
+ struct stksess *entry; /* last entry we were trying to dump (or first if NULL) */
+ long long value; /* value to compare against */
+ signed char data_type; /* type of data to compare, or -1 if none */
+ signed char data_op; /* operator (STD_OP_*) when data_type set */
+ } table;
+ struct {
+ const char *msg; /* pointer to a persistent message to be returned in PRINT state */
+ char *err; /* pointer to a 'must free' message to be returned in PRINT_FREE state */
+ } cli;
+ struct {
+ void *ptr; /* multi-purpose pointer for peers */
+ } peers;
+ struct {
+ unsigned int display_flags;
+ struct pat_ref *ref;
+ struct pat_ref_elt *elt;
+ struct pattern_expr *expr;
+ struct chunk chunk;
+ } map;
+#if (defined SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB && TLS_TICKETS_NO > 0)
+ struct {
+ struct tls_keys_ref *ref;
+ } tlskeys;
+#endif
+ struct {
+ int connected;
+ struct hlua_socket *socket;
+ struct list wake_on_read;
+ struct list wake_on_write;
+ } hlua;
+ struct {
+ struct hlua hlua;
+ int flags;
+ struct task *task;
+ } hlua_apptcp;
+ struct {
+ struct hlua hlua;
+ int left_bytes; /* The max amount of bytes that we can read. */
+ int flags;
+ int status;
+ struct task *task;
+ } hlua_apphttp;
+ struct {
+ struct dns_resolvers *ptr;
+ } resolvers;
+ struct {
+ struct proxy *backend;
+ } server_state;
+ } ctx; /* used by stats I/O handlers to dump the stats */
+};
+
+#endif /* _TYPES_APPLET_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/arg.h
+ * This file contains structure declarations for generic argument parsing.
+ *
+ * Copyright 2012 Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_ARG_H
+#define _TYPES_ARG_H
+
+#include <sys/socket.h>
+#include <netinet/in.h>
+
+#include <common/chunk.h>
+#include <common/mini-clist.h>
+
+#include <types/vars.h>
+
+/* encoding of each arg type : up to 31 types are supported */
+#define ARGT_BITS 5
+#define ARGT_NBTYPES (1 << ARGT_BITS)
+#define ARGT_MASK (ARGT_NBTYPES - 1)
+
+/* encoding of the arg count : up to 5 args are possible. 4 bits are left
+ * unused at the top.
+ */
+#define ARGM_MASK ((1 << ARGM_BITS) - 1)
+#define ARGM_BITS 3
+#define ARGM_NBARGS (32 - ARGM_BITS) / ARGT_BITS
+
+enum {
+ ARGT_STOP = 0, /* end of the arg list */
+ ARGT_SINT, /* signed 64 bit integer. */
+ ARGT_STR, /* string */
+ ARGT_IPV4, /* an IPv4 address */
+ ARGT_MSK4, /* an IPv4 address mask (integer or dotted), stored as ARGT_IPV4 */
+ ARGT_IPV6, /* an IPv6 address */
+ ARGT_MSK6, /* an IPv6 address mask (integer or dotted), stored as ARGT_IPV6 */
+ ARGT_TIME, /* a delay in ms by default, stored as ARGT_SINT */
+ ARGT_SIZE, /* a size in bytes by default, stored as ARGT_SINT */
+ ARGT_FE, /* a pointer to a frontend only */
+ ARGT_BE, /* a pointer to a backend only */
+ ARGT_TAB, /* a pointer to a stick table */
+ ARGT_SRV, /* a pointer to a server */
+ ARGT_USR, /* a pointer to a user list */
+ ARGT_MAP, /* a pointer to a map descriptor */
+ ARGT_REG, /* a pointer to a regex */
+ ARGT_VAR, /* contains a variable description. */
+ /* please update arg_type_names[] in args.c if you add entries here */
+};
+
+/* context where arguments are used, in order to help error reporting */
+enum {
+ ARGC_ACL = 0, /* ACL */
+ ARGC_STK, /* sticking rule */
+ ARGC_TRK, /* tracking rule */
+ ARGC_LOG, /* log-format */
+ ARGC_LOGSD, /* log-format-sd */
+ ARGC_HRQ, /* http-request */
+ ARGC_HRS, /* http-response */
+ ARGC_UIF, /* unique-id-format */
+ ARGC_RDR, /* redirect */
+ ARGC_CAP, /* capture rule */
+ ARGC_SRV, /* server line */
+};
+
+/* flags used when compiling and executing regex */
+#define ARGF_REG_ICASE 1
+#define ARGF_REG_GLOB 2
+
+/* some types that are externally defined */
+struct proxy;
+struct server;
+struct userlist;
+struct my_regex;
+
+union arg_data {
+ long long int sint;
+ struct chunk str;
+ struct in_addr ipv4;
+ struct in6_addr ipv6;
+ struct proxy *prx; /* used for fe, be, tables */
+ struct server *srv;
+ struct userlist *usr;
+ struct map_descriptor *map;
+ struct my_regex *reg;
+ struct var_desc var;
+};
+
+struct arg {
+ unsigned char type; /* argument type, ARGT_* */
+ unsigned char unresolved; /* argument contains a string in <str> that must be resolved and freed */
+ unsigned char type_flags; /* type-specific extra flags (eg: case sensitivity for regex), ARGF_* */
+ union arg_data data; /* argument data */
+};
+
+/* arg lists are used to store information about arguments that could not be
+ * resolved when parsing the configuration. The head is an arg_list which
+ * serves as a template to create new entries. Nothing here is allocated,
+ * so plain copies are OK.
+ */
+struct arg_list {
+ struct list list; /* chaining with other arg_list, or list head */
+ struct arg *arg; /* pointer to the arg, NULL on list head */
+ int arg_pos; /* argument position */
+ int ctx; /* context where the arg is used (ARGC_*) */
+ const char *kw; /* keyword making use of these args */
+ const char *conv; /* conv keyword when in conv, otherwise NULL */
+ const char *file; /* file name where the args are referenced */
+ int line; /* line number where the args are referenced */
+};
+
+#endif /* _TYPES_ARG_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * User authentication & authorization.
+ *
+ * Copyright 2010 Krzysztof Piotr Oledzki <ole@ans.pl>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#ifndef _TYPES_AUTH_H
+#define _TYPES_AUTH_H
+
+#include <common/config.h>
+#include <common/mini-clist.h>
+
+
+#define AU_O_INSECURE 0x00000001 /* insecure, unencrypted password */
+
+struct auth_groups {
+ struct auth_groups *next;
+ char *name;
+ char *groupusers; /* Just used during the configuration parsing. */
+};
+
+struct auth_groups_list {
+ struct auth_groups_list *next;
+ struct auth_groups *group;
+};
+
+struct auth_users {
+ struct auth_users *next;
+ unsigned int flags;
+ char *user, *pass;
+ union {
+ char *groups_names; /* Just used during the configuration parsing. */
+ struct auth_groups_list *groups;
+ } u;
+};
+
+struct userlist {
+ struct userlist *next;
+ char *name;
+ struct auth_users *users;
+ struct auth_groups *groups;
+};
+
+#endif /* _TYPES_AUTH_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
+
--- /dev/null
+/*
+ * include/types/backend.h
+ * This file assembles definitions for backends
+ *
+ * Copyright (C) 2000-2012 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_BACKEND_H
+#define _TYPES_BACKEND_H
+
+#include <common/config.h>
+#include <types/lb_chash.h>
+#include <types/lb_fas.h>
+#include <types/lb_fwlc.h>
+#include <types/lb_fwrr.h>
+#include <types/lb_map.h>
+#include <types/server.h>
+
+/* Parameters for lbprm.algo */
+
+/* Lower bits define the kind of load balancing method, which means the type of
+ * algorithm, and which criterion it is based on. For this reason, those bits
+ * also include information about dependencies, so that the config parser can
+ * detect incompatibilities.
+ */
+
+/* LB parameters are on the lower 8 bits. Depends on the LB kind. */
+
+/* BE_LB_HASH_* is used with BE_LB_KIND_HI */
+#define BE_LB_HASH_SRC 0x00000 /* hash source IP */
+#define BE_LB_HASH_URI 0x00001 /* hash HTTP URI */
+#define BE_LB_HASH_PRM 0x00002 /* hash HTTP URL parameter */
+#define BE_LB_HASH_HDR 0x00003 /* hash HTTP header value */
+#define BE_LB_HASH_RDP 0x00004 /* hash RDP cookie value */
+
+/* BE_LB_RR_* is used with BE_LB_KIND_RR */
+#define BE_LB_RR_DYN 0x00000 /* dynamic round robin (default) */
+#define BE_LB_RR_STATIC 0x00001 /* static round robin */
+
+/* BE_LB_CB_* is used with BE_LB_KIND_CB */
+#define BE_LB_CB_LC 0x00000 /* least-connections */
+#define BE_LB_CB_FAS 0x00001 /* first available server (opposite of leastconn) */
+
+#define BE_LB_PARM 0x000FF /* mask to get/clear the LB param */
+
+/* Required input(s) */
+#define BE_LB_NEED_NONE 0x00000 /* no input needed */
+#define BE_LB_NEED_ADDR 0x00100 /* only source address needed */
+#define BE_LB_NEED_DATA 0x00200 /* some payload is needed */
+#define BE_LB_NEED_HTTP 0x00400 /* an HTTP request is needed */
+/* not used: 0x0800 */
+#define BE_LB_NEED 0x00F00 /* mask to get/clear dependencies */
+
+/* Algorithm */
+#define BE_LB_KIND_NONE 0x00000 /* algorithm not set */
+#define BE_LB_KIND_RR 0x01000 /* round-robin */
+#define BE_LB_KIND_CB 0x02000 /* connection-based */
+#define BE_LB_KIND_HI 0x03000 /* hash of input (see hash inputs above) */
+#define BE_LB_KIND 0x07000 /* mask to get/clear LB algorithm */
+
+/* All known variants of load balancing algorithms. These can be cleared using
+ * the BE_LB_ALGO mask. For a check, using BE_LB_KIND is preferred.
+ */
+#define BE_LB_ALGO_NONE (BE_LB_KIND_NONE | BE_LB_NEED_NONE) /* not defined */
+#define BE_LB_ALGO_RR (BE_LB_KIND_RR | BE_LB_NEED_NONE) /* round robin */
+#define BE_LB_ALGO_LC (BE_LB_KIND_CB | BE_LB_NEED_NONE | BE_LB_CB_LC) /* least connections */
+#define BE_LB_ALGO_FAS (BE_LB_KIND_CB | BE_LB_NEED_NONE | BE_LB_CB_FAS) /* first available server */
+#define BE_LB_ALGO_SRR (BE_LB_KIND_RR | BE_LB_NEED_NONE | BE_LB_RR_STATIC) /* static round robin */
+#define BE_LB_ALGO_SH (BE_LB_KIND_HI | BE_LB_NEED_ADDR | BE_LB_HASH_SRC) /* hash: source IP */
+#define BE_LB_ALGO_UH (BE_LB_KIND_HI | BE_LB_NEED_HTTP | BE_LB_HASH_URI) /* hash: HTTP URI */
+#define BE_LB_ALGO_PH (BE_LB_KIND_HI | BE_LB_NEED_HTTP | BE_LB_HASH_PRM) /* hash: HTTP URL parameter */
+#define BE_LB_ALGO_HH (BE_LB_KIND_HI | BE_LB_NEED_HTTP | BE_LB_HASH_HDR) /* hash: HTTP header value */
+#define BE_LB_ALGO_RCH (BE_LB_KIND_HI | BE_LB_NEED_DATA | BE_LB_HASH_RDP) /* hash: RDP cookie value */
+#define BE_LB_ALGO (BE_LB_KIND | BE_LB_NEED | BE_LB_PARM ) /* mask to clear algo */
+
+/* Higher bits define how a given criterion is mapped to a server. In fact it
+ * designates the LB function by itself. The dynamic algorithms will also have
+ * the DYN bit set. These flags are automatically set at the end of the parsing.
+ */
+#define BE_LB_LKUP_NONE 0x00000 /* not defined */
+#define BE_LB_LKUP_MAP 0x10000 /* static map based lookup */
+#define BE_LB_LKUP_RRTREE 0x20000 /* FWRR tree lookup */
+#define BE_LB_LKUP_LCTREE 0x30000 /* FWLC tree lookup */
+#define BE_LB_LKUP_CHTREE 0x40000 /* consistent hash */
+#define BE_LB_LKUP_FSTREE 0x50000 /* FAS tree lookup */
+#define BE_LB_LKUP 0x70000 /* mask to get just the LKUP value */
+
+/* additional properties */
+#define BE_LB_PROP_DYN 0x80000 /* bit to indicate a dynamic algorithm */
+
+/* hash types */
+#define BE_LB_HASH_MAP 0x000000 /* map-based hash (default) */
+#define BE_LB_HASH_CONS 0x100000 /* consistent hash */
+#define BE_LB_HASH_TYPE 0x100000 /* get/clear hash types */
+
+/* additional modifier on top of the hash function (only avalanche right now) */
+#define BE_LB_HMOD_AVAL 0x200000 /* avalanche modifier */
+#define BE_LB_HASH_MOD 0x200000 /* get/clear hash modifier */
+
+/* BE_LB_HFCN_* is the hash function, to be used with BE_LB_HASH_FUNC */
+#define BE_LB_HFCN_SDBM 0x000000 /* sdbm hash */
+#define BE_LB_HFCN_DJB2 0x400000 /* djb2 hash */
+#define BE_LB_HFCN_WT6 0x800000 /* wt6 hash */
+#define BE_LB_HFCN_CRC32 0xC00000 /* crc32 hash */
+#define BE_LB_HASH_FUNC 0xC00000 /* get/clear hash function */
+
+
+/* various constants */
+
+/* The scale factor between user weight and effective weight allows smooth
+ * weight modulation even with small weights (eg: 1). It should not be too high
+ * though because it limits the number of servers in FWRR mode in order to
+ * prevent any integer overflow. The max number of servers per backend is
+ * limited to about (2^32-1)/256^2/scale ~= 65535.9999/scale. A scale of 16
+ * looks like a good value, as it allows 4095 servers per backend while leaving
+ * modulation steps of about 6% for servers with the lowest weight (1).
+ */
+#define BE_WEIGHT_SCALE 16
+
+/* LB parameters for all algorithms */
+struct lbprm {
+ int algo; /* load balancing algorithm and variants: BE_LB_* */
+ int tot_wact, tot_wbck; /* total effective weights of active and backup servers */
+ int tot_weight; /* total effective weight of servers participating to LB */
+ int tot_used; /* total number of servers used for LB */
+ int wmult; /* ratio between user weight and effective weight */
+ int wdiv; /* ratio between effective weight and user weight */
+ struct server *fbck; /* first backup server when !PR_O_USE_ALL_BK, or NULL */
+ struct lb_map map; /* LB parameters for map-based algorithms */
+ struct lb_fwrr fwrr;
+ struct lb_fwlc fwlc;
+ struct lb_chash chash;
+ struct lb_fas fas;
+ /* Call backs for some actions. Any of them may be NULL (thus should be ignored). */
+ void (*update_server_eweight)(struct server *); /* to be called after eweight change */
+ void (*set_server_status_up)(struct server *); /* to be called after status changes to UP */
+ void (*set_server_status_down)(struct server *); /* to be called after status changes to DOWN */
+ void (*server_take_conn)(struct server *); /* to be called when connection is assigned */
+ void (*server_drop_conn)(struct server *); /* to be called when connection is dropped */
+};
+
+#endif /* _TYPES_BACKEND_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ include/types/capture.h
+ This file defines everything related to captures.
+
+ Copyright (C) 2000-2007 Willy Tarreau - w@1wt.eu
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation, version 2.1
+ exclusively.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+*/
+
+#ifndef _TYPES_CAPTURE_H
+#define _TYPES_CAPTURE_H
+
+#include <common/config.h>
+#include <common/memory.h>
+
+struct cap_hdr {
+ struct cap_hdr *next;
+ char *name; /* header name, case insensitive, NULL if not header */
+ int namelen; /* length of the header name, to speed-up lookups, 0 if !name */
+ int len; /* capture length, not including terminal zero */
+ int index; /* index in the output array */
+ struct pool_head *pool; /* pool of pre-allocated memory area of (len+1) bytes */
+};
+
+extern struct pool_head *pool2_capture;
+
+#endif /* _TYPES_CAPTURE_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/channel.h
+ * Channel management definitions, macros and inline functions.
+ *
+ * Copyright (C) 2000-2014 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_CHANNEL_H
+#define _TYPES_CHANNEL_H
+
+#include <common/config.h>
+#include <common/buffer.h>
+
+/* The CF_* macros designate Channel Flags, which may be ORed in the bit field
+ * member 'flags' in struct channel. Here we have several types of flags :
+ *
+ * - pure status flags, reported by the data layer, which must be cleared
+ * before doing further I/O :
+ * CF_*_NULL, CF_*_PARTIAL
+ *
+ * - pure status flags, reported by stream-interface layer, which must also
+ * be cleared before doing further I/O :
+ * CF_*_TIMEOUT, CF_*_ERROR
+ *
+ * - read-only indicators reported by lower data levels :
+ * CF_STREAMER, CF_STREAMER_FAST
+ *
+ * - write-once status flags reported by the stream-interface layer :
+ * CF_SHUTR, CF_SHUTW
+ *
+ * - persistent control flags managed only by application level :
+ * CF_SHUT*_NOW, CF_*_ENA
+ *
+ * The flags have been arranged for readability, so that the read and write
+ * bits have the same position in a byte (read being the lower byte and write
+ * the second one). All flag names are relative to the channel. For instance,
+ * 'write' indicates the direction from the channel to the stream interface.
+ */
+
+#define CF_READ_NULL 0x00000001 /* last read detected on producer side */
+#define CF_READ_PARTIAL 0x00000002 /* some data were read from producer */
+#define CF_READ_TIMEOUT 0x00000004 /* timeout while waiting for producer */
+#define CF_READ_ERROR 0x00000008 /* unrecoverable error on producer side */
+#define CF_READ_ACTIVITY (CF_READ_NULL|CF_READ_PARTIAL|CF_READ_ERROR)
+
+#define CF_WAKE_CONNECT 0x00000010 /* wake the task up after connect succeeds */
+#define CF_SHUTR 0x00000020 /* producer has already shut down */
+#define CF_SHUTR_NOW 0x00000040 /* the producer must shut down for reads ASAP */
+#define CF_READ_NOEXP 0x00000080 /* producer should not expire */
+
+#define CF_WRITE_NULL 0x00000100 /* write(0) or connect() succeeded on consumer side */
+#define CF_WRITE_PARTIAL 0x00000200 /* some data were written to the consumer */
+#define CF_WRITE_TIMEOUT 0x00000400 /* timeout while waiting for consumer */
+#define CF_WRITE_ERROR 0x00000800 /* unrecoverable error on consumer side */
+#define CF_WRITE_ACTIVITY (CF_WRITE_NULL|CF_WRITE_PARTIAL|CF_WRITE_ERROR)
+
+#define CF_WAKE_WRITE 0x00001000 /* wake the task up when there's write activity */
+#define CF_SHUTW 0x00002000 /* consumer has already shut down */
+#define CF_SHUTW_NOW 0x00004000 /* the consumer must shut down for writes ASAP */
+#define CF_AUTO_CLOSE 0x00008000 /* producer can forward shutdown to other side */
+
+/* When CF_SHUTR_NOW is set, it is strictly forbidden for the producer to alter
+ * the buffer contents. When CF_SHUTW_NOW is set, the consumer is free to perform
+ * a shutw() when it has consumed the last contents, otherwise the session processor
+ * will do it anyway.
+ *
+ * The SHUT* flags work like this :
+ *
+ * SHUTR SHUTR_NOW meaning
+ * 0 0 normal case, connection still open and data is being read
+ * 0 1 closing : the producer cannot feed data anymore but can close
+ * 1 0 closed: the producer has closed its input channel.
+ * 1 1 impossible
+ *
+ * SHUTW SHUTW_NOW meaning
+ * 0 0 normal case, connection still open and data is being written
+ * 0 1 closing: the consumer can send last data and may then close
+ * 1 0 closed: the consumer has closed its output channel.
+ * 1 1 impossible
+ *
+ * The SHUTW_NOW flag should be set by the session processor when SHUTR and AUTO_CLOSE
+ * are both set. And it may also be set by the producer when it detects SHUTR while
+ * directly forwarding data to the consumer.
+ *
+ * The SHUTR_NOW flag is mostly used to force the producer to abort when an error is
+ * detected on the consumer side.
+ */
+
+#define CF_STREAMER 0x00010000 /* the producer is identified as streaming data */
+#define CF_STREAMER_FAST 0x00020000 /* the consumer seems to eat the stream very fast */
+
+#define CF_WROTE_DATA 0x00040000 /* some data were sent from this buffer */
+#define CF_ANA_TIMEOUT 0x00080000 /* the analyser timeout has expired */
+#define CF_READ_ATTACHED 0x00100000 /* the read side is attached for the first time */
+#define CF_KERN_SPLICING 0x00200000 /* kernel splicing desired for this channel */
+#define CF_READ_DONTWAIT 0x00400000 /* wake the task up after every read (eg: HTTP request) */
+#define CF_AUTO_CONNECT 0x00800000 /* consumer may attempt to establish a new connection */
+
+#define CF_DONT_READ 0x01000000 /* disable reading for now */
+#define CF_EXPECT_MORE 0x02000000 /* more data expected to be sent very soon (one-shot) */
+#define CF_SEND_DONTWAIT 0x04000000 /* don't wait for sending data (one-shot) */
+#define CF_NEVER_WAIT 0x08000000 /* never wait for sending data (permanent) */
+
+#define CF_WAKE_ONCE 0x10000000 /* pretend there is activity on this channel (one-shot) */
+/* unused: 0x20000000, 0x40000000 */
+#define CF_ISRESP 0x80000000 /* 0 = request channel, 1 = response channel */
+
+/* Masks which define input events for stream analysers */
+#define CF_MASK_ANALYSER (CF_READ_ATTACHED|CF_READ_ACTIVITY|CF_READ_TIMEOUT|CF_ANA_TIMEOUT|CF_WRITE_ACTIVITY|CF_WAKE_ONCE)
+
+/* Mask for static flags which cause analysers to be woken up when they change */
+#define CF_MASK_STATIC (CF_SHUTR|CF_SHUTW|CF_SHUTR_NOW|CF_SHUTW_NOW)
+
+
+/* Analysers (channel->analysers).
+ * These bits indicate that there is some processing to do on the buffer
+ * contents. It will probably evolve into a linked list later. These
+ * analysers could be compared to higher-level processors.
+ * The field is blanked by channel_init() and only modified by analysers
+ * themselves afterwards.
+ */
+/* unused: 0x00000001 */
+#define AN_REQ_INSPECT_FE 0x00000002 /* inspect request contents in the frontend */
+#define AN_REQ_WAIT_HTTP 0x00000004 /* wait for an HTTP request */
+#define AN_REQ_HTTP_BODY 0x00000008 /* wait for HTTP request body */
+#define AN_REQ_HTTP_PROCESS_FE 0x00000010 /* process the frontend's HTTP part */
+#define AN_REQ_SWITCHING_RULES 0x00000020 /* apply the switching rules */
+#define AN_REQ_INSPECT_BE 0x00000040 /* inspect request contents in the backend */
+#define AN_REQ_HTTP_PROCESS_BE 0x00000080 /* process the backend's HTTP part */
+#define AN_REQ_SRV_RULES 0x00000100 /* use-server rules */
+#define AN_REQ_HTTP_INNER 0x00000200 /* inner processing of HTTP request */
+#define AN_REQ_HTTP_TARPIT 0x00000400 /* wait for end of HTTP tarpit */
+#define AN_REQ_STICKING_RULES 0x00000800 /* table persistence matching */
+#define AN_REQ_PRST_RDP_COOKIE 0x00001000 /* persistence on rdp cookie */
+#define AN_REQ_HTTP_XFER_BODY 0x00002000 /* forward request body */
+#define AN_REQ_ALL 0x00003ffe /* all of the request analysers */
+
+/* response analysers */
+#define AN_RES_INSPECT 0x00010000 /* content inspection */
+#define AN_RES_WAIT_HTTP 0x00020000 /* wait for HTTP response */
+#define AN_RES_HTTP_PROCESS_BE 0x00040000 /* process backend's HTTP part */
+#define AN_RES_HTTP_PROCESS_FE 0x00040000 /* process frontend's HTTP part (same for now) */
+#define AN_RES_STORE_RULES 0x00080000 /* table persistence matching */
+#define AN_RES_HTTP_XFER_BODY 0x00100000 /* forward response body */
+
+
+/* Magic value to forward infinite size (TCP, ...), used with ->to_forward */
+#define CHN_INFINITE_FORWARD MAX_RANGE(unsigned int)
+
+
+struct channel {
+ unsigned int flags; /* CF_* */
+ unsigned int analysers; /* bit field indicating what to do on the channel */
+ struct buffer *buf; /* buffer attached to the channel, always present but may move */
+ struct pipe *pipe; /* non-NULL only when data present */
+ unsigned int to_forward; /* number of bytes to forward after out without a wake-up */
+ unsigned short last_read; /* 16 lower bits of last read date (max pause=65s) */
+ unsigned char xfer_large; /* number of consecutive large xfers */
+ unsigned char xfer_small; /* number of consecutive small xfers */
+ unsigned long long total; /* total data read */
+ int rex; /* expiration date for a read, in ticks */
+ int wex; /* expiration date for a write or connect, in ticks */
+ int rto; /* read timeout, in ticks */
+ int wto; /* write timeout, in ticks */
+ int analyse_exp; /* expiration date for current analysers (if set) */
+};
+
+
+/* Note about the channel structure
+
+ A channel stores information needed to reliably transport data in a single
+ direction. It stores status flags, timeouts, counters, subscribed analysers,
+ pointers to a data producer and to a data consumer, and information about
+ the amount of data which is allowed to flow directly from the producer to
+ the consumer without waking up the analysers.
+
+ A channel may buffer data into two locations :
+ - a visible buffer (->buf)
+ - an invisible buffer which currently consists of a pipe making use of
+ kernel buffers that cannot be tampered with.
+
+ Data stored into the first location may be analysed and altered by analysers
+ while data stored in pipes is only aimed at being transported from one
+ network socket to another one without being subject to memory copies. This
+ buffer may only be used when both the socket layer and the data layer of the
+ producer and the consumer support it, which typically is the case with Linux
+ splicing over sockets, and when there are enough data to be transported
+ without being analysed (transport of TCP/HTTP payload or tunnelled data,
+ which is indicated by ->to_forward).
+
+ In order not to mix data streams, the producer may only feed the invisible
+ data with data to forward, and only when the visible buffer is empty. The
+ producer may not always be able to feed the invisible buffer due to platform
+ limitations (lack of kernel support).
+
+ Conversely, the consumer must always take data from the invisible data first
+ before ever considering visible data. There is no limit to the size of data
+ to consume from the invisible buffer, as platform-specific implementations
+ will rarely leave enough control on this. So any byte fed into the invisible
+ buffer is expected to reach the destination file descriptor, by any means.
+ However, it's the consumer's responsibility to ensure that the invisible
+ data has been entirely consumed before consuming visible data. This must be
+ reflected by ->pipe->data. This is very important as this and only this can
+ ensure strict ordering of data between buffers.
+
+ The producer is responsible for decreasing ->to_forward. The ->to_forward
+ parameter indicates how many bytes may be fed into either data buffer
+ without waking the parent up. The special value CHN_INFINITE_FORWARD is
+ never decreased nor increased.
+
+ The buf->o parameter says how many bytes may be consumed from the visible
+ buffer. This parameter is updated by any buffer_write() as well as any data
+ forwarded through the visible buffer. Since the ->to_forward attribute
+ applies to data after buf->p, an analyser will not see a buffer which has a
+ non-null ->to_forward with buf->i > 0. A producer is responsible for raising
+ buf->o by min(to_forward, buf->i) when it injects data into the buffer.
+
+ The consumer is responsible for decreasing ->buf->o when it sends data
+ from the visible buffer, and ->pipe->data when it sends data from the
+ invisible buffer.
+
+ A real-world example consists in part in an HTTP response waiting in a
+ buffer to be forwarded. We know the header length (300) and the amount of
+ data to forward (content-length=9000). The buffer already contains 1000
+ bytes of data after the 300 bytes of headers. Thus the caller will set
+ buf->o to 300 indicating that it explicitly wants to send those data, and
+ set ->to_forward to 9000 (content-length). This value must be normalised
+ immediately after updating ->to_forward : since there are already 1300 bytes
+ in the buffer, 300 of which are already counted in buf->o, and that size
+ is smaller than ->to_forward, we must update buf->o to 1300 to flush the
+ whole buffer, and reduce ->to_forward to 8000. After that, the producer may
+ try to feed the additional data through the invisible buffer using a
+ platform-specific method such as splice().
+
+ The ->to_forward entry is also used to detect whether we can fill the buffer
+ or not. The idea is that we need to save some space for data manipulation
+ (mainly header rewriting in HTTP) so we don't want to have a full buffer on
+ input before processing a request or response. Thus, we ensure that there is
+ always global.maxrewrite bytes of free space. Since we don't want to forward
+ chunks without filling the buffer, we rely on ->to_forward. When ->to_forward
+ is null, we may have some processing to do so we don't want to fill the
+ buffer. When ->to_forward is non-null, we know we don't care for at least as
+ many bytes. In the end, we know that each of the ->to_forward bytes will
+ eventually leave the buffer. So as long as ->to_forward is larger than
+ global.maxrewrite, we can fill the buffer. If ->to_forward is smaller than
+ global.maxrewrite, then we don't want to fill the buffer with more than
+ buf->size - global.maxrewrite + ->to_forward.
+
+ A buffer may contain up to 5 areas :
+ - the data waiting to be sent. These data are located between buf->p-o and
+ buf->p ;
+ - the data to process and possibly transform. These data start at
+ buf->p and may be up to ->i bytes long.
+ - the data to preserve. They start at ->p and stop at ->p+i. The limit
+ between the two solely depends on the protocol being analysed.
+ - the spare area : it is the remainder of the buffer, which can be used to
+ store new incoming data. It starts at ->p+i and is up to ->size-i-o long.
+ It may be limited by global.maxrewrite.
+ - the reserved area : this is the area which must not be filled and is
+ reserved for possible rewrites ; it is up to global.maxrewrite bytes
+ long.
+ */
+
+#endif /* _TYPES_CHANNEL_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Health-checks.
+ *
+ * Copyright 2008-2009 Krzysztof Piotr Oledzki <ole@ans.pl>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#ifndef _TYPES_CHECKS_H
+#define _TYPES_CHECKS_H
+
+#include <sys/time.h>
+
+#include <common/config.h>
+#include <common/mini-clist.h>
+#include <common/regex.h>
+
+#include <types/connection.h>
+#include <types/obj_type.h>
+#include <types/task.h>
+#include <types/server.h>
+
+/* enum used by check->result. Must remain in this order, as some code uses
+ * result >= CHK_RES_PASSED to declare success.
+ */
+enum chk_result {
+ CHK_RES_UNKNOWN = 0, /* initialized to this by default */
+ CHK_RES_NEUTRAL, /* valid check but no status information */
+ CHK_RES_FAILED, /* check failed */
+ CHK_RES_PASSED, /* check succeeded and server is fully up again */
+ CHK_RES_CONDPASS, /* check reports the server doesn't want new sessions */
+};
+
+/* flags used by check->state */
+#define CHK_ST_INPROGRESS 0x0001 /* a check is currently running */
+#define CHK_ST_CONFIGURED 0x0002 /* this check is configured and may be enabled */
+#define CHK_ST_ENABLED 0x0004 /* this check is currently administratively enabled */
+#define CHK_ST_PAUSED 0x0008 /* checks are paused because of maintenance (health only) */
+#define CHK_ST_AGENT 0x0010 /* check is an agent check (otherwise it's a health check) */
+
+/* check status */
+enum {
+ HCHK_STATUS_UNKNOWN = 0, /* Unknown */
+ HCHK_STATUS_INI, /* Initializing */
+ HCHK_STATUS_START, /* Check started - SPECIAL STATUS */
+
+ /* Below we have finished checks */
+ HCHK_STATUS_CHECKED, /* DUMMY STATUS */
+
+ HCHK_STATUS_HANA, /* Health analyze detected enough consecutive errors */
+
+ HCHK_STATUS_SOCKERR, /* Socket error */
+
+ HCHK_STATUS_L4OK, /* L4 check passed, for example tcp connect */
+ HCHK_STATUS_L4TOUT, /* L4 timeout */
+ HCHK_STATUS_L4CON, /* L4 connection problem, for example: */
+ /* "Connection refused" (tcp rst) or "No route to host" (icmp) */
+
+ HCHK_STATUS_L6OK, /* L6 check passed */
+ HCHK_STATUS_L6TOUT, /* L6 (SSL) timeout */
+ HCHK_STATUS_L6RSP, /* L6 invalid response - protocol error */
+
+ HCHK_STATUS_L7TOUT, /* L7 (HTTP/SMTP) timeout */
+ HCHK_STATUS_L7RSP, /* L7 invalid response - protocol error */
+
+ /* Below we have layer 5-7 data available */
+ HCHK_STATUS_L57DATA, /* DUMMY STATUS */
+ HCHK_STATUS_L7OKD, /* L7 check passed */
+ HCHK_STATUS_L7OKCD, /* L7 check conditionally passed */
+ HCHK_STATUS_L7STS, /* L7 response error, for example HTTP 5xx */
+
+ HCHK_STATUS_PROCERR, /* External process check failure */
+ HCHK_STATUS_PROCTOUT, /* External process check timeout */
+ HCHK_STATUS_PROCOK, /* External process check passed */
+
+ HCHK_STATUS_SIZE
+};
+
+/* environment variables memory requirement for different types of data */
+#define EXTCHK_SIZE_EVAL_INIT 0 /* size determined during the init phase,
+ * such environment variables are not updatable. */
+#define EXTCHK_SIZE_ULONG 20 /* max string length for an unsigned long value */
+
+/* external checks environment variables */
+enum {
+ EXTCHK_PATH = 0,
+
+ /* Proxy specific environment variables */
+ EXTCHK_HAPROXY_PROXY_NAME, /* the backend name */
+ EXTCHK_HAPROXY_PROXY_ID, /* the backend id */
+ EXTCHK_HAPROXY_PROXY_ADDR, /* the first bind address if available (or empty) */
+ EXTCHK_HAPROXY_PROXY_PORT, /* the first bind port if available (or empty) */
+
+ /* Server specific environment variables */
+ EXTCHK_HAPROXY_SERVER_NAME, /* the server name */
+ EXTCHK_HAPROXY_SERVER_ID, /* the server id */
+ EXTCHK_HAPROXY_SERVER_ADDR, /* the server address */
+ EXTCHK_HAPROXY_SERVER_PORT, /* the server port if available (or empty) */
+ EXTCHK_HAPROXY_SERVER_MAXCONN, /* the server max connections */
+ EXTCHK_HAPROXY_SERVER_CURCONN, /* the current number of connections on the server */
+
+ EXTCHK_SIZE
+};
+
+
+/* health status for response tracking */
+enum {
+ HANA_STATUS_UNKNOWN = 0,
+
+ HANA_STATUS_L4_OK, /* L4 successful connection */
+ HANA_STATUS_L4_ERR, /* L4 unsuccessful connection */
+
+ HANA_STATUS_HTTP_OK, /* Correct http response */
+ HANA_STATUS_HTTP_STS, /* Wrong http response, for example HTTP 5xx */
+ HANA_STATUS_HTTP_HDRRSP, /* Invalid http response (headers) */
+ HANA_STATUS_HTTP_RSP, /* Invalid http response */
+
+ HANA_STATUS_HTTP_READ_ERROR, /* Read error */
+ HANA_STATUS_HTTP_READ_TIMEOUT, /* Read timeout */
+ HANA_STATUS_HTTP_BROKEN_PIPE, /* Unexpected close from server */
+
+ HANA_STATUS_SIZE
+};
+
+enum {
+ HANA_ONERR_UNKNOWN = 0,
+
+ HANA_ONERR_FASTINTER, /* Force fastinter */
+ HANA_ONERR_FAILCHK, /* Simulate a failed check */
+ HANA_ONERR_SUDDTH, /* Enters sudden death - one more failed check will mark this server down */
+ HANA_ONERR_MARKDWN, /* Mark this server down, now! */
+};
+
+enum {
+ HANA_ONMARKEDDOWN_NONE = 0,
+ HANA_ONMARKEDDOWN_SHUTDOWNSESSIONS, /* Shutdown peer sessions */
+};
+
+enum {
+ HANA_ONMARKEDUP_NONE = 0,
+ HANA_ONMARKEDUP_SHUTDOWNBACKUPSESSIONS, /* Shutdown peer sessions */
+};
+
+enum {
+ HANA_OBS_NONE = 0,
+
+ HANA_OBS_LAYER4, /* Observe L4 - for example tcp */
+ HANA_OBS_LAYER7, /* Observe L7 - for example http */
+
+ HANA_OBS_SIZE
+};
+
+struct check {
+ struct xprt_ops *xprt; /* transport layer operations for health checks */
+ struct connection *conn; /* connection state for health checks */
+ unsigned short port; /* the port to use for the health checks */
+ struct buffer *bi, *bo; /* input and output buffers to send/recv check */
+ struct task *task; /* the task associated to the health check processing, NULL if disabled */
+ struct timeval start; /* last health check start time */
+ long duration; /* time in ms taken to finish the last health check */
+ short status, code; /* check result, check code */
+ char desc[HCHK_DESC_LEN]; /* health check description */
+ int use_ssl; /* use SSL for health checks */
+ int send_proxy; /* send a PROXY protocol header with checks */
+ struct list *tcpcheck_rules; /* tcp-check send / expect rules */
+ struct tcpcheck_rule *current_step; /* current step when using tcpcheck */
+ struct tcpcheck_rule *last_started_step;/* pointer to latest tcpcheck rule started */
+ int inter, fastinter, downinter; /* checks: time in milliseconds */
+ enum chk_result result; /* health-check result : CHK_RES_* */
+ int state; /* state of the check : CHK_ST_* */
+ int health; /* 0 to rise-1 = bad;
+ * rise to rise+fall-1 = good */
+ int rise, fall; /* time in iterations */
+ int type; /* Check type, one of PR_O2_*_CHK */
+ struct server *server; /* back-pointer to server */
+ char **argv; /* the arguments to use if running a process-based check */
+ char **envp; /* the environment to use if running a process-based check */
+ struct pid_list *curpid; /* entry in pid_list used for current process-based test, or -1 if not in test */
+ struct sockaddr_storage addr; /* the address to check */
+};
+
+struct check_status {
+ short result; /* one of SRV_CHK_* */
+ char *info; /* human readable short info */
+ char *desc; /* long description */
+};
+
+struct extcheck_env {
+ char *name; /* environment variable name */
+ int vmaxlen; /* value maximum length, used to determine the required memory allocation */
+};
+
+struct analyze_status {
+ char *desc; /* description */
+ unsigned char lr[HANA_OBS_SIZE]; /* result for l4/l7: 0 = ignore, 1 = error, 2 = OK */
+};
+
+/* possible actions for tcpcheck_rule->action */
+enum {
+ TCPCHK_ACT_SEND = 0, /* send action, regular string format */
+ TCPCHK_ACT_EXPECT, /* expect action, either regular or binary string */
+ TCPCHK_ACT_CONNECT, /* connect action, to probe a new port */
+ TCPCHK_ACT_COMMENT, /* no action, simply a comment used for logs */
+};
+
+/* flags used by tcpcheck_rule->conn_opts */
+#define TCPCHK_OPT_NONE 0x0000 /* no options specified, default */
+#define TCPCHK_OPT_SEND_PROXY 0x0001 /* send proxy-protocol string */
+#define TCPCHK_OPT_SSL 0x0002 /* SSL connection */
+
+struct tcpcheck_rule {
+ struct list list; /* list linked to from the proxy */
+ int action; /* action: send or expect */
+ char *comment; /* comment to be used in the logs and on the stats socket */
+ /* match type uses NON-NULL pointer from either string or expect_regex below */
+ /* sent string is string */
+ char *string; /* sent or expected string */
+ int string_len; /* string length */
+ struct my_regex *expect_regex; /* expected */
+ int inverse; /* 0 = regular match, 1 = inverse match */
+ unsigned short port; /* port to connect to */
+ unsigned short conn_opts; /* options when setting up a new connection */
+};
+
+#endif /* _TYPES_CHECKS_H */
--- /dev/null
+/*
+ * include/types/compression.h
+ * This file defines everything related to compression.
+ *
+ * Copyright 2012 Exceliance, David Du Colombier <dducolombier@exceliance.fr>
+ * William Lallemand <wlallemand@exceliance.fr>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_COMP_H
+#define _TYPES_COMP_H
+
+#if defined(USE_SLZ)
+#ifdef USE_ZLIB
+#error "Cannot build with both USE_SLZ and USE_ZLIB at the same time."
+#endif
+#include <slz.h>
+#elif defined(USE_ZLIB)
+#include <zlib.h>
+#endif
+
+struct comp {
+ struct comp_algo *algos;
+ struct comp_type *types;
+ unsigned int offload;
+};
+
+struct comp_ctx {
+#if defined(USE_SLZ)
+ struct slz_stream strm;
+ const void *direct_ptr; /* NULL or pointer to beginning of data */
+ int direct_len; /* length of direct_ptr if not NULL */
+ struct buffer *queued; /* if not NULL, data already queued */
+#elif defined(USE_ZLIB)
+ z_stream strm; /* zlib stream */
+ void *zlib_deflate_state;
+ void *zlib_window;
+ void *zlib_prev;
+ void *zlib_pending_buf;
+ void *zlib_head;
+#endif
+ int cur_lvl;
+};
+
+/* Thanks to MSIE/IIS, the "deflate" name is ambiguous: according to the RFC
+ * it designates a zlib-wrapped deflate stream, but MSIE only understands a raw
+ * deflate stream. For this reason some people prefer to emit a raw deflate
+ * stream on "deflate", so we need two algos for the same name; they are
+ * distinguished by the config name.
+ */
+struct comp_algo {
+ char *cfg_name; /* config name */
+ int cfg_name_len;
+
+ char *ua_name; /* name for the user-agent */
+ int ua_name_len;
+
+ int (*init)(struct comp_ctx **comp_ctx, int level);
+ int (*add_data)(struct comp_ctx *comp_ctx, const char *in_data, int in_len, struct buffer *out);
+ int (*flush)(struct comp_ctx *comp_ctx, struct buffer *out);
+ int (*finish)(struct comp_ctx *comp_ctx, struct buffer *out);
+ int (*end)(struct comp_ctx **comp_ctx);
+ struct comp_algo *next;
+};
+
+struct comp_type {
+ char *name;
+ int name_len;
+ struct comp_type *next;
+};
+
+
+#endif /* _TYPES_COMP_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
+
--- /dev/null
+/*
+ * include/types/connection.h
+ * This file describes the connection struct and associated constants.
+ *
+ * Copyright (C) 2000-2014 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_CONNECTION_H
+#define _TYPES_CONNECTION_H
+
+#include <stdlib.h>
+#include <sys/socket.h>
+
+#include <common/config.h>
+
+#include <types/listener.h>
+#include <types/obj_type.h>
+#include <types/port_range.h>
+#include <types/protocol.h>
+
+/* referenced below */
+struct connection;
+struct buffer;
+struct pipe;
+
+/* For each direction, we have a CO_FL_{SOCK,DATA}_<DIR>_ENA flag, which
+ * indicates if read or write is desired in that direction for the respective
+ * layers. The current status corresponding to the current layer being used is
+ * remembered in the CO_FL_CURR_<DIR>_ENA flag. The need to poll (ie receipt of
+ * EAGAIN) is remembered at the file descriptor level so that even when the
+ * activity is stopped and restarted, we still remember whether it was needed
+ * to poll before attempting the I/O.
+ *
+ * The CO_FL_CURR_<DIR>_ENA flag is set from the FD status in
+ * conn_refresh_polling_flags(). The FD state is updated according to these
+ * flags in conn_cond_update_polling().
+ */
+
+/* flags for use in connection->flags */
+enum {
+ CO_FL_NONE = 0x00000000, /* Just for initialization purposes */
+
+ /* Do not change these values without updating conn_*_poll_changes() ! */
+ CO_FL_SOCK_RD_ENA = 0x00000001, /* receiving handshakes is allowed */
+ CO_FL_DATA_RD_ENA = 0x00000002, /* receiving data is allowed */
+ CO_FL_CURR_RD_ENA = 0x00000004, /* receiving is currently allowed */
+ /* unused : 0x00000008 */
+
+ CO_FL_SOCK_WR_ENA = 0x00000010, /* sending handshakes is desired */
+ CO_FL_DATA_WR_ENA = 0x00000020, /* sending data is desired */
+ CO_FL_CURR_WR_ENA = 0x00000040, /* sending is currently desired */
+ /* unused : 0x00000080 */
+
+ /* These flags indicate whether the Control and Transport layers are initialized */
+ CO_FL_CTRL_READY = 0x00000100, /* FD was registered, fd_delete() needed */
+ CO_FL_XPRT_READY = 0x00000200, /* xprt_init() done, xprt_close() needed */
+
+ /* These flags are used by data layers to indicate they had to stop
+ * sending data because a buffer was empty (WAIT_DATA) or stop receiving
+ * data because a buffer was full (WAIT_ROOM). The connection handler
+ * clears them before first calling the I/O and data callbacks.
+ */
+ CO_FL_WAIT_DATA = 0x00000400, /* data source is empty */
+ CO_FL_WAIT_ROOM = 0x00000800, /* data sink is full */
+
+ /* These flags are used to report whether the from/to addresses are set or not */
+ CO_FL_ADDR_FROM_SET = 0x00001000, /* addr.from is set */
+ CO_FL_ADDR_TO_SET = 0x00002000, /* addr.to is set */
+
+ /* flags indicating what event type the data layer is interested in */
+ CO_FL_INIT_DATA = 0x00004000, /* initialize the data layer before using it */
+ CO_FL_WAKE_DATA = 0x00008000, /* wake-up data layer upon activity at the transport layer */
+
+ /* flags used to remember what shutdown have been performed/reported */
+ CO_FL_DATA_RD_SH = 0x00010000, /* DATA layer was notified about shutr/read0 */
+ CO_FL_DATA_WR_SH = 0x00020000, /* DATA layer asked for shutw */
+ CO_FL_SOCK_RD_SH = 0x00040000, /* SOCK layer was notified about shutr/read0 */
+ CO_FL_SOCK_WR_SH = 0x00080000, /* SOCK layer asked for shutw */
+
+ /* flags used to report connection status and errors */
+ CO_FL_ERROR = 0x00100000, /* a fatal error was reported */
+ CO_FL_CONNECTED = 0x00200000, /* the connection is now established */
+ CO_FL_WAIT_L4_CONN = 0x00400000, /* waiting for L4 to be connected */
+ CO_FL_WAIT_L6_CONN = 0x00800000, /* waiting for L6 to be connected (eg: SSL) */
+
+ /* synthesis of the flags above */
+ CO_FL_CONN_STATE = 0x00FF0000, /* all shut/connected flags */
+
+ /*** All the flags below are used for connection handshakes. Any new
+ * handshake should be added after this point, and CO_FL_HANDSHAKE
+ * should be updated.
+ */
+ CO_FL_SEND_PROXY = 0x01000000, /* send a valid PROXY protocol header */
+ CO_FL_SSL_WAIT_HS = 0x02000000, /* wait for an SSL handshake to complete */
+ CO_FL_ACCEPT_PROXY = 0x04000000, /* receive a valid PROXY protocol header */
+ /* unused : 0x08000000 */
+
+ /* below we have all handshake flags grouped into one */
+ CO_FL_HANDSHAKE = CO_FL_SEND_PROXY | CO_FL_SSL_WAIT_HS | CO_FL_ACCEPT_PROXY,
+
+ /* when any of these flags is set, polling is defined by socket-layer
+ * operations, as opposed to data-layer. Transport is explicitly not
+ * mentioned here to avoid any confusion, since it can be the same
+ * as DATA or SOCK on some implementations.
+ */
+ CO_FL_POLL_SOCK = CO_FL_HANDSHAKE | CO_FL_WAIT_L4_CONN | CO_FL_WAIT_L6_CONN,
+
+ /* This connection may not be shared between clients */
+ CO_FL_PRIVATE = 0x10000000,
+
+ /* unused : 0x20000000, 0x40000000 */
+
+ /* This last flag indicates that the transport layer is used (for instance
+ * by logs) and must not be cleared yet. The last call to conn_xprt_close()
+ * must be done after clearing this flag.
+ */
+ CO_FL_XPRT_TRACKED = 0x80000000,
+};
+
+
+/* possible connection error codes */
+enum {
+ CO_ER_NONE, /* no error */
+
+ CO_ER_CONF_FDLIM, /* reached process' configured FD limitation */
+ CO_ER_PROC_FDLIM, /* reached process' FD limitation */
+ CO_ER_SYS_FDLIM, /* reached system's FD limitation */
+ CO_ER_SYS_MEMLIM, /* reached system buffers limitation */
+ CO_ER_NOPROTO, /* protocol not supported */
+ CO_ER_SOCK_ERR, /* other socket error */
+
+ CO_ER_PORT_RANGE, /* source port range exhausted */
+ CO_ER_CANT_BIND, /* can't bind to source address */
+ CO_ER_FREE_PORTS, /* no more free ports on the system */
+ CO_ER_ADDR_INUSE, /* local address already in use */
+
+ CO_ER_PRX_EMPTY, /* nothing received in PROXY protocol header */
+ CO_ER_PRX_ABORT, /* client abort during PROXY protocol header */
+ CO_ER_PRX_TIMEOUT, /* timeout while waiting for a PROXY header */
+ CO_ER_PRX_TRUNCATED, /* truncated PROXY protocol header */
+ CO_ER_PRX_NOT_HDR, /* not a PROXY protocol header */
+ CO_ER_PRX_BAD_HDR, /* bad PROXY protocol header */
+ CO_ER_PRX_BAD_PROTO, /* unsupported protocol in PROXY header */
+
+ CO_ER_SSL_EMPTY, /* client closed during SSL handshake */
+ CO_ER_SSL_ABORT, /* client abort during SSL handshake */
+ CO_ER_SSL_TIMEOUT, /* timeout during SSL handshake */
+ CO_ER_SSL_TOO_MANY, /* too many SSL connections */
+ CO_ER_SSL_NO_MEM, /* no more memory to allocate an SSL connection */
+ CO_ER_SSL_RENEG, /* forbidden client renegotiation */
+ CO_ER_SSL_CA_FAIL, /* client cert verification failed in the CA chain */
+ CO_ER_SSL_CRT_FAIL, /* client cert verification failed on the certificate */
+ CO_ER_SSL_HANDSHAKE, /* SSL error during handshake */
+ CO_ER_SSL_HANDSHAKE_HB, /* SSL error during handshake with heartbeat present */
+ CO_ER_SSL_KILLED_HB, /* Stopped a TLSv1 heartbeat attack (CVE-2014-0160) */
+ CO_ER_SSL_NO_TARGET, /* unknown target (not client nor server) */
+};
+
+/* source address settings for outgoing connections */
+enum {
+ /* Tproxy exclusive values from 0 to 7 */
+ CO_SRC_TPROXY_ADDR = 0x0001, /* bind to this non-local address when connecting */
+ CO_SRC_TPROXY_CIP = 0x0002, /* bind to the client's IP address when connecting */
+ CO_SRC_TPROXY_CLI = 0x0003, /* bind to the client's IP+port when connecting */
+ CO_SRC_TPROXY_DYN = 0x0004, /* bind to a dynamically computed non-local address */
+ CO_SRC_TPROXY_MASK = 0x0007, /* bind to a non-local address when connecting */
+
+ CO_SRC_BIND = 0x0008, /* bind to a specific source address when connecting */
+};
+
+/* flags that can be passed to xprt->snd_buf() */
+enum {
+ CO_SFL_MSG_MORE = 0x0001, /* More data to come afterwards */
+ CO_SFL_STREAMER = 0x0002, /* Producer is continuously streaming data */
+};
+
+/* xprt_ops describes transport-layer operations for a connection. They
+ * generally run over a socket-based control layer, but not always. Some
+ * of them are used for data transfer with the upper layer (rcv_*, snd_*)
+ * and the other ones are used to setup and release the transport layer.
+ */
+struct xprt_ops {
+ int (*rcv_buf)(struct connection *conn, struct buffer *buf, int count); /* recv callback */
+ int (*snd_buf)(struct connection *conn, struct buffer *buf, int flags); /* send callback */
+ int (*rcv_pipe)(struct connection *conn, struct pipe *pipe, unsigned int count); /* recv-to-pipe callback */
+ int (*snd_pipe)(struct connection *conn, struct pipe *pipe); /* send-to-pipe callback */
+ void (*shutr)(struct connection *, int); /* shutr function */
+ void (*shutw)(struct connection *, int); /* shutw function */
+ void (*close)(struct connection *); /* close the transport layer */
+ int (*init)(struct connection *conn); /* initialize the transport layer */
+};
+
+/* data_cb describes the data layer's recv and send callbacks which are called
+ * when I/O activity was detected after the transport layer is ready. These
+ * callbacks are supposed to make use of the xprt_ops above to exchange data
+ * from/to buffers and pipes. The <wake> callback is used to report activity
+ * at the transport layer, which can be a connection opening/close, or any
+ * data movement. The <init> callback may be called by the connection handler
+ * at the end of a transport handshake, when it is about to transfer data and
+ * the data layer is not ready yet. Both <wake> and <init> may abort a connection
+ * by returning < 0.
+ */
+struct data_cb {
+ void (*recv)(struct connection *conn); /* data-layer recv callback */
+ void (*send)(struct connection *conn); /* data-layer send callback */
+ int (*wake)(struct connection *conn); /* data-layer callback to report activity */
+ int (*init)(struct connection *conn); /* data-layer initialization */
+};
+
+/* a connection source profile defines all the parameters needed to properly
+ * bind an outgoing connection for a server or proxy.
+ */
+
+struct conn_src {
+ unsigned int opts; /* CO_SRC_* */
+ int iface_len; /* bind interface name length */
+ char *iface_name; /* bind interface name or NULL */
+ struct port_range *sport_range; /* optional per-server TCP source ports */
+ struct sockaddr_storage source_addr; /* the address to which we want to bind for connect() */
+#if defined(CONFIG_HAP_TRANSPARENT)
+ struct sockaddr_storage tproxy_addr; /* non-local address we want to bind to for connect() */
+ char *bind_hdr_name; /* bind to this header name if defined */
+ int bind_hdr_len; /* length of the name of the header above */
+ int bind_hdr_occ; /* occurrence number of header above: >0 = from first, <0 = from end, 0=disabled */
+#endif
+};
+
+/* This structure describes a connection with its methods and data.
+ * A connection may be performed to a proxy or a server via a local or remote
+ * socket, and can also be made to an internal applet. It can support
+ * several transport schemes (raw, ssl, ...). It can support several
+ * connection control schemes, generally a protocol for socket-oriented
+ * connections, but other methods for applets.
+ */
+struct connection {
+ enum obj_type obj_type; /* differentiates connection from applet context */
+ unsigned char err_code; /* CO_ER_* */
+ signed short send_proxy_ofs; /* <0 = offset to (re)send from the end, >0 = send all */
+ unsigned int flags; /* CO_FL_* */
+ const struct protocol *ctrl; /* operations at the socket layer */
+ const struct xprt_ops *xprt; /* operations at the transport layer */
+ const struct data_cb *data; /* data layer callbacks. Must be set before xprt->init() */
+ void *xprt_ctx; /* general purpose pointer, initialized to NULL */
+ void *owner; /* pointer to upper layer's entity (eg: stream interface) */
+ int xprt_st; /* transport layer state, initialized to zero */
+
+ union { /* definitions which depend on connection type */
+ struct { /*** information used by socket-based connections ***/
+ int fd; /* file descriptor for a stream driver when known */
+ } sock;
+ } t;
+ enum obj_type *target; /* the target to connect to (server, proxy, applet, ...) */
+ struct list list; /* attach point to various connection lists (idle, ...) */
+ const struct netns_entry *proxy_netns;
+ struct {
+ struct sockaddr_storage from; /* client address, or address to spoof when connecting to the server */
+ struct sockaddr_storage to; /* address reached by the client, or address to connect to */
+ } addr; /* addresses of the remote side, client for producer and server for consumer */
+};
+
+/* proxy protocol v2 definitions */
+#define PP2_SIGNATURE "\x0D\x0A\x0D\x0A\x00\x0D\x0A\x51\x55\x49\x54\x0A"
+#define PP2_SIGNATURE_LEN 12
+#define PP2_HEADER_LEN 16
+
+/* ver_cmd byte */
+#define PP2_CMD_LOCAL 0x00
+#define PP2_CMD_PROXY 0x01
+#define PP2_CMD_MASK 0x0F
+
+#define PP2_VERSION 0x20
+#define PP2_VERSION_MASK 0xF0
+
+/* fam byte */
+#define PP2_TRANS_UNSPEC 0x00
+#define PP2_TRANS_STREAM 0x01
+#define PP2_TRANS_DGRAM 0x02
+#define PP2_TRANS_MASK 0x0F
+
+#define PP2_FAM_UNSPEC 0x00
+#define PP2_FAM_INET 0x10
+#define PP2_FAM_INET6 0x20
+#define PP2_FAM_UNIX 0x30
+#define PP2_FAM_MASK 0xF0
+
+#define PP2_ADDR_LEN_UNSPEC (0)
+#define PP2_ADDR_LEN_INET (4 + 4 + 2 + 2)
+#define PP2_ADDR_LEN_INET6 (16 + 16 + 2 + 2)
+#define PP2_ADDR_LEN_UNIX (108 + 108)
+
+#define PP2_HDR_LEN_UNSPEC (PP2_HEADER_LEN + PP2_ADDR_LEN_UNSPEC)
+#define PP2_HDR_LEN_INET (PP2_HEADER_LEN + PP2_ADDR_LEN_INET)
+#define PP2_HDR_LEN_INET6 (PP2_HEADER_LEN + PP2_ADDR_LEN_INET6)
+#define PP2_HDR_LEN_UNIX (PP2_HEADER_LEN + PP2_ADDR_LEN_UNIX)
+
+struct proxy_hdr_v2 {
+ uint8_t sig[12]; /* hex 0D 0A 0D 0A 00 0D 0A 51 55 49 54 0A */
+ uint8_t ver_cmd; /* protocol version and command */
+ uint8_t fam; /* protocol family and transport */
+ uint16_t len; /* number of following bytes part of the header */
+ union {
+ struct { /* for TCP/UDP over IPv4, len = 12 */
+ uint32_t src_addr;
+ uint32_t dst_addr;
+ uint16_t src_port;
+ uint16_t dst_port;
+ } ip4;
+ struct { /* for TCP/UDP over IPv6, len = 36 */
+ uint8_t src_addr[16];
+ uint8_t dst_addr[16];
+ uint16_t src_port;
+ uint16_t dst_port;
+ } ip6;
+ struct { /* for AF_UNIX sockets, len = 216 */
+ uint8_t src_addr[108];
+ uint8_t dst_addr[108];
+ } unx;
+ } addr;
+};
+
+#define PP2_TYPE_SSL 0x20
+#define PP2_TYPE_SSL_VERSION 0x21
+#define PP2_TYPE_SSL_CN 0x22
+#define PP2_TYPE_NETNS 0x30
+
+#define TLV_HEADER_SIZE 3
+struct tlv {
+ uint8_t type;
+ uint8_t length_hi;
+ uint8_t length_lo;
+ uint8_t value[0];
+}__attribute__((packed));
+
+struct tlv_ssl {
+ struct tlv tlv;
+ uint8_t client;
+ uint32_t verify;
+ uint8_t sub_tlv[0];
+}__attribute__((packed));
+
+#define PP2_CLIENT_SSL 0x01
+#define PP2_CLIENT_CERT_CONN 0x02
+#define PP2_CLIENT_CERT_SESS 0x04
+
+#endif /* _TYPES_CONNECTION_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/counters.h
+ * This file contains structure declarations for statistics counters.
+ *
+ * Copyright 2008-2009 Krzysztof Piotr Oledzki <ole@ans.pl>
+ * Copyright 2011-2014 Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_COUNTERS_H
+#define _TYPES_COUNTERS_H
+
+/* maybe later we might think about having a different struct for FE and BE */
+struct pxcounters {
+ unsigned int conn_max; /* max # of active sessions */
+ long long cum_conn; /* cumulated number of received connections */
+ long long cum_sess; /* cumulated number of accepted connections */
+ long long cum_lbconn; /* cumulated number of sessions processed by load balancing (BE only) */
+ unsigned long last_sess; /* last session time */
+
+ unsigned int cps_max; /* maximum of new connections received per second */
+ unsigned int sps_max; /* maximum of new connections accepted per second (sessions) */
+ unsigned int nbpend_max; /* max number of pending connections with no server assigned yet (BE only) */
+
+ long long bytes_in; /* number of bytes transferred from the client to the server */
+ long long bytes_out; /* number of bytes transferred from the server to the client */
+
+ long long comp_in; /* input bytes fed to the compressor */
+ long long comp_out; /* output bytes emitted by the compressor */
+ long long comp_byp; /* input bytes that bypassed the compressor (cpu/ram/bw limitation) */
+
+ long long denied_req; /* blocked requests/responses because of security concerns */
+ long long denied_resp; /* blocked requests/responses because of security concerns */
+ long long failed_req; /* failed requests (eg: invalid or timeout) */
+ long long denied_conn; /* denied connection requests (tcp-req rules) */
+
+ long long failed_conns; /* failed connect() attempts (BE only) */
+ long long failed_resp; /* failed responses (BE only) */
+ long long cli_aborts; /* aborted responses during DATA phase caused by the client */
+ long long srv_aborts; /* aborted responses during DATA phase caused by the server */
+ long long retries; /* retried and redispatched connections (BE only) */
+ long long redispatches; /* retried and redispatched connections (BE only) */
+ long long intercepted_req; /* number of monitoring or stats requests intercepted by the frontend */
+
+ unsigned int q_time, c_time, d_time, t_time; /* sums of conn_time, queue_time, data_time, total_time */
+
+ union {
+ struct {
+ long long cum_req; /* cumulated number of processed HTTP requests */
+ long long comp_rsp; /* number of compressed responses */
+ unsigned int rps_max; /* maximum of new HTTP requests per second observed */
+ long long rsp[6]; /* http response codes */
+ } http;
+ } p; /* protocol-specific stats */
+};
+
+struct licounters {
+ unsigned int conn_max; /* max # of active listener sessions */
+
+ long long cum_conn; /* cumulated number of received connections */
+ long long cum_sess; /* cumulated number of accepted sessions */
+
+ long long bytes_in; /* number of bytes transferred from the client to the server */
+ long long bytes_out; /* number of bytes transferred from the server to the client */
+
+ long long denied_req, denied_resp; /* blocked requests/responses because of security concerns */
+ long long failed_req; /* failed requests (eg: invalid or timeout) */
+ long long denied_conn; /* denied connection requests (tcp-req rules) */
+};
+
+struct srvcounters {
+ unsigned int cur_sess_max; /* max number of currently active sessions */
+ unsigned int nbpend_max; /* max number of pending connections reached */
+ unsigned int sps_max; /* maximum of new sessions per second seen on this server */
+
+ long long cum_sess; /* cumulated number of sessions really sent to this server */
+ long long cum_lbconn; /* cumulated number of sessions directed by load balancing */
+ unsigned long last_sess; /* last session time */
+
+ long long bytes_in; /* number of bytes transferred from the client to the server */
+ long long bytes_out; /* number of bytes transferred from the server to the client */
+
+ long long failed_conns, failed_resp; /* failed connect() and responses */
+ long long cli_aborts, srv_aborts; /* aborted responses during DATA phase due to client or server */
+ long long retries, redispatches; /* retried and redispatched connections */
+ long long failed_secu; /* blocked responses because of security concerns */
+
+ unsigned int q_time, c_time, d_time, t_time; /* sums of conn_time, queue_time, data_time, total_time */
+
+ union {
+ struct {
+ long long rsp[6]; /* http response codes */
+ } http;
+ } p;
+
+ long long failed_checks, failed_hana; /* failed health checks and health analyses */
+ long long down_trans; /* up->down transitions */
+};
+
+#endif /* _TYPES_COUNTERS_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/dns.h
+ * This file provides structures and types for DNS.
+ *
+ * Copyright (C) 2014 Baptiste Assmann <bedis9@gmail.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_DNS_H
+#define _TYPES_DNS_H
+
+/* DNS maximum values */
+/*
+ * Maximum values issued from RFCs:
+ * RFC 1035: https://www.ietf.org/rfc/rfc1035.txt chapter 2.3.4
+ * RFC 2671: http://tools.ietf.org/html/rfc2671
+ */
+#define DNS_MAX_LABEL_SIZE 63
+#define DNS_MAX_NAME_SIZE 255
+#define DNS_MAX_UDP_MESSAGE 4096
+
+/* DNS error messages */
+#define DNS_TOO_LONG_FQDN "hostname too long"
+#define DNS_LABEL_TOO_LONG "one label too long"
+#define DNS_INVALID_CHARACTER "found an invalid character"
+
+/* dns query class */
+#define DNS_RCLASS_IN 1 /* internet class */
+
+/* dns record types (non exhaustive list) */
+#define DNS_RTYPE_A 1 /* IPv4 address */
+#define DNS_RTYPE_CNAME 5 /* canonical name */
+#define DNS_RTYPE_AAAA 28 /* IPv6 address */
+#define DNS_RTYPE_ANY 255 /* all records */
+
+/* dns rcode values */
+#define DNS_RCODE_NO_ERROR 0 /* no error */
+#define DNS_RCODE_NX_DOMAIN 3 /* non existent domain */
+#define DNS_RCODE_REFUSED 5 /* query refused */
+
+/* dns flags masks */
+#define DNS_FLAG_TRUNCATED 0x0200 /* mask for truncated flag */
+#define DNS_FLAG_REPLYCODE 0x000F /* mask for reply code */
+
+/* DNS request or response header structure */
+struct dns_header {
+ unsigned short id:16; /* identifier */
+ unsigned char rd :1; /* recursion desired 0: no, 1: yes */
+ unsigned char tc :1; /* truncation 0:no, 1: yes */
+ unsigned char aa :1; /* authoritative answer 0: no, 1: yes */
+ unsigned char opcode :4; /* operation code */
+ unsigned char qr :1; /* query/response 0: query, 1: response */
+ unsigned char rcode :4; /* response code */
+ unsigned char z :1; /* not used */
+ unsigned char ad :1; /* authentic data */
+ unsigned char cd :1; /* checking disabled */
+ unsigned char ra :1; /* recursion available 0: no, 1: yes */
+ unsigned short qdcount :16; /* question count */
+ unsigned short ancount :16; /* answer count */
+ unsigned short nscount :16; /* authority count */
+ unsigned short arcount :16; /* additional count */
+};
+
+/* short structure to describe a DNS question */
+struct dns_question {
+ unsigned short qtype; /* question type */
+ unsigned short qclass; /* query class */
+};
+
+/*
+ * resolvers section and parameters. It is linked to the name servers, and
+ * servers point to it.
+ * Current resolutions are stored in a FIFO list.
+ */
+struct dns_resolvers {
+ struct list list; /* resolvers list */
+ char *id; /* resolvers unique identifier */
+ struct {
+ const char *file; /* file where the section appears */
+ int line; /* line where the section appears */
+ } conf; /* config information */
+ struct list nameserver_list; /* dns server list */
+ int count_nameservers; /* total number of nameservers in a resolvers section */
+ int resolve_retries; /* number of retries before giving up */
+ struct { /* time to: */
+ int retry; /* wait for a response before retrying */
+ } timeout;
+ struct { /* time to hold current data when */
+ int valid; /* a response is valid */
+ } hold;
+ struct task *t; /* timeout management */
+ struct list curr_resolution; /* current running resolutions */
+ struct eb_root query_ids; /* tree to quickly lookup/retrieve query ids currently in use */
+ /* used by each nameserver, but stored in resolvers since there must */
+ /* be a unique relation between an eb_root and an eb_node (resolution) */
+};
+
+/*
+ * structure describing a name server used during name resolution.
+ * A name server belongs to a resolvers section.
+ */
+struct dns_nameserver {
+ struct list list; /* nameserver chained list */
+ char *id; /* nameserver unique identifier */
+ struct {
+ const char *file; /* file where the section appears */
+ int line; /* line where the section appears */
+ } conf; /* config information */
+ struct dns_resolvers *resolvers;
+ struct dgram_conn *dgram; /* transport layer */
+ struct sockaddr_storage addr; /* IP address */
+ struct { /* counters related to this name server: */
+ long int sent; /* - queries sent */
+ long int valid; /* - valid response */
+ long int update; /* - valid response used to update server's IP */
+ long int cname; /* - CNAME response requiring new resolution */
+ long int cname_error; /* - error when resolving CNAMEs */
+ long int any_err; /* - void response (usually because ANY qtype) */
+ long int nx; /* - NX response */
+ long int timeout; /* - queries which reached timeout */
+ long int refused; /* - queries refused */
+ long int other; /* - other type of response */
+ long int invalid; /* - malformed DNS response */
+ long int too_big; /* - too big response */
+ long int outdated; /* - outdated response (server slower than the other ones) */
+ long int truncated; /* - truncated response */
+ } counters;
+};
+
+/*
+ * resolution structure associated to a single server and used to manage name resolution for
+ * this server.
+ * The only link between the resolution and a nameserver is through the query_id.
+ */
+struct dns_resolution {
+ struct list list; /* resolution list */
+ struct dns_resolvers *resolvers; /* resolvers section associated to this resolution */
+ void *requester; /* owner of this name resolution */
+ int (*requester_cb)(struct dns_resolution *, struct dns_nameserver *, unsigned char *, int);
+ /* requester callback for valid response */
+ int (*requester_error_cb)(struct dns_resolution *, int);
+ /* requester callback, for error management */
+ char *hostname_dn; /* server hostname in domain name label format */
+ int hostname_dn_len; /* server domain name label len */
+ int resolver_family_priority; /* which IP family should the resolver use when both are returned */
+ unsigned int last_resolution; /* time of the latest valid resolution */
+ unsigned int last_sent_packet; /* time of the latest DNS packet sent */
+ unsigned int last_status_change; /* time of the latest DNS resolution status change */
+ int query_id; /* DNS query ID dedicated for this resolution */
+ struct eb32_node qid; /* ebtree query id */
+ int query_type;
+ /* query type to send. By default DNS_RTYPE_A or DNS_RTYPE_AAAA depending on resolver_family_priority */
+ int status; /* status of the resolution being processed RSLV_STATUS_* */
+ int step; /* current resolution step: RSLV_STEP_* */
+ int try; /* current resolution try */
+ int try_cname; /* number of CNAME requests sent */
+ int nb_responses; /* count number of responses received */
+};
+
+/* last resolution status code */
+enum {
+ RSLV_STATUS_NONE = 0, /* no resolution occurred yet */
+ RSLV_STATUS_VALID, /* no error */
+ RSLV_STATUS_INVALID, /* invalid responses */
+ RSLV_STATUS_ERROR, /* error */
+ RSLV_STATUS_NX, /* NXDOMAIN */
+ RSLV_STATUS_REFUSED, /* server refused our query */
+ RSLV_STATUS_TIMEOUT, /* no response from DNS servers */
+ RSLV_STATUS_OTHER, /* other errors */
+};
+
+/* current resolution step */
+enum {
+ RSLV_STEP_NONE = 0, /* nothing happening currently */
+ RSLV_STEP_RUNNING, /* resolution is running */
+};
+
+/* return codes after analyzing a DNS response */
+enum {
+ DNS_RESP_VALID = 0, /* valid response */
+ DNS_RESP_INVALID, /* invalid response (various type of errors can trigger it) */
+ DNS_RESP_ERROR, /* DNS error code */
+ DNS_RESP_NX_DOMAIN, /* resolution unsuccessful */
+ DNS_RESP_REFUSED, /* DNS server refused to answer */
+ DNS_RESP_ANCOUNT_ZERO, /* no answers in the response */
+ DNS_RESP_WRONG_NAME, /* response does not match query name */
+ DNS_RESP_CNAME_ERROR, /* error when resolving a CNAME in an atomic response */
+ DNS_RESP_TIMEOUT, /* DNS server has not answered in time */
+ DNS_RESP_TRUNCATED, /* DNS response is truncated */
+ DNS_RESP_NO_EXPECTED_RECORD, /* No expected records were found in the response */
+};
+
+/* return codes after searching an IP in a DNS response buffer, using a family preference */
+enum {
+ DNS_UPD_NO = 1, /* provided IP was found and preference is matched
+ * OR provided IP found and preference is not matched, but no IP
+ * matching preference was found */
+ DNS_UPD_SRVIP_NOT_FOUND, /* provided IP not found
+ * OR provided IP found and preference is not matched and an IP
+ * matching preference was found */
+ DNS_UPD_CNAME, /* CNAME without any IP provided in the response */
+ DNS_UPD_NAME_ERROR, /* name in the response did not match the query */
+ DNS_UPD_NO_IP_FOUND, /* no IP could be found in the response */
+};
+
+#endif /* _TYPES_DNS_H */
--- /dev/null
+/*
+ * include/types/fd.h
+ * File descriptors states - check src/fd.c for explanations.
+ *
+ * Copyright (C) 2000-2014 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_FD_H
+#define _TYPES_FD_H
+
+#include <common/config.h>
+#include <types/port_range.h>
+
+/* Direction for each FD event update */
+enum {
+ DIR_RD=0,
+ DIR_WR=1,
+};
+
+/* Polling status flags returned in fdtab[].ev :
+ * FD_POLL_IN remains set as long as some data is pending for read.
+ * FD_POLL_OUT remains set as long as the fd accepts to write data.
+ * FD_POLL_ERR and FD_POLL_HUP remain set forever (until processed).
+ */
+#define FD_POLL_IN 0x01
+#define FD_POLL_PRI 0x02
+#define FD_POLL_OUT 0x04
+#define FD_POLL_ERR 0x08
+#define FD_POLL_HUP 0x10
+
+#define FD_POLL_DATA (FD_POLL_IN | FD_POLL_OUT)
+#define FD_POLL_STICKY (FD_POLL_ERR | FD_POLL_HUP)
+
+#define FD_EV_ACTIVE 1U
+#define FD_EV_READY 2U
+#define FD_EV_POLLED 4U
+
+#define FD_EV_STATUS (FD_EV_ACTIVE | FD_EV_POLLED | FD_EV_READY)
+#define FD_EV_STATUS_R (FD_EV_STATUS)
+#define FD_EV_STATUS_W (FD_EV_STATUS << 4)
+
+#define FD_EV_POLLED_R (FD_EV_POLLED)
+#define FD_EV_POLLED_W (FD_EV_POLLED << 4)
+#define FD_EV_POLLED_RW (FD_EV_POLLED_R | FD_EV_POLLED_W)
+
+#define FD_EV_ACTIVE_R (FD_EV_ACTIVE)
+#define FD_EV_ACTIVE_W (FD_EV_ACTIVE << 4)
+#define FD_EV_ACTIVE_RW (FD_EV_ACTIVE_R | FD_EV_ACTIVE_W)
+
+#define FD_EV_READY_R (FD_EV_READY)
+#define FD_EV_READY_W (FD_EV_READY << 4)
+#define FD_EV_READY_RW (FD_EV_READY_R | FD_EV_READY_W)
+
+enum fd_states {
+ FD_ST_DISABLED = 0,
+ FD_ST_MUSTPOLL,
+ FD_ST_STOPPED,
+ FD_ST_ACTIVE,
+ FD_ST_ABORT,
+ FD_ST_POLLED,
+ FD_ST_PAUSED,
+ FD_ST_READY
+};
+
+/* info about one given fd */
+struct fdtab {
+ int (*iocb)(int fd); /* I/O handler, returns FD_WAIT_* */
+ void *owner; /* the connection or listener associated with this fd, NULL if closed */
+ unsigned int cache; /* position+1 in the FD cache. 0=not in cache. */
+ unsigned char state; /* FD state for read and write directions (2*3 bits) */
+ unsigned char ev; /* event seen in return of poll() : FD_POLL_* */
+ unsigned char new:1; /* 1 if this fd has just been created */
+ unsigned char updated:1; /* 1 if this fd is already in the update list */
+ unsigned char linger_risk:1; /* 1 if we must kill lingering before closing */
+ unsigned char cloned:1; /* 1 if a cloned socket, requires EPOLL_CTL_DEL on close */
+};
+
+/* less often used information */
+struct fdinfo {
+ struct port_range *port_range; /* optional port range to bind to */
+ int local_port; /* optional local port */
+};
+
+/*
+ * Poller descriptors.
+ * - <name> is initialized by the poller's register() function, and should not
+ * be allocated, just linked to.
+ * - <pref> is initialized by the poller's register() function. It is set to 0
+ * by default, meaning the poller is disabled. init() should set it to 0 in
+ * case of failure. term() must set it to 0. A generic unoptimized select()
+ * poller should set it to 100.
+ * - <private> is initialized by the poller's init() function, and cleaned by
+ * the term() function.
+ * - clo() should be used to indicate to the poller that <fd> will be closed.
+ * - poll() calls the poller, expiring at <exp>
+ */
+struct poller {
+ void *private; /* any private data for the poller */
+ void REGPRM1 (*clo)(const int fd); /* mark <fd> as closed */
+ void REGPRM2 (*poll)(struct poller *p, int exp); /* the poller itself */
+ int REGPRM1 (*init)(struct poller *p); /* poller initialization */
+ void REGPRM1 (*term)(struct poller *p); /* termination of this poller */
+ int REGPRM1 (*test)(struct poller *p); /* pre-init check of the poller */
+ int REGPRM1 (*fork)(struct poller *p); /* post-fork re-opening */
+ const char *name; /* poller name */
+ int pref; /* try pollers with higher preference first */
+};
+
+extern struct poller cur_poller; /* the current poller */
+extern int nbpollers;
+#define MAX_POLLERS 10
+extern struct poller pollers[MAX_POLLERS]; /* all registered pollers */
+
+extern struct fdtab *fdtab; /* array of all the file descriptors */
+extern struct fdinfo *fdinfo; /* less-often used infos for file descriptors */
+extern int maxfd; /* # of the highest fd + 1 */
+extern int totalconn; /* total # of terminated sessions */
+extern int actconn; /* # of active sessions */
+
+#endif /* _TYPES_FD_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/freq_ctr.h
+ * This file contains structure declarations for frequency counters.
+ *
+ * Copyright (C) 2000-2010 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_FREQ_CTR_H
+#define _TYPES_FREQ_CTR_H
+
+#include <common/config.h>
+
+/* The implicit freq_ctr counter counts a rate of events per second. It is the
+ * preferred form to count rates over a one-second period, because it does not
+ * involve any divide.
+ */
+struct freq_ctr {
+ unsigned int curr_sec; /* start date of current period (seconds from now.tv_sec) */
+ unsigned int curr_ctr; /* cumulated value for current period */
+ unsigned int prev_ctr; /* value for last period */
+};
+
+/* The generic freq_ctr_period counter counts a rate of events per period, where
+ * the period has to be known by the user. The period is measured in ticks and
+ * must be at least 2 ticks long. This form is slightly more CPU intensive than
+ * the per-second form.
+ */
+struct freq_ctr_period {
+ unsigned int curr_tick; /* start date of current period (wrapping ticks) */
+ unsigned int curr_ctr; /* cumulated value for current period */
+ unsigned int prev_ctr; /* value for last period */
+};
+
+#endif /* _TYPES_FREQ_CTR_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/global.h
+ * Global variables.
+ *
+ * Copyright (C) 2000-2012 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_GLOBAL_H
+#define _TYPES_GLOBAL_H
+
+#include <netinet/in.h>
+
+#include <common/config.h>
+#include <common/standard.h>
+#include <import/da.h>
+#include <types/freq_ctr.h>
+#include <types/listener.h>
+#include <types/proxy.h>
+#include <types/task.h>
+
+#ifdef USE_51DEGREES
+#include <import/51d.h>
+#endif
+
+#ifndef UNIX_MAX_PATH
+#define UNIX_MAX_PATH 108
+#endif
+
+/* modes of operation (global.mode) */
+#define MODE_DEBUG 0x01
+#define MODE_DAEMON 0x02
+#define MODE_QUIET 0x04
+#define MODE_CHECK 0x08
+#define MODE_VERBOSE 0x10
+#define MODE_STARTING 0x20
+#define MODE_FOREGROUND 0x40
+#define MODE_SYSTEMD 0x80
+
+/* list of last checks to perform, depending on config options */
+#define LSTCHK_CAP_BIND 0x00000001 /* check that we can bind to any port */
+#define LSTCHK_NETADM 0x00000002 /* check that we have CAP_NET_ADMIN */
+
+/* Global tuning options */
+/* available polling mechanisms */
+#define GTUNE_USE_SELECT (1<<0)
+#define GTUNE_USE_POLL (1<<1)
+#define GTUNE_USE_EPOLL (1<<2)
+#define GTUNE_USE_KQUEUE (1<<3)
+/* platform-specific options */
+#define GTUNE_USE_SPLICE (1<<4)
+#define GTUNE_USE_GAI (1<<5)
+
+/* Access level for a stats socket */
+#define ACCESS_LVL_NONE 0
+#define ACCESS_LVL_USER 1
+#define ACCESS_LVL_OPER 2
+#define ACCESS_LVL_ADMIN 3
+
+/* SSL server verify mode */
+enum {
+ SSL_SERVER_VERIFY_NONE = 0,
+ SSL_SERVER_VERIFY_REQUIRED = 1,
+};
+
+/* FIXME : this will have to be redefined correctly */
+struct global {
+#ifdef USE_OPENSSL
+ char *crt_base; /* base directory path for certificates */
+ char *ca_base; /* base directory path for CAs and CRLs */
+#endif
+ int uid;
+ int gid;
+ int external_check;
+ int nbproc;
+ int maxconn, hardmaxconn;
+ int maxsslconn;
+ int ssl_session_max_cost; /* how many bytes an SSL session may cost */
+ int ssl_handshake_max_cost; /* how many bytes an SSL handshake may use */
+ int ssl_used_frontend; /* non-zero if SSL is used in a frontend */
+ int ssl_used_backend; /* non-zero if SSL is used in a backend */
+#ifdef USE_OPENSSL
+ char *listen_default_ciphers;
+ char *connect_default_ciphers;
+ int listen_default_ssloptions;
+ int connect_default_ssloptions;
+#endif
+ unsigned int ssl_server_verify; /* default verify mode on servers side */
+ struct freq_ctr conn_per_sec;
+ struct freq_ctr sess_per_sec;
+ struct freq_ctr ssl_per_sec;
+ struct freq_ctr ssl_fe_keys_per_sec;
+ struct freq_ctr ssl_be_keys_per_sec;
+ struct freq_ctr comp_bps_in; /* bytes per second, before http compression */
+ struct freq_ctr comp_bps_out; /* bytes per second, after http compression */
+ int cps_lim, cps_max;
+ int sps_lim, sps_max;
+ int ssl_lim, ssl_max;
+ int ssl_fe_keys_max, ssl_be_keys_max;
+ unsigned int shctx_lookups, shctx_misses;
+ int comp_rate_lim; /* HTTP compression rate limit */
+ int maxpipes; /* max # of pipes */
+ int maxsock; /* max # of sockets */
+ int rlimit_nofile; /* default ulimit-n value : 0=unset */
+ int rlimit_memmax_all; /* default all-process memory limit in megs ; 0=unset */
+ int rlimit_memmax; /* default per-process memory limit in megs ; 0=unset */
+ long maxzlibmem; /* max RAM for zlib in bytes */
+ int mode;
+ unsigned int req_count; /* request counter (HTTP or TCP session) for logs and unique_id */
+ int last_checks;
+ int spread_checks;
+ int max_spread_checks;
+ int max_syslog_len;
+ char *chroot;
+ char *pidfile;
+ char *node, *desc; /* node name & description */
+ struct chunk log_tag; /* name for syslog */
+ struct list logsrvs;
+ char *log_send_hostname; /* set hostname in syslog header */
+ char *server_state_base; /* path to a directory where server state files can be found */
+ char *server_state_file; /* path to the file where server states are loaded from */
+ struct {
+ int maxpollevents; /* max number of poll events at once */
+ int maxaccept; /* max number of consecutive accept() */
+ int options; /* various tuning options */
+ int recv_enough; /* how many input bytes at once are "enough" */
+ int bufsize; /* buffer size in bytes, defaults to BUFSIZE */
+ int maxrewrite; /* buffer max rewrite size in bytes, defaults to MAXREWRITE */
+ int reserved_bufs; /* how many buffers can only be allocated for response */
+ int buf_limit; /* if not null, how many total buffers may only be allocated */
+ int client_sndbuf; /* set client sndbuf to this value if not null */
+ int client_rcvbuf; /* set client rcvbuf to this value if not null */
+ int server_sndbuf; /* set server sndbuf to this value if not null */
+ int server_rcvbuf; /* set server rcvbuf to this value if not null */
+ int chksize; /* check buffer size in bytes, defaults to BUFSIZE */
+ int pipesize; /* pipe size in bytes, system defaults if zero */
+ int max_http_hdr; /* max number of HTTP headers, use MAX_HTTP_HDR if zero */
+ int cookie_len; /* max length of cookie captures */
+ int pattern_cache; /* max number of entries in the pattern cache. */
+ int sslcachesize; /* SSL cache size in session, defaults to 20000 */
+#ifdef USE_OPENSSL
+ int sslprivatecache; /* Force to use a private session cache even if nbproc > 1 */
+ unsigned int ssllifetime; /* SSL session lifetime in seconds */
+ unsigned int ssl_max_record; /* SSL max record size */
+ unsigned int ssl_default_dh_param; /* SSL maximum DH parameter size */
+ int ssl_ctx_cache; /* max number of entries in the ssl_ctx cache. */
+#endif
+#ifdef USE_ZLIB
+ int zlibmemlevel; /* zlib memlevel */
+ int zlibwindowsize; /* zlib window size */
+#endif
+ int comp_maxlevel; /* max HTTP compression level */
+ unsigned short idle_timer; /* how long before an empty buffer is considered idle (ms) */
+ } tune;
+ struct {
+ char *prefix; /* path prefix of unix bind socket */
+ struct { /* UNIX socket permissions */
+ uid_t uid; /* -1 to leave unchanged */
+ gid_t gid; /* -1 to leave unchanged */
+ mode_t mode; /* 0 to leave unchanged */
+ } ux;
+ } unix_bind;
+#ifdef USE_CPU_AFFINITY
+ unsigned long cpu_map[LONGBITS]; /* list of CPU masks for the 32/64 first processes */
+#endif
+ struct proxy *stats_fe; /* the frontend holding the stats settings */
+#ifdef USE_DEVICEATLAS
+ struct {
+ void *atlasimgptr;
+ char *jsonpath;
+ char *cookiename;
+ size_t cookienamelen;
+ da_atlas_t atlas;
+ da_evidence_id_t useragentid;
+ da_severity_t loglevel;
+ char separator;
+ unsigned char daset:1;
+ } deviceatlas;
+#endif
+#ifdef USE_51DEGREES
+ struct {
+ char property_separator; /* the separator to use in the response for the values. this is taken from 51degrees-property-separator from config. */
+ struct list property_names; /* list of properties to load into the data set. this is taken from 51degrees-property-name-list from config. */
+ char *data_file_path;
+ int header_count; /* number of HTTP headers related to device detection. */
+ struct chunk *header_names; /* array of HTTP header names. */
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+ fiftyoneDegreesDataSet data_set; /* data set used with the pattern detection method. */
+ fiftyoneDegreesWorksetPool *pool; /* pool of worksets to avoid creating a new one for each request. */
+#endif
+#ifdef FIFTYONEDEGREES_H_TRIE_INCLUDED
+ int32_t *header_offsets; /* offsets to the HTTP header name string. */
+ fiftyoneDegreesDeviceOffsets device_offsets; /* Memory used for device offsets. */
+#endif
+ int cache_size;
+ } _51degrees;
+#endif
+};
+
+extern struct global global;
+extern int pid; /* current process id */
+extern int relative_pid; /* process id starting at 1 */
+extern int actconn; /* # of active sessions */
+extern int listeners;
+extern int jobs; /* # of active jobs */
+extern struct chunk trash;
+extern char *swap_buffer;
+extern int nb_oldpids; /* contains the number of old pids found */
+extern const int zero;
+extern const int one;
+extern const struct linger nolinger;
+extern int stopping; /* non zero means stopping in progress */
+extern char hostname[MAX_HOSTNAME_LEN];
+extern char localpeer[MAX_HOSTNAME_LEN];
+extern struct list global_listener_queue; /* list of the temporarily limited listeners */
+extern struct task *global_listener_queue_task;
+extern unsigned int warned; /* bitfield of a few warnings to emit just once */
+extern struct list dns_resolvers;
+
+/* bit values to go with "warned" above */
+#define WARN_BLOCK_DEPRECATED 0x00000001
+/* unassigned : 0x00000002 */
+#define WARN_REDISPATCH_DEPRECATED 0x00000004
+#define WARN_CLITO_DEPRECATED 0x00000008
+#define WARN_SRVTO_DEPRECATED 0x00000010
+#define WARN_CONTO_DEPRECATED 0x00000020
+
+/* to be used with warned and WARN_* */
+static inline int already_warned(unsigned int warning)
+{
+ if (warned & warning)
+ return 1;
+ warned |= warning;
+ return 0;
+}
+
+#endif /* _TYPES_GLOBAL_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ include/types/hdr_idx.h
+ This file defines everything related to fast header indexation.
+
+ Copyright (C) 2000-2006 Willy Tarreau - w@1wt.eu
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation, version 2.1
+ exclusively.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+*/
+
+
+/*
+ * The type of structure described here is a finite linked list used to
+ * reference small number of objects of small size. This is typically used
+ * to index HTTP headers within one request or response, in order to be able
+ * to add, remove, modify and check them in an efficient way. The overhead is
+ * very low : 32 bits are used per list element. This is enough to reference
+ * 32k headers of at most 64kB each, with one bit to indicate if the header
+ * is terminated by 1 or 2 chars. It may also evolve towards something like
+ * 1k headers of at most 64B for the name and 32kB of data + CR/CRLF.
+ *
+ * A future evolution of this concept may allow for fast header manipulation
+ * without data movement through the use of vectors. This is not yet possible
+ * in this version, whose goal is only to avoid parsing whole lines for each
+ * consultation.
+ *
+ */
+
+
+#ifndef _TYPES_HDR_IDX_H
+#define _TYPES_HDR_IDX_H
+
+/*
+ * This describes one element of the hdr_idx array.
+ * It's a tiny linked list of at most 32k 32bit elements. The first one has a
+ * special meaning, it's used as the head of the list and cannot be removed.
+ * That way, we know that 'next==0' is not possible so we use it to indicate
+ * an end of list. Also, [0]->next always designates the head of the list. The
+ * first allocatable element is at 1. By convention, [0]->len indicates how
+ * many chars should be skipped in the original buffer before finding the first
+ * header.
+ *
+ */
+
+struct hdr_idx_elem {
+ unsigned len :16; /* length of this header not counting CRLF. 0=unused entry. */
+ unsigned cr : 1; /* CR present (1=CRLF, 0=LF). Total line size=len+cr+1. */
+ unsigned next :15; /* offset of next header if len>0. 0=end of list. */
+};
+
+/*
+ * This structure provides necessary information to store, find, remove
+ * index entries from a list. This list cannot reference more than 32k
+ * elements of 64k each.
+ */
+struct hdr_idx {
+ struct hdr_idx_elem *v; /* the array itself */
+ short size; /* size of the array including the head */
+ short used; /* # of elements really used (1..size) */
+ short last; /* length of the allocated area (1..size) */
+ signed short tail; /* last used element, 0..size-1 */
+};
+
+
+
+#endif /* _TYPES_HDR_IDX_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+#ifndef _TYPES_HLUA_H
+#define _TYPES_HLUA_H
+
+#ifdef USE_LUA
+
+#include <lua.h>
+#include <lauxlib.h>
+
+#include <types/proxy.h>
+#include <types/server.h>
+
+#define CLASS_CORE "Core"
+#define CLASS_TXN "TXN"
+#define CLASS_FETCHES "Fetches"
+#define CLASS_CONVERTERS "Converters"
+#define CLASS_SOCKET "Socket"
+#define CLASS_CHANNEL "Channel"
+#define CLASS_HTTP "HTTP"
+#define CLASS_MAP "Map"
+#define CLASS_APPLET_TCP "AppletTCP"
+#define CLASS_APPLET_HTTP "AppletHTTP"
+
+struct stream;
+
+#define HLUA_RUN 0x00000001
+#define HLUA_CTRLYIELD 0x00000002
+#define HLUA_WAKERESWR 0x00000004
+#define HLUA_WAKEREQWR 0x00000008
+#define HLUA_EXIT 0x00000010
+#define HLUA_MUST_GC 0x00000020
+
+#define HLUA_F_AS_STRING 0x01
+#define HLUA_F_MAY_USE_HTTP 0x02
+
+enum hlua_exec {
+ HLUA_E_OK = 0,
+ HLUA_E_AGAIN, /* LUA yield, must resume the stack execution later, when
+ the associated task is woken. */
+ HLUA_E_ERRMSG, /* LUA stack execution failed with a string error message
+ in the top of stack. */
+ HLUA_E_ERR, /* LUA stack execution failed without error message. */
+};
+
+struct hlua {
+ lua_State *T; /* The LUA stack. */
+ int Tref; /* The reference of the stack in coroutine case.
+ -1 for the main lua stack. */
+ int Mref; /* The reference of the memory context in coroutine case.
+ -1 if the memory context is not used. */
+ int nargs; /* The number of arguments in the stack at the start of execution. */
+ unsigned int flags; /* The current execution flags. */
+ int wake_time; /* The time at which Lua wants to be woken, or before. */
+ unsigned int max_time; /* The max amount of execution time for a Lua process, in ms. */
+ unsigned int start_time; /* The time (ms) at which Lua started its last execution. */
+ unsigned int run_time; /* Lua total execution time in ms. */
+ struct task *task; /* The task associated with the lua stack execution.
+ We must wake this task to continue the task execution */
+ struct list com; /* The list head of the signals attached to this task. */
+ struct ebpt_node node;
+};
+
+struct hlua_com {
+ struct list purge_me; /* Part of the list of signals to be purged if the
+ Lua execution stack crashes. */
+ struct list wake_me; /* Part of list of signals to be targeted if an
+ event occurs. */
+ struct task *task; /* The task to wake if an event occurs. */
+};
+
+/* This is part of the list containing references to functions
+ * called at initialisation time.
+ */
+struct hlua_init_function {
+ struct list l;
+ int function_ref;
+};
+
+/* This struct contains the lua data used to bind
+ * Lua function on HAProxy hook like sample-fetches
+ * or actions.
+ */
+struct hlua_function {
+ char *name;
+ int function_ref;
+};
+
+/* This struct is used with the structs:
+ * - http_req_rule
+ * - http_res_rule
+ * - tcp_rule
+ * It contains the lua execution configuration.
+ */
+struct hlua_rule {
+ struct hlua_function fcn;
+ char **args;
+};
+
+/* This struct contains the pointers provided to most
+ * internal HAProxy calls during the processing of
+ * rules, converters and sample-fetches. This struct is
+ * associated with the lua object called "TXN".
+ */
+struct hlua_txn {
+ struct stream *s;
+ struct proxy *p;
+ int dir; /* SMP_OPT_DIR_{REQ,RES} */
+};
+
+/* This struct contains the applet context. */
+struct hlua_appctx {
+ struct appctx *appctx;
+ luaL_Buffer b; /* buffer used to prepare strings. */
+ struct hlua_txn htxn;
+};
+
+/* This struct is used with sample fetches and sample converters. */
+struct hlua_smp {
+ struct stream *s;
+ struct proxy *p;
+ unsigned int flags; /* LUA_F_OPT_* */
+ int dir; /* SMP_OPT_DIR_{REQ,RES} */
+};
+
+/* This struct contains data used with sleep functions. */
+struct hlua_sleep {
+ struct task *task; /* task associated with sleep. */
+ struct list com; /* list of signals to wake at the end of sleep. */
+ unsigned int wakeup_ms; /* wakeup time, in ms. */
+};
+
+/* This struct is used to create coprocesses doing TCP or
+ * SSL I/O. It uses a fake stream.
+ */
+struct hlua_socket {
+ struct stream *s; /* Stream used for socket I/O. */
+ luaL_Buffer b; /* buffer used to prepare strings. */
+};
+
+#else /* USE_LUA */
+
+/* Empty struct for compilation compatibility */
+struct hlua { };
+struct hlua_socket { };
+struct hlua_rule { };
+
+#endif /* USE_LUA */
+
+#endif /* _TYPES_HLUA_H */
--- /dev/null
+/*
+ * include/types/lb_chash.h
+ * Types for Consistent Hash LB algorithm.
+ *
+ * Copyright (C) 2000-2009 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_LB_CHASH_H
+#define _TYPES_LB_CHASH_H
+
+#include <common/config.h>
+#include <ebtree.h>
+#include <eb32tree.h>
+
+struct lb_chash {
+ struct eb_root act; /* weighted chash entries of active servers */
+ struct eb_root bck; /* weighted chash entries of backup servers */
+ struct eb32_node *last; /* last node found in case of round robin (or NULL) */
+};
+
+#endif /* _TYPES_LB_CHASH_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/lb_fas.h
+ * Types for First Available Server load balancing algorithm.
+ *
+ * Copyright (C) 2000-2012 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_LB_FAS_H
+#define _TYPES_LB_FAS_H
+
+#include <common/config.h>
+#include <ebtree.h>
+
+struct lb_fas {
+ struct eb_root act; /* weighted least conns on the active servers */
+ struct eb_root bck; /* weighted least conns on the backup servers */
+};
+
+#endif /* _TYPES_LB_FAS_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/lb_fwlc.h
+ * Types for Fast Weighted Least Connection load balancing algorithm.
+ *
+ * Copyright (C) 2000-2009 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_LB_FWLC_H
+#define _TYPES_LB_FWLC_H
+
+#include <common/config.h>
+#include <ebtree.h>
+
+struct lb_fwlc {
+ struct eb_root act; /* weighted least conns on the active servers */
+ struct eb_root bck; /* weighted least conns on the backup servers */
+};
+
+#endif /* _TYPES_LB_FWLC_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/lb_fwrr.h
+ * Types for Fast Weighted Round Robin load balancing algorithm.
+ *
+ * Copyright (C) 2000-2009 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_LB_FWRR_H
+#define _TYPES_LB_FWRR_H
+
+#include <common/config.h>
+#include <ebtree.h>
+
+/* This structure is used to apply fast weighted round robin on a server group */
+struct fwrr_group {
+ struct eb_root curr; /* tree for servers in "current" time range */
+ struct eb_root t0, t1; /* "init" and "next" servers */
+ struct eb_root *init; /* servers waiting to be placed */
+ struct eb_root *next; /* servers to be placed at next run */
+ int curr_pos; /* current position in the tree */
+ int curr_weight; /* total weight of the current time range */
+ int next_weight; /* total weight of the next time range */
+};
+
+struct lb_fwrr {
+ struct fwrr_group act; /* weighted round robin on the active servers */
+ struct fwrr_group bck; /* weighted round robin on the backup servers */
+};
+
+#endif /* _TYPES_LB_FWRR_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/lb_map.h
+ * Types for map-based load-balancing (RR and HASH)
+ *
+ * Copyright (C) 2000-2009 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_LB_MAP_H
+#define _TYPES_LB_MAP_H
+
+#include <common/config.h>
+#include <types/server.h>
+
+/* values for map.state */
+#define LB_MAP_RECALC (1 << 0)
+
+struct lb_map {
+ struct server **srv; /* the server map used to apply weights */
+ int rr_idx; /* next server to be elected in round robin mode */
+ int state; /* LB_MAP_RECALC */
+};
+
+#endif /* _TYPES_LB_MAP_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/listener.h
+ * This file defines the structures needed to manage listeners.
+ *
+ * Copyright (C) 2000-2012 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_LISTENER_H
+#define _TYPES_LISTENER_H
+
+#include <sys/types.h>
+#include <sys/socket.h>
+
+#ifdef USE_OPENSSL
+#include <openssl/ssl.h>
+#endif
+
+#include <common/config.h>
+#include <common/mini-clist.h>
+#include <types/obj_type.h>
+#include <eb32tree.h>
+
+/* Some pointer types reference below */
+struct task;
+struct protocol;
+struct xprt_ops;
+struct proxy;
+struct licounters;
+
+/* listener state */
+enum li_state {
+ LI_NEW = 0, /* not initialized yet */
+ LI_INIT, /* all parameters filled in, but not assigned yet */
+ LI_ASSIGNED, /* assigned to the protocol, but not listening yet */
+ LI_PAUSED, /* listener was paused, it's bound but not listening */
+ LI_LISTEN, /* started, listening but not enabled */
+ LI_READY, /* started, listening and enabled */
+ LI_FULL, /* reached its connection limit */
+ LI_LIMITED, /* transient state: limits have been reached, listener is queued */
+} __attribute__((packed));
+
+/* Listener transitions
+ * calloc() set() add_listener() bind()
+ * -------> NEW ----> INIT ----------> ASSIGNED -----> LISTEN
+ * <------- <---- <---------- <-----
+ * free() bzero() del_listener() unbind()
+ *
+ * The file descriptor is valid only during these three states :
+ *
+ * disable()
+ * LISTEN <------------ READY
+ * A| ------------> |A
+ * || !max & enable() ||
+ * || ||
+ * || max ||
+ * || max & enable() V| !max
+ * |+---------------> FULL
+ * +-----------------
+ * disable()
+ *
+ * The LIMITED state may be used when a limit has been detected just before
+ * using a listener. In this case, the listener MUST be queued into the
+ * appropriate wait queue (either the proxy's or the global one). It may be
+ * set back to the READY state at any instant and for any reason, so one must
+ * not rely on this state.
+ */
+
+/* listener socket options */
+#define LI_O_NONE 0x0000
+#define LI_O_NOLINGER 0x0001 /* disable linger on this socket */
+#define LI_O_FOREIGN 0x0002 /* permit listening on foreign addresses */
+#define LI_O_NOQUICKACK 0x0004 /* disable quick ack of immediate data (linux) */
+#define LI_O_DEF_ACCEPT 0x0008 /* wait up to 1 second for data before accepting */
+#define LI_O_TCP_RULES 0x0010 /* run TCP rules checks on the incoming connection */
+#define LI_O_CHK_MONNET 0x0020 /* check the source against a monitor-net rule */
+#define LI_O_ACC_PROXY 0x0040 /* find the proxied address in the first request line */
+#define LI_O_UNLIMITED 0x0080 /* listener not subject to global limits (peers & stats socket) */
+#define LI_O_TCP_FO 0x0100 /* enable TCP Fast Open (linux >= 3.7) */
+#define LI_O_V6ONLY 0x0200 /* bind to IPv6 only on Linux >= 2.4.21 */
+#define LI_O_V4V6 0x0400 /* bind to IPv4/IPv6 on Linux >= 2.4.21 */
+
+/* Note: if a listener uses LI_O_UNLIMITED, it is highly recommended that it adds its own
+ * maxconn setting to the global.maxsock value so that its resources are reserved.
+ */
+
+#ifdef USE_OPENSSL
+/* bind_conf ssl options */
+#define BC_SSL_O_NONE 0x0000
+#define BC_SSL_O_NO_SSLV3 0x0001 /* disable SSLv3 */
+#define BC_SSL_O_NO_TLSV10 0x0002 /* disable TLSv10 */
+#define BC_SSL_O_NO_TLSV11 0x0004 /* disable TLSv11 */
+#define BC_SSL_O_NO_TLSV12 0x0008 /* disable TLSv12 */
+/* 0x000F reserved for 'no' protocol version options */
+#define BC_SSL_O_USE_SSLV3 0x0010 /* force SSLv3 */
+#define BC_SSL_O_USE_TLSV10 0x0020 /* force TLSv10 */
+#define BC_SSL_O_USE_TLSV11 0x0040 /* force TLSv11 */
+#define BC_SSL_O_USE_TLSV12 0x0080 /* force TLSv12 */
+/* 0x00F0 reserved for 'force' protocol version options */
+#define BC_SSL_O_NO_TLS_TICKETS 0x0100 /* disable session resumption tickets */
+#endif
+
+/* "bind" line settings */
+struct bind_conf {
+#ifdef USE_OPENSSL
+ char *ca_file; /* CAfile to use on verify */
+ unsigned long long ca_ignerr; /* ignored verify errors in handshake if depth > 0 */
+ unsigned long long crt_ignerr; /* ignored verify errors in handshake if depth == 0 */
+ char *ciphers; /* cipher suite to use if non-null */
+ char *crl_file; /* CRLfile to use on verify */
+ char *ecdhe; /* named curve to use for ECDHE */
+ int ssl_options; /* ssl options */
+ int verify; /* verify method (set of SSL_VERIFY_* flags) */
+ SSL_CTX *default_ctx; /* SSL context of first/default certificate */
+ char *npn_str; /* NPN protocol string */
+ int npn_len; /* NPN protocol string length */
+ char *alpn_str; /* ALPN protocol string */
+ int alpn_len; /* ALPN protocol string length */
+ int strict_sni; /* refuse negotiation if sni doesn't match a certificate */
+ struct eb_root sni_ctx; /* sni_ctx tree of all known certs full-names sorted by name */
+ struct eb_root sni_w_ctx; /* sni_ctx tree of all known certs wildcards sorted by name */
+ struct tls_keys_ref *keys_ref; /* TLS ticket keys reference */
+
+ char *ca_sign_file; /* CAFile used to generate and sign server certificates */
+ char *ca_sign_pass; /* CAKey passphrase */
+
+ X509 *ca_sign_cert; /* CA certificate referenced by ca_file */
+ EVP_PKEY *ca_sign_pkey; /* CA private key referenced by ca_key */
+#endif
+ int is_ssl; /* SSL is required for these listeners */
+ int generate_certs; /* 1 if generate-certificates option is set, else 0 */
+ unsigned long bind_proc; /* bitmask of processes allowed to use these listeners */
+ struct { /* UNIX socket permissions */
+ uid_t uid; /* -1 to leave unchanged */
+ gid_t gid; /* -1 to leave unchanged */
+ mode_t mode; /* 0 to leave unchanged */
+ } ux;
+ int level; /* stats access level (ACCESS_LVL_*) */
+ struct list by_fe; /* next binding for the same frontend, or NULL */
+ struct list listeners; /* list of listeners using this bind config */
+ char *arg; /* argument passed to "bind" for better error reporting */
+ char *file; /* file where the section appears */
+ int line; /* line where the section appears */
+};
+
+/* The listener will be directly referenced by the fdtab[] which holds its
+ * socket. The listener provides the protocol-specific accept() function to
+ * the fdtab.
+ */
+struct listener {
+ enum obj_type obj_type; /* object type = OBJ_TYPE_LISTENER */
+ enum li_state state; /* state: NEW, INIT, ASSIGNED, LISTEN, READY, FULL */
+ short int nice; /* nice value to assign to the instantiated tasks */
+ int fd; /* the listen socket */
+ char *name; /* listener's name */
+ int luid; /* listener universally unique ID, used for SNMP */
+ int options; /* socket options : LI_O_* */
+ struct licounters *counters; /* statistics counters */
+ struct protocol *proto; /* protocol this listener belongs to */
+ struct xprt_ops *xprt; /* transport-layer operations for this socket */
+ int nbconn; /* current number of connections on this listener */
+ int maxconn; /* maximum connections allowed on this listener */
+ unsigned int backlog; /* if set, listen backlog */
+ unsigned int maxaccept; /* if set, max number of connections accepted at once */
+ struct list proto_list; /* list in the protocol header */
+ int (*accept)(struct listener *l, int fd, struct sockaddr_storage *addr); /* upper layer's accept() */
+ struct task * (*handler)(struct task *t); /* protocol handler. It is a task */
+ struct proxy *frontend; /* the frontend this listener belongs to, or NULL */
+ enum obj_type *default_target; /* default target to use for accepted sessions or NULL */
+ struct list wait_queue; /* link element to make the listener wait for something (LI_LIMITED) */
+ unsigned int analysers; /* bitmap of required protocol analysers */
+ int maxseg; /* for TCP, advertised MSS */
+ int tcp_ut; /* for TCP, user timeout */
+ char *interface; /* interface name or NULL */
+
+ const struct netns_entry *netns; /* network namespace of the listener*/
+
+ struct list by_fe; /* chaining in frontend's list of listeners */
+ struct list by_bind; /* chaining in bind_conf's list of listeners */
+ struct bind_conf *bind_conf; /* "bind" line settings, include SSL settings among other things */
+
+ /* warning: this struct is huge, keep it at the bottom */
+ struct sockaddr_storage addr; /* the address we listen to */
+ struct {
+ struct eb32_node id; /* place in the tree of used IDs */
+ } conf; /* config information */
+};
+
+/* Descriptor for a "bind" keyword. The ->parse() function returns 0 in case of
+ * success, or a combination of ERR_* flags if an error is encountered. The
+ * function pointer can be NULL if not implemented. The function also has an
+ * access to the current "bind" config line. The ->skip value tells the parser
+ * how many words have to be skipped after the keyword.
+ */
+struct bind_kw {
+ const char *kw;
+ int (*parse)(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err);
+ int skip; /* nb of args to skip */
+};
+
+/*
+ * A keyword list. It is a NULL-terminated array of keywords. It embeds a
+ * struct list in order to be linked to other lists, allowing it to easily
+ * be declared where it is needed, and linked without duplicating data nor
+ * allocating memory. It is also possible to indicate a scope for the keywords.
+ */
+struct bind_kw_list {
+ const char *scope;
+ struct list list;
+ struct bind_kw kw[VAR_ARRAY];
+};
+
+
+#endif /* _TYPES_LISTENER_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ include/types/log.h
+ This file contains definitions of log-related structures and macros.
+
+ Copyright (C) 2000-2006 Willy Tarreau - w@1wt.eu
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation, version 2.1
+ exclusively.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+*/
+
+#ifndef _TYPES_LOG_H
+#define _TYPES_LOG_H
+
+#include <sys/socket.h>
+#include <sys/un.h>
+#include <netinet/in.h>
+#include <common/config.h>
+#include <common/mini-clist.h>
+
+#define NB_LOG_FACILITIES 24
+#define NB_LOG_LEVELS 8
+#define NB_MSG_IOVEC_ELEMENTS 8
+#define SYSLOG_PORT 514
+#define UNIQUEID_LEN 128
+
+/* The array containing the names of the log levels. */
+extern const char *log_levels[];
+
+/* enum for log format */
+enum {
+ LOG_FORMAT_RFC3164 = 0,
+ LOG_FORMAT_RFC5424,
+ LOG_FORMATS, /* number of supported log formats, must always be last */
+};
+
+/* lists of fields that can be logged */
+enum {
+
+ LOG_FMT_TEXT = 0, /* raw text */
+ LOG_FMT_EXPR, /* sample expression */
+ LOG_FMT_SEPARATOR, /* separator replaced by one space */
+ LOG_FMT_VARIABLE,
+
+ /* information fields */
+ LOG_FMT_GLOBAL,
+ LOG_FMT_CLIENTIP,
+ LOG_FMT_CLIENTPORT,
+ LOG_FMT_BACKENDIP,
+ LOG_FMT_BACKENDPORT,
+ LOG_FMT_FRONTENDIP,
+ LOG_FMT_FRONTENDPORT,
+ LOG_FMT_SERVERPORT,
+ LOG_FMT_SERVERIP,
+ LOG_FMT_COUNTER,
+ LOG_FMT_LOGCNT,
+ LOG_FMT_PID,
+ LOG_FMT_DATE,
+ LOG_FMT_DATEGMT,
+ LOG_FMT_DATELOCAL,
+ LOG_FMT_TS,
+ LOG_FMT_MS,
+ LOG_FMT_FRONTEND,
+ LOG_FMT_FRONTEND_XPRT,
+ LOG_FMT_BACKEND,
+ LOG_FMT_SERVER,
+ LOG_FMT_BYTES,
+ LOG_FMT_BYTES_UP,
+ LOG_FMT_T,
+ LOG_FMT_TQ,
+ LOG_FMT_TW,
+ LOG_FMT_TC,
+ LOG_FMT_TR,
+ LOG_FMT_TT,
+ LOG_FMT_STATUS,
+ LOG_FMT_CCLIENT,
+ LOG_FMT_CSERVER,
+ LOG_FMT_TERMSTATE,
+ LOG_FMT_TERMSTATE_CK,
+ LOG_FMT_CONN,
+ LOG_FMT_ACTCONN,
+ LOG_FMT_FECONN,
+ LOG_FMT_BECONN,
+ LOG_FMT_SRVCONN,
+ LOG_FMT_RETRIES,
+ LOG_FMT_QUEUES,
+ LOG_FMT_SRVQUEUE,
+ LOG_FMT_BCKQUEUE,
+ LOG_FMT_HDRREQUEST,
+ LOG_FMT_HDRRESPONS,
+ LOG_FMT_HDRREQUESTLIST,
+ LOG_FMT_HDRRESPONSLIST,
+ LOG_FMT_REQ,
+ LOG_FMT_HTTP_METHOD,
+ LOG_FMT_HTTP_URI,
+ LOG_FMT_HTTP_PATH,
+ LOG_FMT_HTTP_QUERY,
+ LOG_FMT_HTTP_VERSION,
+ LOG_FMT_HOSTNAME,
+ LOG_FMT_UNIQUEID,
+ LOG_FMT_SSL_CIPHER,
+ LOG_FMT_SSL_VERSION,
+};
+
+/* enum for parse_logformat_string */
+enum {
+ LF_INIT = 0, // before first character
+ LF_TEXT, // normal text
+ LF_SEPARATOR, // a single separator
+ LF_VAR, // variable name, after '%' or '%{..}'
+ LF_STARTVAR, // % in text
+ LF_STARG, // after '%{' and before '}'
+ LF_EDARG, // '}' after '%{'
+ LF_STEXPR, // after '%[' or '%{..}[' and before ']'
+ LF_EDEXPR, // ']' after '%['
+ LF_END, // \0 found
+};
+
+
+struct logformat_node {
+ struct list list;
+ int type; // LOG_FMT_*
+ int options; // LOG_OPT_*
+ char *arg; // text for LOG_FMT_TEXT, arg for others
+ void *expr; // for use with LOG_FMT_EXPR
+};
+
+#define LOG_OPT_HEXA 0x00000001
+#define LOG_OPT_MANDATORY 0x00000002
+#define LOG_OPT_QUOTE 0x00000004
+#define LOG_OPT_REQ_CAP 0x00000008
+#define LOG_OPT_RES_CAP 0x00000010
+#define LOG_OPT_HTTP 0x00000020
+
+
+/* Fields that need to be extracted from the incoming connection or request for
+ * logging or for sending specific header information. They're set in px->to_log
+ * and appear as flags in session->logs.logwait, which are removed once the
+ * required information has been collected.
+ */
+#define LW_INIT 1 /* anything */
+#define LW_CLIP 2 /* CLient IP */
+#define LW_SVIP 4 /* SerVer IP */
+#define LW_SVID 8 /* server ID */
+#define LW_REQ 16 /* http REQuest */
+#define LW_RESP 32 /* http RESPonse */
+#define LW_BYTES 256 /* bytes read from server */
+#define LW_COOKIE 512 /* captured cookie */
+#define LW_REQHDR 1024 /* request header(s) */
+#define LW_RSPHDR 2048 /* response header(s) */
+#define LW_BCKIP 4096 /* backend IP */
+#define LW_FRTIP 8192 /* frontend IP */
+#define LW_XPRT 16384 /* transport layer information (eg: SSL) */
+
+struct logsrv {
+ struct list list;
+ struct sockaddr_storage addr;
+ int format;
+ int facility;
+ int level;
+ int minlvl;
+ int maxlen;
+};
+
+#endif /* _TYPES_LOG_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/mailer.h
+ * This file defines everything related to mailer.
+ *
+ * Copyright 2015 Horms Solutions Ltd., Simon Horman <horms@verge.net.au>
+ *
+ * Based on include/types/peers.h
+ *
+ * Copyright 2010 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_MAILERS_H
+#define _TYPES_MAILERS_H
+
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <netinet/in.h>
+#include <arpa/inet.h>
+
+struct mailer {
+ char *id;
+ struct mailers *mailers;
+ struct {
+ const char *file; /* file where the section appears */
+ int line; /* line where the section appears */
+ } conf; /* config information */
+ struct sockaddr_storage addr; /* SMTP server address */
+ struct protocol *proto; /* SMTP server address's protocol */
+ struct xprt_ops *xprt; /* SMTP server socket operations at transport layer */
+ void *sock_init_arg; /* socket operations' opaque init argument if needed */
+ struct mailer *next; /* next mailer in the list */
+};
+
+
+struct mailers {
+ char *id; /* mailers section name */
+ struct mailer *mailer_list; /* mailers in this mailers section */
+ struct {
+ const char *file; /* file where the section appears */
+ int line; /* line where the section appears */
+ } conf; /* config information */
+ struct mailers *next; /* next mailers section */
+ int count; /* total number of mailers in this mailers section */
+ int users; /* number of users of this mailers section */
+};
+
+
+extern struct mailers *mailers;
+
+#endif /* _TYPES_MAILERS_H */
+
--- /dev/null
+/*
+ * include/types/map.h
+ * This file provides structures and types for MAPs.
+ *
+ * Copyright (C) 2000-2012 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_MAP_H
+#define _TYPES_MAP_H
+
+#include <types/pattern.h>
+#include <types/sample.h>
+
+/* These structs contain a string representation of the map. They are
+ * sorted by file, which permits hot-adding and hot-removing entries.
+ *
+ * "maps" is the list head. This list contains all the map file name identifiers.
+ */
+extern struct list maps;
+
+struct map_descriptor {
+ struct list list; /* used for listing */
+ struct sample_conv *conv; /* original converter descriptor */
+ struct pattern_head pat; /* the pattern matching associated to the map */
+ int do_free; /* set if <pat> is the original pat and must be freed */
+};
+
+#endif /* _TYPES_MAP_H */
--- /dev/null
+/*
+ * include/types/obj_type.h
+ * This file declares some object types for use in various structures.
+ *
+ * Copyright (C) 2000-2013 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_OBJ_TYPE_H
+#define _TYPES_OBJ_TYPE_H
+
+/* The principle is to be able to change the type of a pointer by pointing
+ * it directly to an object type. The object type indicates the format of the
+ * structure holding the type, and this is used to retrieve the pointer to the
+ * beginning of the structure. Doing so saves us from having to maintain both
+ * a pointer and a type for elements such as connections which can point to
+ * various types of objects.
+ */
+
+/* object types : these ones take the same space as a char */
+enum obj_type {
+ OBJ_TYPE_NONE = 0, /* pointer is NULL by definition */
+ OBJ_TYPE_LISTENER, /* object is a struct listener */
+ OBJ_TYPE_PROXY, /* object is a struct proxy */
+ OBJ_TYPE_SERVER, /* object is a struct server */
+ OBJ_TYPE_APPLET, /* object is a struct applet */
+ OBJ_TYPE_APPCTX, /* object is a struct appctx */
+ OBJ_TYPE_CONN, /* object is a struct connection */
+ OBJ_TYPE_ENTRIES /* last one : number of entries */
+} __attribute__((packed)) ;
+
+#endif /* _TYPES_OBJ_TYPE_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/pattern.h
+ * This file provides structures and types for ACLs.
+ *
+ * Copyright (C) 2000-2012 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_PATTERN_H
+#define _TYPES_PATTERN_H
+
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/mini-clist.h>
+#include <common/regex.h>
+
+#include <types/sample.h>
+
+#include <ebmbtree.h>
+
+/* Pattern matching function result.
+ *
+ * We're using a 3-state matching system to match samples against patterns in
+ * ACLs :
+ * - PASS : at least one pattern already matches
+ * - MISS : some data is missing to decide if some rules may finally match.
+ * - FAIL : no pattern may ever match
+ *
+ * We assign values 0, 1 and 3 to FAIL, MISS and PASS respectively, so that we
+ * can make use of standard arithmetic for the truth tables below :
+ *
+ * x | !x x&y | F(0) | M(1) | P(3) x|y | F(0) | M(1) | P(3)
+ * ------+----- -----+------+------+----- -----+------+------+-----
+ * F(0) | P(3) F(0)| F(0) | F(0) | F(0) F(0)| F(0) | M(1) | P(3)
+ * M(1) | M(1) M(1)| F(0) | M(1) | M(1) M(1)| M(1) | M(1) | P(3)
+ * P(3) | F(0) P(3)| F(0) | M(1) | P(3) P(3)| P(3) | P(3) | P(3)
+ *
+ * neg(x) = (3 >> x) and(x,y) = (x & y) or(x,y) = (x | y)
+ *
+ * For efficiency, the ACL return flags are directly mapped from the pattern
+ * match flags. A pattern can't return "MISS" since it's always presented an
+ * existing sample. So that leaves us with only two possible values :
+ * NOMATCH = 0
+ * MATCH = 3
+ */
+enum pat_match_res {
+ PAT_NOMATCH = 0, /* sample didn't match any pattern */
+ PAT_MATCH = 3, /* sample matched at least one pattern */
+};
+
+/* possible flags for patterns matching or parsing */
+enum {
+ PAT_MF_IGNORE_CASE = 1 << 0, /* ignore case */
+ PAT_MF_NO_DNS = 1 << 1, /* don't perform any DNS requests */
+};
+
+/* possible flags for patterns storage */
+enum {
+ PAT_SF_TREE = 1 << 0, /* some patterns are arranged in a tree */
+};
+
+/* ACL match methods */
+enum {
+ PAT_MATCH_FOUND, /* just ensure that fetch found the sample */
+ PAT_MATCH_BOOL, /* match fetch's integer value as boolean */
+ PAT_MATCH_INT, /* unsigned integer (int) */
+ PAT_MATCH_IP, /* IPv4/IPv6 address (IP) */
+ PAT_MATCH_BIN, /* hex string (bin) */
+ PAT_MATCH_LEN, /* string length (str -> int) */
+ PAT_MATCH_STR, /* exact string match (str) */
+ PAT_MATCH_BEG, /* beginning of string (str) */
+ PAT_MATCH_SUB, /* substring (str) */
+ PAT_MATCH_DIR, /* directory-like sub-string (str) */
+ PAT_MATCH_DOM, /* domain-like sub-string (str) */
+ PAT_MATCH_END, /* end of string (str) */
+ PAT_MATCH_REG, /* regex (str -> reg) */
+ /* keep this one last */
+ PAT_MATCH_NUM
+};
+
+#define PAT_REF_MAP 0x1 /* Set if the reference is used by at least one map. */
+#define PAT_REF_ACL 0x2 /* Set if the reference is used by at least one acl. */
+#define PAT_REF_SMP 0x4 /* Flag used if the reference contains a sample. */
+
+/* This struct contains a list of reference strings for dynamically
+ * updatable patterns.
+ */
+struct pat_ref {
+ struct list list; /* Used to chain refs. */
+ unsigned int flags; /* flags PAT_REF_*. */
+ char *reference; /* The reference name. */
+ int unique_id; /* Each pattern reference has a unique id. */
+ char *display; /* String displayed to identify the pattern origin. */
+ struct list head; /* The head of the list of struct pat_ref_elt. */
+ struct list pat; /* The head of the list of struct pattern_expr. */
+};
+
+/* This is a part of struct pat_ref. Each entry contains one
+ * pattern and one associated value as the original string.
+ */
+struct pat_ref_elt {
+ struct list list; /* Used to chain elements. */
+ char *pattern;
+ char *sample;
+ int line;
+};
+
+/* How to store a time range and the valid days in 29 bits */
+struct pat_time {
+ int dow:7; /* 1 bit per day of week: 0-6 */
+ int h1:5, m1:6; /* 0..24:0..60. Use 0:0 for all day. */
+ int h2:5, m2:6; /* 0..24:0..60. Use 24:0 for all day. */
+};
+
+/* This contains each tree-indexed entry. This struct permits associating a
+ * "sample" with a tree entry. It is used with maps.
+ */
+struct pattern_tree {
+ struct sample_data *data;
+ struct pat_ref_elt *ref;
+ struct ebmb_node node;
+};
+
+/* This describes one ACL pattern, which might be a single value or a tree of
+ * values. All patterns for a single ACL expression are linked together. Some
+ * of them might have a type (eg: IP). Right now, the types are shared with
+ * the samples, though it is possible that in the future this will change to
+ * accommodate for other types (eg: meth, regex). Unsigned and constant types
+ * are preferred when there is a doubt.
+ */
+struct pattern {
+ int type; /* type of the ACL pattern (SMP_T_*) */
+ union {
+ int i; /* integer value */
+ struct {
+ signed long long min, max;
+ int min_set :1;
+ int max_set :1;
+ } range; /* integer range */
+ struct {
+ struct in_addr addr;
+ struct in_addr mask;
+ } ipv4; /* IPv4 address */
+ struct {
+ struct in6_addr addr;
+ unsigned char mask; /* number of bits */
+ } ipv6; /* IPv6 address/mask */
+ struct pat_time time; /* valid hours and days */
+ struct eb_root *tree; /* tree storing all values if any */
+ } val; /* direct value */
+ union {
+ void *ptr; /* any data */
+ char *str; /* any string */
+ struct my_regex *reg; /* a compiled regex */
+ } ptr; /* indirect values, allocated */
+ int len; /* data length when required */
+ int sflags; /* flags relative to the storage method. */
+ struct sample_data *data; /* used to store a pointer to sample value associated
+ with the match. It is used with maps */
+ struct pat_ref_elt *ref;
+};
+
+/* This struct is just used for chaining patterns */
+struct pattern_list {
+ struct list list;
+ struct pattern pat;
+};
+
+/* Description of a pattern expression.
+ * It contains pointers to the parse and match functions, and a list or tree of
+ * patterns to test against. The structure is organized so that the hot parts
+ * are grouped together in order to optimize caching.
+ */
+struct pattern_expr {
+ struct list list; /* Used for chaining pattern_expr in pat_ref. */
+ unsigned long long revision; /* updated for each update */
+ struct pat_ref *ref; /* The pattern reference if exists. */
+ struct pattern_head *pat_head; /* Points to the pattern_head that contains the manipulation
+ * functions. Note that this link points to a compatible head,
+ * not necessarily the real one. Only the functions may be used;
+ * the "head" itself must not. Don't write
+ * "(struct pattern_expr *)any->pat_head->expr".
+ */
+ struct list patterns; /* list of acl_patterns */
+ struct eb_root pattern_tree; /* may be used for lookup in large datasets */
+ struct eb_root pattern_tree_2; /* may be used for different types */
+ int mflags; /* flags relative to the parsing or matching method. */
+};
+
+/* This is a list of expressions. A struct pattern_expr can be used by
+ * more than one "struct pattern_head". This intermediate struct
+ * permits more than one list.
+ */
+struct pattern_expr_list {
+ struct list list; /* Used for chaining pattern_expr in pattern_head. */
+ int do_free;
+ struct pattern_expr *expr; /* The used expr. */
+};
+
+/* This struct contains a list of pattern exprs */
+struct pattern_head {
+ int (*parse)(const char *text, struct pattern *pattern, int flags, char **err);
+ int (*parse_smp)(const char *text, struct sample_data *data);
+ int (*index)(struct pattern_expr *, struct pattern *, char **);
+ void (*delete)(struct pattern_expr *, struct pat_ref_elt *);
+ void (*prune)(struct pattern_expr *);
+ struct pattern *(*match)(struct sample *, struct pattern_expr *, int);
+ int expect_type; /* type of the expected sample (SMP_T_*) */
+
+ struct list head; /* This is a list of struct pattern_expr_list. */
+};
+
+/* This is the root of the list of all available pattern references. */
+extern struct list pattern_reference;
+
+#endif /* _TYPES_PATTERN_H */
--- /dev/null
+/*
+ * include/types/peers.h
+ * This file defines everything related to peers.
+ *
+ * Copyright 2010 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_PEERS_H
+#define _TYPES_PEERS_H
+
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <netinet/in.h>
+#include <arpa/inet.h>
+
+#include <common/config.h>
+#include <common/mini-clist.h>
+#include <common/regex.h>
+#include <common/tools.h>
+#include <eb32tree.h>
+
+struct shared_table {
+ struct stktable *table; /* stick table to sync */
+ int local_id;
+ int remote_id;
+ int flags;
+ uint64_t remote_data;
+ unsigned int last_acked;
+ unsigned int last_pushed;
+ unsigned int last_get;
+ unsigned int teaching_origin;
+ unsigned int update;
+ struct shared_table *next; /* next shared table in list */
+};
+
+struct peer {
+ int local; /* proxy state */
+ char *id;
+ struct {
+ const char *file; /* file where the section appears */
+ int line; /* line where the section appears */
+ } conf; /* config information */
+ time_t last_change;
+ struct sockaddr_storage addr; /* peer address */
+ struct protocol *proto; /* peer address protocol */
+ struct xprt_ops *xprt; /* peer socket operations at transport layer */
+ void *sock_init_arg; /* socket operations' opaque init argument if needed */
+ unsigned int flags; /* peer session flags */
+ unsigned int statuscode; /* current/last session status code */
+ unsigned int reconnect; /* next connect timer */
+ unsigned int confirm; /* confirm message counter */
+ struct stream *stream; /* current transport stream */
+ struct appctx *appctx; /* the appctx running it */
+ struct shared_table *remote_table;
+ struct shared_table *last_local_table;
+ struct shared_table *tables;
+ struct peer *next; /* next peer in the list */
+};
+
+
+struct peers {
+ int state; /* proxy state */
+ char *id; /* peer section name */
+ struct task *sync_task; /* main sync task */
+ struct sig_handler *sighandler; /* signal handler */
+ struct peer *remote; /* remote peers list */
+ struct peer *local; /* local peer list */
+ struct proxy *peers_fe; /* peer frontend */
+ struct {
+ const char *file; /* file where the section appears */
+ int line; /* line where the section appears */
+ } conf; /* config information */
+ time_t last_change;
+ struct peers *next; /* next peer section */
+ unsigned int flags; /* current peers section resync state */
+ unsigned int resync_timeout; /* resync timeout timer */
+ int count; /* total of peers */
+};
+
+
+extern struct peers *peers;
+
+#endif /* _TYPES_PEERS_H */
+
--- /dev/null
+/*
+ include/types/pipe.h
+ Pipe management.
+
+ Copyright (C) 2000-2009 Willy Tarreau - w@1wt.eu
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation, version 2.1
+ exclusively.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+*/
+
+#ifndef _TYPES_PIPE_H
+#define _TYPES_PIPE_H
+
+#include <common/config.h>
+
+/* A pipe is described by its read and write FDs, and the data remaining in it.
+ * The FDs are valid if there are data pending. The user is not allowed to
+ * change the FDs.
+ */
+struct pipe {
+ int data; /* number of bytes present in the pipe */
+ int prod; /* FD the producer must write to ; -1 if none */
+ int cons; /* FD the consumer must read from ; -1 if none */
+ struct pipe *next;
+};
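For illustration, a minimal sketch of how this structure maps onto the kernel object it describes. The `pipe_init()` helper below is hypothetical, not HAProxy's API (the in-tree allocator also recycles free pipes through `->next`):

```c
#include <unistd.h>
#include <assert.h>
#include <stddef.h>

struct pipe {
	int data;           /* number of bytes present in the pipe */
	int prod;           /* FD the producer must write to ; -1 if none */
	int cons;           /* FD the consumer must read from ; -1 if none */
	struct pipe *next;
};

/* Hypothetical helper: back a struct pipe with a real kernel pipe.
 * Returns 0 on success, -1 on failure. */
static int pipe_init(struct pipe *p)
{
	int fds[2];

	if (pipe(fds) < 0)
		return -1;
	p->cons = fds[0];   /* read end, drained by the consumer */
	p->prod = fds[1];   /* write end, fed by the producer */
	p->data = 0;        /* nothing buffered in the kernel yet */
	p->next = NULL;
	return 0;
}
```

The `data` counter is maintained by the caller: it grows as bytes are spliced in through `prod` and shrinks as they are spliced out through `cons`.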
+
+#endif /* _TYPES_PIPE_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ include/types/port_range.h
+ This file defines everything needed to manage port ranges
+
+ Copyright (C) 2000-2009 Willy Tarreau - w@1wt.eu
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation, version 2.1
+ exclusively.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+*/
+
+#ifndef _TYPES_PORT_RANGE_H
+#define _TYPES_PORT_RANGE_H
+
+#include <netinet/in.h>
+
+struct port_range {
+ int size, get, put; /* range size, and get/put positions */
+ int avail; /* number of available ports left */
+ uint16_t ports[0]; /* array of <size> ports, in host byte order */
+};
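The get/put members suggest a ring of ports consumed from one end and released at the other. Below is a sketch of that allocation pattern; the helper names (`pr_new`, `pr_get_port`, `pr_put_port`) are illustrative only, not HAProxy's actual API:

```c
#include <stdint.h>
#include <stdlib.h>
#include <assert.h>

struct port_range {
	int size, get, put; /* range size, and get/put positions */
	int avail;          /* number of available ports left */
	uint16_t ports[0];  /* array of <size> ports, in host byte order */
};

/* take one port out of the ring; 0 means the range is exhausted */
static uint16_t pr_get_port(struct port_range *pr)
{
	uint16_t port;

	if (!pr->avail)
		return 0;
	port = pr->ports[pr->get];
	pr->get = (pr->get + 1) % pr->size;
	pr->avail--;
	return port;
}

/* hand a port back at the put position */
static void pr_put_port(struct port_range *pr, uint16_t port)
{
	pr->ports[pr->put] = port;
	pr->put = (pr->put + 1) % pr->size;
	pr->avail++;
}

/* build a range covering [low..high] inclusive */
static struct port_range *pr_new(int low, int high)
{
	int size = high - low + 1;
	struct port_range *pr = calloc(1, sizeof(*pr) + size * sizeof(uint16_t));

	if (!pr)
		return NULL;
	pr->size = size;
	pr->avail = size;
	for (int i = 0; i < size; i++)
		pr->ports[i] = low + i;
	return pr;
}
```

Because ports are returned at `put` while new ones are taken at `get`, a just-released port is reused last, which spreads source ports over time.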
+
+#endif /* _TYPES_PORT_RANGE_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/proto_http.h
+ * This file contains HTTP protocol definitions.
+ *
+ * Copyright (C) 2000-2011 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_PROTO_HTTP_H
+#define _TYPES_PROTO_HTTP_H
+
+#include <common/chunk.h>
+#include <common/config.h>
+#include <common/mini-clist.h>
+#include <common/regex.h>
+
+#include <types/hdr_idx.h>
+#include <types/sample.h>
+
+/* These are the flags that are found in txn->flags */
+
+/* action flags */
+#define TX_CLDENY 0x00000001 /* a client header matches a deny regex */
+#define TX_CLALLOW 0x00000002 /* a client header matches an allow regex */
+#define TX_SVDENY 0x00000004 /* a server header matches a deny regex */
+#define TX_SVALLOW 0x00000008 /* a server header matches an allow regex */
+#define TX_CLTARPIT 0x00000010 /* the transaction is tarpitted (anti-dos) */
+
+/* transaction flags dedicated to cookies : bits values 0x20 to 0x80 (0-7 shift 5) */
+#define TX_CK_NONE 0x00000000 /* this transaction had no cookie */
+#define TX_CK_INVALID 0x00000020 /* this transaction had a cookie which matches no server */
+#define TX_CK_DOWN 0x00000040 /* this transaction had cookie matching a down server */
+#define TX_CK_VALID 0x00000060 /* this transaction had cookie matching a valid server */
+#define TX_CK_EXPIRED 0x00000080 /* this transaction had an expired cookie (idle for too long) */
+#define TX_CK_OLD 0x000000A0 /* this transaction had too old a cookie (offered too long ago) */
+#define TX_CK_UNUSED 0x000000C0 /* this transaction had a cookie but it was not used (eg: use-server was preferred) */
+#define TX_CK_MASK 0x000000E0 /* mask to get this transaction's cookie flags */
+#define TX_CK_SHIFT 5 /* bit shift */
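These values form a small enum packed into a 3-bit field of the transaction flags word. A sketch of the mask/shift access pattern (the helper names are illustrative, not taken from HAProxy):

```c
#include <assert.h>

#define TX_CK_NONE    0x00000000
#define TX_CK_INVALID 0x00000020
#define TX_CK_VALID   0x00000060
#define TX_CK_MASK    0x000000E0
#define TX_CK_SHIFT   5

/* read the 3-bit cookie status out of a flags word */
static inline unsigned int get_ck_status(unsigned int flags)
{
	return (flags & TX_CK_MASK) >> TX_CK_SHIFT;
}

/* replace the cookie status without disturbing the other flag bits */
static inline unsigned int set_ck_status(unsigned int flags, unsigned int st)
{
	return (flags & ~TX_CK_MASK) | st;
}
```

The same pattern applies to the TX_SCK_* field (shift 8) and the TX_CON_WANT_* field (mask 0x00300000) below.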
+
+/* response cookie information, bits values 0x100 to 0x700 (0-7 shift 8) */
+#define TX_SCK_NONE 0x00000000 /* no cookie found in the response */
+#define TX_SCK_FOUND 0x00000100 /* a persistence cookie was found and forwarded */
+#define TX_SCK_DELETED 0x00000200 /* an existing persistence cookie was deleted */
+#define TX_SCK_INSERTED 0x00000300 /* a persistence cookie was inserted */
+#define TX_SCK_REPLACED 0x00000400 /* a persistence cookie was present and rewritten */
+#define TX_SCK_UPDATED 0x00000500 /* an expirable persistence cookie was updated */
+#define TX_SCK_MASK 0x00000700 /* mask to get the set-cookie field */
+#define TX_SCK_SHIFT 8 /* bit shift */
+
+#define TX_SCK_PRESENT 0x00000800 /* a cookie was found in the server's response */
+
+/* cacheability management, bits values 0x1000 to 0x3000 (0-3 shift 12) */
+#define TX_CACHEABLE 0x00001000 /* at least part of the response is cacheable */
+#define TX_CACHE_COOK 0x00002000 /* a cookie in the response is cacheable */
+#define TX_CACHE_SHIFT 12 /* bit shift */
+
+/* Unused: 0x4000, 0x8000, 0x10000, 0x20000, 0x80000 */
+
+/* indicate how we *want* the connection to behave, regardless of what is in
+ * the headers. We have 4 possible values right now :
+ * - WANT_KAL : try to maintain keep-alive (default when nothing configured)
+ * - WANT_TUN : will be a tunnel (CONNECT).
+ * - WANT_SCL : enforce close on the server side
+ * - WANT_CLO : enforce close on both sides
+ */
+#define TX_CON_WANT_KAL 0x00000000 /* note: it's important that it is 0 (init) */
+#define TX_CON_WANT_TUN 0x00100000
+#define TX_CON_WANT_SCL 0x00200000
+#define TX_CON_WANT_CLO 0x00300000
+#define TX_CON_WANT_MSK 0x00300000 /* this is the mask to get the bits */
+
+#define TX_CON_CLO_SET 0x00400000 /* "connection: close" is now set */
+#define TX_CON_KAL_SET 0x00800000 /* "connection: keep-alive" is now set */
+
+#define TX_PREFER_LAST 0x01000000 /* try to stay on same server if possible (eg: after 401) */
+
+#define TX_HDR_CONN_UPG 0x02000000 /* The "Upgrade" token was found in the "Connection" header */
+#define TX_WAIT_NEXT_RQ 0x04000000 /* waiting for the second request to start, use keep-alive timeout */
+
+#define TX_HDR_CONN_PRS 0x08000000 /* "connection" header already parsed (req or res), results below */
+#define TX_HDR_CONN_CLO 0x10000000 /* "Connection: close" was present at least once */
+#define TX_HDR_CONN_KAL 0x20000000 /* "Connection: keep-alive" was present at least once */
+#define TX_USE_PX_CONN 0x40000000 /* Use "Proxy-Connection" instead of "Connection" */
+
+/* used only for keep-alive purposes, to indicate we're on a second transaction */
+#define TX_NOT_FIRST 0x80000000 /* the transaction is not the first one */
+/* no more room for transaction flags ! */
+
+/* The HTTP parser is more complex than it looks like, because we have to
+ * support multi-line headers and any number of spaces between the colon and
+ * the value.
+ *
+ * All those examples must work :
+
+ Hdr1:val1\r\n
+ Hdr1: val1\r\n
+ Hdr1:\t val1\r\n
+ Hdr1: \r\n
+ val1\r\n
+ Hdr1:\r\n
+ val1\n
+ \tval2\r\n
+ val3\n
+
+ *
+ */
+
+/* Possible states while parsing HTTP messages (request|response) */
+enum ht_state {
+ HTTP_MSG_RQBEFORE = 0, // request: leading LF, before start line
+ HTTP_MSG_RQBEFORE_CR = 1, // request: leading CRLF, before start line
+ /* these ones define a request start line */
+ HTTP_MSG_RQMETH = 2, // parsing the Method
+ HTTP_MSG_RQMETH_SP = 3, // space(s) after the Method
+ HTTP_MSG_RQURI = 4, // parsing the Request URI
+ HTTP_MSG_RQURI_SP = 5, // space(s) after the Request URI
+ HTTP_MSG_RQVER = 6, // parsing the Request Version
+ HTTP_MSG_RQLINE_END = 7, // end of request line (CR or LF)
+
+ HTTP_MSG_RPBEFORE = 8, // response: leading LF, before start line
+ HTTP_MSG_RPBEFORE_CR = 9, // response: leading CRLF, before start line
+
+ /* these ones define a response start line */
+ HTTP_MSG_RPVER = 10, // parsing the Response Version
+ HTTP_MSG_RPVER_SP = 11, // space(s) after the Response Version
+ HTTP_MSG_RPCODE = 12, // response code
+ HTTP_MSG_RPCODE_SP = 13, // space(s) after the response code
+ HTTP_MSG_RPREASON = 14, // response reason
+ HTTP_MSG_RPLINE_END = 15, // end of response line (CR or LF)
+
+ /* common header processing */
+ HTTP_MSG_HDR_FIRST = 16, // waiting for first header or last CRLF (no LWS possible)
+ HTTP_MSG_HDR_NAME = 17, // parsing header name
+ HTTP_MSG_HDR_COL = 18, // parsing header colon
+ HTTP_MSG_HDR_L1_SP = 19, // parsing header LWS (SP|HT) before value
+ HTTP_MSG_HDR_L1_LF = 20, // parsing header LWS (LF) before value
+ HTTP_MSG_HDR_L1_LWS = 21, // checking whether it's a new header or an LWS
+ HTTP_MSG_HDR_VAL = 22, // parsing header value
+ HTTP_MSG_HDR_L2_LF = 23, // parsing header LWS (LF) inside/after value
+ HTTP_MSG_HDR_L2_LWS = 24, // checking whether it's a new header or an LWS
+
+ HTTP_MSG_LAST_LF = 25, // parsing last LF
+
+ /* error state : must be before HTTP_MSG_BODY so that (>=BODY) always indicates
+ * that data are being processed.
+ */
+ HTTP_MSG_ERROR = 26, // an error occurred
+ /* Body processing.
+ * The state HTTP_MSG_BODY is a delimiter to know if we're waiting for headers
+ * or body. All the sub-states below also indicate we're processing the body,
+ * with some additional information.
+ */
+ HTTP_MSG_BODY = 27, // parsing body at end of headers
+ HTTP_MSG_100_SENT = 28, // parsing body after a 100-Continue was sent
+ HTTP_MSG_CHUNK_SIZE = 29, // parsing the chunk size (RFC2616 #3.6.1)
+ HTTP_MSG_DATA = 30, // skipping data chunk / content-length data
+ HTTP_MSG_CHUNK_CRLF = 31, // skipping CRLF after data chunk
+ HTTP_MSG_TRAILERS = 32, // trailers (post-data entity headers)
+ /* we enter this state when we've received the end of the current message */
+ HTTP_MSG_DONE = 33, // message end received, waiting for resync or close
+ HTTP_MSG_CLOSING = 34, // shutdown_w done, not all bytes sent yet
+ HTTP_MSG_CLOSED = 35, // shutdown_w done, all bytes sent
+ HTTP_MSG_TUNNEL = 36, // tunneled data after DONE
+} __attribute__((packed));
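The HTTP_MSG_CHUNK_SIZE state consumes a hexadecimal chunk size terminated by CRLF. A simplified standalone sketch of that step (chunk extensions and overflow handling omitted; `parse_chunk_size` is an illustrative name, not the parser HAProxy uses):

```c
#include <assert.h>

/* Parse a hex chunk size up to CRLF, as in RFC 2616 #3.6.1.
 * Returns the size, or -1 if the line is malformed. */
static long long parse_chunk_size(const char *buf)
{
	long long size = 0;
	const char *p = buf;

	while (1) {
		int c = *p;
		if (c >= '0' && c <= '9')
			size = (size << 4) + (c - '0');
		else if (c >= 'a' && c <= 'f')
			size = (size << 4) + (c - 'a' + 10);
		else if (c >= 'A' && c <= 'F')
			size = (size << 4) + (c - 'A' + 10);
		else
			break;
		p++;
	}
	if (p == buf || p[0] != '\r' || p[1] != '\n')
		return -1;  /* no hex digit, or missing CRLF */
	return size;
}
```

A returned size of 0 corresponds to the last chunk, after which the parser moves to HTTP_MSG_TRAILERS.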
+
+/*
+ * HTTP message status flags (msg->flags)
+ */
+#define HTTP_MSGF_CNT_LEN 0x00000001 /* content-length was found in the message */
+#define HTTP_MSGF_TE_CHNK 0x00000002 /* transfer-encoding: chunked was found */
+
+/* if this flag is not set in either direction, we may be forced to complete a
+ * connection as a half-way tunnel (eg if no content-length appears in a 1.1
+ * response, but the request is correctly sized)
+ */
+#define HTTP_MSGF_XFER_LEN 0x00000004 /* message xfer size can be determined */
+#define HTTP_MSGF_VER_11 0x00000008 /* the message is HTTP/1.1 or above */
+
+/* If this flag is set, we don't process the body until the connect() is confirmed.
+ * This is only used by the request forwarding function to protect the buffer
+ * contents if something needs them during a redispatch.
+ */
+#define HTTP_MSGF_WAIT_CONN 0x00000010 /* Wait for connect() to be confirmed before processing body */
+
+
+/* Redirect flags */
+enum {
+ REDIRECT_FLAG_NONE = 0,
+ REDIRECT_FLAG_DROP_QS = 1, /* drop query string */
+ REDIRECT_FLAG_APPEND_SLASH = 2, /* append a slash if missing at the end */
+};
+
+/* Redirect types (location, prefix, scheme) */
+enum {
+ REDIRECT_TYPE_NONE = 0, /* no redirection */
+ REDIRECT_TYPE_LOCATION, /* location redirect */
+ REDIRECT_TYPE_PREFIX, /* prefix redirect */
+ REDIRECT_TYPE_SCHEME, /* scheme redirect (eg: switch from http to https) */
+};
+
+/* Persist types (force-persist, ignore-persist) */
+enum {
+ PERSIST_TYPE_NONE = 0, /* no persistence */
+ PERSIST_TYPE_FORCE, /* force-persist */
+ PERSIST_TYPE_IGNORE, /* ignore-persist */
+};
+
+enum ht_auth_m {
+ HTTP_AUTH_WRONG = -1, /* missing or unknown */
+ HTTP_AUTH_UNKNOWN = 0,
+ HTTP_AUTH_BASIC,
+ HTTP_AUTH_DIGEST,
+} __attribute__((packed));
+
+/* final results for http-request rules */
+enum rule_result {
+ HTTP_RULE_RES_CONT = 0, /* nothing special, continue rules evaluation */
+ HTTP_RULE_RES_YIELD, /* call me later because some data is missing. */
+ HTTP_RULE_RES_STOP, /* stopped processing on an accept */
+ HTTP_RULE_RES_DENY, /* deny (or tarpit if TX_CLTARPIT) */
+ HTTP_RULE_RES_ABRT, /* abort request, msg already sent (eg: auth) */
+ HTTP_RULE_RES_DONE, /* processing done, stop processing (eg: redirect) */
+ HTTP_RULE_RES_BADREQ, /* bad request */
+};
+
+/*
+ * All implemented return codes
+ */
+enum {
+ HTTP_ERR_200 = 0,
+ HTTP_ERR_400,
+ HTTP_ERR_403,
+ HTTP_ERR_405,
+ HTTP_ERR_408,
+ HTTP_ERR_429,
+ HTTP_ERR_500,
+ HTTP_ERR_502,
+ HTTP_ERR_503,
+ HTTP_ERR_504,
+ HTTP_ERR_SIZE
+};
+
+/* status codes available for the stats admin page */
+enum {
+ STAT_STATUS_INIT = 0,
+ STAT_STATUS_DENY, /* action denied */
+ STAT_STATUS_DONE, /* the action is successful */
+ STAT_STATUS_ERRP, /* an error occurred due to invalid values in parameters */
+ STAT_STATUS_EXCD, /* an error occurred because the buffer couldn't store all data */
+ STAT_STATUS_NONE, /* nothing happened (no action chosen or servers state didn't change) */
+ STAT_STATUS_PART, /* the action is partially successful */
+ STAT_STATUS_UNKN, /* an unknown error occurred, shouldn't happen */
+ STAT_STATUS_SIZE
+};
+
+/* This is an HTTP message, as described in RFC2616. It can be either a request
+ * message or a response message.
+ *
+ * The values there are a little bit obscure, because their meaning can change
+ * during the parsing. Please read carefully doc/internal/body-parsing.txt if
+ * you need to manipulate them. Quick reminder :
+ *
+ * - eoh (End of Headers) : relative offset in the buffer of first byte that
+ * is not part of a completely processed header.
+ * During parsing, it points to last header seen
+ * for states after START. When in HTTP_MSG_BODY,
+ * eoh points to the first byte of the last CRLF
+ * preceding data. Relative to buffer's origin.
+ * This value then remains unchanged till the end
+ * so that we can rewind the buffer to change some
+ * headers if needed (eg: http-send-name-header).
+ *
+ * - sov (start of value) : Before HTTP_MSG_BODY, points to the value of
+ * the header being parsed. Starting from
+ * HTTP_MSG_BODY, will point to the start of the
+ * body (relative to buffer's origin). It can be
+ * negative when forwarding data. It stops growing
+ * once data start to leave the buffer.
+ *
+ * - next (parse pointer) : next relative byte to be parsed. Always points
+ * to a byte matching the current state.
+ *
+ * - sol (start of line) : start of current line before MSG_BODY. Starting
+ * from MSG_BODY, contains the length of the last
+ * parsed chunk size so that when added to sov it
+ * always points to the beginning of the current
+ * data chunk.
+ *
+ * - eol (End of Line) : Before HTTP_MSG_BODY, relative offset in the
+ * buffer of the first byte which marks the end of
+ * the current line (LF or CRLF).
+ * From HTTP_MSG_BODY to the end, contains the
+ * length of the last CRLF (1 for a plain LF, or 2
+ * for a true CRLF). So eoh+eol always gives the
+ * exact size of the headers.
+ *
+ * Note that all offsets are relative to the origin of the buffer (buf->p)
+ * which always points to the beginning of the message (request or response).
+ * Since a message may not wrap, pointer computations may be done without any
+ * care for wrapping (no addition overflow nor subtraction underflow).
+ */
+struct http_msg {
+ enum ht_state msg_state; /* where we are in the current message parsing */
+ unsigned char flags; /* flags describing the message (HTTP version, ...) */
+ /* 6 bytes unused here */
+ struct channel *chn; /* pointer to the channel transporting the message */
+ unsigned int next; /* pointer to next byte to parse, relative to buf->p */
+ int sov; /* current header: start of value ; data: start of body */
+ unsigned int eoh; /* End Of Headers, relative to buffer */
+ unsigned int sol; /* start of current line during parsing otherwise zero */
+ unsigned int eol; /* end of line */
+ int err_pos; /* err handling: -2=block, -1=pass, 0+=detected */
+ union { /* useful start line pointers, relative to ->sol */
+ struct {
+ int l; /* request line length (not including CR) */
+ int m_l; /* METHOD length (method starts at buf->p) */
+ int u, u_l; /* URI, length */
+ int v, v_l; /* VERSION, length */
+ } rq; /* request line : field, length */
+ struct {
+ int l; /* status line length (not including CR) */
+ int v_l; /* VERSION length (version starts at buf->p) */
+ int c, c_l; /* CODE, length */
+ int r, r_l; /* REASON, length */
+ } st; /* status line : field, length */
+ } sl; /* start line */
+ unsigned long long chunk_len; /* cache for last chunk size or content-length header value */
+ unsigned long long body_len; /* total known length of the body, excluding encoding */
+};
+
+struct http_auth_data {
+ enum ht_auth_m method; /* one of HTTP_AUTH_* */
+ /* 7 bytes unused here */
+ struct chunk method_data; /* points to the credential part of the 'Authorization:' header */
+ char *user, *pass; /* extracted username & password */
+};
+
+struct proxy;
+struct http_txn;
+struct stream;
+
+/* This is an HTTP transaction. It contains both a request message and a
+ * response message (which can be empty).
+ */
+struct http_txn {
+ struct hdr_idx hdr_idx; /* array of header indexes (max: global.tune.max_http_hdr) */
+ struct http_msg rsp; /* HTTP response message */
+ struct http_msg req; /* HTTP request message */
+ unsigned int flags; /* transaction flags */
+ enum http_meth_t meth; /* HTTP method */
+ /* 1 unused byte here */
+ short rule_deny_status; /* HTTP status from rule when denying */
+ short status; /* HTTP status from the server, negative if from proxy */
+
+ char *uri; /* first line if log needed, NULL otherwise */
+ char *cli_cookie; /* cookie presented by the client, in capture mode */
+ char *srv_cookie; /* cookie presented by the server, in capture mode */
+ int cookie_first_date; /* if non-zero, first date the expirable cookie was set/seen */
+ int cookie_last_date; /* if non-zero, last date the expirable cookie was set/seen */
+
+ struct http_auth_data auth; /* HTTP auth data */
+};
+
+
+/* This structure is used by http_find_header() to return values of headers.
+ * The header starts at <line>, the value (excluding leading and trailing white
+ * spaces) at <line>+<val> for <vlen> bytes, followed by optional <tws> trailing
+ * white spaces, and sets <line>+<del> to point to the last delimiter (colon or
+ * comma) before this value. <prev> points to the index of the header whose next
+ * is this one.
+ */
+struct hdr_ctx {
+ char *line;
+ int idx;
+ int val; /* relative to line, may skip some leading white spaces */
+ int vlen; /* relative to line+val, stops before trailing white spaces */
+ int tws; /* added to vlen if some trailing white spaces are present */
+ int del; /* relative to line */
+ int prev; /* index of previous header */
+};
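To make these offsets concrete, here is a toy routine that fills a hdr_ctx from a single "Name: value" line the way the comment describes (simplified: no header index, no comma-separated values; `fill_ctx` is illustrative, not http_find_header() itself):

```c
#include <string.h>
#include <assert.h>

struct hdr_ctx {
	char *line;
	int idx;
	int val;  /* relative to line, may skip some leading white spaces */
	int vlen; /* relative to line+val, stops before trailing white spaces */
	int tws;  /* added to vlen if some trailing white spaces are present */
	int del;  /* relative to line */
	int prev; /* index of previous header */
};

/* returns 1 and fills <ctx> if <line> looks like a header, else 0 */
static int fill_ctx(struct hdr_ctx *ctx, char *line)
{
	char *colon = strchr(line, ':');
	int v, end;

	if (!colon)
		return 0;
	v = (colon - line) + 1;
	while (line[v] == ' ' || line[v] == '\t')
		v++;                    /* skip leading LWS before the value */
	end = strlen(line);
	ctx->tws = 0;
	while (end > v && (line[end - 1] == ' ' || line[end - 1] == '\t')) {
		end--;                  /* count trailing white spaces */
		ctx->tws++;
	}
	ctx->line = line;
	ctx->del  = colon - line;       /* the colon is the delimiter here */
	ctx->val  = v;
	ctx->vlen = end - v;
	return 1;
}
```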
+
+struct http_method_name {
+ char *name;
+ int len;
+};
+
+extern struct action_kw_list http_req_keywords;
+extern struct action_kw_list http_res_keywords;
+
+extern const struct http_method_name http_known_methods[HTTP_METH_OTHER];
+
+extern struct pool_head *pool2_http_txn;
+
+#endif /* _TYPES_PROTO_HTTP_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/proto_udp.h
+ * This file provides structures and types for UDP protocol.
+ *
+ * Copyright (C) 2014 Baptiste Assmann <bedis9@gmail.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_PROTO_UDP_H
+#define _TYPES_PROTO_UDP_H
+
+#include <arpa/inet.h>
+
+/*
+ * datagram related structure
+ */
+struct dgram_conn {
+ const struct dgram_data_cb *data; /* data layer callbacks. Must be set before */
+ void *owner; /* pointer to upper layer's entity */
+ union { /* definitions which depend on connection type */
+ struct { /*** information used by socket-based dgram ***/
+ int fd; /* file descriptor */
+ } sock;
+ } t;
+ struct {
+ struct sockaddr_storage from; /* client address, or address to spoof when connecting to the server */
+ struct sockaddr_storage to; /* address reached by the client, or address to connect to */
+ } addr; /* addresses of the remote side, client for producer and server for consumer */
+};
+
+/*
+ * datagram callback structure
+ */
+struct dgram_data_cb {
+ void (*recv)(struct dgram_conn *dgram); /* recv callback */
+ void (*send)(struct dgram_conn *dgram); /* send callback */
+};
+
+#endif /* _TYPES_PROTO_UDP_H */
--- /dev/null
+/*
+ * include/types/protocol.h
+ * This file defines the structures used by generic network protocols.
+ *
+ * Copyright (C) 2000-2012 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_PROTOCOL_H
+#define _TYPES_PROTOCOL_H
+
+#include <sys/types.h>
+#include <sys/socket.h>
+
+#include <common/config.h>
+#include <common/mini-clist.h>
+#include <eb32tree.h>
+
+/* some pointer types referenced below */
+struct listener;
+struct connection;
+
+/* max length of a protocol name, including trailing zero */
+#define PROTO_NAME_LEN 16
+
+/* This structure contains all information needed to easily handle a protocol.
+ * Its primary goal is to ease listener maintenance. Specifically, the
+ * bind_all() primitive must be used before any fork(), and the enable_all()
+ * primitive must be called after the fork() to enable all fds. Last, the
+ * unbind_all() primitive closes all listeners.
+ */
+struct protocol {
+ char name[PROTO_NAME_LEN]; /* protocol name, zero-terminated */
+ int sock_domain; /* socket domain, as passed to socket() */
+ int sock_type; /* socket type, as passed to socket() */
+ int sock_prot; /* socket protocol, as passed to socket() */
+ sa_family_t sock_family; /* socket family, for sockaddr */
+ socklen_t sock_addrlen; /* socket address length, used by bind() */
+ int l3_addrlen; /* layer3 address length, used by hashes */
+ int (*accept)(int fd); /* generic accept function */
+ int (*bind)(struct listener *l, char *errmsg, int errlen); /* bind a listener */
+ int (*bind_all)(struct protocol *proto, char *errmsg, int errlen); /* bind all unbound listeners */
+ int (*unbind_all)(struct protocol *proto); /* unbind all bound listeners */
+ int (*enable_all)(struct protocol *proto); /* enable all bound listeners */
+ int (*disable_all)(struct protocol *proto); /* disable all bound listeners */
+ int (*connect)(struct connection *, int data, int delack); /* connect function if any */
+ int (*get_src)(int fd, struct sockaddr *, socklen_t, int dir); /* syscall used to retrieve src addr */
+ int (*get_dst)(int fd, struct sockaddr *, socklen_t, int dir); /* syscall used to retrieve dst addr */
+ int (*drain)(int fd); /* indicates whether we can safely close the fd */
+ int (*pause)(struct listener *l); /* temporarily pause this listener for a soft restart */
+
+ struct list listeners; /* list of listeners using this protocol */
+ int nb_listeners; /* number of listeners */
+ struct list list; /* list of registered protocols */
+};
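The ordering constraint stated above (bind_all() before fork(), enable_all() after it, unbind_all() at the end) can be sketched with a cut-down structure; the hook implementations here are stand-ins, not real socket code:

```c
#include <assert.h>

/* trimmed-down protocol keeping only the lifecycle hooks */
struct proto_sketch {
	const char *name;
	int bound, enabled;
	int (*bind_all)(struct proto_sketch *);
	int (*enable_all)(struct proto_sketch *);
	int (*unbind_all)(struct proto_sketch *);
};

static int sk_bind_all(struct proto_sketch *p)
{
	p->bound = 1;       /* would bind() every listener of this protocol */
	return 0;
}

static int sk_enable_all(struct proto_sketch *p)
{
	if (!p->bound)
		return -1;  /* enabling fds only makes sense once bound */
	p->enabled = 1;
	return 0;
}

static int sk_unbind_all(struct proto_sketch *p)
{
	p->bound = p->enabled = 0;  /* would close every listener */
	return 0;
}
```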
+
+#endif /* _TYPES_PROTOCOL_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/proxy.h
+ * This file defines everything related to proxies.
+ *
+ * Copyright (C) 2000-2011 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_PROXY_H
+#define _TYPES_PROXY_H
+
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <netinet/in.h>
+#include <arpa/inet.h>
+
+#include <common/chunk.h>
+#include <common/config.h>
+#include <common/mini-clist.h>
+#include <common/regex.h>
+#include <common/tools.h>
+#include <eb32tree.h>
+#include <ebistree.h>
+
+#include <types/acl.h>
+#include <types/backend.h>
+#include <types/counters.h>
+#include <types/freq_ctr.h>
+#include <types/listener.h>
+#include <types/log.h>
+#include <types/obj_type.h>
+#include <types/proto_http.h>
+#include <types/sample.h>
+#include <types/server.h>
+#include <types/stick_table.h>
+
+/* values for proxy->state */
+enum pr_state {
+ PR_STNEW = 0, /* proxy has not been initialized yet */
+ PR_STREADY, /* proxy has been initialized and is ready */
+ PR_STFULL, /* frontend is full (maxconn reached) */
+ PR_STPAUSED, /* frontend is paused (during hot restart) */
+ PR_STSTOPPED, /* proxy is stopped (end of a restart) */
+ PR_STERROR, /* proxy experienced an unrecoverable error */
+} __attribute__((packed));
+
+/* values for proxy->mode */
+enum pr_mode {
+ PR_MODE_TCP = 0,
+ PR_MODE_HTTP,
+ PR_MODE_HEALTH,
+} __attribute__((packed));
+
+enum PR_SRV_STATE_FILE {
+ PR_SRV_STATE_FILE_UNSPEC = 0,
+ PR_SRV_STATE_FILE_NONE,
+ PR_SRV_STATE_FILE_GLOBAL,
+ PR_SRV_STATE_FILE_LOCAL,
+};
+
+
+/* flag values for proxy->cap. This is a bitmask of capabilities supported by the proxy */
+#define PR_CAP_NONE 0x0000
+#define PR_CAP_FE 0x0001
+#define PR_CAP_BE 0x0002
+#define PR_CAP_RS 0x0004
+#define PR_CAP_LISTEN (PR_CAP_FE|PR_CAP_BE|PR_CAP_RS)
+
+/* bits for proxy->options */
+#define PR_O_REDISP 0x00000001 /* allow reconnection to dispatch in case of errors */
+#define PR_O_TRANSP 0x00000002 /* transparent mode : use original DEST as dispatch */
+
+/* HTTP server-side reuse */
+#define PR_O_REUSE_NEVR 0x00000000 /* never reuse a shared connection */
+#define PR_O_REUSE_SAFE 0x00000004 /* only reuse a shared connection when it's safe to do so */
+#define PR_O_REUSE_AGGR 0x00000008 /* aggressively reuse a shared connection */
+#define PR_O_REUSE_ALWS 0x0000000C /* always reuse a shared connection */
+#define PR_O_REUSE_MASK 0x0000000C /* mask to retrieve shared connection preferences */
+
+/* unused: 0x10 */
+#define PR_O_PREF_LAST 0x00000020 /* prefer last server */
+#define PR_O_DISPATCH 0x00000040 /* use dispatch mode */
+#define PR_O_FORCED_ID 0x00000080 /* proxy's ID was forced in the configuration */
+#define PR_O_FWDFOR 0x00000100 /* conditionally insert x-forwarded-for with client address */
+#define PR_O_IGNORE_PRB 0x00000200 /* ignore empty requests (aborts and timeouts) */
+#define PR_O_NULLNOLOG 0x00000400 /* a connect without request will not be logged */
+#define PR_O_WREQ_BODY 0x00000800 /* always wait for the HTTP request body */
+/* unused: 0x1000 */
+#define PR_O_FF_ALWAYS 0x00002000 /* always set x-forwarded-for */
+#define PR_O_PERSIST 0x00004000 /* server persistence stays effective even when server is down */
+#define PR_O_LOGASAP 0x00008000 /* log as soon as possible, without waiting for the stream to complete */
+/* unused: 0x00010000 */
+#define PR_O_CHK_CACHE 0x00020000 /* require examination of cacheability of the 'set-cookie' field */
+#define PR_O_TCP_CLI_KA 0x00040000 /* enable TCP keep-alive on client-side streams */
+#define PR_O_TCP_SRV_KA 0x00080000 /* enable TCP keep-alive on server-side streams */
+#define PR_O_USE_ALL_BK 0x00100000 /* load-balance between backup servers */
+/* unused: 0x00200000 */
+#define PR_O_TCP_NOLING 0x00400000 /* disable lingering on client and server connections */
+#define PR_O_ABRT_CLOSE 0x00800000 /* immediately abort request when client closes */
+
+/* HTTP connection mode: 3-bit field in 0x07000000, see PR_O_HTTP_MODE below */
+#define PR_O_HTTP_KAL 0x00000000 /* HTTP keep-alive mode (http-keep-alive) */
+#define PR_O_HTTP_PCL 0x01000000 /* HTTP passive close mode (httpclose) = tunnel with Connection: close */
+#define PR_O_HTTP_FCL 0x02000000 /* HTTP forced close mode (forceclose) */
+#define PR_O_HTTP_SCL 0x03000000 /* HTTP server close mode (http-server-close) */
+#define PR_O_HTTP_TUN 0x04000000 /* HTTP tunnel mode : no analysis past first request/response */
+/* unassigned values : 0x05000000, 0x06000000, 0x07000000 */
+#define PR_O_HTTP_MODE 0x07000000 /* MASK to retrieve the HTTP mode */
+#define PR_O_TCPCHK_SSL 0x08000000 /* at least one TCPCHECK connect rule requires SSL */
+#define PR_O_CONTSTATS 0x10000000 /* continuous counters */
+#define PR_O_HTTP_PROXY 0x20000000 /* Enable stream to use HTTP proxy operations */
+#define PR_O_DISABLE404 0x40000000 /* Disable a server on a 404 response to a health-check */
+#define PR_O_ORGTO 0x80000000 /* insert x-original-to with destination address */
+
+/* bits for proxy->options2 */
+#define PR_O2_SPLIC_REQ 0x00000001 /* transfer requests using linux kernel's splice() */
+#define PR_O2_SPLIC_RTR 0x00000002 /* transfer responses using linux kernel's splice() */
+#define PR_O2_SPLIC_AUT 0x00000004 /* automatically use linux kernel's splice() */
+#define PR_O2_SPLIC_ANY (PR_O2_SPLIC_REQ|PR_O2_SPLIC_RTR|PR_O2_SPLIC_AUT)
+#define PR_O2_REQBUG_OK 0x00000008 /* let buggy requests pass through */
+#define PR_O2_RSPBUG_OK 0x00000010 /* let buggy responses pass through */
+#define PR_O2_NOLOGNORM 0x00000020 /* don't log normal traffic, only errors and retries */
+#define PR_O2_LOGERRORS 0x00000040 /* log errors and retries at level LOG_ERR */
+#define PR_O2_SMARTACC 0x00000080 /* don't immediately ACK request after accept */
+#define PR_O2_SMARTCON 0x00000100 /* don't immediately send empty ACK after connect */
+#define PR_O2_RDPC_PRST 0x00000200 /* Activate rdp cookie analyser */
+#define PR_O2_CLFLOG 0x00000400 /* log into clf format */
+#define PR_O2_LOGHCHKS 0x00000800 /* log health checks */
+#define PR_O2_INDEPSTR 0x00001000 /* independent streams, don't update rex on write */
+#define PR_O2_SOCKSTAT 0x00002000 /* collect & provide separate statistics for sockets */
+
+/* unused: 0x00004000 0x00008000 0x00010000 */
+
+#define PR_O2_NODELAY 0x00020000 /* fully interactive mode, never delay outgoing data */
+#define PR_O2_USE_PXHDR 0x00040000 /* use Proxy-Connection for proxy requests */
+#define PR_O2_CHK_SNDST 0x00080000 /* send the state of each server along with HTTP health checks */
+
+#define PR_O2_SRC_ADDR 0x00100000 /* get the source ip and port for logs */
+
+#define PR_O2_FAKE_KA 0x00200000 /* pretend we do keep-alive with server even though we close */
+/* unused: 0x00400000 */
+#define PR_O2_EXP_NONE 0x00000000 /* http-check : no expect rule */
+#define PR_O2_EXP_STS 0x00800000 /* http-check expect status */
+#define PR_O2_EXP_RSTS 0x01000000 /* http-check expect rstatus */
+#define PR_O2_EXP_STR 0x01800000 /* http-check expect string */
+#define PR_O2_EXP_RSTR 0x02000000 /* http-check expect rstring */
+#define PR_O2_EXP_TYPE 0x03800000 /* mask for http-check expect type */
+#define PR_O2_EXP_INV 0x04000000 /* http-check expect !<rule> */
+/* unused: 0x08000000 */
+
+/* server health checks */
+#define PR_O2_CHK_NONE 0x00000000 /* no L7 health checks configured (TCP by default) */
+#define PR_O2_PGSQL_CHK 0x10000000 /* use PGSQL check for server health */
+#define PR_O2_REDIS_CHK 0x20000000 /* use REDIS check for server health */
+#define PR_O2_SMTP_CHK 0x30000000 /* use SMTP EHLO check for server health - pvandijk@vision6.com.au */
+#define PR_O2_HTTP_CHK 0x40000000 /* use HTTP 'OPTIONS' method to check server health */
+#define PR_O2_MYSQL_CHK 0x50000000 /* use MYSQL check for server health */
+#define PR_O2_LDAP_CHK 0x60000000 /* use LDAP check for server health */
+#define PR_O2_SSL3_CHK 0x70000000 /* use SSLv3 CLIENT_HELLO packets for server health */
+#define PR_O2_LB_AGENT_CHK 0x80000000 /* use a TCP connection to obtain a metric of server health */
+#define PR_O2_TCPCHK_CHK 0x90000000 /* use TCPCHK check for server health */
+#define PR_O2_EXT_CHK 0xA0000000 /* use external command for server health */
+/* unused: 0xB0000000 to 0xF0000000, reserved for health checks */
+#define PR_O2_CHK_ANY 0xF0000000 /* Mask to cover any check */
+/* end of proxy->options2 */
+
+/* Cookie settings for pr->ck_opts */
+#define PR_CK_RW 0x00000001 /* rewrite all direct cookies with the right serverid */
+#define PR_CK_IND 0x00000002 /* keep only indirect cookies */
+#define PR_CK_INS 0x00000004 /* insert cookies when not accessing a server directly */
+#define PR_CK_PFX 0x00000008 /* rewrite all cookies by prefixing the right serverid */
+#define PR_CK_ANY (PR_CK_RW | PR_CK_IND | PR_CK_INS | PR_CK_PFX)
+#define PR_CK_NOC 0x00000010 /* add a 'Cache-control' header with the cookie */
+#define PR_CK_POST 0x00000020 /* don't insert cookies for requests other than a POST */
+#define PR_CK_PSV 0x00000040 /* cookie ... preserve */
+#define PR_CK_HTTPONLY 0x00000080 /* emit the "HttpOnly" attribute */
+#define PR_CK_SECURE 0x00000100 /* emit the "Secure" attribute */
+
+/* bits for sticking rules */
+#define STK_IS_MATCH 0x00000001 /* match on request fetch */
+#define STK_IS_STORE 0x00000002 /* store on request fetch */
+#define STK_ON_RSP 0x00000004 /* store on response fetch */
+
+/* diff bits for proxy_find_best_match */
+#define PR_FBM_MISMATCH_ID 0x01
+#define PR_FBM_MISMATCH_NAME 0x02
+#define PR_FBM_MISMATCH_PROXYTYPE 0x04
+
+struct stream;
+
+struct error_snapshot {
+ struct timeval when; /* date of this event, (tv_sec == 0) means "never" */
+ unsigned int len; /* original length of the last invalid request/response */
+ unsigned int pos; /* position of the first invalid character */
+ unsigned int sid; /* ID of the faulty stream */
+ unsigned int ev_id; /* event number (counter incremented for each capture) */
+ unsigned int state; /* message state before the error (when saved) */
+ unsigned int b_flags; /* buffer flags */
+ unsigned int s_flags; /* stream flags */
+ unsigned int t_flags; /* transaction flags */
+ unsigned int m_flags; /* message flags */
+ unsigned int b_out; /* pending output bytes */
+ unsigned int b_wrap; /* position where the buffer is expected to wrap */
+ unsigned long long b_tot; /* total bytes transferred via this buffer */
+ unsigned long long m_clen; /* chunk len for this message */
+ unsigned long long m_blen; /* body len for this message */
+ struct server *srv; /* server associated with the error (or NULL) */
+ struct proxy *oe; /* other end = frontend or backend involved */
+ struct sockaddr_storage src; /* client's address */
+ char buf[BUFSIZE]; /* copy of the beginning of the message */
+};
+
+struct email_alert {
+ struct list list;
+ struct list tcpcheck_rules;
+};
+
+struct email_alertq {
+ struct list email_alerts;
+ struct check check; /* Email alerts are implemented using existing check
+ * code even though they are not checks. This structure
+ * is used as a parameter to the check code.
+ * Each check corresponds to a mailer. */
+};
+
+struct proxy {
+ enum obj_type obj_type; /* object type == OBJ_TYPE_PROXY */
+ enum pr_state state; /* proxy state, one of PR_* */
+ enum pr_mode mode; /* mode = PR_MODE_TCP, PR_MODE_HTTP or PR_MODE_HEALTH */
+ char cap; /* supported capabilities (PR_CAP_*) */
+ unsigned int maxconn; /* max # of active streams on the frontend */
+
+ int options; /* PR_O_REDISP, PR_O_TRANSP, ... */
+ int options2; /* PR_O2_* */
+ struct in_addr mon_net, mon_mask; /* don't forward connections from this net (network order) FIXME: should support IPv6 */
+ unsigned int ck_opts; /* PR_CK_* (cookie options) */
+ unsigned int fe_req_ana, be_req_ana; /* bitmap of common request protocol analysers for the frontend and backend */
+ unsigned int fe_rsp_ana, be_rsp_ana; /* bitmap of common response protocol analysers for the frontend and backend */
+ unsigned int http_needed; /* non-null if HTTP analyser may be used */
+ union {
+ struct proxy *be; /* default backend, or NULL if none set */
+ char *name; /* default backend name during config parse */
+ } defbe;
+ struct list acl; /* ACL declared on this proxy */
+ struct list http_req_rules; /* HTTP request rules: allow/deny/... */
+ struct list http_res_rules; /* HTTP response rules: allow/deny/... */
+ struct list block_rules; /* http-request block rules to be inserted before other ones */
+ struct list redirect_rules; /* content redirecting rules (chained) */
+ struct list switching_rules; /* content switching rules (chained) */
+ struct list persist_rules; /* 'force-persist' and 'ignore-persist' rules (chained) */
+ struct list sticking_rules; /* content sticking rules (chained) */
+ struct list storersp_rules; /* content store response rules (chained) */
+ struct list server_rules; /* server switching rules (chained) */
+ struct { /* TCP request processing */
+ unsigned int inspect_delay; /* inspection delay */
+ struct list inspect_rules; /* inspection rules */
+ struct list l4_rules; /* layer4 rules */
+ } tcp_req;
+ struct { /* TCP response processing */
+ unsigned int inspect_delay; /* inspection delay */
+ struct list inspect_rules; /* inspection rules */
+ } tcp_rep;
+ struct server *srv, defsrv; /* known servers; default server configuration */
+ int srv_act, srv_bck; /* # of servers eligible for LB (UP|!checked) AND (enabled+weight!=0) */
+ struct lbprm lbprm; /* load-balancing parameters */
+ char *cookie_domain; /* domain used to insert the cookie */
+ char *cookie_name; /* name of the cookie to look for */
+ int cookie_len; /* strlen(cookie_name), computed only once */
+ unsigned int cookie_maxidle; /* max idle time for this cookie */
+ unsigned int cookie_maxlife; /* max life time for this cookie */
+ int rdp_cookie_len; /* strlen(rdp_cookie_name), computed only once */
+ char *rdp_cookie_name; /* name of the RDP cookie to look for */
+ char *url_param_name; /* name of the URL parameter used for hashing */
+ int url_param_len; /* strlen(url_param_name), computed only once */
+ int uri_len_limit; /* character limit for uri balancing algorithm */
+ int uri_dirs_depth1; /* directories+1 (slashes) limit for uri balancing algorithm */
+ int uri_whole; /* if != 0, calculates the hash from the whole uri. Still honors the len_limit and dirs_depth1 */
+ char *hh_name; /* name of the header parameter used for hashing */
+ int hh_len; /* strlen(hh_name), computed only once */
+ int hh_match_domain; /* toggle use of special match function */
+ char *capture_name; /* beginning of the name of the cookie to capture */
+ int capture_namelen; /* length of the cookie name to match */
+ int capture_len; /* length of the string to be captured */
+ struct uri_auth *uri_auth; /* if non-NULL, the (list of) per-URI authentications */
+ int max_ka_queue; /* 1+maximum requests in queue accepted for reusing a K-A conn (0=none) */
+ int monitor_uri_len; /* length of the monitor_uri string below; 0 if unused */
+ char *monitor_uri; /* a special URI to which we respond with HTTP/200 OK */
+ struct list mon_fail_cond; /* list of conditions to fail monitoring requests (chained) */
+ struct { /* WARNING! check proxy_reset_timeouts() in proxy.h !!! */
+ int client; /* client I/O timeout (in ticks) */
+ int tarpit; /* tarpit timeout, defaults to connect if unspecified */
+ int queue; /* queue timeout, defaults to connect if unspecified */
+ int connect; /* connect timeout (in ticks) */
+ int server; /* server I/O timeout (in ticks) */
+ int httpreq; /* maximum time for complete HTTP request */
+ int httpka; /* maximum time for a new HTTP request when using keep-alive */
+ int check; /* maximum time for complete check */
+ int tunnel; /* I/O timeout to use in tunnel mode (in ticks) */
+ int clientfin; /* timeout to apply to client half-closed connections */
+ int serverfin; /* timeout to apply to server half-closed connections */
+ } timeout;
+ char *id, *desc; /* proxy id (name) and description */
+ struct list pendconns; /* pending connections with no server assigned yet */
+ int nbpend; /* number of pending connections with no server assigned yet */
+ int totpend; /* total number of pending connections on this instance (for stats) */
+ unsigned int feconn, beconn; /* # of active frontend and backend streams */
+ struct freq_ctr fe_req_per_sec; /* HTTP requests per second on the frontend */
+ struct freq_ctr fe_conn_per_sec; /* received connections per second on the frontend */
+ struct freq_ctr fe_sess_per_sec; /* accepted sessions per second on the frontend (after tcp rules) */
+ struct freq_ctr be_sess_per_sec; /* sessions per second on the backend */
+ unsigned int fe_sps_lim; /* limit on new sessions per second on the frontend */
+ unsigned int fullconn; /* #conns on backend above which servers are used at full load */
+ struct in_addr except_net, except_mask; /* don't x-forward-for for this address. FIXME: should support IPv6 */
+ struct in_addr except_to; /* don't x-original-to for this address. */
+ struct in_addr except_mask_to; /* the netmask for except_to. */
+ char *fwdfor_hdr_name; /* header to use - default: "x-forwarded-for" */
+ char *orgto_hdr_name; /* header to use - default: "x-original-to" */
+ int fwdfor_hdr_len; /* length of "x-forwarded-for" header */
+ int orgto_hdr_len; /* length of "x-original-to" header */
+ char *server_id_hdr_name; /* the header to use to send the server id (name) */
+ int server_id_hdr_len; /* the length of the server id (name) header */
+ int conn_retries; /* maximum number of connect retries */
+ int redispatch_after; /* number of retries before redispatch */
+ unsigned down_trans; /* up-down transitions */
+ unsigned down_time; /* total time the proxy was down */
+ time_t last_change; /* last time the state was changed */
+ int (*accept)(struct stream *s); /* application layer's accept() */
+ struct conn_src conn_src; /* connection source settings */
+ enum obj_type *default_target; /* default target to use for accepted streams or NULL */
+ struct proxy *next;
+
+ unsigned int log_count; /* number of logs produced by the frontend */
+ struct list logsrvs;
+ struct list logformat; /* log_format linked list */
+ struct list logformat_sd; /* log_format linked list for the RFC5424 structured-data part */
+ struct chunk log_tag; /* override default syslog tag */
+ char *header_unique_id; /* unique-id header */
+ struct list format_unique_id; /* unique-id format */
+ int to_log; /* things to be logged (LW_*) */
+ int stop_time; /* date to stop listening, when stopping != 0 (in ticks) */
+ struct hdr_exp *req_exp; /* regular expressions for request headers */
+ struct hdr_exp *rsp_exp; /* regular expressions for response headers */
+ int nb_req_cap, nb_rsp_cap; /* # of headers to be captured */
+ struct cap_hdr *req_cap; /* chained list of request headers to be captured */
+ struct cap_hdr *rsp_cap; /* chained list of response headers to be captured */
+ struct pool_head *req_cap_pool, /* pools of pre-allocated char ** used to build the streams */
+ *rsp_cap_pool;
+ struct list req_add, rsp_add; /* headers to be added */
+ struct pxcounters be_counters; /* backend statistics counters */
+ struct pxcounters fe_counters; /* frontend statistics counters */
+
+ struct list listener_queue; /* list of the temporarily limited listeners because of lack of a proxy resource */
+ struct stktable table; /* table for storing sticking streams */
+
+ struct task *task; /* the associated task, mandatory to manage rate limiting, stopping and resource shortage, NULL if disabled */
+ struct list tcpcheck_rules; /* tcp-check send / expect rules */
+ int grace; /* grace time after stop request */
+ int check_len; /* Length of the HTTP or SSL3 request */
+ char *check_req; /* HTTP or SSL request to use for PR_O_HTTP_CHK|PR_O_SSL3_CHK */
+ char *check_command; /* Command to use for external agent checks */
+ char *check_path; /* PATH environment to use for external agent checks */
+ char *expect_str; /* http-check expected content : string or text version of the regex */
+ struct my_regex *expect_regex; /* http-check expected content */
+ struct chunk errmsg[HTTP_ERR_SIZE]; /* default or customized error messages for known errors */
+ int uuid; /* universally unique proxy ID, used for SNMP */
+ unsigned int backlog; /* force the frontend's listen backlog */
+ unsigned long bind_proc; /* bitmask of processes using this proxy */
+
+ /* warning: these structs are huge, keep them at the bottom */
+ struct sockaddr_storage dispatch_addr; /* the default address to connect to */
+ struct error_snapshot invalid_req, invalid_rep; /* captures of last errors */
+
+ /* used only during configuration parsing */
+ int no_options; /* PR_O_REDISP, PR_O_TRANSP, ... */
+ int no_options2; /* PR_O2_* */
+
+ struct {
+ char *file; /* file where the section appears */
+ int line; /* line where the section appears */
+ struct eb32_node id; /* place in the tree of used IDs */
+ struct eb_root used_listener_id;/* list of listener IDs in use */
+ struct eb_root used_server_id; /* list of server IDs in use */
+ struct list bind; /* list of bind settings */
+ struct list listeners; /* list of listeners belonging to this frontend */
+ struct arg_list args; /* sample arg list that need to be resolved */
+ struct ebpt_node by_name; /* proxies are stored sorted by name here */
+ char *logformat_string; /* log format string */
+ char *lfs_file; /* file name where the logformat string appears (strdup) */
+ int lfs_line; /* line number where the logformat string appears */
+ int uif_line; /* line number where the unique-id-format string appears */
+ char *uif_file; /* file name where the unique-id-format string appears (strdup) */
+ char *uniqueid_format_string; /* unique-id format string */
+ char *logformat_sd_string; /* log format string for the RFC5424 structured-data part */
+ char *lfsd_file; /* file name where the structured-data logformat string for RFC5424 appears (strdup) */
+ int lfsd_line; /* line number where the structured-data logformat string for RFC5424 appears */
+ } conf; /* config information */
+ void *parent; /* parent of the proxy when applicable */
+ struct comp *comp; /* http compression */
+
+ struct {
+ union {
+ struct mailers *m; /* Mailer to send email alerts via */
+ char *name;
+ } mailers;
+ char *from; /* Address to send email alerts from */
+ char *to; /* Address(es) to send email alerts to */
+ char *myhostname; /* Identity to use in HELO command sent to mailer */
+ int level; /* Maximum syslog level of messages to send
+ * email alerts for */
+ int set; /* True if email_alert settings are present */
+ struct email_alertq *queues; /* per-mailer alerts queues */
+ } email_alert;
+
+ int load_server_state_from_file; /* location of the file containing server state.
+ * flag PR_SRV_STATE_FILE_* */
+ char *server_state_file_name; /* used when load_server_state_from_file is set to
+ * PR_SRV_STATE_FILE_LOCAL. Give a specific file name for
+ * this backend. If not specified or void, then the backend
+ * name is used
+ */
+};
+
+struct switching_rule {
+ struct list list; /* list linked to from the proxy */
+ struct acl_cond *cond; /* acl condition to meet */
+ int dynamic; /* this is a dynamic rule using the logformat expression */
+ union {
+ struct proxy *backend; /* target backend */
+ char *name; /* target backend name during config parsing */
+ struct list expr; /* logformat expression to use for dynamic rules */
+ } be;
+};
+
+struct server_rule {
+ struct list list; /* list linked to from the proxy */
+ struct acl_cond *cond; /* acl condition to meet */
+ union {
+ struct server *ptr; /* target server */
+ char *name; /* target server name during config parsing */
+ } srv;
+};
+
+struct persist_rule {
+ struct list list; /* list linked to from the proxy */
+ struct acl_cond *cond; /* acl condition to meet */
+ int type;
+};
+
+struct sticking_rule {
+ struct list list; /* list linked to from the proxy */
+ struct acl_cond *cond; /* acl condition to meet */
+ struct sample_expr *expr; /* fetch expr to fetch key */
+ int flags; /* STK_* */
+ union {
+ struct stktable *t; /* target table */
+ char *name; /* target table name during config parsing */
+ } table;
+};
+
+
+struct redirect_rule {
+ struct list list; /* list linked to from the proxy */
+ struct acl_cond *cond; /* acl condition to meet */
+ int type;
+ int rdr_len;
+ char *rdr_str;
+ struct list rdr_fmt;
+ int code;
+ unsigned int flags;
+ int cookie_len;
+ char *cookie_str;
+};
+
+#endif /* _TYPES_PROXY_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ include/types/queue.h
+ This file defines variables and structures needed for queues.
+
+ Copyright (C) 2000-2006 Willy Tarreau - w@1wt.eu
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation, version 2.1
+ exclusively.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+*/
+
+#ifndef _TYPES_QUEUE_H
+#define _TYPES_QUEUE_H
+
+#include <common/config.h>
+#include <common/mini-clist.h>
+
+#include <types/server.h>
+
+struct stream;
+
+struct pendconn {
+ struct list list; /* chaining ... */
+ struct stream *strm; /* the stream waiting for a connection */
+ struct server *srv; /* the server we are waiting for */
+};
+
+#endif /* _TYPES_QUEUE_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/sample.h
+ * Macros, variables and structures for sample management.
+ *
+ * Copyright (C) 2009-2010 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+ * Copyright (C) 2012-2013 Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_SAMPLE_H
+#define _TYPES_SAMPLE_H
+
+#include <sys/socket.h>
+#include <netinet/in.h>
+
+#include <common/chunk.h>
+#include <common/mini-clist.h>
+
+struct arg;
+
+/* input and output sample types */
+enum {
+ SMP_T_ANY = 0, /* any type */
+ SMP_T_BOOL, /* boolean */
+ SMP_T_SINT, /* signed 64-bit integer type */
+ SMP_T_ADDR, /* ipv4 or ipv6, only used for input type compatibility */
+ SMP_T_IPV4, /* ipv4 type */
+ SMP_T_IPV6, /* ipv6 type */
+ SMP_T_STR, /* char string type */
+ SMP_T_BIN, /* buffer type */
+ SMP_T_METH, /* HTTP method */
+ SMP_TYPES /* number of types, must always be last */
+};
+
+/* Sample sources are used to establish a relation between fetch keywords and
+ * the location where they're about to be used. They're reserved for internal
+ * use and are not meant to be known outside the sample management code.
+ */
+enum {
+ SMP_SRC_INTRN, /* internal context-less information */
+ SMP_SRC_LISTN, /* listener which accepted the connection */
+ SMP_SRC_FTEND, /* frontend which accepted the connection */
+ SMP_SRC_L4CLI, /* L4 information about the client */
+ SMP_SRC_L5CLI, /* fetch uses client information from embryonic session */
+ SMP_SRC_TRACK, /* fetch involves track counters */
+ SMP_SRC_L6REQ, /* fetch uses raw information from the request buffer */
+ SMP_SRC_HRQHV, /* fetch uses volatile information about HTTP request headers (eg: value) */
+ SMP_SRC_HRQHP, /* fetch uses persistent information about HTTP request headers (eg: meth) */
+ SMP_SRC_HRQBO, /* fetch uses information about HTTP request body */
+ SMP_SRC_BKEND, /* fetch uses information about the backend */
+ SMP_SRC_SERVR, /* fetch uses information about the selected server */
+ SMP_SRC_L4SRV, /* fetch uses information about the server L4 connection */
+ SMP_SRC_L5SRV, /* fetch uses information about the server L5 connection */
+ SMP_SRC_L6RES, /* fetch uses raw information from the response buffer */
+ SMP_SRC_HRSHV, /* fetch uses volatile information about HTTP response headers (eg: value) */
+ SMP_SRC_HRSHP, /* fetch uses persistent information about HTTP response headers (eg: status) */
+ SMP_SRC_HRSBO, /* fetch uses information about HTTP response body */
+ SMP_SRC_RQFIN, /* final information about request buffer (eg: tot bytes) */
+ SMP_SRC_RSFIN, /* final information about response buffer (eg: tot bytes) */
+ SMP_SRC_TXFIN, /* final information about the transaction (eg: #comp rate) */
+ SMP_SRC_SSFIN, /* final information about the stream (eg: #requests, final flags) */
+ SMP_SRC_ENTRIES /* nothing after this */
+};
+
+/* Sample checkpoints are a list of places where samples may be used. This is
+ * an internal enum used only to build SMP_VAL_*.
+ */
+enum {
+ SMP_CKP_FE_CON_ACC, /* FE connection accept rules ("tcp request connection") */
+ SMP_CKP_FE_SES_ACC, /* FE stream accept rules (to come soon) */
+ SMP_CKP_FE_REQ_CNT, /* FE request content rules ("tcp request content") */
+ SMP_CKP_FE_HRQ_HDR, /* FE HTTP request headers (rules, headers, monitor, stats, redirect) */
+ SMP_CKP_FE_HRQ_BDY, /* FE HTTP request body */
+ SMP_CKP_FE_SET_BCK, /* FE backend switching rules ("use_backend") */
+ SMP_CKP_BE_REQ_CNT, /* BE request content rules ("tcp request content") */
+ SMP_CKP_BE_HRQ_HDR, /* BE HTTP request headers (rules, headers, monitor, stats, redirect) */
+ SMP_CKP_BE_HRQ_BDY, /* BE HTTP request body */
+ SMP_CKP_BE_SET_SRV, /* BE server switching rules ("use_server", "balance", "force-persist", "stick", ...) */
+ SMP_CKP_BE_SRV_CON, /* BE server connect (eg: "source") */
+ SMP_CKP_BE_RES_CNT, /* BE response content rules ("tcp response content") */
+ SMP_CKP_BE_HRS_HDR, /* BE HTTP response headers (rules, headers) */
+ SMP_CKP_BE_HRS_BDY, /* BE HTTP response body (stick-store rules are there) */
+ SMP_CKP_BE_STO_RUL, /* BE stick-store rules */
+ SMP_CKP_FE_RES_CNT, /* FE response content rules ("tcp response content") */
+ SMP_CKP_FE_HRS_HDR, /* FE HTTP response headers (rules, headers) */
+ SMP_CKP_FE_HRS_BDY, /* FE HTTP response body */
+ SMP_CKP_FE_LOG_END, /* FE log at the end of the txn/stream */
+ SMP_CKP_ENTRIES /* nothing after this */
+};
+
+/* SMP_USE_* are flags used to declare fetch keywords. Fetch methods are
+ * associated with bitfields composed of these values, generally only one, to
+ * indicate where the contents may be sampled. Some fetches are ambiguous as
+ * they apply to either the request or the response depending on the context,
+ * so they will have 2 of these bits (eg: hdr(), payload(), ...). These are
+ * stored in smp->use.
+ */
+enum {
+ SMP_USE_INTRN = 1 << SMP_SRC_INTRN, /* internal context-less information */
+ SMP_USE_LISTN = 1 << SMP_SRC_LISTN, /* listener which accepted the connection */
+ SMP_USE_FTEND = 1 << SMP_SRC_FTEND, /* frontend which accepted the connection */
+ SMP_USE_L4CLI = 1 << SMP_SRC_L4CLI, /* L4 information about the client */
+ SMP_USE_L5CLI = 1 << SMP_SRC_L5CLI, /* fetch uses client information from embryonic session */
+ SMP_USE_TRACK = 1 << SMP_SRC_TRACK, /* fetch involves track counters */
+ SMP_USE_L6REQ = 1 << SMP_SRC_L6REQ, /* fetch uses raw information from the request buffer */
+ SMP_USE_HRQHV = 1 << SMP_SRC_HRQHV, /* fetch uses volatile information about HTTP request headers (eg: value) */
+ SMP_USE_HRQHP = 1 << SMP_SRC_HRQHP, /* fetch uses persistent information about HTTP request headers (eg: meth) */
+ SMP_USE_HRQBO = 1 << SMP_SRC_HRQBO, /* fetch uses information about HTTP request body */
+ SMP_USE_BKEND = 1 << SMP_SRC_BKEND, /* fetch uses information about the backend */
+ SMP_USE_SERVR = 1 << SMP_SRC_SERVR, /* fetch uses information about the selected server */
+ SMP_USE_L4SRV = 1 << SMP_SRC_L4SRV, /* fetch uses information about the server L4 connection */
+ SMP_USE_L5SRV = 1 << SMP_SRC_L5SRV, /* fetch uses information about the server L5 connection */
+ SMP_USE_L6RES = 1 << SMP_SRC_L6RES, /* fetch uses raw information from the response buffer */
+ SMP_USE_HRSHV = 1 << SMP_SRC_HRSHV, /* fetch uses volatile information about HTTP response headers (eg: value) */
+ SMP_USE_HRSHP = 1 << SMP_SRC_HRSHP, /* fetch uses persistent information about HTTP response headers (eg: status) */
+ SMP_USE_HRSBO = 1 << SMP_SRC_HRSBO, /* fetch uses information about HTTP response body */
+ SMP_USE_RQFIN = 1 << SMP_SRC_RQFIN, /* final information about request buffer (eg: tot bytes) */
+ SMP_USE_RSFIN = 1 << SMP_SRC_RSFIN, /* final information about response buffer (eg: tot bytes) */
+ SMP_USE_TXFIN = 1 << SMP_SRC_TXFIN, /* final information about the transaction (eg: #comp rate) */
+ SMP_USE_SSFIN = 1 << SMP_SRC_SSFIN, /* final information about the stream (eg: #requests, final flags) */
+
+ /* This composite one is useful to detect if an hdr_idx needs to be allocated */
+ SMP_USE_HTTP_ANY = SMP_USE_HRQHV | SMP_USE_HRQHP | SMP_USE_HRQBO |
+ SMP_USE_HRSHV | SMP_USE_HRSHP | SMP_USE_HRSBO,
+};
+
+/* Sample validity is computed from the fetch sources above when keywords
+ * are registered. Each fetch method may be used at different locations. The
+ * configuration parser will check whether the fetches are compatible with the
+ * location where they're used. These are stored in smp->val.
+ */
+enum {
+ SMP_VAL___________ = 0, /* Just used as a visual marker */
+ SMP_VAL_FE_CON_ACC = 1 << SMP_CKP_FE_CON_ACC, /* FE connection accept rules ("tcp request connection") */
+ SMP_VAL_FE_SES_ACC = 1 << SMP_CKP_FE_SES_ACC, /* FE stream accept rules (to come soon) */
+ SMP_VAL_FE_REQ_CNT = 1 << SMP_CKP_FE_REQ_CNT, /* FE request content rules ("tcp request content") */
+ SMP_VAL_FE_HRQ_HDR = 1 << SMP_CKP_FE_HRQ_HDR, /* FE HTTP request headers (rules, headers, monitor, stats, redirect) */
+ SMP_VAL_FE_HRQ_BDY = 1 << SMP_CKP_FE_HRQ_BDY, /* FE HTTP request body */
+ SMP_VAL_FE_SET_BCK = 1 << SMP_CKP_FE_SET_BCK, /* FE backend switching rules ("use_backend") */
+ SMP_VAL_BE_REQ_CNT = 1 << SMP_CKP_BE_REQ_CNT, /* BE request content rules ("tcp request content") */
+ SMP_VAL_BE_HRQ_HDR = 1 << SMP_CKP_BE_HRQ_HDR, /* BE HTTP request headers (rules, headers, monitor, stats, redirect) */
+ SMP_VAL_BE_HRQ_BDY = 1 << SMP_CKP_BE_HRQ_BDY, /* BE HTTP request body */
+ SMP_VAL_BE_SET_SRV = 1 << SMP_CKP_BE_SET_SRV, /* BE server switching rules ("use_server", "balance", "force-persist", "stick", ...) */
+ SMP_VAL_BE_SRV_CON = 1 << SMP_CKP_BE_SRV_CON, /* BE server connect (eg: "source") */
+ SMP_VAL_BE_RES_CNT = 1 << SMP_CKP_BE_RES_CNT, /* BE response content rules ("tcp response content") */
+ SMP_VAL_BE_HRS_HDR = 1 << SMP_CKP_BE_HRS_HDR, /* BE HTTP response headers (rules, headers) */
+ SMP_VAL_BE_HRS_BDY = 1 << SMP_CKP_BE_HRS_BDY, /* BE HTTP response body (stick-store rules are there) */
+ SMP_VAL_BE_STO_RUL = 1 << SMP_CKP_BE_STO_RUL, /* BE stick-store rules */
+ SMP_VAL_FE_RES_CNT = 1 << SMP_CKP_FE_RES_CNT, /* FE response content rules ("tcp response content") */
+ SMP_VAL_FE_HRS_HDR = 1 << SMP_CKP_FE_HRS_HDR, /* FE HTTP response headers (rules, headers) */
+ SMP_VAL_FE_HRS_BDY = 1 << SMP_CKP_FE_HRS_BDY, /* FE HTTP response body */
+ SMP_VAL_FE_LOG_END = 1 << SMP_CKP_FE_LOG_END, /* FE log at the end of the txn/stream */
+
+ /* a few combinations to decide what direction to try to fetch (useful for logs) */
+ SMP_VAL_REQUEST = SMP_VAL_FE_CON_ACC | SMP_VAL_FE_SES_ACC | SMP_VAL_FE_REQ_CNT |
+ SMP_VAL_FE_HRQ_HDR | SMP_VAL_FE_HRQ_BDY | SMP_VAL_FE_SET_BCK |
+ SMP_VAL_BE_REQ_CNT | SMP_VAL_BE_HRQ_HDR | SMP_VAL_BE_HRQ_BDY |
+ SMP_VAL_BE_SET_SRV,
+
+ SMP_VAL_RESPONSE = SMP_VAL_BE_SRV_CON | SMP_VAL_BE_RES_CNT | SMP_VAL_BE_HRS_HDR |
+ SMP_VAL_BE_HRS_BDY | SMP_VAL_BE_STO_RUL | SMP_VAL_FE_RES_CNT |
+ SMP_VAL_FE_HRS_HDR | SMP_VAL_FE_HRS_BDY | SMP_VAL_FE_LOG_END,
+};
+
+extern const unsigned int fetch_cap[SMP_SRC_ENTRIES];
+
+/* Sample fetch options are passed to sample fetch functions to add precision
+ * about what is desired :
+ * - fetch direction (req/resp)
+ * - intermediary / final fetch
+ */
+enum {
+ SMP_OPT_DIR_REQ = 0, /* direction = request */
+ SMP_OPT_DIR_RES = 1, /* direction = response */
+ SMP_OPT_DIR = (SMP_OPT_DIR_REQ|SMP_OPT_DIR_RES), /* mask to get direction */
+ SMP_OPT_FINAL = 2, /* final fetch, contents won't change anymore */
+ SMP_OPT_ITERATE = 4, /* fetches may be iterated if supported (for ACLs) */
+};
+
+/* Flags used to describe fetched samples. MAY_CHANGE indicates that the result
+ * of the fetch might still evolve, for instance because of more data expected,
+ * even if the fetch has failed. VOL_* indicates how long a result may be cached.
+ */
+enum {
+ SMP_F_NOT_LAST = 1 << 0, /* other occurrences might exist for this sample */
+ SMP_F_MAY_CHANGE = 1 << 1, /* sample is unstable and might change (eg: request length) */
+ SMP_F_VOL_TEST = 1 << 2, /* result must not survive longer than the test (eg: time) */
+ SMP_F_VOL_1ST = 1 << 3, /* result sensitive to changes in first line (eg: URI) */
+ SMP_F_VOL_HDR = 1 << 4, /* result sensitive to changes in headers */
+ SMP_F_VOL_TXN = 1 << 5, /* result sensitive to new transaction (eg: HTTP version) */
+ SMP_F_VOL_SESS = 1 << 6, /* result sensitive to new session (eg: src IP) */
+ SMP_F_VOLATILE = (1<<2)|(1<<3)|(1<<4)|(1<<5)|(1<<6), /* any volatility condition */
+ SMP_F_CONST = 1 << 7, /* this sample uses constant memory; duplicate it before making changes */
+};
+
+/* needed below */
+struct session;
+struct stream;
+
+/* Known HTTP methods */
+enum http_meth_t {
+ HTTP_METH_OPTIONS,
+ HTTP_METH_GET,
+ HTTP_METH_HEAD,
+ HTTP_METH_POST,
+ HTTP_METH_PUT,
+ HTTP_METH_DELETE,
+ HTTP_METH_TRACE,
+ HTTP_METH_CONNECT,
+ HTTP_METH_OTHER, /* Must be the last entry */
+} __attribute__((packed));
+
+/* a sample context might be used by any sample fetch function in order to
+ * store information needed across multiple calls (eg: restart point for a
+ * next occurrence). By definition it may store up to 8 pointers, or any
+ * scalar (double, int, long long).
+ */
+union smp_ctx {
+ void *p; /* any pointer */
+ int i; /* any integer */
+ long long ll; /* any long long or smaller */
+ double d; /* any float or double */
+ void *a[8]; /* any array of up to 8 pointers */
+};
+
+struct meth {
+ enum http_meth_t meth;
+ struct chunk str;
+};
+
+union sample_value {
+ long long int sint; /* used for signed 64bits integers */
+ struct in_addr ipv4; /* used for ipv4 addresses */
+ struct in6_addr ipv6; /* used for ipv6 addresses */
+ struct chunk str; /* used for char strings or buffers */
+ struct meth meth; /* used for http method */
+};
+
+/* Used to store sample constants */
+struct sample_data {
+ int type; /* SMP_T_* */
+ union sample_value u; /* sample data */
+};
+
+/* a sample is a typed data element extracted from a stream. It has a type,
+ * contents, validity constraints, and a context for use in iterative calls.
+ */
+struct sample {
+ unsigned int flags; /* SMP_F_* */
+ struct sample_data data;
+ union smp_ctx ctx;
+
+ /* Some sample analyzers (sample fetches or converters) need to
+ * know the attached proxy, session and stream. The sample-fetch
+ * and converter function pointers cannot be called without
+ * these 3 pointers filled.
+ */
+ struct proxy *px;
+ struct session *sess;
+ struct stream *strm;
+ unsigned int opt; /* fetch options (SMP_OPT_*) */
+};
+
+/* Descriptor for a sample conversion */
+struct sample_conv {
+ const char *kw; /* configuration keyword */
+ int (*process)(const struct arg *arg_p,
+ struct sample *smp,
+ void *private); /* process function */
+ unsigned int arg_mask; /* arguments (ARG*()) */
+ int (*val_args)(struct arg *arg_p,
+ struct sample_conv *smp_conv,
+ const char *file, int line,
+ char **err_msg); /* argument validation function */
+ unsigned int in_type; /* expected input sample type */
+ unsigned int out_type; /* output sample type */
+ void *private; /* private values. only used by maps and Lua */
+};
+
+/* sample conversion expression */
+struct sample_conv_expr {
+ struct list list; /* member of a sample_expr */
+ struct sample_conv *conv; /* sample conversion used */
+ struct arg *arg_p; /* optional arguments */
+};
+
+/* Descriptor for a sample fetch method */
+struct sample_fetch {
+ const char *kw; /* configuration keyword */
+ int (*process)(const struct arg *arg_p,
+ struct sample *smp,
+ const char *kw, /* fetch processing function */
+ void *private); /* private value. */
+ unsigned int arg_mask; /* arguments (ARG*()) */
+ int (*val_args)(struct arg *arg_p,
+ char **err_msg); /* argument validation function */
+ unsigned long out_type; /* output sample type */
+ unsigned int use; /* fetch source (SMP_USE_*) */
+ unsigned int val; /* fetch validity (SMP_VAL_*) */
+ void *private; /* private values. only used by Lua */
+};
+
+/* sample expression */
+struct sample_expr {
+ struct list list; /* member of list of sample, currently not used */
+ struct sample_fetch *fetch; /* sample fetch method */
+ struct arg *arg_p; /* optional pointer to arguments to fetch function */
+ struct list conv_exprs; /* list of conversion expressions to apply */
+};
+
+/* sample fetch keywords list */
+struct sample_fetch_kw_list {
+ struct list list; /* head of sample fetch keyword list */
+ struct sample_fetch kw[VAR_ARRAY]; /* array of sample fetch descriptors */
+};
+
+/* sample conversion keywords list */
+struct sample_conv_kw_list {
+ struct list list; /* head of sample conversion keyword list */
+ struct sample_conv kw[VAR_ARRAY]; /* array of sample conversion descriptors */
+};
+
+typedef int (*sample_cast_fct)(struct sample *smp);
+extern sample_cast_fct sample_casts[SMP_TYPES][SMP_TYPES];
+
+#endif /* _TYPES_SAMPLE_H */
--- /dev/null
+/*
+ * include/types/server.h
+ * This file defines everything related to servers.
+ *
+ * Copyright (C) 2000-2012 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_SERVER_H
+#define _TYPES_SERVER_H
+
+#include <netinet/in.h>
+#include <arpa/inet.h>
+
+#ifdef USE_OPENSSL
+#include <openssl/ssl.h>
+#endif
+
+#include <common/config.h>
+#include <common/mini-clist.h>
+#include <eb32tree.h>
+
+#include <types/connection.h>
+#include <types/counters.h>
+#include <types/freq_ctr.h>
+#include <types/obj_type.h>
+#include <types/proxy.h>
+#include <types/queue.h>
+#include <types/task.h>
+#include <types/checks.h>
+
+
+/* server states. Only SRV_ST_STOPPED indicates a down server. */
+enum srv_state {
+ SRV_ST_STOPPED = 0, /* the server is down. Please keep set to zero. */
+ SRV_ST_STARTING, /* the server is warming up (up but throttled) */
+ SRV_ST_RUNNING, /* the server is fully up */
+ SRV_ST_STOPPING, /* the server is up but soft-stopping (eg: 404) */
+};
+
+/* Administrative status : a server runs in one of these 3 states :
+ * - READY : normal mode
+ * - DRAIN : accepts no new connections, equivalent to weight == 0
+ * - MAINT : maintenance mode, no more traffic nor health checks.
+ *
+ * Each server may be in maintenance by itself or may inherit this status from
+ * another server it tracks. It can also be in drain mode by itself or inherit
+ * it from another server. Let's store these origins here as flags. These flags
+ * are combined this way :
+ *
+ * FMAINT IMAINT FDRAIN IDRAIN Resulting state
+ * 0 0 0 0 READY
+ * 0 0 0 1 DRAIN
+ * 0 0 1 x DRAIN
+ * 0 1 x x MAINT
+ * 1 x x x MAINT
+ *
+ * This can be simplified this way :
+ *
+ * state_str = (state & MAINT) ? "MAINT" : (state & DRAIN) ? "DRAIN" : "READY"
+ */
+enum srv_admin {
+ SRV_ADMF_FMAINT = 0x01, /* the server was explicitly forced into maintenance */
+ SRV_ADMF_IMAINT = 0x02, /* the server has inherited the maintenance status from a tracked server */
+ SRV_ADMF_MAINT = 0x03, /* mask to check if any maintenance flag is present */
+ SRV_ADMF_CMAINT = 0x04, /* the server is in maintenance because of the configuration */
+ SRV_ADMF_FDRAIN = 0x08, /* the server was explicitly forced into drain state */
+ SRV_ADMF_IDRAIN = 0x10, /* the server has inherited the drain status from a tracked server */
+ SRV_ADMF_DRAIN = 0x18, /* mask to check if any drain flag is present */
+};
+
+/* server-state-file version */
+#define SRV_STATE_FILE_VERSION 1
+#define SRV_STATE_FILE_VERSION_MIN 1
+#define SRV_STATE_FILE_VERSION_MAX 1
+#define SRV_STATE_FILE_FIELD_NAMES "be_id be_name srv_id srv_name srv_addr srv_op_state srv_admin_state srv_uweight srv_iweight srv_time_since_last_change srv_check_status srv_check_result srv_check_health srv_check_state srv_agent_state bk_f_forced_id srv_f_forced_id"
+#define SRV_STATE_FILE_MAX_FIELDS 18
+#define SRV_STATE_FILE_NB_FIELDS_VERSION_1 18
+#define SRV_STATE_LINE_MAXLEN 512
+
+
+/* server flags */
+#define SRV_F_BACKUP 0x0001 /* this server is a backup server */
+#define SRV_F_MAPPORTS 0x0002 /* this server uses mapped ports */
+#define SRV_F_NON_STICK 0x0004 /* never add connections allocated to this server to a stick table */
+#define SRV_F_USE_NS_FROM_PP 0x0008 /* use namespace associated with connection if present */
+#define SRV_F_FORCED_ID 0x0010 /* server's ID was forced in the configuration */
+
+/* configured server options for send-proxy (server->pp_opts) */
+#define SRV_PP_V1 0x0001 /* proxy protocol version 1 */
+#define SRV_PP_V2 0x0002 /* proxy protocol version 2 */
+#define SRV_PP_V2_SSL 0x0004 /* proxy protocol version 2 with SSL */
+#define SRV_PP_V2_SSL_CN 0x0008 /* proxy protocol version 2 with SSL and CN */
+
+/* functions which act on servers need to return various errors */
+#define SRV_STATUS_OK 0 /* everything is OK. */
+#define SRV_STATUS_INTERNAL 1 /* other unrecoverable errors. */
+#define SRV_STATUS_NOSRV 2 /* no server is available */
+#define SRV_STATUS_FULL 3 /* the/all server(s) are saturated */
+#define SRV_STATUS_QUEUED 4 /* the/all server(s) are saturated but the connection was queued */
+
+/* various constants */
+#define SRV_UWGHT_RANGE 256
+#define SRV_UWGHT_MAX (SRV_UWGHT_RANGE)
+#define SRV_EWGHT_RANGE (SRV_UWGHT_RANGE * BE_WEIGHT_SCALE)
+#define SRV_EWGHT_MAX (SRV_UWGHT_MAX * BE_WEIGHT_SCALE)
+
+#ifdef USE_OPENSSL
+/* server ssl options */
+#define SRV_SSL_O_NONE 0x0000
+#define SRV_SSL_O_NO_VMASK 0x000F /* 'no' protocol version mask */
+#define SRV_SSL_O_NO_SSLV3 0x0001 /* disable SSLv3 */
+#define SRV_SSL_O_NO_TLSV10 0x0002 /* disable TLSv1.0 */
+#define SRV_SSL_O_NO_TLSV11 0x0004 /* disable TLSv1.1 */
+#define SRV_SSL_O_NO_TLSV12 0x0008 /* disable TLSv1.2 */
+/* 0x000F reserved for 'no' protocol version options */
+#define SRV_SSL_O_USE_VMASK 0x00F0 /* 'force' protocol version mask */
+#define SRV_SSL_O_USE_SSLV3 0x0010 /* force SSLv3 */
+#define SRV_SSL_O_USE_TLSV10 0x0020 /* force TLSv1.0 */
+#define SRV_SSL_O_USE_TLSV11 0x0040 /* force TLSv1.1 */
+#define SRV_SSL_O_USE_TLSV12 0x0080 /* force TLSv1.2 */
+/* 0x00F0 reserved for 'force' protocol version options */
+#define SRV_SSL_O_NO_TLS_TICKETS 0x0100 /* disable session resumption tickets */
+#define SRV_SSL_O_NO_REUSE 0x0200 /* disable session reuse */
+#endif
+
+struct pid_list {
+ struct list list;
+ pid_t pid;
+ struct task *t;
+ int status;
+ int exited;
+};
+
+/* A tree occurrence is a descriptor of a place in a tree, with a pointer back
+ * to the server itself.
+ */
+struct server;
+struct tree_occ {
+ struct server *server;
+ struct eb32_node node;
+};
+
+struct server {
+ enum obj_type obj_type; /* object type == OBJ_TYPE_SERVER */
+ enum srv_state state, prev_state; /* server state among SRV_ST_* */
+ enum srv_admin admin, prev_admin; /* server maintenance status : SRV_ADMF_* */
+ unsigned char flags; /* server flags (SRV_F_*) */
+ struct server *next;
+ int cklen; /* the len of the cookie, to speed up checks */
+ int rdr_len; /* the length of the redirection prefix */
+ char *cookie; /* the id set in the cookie */
+ char *rdr_pfx; /* the redirection prefix */
+ int pp_opts; /* proxy protocol options (SRV_PP_*) */
+
+ struct proxy *proxy; /* the proxy this server belongs to */
+ int served; /* # of active sessions currently being served (ie not pending) */
+ int cur_sess; /* number of currently active sessions (including syn_sent) */
+ unsigned maxconn, minconn; /* max # of active sessions (0 = unlimited), min# for dynamic limit. */
+ int nbpend; /* number of pending connections */
+ int maxqueue; /* maximum number of pending connections allowed */
+ struct freq_ctr sess_per_sec; /* sessions per second on this server */
+ struct srvcounters counters; /* statistics counters */
+
+ struct list pendconns; /* pending connections */
+ struct list actconns; /* active connections */
+ struct list priv_conns; /* private idle connections attached to stream interfaces */
+ struct list idle_conns; /* sharable idle connections attached or not to a stream interface */
+ struct list safe_conns; /* safe idle connections attached to stream interfaces, shared */
+ struct task *warmup; /* the task dedicated to the warmup when slowstart is set */
+
+ struct conn_src conn_src; /* connection source settings */
+
+ struct server *track; /* the server we're currently tracking, if any */
+ struct server *trackers; /* the list of servers tracking us, if any */
+ struct server *tracknext; /* next server tracking <track> in <track>'s trackers list */
+ char *trackit; /* temporary variable to make assignment deferrable */
+ int consecutive_errors; /* current number of consecutive errors */
+ int consecutive_errors_limit; /* number of consecutive errors that triggers an event */
+ short observe, onerror; /* observing mode: one of HANA_OBS_*; what to do on error: one of HANA_ONERR_* */
+ short onmarkeddown; /* what to do when marked down: one of HANA_ONMARKEDDOWN_* */
+ short onmarkedup; /* what to do when marked up: one of HANA_ONMARKEDUP_* */
+ int slowstart; /* slowstart time in seconds (ms in the conf) */
+
+ char *id; /* just for identification */
+ unsigned iweight,uweight, eweight; /* initial weight, user-specified weight, and effective weight */
+ unsigned wscore; /* weight score, used during srv map computation */
+ unsigned prev_eweight; /* eweight before last change */
+ unsigned rweight; /* remainder of weight in the current LB tree */
+ unsigned npos, lpos; /* next and last positions in the LB tree */
+ struct eb32_node lb_node; /* node used for tree-based load balancing */
+ struct eb_root *lb_tree; /* we want to know in what tree the server is */
+ struct server *next_full; /* next server in the temporary full list */
+ unsigned lb_nodes_tot; /* number of allocated lb_nodes (C-HASH) */
+ unsigned lb_nodes_now; /* number of lb_nodes placed in the tree (C-HASH) */
+ struct tree_occ *lb_nodes; /* lb_nodes_tot * struct tree_occ */
+
+ const struct netns_entry *netns; /* contains network namespace name or NULL. Network namespace comes from configuration */
+ /* warning, these structs are huge, keep them at the bottom */
+ struct sockaddr_storage addr; /* the address to connect to */
+ struct xprt_ops *xprt; /* transport-layer operations */
+ unsigned down_time; /* total time the server was down */
+ time_t last_change; /* last time the state was changed */
+
+ int puid; /* proxy-unique server ID, used for SNMP, and "first" LB algo */
+ int tcp_ut; /* for TCP, user timeout */
+
+ struct check check; /* health-check specific configuration */
+ struct check agent; /* agent specific configuration */
+
+ char *resolvers_id; /* resolvers section used by this server */
+ char *hostname; /* server hostname */
+ struct dns_resolution *resolution; /* server name resolution */
+ int resolver_family_priority; /* which IP family should the resolver use when both are returned */
+
+#ifdef USE_OPENSSL
+ int use_ssl; /* ssl enabled */
+ struct {
+ SSL_CTX *ctx;
+ SSL_SESSION *reused_sess;
+ char *ciphers; /* cipher suite to use if non-null */
+ int options; /* ssl options */
+ int verify; /* verify method (set of SSL_VERIFY_* flags) */
+ char *verify_host; /* hostname of certificate must match this host */
+ char *ca_file; /* CAfile to use on verify */
+ char *crl_file; /* CRLfile to use on verify */
+ char *client_crt; /* client certificate to send */
+ struct sample_expr *sni; /* sample expression for SNI */
+ } ssl_ctx;
+#endif
+ struct {
+ const char *file; /* file where the section appears */
+ int line; /* line where the section appears */
+ struct eb32_node id; /* place in the tree of used IDs */
+ } conf; /* config information */
+};
+
+/* Descriptor for a "server" keyword. The ->parse() function returns 0 in case of
+ * success, or a combination of ERR_* flags if an error is encountered. The
+ * function pointer can be NULL if not implemented. The function also has
+ * access to the current "server" config line. The ->skip value tells the parser
+ * how many words have to be skipped after the keyword. If the function needs to
+ * parse more keywords, it needs to update cur_arg.
+ */
+struct srv_kw {
+ const char *kw;
+ int (*parse)(char **args, int *cur_arg, struct proxy *px, struct server *srv, char **err);
+ int skip; /* minimum number of args to skip, for use when kw is not handled */
+ int default_ok; /* non-zero if kw is supported in default-server section */
+};
+
+/*
+ * A keyword list. It is a NULL-terminated array of keywords. It embeds a
+ * struct list in order to be linked to other lists, allowing it to easily
+ * be declared where it is needed, and linked without duplicating data nor
+ * allocating memory. It is also possible to indicate a scope for the keywords.
+ */
+struct srv_kw_list {
+ const char *scope;
+ struct list list;
+ struct srv_kw kw[VAR_ARRAY];
+};
+
+#endif /* _TYPES_SERVER_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/session.h
+ * This file defines everything related to sessions.
+ *
+ * Copyright (C) 2000-2015 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_SESSION_H
+#define _TYPES_SESSION_H
+
+
+#include <sys/time.h>
+#include <unistd.h>
+#include <netinet/in.h>
+#include <arpa/inet.h>
+
+#include <common/config.h>
+#include <common/mini-clist.h>
+
+#include <types/obj_type.h>
+#include <types/proxy.h>
+#include <types/stick_table.h>
+#include <types/task.h>
+#include <types/vars.h>
+
+struct session {
+ struct proxy *fe; /* the proxy this session depends on for the client side */
+ struct listener *listener; /* the listener by which the request arrived */
+ enum obj_type *origin; /* the connection / applet which initiated this session */
+ struct timeval accept_date; /* date of the session's accept() in user date */
+ struct timeval tv_accept; /* date of the session's accept() in internal date (monotonic) */
+ struct stkctr stkctr[MAX_SESS_STKCTR]; /* stick counters for tcp-connection */
+ struct vars vars; /* list of variables for the session scope. */
+};
+
+#endif /* _TYPES_SESSION_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/signal.h
+ * Asynchronous signal delivery functions descriptors.
+ *
+ * Copyright 2000-2010 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#ifndef _TYPES_SIGNAL_H
+#define _TYPES_SIGNAL_H
+
+
+#include <signal.h>
+#include <common/config.h>
+#include <common/mini-clist.h>
+#include <common/standard.h>
+
+/* flags for sig_handler->flags */
+#define SIG_F_ONE_SHOOT 0x0001 /* unregister handler before calling it */
+#define SIG_F_TYPE_FCT 0x0002 /* handler is a function + arg */
+#define SIG_F_TYPE_TASK 0x0004 /* handler is a task + reason */
+
+/* those are highly dynamic and stored in pools */
+struct sig_handler {
+ struct list list;
+ void *handler; /* function to call or task to wake up */
+ int arg; /* arg to pass to function, or signals */
+ int flags; /* SIG_F_* */
+};
+
+/* one per signal */
+struct signal_descriptor {
+ int count; /* number of times raised */
+ struct list handlers; /* sig_handler */
+};
+
+#endif /* _TYPES_SIGNAL_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/ssl_sock.h
+ * SSL settings for listeners and servers
+ *
+ * Copyright (C) 2012 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_SSL_SOCK_H
+#define _TYPES_SSL_SOCK_H
+
+#include <openssl/ssl.h>
+#include <ebmbtree.h>
+
+struct sni_ctx {
+ SSL_CTX *ctx; /* context associated to the certificate */
+ int order; /* load order for the certificate */
+ int neg; /* reject if match */
+ struct ebmb_node name; /* node holding the servername value */
+};
+
+extern struct list tlskeys_reference;
+
+struct tls_sess_key {
+ unsigned char name[16];
+ unsigned char aes_key[16];
+ unsigned char hmac_key[16];
+} __attribute__((packed));
+
+struct tls_keys_ref {
+ struct list list; /* Used to chain refs. */
+ char *filename;
+ int unique_id; /* Each pattern reference has a unique id. */
+ struct tls_sess_key *tlskeys;
+ int tls_ticket_enc_index;
+};
+
+#endif /* _TYPES_SSL_SOCK_H */
--- /dev/null
+/*
+ * include/types/stick_table.h
+ * Macros, variables and structures for stick tables management.
+ *
+ * Copyright (C) 2009-2010 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+ * Copyright (C) 2010 Willy Tarreau <w@1wt.eu>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_STICK_TABLE_H
+#define _TYPES_STICK_TABLE_H
+
+#include <sys/socket.h>
+#include <netinet/in.h>
+
+#include <ebtree.h>
+#include <ebmbtree.h>
+#include <eb32tree.h>
+#include <common/memory.h>
+#include <types/freq_ctr.h>
+#include <types/sample.h>
+
+/* The types of extra data we can store in a stick table */
+enum {
+ STKTABLE_DT_SERVER_ID, /* the server ID to use with this stream if > 0 */
+ STKTABLE_DT_GPT0, /* General Purpose Flag 0. */
+ STKTABLE_DT_GPC0, /* General Purpose Counter 0 (unsigned 32-bit integer) */
+ STKTABLE_DT_GPC0_RATE, /* General Purpose Counter 0's event rate */
+ STKTABLE_DT_CONN_CNT, /* cumulated number of connections */
+ STKTABLE_DT_CONN_RATE, /* incoming connection rate */
+ STKTABLE_DT_CONN_CUR, /* concurrent number of connections */
+ STKTABLE_DT_SESS_CNT, /* cumulated number of sessions (accepted connections) */
+ STKTABLE_DT_SESS_RATE, /* accepted sessions rate */
+ STKTABLE_DT_HTTP_REQ_CNT, /* cumulated number of incoming HTTP requests */
+ STKTABLE_DT_HTTP_REQ_RATE,/* incoming HTTP request rate */
+ STKTABLE_DT_HTTP_ERR_CNT, /* cumulated number of HTTP requests errors (4xx) */
+ STKTABLE_DT_HTTP_ERR_RATE,/* HTTP request error rate */
+ STKTABLE_DT_BYTES_IN_CNT, /* cumulated bytes count from client to servers */
+ STKTABLE_DT_BYTES_IN_RATE,/* bytes rate from client to servers */
+ STKTABLE_DT_BYTES_OUT_CNT,/* cumulated bytes count from servers to client */
+ STKTABLE_DT_BYTES_OUT_RATE,/* bytes rate from servers to client */
+ STKTABLE_STATIC_DATA_TYPES,/* number of types above */
+ /* up to STKTABLE_EXTRA_DATA_TYPES extra types may be registered here; the
+ * entry below counts all data types and must always be last.
+ */
+ STKTABLE_DATA_TYPES = STKTABLE_STATIC_DATA_TYPES + STKTABLE_EXTRA_DATA_TYPES
+};
+
+/* The equivalent standard types of the stored data */
+enum {
+ STD_T_SINT = 0, /* data is of type signed int */
+ STD_T_UINT, /* data is of type unsigned int */
+ STD_T_ULL, /* data is of type unsigned long long */
+ STD_T_FRQP, /* data is of type freq_ctr_period */
+};
+
+/* The types of optional arguments to stored data */
+enum {
+ ARG_T_NONE = 0, /* data type takes no argument (default) */
+ ARG_T_INT, /* signed integer */
+ ARG_T_DELAY, /* a delay which supports time units */
+};
+
+/* stick_table extra data. This is mainly used for casting or size computation */
+union stktable_data {
+ /* standard types for easy casting */
+ int std_t_sint;
+ unsigned int std_t_uint;
+ unsigned long long std_t_ull;
+ struct freq_ctr_period std_t_frqp;
+
+ /* types of each storable data */
+ int server_id;
+ unsigned int gpt0;
+ unsigned int gpc0;
+ struct freq_ctr_period gpc0_rate;
+ unsigned int conn_cnt;
+ struct freq_ctr_period conn_rate;
+ unsigned int conn_cur;
+ unsigned int sess_cnt;
+ struct freq_ctr_period sess_rate;
+ unsigned int http_req_cnt;
+ struct freq_ctr_period http_req_rate;
+ unsigned int http_err_cnt;
+ struct freq_ctr_period http_err_rate;
+ unsigned long long bytes_in_cnt;
+ struct freq_ctr_period bytes_in_rate;
+ unsigned long long bytes_out_cnt;
+ struct freq_ctr_period bytes_out_rate;
+};
+
+/* known data types */
+struct stktable_data_type {
+ const char *name; /* name of the data type */
+ int std_type; /* standard type we can use for this data, STD_T_* */
+ int arg_type; /* type of optional argument, ARG_T_* */
+};
+
+/* stick table key type flags */
+#define STK_F_CUSTOM_KEYSIZE 0x00000001 /* this table's key size is configurable */
+
+/* stick table keyword type */
+struct stktable_type {
+ const char *kw; /* keyword string */
+ int flags; /* type flags */
+ size_t default_size; /* default key size */
+};
+
+extern struct stktable_type stktable_types[];
+
+/* Sticky session.
+ * Any additional data related to the sticky session is installed *before*
+ * stksess (with negative offsets). This allows us to run variable-sized
+ * keys and variable-sized data without making use of intermediate pointers.
+ */
+struct stksess {
+ unsigned int expire; /* session expiration date */
+ unsigned int ref_cnt; /* reference count, can only purge when zero */
+ struct eb32_node exp; /* ebtree node used to hold the session in expiration tree */
+ struct eb32_node upd; /* ebtree node used to hold the update sequence tree */
+ struct ebmb_node key; /* ebtree node used to hold the session in table */
+ /* WARNING! do not put anything after <key>, it's followed by the key data */
+};
+
+
+/* stick table */
+struct stktable {
+ char *id; /* table id name */
+ struct eb_root keys; /* head of sticky session tree */
+ struct eb_root exps; /* head of sticky session expiration tree */
+ struct eb_root updates; /* head of sticky updates sequence tree */
+ struct pool_head *pool; /* pool used to allocate sticky sessions */
+ struct task *exp_task; /* expiration task */
+ struct task *sync_task; /* sync task */
+ unsigned int update;
+ unsigned int localupdate;
+ unsigned int commitupdate;/* used to identify the latest local updates
+ pending for sync */
+ unsigned int syncing; /* number of sync tasks watching this table now */
+ union {
+ struct peers *p; /* sync peers */
+ char *name;
+ } peers;
+
+ unsigned long type; /* type of table (determines key format) */
+ size_t key_size; /* size of a key, maximum size in case of string */
+ unsigned int size; /* maximum number of sticky sessions in table */
+ unsigned int current; /* number of sticky sessions currently in table */
+ int nopurge; /* if non-zero, don't purge sticky sessions when full */
+ int exp_next; /* next expiration date (ticks) */
+ int expire; /* time to live for sticky sessions (milliseconds) */
+ int data_size; /* the size of the data that is prepended *before* stksess */
+ int data_ofs[STKTABLE_DATA_TYPES]; /* negative offsets of present data types, or 0 if absent */
+ union {
+ int i;
+ unsigned int u;
+ void *p;
+ } data_arg[STKTABLE_DATA_TYPES]; /* optional argument of each data type */
+};
+
+extern struct stktable_data_type stktable_data_types[STKTABLE_DATA_TYPES];
+
+/* stick table key */
+struct stktable_key {
+ void *key; /* pointer on key buffer */
+ size_t key_len; /* data length to read in the buffer, in case of a null-terminated string */
+};
+
+/* WARNING: if new fields are added, they must be initialized in stream_accept()
+ * and freed in stream_free() !
+ */
+#define STKCTR_TRACK_BACKEND 1
+#define STKCTR_TRACK_CONTENT 2
+
+/* stick counter. The <entry> member is a composite address (caddr) made of a
+ * pointer to an stksess struct, and two flags among STKCTR_TRACK_* above.
+ */
+struct stkctr {
+ unsigned long entry; /* entry containing counters currently being tracked by this stream */
+ struct stktable *table; /* table the counters above belong to (undefined if counters are null) */
+};
+
+/* parameters to configure tracked counters */
+struct track_ctr_prm {
+ struct sample_expr *expr; /* expression used as the key */
+ union {
+ struct stktable *t; /* a pointer to the table */
+ char *n; /* or its name during parsing. */
+ } table;
+};
+
+#endif /* _TYPES_STICK_TABLE_H */
--- /dev/null
+/*
+ * include/types/stream.h
+ * This file defines everything related to streams.
+ *
+ * Copyright (C) 2000-2015 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_STREAM_H
+#define _TYPES_STREAM_H
+
+
+#include <sys/time.h>
+#include <unistd.h>
+#include <netinet/in.h>
+#include <arpa/inet.h>
+
+#include <common/config.h>
+#include <common/mini-clist.h>
+
+#include <types/channel.h>
+#include <types/compression.h>
+#include <types/hlua.h>
+#include <types/obj_type.h>
+#include <types/proto_http.h>
+#include <types/proxy.h>
+#include <types/queue.h>
+#include <types/server.h>
+#include <types/session.h>
+#include <types/stream_interface.h>
+#include <types/task.h>
+#include <types/stick_table.h>
+#include <types/vars.h>
+
+
+/* Various Stream Flags, bit values 0x01 to 0x800 (shift 0) */
+#define SF_DIRECT 0x00000001 /* connection made on the server matching the client cookie */
+#define SF_ASSIGNED 0x00000002 /* no need to assign a server to this stream */
+#define SF_ADDR_SET 0x00000004 /* this stream's server address has been set */
+#define SF_BE_ASSIGNED 0x00000008 /* a backend was assigned. Conns are accounted. */
+
+#define SF_FORCE_PRST 0x00000010 /* force persistence here, even if server is down */
+#define SF_MONITOR 0x00000020 /* this stream comes from a monitoring system */
+#define SF_CURR_SESS 0x00000040 /* a connection is currently being counted on the server */
+#define SF_INITIALIZED 0x00000080 /* the stream was fully initialized */
+#define SF_REDISP 0x00000100 /* set if this stream was redispatched from one server to another */
+#define SF_CONN_TAR 0x00000200 /* set if this stream is turning around before reconnecting */
+#define SF_REDIRECTABLE 0x00000400 /* set if this stream is redirectable (GET or HEAD) */
+#define SF_TUNNEL 0x00000800 /* tunnel-mode stream, nothing to catch after data */
+
+/* stream termination conditions, bit values 0x1000 to 0xb000 (codes 0-11, shift 12) */
+#define SF_ERR_NONE 0x00000000 /* normal end of request */
+#define SF_ERR_LOCAL 0x00001000 /* the proxy locally processed this request => not an error */
+#define SF_ERR_CLITO 0x00002000 /* client time-out */
+#define SF_ERR_CLICL 0x00003000 /* client closed (read/write error) */
+#define SF_ERR_SRVTO 0x00004000 /* server time-out, connect time-out */
+#define SF_ERR_SRVCL 0x00005000 /* server closed (connect/read/write error) */
+#define SF_ERR_PRXCOND 0x00006000 /* the proxy decided to close (deny...) */
+#define SF_ERR_RESOURCE 0x00007000 /* the proxy encountered a lack of local resources (fd, mem, ...) */
+#define SF_ERR_INTERNAL 0x00008000 /* the proxy encountered an internal error */
+#define SF_ERR_DOWN 0x00009000 /* the proxy killed a stream because the backend became unavailable */
+#define SF_ERR_KILLED 0x0000a000 /* the proxy killed a stream because it was asked to do so */
+#define SF_ERR_UP 0x0000b000 /* the proxy killed a stream because a preferred backend became available */
+#define SF_ERR_MASK 0x0000f000 /* mask to get only stream error flags */
+#define SF_ERR_SHIFT 12 /* bit shift */
+
+/* stream state at termination, bit values 0x10000 to 0x70000 (0-7 shift 16) */
+#define SF_FINST_R 0x00010000 /* stream ended during client request */
+#define SF_FINST_C 0x00020000 /* stream ended during server connect */
+#define SF_FINST_H 0x00030000 /* stream ended during server headers */
+#define SF_FINST_D 0x00040000 /* stream ended during data phase */
+#define SF_FINST_L 0x00050000 /* stream ended while pushing last data to client */
+#define SF_FINST_Q 0x00060000 /* stream ended while waiting in queue for a server slot */
+#define SF_FINST_T 0x00070000 /* stream ended tarpitted */
+#define SF_FINST_MASK 0x00070000 /* mask to get only final stream state flags */
+#define SF_FINST_SHIFT 16 /* bit shift */
+
+#define SF_IGNORE_PRST 0x00080000 /* ignore persistence */
+
+#define SF_COMP_READY 0x00100000 /* the compression is initialized */
+#define SF_SRV_REUSED 0x00200000 /* the server-side connection was reused */
+
+/* some external definitions */
+struct strm_logs {
+ int logwait; /* log fields waiting to be collected : LW_* */
+ int level; /* log level to force + 1 if > 0, -1 = no log */
+ struct timeval accept_date; /* date of the stream's accept() in user date */
+ struct timeval tv_accept; /* date of the stream's accept() in internal date (monotonic) */
+ struct timeval tv_request; /* date the request arrives, {0,0} if never occurs */
+ long t_queue; /* delay before the stream gets out of the connect queue, -1 if never occurs */
+ long t_connect; /* delay before the connect() to the server succeeds, -1 if never occurs */
+ long t_data; /* delay before the first data byte from the server ... */
+ unsigned long t_close; /* total stream duration */
+ unsigned long srv_queue_size; /* number of streams waiting for a connect slot on this server at accept() time (in direct assignment) */
+ unsigned long prx_queue_size; /* overall number of streams waiting for a connect slot on this instance at accept() time */
+ long long bytes_in; /* number of bytes transferred from the client to the server */
+ long long bytes_out; /* number of bytes transferred from the server to the client */
+};
+
+struct stream {
+ int flags; /* some flags describing the stream */
+ unsigned int uniq_id; /* unique ID used for the traces */
+ enum obj_type *target; /* target to use for this stream */
+
+ struct channel req; /* request channel */
+ struct channel res; /* response channel */
+
+ struct proxy *be; /* the proxy this stream depends on for the server side */
+
+ struct session *sess; /* the session this stream is attached to */
+
+ struct server *srv_conn; /* stream already has a slot on a server and is not in queue */
+ struct pendconn *pend_pos; /* if not NULL, points to the position in the pending queue */
+
+ struct http_txn *txn; /* current HTTP transaction being processed. Should become a list. */
+
+ struct task *task; /* the task associated with this stream */
+ struct list list; /* position in global streams list */
+ struct list by_srv; /* position in server stream list */
+ struct list back_refs; /* list of users tracking this stream */
+ struct list buffer_wait; /* position in the list of streams waiting for a buffer */
+
+ struct {
+ struct stksess *ts;
+ struct stktable *table;
+ } store[8]; /* tracked stickiness values to store */
+ int store_count;
+ /* 4 unused bytes here */
+
+ struct stkctr stkctr[MAX_SESS_STKCTR]; /* content-aware stick counters */
+
+ char **req_cap; /* array of captures from the request (may be NULL) */
+ char **res_cap; /* array of captures from the response (may be NULL) */
+ struct vars vars_txn; /* list of variables for the txn scope. */
+ struct vars vars_reqres; /* list of variables for the request and resp scope. */
+
+ struct stream_interface si[2]; /* client and server stream interfaces */
+ struct strm_logs logs; /* logs for this stream */
+
+ void (*do_log)(struct stream *s); /* the function to call in order to log (or NULL) */
+ void (*srv_error)(struct stream *s, /* the function to call upon unrecoverable server errors (or NULL) */
+ struct stream_interface *si);
+ struct comp_ctx *comp_ctx; /* HTTP compression context */
+ struct comp_algo *comp_algo; /* HTTP compression algorithm if not NULL */
+ char *unique_id; /* custom unique ID */
+
+ /* These two pointers are used to resume the execution of the rule lists. */
+ struct list *current_rule_list; /* this is used to store the current executed rule list. */
+ void *current_rule; /* this is used to store the current rule to be resumed. */
+ struct hlua hlua; /* lua runtime context */
+};
+
+#endif /* _TYPES_STREAM_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/stream_interface.h
+ * This file describes the stream_interface struct and associated constants.
+ *
+ * Copyright (C) 2000-2014 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_STREAM_INTERFACE_H
+#define _TYPES_STREAM_INTERFACE_H
+
+#include <types/obj_type.h>
+#include <common/config.h>
+
+/* A stream interface must have its own errors independently of the buffer's,
+ * so that applications can rely on what the buffer reports while the stream
+ * interface is performing some retries (eg: connection error). Some states are
+ * transient and do not last beyond process_session().
+ */
+enum si_state {
+	SI_ST_INI = 0,           /* interface not solicited yet */
+ SI_ST_REQ, /* [transient] connection initiation desired and not started yet */
+ SI_ST_QUE, /* interface waiting in queue */
+ SI_ST_TAR, /* interface in turn-around state after failed connect attempt */
+ SI_ST_ASS, /* server just assigned to this interface */
+ SI_ST_CON, /* initiated connection request (resource exists) */
+ SI_ST_CER, /* [transient] previous connection attempt failed (resource released) */
+ SI_ST_EST, /* connection established (resource exists) */
+ SI_ST_DIS, /* [transient] disconnected from other side, but cleanup not done yet */
+	SI_ST_CLO,               /* stream intf closed, might not exist anymore. Buffers shut. */
+} __attribute__((packed));
+
+/* error types reported on the streams interface for more accurate reporting */
+enum {
+ SI_ET_NONE = 0x0000, /* no error yet, leave it to zero */
+ SI_ET_QUEUE_TO = 0x0001, /* queue timeout */
+ SI_ET_QUEUE_ERR = 0x0002, /* queue error (eg: full) */
+ SI_ET_QUEUE_ABRT = 0x0004, /* aborted in queue by external cause */
+ SI_ET_CONN_TO = 0x0008, /* connection timeout */
+ SI_ET_CONN_ERR = 0x0010, /* connection error (eg: no server available) */
+ SI_ET_CONN_ABRT = 0x0020, /* connection aborted by external cause (eg: abort) */
+ SI_ET_CONN_RES = 0x0040, /* connection aborted due to lack of resources */
+ SI_ET_CONN_OTHER = 0x0080, /* connection aborted for other reason (eg: 500) */
+ SI_ET_DATA_TO = 0x0100, /* timeout during data phase */
+ SI_ET_DATA_ERR = 0x0200, /* error during data phase */
+ SI_ET_DATA_ABRT = 0x0400, /* data phase aborted by external cause */
+};
+
+/* flags set after I/O (16 bit) */
+enum {
+ SI_FL_NONE = 0x0000, /* nothing */
+ SI_FL_EXP = 0x0001, /* timeout has expired */
+ SI_FL_ERR = 0x0002, /* a non-recoverable error has occurred */
+ SI_FL_WAIT_ROOM = 0x0004, /* waiting for space to store incoming data */
+ SI_FL_WAIT_DATA = 0x0008, /* waiting for more data to send */
+ SI_FL_ISBACK = 0x0010, /* 0 for front-side SI, 1 for back-side */
+ SI_FL_DONT_WAKE = 0x0020, /* resync in progress, don't wake up */
+ SI_FL_INDEP_STR = 0x0040, /* independent streams = don't update rex on write */
+ SI_FL_NOLINGER = 0x0080, /* may close without lingering. One-shot. */
+ SI_FL_NOHALF = 0x0100, /* no half close, close both sides at once */
+ SI_FL_SRC_ADDR = 0x1000, /* get the source ip/port with getsockname */
+ SI_FL_WANT_PUT = 0x2000, /* an applet would like to put some data into the buffer */
+ SI_FL_WANT_GET = 0x4000, /* an applet would like to get some data from the buffer */
+};
+
+/* A stream interface has 3 parts :
+ * - the buffer side, which interfaces to the buffers.
+ * - the remote side, which describes the state and address of the other side.
+ * - the functions, which the buffer side uses to communicate with the
+ *   remote side.
+ */
+
+/* Note that if an applet is registered, the update function will not be called
+ * by the session handler, so it may be used to resync flags at the end of the
+ * applet handler. See stream_int_update_embedded() for reference.
+ */
+struct stream_interface {
+ /* struct members used by the "buffer" side */
+ enum si_state state; /* SI_ST* */
+ enum si_state prev_state;/* SI_ST*, copy of previous state */
+ unsigned short flags; /* SI_FL_* */
+ unsigned int exp; /* wake up time for connect, queue, turn-around, ... */
+ enum obj_type *end; /* points to the end point (connection or appctx) */
+ struct si_ops *ops; /* general operations at the stream interface layer */
+
+ /* struct members below are the "remote" part, as seen from the buffer side */
+ unsigned int err_type; /* first error detected, one of SI_ET_* */
+ int conn_retries; /* number of connect retries left */
+};
+
+/* operations available on a stream-interface */
+struct si_ops {
+ void (*update)(struct stream_interface *); /* I/O update function */
+ void (*chk_rcv)(struct stream_interface *); /* chk_rcv function */
+ void (*chk_snd)(struct stream_interface *); /* chk_snd function */
+ void (*shutr)(struct stream_interface *); /* shut read function */
+ void (*shutw)(struct stream_interface *); /* shut write function */
+};
+
+#endif /* _TYPES_STREAM_INTERFACE_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * include/types/task.h
+ * Macros, variables and structures for task management.
+ *
+ * Copyright (C) 2000-2010 Willy Tarreau - w@1wt.eu
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef _TYPES_TASK_H
+#define _TYPES_TASK_H
+
+#include <sys/time.h>
+
+#include <common/config.h>
+#include <common/mini-clist.h>
+#include <eb32tree.h>
+
+/* values for task->state */
+#define TASK_SLEEPING 0x00 /* task sleeping */
+#define TASK_RUNNING 0x01 /* the task is currently running */
+#define TASK_WOKEN_INIT 0x02 /* woken up for initialisation purposes */
+#define TASK_WOKEN_TIMER 0x04 /* woken up because of expired timer */
+#define TASK_WOKEN_IO 0x08 /* woken up because of completed I/O */
+#define TASK_WOKEN_SIGNAL 0x10 /* woken up by a system signal */
+#define TASK_WOKEN_MSG 0x20 /* woken up by another task's message */
+#define TASK_WOKEN_RES 0x40 /* woken up because of available resource */
+#define TASK_WOKEN_OTHER 0x80 /* woken up for an unspecified reason */
+
+/* use this to check a task state or to clean it up before queueing */
+#define TASK_WOKEN_ANY (TASK_WOKEN_OTHER|TASK_WOKEN_INIT|TASK_WOKEN_TIMER| \
+ TASK_WOKEN_IO|TASK_WOKEN_SIGNAL|TASK_WOKEN_MSG| \
+ TASK_WOKEN_RES)
+
+/* Additional wakeup info may be passed in the state by left-shifting the value
+ * by this number of bits. Not more than 8 bits are guaranteed to be delivered.
+ * System signals may use that too.
+ */
+#define TASK_REASON_SHIFT 8
+
+/* The base for all tasks */
+struct task {
+ struct eb32_node rq; /* ebtree node used to hold the task in the run queue */
+ unsigned short state; /* task state : bit field of TASK_* */
+ short nice; /* the task's current nice value from -1024 to +1024 */
+ unsigned int calls; /* number of times ->process() was called */
+ struct task * (*process)(struct task *t); /* the function which processes the task */
+ void *context; /* the task's context */
+ struct eb32_node wq; /* ebtree node used to hold the task in the wait queue */
+ int expire; /* next expiration date for this task, in ticks */
+};
+
+/*
+ * The task callback (->process) is responsible for updating ->expire. It must
+ * return a pointer to the task itself, except if the task has been deleted, in
+ * which case it returns NULL so that the scheduler knows it must not check the
+ * expire timer. The scheduler will requeue the task at the proper location.
+ */
+
+#endif /* _TYPES_TASK_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ include/types/template.h
+ This file serves as a template for future include files.
+
+ Copyright (C) 2000-2006 Willy Tarreau - w@1wt.eu
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation, version 2.1
+ exclusively.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+*/
+
+#ifndef _TYPES_TEMPLATE_H
+#define _TYPES_TEMPLATE_H
+
+#include <common/config.h>
+
+#endif /* _TYPES_TEMPLATE_H */
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+#ifndef _TYPES_VARS_H
+#define _TYPES_VARS_H
+
+#include <common/mini-clist.h>
+
+#include <types/sample.h>
+
+enum vars_scope {
+ SCOPE_SESS = 0,
+ SCOPE_TXN,
+ SCOPE_REQ,
+ SCOPE_RES,
+};
+
+struct vars {
+ struct list head;
+ enum vars_scope scope;
+ unsigned int size;
+};
+
+/* This struct describes a variable. */
+struct var_desc {
+ const char *name; /* Contains the normalized variable name. */
+ enum vars_scope scope;
+};
+
+struct var {
+ struct list l; /* Used for chaining vars. */
+ const char *name; /* Contains the variable name. */
+ struct sample_data data; /* data storage. */
+};
+
+#endif
--- /dev/null
+#include <stdio.h>
+
+#include <common/cfgparse.h>
+#include <common/chunk.h>
+#include <common/buffer.h>
+#include <proto/arg.h>
+#include <proto/log.h>
+#include <proto/proto_http.h>
+#include <proto/sample.h>
+#include <import/xxhash.h>
+#include <import/lru.h>
+#include <import/51d.h>
+
+struct _51d_property_names {
+ struct list list;
+ char *name;
+};
+
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+#define _51DEGREES_CONV_CACHE_KEY "_51d_conv"
+#define _51DEGREES_FETCH_CACHE_KEY "_51d_fetch"
+static struct lru64_head *_51d_lru_tree = NULL;
+static unsigned long long _51d_lru_seed;
+#endif
+
+static int _51d_data_file(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ if (*(args[1]) == 0) {
+ memprintf(err,
+ "'%s' expects a filepath to a 51Degrees trie or pattern data file.",
+ args[0]);
+ return -1;
+ }
+
+ if (global._51degrees.data_file_path)
+ free(global._51degrees.data_file_path);
+ global._51degrees.data_file_path = strdup(args[1]);
+
+ return 0;
+}
+
+static int _51d_property_name_list(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ int cur_arg = 1;
+ struct _51d_property_names *name;
+
+ if (*(args[cur_arg]) == 0) {
+ memprintf(err,
+ "'%s' expects at least one 51Degrees property name.",
+ args[0]);
+ return -1;
+ }
+
+ while (*(args[cur_arg])) {
+		name = calloc(1, sizeof(struct _51d_property_names));
+		if (!name) {
+			memprintf(err, "out of memory.");
+			return -1;
+		}
+ name->name = strdup(args[cur_arg]);
+ LIST_ADDQ(&global._51degrees.property_names, &name->list);
+ ++cur_arg;
+ }
+
+ return 0;
+}
+
+static int _51d_property_separator(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ if (*(args[1]) == 0) {
+ memprintf(err,
+ "'%s' expects a single character.",
+ args[0]);
+ return -1;
+ }
+ if (strlen(args[1]) > 1) {
+ memprintf(err,
+ "'%s' expects a single character, got '%s'.",
+ args[0], args[1]);
+ return -1;
+ }
+
+ global._51degrees.property_separator = *args[1];
+
+ return 0;
+}
+
+static int _51d_cache_size(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ if (*(args[1]) == 0) {
+ memprintf(err,
+ "'%s' expects a positive numeric value.",
+ args[0]);
+ return -1;
+ }
+
+ global._51degrees.cache_size = atoi(args[1]);
+ if (global._51degrees.cache_size < 0) {
+ memprintf(err,
+ "'%s' expects a positive numeric value, got '%s'.",
+ args[0], args[1]);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int _51d_fetch_check(struct arg *arg, char **err_msg)
+{
+ if (global._51degrees.data_file_path)
+ return 1;
+
+ memprintf(err_msg, "51Degrees data file is not specified (parameter '51degrees-data-file')");
+ return 0;
+}
+
+static int _51d_conv_check(struct arg *arg, struct sample_conv *conv,
+ const char *file, int line, char **err_msg)
+{
+ if (global._51degrees.data_file_path)
+ return 1;
+
+ memprintf(err_msg, "51Degrees data file is not specified (parameter '51degrees-data-file')");
+ return 0;
+}
+
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+/* Insert the data associated with the sample into the cache as a fresh item.
+ */
+static void _51d_insert_cache_entry(struct sample *smp, struct lru64 *lru)
+{
+ struct chunk *cache_entry = (struct chunk*)malloc(sizeof(struct chunk));
+
+ if (!cache_entry)
+ return;
+
+ smp->flags |= SMP_F_CONST;
+ cache_entry->str = malloc(smp->data.u.str.len + 1);
+	if (!cache_entry->str) {
+		free(cache_entry); /* don't leak the chunk on allocation failure */
+		return;
+	}
+
+ memcpy(cache_entry->str, smp->data.u.str.str, smp->data.u.str.len);
+ cache_entry->str[smp->data.u.str.len] = 0;
+ cache_entry->len = smp->data.u.str.len;
+ lru64_commit(lru, cache_entry, _51DEGREES_CONV_CACHE_KEY, 0, free);
+}
+
+/* Retrieves the data from the cache and sets the sample data to this string.
+ */
+static void _51d_retrieve_cache_entry(struct sample *smp, struct lru64 *lru)
+{
+ struct chunk *cache_entry = (struct chunk*)lru->data;
+ smp->flags |= SMP_F_CONST;
+ smp->data.u.str.str = cache_entry->str;
+ smp->data.u.str.len = cache_entry->len;
+}
+#endif
+
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+/* Sets the important HTTP headers ahead of the detection
+ */
+static void _51d_set_headers(struct sample *smp, fiftyoneDegreesWorkset *ws)
+{
+ struct hdr_idx *idx;
+ struct hdr_ctx ctx;
+ const struct http_msg *msg;
+ int i;
+
+ idx = &smp->strm->txn->hdr_idx;
+ msg = &smp->strm->txn->req;
+
+ ws->importantHeadersCount = 0;
+
+ for (i = 0; i < global._51degrees.header_count; i++) {
+ ctx.idx = 0;
+ if (http_find_full_header2(
+ (global._51degrees.header_names + i)->str,
+ (global._51degrees.header_names + i)->len,
+ msg->chn->buf->p, idx, &ctx) == 1) {
+ ws->importantHeaders[ws->importantHeadersCount].header = ws->dataSet->httpHeaders + i;
+ ws->importantHeaders[ws->importantHeadersCount].headerValue = ctx.line + ctx.val;
+ ws->importantHeaders[ws->importantHeadersCount].headerValueLength = ctx.vlen;
+ ws->importantHeadersCount++;
+ }
+ }
+}
+#endif
+
+#ifdef FIFTYONEDEGREES_H_TRIE_INCLUDED
+static void _51d_set_device_offsets(struct sample *smp)
+{
+ struct hdr_idx *idx;
+ struct hdr_ctx ctx;
+ const struct http_msg *msg;
+ int index;
+ fiftyoneDegreesDeviceOffsets *offsets = &global._51degrees.device_offsets;
+
+ idx = &smp->strm->txn->hdr_idx;
+ msg = &smp->strm->txn->req;
+ offsets->size = 0;
+
+ for (index = 0; index < global._51degrees.header_count; index++) {
+ ctx.idx = 0;
+ if (http_find_full_header2(
+ (global._51degrees.header_names + index)->str,
+ (global._51degrees.header_names + index)->len,
+ msg->chn->buf->p, idx, &ctx) == 1) {
+ (offsets->firstOffset + offsets->size)->httpHeaderOffset = *(global._51degrees.header_offsets + index);
+ (offsets->firstOffset + offsets->size)->deviceOffset = fiftyoneDegreesGetDeviceOffset(ctx.line + ctx.val);
+ offsets->size++;
+ }
+ }
+}
+#endif
+
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+/* Provides a hash code for the important HTTP headers.
+ */
+unsigned long long _51d_req_hash(const struct arg *args, fiftyoneDegreesWorkset* ws)
+{
+ unsigned long long seed = _51d_lru_seed ^ (long)args;
+ unsigned long long hash = 0;
+ int i;
+ for(i = 0; i < ws->importantHeadersCount; i++) {
+ hash ^= ws->importantHeaders[i].header->headerNameOffset;
+ hash ^= XXH64(ws->importantHeaders[i].headerValue,
+ ws->importantHeaders[i].headerValueLength,
+ seed);
+ }
+ return hash;
+}
+#endif
+
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+static void _51d_process_match(const struct arg *args, struct sample *smp, fiftyoneDegreesWorkset* ws)
+{
+ char *methodName;
+#endif
+#ifdef FIFTYONEDEGREES_H_TRIE_INCLUDED
+static void _51d_process_match(const struct arg *args, struct sample *smp)
+{
+ char valuesBuffer[1024];
+ char **requiredProperties = fiftyoneDegreesGetRequiredPropertiesNames();
+ int requiredPropertiesCount = fiftyoneDegreesGetRequiredPropertiesCount();
+ fiftyoneDegreesDeviceOffsets *deviceOffsets = &global._51degrees.device_offsets;
+
+#endif
+
+ char no_data[] = "NoData"; /* response when no data could be found */
+ struct chunk *temp = get_trash_chunk();
+ int j, i = 0, found;
+ const char* property_name;
+
+ /* Loop through property names passed to the filter and fetch them from the dataset. */
+ while (args[i].data.str.str) {
+ /* Try to find request property in dataset. */
+ found = 0;
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+ if (strcmp("Method", args[i].data.str.str) == 0) {
+ switch(ws->method) {
+ case EXACT: methodName = "Exact"; break;
+ case NUMERIC: methodName = "Numeric"; break;
+ case NEAREST: methodName = "Nearest"; break;
+ case CLOSEST: methodName = "Closest"; break;
+ default:
+ case NONE: methodName = "None"; break;
+ }
+ chunk_appendf(temp, "%s", methodName);
+ found = 1;
+ }
+ else if (strcmp("Difference", args[i].data.str.str) == 0) {
+ chunk_appendf(temp, "%d", ws->difference);
+ found = 1;
+ }
+ else if (strcmp("Rank", args[i].data.str.str) == 0) {
+ chunk_appendf(temp, "%d", fiftyoneDegreesGetSignatureRank(ws));
+ found = 1;
+ }
+ else {
+ for (j = 0; j < ws->dataSet->requiredPropertyCount; j++) {
+ property_name = fiftyoneDegreesGetPropertyName(ws->dataSet, ws->dataSet->requiredProperties[j]);
+ if (strcmp(property_name, args[i].data.str.str) == 0) {
+ found = 1;
+ fiftyoneDegreesSetValues(ws, j);
+ chunk_appendf(temp, "%s", fiftyoneDegreesGetValueName(ws->dataSet, *ws->values));
+ break;
+ }
+ }
+ }
+#endif
+#ifdef FIFTYONEDEGREES_H_TRIE_INCLUDED
+ found = 0;
+ for (j = 0; j < requiredPropertiesCount; j++) {
+ property_name = requiredProperties[j];
+ if (strcmp(property_name, args[i].data.str.str) == 0 &&
+ fiftyoneDegreesGetValueFromOffsets(deviceOffsets, j, valuesBuffer, 1024) > 0) {
+ found = 1;
+ chunk_appendf(temp, "%s", valuesBuffer);
+ break;
+ }
+ }
+#endif
+ if (!found) {
+ chunk_appendf(temp, "%s", no_data);
+ }
+ /* Add separator. */
+ chunk_appendf(temp, "%c", global._51degrees.property_separator);
+ ++i;
+ }
+
+ if (temp->len) {
+ --temp->len;
+ temp->str[temp->len] = '\0';
+ }
+
+ smp->data.u.str.str = temp->str;
+ smp->data.u.str.len = temp->len;
+}
+
+static int _51d_fetch(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+ fiftyoneDegreesWorkset* ws; /* workset for detection */
+ struct lru64 *lru = NULL;
+#endif
+
+	/* Needed to ensure that the HTTP message has been fully received when
+ * used with TCP operation. Not required for HTTP operation.
+ * Data type has to be reset to ensure the string output is processed
+ * correctly.
+ */
+ CHECK_HTTP_MESSAGE_FIRST();
+ smp->data.type = SMP_T_STR;
+
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+
+ /* Get only the headers needed for device detection so they can be used
+ * with the cache to return previous results. Pattern is slower than
+ * Trie so caching will help improve performance.
+ */
+
+ /* Get a workset from the pool which will later contain detection results. */
+ ws = fiftyoneDegreesWorksetPoolGet(global._51degrees.pool);
+ if (!ws)
+ return 0;
+
+ /* Set the important HTTP headers for this request in the workset. */
+ _51d_set_headers(smp, ws);
+
+ /* Check the cache to see if there's results for these headers already. */
+ if (_51d_lru_tree) {
+ lru = lru64_get(_51d_req_hash(args, ws),
+ _51d_lru_tree, _51DEGREES_FETCH_CACHE_KEY, 0);
+		if (lru && lru->domain) {
+			/* return the workset to the pool before taking the cached result */
+			fiftyoneDegreesWorksetPoolRelease(global._51degrees.pool, ws);
+			_51d_retrieve_cache_entry(smp, lru);
+			return 1;
+		}
+	}
+ }
+
+ fiftyoneDegreesMatchForHttpHeaders(ws);
+
+ _51d_process_match(args, smp, ws);
+
+#endif
+
+#ifdef FIFTYONEDEGREES_H_TRIE_INCLUDED
+
+	/* Trie is very fast, so all the headers can be passed in and the
+	 * result returned faster than with the hashing algorithm.
+	 */
+ _51d_set_device_offsets(smp);
+ _51d_process_match(args, smp);
+
+#endif
+
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+ fiftyoneDegreesWorksetPoolRelease(global._51degrees.pool, ws);
+ if (lru) {
+ _51d_insert_cache_entry(smp, lru);
+ }
+#endif
+
+ return 1;
+}
+
+static int _51d_conv(const struct arg *args, struct sample *smp, void *private)
+{
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+ fiftyoneDegreesWorkset* ws; /* workset for detection */
+ struct lru64 *lru = NULL;
+#endif
+
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+
+ /* Look in the list. */
+ if (_51d_lru_tree) {
+ unsigned long long seed = _51d_lru_seed ^ (long)args;
+
+ lru = lru64_get(XXH64(smp->data.u.str.str, smp->data.u.str.len, seed),
+ _51d_lru_tree, _51DEGREES_CONV_CACHE_KEY, 0);
+ if (lru && lru->domain) {
+ _51d_retrieve_cache_entry(smp, lru);
+ return 1;
+ }
+ }
+
+ /* Create workset. This will later contain detection results. */
+ ws = fiftyoneDegreesWorksetPoolGet(global._51degrees.pool);
+ if (!ws)
+ return 0;
+#endif
+
+ /* Duplicate the data and remove the "const" flag before device detection. */
+ if (!smp_dup(smp))
+ return 0;
+
+ smp->data.u.str.str[smp->data.u.str.len] = '\0';
+
+ /* Perform detection. */
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+ fiftyoneDegreesMatch(ws, smp->data.u.str.str);
+ _51d_process_match(args, smp, ws);
+#endif
+#ifdef FIFTYONEDEGREES_H_TRIE_INCLUDED
+ global._51degrees.device_offsets.firstOffset->deviceOffset = fiftyoneDegreesGetDeviceOffset(smp->data.u.str.str);
+ global._51degrees.device_offsets.size = 1;
+ _51d_process_match(args, smp);
+#endif
+
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+ fiftyoneDegreesWorksetPoolRelease(global._51degrees.pool, ws);
+ if (lru) {
+ _51d_insert_cache_entry(smp, lru);
+ }
+#endif
+
+ return 1;
+}
+
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+void _51d_init_http_headers()
+{
+ int index = 0;
+ const fiftyoneDegreesAsciiString *headerName;
+ fiftyoneDegreesDataSet *ds = &global._51degrees.data_set;
+ global._51degrees.header_count = ds->httpHeadersCount;
+ global._51degrees.header_names = (struct chunk*)malloc(global._51degrees.header_count * sizeof(struct chunk));
+ for (index = 0; index < global._51degrees.header_count; index++) {
+ headerName = fiftyoneDegreesGetString(ds, ds->httpHeaders[index].headerNameOffset);
+ (global._51degrees.header_names + index)->str = (char*)&headerName->firstByte;
+ (global._51degrees.header_names + index)->len = headerName->length - 1;
+ (global._51degrees.header_names + index)->size = (global._51degrees.header_names + index)->len;
+ }
+}
+#endif
+
+#ifdef FIFTYONEDEGREES_H_TRIE_INCLUDED
+void _51d_init_http_headers()
+{
+ int index = 0;
+ global._51degrees.header_count = fiftyoneDegreesGetHttpHeaderCount();
+ global._51degrees.device_offsets.firstOffset = (fiftyoneDegreesDeviceOffset*)malloc(
+ global._51degrees.header_count * sizeof(fiftyoneDegreesDeviceOffset));
+ global._51degrees.header_names = (struct chunk*)malloc(global._51degrees.header_count * sizeof(struct chunk));
+ global._51degrees.header_offsets = (int32_t*)malloc(global._51degrees.header_count * sizeof(int32_t));
+ for (index = 0; index < global._51degrees.header_count; index++) {
+ global._51degrees.header_offsets[index] = fiftyoneDegreesGetHttpHeaderNameOffset(index);
+ global._51degrees.header_names[index].str = fiftyoneDegreesGetHttpHeaderNamePointer(index);
+ global._51degrees.header_names[index].len = strlen(global._51degrees.header_names[index].str);
+ global._51degrees.header_names[index].size = global._51degrees.header_names[index].len;
+ }
+}
+#endif
+
+int init_51degrees(void)
+{
+ int i = 0;
+ struct chunk *temp;
+ struct _51d_property_names *name;
+ char **_51d_property_list = NULL;
+ fiftyoneDegreesDataSetInitStatus _51d_dataset_status = DATA_SET_INIT_STATUS_NOT_SET;
+
+ if (!global._51degrees.data_file_path)
+ return -1;
+
+ if (!LIST_ISEMPTY(&global._51degrees.property_names)) {
+ i = 0;
+ list_for_each_entry(name, &global._51degrees.property_names, list)
+ ++i;
+ _51d_property_list = calloc(i, sizeof(char *));
+
+ i = 0;
+ list_for_each_entry(name, &global._51degrees.property_names, list)
+ _51d_property_list[i++] = name->name;
+ }
+
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+ _51d_dataset_status = fiftyoneDegreesInitWithPropertyArray(global._51degrees.data_file_path, &global._51degrees.data_set, _51d_property_list, i);
+#endif
+#ifdef FIFTYONEDEGREES_H_TRIE_INCLUDED
+ _51d_dataset_status = fiftyoneDegreesInitWithPropertyArray(global._51degrees.data_file_path, _51d_property_list, i);
+#endif
+
+ temp = get_trash_chunk();
+ chunk_reset(temp);
+
+ switch (_51d_dataset_status) {
+ case DATA_SET_INIT_STATUS_SUCCESS:
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+		/* Only 1 workset in the pool because HAProxy is currently
+		 * single-threaded; this value should be set to the number of
+		 * threads in future versions.
+		 */
+ global._51degrees.pool = fiftyoneDegreesWorksetPoolCreate(&global._51degrees.data_set, NULL, 1);
+#endif
+ _51d_init_http_headers();
+ break;
+ case DATA_SET_INIT_STATUS_INSUFFICIENT_MEMORY:
+ chunk_printf(temp, "Insufficient memory.");
+ break;
+ case DATA_SET_INIT_STATUS_CORRUPT_DATA:
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+ chunk_printf(temp, "Corrupt data file. Check that the data file provided is uncompressed and Pattern data format.");
+#endif
+#ifdef FIFTYONEDEGREES_H_TRIE_INCLUDED
+ chunk_printf(temp, "Corrupt data file. Check that the data file provided is uncompressed and Trie data format.");
+#endif
+ break;
+ case DATA_SET_INIT_STATUS_INCORRECT_VERSION:
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+ chunk_printf(temp, "Incorrect version. Check that the data file provided is uncompressed and Pattern data format.");
+#endif
+#ifdef FIFTYONEDEGREES_H_TRIE_INCLUDED
+ chunk_printf(temp, "Incorrect version. Check that the data file provided is uncompressed and Trie data format.");
+#endif
+ break;
+ case DATA_SET_INIT_STATUS_FILE_NOT_FOUND:
+ chunk_printf(temp, "File not found.");
+ break;
+ case DATA_SET_INIT_STATUS_NOT_SET:
+ chunk_printf(temp, "Data set not initialised.");
+ break;
+ }
+ if (_51d_dataset_status != DATA_SET_INIT_STATUS_SUCCESS) {
+ if (temp->len)
+ Alert("51Degrees Setup - Error reading 51Degrees data file. %s\n", temp->str);
+ else
+ Alert("51Degrees Setup - Error reading 51Degrees data file.\n");
+ exit(1);
+ }
+ free(_51d_property_list);
+
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+ _51d_lru_seed = random();
+ if (global._51degrees.cache_size) {
+ _51d_lru_tree = lru64_new(global._51degrees.cache_size);
+ }
+#endif
+
+ return 0;
+}
+
+void deinit_51degrees(void)
+{
+ struct _51d_property_names *_51d_prop_name, *_51d_prop_nameb;
+
+ free(global._51degrees.header_names);
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+ fiftyoneDegreesWorksetPoolFree(global._51degrees.pool);
+ fiftyoneDegreesDataSetFree(&global._51degrees.data_set);
+#endif
+#ifdef FIFTYONEDEGREES_H_TRIE_INCLUDED
+ free(global._51degrees.device_offsets.firstOffset);
+ free(global._51degrees.header_offsets);
+ fiftyoneDegreesDestroy();
+#endif
+
+ free(global._51degrees.data_file_path); global._51degrees.data_file_path = NULL;
+ list_for_each_entry_safe(_51d_prop_name, _51d_prop_nameb, &global._51degrees.property_names, list) {
+ LIST_DEL(&_51d_prop_name->list);
+ free(_51d_prop_name);
+ }
+
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+ while (lru64_destroy(_51d_lru_tree));
+#endif
+}
+
+static struct cfg_kw_list _51dcfg_kws = {{ }, {
+ { CFG_GLOBAL, "51degrees-data-file", _51d_data_file },
+ { CFG_GLOBAL, "51degrees-property-name-list", _51d_property_name_list },
+ { CFG_GLOBAL, "51degrees-property-separator", _51d_property_separator },
+ { CFG_GLOBAL, "51degrees-cache-size", _51d_cache_size },
+ { 0, NULL, NULL },
+}};
+
+/* Note: must not be declared <const> as its list will be overwritten */
+static struct sample_fetch_kw_list sample_fetch_keywords = {ILH, {
+ { "51d.all", _51d_fetch, ARG5(1,STR,STR,STR,STR,STR), _51d_fetch_check, SMP_T_STR, SMP_USE_HRQHV },
+ { NULL, NULL, 0, 0, 0 },
+}};
+
+/* Note: must not be declared <const> as its list will be overwritten */
+static struct sample_conv_kw_list conv_kws = {ILH, {
+ { "51d.single", _51d_conv, ARG5(1,STR,STR,STR,STR,STR), _51d_conv_check, SMP_T_STR, SMP_T_STR },
+ { NULL, NULL, 0, 0, 0 },
+}};
+
+__attribute__((constructor))
+static void __51d_init(void)
+{
+ /* register sample fetch and conversion keywords */
+ sample_register_fetches(&sample_fetch_keywords);
+ sample_register_convs(&conv_kws);
+ cfg_register_keywords(&_51dcfg_kws);
+}
--- /dev/null
+/*
+ * ACL management functions.
+ *
+ * Copyright 2000-2013 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <ctype.h>
+#include <stdio.h>
+#include <string.h>
+
+#include <common/config.h>
+#include <common/mini-clist.h>
+#include <common/standard.h>
+#include <common/uri_auth.h>
+
+#include <types/global.h>
+
+#include <proto/acl.h>
+#include <proto/arg.h>
+#include <proto/auth.h>
+#include <proto/channel.h>
+#include <proto/log.h>
+#include <proto/pattern.h>
+#include <proto/proxy.h>
+#include <proto/sample.h>
+#include <proto/stick_table.h>
+
+#include <ebsttree.h>
+
+/* List head of all known ACL keywords */
+static struct acl_kw_list acl_keywords = {
+ .list = LIST_HEAD_INIT(acl_keywords.list)
+};
+
+/* Convert a pattern lookup result to an ACL test result: a non-NULL
+ * pattern yields ACL_TEST_PASS, a NULL one yields ACL_TEST_FAIL.
+ */
+static inline enum acl_test_res pat2acl(struct pattern *pat)
+{
+ if (pat)
+ return ACL_TEST_PASS;
+ else
+ return ACL_TEST_FAIL;
+}
+
+/*
+ * Registers the ACL keyword list <kwl> as a list of valid keywords for next
+ * parsing sessions.
+ */
+void acl_register_keywords(struct acl_kw_list *kwl)
+{
+ LIST_ADDQ(&acl_keywords.list, &kwl->list);
+}
+
+/*
+ * Unregisters the ACL keyword list <kwl> from the list of valid keywords.
+ */
+void acl_unregister_keywords(struct acl_kw_list *kwl)
+{
+ LIST_DEL(&kwl->list);
+ LIST_INIT(&kwl->list);
+}
+
+/* Return a pointer to the ACL <name> within the list starting at <head>, or
+ * NULL if not found.
+ */
+struct acl *find_acl_by_name(const char *name, struct list *head)
+{
+ struct acl *acl;
+ list_for_each_entry(acl, head, list) {
+ if (strcmp(acl->name, name) == 0)
+ return acl;
+ }
+ return NULL;
+}
+
+/* Return a pointer to the ACL keyword <kw>, or NULL if not found. Note that if
+ * <kw> contains an opening parenthesis or a comma, only the left part of it is
+ * checked.
+ */
+struct acl_keyword *find_acl_kw(const char *kw)
+{
+ int index;
+ const char *kwend;
+ struct acl_kw_list *kwl;
+
+ kwend = kw;
+ while (*kwend && *kwend != '(' && *kwend != ',')
+ kwend++;
+
+ list_for_each_entry(kwl, &acl_keywords.list, list) {
+ for (index = 0; kwl->kw[index].kw != NULL; index++) {
+ if ((strncmp(kwl->kw[index].kw, kw, kwend - kw) == 0) &&
+ kwl->kw[index].kw[kwend-kw] == 0)
+ return &kwl->kw[index];
+ }
+ }
+ return NULL;
+}
+
+static struct acl_expr *prune_acl_expr(struct acl_expr *expr)
+{
+ struct arg *arg;
+
+ pattern_prune(&expr->pat);
+
+ for (arg = expr->smp->arg_p; arg; arg++) {
+ if (arg->type == ARGT_STOP)
+ break;
+ if (arg->type == ARGT_STR || arg->unresolved) {
+ free(arg->data.str.str);
+ arg->data.str.str = NULL;
+ arg->unresolved = 0;
+ }
+ }
+
+ if (expr->smp->arg_p != empty_arg_list)
+ free(expr->smp->arg_p);
+ return expr;
+}
+
+/* Parse an ACL expression starting at <args>[0], and return it. If <err> is
+ * not NULL, it will be filled with a pointer to an error message in case of
+ * error. This pointer must be freeable or NULL. <al> is an arg_list serving
+ * as a list head to report missing dependencies.
+ *
+ * Right now, the only accepted syntax is :
+ * <subject> [<value>...]
+ */
+struct acl_expr *parse_acl_expr(const char **args, char **err, struct arg_list *al,
+ const char *file, int line)
+{
+ __label__ out_return, out_free_expr;
+ struct acl_expr *expr;
+ struct acl_keyword *aclkw;
+ int patflags;
+ const char *arg;
+ struct sample_expr *smp = NULL;
+ int idx = 0;
+ char *ckw = NULL;
+ const char *begw;
+ const char *endw;
+ const char *endt;
+ int cur_type;
+ int nbargs;
+ int operator = STD_OP_EQ;
+ int op;
+ int contain_colon, have_dot;
+ const char *dot;
+ signed long long value, minor;
+	/* The following buffer contains two numbers, a ':' separator and the final \0. */
+ char buffer[NB_LLMAX_STR + 1 + NB_LLMAX_STR + 1];
+ int is_loaded;
+ int unique_id;
+ char *error;
+ struct pat_ref *ref;
+ struct pattern_expr *pattern_expr;
+ int load_as_map = 0;
+ int acl_conv_found = 0;
+
+ /* First, we look for an ACL keyword. And if we don't find one, then
+ * we look for a sample fetch expression starting with a sample fetch
+ * keyword.
+ */
+
+ al->ctx = ARGC_ACL; // to report errors while resolving args late
+ al->kw = *args;
+ al->conv = NULL;
+
+ aclkw = find_acl_kw(args[0]);
+ if (aclkw) {
+ /* OK we have a real ACL keyword */
+
+ /* build new sample expression for this ACL */
+ smp = calloc(1, sizeof(struct sample_expr));
+ if (!smp) {
+ memprintf(err, "out of memory when parsing ACL expression");
+ goto out_return;
+ }
+ LIST_INIT(&(smp->conv_exprs));
+ smp->fetch = aclkw->smp;
+ smp->arg_p = empty_arg_list;
+
+		/* look for the beginning of the subject arguments */
+ for (arg = args[0]; *arg && *arg != '(' && *arg != ','; arg++);
+
+ endt = arg;
+ if (*endt == '(') {
+ /* look for the end of this term and skip the opening parenthesis */
+ endt = ++arg;
+ while (*endt && *endt != ')')
+ endt++;
+ if (*endt != ')') {
+ memprintf(err, "missing closing ')' after arguments to ACL keyword '%s'", aclkw->kw);
+ goto out_free_smp;
+ }
+ }
+
+ /* At this point, we have :
+ * - args[0] : beginning of the keyword
+ * - arg : end of the keyword, first character not part of keyword
+ * nor the opening parenthesis (so first character of args
+ * if present).
+ * - endt : end of the term (=arg or last parenthesis if args are present)
+ */
+ nbargs = make_arg_list(arg, endt - arg, smp->fetch->arg_mask, &smp->arg_p,
+ err, NULL, NULL, al);
+ if (nbargs < 0) {
+ /* note that make_arg_list will have set <err> here */
+ memprintf(err, "ACL keyword '%s' : %s", aclkw->kw, *err);
+ goto out_free_smp;
+ }
+
+ if (!smp->arg_p) {
+ smp->arg_p = empty_arg_list;
+ }
+ else if (smp->fetch->val_args && !smp->fetch->val_args(smp->arg_p, err)) {
+ /* invalid keyword argument, error must have been
+ * set by val_args().
+ */
+ memprintf(err, "in argument to '%s', %s", aclkw->kw, *err);
+ goto out_free_smp;
+ }
+ arg = endt;
+
+		/* look for the beginning of the converters list. Those directly attached
+ * to the ACL keyword are found just after <arg> which points to the comma.
+ * If we find any converter, then we don't use the ACL keyword's match
+ * anymore but the one related to the converter's output type.
+ */
+ cur_type = smp->fetch->out_type;
+ while (*arg) {
+ struct sample_conv *conv;
+ struct sample_conv_expr *conv_expr;
+
+ if (*arg == ')') /* skip last closing parenthesis */
+ arg++;
+
+ if (*arg && *arg != ',') {
+ if (ckw)
+ memprintf(err, "ACL keyword '%s' : missing comma after conv keyword '%s'.",
+ aclkw->kw, ckw);
+ else
+ memprintf(err, "ACL keyword '%s' : missing comma after fetch keyword.",
+ aclkw->kw);
+ goto out_free_smp;
+ }
+
+ while (*arg == ',') /* then trailing commas */
+ arg++;
+
+ begw = arg; /* start of conv keyword */
+
+ if (!*begw)
+ /* none ? end of converters */
+ break;
+
+ for (endw = begw; *endw && *endw != '(' && *endw != ','; endw++);
+
+ free(ckw);
+ ckw = my_strndup(begw, endw - begw);
+
+ conv = find_sample_conv(begw, endw - begw);
+ if (!conv) {
+ /* Unknown converter method */
+ memprintf(err, "ACL keyword '%s' : unknown conv method '%s'.",
+ aclkw->kw, ckw);
+ goto out_free_smp;
+ }
+
+ arg = endw;
+ if (*arg == '(') {
+ /* look for the end of this term */
+ while (*arg && *arg != ')')
+ arg++;
+ if (*arg != ')') {
+ memprintf(err, "ACL keyword '%s' : syntax error: missing ')' after conv keyword '%s'.",
+ aclkw->kw, ckw);
+ goto out_free_smp;
+ }
+ }
+
+ if (conv->in_type >= SMP_TYPES || conv->out_type >= SMP_TYPES) {
+ memprintf(err, "ACL keyword '%s' : returns type of conv method '%s' is unknown.",
+ aclkw->kw, ckw);
+ goto out_free_smp;
+ }
+
+ /* If impossible type conversion */
+ if (!sample_casts[cur_type][conv->in_type]) {
+ memprintf(err, "ACL keyword '%s' : conv method '%s' cannot be applied.",
+ aclkw->kw, ckw);
+ goto out_free_smp;
+ }
+
+ cur_type = conv->out_type;
+ conv_expr = calloc(1, sizeof(struct sample_conv_expr));
+ if (!conv_expr)
+ goto out_free_smp;
+
+ LIST_ADDQ(&(smp->conv_exprs), &(conv_expr->list));
+ conv_expr->conv = conv;
+ acl_conv_found = 1;
+
+ if (arg != endw) {
+ int err_arg;
+
+ if (!conv->arg_mask) {
+ memprintf(err, "ACL keyword '%s' : conv method '%s' does not support any args.",
+ aclkw->kw, ckw);
+ goto out_free_smp;
+ }
+
+ al->kw = smp->fetch->kw;
+ al->conv = conv_expr->conv->kw;
+ if (make_arg_list(endw + 1, arg - endw - 1, conv->arg_mask, &conv_expr->arg_p, err, NULL, &err_arg, al) < 0) {
+ memprintf(err, "ACL keyword '%s' : invalid arg %d in conv method '%s' : %s.",
+ aclkw->kw, err_arg+1, ckw, *err);
+ goto out_free_smp;
+ }
+
+ if (!conv_expr->arg_p)
+ conv_expr->arg_p = empty_arg_list;
+
+ if (conv->val_args && !conv->val_args(conv_expr->arg_p, conv, file, line, err)) {
+ memprintf(err, "ACL keyword '%s' : invalid args in conv method '%s' : %s.",
+ aclkw->kw, ckw, *err);
+ goto out_free_smp;
+ }
+ }
+ else if (ARGM(conv->arg_mask)) {
+ memprintf(err, "ACL keyword '%s' : missing args for conv method '%s'.",
+ aclkw->kw, ckw);
+ goto out_free_smp;
+ }
+ }
+ }
+ else {
+ /* This is not an ACL keyword, so we hope this is a sample fetch
+ * keyword that we're going to transparently use as an ACL. If
+ * so, we retrieve a completely parsed expression with args and
+ * convs already done.
+ */
+ smp = sample_parse_expr((char **)args, &idx, file, line, err, al);
+ if (!smp) {
+ memprintf(err, "%s in ACL expression '%s'", *err, *args);
+ goto out_return;
+ }
+ cur_type = smp_expr_output_type(smp);
+ }
+
+ expr = (struct acl_expr *)calloc(1, sizeof(*expr));
+ if (!expr) {
+ memprintf(err, "out of memory when parsing ACL expression");
+ goto out_return;
+ }
+
+ pattern_init_head(&expr->pat);
+
+ expr->pat.expect_type = cur_type;
+ expr->smp = smp;
+ expr->kw = smp->fetch->kw;
+ smp = NULL; /* don't free it anymore */
+
+ if (aclkw && !acl_conv_found) {
+ expr->kw = aclkw->kw;
+ expr->pat.parse = aclkw->parse ? aclkw->parse : pat_parse_fcts[aclkw->match_type];
+ expr->pat.index = aclkw->index ? aclkw->index : pat_index_fcts[aclkw->match_type];
+ expr->pat.match = aclkw->match ? aclkw->match : pat_match_fcts[aclkw->match_type];
+ expr->pat.delete = aclkw->delete ? aclkw->delete : pat_delete_fcts[aclkw->match_type];
+ expr->pat.prune = aclkw->prune ? aclkw->prune : pat_prune_fcts[aclkw->match_type];
+ }
+
+ if (!expr->pat.parse) {
+ /* Parse/index/match functions depend on the expression type,
+ * so we have to map them now. Some types can be automatically
+ * converted.
+ */
+ switch (cur_type) {
+ case SMP_T_BOOL:
+ expr->pat.parse = pat_parse_fcts[PAT_MATCH_BOOL];
+ expr->pat.index = pat_index_fcts[PAT_MATCH_BOOL];
+ expr->pat.match = pat_match_fcts[PAT_MATCH_BOOL];
+ expr->pat.delete = pat_delete_fcts[PAT_MATCH_BOOL];
+ expr->pat.prune = pat_prune_fcts[PAT_MATCH_BOOL];
+ expr->pat.expect_type = pat_match_types[PAT_MATCH_BOOL];
+ break;
+ case SMP_T_SINT:
+ expr->pat.parse = pat_parse_fcts[PAT_MATCH_INT];
+ expr->pat.index = pat_index_fcts[PAT_MATCH_INT];
+ expr->pat.match = pat_match_fcts[PAT_MATCH_INT];
+ expr->pat.delete = pat_delete_fcts[PAT_MATCH_INT];
+ expr->pat.prune = pat_prune_fcts[PAT_MATCH_INT];
+ expr->pat.expect_type = pat_match_types[PAT_MATCH_INT];
+ break;
+ case SMP_T_IPV4:
+ case SMP_T_IPV6:
+ expr->pat.parse = pat_parse_fcts[PAT_MATCH_IP];
+ expr->pat.index = pat_index_fcts[PAT_MATCH_IP];
+ expr->pat.match = pat_match_fcts[PAT_MATCH_IP];
+ expr->pat.delete = pat_delete_fcts[PAT_MATCH_IP];
+ expr->pat.prune = pat_prune_fcts[PAT_MATCH_IP];
+ expr->pat.expect_type = pat_match_types[PAT_MATCH_IP];
+ break;
+ case SMP_T_STR:
+ expr->pat.parse = pat_parse_fcts[PAT_MATCH_STR];
+ expr->pat.index = pat_index_fcts[PAT_MATCH_STR];
+ expr->pat.match = pat_match_fcts[PAT_MATCH_STR];
+ expr->pat.delete = pat_delete_fcts[PAT_MATCH_STR];
+ expr->pat.prune = pat_prune_fcts[PAT_MATCH_STR];
+ expr->pat.expect_type = pat_match_types[PAT_MATCH_STR];
+ break;
+ }
+ }
+
+ /* Additional check to protect against common mistakes */
+ if (expr->pat.parse && cur_type != SMP_T_BOOL && !*args[1]) {
+ Warning("parsing acl keyword '%s' :\n"
+			"   no patterns to match against were provided, so this ACL will never match.\n"
+ " If this is what you intended, please add '--' to get rid of this warning.\n"
+ " If you intended to match only for existence, please use '-m found'.\n"
+ " If you wanted to force an int to match as a bool, please use '-m bool'.\n"
+ "\n",
+ args[0]);
+ }
+
+ args++;
+
+ /* check for options before patterns. Supported options are :
+ * -i : ignore case for all patterns by default
+ * -f : read patterns from those files
+ * -m : force matching method (must be used before -f)
+ * -M : load the file as map file
+ * -u : force the unique id of the acl
+ * -- : everything after this is not an option
+ */
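+	/* For illustration, the options above map to configuration lines such
+	 * as the following (hypothetical examples, not taken from any shipped
+	 * configuration):
+	 *
+	 *   acl is_static     path_beg -i /static /images
+	 *   acl allowed_src   src -f /etc/haproxy/allowed.lst
+	 *   acl host_present  hdr(host) -m found
+	 */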
+ patflags = 0;
+ is_loaded = 0;
+ unique_id = -1;
+ while (**args == '-') {
+ if (strcmp(*args, "-i") == 0)
+ patflags |= PAT_MF_IGNORE_CASE;
+ else if (strcmp(*args, "-n") == 0)
+ patflags |= PAT_MF_NO_DNS;
+ else if (strcmp(*args, "-u") == 0) {
+ unique_id = strtol(args[1], &error, 10);
+ if (*error != '\0') {
+ memprintf(err, "the argument of -u must be an integer");
+ goto out_free_expr;
+ }
+
+ /* Check if this id is really unique. */
+ if (pat_ref_lookupid(unique_id)) {
+ memprintf(err, "the id is already used");
+ goto out_free_expr;
+ }
+
+ args++;
+ }
+ else if (strcmp(*args, "-f") == 0) {
+ if (!expr->pat.parse) {
+ memprintf(err, "matching method must be specified first (using '-m') when using a sample fetch of this type ('%s')", expr->kw);
+ goto out_free_expr;
+ }
+
+ if (!pattern_read_from_file(&expr->pat, PAT_REF_ACL, args[1], patflags, load_as_map, err, file, line))
+ goto out_free_expr;
+ is_loaded = 1;
+ args++;
+ }
+ else if (strcmp(*args, "-m") == 0) {
+ int idx;
+
+ if (is_loaded) {
+ memprintf(err, "'-m' must only be specified before patterns and files in parsing ACL expression");
+ goto out_free_expr;
+ }
+
+ idx = pat_find_match_name(args[1]);
+ if (idx < 0) {
+ memprintf(err, "unknown matching method '%s' when parsing ACL expression", args[1]);
+ goto out_free_expr;
+ }
+
+ /* Note: -m found is always valid, bool/int are compatible, str/bin/reg/len are compatible */
+ if (idx != PAT_MATCH_FOUND && !sample_casts[cur_type][pat_match_types[idx]]) {
+ memprintf(err, "matching method '%s' cannot be used with fetch keyword '%s'", args[1], expr->kw);
+ goto out_free_expr;
+ }
+ expr->pat.parse = pat_parse_fcts[idx];
+ expr->pat.index = pat_index_fcts[idx];
+ expr->pat.match = pat_match_fcts[idx];
+ expr->pat.delete = pat_delete_fcts[idx];
+ expr->pat.prune = pat_prune_fcts[idx];
+ expr->pat.expect_type = pat_match_types[idx];
+ args++;
+ }
+ else if (strcmp(*args, "-M") == 0) {
+ load_as_map = 1;
+ }
+ else if (strcmp(*args, "--") == 0) {
+ args++;
+ break;
+ }
+ else {
+ memprintf(err, "'%s' is not a valid ACL option. Please use '--' before any pattern beginning with a '-'", args[0]);
+ goto out_free_expr;
+ }
+ args++;
+ }
+
+ if (!expr->pat.parse) {
+ memprintf(err, "matching method must be specified first (using '-m') when using a sample fetch of this type ('%s')", expr->kw);
+ goto out_free_expr;
+ }
+
+ /* Create displayed reference */
+ snprintf(trash.str, trash.size, "acl '%s' file '%s' line %d", expr->kw, file, line);
+ trash.str[trash.size - 1] = '\0';
+
+	/* Create a new pattern reference. */
+ ref = pat_ref_newid(unique_id, trash.str, PAT_REF_ACL);
+ if (!ref) {
+ memprintf(err, "memory error");
+ goto out_free_expr;
+ }
+
+ /* Create new pattern expression associated to this reference. */
+ pattern_expr = pattern_new_expr(&expr->pat, ref, err, NULL);
+ if (!pattern_expr)
+ goto out_free_expr;
+
+ /* Copy the pattern matching and indexing flags. */
+ pattern_expr->mflags = patflags;
+
+ /* now parse all patterns */
+ while (**args) {
+ arg = *args;
+
+		/* Compatibility layer. Each pattern parser takes only one string
+		 * per pattern, but pat_parse_int() and pat_parse_dotted_ver()
+		 * optionally accept a preceding operator. The operator is one of
+		 * the match methods eq, le, lt, ge and gt. Both functions also
+		 * support an equivalent syntax based on ranges:
+		 *
+		 * pat_parse_int():
+		 *
+		 *   "eq x" -> "x" or "x:x"
+		 *   "le x" -> ":x"
+		 *   "lt x" -> ":y" (with y = x - 1)
+		 *   "ge x" -> "x:"
+		 *   "gt x" -> "y:" (with y = x + 1)
+		 *
+		 * pat_parse_dotted_ver():
+		 *
+		 *   "eq x.y" -> "x.y" or "x.y:x.y"
+		 *   "le x.y" -> ":x.y"
+		 *   "lt x.y" -> ":w.z" (with w.z = x.y - 1)
+		 *   "ge x.y" -> "x.y:"
+		 *   "gt x.y" -> "w.z:" (with w.z = x.y + 1)
+		 *
+		 * If y is not present, it is assumed to be 0.
+		 *
+		 * The eq, le, lt, ge and gt operators are specific to the ACL
+		 * syntax. The following block of code detects the operator and
+		 * rewrites each value into a range string the parser understands.
+		 */
+ if (expr->pat.parse == pat_parse_int ||
+ expr->pat.parse == pat_parse_dotted_ver) {
+			/* Check for an operator. If the argument is an operator,
+			 * memorize it and continue with the next argument.
+			 */
+ op = get_std_op(arg);
+ if (op != -1) {
+ operator = op;
+ args++;
+ continue;
+ }
+
+			/* Check if the pattern contains a ':' or '-' character. */
+ contain_colon = (strchr(arg, ':') || strchr(arg, '-'));
+
+			/* If the pattern contains a ':' or '-' character, or if it
+			 * does not and the operator is STD_OP_EQ, give it to the
+			 * parser as-is. Otherwise, try to convert the value according
+			 * to the operator.
+			 */
+ if (!contain_colon && operator != STD_OP_EQ) {
+ /* Search '.' separator. */
+ dot = strchr(arg, '.');
+ if (!dot) {
+ have_dot = 0;
+ minor = 0;
+ dot = arg + strlen(arg);
+ }
+ else
+ have_dot = 1;
+
+ /* convert the integer minor part for the pat_parse_dotted_ver() function. */
+ if (expr->pat.parse == pat_parse_dotted_ver && have_dot) {
+ if (strl2llrc(dot+1, strlen(dot+1), &minor) != 0) {
+ memprintf(err, "'%s' is neither a number nor a supported operator", arg);
+ goto out_free_expr;
+ }
+ if (minor >= 65536) {
+ memprintf(err, "'%s' contains too large a minor value", arg);
+ goto out_free_expr;
+ }
+ }
+
+ /* convert the integer value for the pat_parse_int() function, and the
+ * integer major part for the pat_parse_dotted_ver() function.
+ */
+ if (strl2llrc(arg, dot - arg, &value) != 0) {
+ memprintf(err, "'%s' is neither a number nor a supported operator", arg);
+ goto out_free_expr;
+ }
+ if (expr->pat.parse == pat_parse_dotted_ver) {
+ if (value >= 65536) {
+ memprintf(err, "'%s' contains too large a major value", arg);
+ goto out_free_expr;
+ }
+ value = (value << 16) | (minor & 0xffff);
+ }
+
+ switch (operator) {
+
+ case STD_OP_EQ: /* this case is not possible. */
+ memprintf(err, "internal error");
+ goto out_free_expr;
+
+ case STD_OP_GT:
+					value++; /* gt = ge + 1 ; fall through to STD_OP_GE */
+
+ case STD_OP_GE:
+ if (expr->pat.parse == pat_parse_int)
+ snprintf(buffer, NB_LLMAX_STR+NB_LLMAX_STR+2, "%lld:", value);
+ else
+ snprintf(buffer, NB_LLMAX_STR+NB_LLMAX_STR+2, "%lld.%lld:",
+ value >> 16, value & 0xffff);
+ arg = buffer;
+ break;
+
+ case STD_OP_LT:
+					value--; /* lt = le - 1 ; fall through to STD_OP_LE */
+
+ case STD_OP_LE:
+ if (expr->pat.parse == pat_parse_int)
+ snprintf(buffer, NB_LLMAX_STR+NB_LLMAX_STR+2, ":%lld", value);
+ else
+ snprintf(buffer, NB_LLMAX_STR+NB_LLMAX_STR+2, ":%lld.%lld",
+ value >> 16, value & 0xffff);
+ arg = buffer;
+ break;
+ }
+ }
+ }
+
+		/* Add the sample to the reference, and try to compile it for each
+		 * pattern using this value.
+		 */
+ if (!pat_ref_add(ref, arg, NULL, err))
+ goto out_free_expr;
+ args++;
+ }
+
+ return expr;
+
+ out_free_expr:
+ prune_acl_expr(expr);
+ free(expr);
+ free(ckw);
+ out_free_smp:
+ free(smp);
+ out_return:
+ return NULL;
+}
+
+/* Purge everything in the acl <acl>, then return <acl>. */
+struct acl *prune_acl(struct acl *acl) {
+
+ struct acl_expr *expr, *exprb;
+
+ free(acl->name);
+
+ list_for_each_entry_safe(expr, exprb, &acl->expr, list) {
+ LIST_DEL(&expr->list);
+ prune_acl_expr(expr);
+ free(expr);
+ }
+
+ return acl;
+}
+
+/* Parse an ACL with the name starting at <args>[0], and with a list of already
+ * known ACLs in <acl>. If the ACL was not in the list, it will be added.
+ * A pointer to that ACL is returned. If the ACL has an empty name, then it's
+ * an anonymous one and it won't be merged with any other one. If <err> is not
+ * NULL, it will be filled with an appropriate error. This pointer must be
+ * freeable or NULL. <al> is the arg_list serving as a head for unresolved
+ * dependencies.
+ *
+ * args syntax: <aclname> <acl_expr>
+ */
+struct acl *parse_acl(const char **args, struct list *known_acl, char **err, struct arg_list *al,
+ const char *file, int line)
+{
+ __label__ out_return, out_free_acl_expr, out_free_name;
+ struct acl *cur_acl;
+ struct acl_expr *acl_expr;
+ char *name;
+ const char *pos;
+
+ if (**args && (pos = invalid_char(*args))) {
+ memprintf(err, "invalid character in ACL name : '%c'", *pos);
+ goto out_return;
+ }
+
+ acl_expr = parse_acl_expr(args + 1, err, al, file, line);
+ if (!acl_expr) {
+ /* parse_acl_expr will have filled <err> here */
+ goto out_return;
+ }
+
+ /* Check for args beginning with an opening parenthesis just after the
+ * subject, as this is almost certainly a typo. Right now we can only
+ * emit a warning, so let's do so.
+ */
+ if (!strchr(args[1], '(') && *args[2] == '(')
+ Warning("parsing acl '%s' :\n"
+ " matching '%s' for pattern '%s' is likely a mistake and probably\n"
+ " not what you want. Maybe you need to remove the extraneous space before '('.\n"
+ " If you are really sure this is not an error, please insert '--' between the\n"
+ " match and the pattern to make this warning message disappear.\n",
+ args[0], args[1], args[2]);
+
+ if (*args[0])
+ cur_acl = find_acl_by_name(args[0], known_acl);
+ else
+ cur_acl = NULL;
+
+ if (!cur_acl) {
+ name = strdup(args[0]);
+ if (!name) {
+ memprintf(err, "out of memory when parsing ACL");
+ goto out_free_acl_expr;
+ }
+ cur_acl = (struct acl *)calloc(1, sizeof(*cur_acl));
+ if (cur_acl == NULL) {
+ memprintf(err, "out of memory when parsing ACL");
+ goto out_free_name;
+ }
+
+ LIST_INIT(&cur_acl->expr);
+ LIST_ADDQ(known_acl, &cur_acl->list);
+ cur_acl->name = name;
+ }
+
+ /* We want to know what features the ACL needs (typically HTTP parsing),
+ * and where it may be used. If an ACL relies on multiple matches, it is
+ * OK if at least one of them may match in the context where it is used.
+ */
+ cur_acl->use |= acl_expr->smp->fetch->use;
+ cur_acl->val |= acl_expr->smp->fetch->val;
+ LIST_ADDQ(&cur_acl->expr, &acl_expr->list);
+ return cur_acl;
+
+ out_free_name:
+ free(name);
+ out_free_acl_expr:
+ prune_acl_expr(acl_expr);
+ free(acl_expr);
+ out_return:
+ return NULL;
+}
+
+/* Some useful ACLs provided by default. Only those used are allocated. */
+
+const struct {
+ const char *name;
+ const char *expr[4]; /* put enough for longest expression */
+} default_acl_list[] = {
+ { .name = "TRUE", .expr = {"always_true",""}},
+ { .name = "FALSE", .expr = {"always_false",""}},
+ { .name = "LOCALHOST", .expr = {"src","127.0.0.1/8",""}},
+ { .name = "HTTP", .expr = {"req_proto_http",""}},
+ { .name = "HTTP_1.0", .expr = {"req_ver","1.0",""}},
+ { .name = "HTTP_1.1", .expr = {"req_ver","1.1",""}},
+ { .name = "METH_CONNECT", .expr = {"method","CONNECT",""}},
+ { .name = "METH_GET", .expr = {"method","GET","HEAD",""}},
+ { .name = "METH_HEAD", .expr = {"method","HEAD",""}},
+ { .name = "METH_OPTIONS", .expr = {"method","OPTIONS",""}},
+ { .name = "METH_POST", .expr = {"method","POST",""}},
+ { .name = "METH_TRACE", .expr = {"method","TRACE",""}},
+ { .name = "HTTP_URL_ABS", .expr = {"url_reg","^[^/:]*://",""}},
+ { .name = "HTTP_URL_SLASH", .expr = {"url_beg","/",""}},
+ { .name = "HTTP_URL_STAR", .expr = {"url","*",""}},
+ { .name = "HTTP_CONTENT", .expr = {"hdr_val(content-length)","gt","0",""}},
+ { .name = "RDP_COOKIE", .expr = {"req_rdp_cookie_cnt","gt","0",""}},
+ { .name = "REQ_CONTENT", .expr = {"req_len","gt","0",""}},
+ { .name = "WAIT_END", .expr = {"wait_end",""}},
+ { .name = NULL, .expr = {""}}
+};
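+/* These predefined ACLs may be referenced in a configuration without being
+ * declared first, e.g. (hypothetical example lines):
+ *
+ *   http-request deny if METH_TRACE
+ *   use_backend static if HTTP_URL_SLASH
+ */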
+
+/* Find a default ACL from the default_acl list, compile it and return it.
+ * If the ACL is not found, NULL is returned. In theory, it cannot fail,
+ * except when default ACLs are broken, in which case it will return NULL.
+ * If <known_acl> is not NULL, the ACL will be queued at its tail. If <err> is
+ * not NULL, it will be filled with an error message if an error occurs. This
+ * pointer must be freeable or NULL. <al> is an arg_list serving as a list head
+ * to report missing dependencies.
+ */
+static struct acl *find_acl_default(const char *acl_name, struct list *known_acl,
+ char **err, struct arg_list *al,
+ const char *file, int line)
+{
+ __label__ out_return, out_free_acl_expr, out_free_name;
+ struct acl *cur_acl;
+ struct acl_expr *acl_expr;
+ char *name;
+ int index;
+
+ for (index = 0; default_acl_list[index].name != NULL; index++) {
+ if (strcmp(acl_name, default_acl_list[index].name) == 0)
+ break;
+ }
+
+ if (default_acl_list[index].name == NULL) {
+ memprintf(err, "no such ACL : '%s'", acl_name);
+ return NULL;
+ }
+
+ acl_expr = parse_acl_expr((const char **)default_acl_list[index].expr, err, al, file, line);
+ if (!acl_expr) {
+ /* parse_acl_expr must have filled err here */
+ goto out_return;
+ }
+
+ name = strdup(acl_name);
+ if (!name) {
+ memprintf(err, "out of memory when building default ACL '%s'", acl_name);
+ goto out_free_acl_expr;
+ }
+
+ cur_acl = (struct acl *)calloc(1, sizeof(*cur_acl));
+ if (cur_acl == NULL) {
+ memprintf(err, "out of memory when building default ACL '%s'", acl_name);
+ goto out_free_name;
+ }
+
+ cur_acl->name = name;
+ cur_acl->use |= acl_expr->smp->fetch->use;
+ cur_acl->val |= acl_expr->smp->fetch->val;
+ LIST_INIT(&cur_acl->expr);
+ LIST_ADDQ(&cur_acl->expr, &acl_expr->list);
+ if (known_acl)
+ LIST_ADDQ(known_acl, &cur_acl->list);
+
+ return cur_acl;
+
+ out_free_name:
+ free(name);
+ out_free_acl_expr:
+ prune_acl_expr(acl_expr);
+ free(acl_expr);
+ out_return:
+ return NULL;
+}
+
+/* Purge everything in the acl_cond <cond>, then return <cond>. */
+struct acl_cond *prune_acl_cond(struct acl_cond *cond)
+{
+ struct acl_term_suite *suite, *tmp_suite;
+ struct acl_term *term, *tmp_term;
+
+ /* iterate through all term suites and free all terms and all suites */
+ list_for_each_entry_safe(suite, tmp_suite, &cond->suites, list) {
+ list_for_each_entry_safe(term, tmp_term, &suite->terms, list)
+ free(term);
+ free(suite);
+ }
+ return cond;
+}
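+/* For illustration, a condition as parsed below comes from statements such
+ * as the following (hypothetical example line):
+ *
+ *   use_backend app if HTTP_1.1 !METH_POST or WAIT_END
+ *
+ * where terms within a suite are ANDed and suites are ORed with "or"/"||".
+ */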
+
+/* Parse an ACL condition starting at <args>[0], relying on a list of already
+ * known ACLs passed in <known_acl>. The new condition is returned (or NULL in
+ * case of low memory). Supports multiple conditions separated by "or". If
+ * <err> is not NULL, it will be filled with a pointer to an error message in
+ * case of error, that the caller is responsible for freeing. The initial
+ * location must either be freeable or NULL. The list <al> serves as a list head
+ * for unresolved dependencies.
+ */
+struct acl_cond *parse_acl_cond(const char **args, struct list *known_acl,
+ enum acl_cond_pol pol, char **err, struct arg_list *al,
+ const char *file, int line)
+{
+ __label__ out_return, out_free_suite, out_free_term;
+ int arg, neg;
+ const char *word;
+ struct acl *cur_acl;
+ struct acl_term *cur_term;
+ struct acl_term_suite *cur_suite;
+ struct acl_cond *cond;
+ unsigned int suite_val;
+
+ cond = (struct acl_cond *)calloc(1, sizeof(*cond));
+ if (cond == NULL) {
+ memprintf(err, "out of memory when parsing condition");
+ goto out_return;
+ }
+
+ LIST_INIT(&cond->list);
+ LIST_INIT(&cond->suites);
+ cond->pol = pol;
+ cond->val = 0;
+
+ cur_suite = NULL;
+ suite_val = ~0U;
+ neg = 0;
+ for (arg = 0; *args[arg]; arg++) {
+ word = args[arg];
+
+ /* remove as many exclamation marks as we can */
+ while (*word == '!') {
+ neg = !neg;
+ word++;
+ }
+
+		/* an empty word is allowed because users cannot be expected to
+		 * never leave an exclamation mark standing alone.
+		 */
+ if (!*word)
+ continue;
+
+ if (strcasecmp(word, "or") == 0 || strcmp(word, "||") == 0) {
+ /* new term suite */
+ cond->val |= suite_val;
+ suite_val = ~0U;
+ cur_suite = NULL;
+ neg = 0;
+ continue;
+ }
+
+ if (strcmp(word, "{") == 0) {
+ /* we may have a complete ACL expression between two braces,
+ * find the last one.
+ */
+ int arg_end = arg + 1;
+ const char **args_new;
+
+ while (*args[arg_end] && strcmp(args[arg_end], "}") != 0)
+ arg_end++;
+
+ if (!*args[arg_end]) {
+ memprintf(err, "missing closing '}' in condition");
+ goto out_free_suite;
+ }
+
+ args_new = calloc(1, (arg_end - arg + 1) * sizeof(*args_new));
+ if (!args_new) {
+ memprintf(err, "out of memory when parsing condition");
+ goto out_free_suite;
+ }
+
+ args_new[0] = "";
+ memcpy(args_new + 1, args + arg + 1, (arg_end - arg) * sizeof(*args_new));
+ args_new[arg_end - arg] = "";
+ cur_acl = parse_acl(args_new, known_acl, err, al, file, line);
+ free(args_new);
+
+ if (!cur_acl) {
+ /* note that parse_acl() must have filled <err> here */
+ goto out_free_suite;
+ }
+ word = args[arg + 1];
+ arg = arg_end;
+ }
+ else {
+ /* search for <word> in the known ACL names. If we do not find
+ * it, let's look for it in the default ACLs, and if found, add
+ * it to the list of ACLs of this proxy. This makes it possible
+ * to override them.
+ */
+ cur_acl = find_acl_by_name(word, known_acl);
+ if (cur_acl == NULL) {
+ cur_acl = find_acl_default(word, known_acl, err, al, file, line);
+ if (cur_acl == NULL) {
+ /* note that find_acl_default() must have filled <err> here */
+ goto out_free_suite;
+ }
+ }
+ }
+
+ cur_term = (struct acl_term *)calloc(1, sizeof(*cur_term));
+ if (cur_term == NULL) {
+ memprintf(err, "out of memory when parsing condition");
+ goto out_free_suite;
+ }
+
+ cur_term->acl = cur_acl;
+ cur_term->neg = neg;
+
+ /* Here it is a bit complex. The acl_term_suite is a conjunction
+ * of many terms. It may only be used if all of its terms are
+ * usable at the same time. So the suite's validity domain is an
+ * AND between all ACL keywords' ones. But, the global condition
+ * is valid if at least one term suite is OK. So it's an OR between
+ * all of their validity domains. We could emit a warning as soon
+ * as suite_val is null because it means that the last ACL is not
+ * compatible with the previous ones. Let's remain simple for now.
+ */
+ cond->use |= cur_acl->use;
+ suite_val &= cur_acl->val;
+
+ if (!cur_suite) {
+ cur_suite = (struct acl_term_suite *)calloc(1, sizeof(*cur_suite));
+ if (cur_suite == NULL) {
+ memprintf(err, "out of memory when parsing condition");
+ goto out_free_term;
+ }
+ LIST_INIT(&cur_suite->terms);
+ LIST_ADDQ(&cond->suites, &cur_suite->list);
+ }
+ LIST_ADDQ(&cur_suite->terms, &cur_term->list);
+ neg = 0;
+ }
+
+ cond->val |= suite_val;
+ return cond;
+
+ out_free_term:
+ free(cur_term);
+ out_free_suite:
+ prune_acl_cond(cond);
+ free(cond);
+ out_return:
+ return NULL;
+}
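+
+/* Example (illustration only, not in the original source): for a
+ * hypothetical configuration condition such as
+ *
+ *   ... if !is_static is_host_www or is_cached
+ *
+ * parse_acl_cond() builds two term suites: { !is_static AND is_host_www }
+ * and { is_cached }. The condition passes if at least one suite has all of
+ * its terms pass, which is the OR-of-ANDs evaluation performed later in
+ * acl_exec_cond().
+ */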
+
+/* Builds an ACL condition starting at the if/unless keyword. The complete
+ * condition is returned. NULL is returned in case of error or if the first
+ * word is neither "if" nor "unless". It automatically sets the file name and
+ * the line number in the condition for better error reporting, and sets the
+ * HTTP initialization requirements in the proxy. If <err> is not NULL, it will
+ * be filled with a pointer to an error message in case of error, that the
+ * caller is responsible for freeing. The initial location must either be
+ * freeable or NULL.
+ */
+struct acl_cond *build_acl_cond(const char *file, int line, struct proxy *px, const char **args, char **err)
+{
+ enum acl_cond_pol pol = ACL_COND_NONE;
+ struct acl_cond *cond = NULL;
+
+ if (err)
+ *err = NULL;
+
+ if (!strcmp(*args, "if")) {
+ pol = ACL_COND_IF;
+ args++;
+ }
+ else if (!strcmp(*args, "unless")) {
+ pol = ACL_COND_UNLESS;
+ args++;
+ }
+ else {
+ memprintf(err, "conditions must start with either 'if' or 'unless'");
+ return NULL;
+ }
+
+ cond = parse_acl_cond(args, &px->acl, pol, err, &px->conf.args, file, line);
+ if (!cond) {
+ /* note that parse_acl_cond must have filled <err> here */
+ return NULL;
+ }
+
+ cond->file = file;
+ cond->line = line;
+ px->http_needed |= !!(cond->use & SMP_USE_HTTP_ANY);
+ return cond;
+}
+
+/* Execute condition <cond> and return either ACL_TEST_FAIL, ACL_TEST_MISS or
+ * ACL_TEST_PASS depending on the test results. ACL_TEST_MISS may only be
+ * returned if <opt> does not contain SMP_OPT_FINAL, indicating that incomplete
+ * data is being examined. The function automatically sets SMP_OPT_ITERATE. This
+ * function only computes the condition, it does not apply the polarity required
+ * by IF/UNLESS, it's up to the caller to do this using something like this :
+ *
+ * res = acl_pass(res);
+ * if (res == ACL_TEST_MISS)
+ * return 0;
+ * if (cond->pol == ACL_COND_UNLESS)
+ * res = !res;
+ */
+enum acl_test_res acl_exec_cond(struct acl_cond *cond, struct proxy *px, struct session *sess, struct stream *strm, unsigned int opt)
+{
+ __label__ fetch_next;
+ struct acl_term_suite *suite;
+ struct acl_term *term;
+ struct acl_expr *expr;
+ struct acl *acl;
+ struct sample smp;
+ enum acl_test_res acl_res, suite_res, cond_res;
+
+ /* ACLs are iterated over all values, so let's always set the flag to
+ * indicate this to the fetch functions.
+ */
+ opt |= SMP_OPT_ITERATE;
+
+ /* We're doing a logical OR between conditions so we initialize to FAIL.
+ * The MISS status is propagated down from the suites.
+ */
+ cond_res = ACL_TEST_FAIL;
+ list_for_each_entry(suite, &cond->suites, list) {
+ /* Evaluate condition suite <suite>. We stop at the first term
+ * which returns ACL_TEST_FAIL. The MISS status is still propagated
+ * in case of uncertainty in the result.
+ */
+
+ /* we're doing a logical AND between terms, so we must set the
+ * initial value to PASS.
+ */
+ suite_res = ACL_TEST_PASS;
+ list_for_each_entry(term, &suite->terms, list) {
+ acl = term->acl;
+
+ /* FIXME: use cache !
+ * check acl->cache_idx for this.
+ */
+
+ /* ACL result not cached. Let's scan all the expressions
+ * and use the first one to match.
+ */
+ acl_res = ACL_TEST_FAIL;
+ list_for_each_entry(expr, &acl->expr, list) {
+ /* we need to reset context and flags */
+ memset(&smp, 0, sizeof(smp));
+ fetch_next:
+ if (!sample_process(px, sess, strm, opt, expr->smp, &smp)) {
+ /* maybe we could not fetch because of missing data */
+ if (smp.flags & SMP_F_MAY_CHANGE && !(opt & SMP_OPT_FINAL))
+ acl_res |= ACL_TEST_MISS;
+ continue;
+ }
+
+ acl_res |= pat2acl(pattern_exec_match(&expr->pat, &smp, 0));
+ /*
+ * OK now acl_res holds the result of this expression
+ * as one of ACL_TEST_FAIL, ACL_TEST_MISS or ACL_TEST_PASS.
+ *
+ * Then if (!MISS) we can cache the result, and put
+ * (smp.flags & SMP_F_VOLATILE) in the cache flags.
+ *
+ * FIXME: implement cache.
+ *
+ */
+
+ /* we're ORing these terms, so a single PASS is enough */
+ if (acl_res == ACL_TEST_PASS)
+ break;
+
+ if (smp.flags & SMP_F_NOT_LAST)
+ goto fetch_next;
+
+ /* sometimes we know the fetched data is subject to change
+ * later and give another chance for a new match (eg: request
+ * size, time, ...)
+ */
+ if (smp.flags & SMP_F_MAY_CHANGE && !(opt & SMP_OPT_FINAL))
+ acl_res |= ACL_TEST_MISS;
+ }
+ /*
+ * Here we have the result of an ACL (cached or not).
+ * ACLs are combined, negated or not, to form conditions.
+ */
+
+ if (term->neg)
+ acl_res = acl_neg(acl_res);
+
+ suite_res &= acl_res;
+
+ /* we're ANDing these terms, so a single FAIL or MISS is enough */
+ if (suite_res != ACL_TEST_PASS)
+ break;
+ }
+ cond_res |= suite_res;
+
+ /* we're ORing these terms, so a single PASS is enough */
+ if (cond_res == ACL_TEST_PASS)
+ break;
+ }
+ return cond_res;
+}
+
+/* Returns a pointer to the first ACL conflicting with usage at place <where>
+ * which is one of the SMP_VAL_* bits indicating a check place, or NULL if
+ * no conflict is found. Only full conflicts are detected (ACL is not usable).
+ * Use the next function to check for useless keywords.
+ */
+const struct acl *acl_cond_conflicts(const struct acl_cond *cond, unsigned int where)
+{
+ struct acl_term_suite *suite;
+ struct acl_term *term;
+ struct acl *acl;
+
+ list_for_each_entry(suite, &cond->suites, list) {
+ list_for_each_entry(term, &suite->terms, list) {
+ acl = term->acl;
+ if (!(acl->val & where))
+ return acl;
+ }
+ }
+ return NULL;
+}
+
+/* Returns a pointer to the first ACL and its first keyword to conflict with
+ * usage at place <where> which is one of the SMP_VAL_* bits indicating a check
+ * place. Returns true if a conflict is found, with <acl> and <kw> set (if non
+ * null), or false if not conflict is found. The first useless keyword is
+ * returned.
+ */
+int acl_cond_kw_conflicts(const struct acl_cond *cond, unsigned int where, struct acl const **acl, char const **kw)
+{
+ struct acl_term_suite *suite;
+ struct acl_term *term;
+ struct acl_expr *expr;
+
+ list_for_each_entry(suite, &cond->suites, list) {
+ list_for_each_entry(term, &suite->terms, list) {
+ list_for_each_entry(expr, &term->acl->expr, list) {
+ if (!(expr->smp->fetch->val & where)) {
+ if (acl)
+ *acl = term->acl;
+ if (kw)
+ *kw = expr->kw;
+ return 1;
+ }
+ }
+ }
+ }
+ return 0;
+}
+
+/*
+ * Find targets for userlists and groups in ACLs. The function returns the
+ * number of errors found, or 0 if everything is fine. It must only be called
+ * once sample
+ * fetch arguments have been resolved (after smp_resolve_args()).
+ */
+int acl_find_targets(struct proxy *p)
+{
+
+ struct acl *acl;
+ struct acl_expr *expr;
+ struct pattern_list *pattern;
+ int cfgerr = 0;
+ struct pattern_expr_list *pexp;
+
+ list_for_each_entry(acl, &p->acl, list) {
+ list_for_each_entry(expr, &acl->expr, list) {
+ if (!strcmp(expr->kw, "http_auth_group")) {
+ /* Note: the ARGT_USR argument may only have been resolved earlier
+ * by smp_resolve_args().
+ */
+ if (expr->smp->arg_p->unresolved) {
+ Alert("Internal bug in proxy %s: %sacl %s %s() makes use of unresolved userlist '%s'. Please report this.\n",
+ p->id, *acl->name ? "" : "anonymous ", acl->name, expr->kw, expr->smp->arg_p->data.str.str);
+ cfgerr++;
+ continue;
+ }
+
+ if (LIST_ISEMPTY(&expr->pat.head)) {
+ Alert("proxy %s: acl %s %s(): no groups specified.\n",
+ p->id, acl->name, expr->kw);
+ cfgerr++;
+ continue;
+ }
+
+ /* For each pattern, check if the group exists. */
+ list_for_each_entry(pexp, &expr->pat.head, list) {
+ if (LIST_ISEMPTY(&pexp->expr->patterns)) {
+ Alert("proxy %s: acl %s %s(): no groups specified.\n",
+ p->id, acl->name, expr->kw);
+ cfgerr++;
+ continue;
+ }
+
+ list_for_each_entry(pattern, &pexp->expr->patterns, list) {
+ /* this keyword only has one argument */
+ if (!check_group(expr->smp->arg_p->data.usr, pattern->pat.ptr.str)) {
+ Alert("proxy %s: acl %s %s(): invalid group '%s'.\n",
+ p->id, acl->name, expr->kw, pattern->pat.ptr.str);
+ cfgerr++;
+ }
+ }
+ }
+ }
+ }
+ }
+
+ return cfgerr;
+}
+
+/* initializes ACLs by resolving the sample fetch names they rely upon.
+ * Returns 0 on success, otherwise the number of errors encountered.
+ */
+int init_acl()
+{
+ int err = 0;
+ int index;
+ const char *name;
+ struct acl_kw_list *kwl;
+ struct sample_fetch *smp;
+
+ list_for_each_entry(kwl, &acl_keywords.list, list) {
+ for (index = 0; kwl->kw[index].kw != NULL; index++) {
+ name = kwl->kw[index].fetch_kw;
+ if (!name)
+ name = kwl->kw[index].kw;
+
+ smp = find_sample_fetch(name, strlen(name));
+ if (!smp) {
+ Alert("Critical internal error: ACL keyword '%s' relies on sample fetch '%s' which was not registered!\n",
+ kwl->kw[index].kw, name);
+ err++;
+ continue;
+ }
+ kwl->kw[index].smp = smp;
+ }
+ }
+ return err;
+}
+
+/************************************************************************/
+/* All supported sample and ACL keywords must be declared here. */
+/************************************************************************/
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Please take care of keeping this list alphabetically sorted.
+ */
+static struct acl_kw_list acl_kws = {ILH, {
+ { /* END */ },
+}};
+
+__attribute__((constructor))
+static void __acl_init(void)
+{
+ acl_register_keywords(&acl_kws);
+}
+
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Functions managing applets
+ *
+ * Copyright 2000-2015 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+
+#include <common/config.h>
+#include <common/mini-clist.h>
+#include <proto/applet.h>
+#include <proto/stream.h>
+#include <proto/stream_interface.h>
+
+struct list applet_active_queue = LIST_HEAD_INIT(applet_active_queue);
+struct list applet_run_queue = LIST_HEAD_INIT(applet_run_queue);
+
+void applet_run_active()
+{
+ struct appctx *curr;
+ struct stream_interface *si;
+
+ if (LIST_ISEMPTY(&applet_active_queue))
+ return;
+
+ /* move active queue to run queue */
+ applet_active_queue.n->p = &applet_run_queue;
+ applet_active_queue.p->n = &applet_run_queue;
+
+ applet_run_queue = applet_active_queue;
+ LIST_INIT(&applet_active_queue);
+
+ /* The list is only scanned from the head. This guarantees that if any
+ * applet removes another one, there is no side effect while walking
+ * through the list.
+ */
+ while (!LIST_ISEMPTY(&applet_run_queue)) {
+ curr = LIST_ELEM(applet_run_queue.n, typeof(curr), runq);
+ si = curr->owner;
+
+ /* now we'll need a buffer */
+ if (!stream_alloc_recv_buffer(si_ic(si))) {
+ si->flags |= SI_FL_WAIT_ROOM;
+ LIST_DEL(&curr->runq);
+ LIST_INIT(&curr->runq);
+ continue;
+ }
+
+ /* We always pretend the applet can't get and doesn't want to
+ * put, it's up to it to change this if needed. This ensures
+ * that one applet which ignores any event will not spin.
+ */
+ si_applet_cant_get(si);
+ si_applet_stop_put(si);
+
+ curr->applet->fct(curr);
+ si_applet_wake_cb(si);
+
+ if (applet_run_queue.n == &curr->runq) {
+ /* curr was left in the list, move it back to the active list */
+ LIST_DEL(&curr->runq);
+ LIST_ADDQ(&applet_active_queue, &curr->runq);
+ }
+ }
+}
--- /dev/null
+/*
+ * Functions used to parse typed argument lists
+ *
+ * Copyright 2012 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <arpa/inet.h>
+
+#include <common/standard.h>
+#include <proto/arg.h>
+
+const char *arg_type_names[ARGT_NBTYPES] = {
+ [ARGT_STOP] = "end of arguments",
+ [ARGT_SINT] = "integer",
+ [ARGT_STR] = "string",
+ [ARGT_IPV4] = "IPv4 address",
+ [ARGT_MSK4] = "IPv4 mask",
+ [ARGT_IPV6] = "IPv6 address",
+ [ARGT_MSK6] = "IPv6 mask",
+ [ARGT_TIME] = "delay",
+ [ARGT_SIZE] = "size",
+ [ARGT_FE] = "frontend",
+ [ARGT_BE] = "backend",
+ [ARGT_TAB] = "table",
+ [ARGT_SRV] = "server",
+ [ARGT_USR] = "user list",
+ [ARGT_MAP] = "map",
+ [ARGT_REG] = "regex",
+ [ARGT_VAR] = "variable",
+ /* Unassigned types must never happen. Better crash during parsing if they do. */
+};
+
+/* This dummy arg list may be used by default when no arg is found, it helps
+ * parsers by removing pointer checks.
+ */
+struct arg empty_arg_list[ARGM_NBARGS] = { };
+
+/* This function clones a struct arg_list template into a new one which is
+ * returned.
+ */
+struct arg_list *arg_list_clone(const struct arg_list *orig)
+{
+ struct arg_list *new;
+
+ if ((new = calloc(1, sizeof(*new))) != NULL) {
+ /* ->list will be set by the caller when inserting the element.
+ * ->arg and ->arg_pos will be set by the caller.
+ */
+ new->ctx = orig->ctx;
+ new->kw = orig->kw;
+ new->conv = orig->conv;
+ new->file = orig->file;
+ new->line = orig->line;
+ }
+ return new;
+}
+
+/* This function clones a struct <arg_list> template into a new one which is
+ * set to point to arg <arg> at pos <pos>, and which is returned if the caller
+ * wants to apply further changes.
+ */
+struct arg_list *arg_list_add(struct arg_list *orig, struct arg *arg, int pos)
+{
+ struct arg_list *new;
+
+ new = arg_list_clone(orig);
+ if (!new)
+ return NULL;
+ new->arg = arg;
+ new->arg_pos = pos;
+ LIST_ADDQ(&orig->list, &new->list);
+ return new;
+}
+
+/* This function builds an argument list from a config line. It returns the
+ * number of arguments found, or <0 in case of any error. Everything needed
+ * it automatically allocated. A pointer to an error message might be returned
+ * in err_msg if not NULL, in which case it would be allocated and the caller
+ * will have to check it and free it. The output arg list is returned in argp
+ * which must be valid. The returned array is always terminated by an arg of
+ * type ARGT_STOP (0), unless the mask indicates that no argument is supported.
+ * Unresolved arguments are appended to arg list <al>, which also serves as a
+ * template to create new entries. The mask is composed of a number of
+ * mandatory arguments in its lower ARGM_BITS bits, and a concatenation of each
+ * argument type in each subsequent ARGT_BITS-bit block. If <err_msg> is not
+ * NULL, it must point to a freeable or NULL pointer.
+ */
+int make_arg_list(const char *in, int len, unsigned int mask, struct arg **argp,
+ char **err_msg, const char **err_ptr, int *err_arg,
+ struct arg_list *al)
+{
+ int nbarg;
+ int pos;
+ struct arg *arg;
+ const char *beg;
+ char *word = NULL;
+ const char *ptr_err = NULL;
+ int min_arg;
+
+ *argp = NULL;
+
+ min_arg = mask & ARGM_MASK;
+ mask >>= ARGM_BITS;
+
+ pos = 0;
+ /* find between 0 and NBARGS the max number of args supported by the mask */
+ for (nbarg = 0; nbarg < ARGM_NBARGS && ((mask >> (nbarg * ARGT_BITS)) & ARGT_MASK); nbarg++);
+
+ if (!nbarg)
+ goto end_parse;
+
+ /* Note: an empty input string contains an empty argument if this argument
+ * is marked mandatory. Otherwise we can ignore it.
+ */
+ if (!len && !min_arg)
+ goto end_parse;
+
+ arg = *argp = calloc(nbarg + 1, sizeof(*arg));
+ if (!arg) {
+ memprintf(err_msg, "out of memory while parsing arguments");
+ goto err;
+ }
+
+ /* Note: empty arguments after a comma always exist. */
+ while (pos < nbarg) {
+ unsigned int uint;
+
+ beg = in;
+ while (len && *in != ',') {
+ in++;
+ len--;
+ }
+
+ /* we have a new argument between <beg> and <in> (not included).
+ * For ease of handling, we copy it into a zero-terminated word.
+ * By default, the output argument will be the same type as the
+ * expected one.
+ */
+ free(word);
+ word = my_strndup(beg, in - beg);
+
+ arg->type = (mask >> (pos * ARGT_BITS)) & ARGT_MASK;
+
+ switch (arg->type) {
+ case ARGT_SINT:
+ if (in == beg) // empty number
+ goto empty_err;
+ arg->data.sint = read_int64(&beg, in);
+ if (beg < in)
+ goto parse_err;
+ arg->type = ARGT_SINT;
+ break;
+
+ case ARGT_FE:
+ case ARGT_BE:
+ case ARGT_TAB:
+ case ARGT_SRV:
+ case ARGT_USR:
+ case ARGT_REG:
+ /* These argument types need to be stored as strings during
+ * parsing then resolved later.
+ */
+ arg->unresolved = 1;
+ arg_list_add(al, arg, pos);
+
+ /* fall through */
+ case ARGT_STR:
+ /* all types that must be resolved are stored as strings
+ * during the parsing. The caller must at one point resolve
+ * them and free the string.
+ */
+ arg->data.str.str = word;
+ arg->data.str.len = in - beg;
+ arg->data.str.size = arg->data.str.len + 1;
+ word = NULL;
+ break;
+
+ case ARGT_IPV4:
+ if (in == beg) // empty address
+ goto empty_err;
+
+ if (inet_pton(AF_INET, word, &arg->data.ipv4) <= 0)
+ goto parse_err;
+ break;
+
+ case ARGT_MSK4:
+ if (in == beg) // empty mask
+ goto empty_err;
+
+ if (!str2mask(word, &arg->data.ipv4))
+ goto parse_err;
+
+ arg->type = ARGT_IPV4;
+ break;
+
+ case ARGT_IPV6:
+ if (in == beg) // empty address
+ goto empty_err;
+
+ if (inet_pton(AF_INET6, word, &arg->data.ipv6) <= 0)
+ goto parse_err;
+ break;
+
+ case ARGT_MSK6: /* not yet implemented */
+ goto not_impl;
+
+ case ARGT_TIME:
+ if (in == beg) // empty time
+ goto empty_err;
+
+ ptr_err = parse_time_err(word, &uint, TIME_UNIT_MS);
+ if (ptr_err)
+ goto parse_err;
+ arg->data.sint = uint;
+ arg->type = ARGT_SINT;
+ break;
+
+ case ARGT_SIZE:
+ if (in == beg) // empty size
+ goto empty_err;
+
+ ptr_err = parse_size_err(word, &uint);
+ if (ptr_err)
+ goto parse_err;
+
+ arg->data.sint = uint;
+ arg->type = ARGT_SINT;
+ break;
+
+ /* FIXME: other types need to be implemented here */
+ default:
+ goto not_impl;
+ }
+
+ pos++;
+ arg++;
+
+ /* don't go back to parsing if we reached end */
+ if (!len || pos >= nbarg)
+ break;
+
+ /* skip comma */
+ in++; len--;
+ }
+
+ end_parse:
+ free(word); word = NULL;
+
+ if (pos < min_arg) {
+ /* not enough arguments */
+ memprintf(err_msg,
+ "missing arguments (got %d/%d), type '%s' expected",
+ pos, min_arg, arg_type_names[(mask >> (pos * ARGT_BITS)) & ARGT_MASK]);
+ goto err;
+ }
+
+ if (len) {
+ /* too many arguments, starting at <in> */
+ /* the caller is responsible for freeing this message */
+ word = my_strndup(in, len);
+ if (nbarg)
+ memprintf(err_msg, "end of arguments expected at position %d, but got '%s'",
+ pos + 1, word);
+ else
+ memprintf(err_msg, "no argument supported, but got '%s'", word);
+ free(word); word = NULL;
+ goto err;
+ }
+
+ /* note that pos might be < nbarg and this is not an error, it's up to the
+ * caller to decide what to do with optional args.
+ */
+ if (err_arg)
+ *err_arg = pos;
+ if (err_ptr)
+ *err_ptr = in;
+ return pos;
+
+ err:
+ free(word);
+ free(*argp);
+ *argp = NULL;
+ if (err_arg)
+ *err_arg = pos;
+ if (err_ptr)
+ *err_ptr = in;
+ return -1;
+
+ empty_err:
+ memprintf(err_msg, "expected type '%s' at position %d, but got nothing",
+ arg_type_names[(mask >> (pos * ARGT_BITS)) & ARGT_MASK], pos + 1);
+ goto err;
+
+ parse_err:
+ memprintf(err_msg, "failed to parse '%s' as type '%s' at position %d",
+ word, arg_type_names[(mask >> (pos * ARGT_BITS)) & ARGT_MASK], pos + 1);
+ goto err;
+
+ not_impl:
+ memprintf(err_msg, "parsing for type '%s' was not implemented, please report this bug",
+ arg_type_names[(mask >> (pos * ARGT_BITS)) & ARGT_MASK]);
+ goto err;
+}
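+
+/* Example (illustration only, not in the original source): the <mask>
+ * argument packs the number of mandatory arguments in its lowest ARGM_BITS
+ * bits, followed by one ARGT_BITS-bit type per position. A keyword taking
+ * one mandatory integer and one optional string would thus pass a mask
+ * equivalent to:
+ *
+ *   mask = 1                                       // one mandatory argument
+ *        | (ARGT_SINT << ARGM_BITS)                // arg 1: integer
+ *        | (ARGT_STR << (ARGM_BITS + ARGT_BITS));  // arg 2: string
+ *
+ * and make_arg_list("10,foo", 6, mask, &argp, ...) would return 2, with the
+ * returned array terminated by an ARGT_STOP entry.
+ */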
--- /dev/null
+/*
+ * User authentication & authorization
+ *
+ * Copyright 2010 Krzysztof Piotr Oledzki <ole@ans.pl>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#ifdef CONFIG_HAP_CRYPT
+/* This is to have crypt() defined on Linux */
+#define _GNU_SOURCE
+
+#ifdef NEED_CRYPT_H
+/* some platforms such as Solaris need this */
+#include <crypt.h>
+#endif
+#endif /* CONFIG_HAP_CRYPT */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+
+#include <common/config.h>
+#include <common/errors.h>
+
+#include <proto/acl.h>
+#include <proto/log.h>
+
+#include <types/auth.h>
+#include <types/pattern.h>
+
+struct userlist *userlist = NULL; /* list of all existing userlists */
+
+/* Find the userlist <name>. The function returns a pointer to
+ * the userlist struct or NULL if name is NULL/empty or unresolvable.
+ */
+
+struct userlist *
+auth_find_userlist(char *name)
+{
+ struct userlist *l;
+
+ if (!name || !*name)
+ return NULL;
+
+ for (l = userlist; l; l = l->next)
+ if (!strcmp(l->name, name))
+ return l;
+
+ return NULL;
+}
+
+int check_group(struct userlist *ul, char *name)
+{
+ struct auth_groups *ag;
+
+ for (ag = ul->groups; ag; ag = ag->next)
+ if (strcmp(name, ag->name) == 0)
+ return 1;
+ return 0;
+}
+
+void
+userlist_free(struct userlist *ul)
+{
+ struct userlist *tul;
+ struct auth_users *au, *tau;
+ struct auth_groups_list *agl, *tagl;
+ struct auth_groups *ag, *tag;
+
+ while (ul) {
+ /* Free users. */
+ au = ul->users;
+ while (au) {
+ /* Free groups that own current user. */
+ agl = au->u.groups;
+ while (agl) {
+ tagl = agl;
+ agl = agl->next;
+ free(tagl);
+ }
+
+ tau = au;
+ au = au->next;
+ free(tau->user);
+ free(tau->pass);
+ free(tau);
+ }
+
+ /* Free grouplist. */
+ ag = ul->groups;
+ while (ag) {
+ tag = ag;
+ ag = ag->next;
+ free(tag->name);
+ free(tag);
+ }
+
+ tul = ul;
+ ul = ul->next;
+ free(tul->name);
+ free(tul);
+ }
+}
+
+int userlist_postinit()
+{
+ struct userlist *curuserlist = NULL;
+
+ /* Resolve usernames and groupnames. */
+ for (curuserlist = userlist; curuserlist; curuserlist = curuserlist->next) {
+ struct auth_groups *ag;
+ struct auth_users *curuser;
+ struct auth_groups_list *grl;
+
+ for (curuser = curuserlist->users; curuser; curuser = curuser->next) {
+ char *group = NULL;
+ struct auth_groups_list *groups = NULL;
+
+ if (!curuser->u.groups_names)
+ continue;
+
+ while ((group = strtok(group?NULL:curuser->u.groups_names, ","))) {
+ for (ag = curuserlist->groups; ag; ag = ag->next) {
+ if (!strcmp(ag->name, group))
+ break;
+ }
+
+ if (!ag) {
+ Alert("userlist '%s': no such group '%s' specified in user '%s'\n",
+ curuserlist->name, group, curuser->user);
+ free(groups);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ /* Add this group to the user's group list. */
+ grl = calloc(1, sizeof(*grl));
+ if (!grl) {
+ Alert("userlist '%s': no more memory when trying to allocate the user groups.\n",
+ curuserlist->name);
+ free(groups);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ grl->group = ag;
+ grl->next = groups;
+ groups = grl;
+ }
+
+ free(curuser->u.groups);
+ curuser->u.groups = groups;
+ }
+
+ for (ag = curuserlist->groups; ag; ag = ag->next) {
+ char *user = NULL;
+
+ if (!ag->groupusers)
+ continue;
+
+ while ((user = strtok(user?NULL:ag->groupusers, ","))) {
+ for (curuser = curuserlist->users; curuser; curuser = curuser->next) {
+ if (!strcmp(curuser->user, user))
+ break;
+ }
+
+ if (!curuser) {
+ Alert("userlist '%s': no such user '%s' specified in group '%s'\n",
+ curuserlist->name, user, ag->name);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ /* Add this group to the user's group list. */
+ grl = calloc(1, sizeof(*grl));
+ if (!grl) {
+ Alert("userlist '%s': no more memory when trying to allocate the user groups.\n",
+ curuserlist->name);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ grl->group = ag;
+ grl->next = curuser->u.groups;
+ curuser->u.groups = grl;
+ }
+
+ free(ag->groupusers);
+ ag->groupusers = NULL;
+ }
+
+#ifdef DEBUG_AUTH
+ for (ag = curuserlist->groups; ag; ag = ag->next) {
+ struct auth_groups_list *agl;
+
+ fprintf(stderr, "group %s, id %p, users:", ag->name, ag);
+ for (curuser = curuserlist->users; curuser; curuser = curuser->next) {
+ for (agl = curuser->u.groups; agl; agl = agl->next) {
+ if (agl->group == ag)
+ fprintf(stderr, " %s", curuser->user);
+ }
+ }
+ fprintf(stderr, "\n");
+ }
+#endif
+ }
+
+ return ERR_NONE;
+}
+
+/*
+ * Authenticate and authorize user; return 1 if OK, 0 in case of error.
+ */
+int
+check_user(struct userlist *ul, const char *user, const char *pass)
+{
+
+ struct auth_users *u;
+#ifdef DEBUG_AUTH
+ struct auth_groups_list *agl;
+#endif
+ const char *ep;
+
+#ifdef DEBUG_AUTH
+ fprintf(stderr, "req: userlist=%s, user=%s, pass=%s\n",
+ ul->name, user, pass);
+#endif
+
+ for (u = ul->users; u; u = u->next)
+ if (!strcmp(user, u->user))
+ break;
+
+ if (!u)
+ return 0;
+
+#ifdef DEBUG_AUTH
+ fprintf(stderr, "cfg: user=%s, pass=%s, flags=%X, groups=",
+ u->user, u->pass, u->flags);
+ for (agl = u->u.groups; agl; agl = agl->next)
+ fprintf(stderr, " %s", agl->group->name);
+#endif
+
+ if (!(u->flags & AU_O_INSECURE)) {
+#ifdef CONFIG_HAP_CRYPT
+ ep = crypt(pass, u->pass);
+#else
+ return 0;
+#endif
+ } else
+ ep = pass;
+
+#ifdef DEBUG_AUTH
+ fprintf(stderr, ", crypt=%s\n", ep);
+#endif
+
+ if (ep && strcmp(ep, u->pass) == 0)
+ return 1;
+ else
+ return 0;
+}
+
+struct pattern *
+pat_match_auth(struct sample *smp, struct pattern_expr *expr, int fill)
+{
+ struct userlist *ul = smp->ctx.a[0];
+ struct pattern_list *lst;
+ struct auth_users *u;
+ struct auth_groups_list *agl;
+ struct pattern *pattern;
+
+ /* Check if the userlist is present in the context data. */
+ if (!ul)
+ return NULL;
+
+ /* Browse the userlist, looking for the requested user. */
+ for (u = ul->users; u; u = u->next) {
+ if (strcmp(smp->data.u.str.str, u->user) == 0)
+ break;
+ }
+ if (!u)
+ return NULL;
+
+ /* Browse each pattern. */
+ list_for_each_entry(lst, &expr->patterns, list) {
+ pattern = &lst->pat;
+
+ /* Browse each group, looking for a group name that matches the pattern. */
+ for (agl = u->u.groups; agl; agl = agl->next) {
+ if (strcmp(agl->group->name, pattern->ptr.str) == 0)
+ return pattern;
+ }
+ }
+ return NULL;
+}
--- /dev/null
+/*
+ * Backend variables and functions.
+ *
+ * Copyright 2000-2013 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <errno.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <syslog.h>
+#include <string.h>
+#include <ctype.h>
+#include <sys/types.h>
+
+#include <common/buffer.h>
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/debug.h>
+#include <common/hash.h>
+#include <common/ticks.h>
+#include <common/time.h>
+#include <common/namespace.h>
+
+#include <types/global.h>
+
+#include <proto/acl.h>
+#include <proto/arg.h>
+#include <proto/backend.h>
+#include <proto/channel.h>
+#include <proto/frontend.h>
+#include <proto/lb_chash.h>
+#include <proto/lb_fas.h>
+#include <proto/lb_fwlc.h>
+#include <proto/lb_fwrr.h>
+#include <proto/lb_map.h>
+#include <proto/log.h>
+#include <proto/obj_type.h>
+#include <proto/payload.h>
+#include <proto/protocol.h>
+#include <proto/proto_http.h>
+#include <proto/proto_tcp.h>
+#include <proto/proxy.h>
+#include <proto/queue.h>
+#include <proto/sample.h>
+#include <proto/server.h>
+#include <proto/stream.h>
+#include <proto/raw_sock.h>
+#include <proto/stream_interface.h>
+#include <proto/task.h>
+
+#ifdef USE_OPENSSL
+#include <proto/ssl_sock.h>
+#endif /* USE_OPENSSL */
+
+int be_lastsession(const struct proxy *be)
+{
+ if (be->be_counters.last_sess)
+ return now.tv_sec - be->be_counters.last_sess;
+
+ return -1;
+}
+
+/* helper function to invoke the correct hash method */
+static unsigned int gen_hash(const struct proxy* px, const char* key, unsigned long len)
+{
+ unsigned int hash;
+
+ switch (px->lbprm.algo & BE_LB_HASH_FUNC) {
+ case BE_LB_HFCN_DJB2:
+ hash = hash_djb2(key, len);
+ break;
+ case BE_LB_HFCN_WT6:
+ hash = hash_wt6(key, len);
+ break;
+ case BE_LB_HFCN_CRC32:
+ hash = hash_crc32(key, len);
+ break;
+ case BE_LB_HFCN_SDBM:
+ /* this is the default hash function */
+ default:
+ hash = hash_sdbm(key, len);
+ break;
+ }
+
+ return hash;
+}
+
+/*
+ * This function recounts the number of usable active and backup servers for
+ * proxy <px>. These numbers are stored in px->srv_act and px->srv_bck.
+ * This function also recomputes the total active and backup weights. However,
+ * it does not update tot_weight nor tot_used. Use update_backend_weight() for
+ * this.
+ */
+void recount_servers(struct proxy *px)
+{
+ struct server *srv;
+
+ px->srv_act = px->srv_bck = 0;
+ px->lbprm.tot_wact = px->lbprm.tot_wbck = 0;
+ px->lbprm.fbck = NULL;
+ for (srv = px->srv; srv != NULL; srv = srv->next) {
+ if (!srv_is_usable(srv))
+ continue;
+
+ if (srv->flags & SRV_F_BACKUP) {
+ if (!px->srv_bck &&
+ !(px->options & PR_O_USE_ALL_BK))
+ px->lbprm.fbck = srv;
+ px->srv_bck++;
+ px->lbprm.tot_wbck += srv->eweight;
+ } else {
+ px->srv_act++;
+ px->lbprm.tot_wact += srv->eweight;
+ }
+ }
+}
+
+/* This function simply updates the backend's tot_weight and tot_used values
+ * after servers weights have been updated. It is designed to be used after
+ * recount_servers() or equivalent.
+ */
+void update_backend_weight(struct proxy *px)
+{
+ if (px->srv_act) {
+ px->lbprm.tot_weight = px->lbprm.tot_wact;
+ px->lbprm.tot_used = px->srv_act;
+ }
+ else if (px->lbprm.fbck) {
+ /* use only the first backup server */
+ px->lbprm.tot_weight = px->lbprm.fbck->eweight;
+ px->lbprm.tot_used = 1;
+ }
+ else {
+ px->lbprm.tot_weight = px->lbprm.tot_wbck;
+ px->lbprm.tot_used = px->srv_bck;
+ }
+}
+
+/*
+ * This function tries to find a running server for the proxy <px> following
+ * the source hash method. Depending on the number of active/backup servers,
+ * it will either look for active servers, or for backup servers.
+ * If any server is found, it will be returned. If no valid server is found,
+ * NULL is returned.
+ */
+struct server *get_server_sh(struct proxy *px, const char *addr, int len)
+{
+ unsigned int h, l;
+
+ if (px->lbprm.tot_weight == 0)
+ return NULL;
+
+ l = h = 0;
+
+ /* note: we won't hash if there's only one server left */
+ if (px->lbprm.tot_used == 1)
+ goto hash_done;
+
+ while ((l + sizeof (int)) <= len) {
+ h ^= ntohl(*(unsigned int *)(&addr[l]));
+ l += sizeof (int);
+ }
+ if ((px->lbprm.algo & BE_LB_HASH_MOD) == BE_LB_HMOD_AVAL)
+ h = full_hash(h);
+ hash_done:
+ if (px->lbprm.algo & BE_LB_LKUP_CHTREE)
+ return chash_get_server_hash(px, h);
+ else
+ return map_get_server_hash(px, h);
+}
+
+/*
+ * This function tries to find a running server for the proxy <px> following
+ * the URI hash method. In order to optimize cache hits, the hash computation
+ * ends at the question mark. Depending on the number of active/backup servers,
+ * it will either look for active servers, or for backup servers.
+ * If any server is found, it will be returned. If no valid server is found,
+ * NULL is returned.
+ *
+ * This code was contributed by Guillaume Dallaire, who also selected this
+ * hash algorithm out of tens of candidates because it gave him the best
+ * results.
+ */
+struct server *get_server_uh(struct proxy *px, char *uri, int uri_len)
+{
+ unsigned int hash = 0;
+ int c;
+ int slashes = 0;
+ const char *start, *end;
+
+ if (px->lbprm.tot_weight == 0)
+ return NULL;
+
+ /* note: we won't hash if there's only one server left */
+ if (px->lbprm.tot_used == 1)
+ goto hash_done;
+
+ if (px->uri_len_limit)
+ uri_len = MIN(uri_len, px->uri_len_limit);
+
+ start = end = uri;
+ while (uri_len--) {
+ c = *end;
+ if (c == '/') {
+ slashes++;
+ if (slashes == px->uri_dirs_depth1) /* depth+1 */
+ break;
+ }
+ else if (c == '?' && !px->uri_whole)
+ break;
+ end++;
+ }
+
+ hash = gen_hash(px, start, (end - start));
+
+ if ((px->lbprm.algo & BE_LB_HASH_MOD) == BE_LB_HMOD_AVAL)
+ hash = full_hash(hash);
+ hash_done:
+ if (px->lbprm.algo & BE_LB_LKUP_CHTREE)
+ return chash_get_server_hash(px, hash);
+ else
+ return map_get_server_hash(px, hash);
+}
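The boundary scan in get_server_uh() can be isolated for illustration: hashing stops at the question mark (unless the whole URI is hashed) or once a configured directory depth is reached. A standalone sketch under those assumptions (name and signature are ours):

```c
/* Illustrative sketch: return the length of the URI prefix that would be
 * hashed, given a directory-depth limit (0 = unlimited; the stored value is
 * depth+1, as in px->uri_dirs_depth1) and whether the query string takes
 * part in the hash. Mirrors the scan loop in get_server_uh(). */
static int hashed_uri_len(const char *uri, int uri_len, int dirs_depth1, int hash_whole)
{
	const char *end = uri;
	int slashes = 0;

	while (uri_len--) {
		int c = *end;

		if (c == '/') {
			slashes++;
			if (slashes == dirs_depth1) /* depth+1 */
				break;
		}
		else if (c == '?' && !hash_whole)
			break;
		end++;
	}
	return end - uri;
}
```

So with "balance uri depth 1" (dirs_depth1 == 2), "/a/b/c" hashes only "/a", which keeps every object below one top-level directory on the same server.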
+
+/*
+ * This function tries to find a running server for the proxy <px> following
+ * the URL parameter hash method. It looks for a specific parameter in the
+ * URL and hashes it to compute the server ID. This is useful to optimize
+ * performance by avoiding bounces between servers in contexts where sessions
+ * are shared but cookies are not usable. If the parameter is not found, or if
+ * no valid server is available, NULL is returned; otherwise the selected
+ * server is returned.
+ */
+struct server *get_server_ph(struct proxy *px, const char *uri, int uri_len)
+{
+ unsigned int hash = 0;
+ const char *start, *end;
+ const char *p;
+ const char *params;
+ int plen;
+
+ /* when tot_weight is 0 then so is srv_count */
+ if (px->lbprm.tot_weight == 0)
+ return NULL;
+
+ if ((p = memchr(uri, '?', uri_len)) == NULL)
+ return NULL;
+
+ p++;
+
+ uri_len -= (p - uri);
+ plen = px->url_param_len;
+ params = p;
+
+ while (uri_len > plen) {
+ /* Look for the parameter name followed by an equal symbol */
+ if (params[plen] == '=') {
+ if (memcmp(params, px->url_param_name, plen) == 0) {
+ /* OK, we have the parameter here at <params>, and
+ * the value after the equal sign, at <p>
+ * skip the equal symbol
+ */
+ p += plen + 1;
+ start = end = p;
+ uri_len -= plen + 1;
+
+ while (uri_len && *end != '&') {
+ uri_len--;
+ end++;
+ }
+ hash = gen_hash(px, start, (end - start));
+
+ if ((px->lbprm.algo & BE_LB_HASH_MOD) == BE_LB_HMOD_AVAL)
+ hash = full_hash(hash);
+
+ if (px->lbprm.algo & BE_LB_LKUP_CHTREE)
+ return chash_get_server_hash(px, hash);
+ else
+ return map_get_server_hash(px, hash);
+ }
+ }
+ /* skip to next parameter */
+ p = memchr(params, '&', uri_len);
+ if (!p)
+ return NULL;
+ p++;
+ uri_len -= (p - params);
+ params = p;
+ }
+ return NULL;
+}
+
+/*
+ * This does the same as get_server_ph() above, but checks the body contents.
+ */
+struct server *get_server_ph_post(struct stream *s)
+{
+ unsigned int hash = 0;
+ struct http_txn *txn = s->txn;
+ struct channel *req = &s->req;
+ struct http_msg *msg = &txn->req;
+ struct proxy *px = s->be;
+ unsigned int plen = px->url_param_len;
+ unsigned long len = http_body_bytes(msg);
+ const char *params = b_ptr(req->buf, -http_data_rewind(msg));
+ const char *p = params;
+ const char *start, *end;
+
+ if (len == 0)
+ return NULL;
+
+ if (len > req->buf->data + req->buf->size - p)
+ len = req->buf->data + req->buf->size - p;
+
+ if (px->lbprm.tot_weight == 0)
+ return NULL;
+
+ while (len > plen) {
+ /* Look for the parameter name followed by an equal symbol */
+ if (params[plen] == '=') {
+ if (memcmp(params, px->url_param_name, plen) == 0) {
+ /* OK, we have the parameter here at <params>, and
+ * the value after the equal sign, at <p>
+ * skip the equal symbol
+ */
+ p += plen + 1;
+ start = end = p;
+ len -= plen + 1;
+
+ while (len && *end != '&') {
+ if (unlikely(!HTTP_IS_TOKEN(*p))) {
+ /* if in a POST, body must be URI encoded or it's not a URI.
+ * Do not interpret any possible binary data as a parameter.
+ */
+ if (likely(HTTP_IS_LWS(*p))) /* eol, uncertain uri len */
+ break;
+ return NULL; /* oh, no; this is not uri-encoded.
+ * This body does not contain parameters.
+ */
+ }
+ len--;
+ end++;
+ /* should we break if vlen exceeds limit? */
+ }
+ hash = gen_hash(px, start, (end - start));
+
+ if ((px->lbprm.algo & BE_LB_HASH_MOD) == BE_LB_HMOD_AVAL)
+ hash = full_hash(hash);
+
+ if (px->lbprm.algo & BE_LB_LKUP_CHTREE)
+ return chash_get_server_hash(px, hash);
+ else
+ return map_get_server_hash(px, hash);
+ }
+ }
+ /* skip to next parameter */
+ p = memchr(params, '&', len);
+ if (!p)
+ return NULL;
+ p++;
+ len -= (p - params);
+ params = p;
+ }
+ return NULL;
+}
+
+
+/*
+ * This function tries to find a running server for the proxy <px> following
+ * the header parameter hash method. It looks for a specific header in the
+ * request and hashes its value to compute the server ID. This is useful to
+ * optimize performance by avoiding bounces between servers in contexts where
+ * sessions are shared but cookies are not usable. If the header is not found,
+ * or if no valid server is available, NULL is returned; otherwise the
+ * selected server is returned.
+ */
+struct server *get_server_hh(struct stream *s)
+{
+ unsigned int hash = 0;
+ struct http_txn *txn = s->txn;
+ struct proxy *px = s->be;
+ unsigned int plen = px->hh_len;
+ unsigned long len;
+ struct hdr_ctx ctx;
+ const char *p;
+ const char *start, *end;
+
+ /* tot_weight appears to mean srv_count */
+ if (px->lbprm.tot_weight == 0)
+ return NULL;
+
+ ctx.idx = 0;
+
+ /* if the message is chunked, we skip the chunk size, but use the value as len */
+ http_find_header2(px->hh_name, plen, b_ptr(s->req.buf, -http_hdr_rewind(&txn->req)), &txn->hdr_idx, &ctx);
+
+ /* if the header is not found or empty, let's fallback to round robin */
+ if (!ctx.idx || !ctx.vlen)
+ return NULL;
+
+ /* note: we won't hash if there's only one server left */
+ if (px->lbprm.tot_used == 1)
+ goto hash_done;
+
+ /* Found the hh_name in the headers; we will compute the hash
+ * based on its value, located at ctx.val.
+ */
+ len = ctx.vlen;
+ p = (char *)ctx.line + ctx.val;
+ if (!px->hh_match_domain) {
+ hash = gen_hash(px, p, len);
+ } else {
+ int dohash = 0;
+ p += len;
+ /* Special computation: use only the main domain name, not the
+ * tld/host part. Going backwards from the end of the string,
+ * start hashing at the first dot and stop at the next one.
+ * This is designed to work with the 'Host' header, and requires
+ * a special option to activate it.
+ */
+ end = p;
+ while (len) {
+ if (dohash) {
+ /* Rewind the pointer until the previous char
+ * is a dot, which marks the start position
+ * of the domain. */
+ if (*(p - 1) == '.')
+ break;
+ }
+ else if (*p == '.') {
+ /* The pointer is rewound to the dot before the
+ * tld; we memorize the end of the domain and
+ * can enter the domain processing. */
+ end = p;
+ dohash = 1;
+ }
+ p--;
+ len--;
+ }
+ start = p;
+ hash = gen_hash(px, start, (end - start));
+ }
+ if ((px->lbprm.algo & BE_LB_HASH_MOD) == BE_LB_HMOD_AVAL)
+ hash = full_hash(hash);
+ hash_done:
+ if (px->lbprm.algo & BE_LB_LKUP_CHTREE)
+ return chash_get_server_hash(px, hash);
+ else
+ return map_get_server_hash(px, hash);
+}
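The backward scan in the hh_match_domain branch above is subtle. A standalone sketch of the same idea (helper name is ours) shows which part of the Host value ends up hashed, so that e.g. "www.example.com" and "static.example.com" land on the same server:

```c
#include <string.h>

/* Illustrative sketch of the domain isolation in get_server_hh(): walk the
 * Host value backwards, note the dot before the tld as the end of the label,
 * then keep rewinding until the previous char is a dot (or the string starts).
 * Stores the label start in *out and returns its length. */
static int main_domain_label(const char *host, int len, const char **out)
{
	const char *p = host + len; /* one past the last char */
	const char *end = p;
	int seen_dot = 0;

	while (len) {
		if (seen_dot) {
			if (*(p - 1) == '.')
				break;          /* previous char is a dot: label starts here */
		}
		else if (*(p - 1) == '.') {
			end = p - 1;            /* dot before the tld: label ends here */
			seen_dot = 1;
		}
		p--;
		len--;
	}
	*out = p;
	return end - p;
}
```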
+
+/* RDP Cookie HASH. */
+struct server *get_server_rch(struct stream *s)
+{
+ unsigned int hash = 0;
+ struct proxy *px = s->be;
+ unsigned long len;
+ int ret;
+ struct sample smp;
+ int rewind;
+
+ /* tot_weight appears to mean srv_count */
+ if (px->lbprm.tot_weight == 0)
+ return NULL;
+
+ memset(&smp, 0, sizeof(smp));
+
+ b_rew(s->req.buf, rewind = s->req.buf->o);
+
+ ret = fetch_rdp_cookie_name(s, &smp, px->hh_name, px->hh_len);
+ len = smp.data.u.str.len;
+
+ b_adv(s->req.buf, rewind);
+
+ if (ret == 0 || (smp.flags & SMP_F_MAY_CHANGE) || len == 0)
+ return NULL;
+
+ /* note: we won't hash if there's only one server left */
+ if (px->lbprm.tot_used == 1)
+ goto hash_done;
+
+ /* Found the RDP cookie; we will compute the hash
+ * based on its value.
+ */
+ hash = gen_hash(px, smp.data.u.str.str, len);
+
+ if ((px->lbprm.algo & BE_LB_HASH_MOD) == BE_LB_HMOD_AVAL)
+ hash = full_hash(hash);
+ hash_done:
+ if (px->lbprm.algo & BE_LB_LKUP_CHTREE)
+ return chash_get_server_hash(px, hash);
+ else
+ return map_get_server_hash(px, hash);
+}
+
+/*
+ * This function applies the load-balancing algorithm to the stream, as
+ * defined by the backend it is assigned to. The stream is then marked as
+ * 'assigned'.
+ *
+ * This function MAY NOT be called with SF_ASSIGNED already set. If the stream
+ * had a server previously assigned, it is rebalanced, trying to avoid the same
+ * server, which should still be present in target_srv(&s->target) before the call.
+ * The function tries to keep the original connection slot if it reconnects to
+ * the same server, otherwise it releases it and tries to offer it.
+ *
+ * It is illegal to call this function with a stream in a queue.
+ *
+ * It may return :
+ * SRV_STATUS_OK if everything is OK. ->srv and ->target are assigned.
+ * SRV_STATUS_NOSRV if no server is available. Stream is not ASSIGNED
+ * SRV_STATUS_FULL if all servers are saturated. Stream is not ASSIGNED
+ * SRV_STATUS_INTERNAL for other unrecoverable errors.
+ *
+ * Upon successful return, the stream flag SF_ASSIGNED is set to indicate that
+ * it does not need to be called anymore. This means that target_srv(&s->target)
+ * can be trusted in balance and direct modes.
+ *
+ */
+
+int assign_server(struct stream *s)
+{
+ struct connection *conn;
+ struct server *conn_slot;
+ struct server *srv, *prev_srv;
+ int err;
+
+ DPRINTF(stderr,"assign_server : s=%p\n",s);
+
+ err = SRV_STATUS_INTERNAL;
+ if (unlikely(s->pend_pos || s->flags & SF_ASSIGNED))
+ goto out_err;
+
+ prev_srv = objt_server(s->target);
+ conn_slot = s->srv_conn;
+
+ /* We have to release any connection slot before applying any LB algo,
+ * otherwise we may erroneously end up with no available slot.
+ */
+ if (conn_slot)
+ sess_change_server(s, NULL);
+
+ /* We will now try to find the good server and store it into <objt_server(s->target)>.
+ * Note that <objt_server(s->target)> may be NULL in case of dispatch or proxy mode,
+ * as well as if no server is available (check error code).
+ */
+
+ srv = NULL;
+ s->target = NULL;
+ conn = objt_conn(s->si[1].end);
+
+ if (conn &&
+ (conn->flags & CO_FL_CONNECTED) &&
+ objt_server(conn->target) && __objt_server(conn->target)->proxy == s->be &&
+ ((s->txn && s->txn->flags & TX_PREFER_LAST) ||
+ ((s->be->options & PR_O_PREF_LAST) &&
+ (!s->be->max_ka_queue ||
+ server_has_room(__objt_server(conn->target)) ||
+ (__objt_server(conn->target)->nbpend + 1) < s->be->max_ka_queue))) &&
+ srv_is_usable(__objt_server(conn->target))) {
+ /* This stream was relying on a server in a previous request
+ * and the proxy has "option prefer-last-server" set, so
+ * let's try to reuse the same server.
+ */
+ srv = __objt_server(conn->target);
+ s->target = &srv->obj_type;
+ }
+ else if (s->be->lbprm.algo & BE_LB_KIND) {
+ /* we must check if we have at least one server available */
+ if (!s->be->lbprm.tot_weight) {
+ err = SRV_STATUS_NOSRV;
+ goto out;
+ }
+
+ /* First check whether we need to fetch some data or simply call
+ * the LB lookup function. Only the hashing functions will need
+ * some input data in fact, and will support multiple algorithms.
+ */
+ switch (s->be->lbprm.algo & BE_LB_LKUP) {
+ case BE_LB_LKUP_RRTREE:
+ srv = fwrr_get_next_server(s->be, prev_srv);
+ break;
+
+ case BE_LB_LKUP_FSTREE:
+ srv = fas_get_next_server(s->be, prev_srv);
+ break;
+
+ case BE_LB_LKUP_LCTREE:
+ srv = fwlc_get_next_server(s->be, prev_srv);
+ break;
+
+ case BE_LB_LKUP_CHTREE:
+ case BE_LB_LKUP_MAP:
+ if ((s->be->lbprm.algo & BE_LB_KIND) == BE_LB_KIND_RR) {
+ if (s->be->lbprm.algo & BE_LB_LKUP_CHTREE)
+ srv = chash_get_next_server(s->be, prev_srv);
+ else
+ srv = map_get_server_rr(s->be, prev_srv);
+ break;
+ }
+ else if ((s->be->lbprm.algo & BE_LB_KIND) != BE_LB_KIND_HI) {
+ /* unknown balancing algorithm */
+ err = SRV_STATUS_INTERNAL;
+ goto out;
+ }
+
+ switch (s->be->lbprm.algo & BE_LB_PARM) {
+ case BE_LB_HASH_SRC:
+ conn = objt_conn(strm_orig(s));
+ if (conn && conn->addr.from.ss_family == AF_INET) {
+ srv = get_server_sh(s->be,
+ (void *)&((struct sockaddr_in *)&conn->addr.from)->sin_addr,
+ 4);
+ }
+ else if (conn && conn->addr.from.ss_family == AF_INET6) {
+ srv = get_server_sh(s->be,
+ (void *)&((struct sockaddr_in6 *)&conn->addr.from)->sin6_addr,
+ 16);
+ }
+ else {
+ /* unknown IP family */
+ err = SRV_STATUS_INTERNAL;
+ goto out;
+ }
+ break;
+
+ case BE_LB_HASH_URI:
+ /* URI hashing */
+ if (!s->txn || s->txn->req.msg_state < HTTP_MSG_BODY)
+ break;
+ srv = get_server_uh(s->be,
+ b_ptr(s->req.buf, -http_uri_rewind(&s->txn->req)),
+ s->txn->req.sl.rq.u_l);
+ break;
+
+ case BE_LB_HASH_PRM:
+ /* URL Parameter hashing */
+ if (!s->txn || s->txn->req.msg_state < HTTP_MSG_BODY)
+ break;
+
+ srv = get_server_ph(s->be,
+ b_ptr(s->req.buf, -http_uri_rewind(&s->txn->req)),
+ s->txn->req.sl.rq.u_l);
+
+ if (!srv && s->txn->meth == HTTP_METH_POST)
+ srv = get_server_ph_post(s);
+ break;
+
+ case BE_LB_HASH_HDR:
+ /* Header Parameter hashing */
+ if (!s->txn || s->txn->req.msg_state < HTTP_MSG_BODY)
+ break;
+ srv = get_server_hh(s);
+ break;
+
+ case BE_LB_HASH_RDP:
+ /* RDP Cookie hashing */
+ srv = get_server_rch(s);
+ break;
+
+ default:
+ /* unknown balancing algorithm */
+ err = SRV_STATUS_INTERNAL;
+ goto out;
+ }
+
+ /* If the hashing parameter was not found, let's fall
+ * back to round robin on the map.
+ */
+ if (!srv) {
+ if (s->be->lbprm.algo & BE_LB_LKUP_CHTREE)
+ srv = chash_get_next_server(s->be, prev_srv);
+ else
+ srv = map_get_server_rr(s->be, prev_srv);
+ }
+
+ /* end of map-based LB */
+ break;
+
+ default:
+ /* unknown balancing algorithm */
+ err = SRV_STATUS_INTERNAL;
+ goto out;
+ }
+
+ if (!srv) {
+ err = SRV_STATUS_FULL;
+ goto out;
+ }
+ else if (srv != prev_srv) {
+ s->be->be_counters.cum_lbconn++;
+ srv->counters.cum_lbconn++;
+ }
+ s->target = &srv->obj_type;
+ }
+ else if (s->be->options & (PR_O_DISPATCH | PR_O_TRANSP)) {
+ s->target = &s->be->obj_type;
+ }
+ else if ((s->be->options & PR_O_HTTP_PROXY) &&
+ (conn = objt_conn(s->si[1].end)) &&
+ is_addr(&conn->addr.to)) {
+ /* in proxy mode, we need a valid destination address */
+ s->target = &s->be->obj_type;
+ }
+ else {
+ err = SRV_STATUS_NOSRV;
+ goto out;
+ }
+
+ s->flags |= SF_ASSIGNED;
+ err = SRV_STATUS_OK;
+ out:
+
+ /* Either we take back our connection slot, or we offer it to someone
+ * else if we don't need it anymore.
+ */
+ if (conn_slot) {
+ if (conn_slot == srv) {
+ sess_change_server(s, srv);
+ } else {
+ if (may_dequeue_tasks(conn_slot, s->be))
+ process_srv_queue(conn_slot);
+ }
+ }
+
+ out_err:
+ return err;
+}
+
+/*
+ * This function assigns a server address to a stream, and sets SF_ADDR_SET.
+ * The address is taken from the currently assigned server, or from the
+ * dispatch or transparent address.
+ *
+ * It may return :
+ * SRV_STATUS_OK if everything is OK.
+ * SRV_STATUS_INTERNAL for other unrecoverable errors.
+ *
+ * Upon successful return, the stream flag SF_ADDR_SET is set. This flag is
+ * not cleared, so it's up to the caller to clear it if required.
+ *
+ * The caller is responsible for having already assigned a connection
+ * to si->end.
+ *
+ */
+int assign_server_address(struct stream *s)
+{
+ struct connection *cli_conn = objt_conn(strm_orig(s));
+ struct connection *srv_conn = objt_conn(s->si[1].end);
+
+#ifdef DEBUG_FULL
+ fprintf(stderr,"assign_server_address : s=%p\n",s);
+#endif
+
+ if ((s->flags & SF_DIRECT) || (s->be->lbprm.algo & BE_LB_KIND)) {
+ /* A server is necessarily known for this stream */
+ if (!(s->flags & SF_ASSIGNED))
+ return SRV_STATUS_INTERNAL;
+
+ srv_conn->addr.to = objt_server(s->target)->addr;
+
+ if (!is_addr(&srv_conn->addr.to) && cli_conn) {
+ /* if the server has no address, we use the same address
+ * the client asked, which is handy for remapping ports
+ * locally on multiple addresses at once. Nothing is done
+ * for AF_UNIX addresses.
+ */
+ conn_get_to_addr(cli_conn);
+
+ if (cli_conn->addr.to.ss_family == AF_INET) {
+ ((struct sockaddr_in *)&srv_conn->addr.to)->sin_addr = ((struct sockaddr_in *)&cli_conn->addr.to)->sin_addr;
+ } else if (cli_conn->addr.to.ss_family == AF_INET6) {
+ ((struct sockaddr_in6 *)&srv_conn->addr.to)->sin6_addr = ((struct sockaddr_in6 *)&cli_conn->addr.to)->sin6_addr;
+ }
+ }
+
+ /* if this server remaps proxied ports, we'll use
+ * the port the client connected to with an offset. */
+ if ((objt_server(s->target)->flags & SRV_F_MAPPORTS) && cli_conn) {
+ int base_port;
+
+ conn_get_to_addr(cli_conn);
+
+ /* First, retrieve the port from the incoming connection */
+ base_port = get_host_port(&cli_conn->addr.to);
+
+ /* Second, assign the outgoing connection's port */
+ base_port += get_host_port(&srv_conn->addr.to);
+ set_host_port(&srv_conn->addr.to, base_port);
+ }
+ }
+ else if (s->be->options & PR_O_DISPATCH) {
+ /* connect to the defined dispatch addr */
+ srv_conn->addr.to = s->be->dispatch_addr;
+ }
+ else if ((s->be->options & PR_O_TRANSP) && cli_conn) {
+ /* in transparent mode, use the original dest addr if no dispatch specified */
+ conn_get_to_addr(cli_conn);
+
+ if (cli_conn->addr.to.ss_family == AF_INET || cli_conn->addr.to.ss_family == AF_INET6)
+ srv_conn->addr.to = cli_conn->addr.to;
+ }
+ else if (s->be->options & PR_O_HTTP_PROXY) {
+ /* If HTTP PROXY option is set, then server is already assigned
+ * during incoming client request parsing. */
+ }
+ else {
+ /* no server and no LB algorithm ! */
+ return SRV_STATUS_INTERNAL;
+ }
+
+ /* Copy network namespace from client connection */
+ srv_conn->proxy_netns = cli_conn ? cli_conn->proxy_netns : NULL;
+
+ s->flags |= SF_ADDR_SET;
+ return SRV_STATUS_OK;
+}
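The SRV_F_MAPPORTS arithmetic above (outgoing port = client's destination port + the server's configured offset) can be sketched independently. The `sketch_*` helpers below merely stand in for HAProxy's get_host_port()/set_host_port() on a sockaddr_storage:

```c
#include <sys/socket.h>
#include <netinet/in.h>

/* Stand-in for get_host_port(): read the port from an IPv4/IPv6 address. */
static int sketch_get_port(const struct sockaddr_storage *ss)
{
	if (ss->ss_family == AF_INET)
		return ntohs(((const struct sockaddr_in *)ss)->sin_port);
	if (ss->ss_family == AF_INET6)
		return ntohs(((const struct sockaddr_in6 *)ss)->sin6_port);
	return 0;
}

/* Stand-in for set_host_port(): store the port into an IPv4/IPv6 address. */
static void sketch_set_port(struct sockaddr_storage *ss, int port)
{
	if (ss->ss_family == AF_INET)
		((struct sockaddr_in *)ss)->sin_port = htons(port);
	else if (ss->ss_family == AF_INET6)
		((struct sockaddr_in6 *)ss)->sin6_port = htons(port);
}

/* Port mapping as done above: the outgoing address initially carries the
 * configured offset in its port field, and the client's destination port
 * is added to it. */
static void sketch_map_ports(const struct sockaddr_storage *cli_to,
                             struct sockaddr_storage *srv_to)
{
	sketch_set_port(srv_to, sketch_get_port(cli_to) + sketch_get_port(srv_to));
}
```

So a client that connected to port 8080, forwarded through a mapped server carrying an offset of 1000, ends up on backend port 9080.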
+
+/* This function assigns a server to stream <s> if required, and can add the
+ * connection to either the assigned server's queue or to the proxy's queue.
+ * If ->srv_conn is set, the stream is first released from the server.
+ * It may also be called with SF_DIRECT and/or SF_ASSIGNED though. It will
+ * be called before any connection and after any retry or redispatch occurs.
+ *
+ * It is not allowed to call this function with a stream in a queue.
+ *
+ * Returns :
+ *
+ * SRV_STATUS_OK if everything is OK.
+ * SRV_STATUS_NOSRV if no server is available. objt_server(s->target) = NULL.
+ * SRV_STATUS_QUEUED if the connection has been queued.
+ * SRV_STATUS_FULL if the server(s) is/are saturated and the
+ * connection could not be queued at the server's,
+ * which may be NULL if we queue on the backend.
+ * SRV_STATUS_INTERNAL for other unrecoverable errors.
+ *
+ */
+int assign_server_and_queue(struct stream *s)
+{
+ struct pendconn *p;
+ struct server *srv;
+ int err;
+
+ if (s->pend_pos)
+ return SRV_STATUS_INTERNAL;
+
+ err = SRV_STATUS_OK;
+ if (!(s->flags & SF_ASSIGNED)) {
+ struct server *prev_srv = objt_server(s->target);
+
+ err = assign_server(s);
+ if (prev_srv) {
+ /* This stream was previously assigned to a server. We have to
+ * update the stream's and the server's stats :
+ * - if the server changed :
+ * - set TX_CK_DOWN if txn.flags was TX_CK_VALID
+ * - set SF_REDISP if it was successfully redispatched
+ * - increment srv->redispatches and be->redispatches
+ * - if the server remained the same : update retries.
+ */
+
+ if (prev_srv != objt_server(s->target)) {
+ if (s->txn && (s->txn->flags & TX_CK_MASK) == TX_CK_VALID) {
+ s->txn->flags &= ~TX_CK_MASK;
+ s->txn->flags |= TX_CK_DOWN;
+ }
+ s->flags |= SF_REDISP;
+ prev_srv->counters.redispatches++;
+ s->be->be_counters.redispatches++;
+ } else {
+ prev_srv->counters.retries++;
+ s->be->be_counters.retries++;
+ }
+ }
+ }
+
+ switch (err) {
+ case SRV_STATUS_OK:
+ /* we have SF_ASSIGNED set */
+ srv = objt_server(s->target);
+ if (!srv)
+ return SRV_STATUS_OK; /* dispatch or proxy mode */
+
+ /* If we already have a connection slot, no need to check any queue */
+ if (s->srv_conn == srv)
+ return SRV_STATUS_OK;
+
+ /* OK, this stream already has an assigned server, but no
+ * connection slot yet. Either it is a redispatch, or it was
+ * assigned from persistence information (direct mode).
+ */
+ if ((s->flags & SF_REDIRECTABLE) && srv->rdr_len) {
+ /* server scheduled for redirection, and already assigned. We
+ * don't want to go further nor check the queue.
+ */
+ sess_change_server(s, srv); /* not really needed in fact */
+ return SRV_STATUS_OK;
+ }
+
+ /* We might have to queue this stream if the assigned server is full.
+ * We know we have to queue it into the server's queue, so if a maxqueue
+ * is set on the server, we must also check that the server's queue is
+ * not full, in which case we have to return FULL.
+ */
+ if (srv->maxconn &&
+ (srv->nbpend || srv->served >= srv_dynamic_maxconn(srv))) {
+
+ if (srv->maxqueue > 0 && srv->nbpend >= srv->maxqueue)
+ return SRV_STATUS_FULL;
+
+ p = pendconn_add(s);
+ if (p)
+ return SRV_STATUS_QUEUED;
+ else
+ return SRV_STATUS_INTERNAL;
+ }
+
+ /* OK, we can use this server. Let's reserve our place */
+ sess_change_server(s, srv);
+ return SRV_STATUS_OK;
+
+ case SRV_STATUS_FULL:
+ /* queue this stream into the proxy's queue */
+ p = pendconn_add(s);
+ if (p)
+ return SRV_STATUS_QUEUED;
+ else
+ return SRV_STATUS_INTERNAL;
+
+ case SRV_STATUS_NOSRV:
+ return err;
+
+ case SRV_STATUS_INTERNAL:
+ return err;
+
+ default:
+ return SRV_STATUS_INTERNAL;
+ }
+}
+
+/* If an explicit source binding is specified on the server and/or backend, and
+ * this source makes use of the transparent proxy, then it is extracted now and
+ * assigned to the stream's pending connection. This function assumes that an
+ * outgoing connection has already been assigned to s->si[1].end.
+ */
+static void assign_tproxy_address(struct stream *s)
+{
+#if defined(CONFIG_HAP_TRANSPARENT)
+ struct server *srv = objt_server(s->target);
+ struct conn_src *src;
+ struct connection *cli_conn;
+ struct connection *srv_conn = objt_conn(s->si[1].end);
+
+ if (srv && srv->conn_src.opts & CO_SRC_BIND)
+ src = &srv->conn_src;
+ else if (s->be->conn_src.opts & CO_SRC_BIND)
+ src = &s->be->conn_src;
+ else
+ return;
+
+ switch (src->opts & CO_SRC_TPROXY_MASK) {
+ case CO_SRC_TPROXY_ADDR:
+ srv_conn->addr.from = src->tproxy_addr;
+ break;
+ case CO_SRC_TPROXY_CLI:
+ case CO_SRC_TPROXY_CIP:
+ /* FIXME: what can we do if the client connects over IPv6 or a UNIX socket? */
+ cli_conn = objt_conn(strm_orig(s));
+ if (cli_conn)
+ srv_conn->addr.from = cli_conn->addr.from;
+ else
+ memset(&srv_conn->addr.from, 0, sizeof(srv_conn->addr.from));
+ break;
+ case CO_SRC_TPROXY_DYN:
+ if (src->bind_hdr_occ && s->txn) {
+ char *vptr;
+ int vlen;
+ int rewind;
+
+ /* bind to the IP in a header */
+ ((struct sockaddr_in *)&srv_conn->addr.from)->sin_family = AF_INET;
+ ((struct sockaddr_in *)&srv_conn->addr.from)->sin_port = 0;
+ ((struct sockaddr_in *)&srv_conn->addr.from)->sin_addr.s_addr = 0;
+
+ b_rew(s->req.buf, rewind = http_hdr_rewind(&s->txn->req));
+ if (http_get_hdr(&s->txn->req, src->bind_hdr_name, src->bind_hdr_len,
+ &s->txn->hdr_idx, src->bind_hdr_occ, NULL, &vptr, &vlen)) {
+ ((struct sockaddr_in *)&srv_conn->addr.from)->sin_addr.s_addr =
+ htonl(inetaddr_host_lim(vptr, vptr + vlen));
+ }
+ b_adv(s->req.buf, rewind);
+ }
+ break;
+ default:
+ memset(&srv_conn->addr.from, 0, sizeof(srv_conn->addr.from));
+ }
+#endif
+}
+
+
+/*
+ * This function initiates a connection to the server assigned to this stream
+ * (s->target, s->si[1].addr.to). It will assign a server if none
+ * is assigned yet.
+ * It can return one of :
+ * - SF_ERR_NONE if everything's OK
+ * - SF_ERR_SRVTO if there are no more servers
+ * - SF_ERR_SRVCL if the connection was refused by the server
+ * - SF_ERR_PRXCOND if the connection has been limited by the proxy (maxconn)
+ * - SF_ERR_RESOURCE if a system resource is lacking (eg: fd limits, ports, ...)
+ * - SF_ERR_INTERNAL for any other purely internal errors
+ * Additionally, in the case of SF_ERR_RESOURCE, an emergency log will be emitted.
+ * The server-facing stream interface is expected to hold a pre-allocated connection
+ * in s->si[1].conn.
+ */
+int connect_server(struct stream *s)
+{
+ struct connection *cli_conn;
+ struct connection *srv_conn;
+ struct connection *old_conn;
+ struct server *srv;
+ int reuse = 0;
+ int err;
+
+ srv = objt_server(s->target);
+ srv_conn = objt_conn(s->si[1].end);
+ if (srv_conn)
+ reuse = s->target == srv_conn->target;
+
+ if (srv && !reuse) {
+ old_conn = srv_conn;
+ if (old_conn) {
+ srv_conn = NULL;
+ old_conn->owner = NULL;
+ si_detach_endpoint(&s->si[1]);
+ /* note: if the connection was in a server's idle
+ * queue, it doesn't get dequeued.
+ */
+ }
+
+ /* Below we pick connections from the safe or idle lists based
+ * on the strategy, the fact that this is a first or second
+ * (retryable) request, with the indicated priority (1 or 2) :
+ *
+ * SAFE AGGR ALWS
+ *
+ * +-----+-----+ +-----+-----+ +-----+-----+
+ * req| 1st | 2nd | req| 1st | 2nd | req| 1st | 2nd |
+ * ----+-----+-----+ ----+-----+-----+ ----+-----+-----+
+ * safe| - | 2 | safe| 1 | 2 | safe| 1 | 2 |
+ * ----+-----+-----+ ----+-----+-----+ ----+-----+-----+
+ * idle| - | 1 | idle| - | 1 | idle| 2 | 1 |
+ * ----+-----+-----+ ----+-----+-----+ ----+-----+-----+
+ */
+
+ if (!LIST_ISEMPTY(&srv->idle_conns) &&
+ ((s->be->options & PR_O_REUSE_MASK) != PR_O_REUSE_NEVR &&
+ s->txn && (s->txn->flags & TX_NOT_FIRST))) {
+ srv_conn = LIST_ELEM(srv->idle_conns.n, struct connection *, list);
+ }
+ else if (!LIST_ISEMPTY(&srv->safe_conns) &&
+ ((s->txn && (s->txn->flags & TX_NOT_FIRST)) ||
+ (s->be->options & PR_O_REUSE_MASK) >= PR_O_REUSE_AGGR)) {
+ srv_conn = LIST_ELEM(srv->safe_conns.n, struct connection *, list);
+ }
+ else if (!LIST_ISEMPTY(&srv->idle_conns) &&
+ (s->be->options & PR_O_REUSE_MASK) == PR_O_REUSE_ALWS) {
+ srv_conn = LIST_ELEM(srv->idle_conns.n, struct connection *, list);
+ }
+
+ /* If we've picked a connection from the pool, we now have to
+ * detach it. We may have to get rid of the previous idle
+ * connection we had, so for this we try to swap it with the
+ * other owner's. That way it may remain alive for others to
+ * pick.
+ */
+ if (srv_conn) {
+ LIST_DEL(&srv_conn->list);
+ LIST_INIT(&srv_conn->list);
+
+ if (srv_conn->owner) {
+ si_detach_endpoint(srv_conn->owner);
+ if (old_conn && !(old_conn->flags & CO_FL_PRIVATE))
+ si_attach_conn(srv_conn->owner, old_conn);
+ }
+ si_attach_conn(&s->si[1], srv_conn);
+ reuse = 1;
+ }
+
+ /* we may have to release our connection if we couldn't swap it */
+ if (old_conn && !old_conn->owner) {
+ LIST_DEL(&old_conn->list);
+ conn_force_close(old_conn);
+ conn_free(old_conn);
+ }
+ }
+
+ if (reuse) {
+ /* Disable connection reuse if a dynamic source is used.
+ * As long as we don't share connections between servers,
+ * we don't need to disable connection reuse on non-idempotent
+ * requests nor when the PROXY protocol is used.
+ */
+ if (srv && srv->conn_src.opts & CO_SRC_BIND) {
+ if ((srv->conn_src.opts & CO_SRC_TPROXY_MASK) == CO_SRC_TPROXY_DYN)
+ reuse = 0;
+ }
+ else if (s->be->conn_src.opts & CO_SRC_BIND) {
+ if ((s->be->conn_src.opts & CO_SRC_TPROXY_MASK) == CO_SRC_TPROXY_DYN)
+ reuse = 0;
+ }
+ }
+
+ if (!reuse)
+ srv_conn = si_alloc_conn(&s->si[1]);
+ else {
+ /* reusing our connection, take it out of the idle list */
+ LIST_DEL(&srv_conn->list);
+ LIST_INIT(&srv_conn->list);
+ }
+
+ if (!srv_conn)
+ return SF_ERR_RESOURCE;
+
+ if (!(s->flags & SF_ADDR_SET)) {
+ err = assign_server_address(s);
+ if (err != SRV_STATUS_OK)
+ return SF_ERR_INTERNAL;
+ }
+
+ if (!conn_xprt_ready(srv_conn)) {
+ /* the target was only on the stream, assign it to the SI now */
+ srv_conn->target = s->target;
+
+ /* set the correct protocol on the output stream interface */
+ if (srv) {
+ conn_prepare(srv_conn, protocol_by_family(srv_conn->addr.to.ss_family), srv->xprt);
+ }
+ else if (obj_type(s->target) == OBJ_TYPE_PROXY) {
+ /* proxies exclusively run on raw_sock right now */
+ conn_prepare(srv_conn, protocol_by_family(srv_conn->addr.to.ss_family), &raw_sock);
+ if (!objt_conn(s->si[1].end) || !objt_conn(s->si[1].end)->ctrl)
+ return SF_ERR_INTERNAL;
+ }
+ else
+ return SF_ERR_INTERNAL; /* how did we get there ? */
+
+ /* process the case where the server requires the PROXY protocol to be sent */
+ srv_conn->send_proxy_ofs = 0;
+ if (srv && srv->pp_opts) {
+ srv_conn->flags |= CO_FL_PRIVATE;
+ srv_conn->send_proxy_ofs = 1; /* must compute size */
+ cli_conn = objt_conn(strm_orig(s));
+ if (cli_conn)
+ conn_get_to_addr(cli_conn);
+ }
+
+ si_attach_conn(&s->si[1], srv_conn);
+
+ assign_tproxy_address(s);
+ }
+ else {
+ /* the connection is being reused, just re-attach it */
+ si_attach_conn(&s->si[1], srv_conn);
+ s->flags |= SF_SRV_REUSED;
+ }
+
+ /* flag for logging source ip/port */
+ if (strm_fe(s)->options2 & PR_O2_SRC_ADDR)
+ s->si[1].flags |= SI_FL_SRC_ADDR;
+
+ /* disable lingering */
+ if (s->be->options & PR_O_TCP_NOLING)
+ s->si[1].flags |= SI_FL_NOLINGER;
+
+ err = si_connect(&s->si[1]);
+
+ if (err != SF_ERR_NONE)
+ return err;
+
+ /* set connect timeout */
+ s->si[1].exp = tick_add_ifset(now_ms, s->be->timeout.connect);
+
+ if (srv) {
+ s->flags |= SF_CURR_SESS;
+ srv->cur_sess++;
+ if (srv->cur_sess > srv->counters.cur_sess_max)
+ srv->counters.cur_sess_max = srv->cur_sess;
+ if (s->be->lbprm.server_take_conn)
+ s->be->lbprm.server_take_conn(srv);
+
+#ifdef USE_OPENSSL
+ if (srv->ssl_ctx.sni) {
+ struct sample *smp;
+ int rewind;
+
+ /* Tricky case : we have already scheduled the pending
+ * HTTP request or TCP data for leaving. So in HTTP we
+ * rewind exactly the headers, otherwise we rewind the
+ * output data.
+ */
+ rewind = s->txn ? http_hdr_rewind(&s->txn->req) : s->req.buf->o;
+ b_rew(s->req.buf, rewind);
+
+ smp = sample_fetch_as_type(s->be, s->sess, s, SMP_OPT_DIR_REQ | SMP_OPT_FINAL, srv->ssl_ctx.sni, SMP_T_STR);
+
+ /* restore the pointers */
+ b_adv(s->req.buf, rewind);
+
+ if (smp) {
+ /* get write access to terminate with a zero */
+ smp_dup(smp);
+ if (smp->data.u.str.len >= smp->data.u.str.size)
+ smp->data.u.str.len = smp->data.u.str.size - 1;
+ smp->data.u.str.str[smp->data.u.str.len] = 0;
+ ssl_sock_set_servername(srv_conn, smp->data.u.str.str);
+ srv_conn->flags |= CO_FL_PRIVATE;
+ }
+ }
+#endif /* USE_OPENSSL */
+
+ }
+
+ return SF_ERR_NONE; /* connection is OK */
+}
+
+
+/* This function performs the "redispatch" part of a connection attempt. It
+ * will assign a server if required, queue the connection if required, and
+ * handle errors that might arise at this level. It can change the server
+ * state. It will return 1 if it encounters an error, switches the server
+ * state, or has to queue a connection. Otherwise, it will return 0 indicating
+ * that the connection is ready to use.
+ */
+
+int srv_redispatch_connect(struct stream *s)
+{
+ struct server *srv;
+ int conn_err;
+
+ /* We know that we don't have any connection pending, so we will
+ * try to get a new one, and wait in this state if it's queued
+ */
+ redispatch:
+ conn_err = assign_server_and_queue(s);
+ srv = objt_server(s->target);
+
+ switch (conn_err) {
+ case SRV_STATUS_OK:
+ break;
+
+ case SRV_STATUS_FULL:
+ /* The server has reached its maxqueue limit. Either PR_O_REDISP is set
+ * and we can redispatch to another server, or it is not and we return
+ * 503. This only makes sense in DIRECT mode however, because normal LB
+ * algorithms would never select such a server, and hash algorithms
+ * would bring us on the same server again. Note that s->target is set
+ * in this case.
+ */
+ if (((s->flags & (SF_DIRECT|SF_FORCE_PRST)) == SF_DIRECT) &&
+ (s->be->options & PR_O_REDISP)) {
+ s->flags &= ~(SF_DIRECT | SF_ASSIGNED | SF_ADDR_SET);
+ goto redispatch;
+ }
+
+ if (!s->si[1].err_type) {
+ s->si[1].err_type = SI_ET_QUEUE_ERR;
+ }
+
+ srv->counters.failed_conns++;
+ s->be->be_counters.failed_conns++;
+ return 1;
+
+ case SRV_STATUS_NOSRV:
+ /* note: it is guaranteed that srv == NULL here */
+ if (!s->si[1].err_type) {
+ s->si[1].err_type = SI_ET_CONN_ERR;
+ }
+
+ s->be->be_counters.failed_conns++;
+ return 1;
+
+ case SRV_STATUS_QUEUED:
+ s->si[1].exp = tick_add_ifset(now_ms, s->be->timeout.queue);
+ s->si[1].state = SI_ST_QUE;
+ /* do nothing else and do not wake any other stream up */
+ return 1;
+
+ case SRV_STATUS_INTERNAL:
+ default:
+ if (!s->si[1].err_type) {
+ s->si[1].err_type = SI_ET_CONN_OTHER;
+ }
+
+		if (srv) {
+			srv_inc_sess_ctr(srv);
+			srv_set_sess_last(srv);
+			srv->counters.failed_conns++;
+		}
+ s->be->be_counters.failed_conns++;
+
+ /* release other streams waiting for this server */
+ if (may_dequeue_tasks(srv, s->be))
+ process_srv_queue(srv);
+ return 1;
+ }
+ /* if we get here, it's because we got SRV_STATUS_OK, which also
+ * means that the connection has not been queued.
+ */
+ return 0;
+}
+
+/* sends a log message when a backend goes down, and also sets last
+ * change date.
+ */
+void set_backend_down(struct proxy *be)
+{
+ be->last_change = now.tv_sec;
+ be->down_trans++;
+
+ Alert("%s '%s' has no server available!\n", proxy_type_str(be), be->id);
+ send_log(be, LOG_EMERG, "%s %s has no server available!\n", proxy_type_str(be), be->id);
+}
+
+/* Apply RDP cookie persistence to the current stream. For this, the function
+ * tries to extract an RDP cookie from the request buffer, and looks for the
+ * matching server in the list. If the server is found, it is assigned to the
+ * stream. This always returns 1, and the analyser removes itself from the
+ * list. Nothing is performed if a server was already assigned.
+ */
+int tcp_persist_rdp_cookie(struct stream *s, struct channel *req, int an_bit)
+{
+ struct proxy *px = s->be;
+ int ret;
+ struct sample smp;
+ struct server *srv = px->srv;
+ struct sockaddr_in addr;
+ char *p;
+
+ DPRINTF(stderr,"[%u] %s: stream=%p b=%p, exp(r,w)=%u,%u bf=%08x bh=%d analysers=%02x\n",
+ now_ms, __FUNCTION__,
+ s,
+ req,
+ req->rex, req->wex,
+ req->flags,
+ req->buf->i,
+ req->analysers);
+
+ if (s->flags & SF_ASSIGNED)
+ goto no_cookie;
+
+ memset(&smp, 0, sizeof(smp));
+
+ ret = fetch_rdp_cookie_name(s, &smp, s->be->rdp_cookie_name, s->be->rdp_cookie_len);
+ if (ret == 0 || (smp.flags & SMP_F_MAY_CHANGE) || smp.data.u.str.len == 0)
+ goto no_cookie;
+
+ memset(&addr, 0, sizeof(addr));
+ addr.sin_family = AF_INET;
+
+	/* The RDP cookie value has the form "<ip>.<port>.<reserved>", where
+	 * <ip> is the server's IPv4 address encoded as a single decimal
+	 * integer and <port> is the port number. Parse and validate both
+	 * fields below using the '.' separators.
+	 */
+ addr.sin_addr.s_addr = strtoul(smp.data.u.str.str, &p, 10);
+ if (*p != '.')
+ goto no_cookie;
+ p++;
+ addr.sin_port = (unsigned short)strtoul(p, &p, 10);
+ if (*p != '.')
+ goto no_cookie;
+
+ s->target = NULL;
+ while (srv) {
+ if (srv->addr.ss_family == AF_INET &&
+ memcmp(&addr, &(srv->addr), sizeof(addr)) == 0) {
+ if ((srv->state != SRV_ST_STOPPED) || (px->options & PR_O_PERSIST)) {
+ /* we found the server and it is usable */
+ s->flags |= SF_DIRECT | SF_ASSIGNED;
+ s->target = &srv->obj_type;
+ break;
+ }
+ }
+ srv = srv->next;
+ }
+
+no_cookie:
+ req->analysers &= ~an_bit;
+ req->analyse_exp = TICK_ETERNITY;
+ return 1;
+}
+
+int be_downtime(struct proxy *px) {
+ if (px->lbprm.tot_weight && px->last_change < now.tv_sec) // ignore negative time
+ return px->down_time;
+
+ return now.tv_sec - px->last_change + px->down_time;
+}
+
+/*
+ * This function returns a string containing the balancing
+ * mode of the proxy in a format suitable for stats.
+ */
+
+const char *backend_lb_algo_str(int algo) {
+
+ if (algo == BE_LB_ALGO_RR)
+ return "roundrobin";
+ else if (algo == BE_LB_ALGO_SRR)
+ return "static-rr";
+ else if (algo == BE_LB_ALGO_FAS)
+ return "first";
+ else if (algo == BE_LB_ALGO_LC)
+ return "leastconn";
+ else if (algo == BE_LB_ALGO_SH)
+ return "source";
+ else if (algo == BE_LB_ALGO_UH)
+ return "uri";
+ else if (algo == BE_LB_ALGO_PH)
+ return "url_param";
+ else if (algo == BE_LB_ALGO_HH)
+ return "hdr";
+ else if (algo == BE_LB_ALGO_RCH)
+ return "rdp-cookie";
+ else
+ return NULL;
+}
+
+/* This function parses a "balance" statement in a backend section describing
+ * <curproxy>. It returns -1 if there is any error, otherwise zero. If it
+ * returns -1, it will write an error message into the <err> buffer which will
+ * automatically be allocated and must be passed as NULL. The trailing '\n'
+ * will not be written. The function must be called with <args> pointing to the
+ * first word after "balance".
+ */
+int backend_parse_balance(const char **args, char **err, struct proxy *curproxy)
+{
+ if (!*(args[0])) {
+ /* if no option is set, use round-robin by default */
+ curproxy->lbprm.algo &= ~BE_LB_ALGO;
+ curproxy->lbprm.algo |= BE_LB_ALGO_RR;
+ return 0;
+ }
+
+ if (!strcmp(args[0], "roundrobin")) {
+ curproxy->lbprm.algo &= ~BE_LB_ALGO;
+ curproxy->lbprm.algo |= BE_LB_ALGO_RR;
+ }
+ else if (!strcmp(args[0], "static-rr")) {
+ curproxy->lbprm.algo &= ~BE_LB_ALGO;
+ curproxy->lbprm.algo |= BE_LB_ALGO_SRR;
+ }
+ else if (!strcmp(args[0], "first")) {
+ curproxy->lbprm.algo &= ~BE_LB_ALGO;
+ curproxy->lbprm.algo |= BE_LB_ALGO_FAS;
+ }
+ else if (!strcmp(args[0], "leastconn")) {
+ curproxy->lbprm.algo &= ~BE_LB_ALGO;
+ curproxy->lbprm.algo |= BE_LB_ALGO_LC;
+ }
+ else if (!strcmp(args[0], "source")) {
+ curproxy->lbprm.algo &= ~BE_LB_ALGO;
+ curproxy->lbprm.algo |= BE_LB_ALGO_SH;
+ }
+ else if (!strcmp(args[0], "uri")) {
+ int arg = 1;
+
+ curproxy->lbprm.algo &= ~BE_LB_ALGO;
+ curproxy->lbprm.algo |= BE_LB_ALGO_UH;
+
+ curproxy->uri_whole = 0;
+
+ while (*args[arg]) {
+ if (!strcmp(args[arg], "len")) {
+ if (!*args[arg+1] || (atoi(args[arg+1]) <= 0)) {
+ memprintf(err, "%s : '%s' expects a positive integer (got '%s').", args[0], args[arg], args[arg+1]);
+ return -1;
+ }
+ curproxy->uri_len_limit = atoi(args[arg+1]);
+ arg += 2;
+ }
+ else if (!strcmp(args[arg], "depth")) {
+ if (!*args[arg+1] || (atoi(args[arg+1]) <= 0)) {
+ memprintf(err, "%s : '%s' expects a positive integer (got '%s').", args[0], args[arg], args[arg+1]);
+ return -1;
+ }
+ /* hint: we store the position of the ending '/' (depth+1) so
+ * that we avoid a comparison while computing the hash.
+ */
+ curproxy->uri_dirs_depth1 = atoi(args[arg+1]) + 1;
+ arg += 2;
+ }
+ else if (!strcmp(args[arg], "whole")) {
+ curproxy->uri_whole = 1;
+ arg += 1;
+ }
+ else {
+ memprintf(err, "%s only accepts parameters 'len', 'depth', and 'whole' (got '%s').", args[0], args[arg]);
+ return -1;
+ }
+ }
+ }
+ else if (!strcmp(args[0], "url_param")) {
+ if (!*args[1]) {
+			memprintf(err, "%s requires a URL parameter name.", args[0]);
+ return -1;
+ }
+ curproxy->lbprm.algo &= ~BE_LB_ALGO;
+ curproxy->lbprm.algo |= BE_LB_ALGO_PH;
+
+ free(curproxy->url_param_name);
+ curproxy->url_param_name = strdup(args[1]);
+ curproxy->url_param_len = strlen(args[1]);
+ if (*args[2]) {
+ if (strcmp(args[2], "check_post")) {
+ memprintf(err, "%s only accepts 'check_post' modifier (got '%s').", args[0], args[2]);
+ return -1;
+ }
+ }
+ }
+ else if (!strncmp(args[0], "hdr(", 4)) {
+ const char *beg, *end;
+
+ beg = args[0] + 4;
+ end = strchr(beg, ')');
+
+ if (!end || end == beg) {
+			memprintf(err, "hdr requires an HTTP header field name.");
+ return -1;
+ }
+
+ curproxy->lbprm.algo &= ~BE_LB_ALGO;
+ curproxy->lbprm.algo |= BE_LB_ALGO_HH;
+
+ free(curproxy->hh_name);
+ curproxy->hh_len = end - beg;
+ curproxy->hh_name = my_strndup(beg, end - beg);
+ curproxy->hh_match_domain = 0;
+
+ if (*args[1]) {
+ if (strcmp(args[1], "use_domain_only")) {
+ memprintf(err, "%s only accepts 'use_domain_only' modifier (got '%s').", args[0], args[1]);
+ return -1;
+ }
+ curproxy->hh_match_domain = 1;
+ }
+ }
+ else if (!strncmp(args[0], "rdp-cookie", 10)) {
+ curproxy->lbprm.algo &= ~BE_LB_ALGO;
+ curproxy->lbprm.algo |= BE_LB_ALGO_RCH;
+
+ if ( *(args[0] + 10 ) == '(' ) { /* cookie name */
+ const char *beg, *end;
+
+ beg = args[0] + 11;
+ end = strchr(beg, ')');
+
+ if (!end || end == beg) {
+ memprintf(err, "rdp-cookie : missing cookie name.");
+ return -1;
+ }
+
+ free(curproxy->hh_name);
+ curproxy->hh_name = my_strndup(beg, end - beg);
+ curproxy->hh_len = end - beg;
+ }
+ else if ( *(args[0] + 10 ) == '\0' ) { /* default cookie name 'mstshash' */
+ free(curproxy->hh_name);
+ curproxy->hh_name = strdup("mstshash");
+ curproxy->hh_len = strlen(curproxy->hh_name);
+ }
+ else { /* syntax */
+ memprintf(err, "rdp-cookie : missing cookie name.");
+ return -1;
+ }
+ }
+ else {
+		memprintf(err, "only supports 'roundrobin', 'static-rr', 'leastconn', 'first', 'source', 'uri', 'url_param', 'hdr(name)' and 'rdp-cookie(name)' options.");
+ return -1;
+ }
+ return 0;
+}
+
+
+/************************************************************************/
+/* All supported sample and ACL keywords must be declared here. */
+/************************************************************************/
+
+/* set temp integer to the number of enabled servers on the proxy.
+ * Accepts exactly 1 argument. Argument is a backend, other types will lead to
+ * undefined behaviour.
+ */
+static int
+smp_fetch_nbsrv(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct proxy *px;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ px = args->data.prx;
+
+ if (px->srv_act)
+ smp->data.u.sint = px->srv_act;
+ else if (px->lbprm.fbck)
+ smp->data.u.sint = 1;
+ else
+ smp->data.u.sint = px->srv_bck;
+
+ return 1;
+}
+
+/* report in smp->flags a success or failure depending on the designated
+ * server's state. There is no match function involved since there's no pattern.
+ * Accepts exactly 1 argument. Argument is a server, other types will lead to
+ * undefined behaviour.
+ */
+static int
+smp_fetch_srv_is_up(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct server *srv = args->data.srv;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_BOOL;
+ if (!(srv->admin & SRV_ADMF_MAINT) &&
+ (!(srv->check.state & CHK_ST_CONFIGURED) || (srv->state != SRV_ST_STOPPED)))
+ smp->data.u.sint = 1;
+ else
+ smp->data.u.sint = 0;
+ return 1;
+}
+
+/* set temp integer to the number of connection slots still available on the
+ * backend, i.e. the sum of each server's remaining maxconn and maxqueue room.
+ * Accepts exactly 1 argument. Argument is a backend, other types will lead to
+ * undefined behaviour.
+ */
+static int
+smp_fetch_connslots(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct server *iterator;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ for (iterator = args->data.prx->srv; iterator; iterator = iterator->next) {
+ if (iterator->state == SRV_ST_STOPPED)
+ continue;
+
+ if (iterator->maxconn == 0 || iterator->maxqueue == 0) {
+ /* configuration is stupid */
+ smp->data.u.sint = -1; /* FIXME: stupid value! */
+ return 1;
+ }
+
+ smp->data.u.sint += (iterator->maxconn - iterator->cur_sess)
+ + (iterator->maxqueue - iterator->nbpend);
+ }
+
+ return 1;
+}
+
+/* set temp integer to the id of the backend */
+static int
+smp_fetch_be_id(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->flags = SMP_F_VOL_TXN;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = smp->strm->be->uuid;
+ return 1;
+}
+
+/* set temp integer to the id of the server */
+static int
+smp_fetch_srv_id(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ if (!objt_server(smp->strm->target))
+ return 0;
+
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = objt_server(smp->strm->target)->puid;
+
+ return 1;
+}
+
+/* set temp integer to the number of connections per second reaching the backend.
+ * Accepts exactly 1 argument. Argument is a backend, other types will lead to
+ * undefined behaviour.
+ */
+static int
+smp_fetch_be_sess_rate(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = read_freq_ctr(&args->data.prx->be_sess_per_sec);
+ return 1;
+}
+
+/* set temp integer to the number of concurrent connections on the backend.
+ * Accepts exactly 1 argument. Argument is a backend, other types will lead to
+ * undefined behaviour.
+ */
+static int
+smp_fetch_be_conn(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = args->data.prx->beconn;
+ return 1;
+}
+
+/* set temp integer to the total number of queued connections on the backend.
+ * Accepts exactly 1 argument. Argument is a backend, other types will lead to
+ * undefined behaviour.
+ */
+static int
+smp_fetch_queue_size(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = args->data.prx->totpend;
+ return 1;
+}
+
+/* set temp integer to the total number of queued connections on the backend divided
+ * by the number of running servers and rounded up. If there is no running
+ * server, we return twice the total, just as if we had half a running server.
+ * This is more or less correct anyway, since we expect the last server to come
+ * back soon.
+ * Accepts exactly 1 argument. Argument is a backend, other types will lead to
+ * undefined behaviour.
+ */
+static int
+smp_fetch_avg_queue_size(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int nbsrv;
+ struct proxy *px;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ px = args->data.prx;
+
+ if (px->srv_act)
+ nbsrv = px->srv_act;
+ else if (px->lbprm.fbck)
+ nbsrv = 1;
+ else
+ nbsrv = px->srv_bck;
+
+ if (nbsrv > 0)
+ smp->data.u.sint = (px->totpend + nbsrv - 1) / nbsrv;
+ else
+ smp->data.u.sint = px->totpend * 2;
+
+ return 1;
+}
+
+/* set temp integer to the number of concurrent connections on the server in the backend.
+ * Accepts exactly 1 argument. Argument is a server, other types will lead to
+ * undefined behaviour.
+ */
+static int
+smp_fetch_srv_conn(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = args->data.srv->cur_sess;
+ return 1;
+}
+
+/* set temp integer to the number of sessions per second reaching the server.
+ * Accepts exactly 1 argument. Argument is a server, other types will lead to
+ * undefined behaviour.
+ */
+static int
+smp_fetch_srv_sess_rate(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = read_freq_ctr(&args->data.srv->sess_per_sec);
+ return 1;
+}
+
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Please take care of keeping this list alphabetically sorted.
+ */
+static struct sample_fetch_kw_list smp_kws = {ILH, {
+ { "avg_queue", smp_fetch_avg_queue_size, ARG1(1,BE), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "be_conn", smp_fetch_be_conn, ARG1(1,BE), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "be_id", smp_fetch_be_id, 0, NULL, SMP_T_SINT, SMP_USE_BKEND, },
+ { "be_sess_rate", smp_fetch_be_sess_rate, ARG1(1,BE), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "connslots", smp_fetch_connslots, ARG1(1,BE), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "nbsrv", smp_fetch_nbsrv, ARG1(1,BE), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "queue", smp_fetch_queue_size, ARG1(1,BE), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "srv_conn", smp_fetch_srv_conn, ARG1(1,SRV), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "srv_id", smp_fetch_srv_id, 0, NULL, SMP_T_SINT, SMP_USE_SERVR, },
+ { "srv_is_up", smp_fetch_srv_is_up, ARG1(1,SRV), NULL, SMP_T_BOOL, SMP_USE_INTRN, },
+ { "srv_sess_rate", smp_fetch_srv_sess_rate, ARG1(1,SRV), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { /* END */ },
+}};
+
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Please take care of keeping this list alphabetically sorted.
+ */
+static struct acl_kw_list acl_kws = {ILH, {
+ { /* END */ },
+}};
+
+
+__attribute__((constructor))
+static void __backend_init(void)
+{
+ sample_register_fetches(&smp_kws);
+ acl_register_keywords(&acl_kws);
+}
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * ASCII <-> Base64 conversion as described in RFC1421.
+ *
+ * Copyright 2006-2010 Willy Tarreau <w@1wt.eu>
+ * Copyright 2009-2010 Krzysztof Piotr Oledzki <ole@ans.pl>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <stdlib.h>
+#include <string.h>
+
+#include <common/base64.h>
+#include <common/config.h>
+
+#define B64BASE '#'		/* arbitrarily chosen base value */
+#define B64CMIN '+'
+#define B64CMAX 'z'
+#define B64PADV 64 /* Base64 chosen special pad value */
+
+const char base64tab[65]="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
+const char base64rev[]="b###cXYZ[\\]^_`a###d###$%&'()*+,-./0123456789:;<=######>?@ABCDEFGHIJKLMNOPQRSTUVW";
+
+/* Encodes <ilen> bytes from <in> to <out> for at most <olen> chars (including
+ * the trailing zero). Returns the number of bytes written. No check is made
+ * for <in> or <out> to be NULL. Returns a negative value if <olen> is too short
+ * to accept <ilen>. 4 output bytes are produced for 1 to 3 input bytes.
+ */
+int a2base64(char *in, int ilen, char *out, int olen)
+{
+ int convlen;
+
+ convlen = ((ilen + 2) / 3) * 4;
+
+ if (convlen >= olen)
+ return -1;
+
+ /* we don't need to check olen anymore */
+ while (ilen >= 3) {
+ out[0] = base64tab[(((unsigned char)in[0]) >> 2)];
+ out[1] = base64tab[(((unsigned char)in[0] & 0x03) << 4) | (((unsigned char)in[1]) >> 4)];
+ out[2] = base64tab[(((unsigned char)in[1] & 0x0F) << 2) | (((unsigned char)in[2]) >> 6)];
+ out[3] = base64tab[(((unsigned char)in[2] & 0x3F))];
+ out += 4;
+ in += 3; ilen -= 3;
+ }
+
+ if (!ilen) {
+ out[0] = '\0';
+ } else {
+ out[0] = base64tab[((unsigned char)in[0]) >> 2];
+ if (ilen == 1) {
+ out[1] = base64tab[((unsigned char)in[0] & 0x03) << 4];
+ out[2] = '=';
+ } else {
+ out[1] = base64tab[(((unsigned char)in[0] & 0x03) << 4) |
+ (((unsigned char)in[1]) >> 4)];
+ out[2] = base64tab[((unsigned char)in[1] & 0x0F) << 2];
+ }
+ out[3] = '=';
+ out[4] = '\0';
+ }
+
+ return convlen;
+}
+
+/* Decodes <ilen> bytes from <in> to <out> for at most <olen> chars.
+ * Returns the number of bytes converted. No check is made for
+ * <in> or <out> to be NULL. Returns -1 if <in> contains an invalid
+ * character or <ilen> is not a multiple of 4, -2 if <olen> is too short.
+ * 1 to 3 output bytes are produced for 4 input bytes.
+ */
+int base64dec(const char *in, size_t ilen, char *out, size_t olen) {
+
+ unsigned char t[4];
+ signed char b;
+ int convlen = 0, i = 0, pad = 0;
+
+ if (ilen % 4)
+ return -1;
+
+ if (olen < ilen / 4 * 3)
+ return -2;
+
+ while (ilen) {
+
+ /* if (*p < B64CMIN || *p > B64CMAX) */
+ b = (signed char)*in - B64CMIN;
+ if ((unsigned char)b > (B64CMAX-B64CMIN))
+ return -1;
+
+ b = base64rev[b] - B64BASE - 1;
+
+ /* b == -1: invalid character */
+ if (b < 0)
+ return -1;
+
+		/* padding has to be continuous */
+ if (pad && b != B64PADV)
+ return -1;
+
+ /* valid padding: "XX==" or "XXX=", but never "X===" or "====" */
+		if (b == B64PADV && i < 2)
+ return -1;
+
+ if (b == B64PADV)
+ pad++;
+
+ t[i++] = b;
+
+ if (i == 4) {
+			/*
+			 * WARNING: we may write a little more data than we
+			 * should, but the checks at the beginning of the
+			 * function guarantee that we can safely do so.
+			 */
+
+ /* xx000000 xx001111 xx111122 xx222222 */
+ out[convlen] = ((t[0] << 2) + (t[1] >> 4));
+ out[convlen+1] = ((t[1] << 4) + (t[2] >> 2));
+ out[convlen+2] = ((t[2] << 6) + (t[3] >> 0));
+
+ convlen += 3-pad;
+
+ pad = i = 0;
+ }
+
+ in++;
+ ilen--;
+ }
+
+ return convlen;
+}
+
+
+/* Converts the lower 30 bits of an integer to a 5-char base64 string. The
+ * caller is responsible for ensuring that the output buffer can accept 6 bytes
+ * (5 + the trailing zero). The pointer to the string is returned. The
+ * conversion is performed with MSB first and in a format that can be
+ * decoded with b64tos30(). This format is not padded and thus is not
+ * compatible with usual base64 routines.
+ */
+const char *s30tob64(int in, char *out)
+{
+ int i;
+ for (i = 0; i < 5; i++) {
+ out[i] = base64tab[(in >> 24) & 0x3F];
+ in <<= 6;
+ }
+ out[5] = '\0';
+ return out;
+}
+
+/* Converts a 5-char base64 string encoded by s30tob64() into a 30-bit integer.
+ * The caller is responsible for ensuring that the input contains at least 5
+ * chars. If any unexpected character is encountered, a negative value is
+ * returned. Otherwise the decoded value is returned.
+ */
+int b64tos30(const char *in)
+{
+ int i, out;
+ signed char b;
+
+ out = 0;
+ for (i = 0; i < 5; i++) {
+ b = (signed char)in[i] - B64CMIN;
+ if ((unsigned char)b > (B64CMAX - B64CMIN))
+ return -1; /* input character out of range */
+
+ b = base64rev[b] - B64BASE - 1;
+ if (b < 0) /* invalid character */
+ return -1;
+
+ if (b == B64PADV) /* padding not allowed */
+ return -1;
+
+ out = (out << 6) + b;
+ }
+ return out;
+}
--- /dev/null
+/*
+ * Buffer management functions.
+ *
+ * Copyright 2000-2012 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <ctype.h>
+#include <stdio.h>
+#include <string.h>
+
+#include <common/config.h>
+#include <common/buffer.h>
+#include <common/memory.h>
+
+#include <types/global.h>
+
+struct pool_head *pool2_buffer;
+
+/* These buffers are used to always have a valid pointer to an empty buffer in
+ * channels. The first buffer is set once a buffer is empty. The second one is
+ * set when a buffer is desired but no more are available. It helps knowing
+ * which channel wants a buffer. They can reliably be exchanged, the split
+ * between the two is only an optimization.
+ */
+struct buffer buf_empty = { .p = buf_empty.data };
+struct buffer buf_wanted = { .p = buf_wanted.data };
+
+/* performs minimal initializations; returns 0 in case of error, 1 if OK. */
+int init_buffer()
+{
+ void *buffer;
+
+ pool2_buffer = create_pool("buffer", sizeof (struct buffer) + global.tune.bufsize, MEM_F_SHARED);
+ if (!pool2_buffer)
+ return 0;
+
+ /* The reserved buffer is what we leave behind us. Thus we always need
+ * at least one extra buffer in minavail otherwise we'll end up waking
+ * up tasks with no memory available, causing a lot of useless wakeups.
+ * That means that we always want to have at least 3 buffers available
+ * (2 for current session, one for next session that might be needed to
+ * release a server connection).
+ */
+ pool2_buffer->minavail = MAX(global.tune.reserved_bufs, 3);
+ if (global.tune.buf_limit)
+ pool2_buffer->limit = global.tune.buf_limit;
+
+ buffer = pool_refill_alloc(pool2_buffer, pool2_buffer->minavail - 1);
+ if (!buffer)
+ return 0;
+
+ pool_free2(pool2_buffer, buffer);
+ return 1;
+}
+
+/* This function writes the string <str> at position <pos> which must be in
+ * buffer <b>, and moves <end> just after the end of <str>. <b>'s parameters
+ * <i> and <p> are updated to be valid after the shift. The shift value
+ * (positive or negative) is returned. If there's no space left, the move is
+ * not done. The function does not adjust ->o because it does not make sense to
+ * use it on data scheduled to be sent. For the same reason, it does not make
+ * sense to call this function on unparsed data, so <orig> is not updated. The
+ * string length is taken from parameter <len>. If <len> is null, the <str>
+ * pointer is allowed to be null.
+ */
+int buffer_replace2(struct buffer *b, char *pos, char *end, const char *str, int len)
+{
+ int delta;
+
+ delta = len - (end - pos);
+
+ if (bi_end(b) + delta > b->data + b->size)
+ return 0; /* no space left */
+
+ if (buffer_not_empty(b) &&
+ bi_end(b) + delta > bo_ptr(b) &&
+ bo_ptr(b) >= bi_end(b))
+ return 0; /* no space left before wrapping data */
+
+ /* first, protect the end of the buffer */
+ memmove(end + delta, end, bi_end(b) - end);
+
+ /* now, copy str over pos */
+ if (len)
+ memcpy(pos, str, len);
+
+ b->i += delta;
+
+ if (buffer_empty(b))
+ b->p = b->data;
+
+ return delta;
+}
+
+/*
+ * Inserts <str> followed by "\r\n" at position <pos> in buffer <b>. The <len>
+ * argument informs about the length of string <str> so that we don't have to
+ * measure it. It does not include the "\r\n". If <str> is NULL, then the buffer
+ * is only opened for len+2 bytes but nothing is copied in. It may be useful in
+ * some circumstances. The send limit is *not* adjusted. Same comments as above
+ * for the valid use cases.
+ *
+ * The number of bytes added is returned on success. 0 is returned on failure.
+ */
+int buffer_insert_line2(struct buffer *b, char *pos, const char *str, int len)
+{
+ int delta;
+
+ delta = len + 2;
+
+ if (bi_end(b) + delta >= b->data + b->size)
+ return 0; /* no space left */
+
+ if (buffer_not_empty(b) &&
+ bi_end(b) + delta > bo_ptr(b) &&
+ bo_ptr(b) >= bi_end(b))
+ return 0; /* no space left before wrapping data */
+
+ /* first, protect the end of the buffer */
+ memmove(pos + delta, pos, bi_end(b) - pos);
+
+ /* now, copy str over pos */
+ if (len && str) {
+ memcpy(pos, str, len);
+ pos[len] = '\r';
+ pos[len + 1] = '\n';
+ }
+
+ b->i += delta;
+ return delta;
+}
+
+/* This function realigns a possibly wrapping buffer so that the input part is
+ * contiguous and starts at the beginning of the buffer and the output part
+ * ends at the end of the buffer. This provides the best conditions since it
+ * allows the largest inputs to be processed at once and ensures that once the
+ * output data leaves, the whole buffer is available at once.
+ */
+void buffer_slow_realign(struct buffer *buf)
+{
+ int block1 = buf->o;
+ int block2 = 0;
+
+ /* process output data in two steps to cover wrapping */
+ if (block1 > buf->p - buf->data) {
+ block2 = buf->p - buf->data;
+ block1 -= block2;
+ }
+ memcpy(swap_buffer + buf->size - buf->o, bo_ptr(buf), block1);
+ memcpy(swap_buffer + buf->size - block2, buf->data, block2);
+
+ /* process input data in two steps to cover wrapping */
+ block1 = buf->i;
+ block2 = 0;
+
+ if (block1 > buf->data + buf->size - buf->p) {
+ block1 = buf->data + buf->size - buf->p;
+ block2 = buf->i - block1;
+ }
+ memcpy(swap_buffer, bi_ptr(buf), block1);
+ memcpy(swap_buffer + block1, buf->data, block2);
+
+ /* reinject changes into the buffer */
+ memcpy(buf->data, swap_buffer, buf->i);
+ memcpy(buf->data + buf->size - buf->o, swap_buffer + buf->size - buf->o, buf->o);
+
+ buf->p = buf->data;
+}
+
+
+/* Realigns a possibly non-contiguous buffer by bouncing bytes from source to
+ * destination. It does not use any intermediate buffer and does the move in
+ * place, though it will be slower than a simple memmove() on contiguous data,
+ * so it's desirable to use it only on non-contiguous buffers. No pointers are
+ * changed, the caller is responsible for that.
+ */
+void buffer_bounce_realign(struct buffer *buf)
+{
+ int advance, to_move;
+ char *from, *to;
+
+ from = bo_ptr(buf);
+ advance = buf->data + buf->size - from;
+ if (!advance)
+ return;
+
+ to_move = buffer_len(buf);
+ while (to_move) {
+ char last, save;
+
+ last = *from;
+ to = from + advance;
+ if (to >= buf->data + buf->size)
+ to -= buf->size;
+
+ while (1) {
+ save = *to;
+ *to = last;
+ last = save;
+ to_move--;
+ if (!to_move)
+ break;
+
+ /* check if we went back home after rotating a number of bytes */
+ if (to == from)
+ break;
+
+ /* if we ended up in the empty area, let's walk to next place. The
+			 * empty area is either between bi_end(buf) and from, or
+			 * before from, or after bi_end(buf).
+ */
+ if (from > bi_end(buf)) {
+ if (to >= bi_end(buf) && to < from)
+ break;
+ } else if (from < bi_end(buf)) {
+ if (to < from || to >= bi_end(buf))
+ break;
+ }
+
+ /* we have overwritten a byte of the original set, let's move it */
+ to += advance;
+ if (to >= buf->data + buf->size)
+ to -= buf->size;
+ }
+
+ from++;
+ if (from >= buf->data + buf->size)
+ from -= buf->size;
+ }
+}
+
+
+/*
+ * Dumps part or all of a buffer.
+ */
+void buffer_dump(FILE *o, struct buffer *b, int from, int to)
+{
+ fprintf(o, "Dumping buffer %p\n", b);
+ fprintf(o, " data=%p o=%d i=%d p=%p\n"
+ " relative: p=0x%04x\n",
+ b->data, b->o, b->i, b->p, (unsigned int)(b->p - b->data));
+
+ fprintf(o, "Dumping contents from byte %d to byte %d\n", from, to);
+ fprintf(o, " 0 1 2 3 4 5 6 7 8 9 a b c d e f\n");
+ /* dump hexa */
+ while (from < to) {
+ int i;
+
+ fprintf(o, " %04x: ", from);
+ for (i = 0; ((from + i) < to) && (i < 16) ; i++) {
+ fprintf(o, "%02x ", (unsigned char)b->data[from + i]);
+ if (((from + i) & 15) == 7)
+ fprintf(o, "- ");
+ }
+ if (to - from < 16) {
+ int j = 0;
+
+ for (j = 0; j < from + 16 - to; j++)
+ fprintf(o, " ");
+ if (j > 8)
+ fprintf(o, " ");
+ }
+ fprintf(o, " ");
+ for (i = 0; (from + i < to) && (i < 16) ; i++) {
+ fprintf(o, "%c", isprint((int)b->data[from + i]) ? b->data[from + i] : '.') ;
+ if ((((from + i) & 15) == 15) && ((from + i) != to-1))
+ fprintf(o, "\n");
+ }
+ from += i;
+ }
+ fprintf(o, "\n--\n");
+ fflush(o);
+}
+
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Configuration parser
+ *
+ * Copyright 2000-2011 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#ifdef CONFIG_HAP_CRYPT
+/* This is to have crypt() defined on Linux */
+#define _GNU_SOURCE
+
+#ifdef NEED_CRYPT_H
+/* some platforms such as Solaris need this */
+#include <crypt.h>
+#endif
+#endif /* CONFIG_HAP_CRYPT */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <netdb.h>
+#include <ctype.h>
+#include <pwd.h>
+#include <grp.h>
+#include <errno.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+
+#include <common/cfgparse.h>
+#include <common/chunk.h>
+#include <common/config.h>
+#include <common/errors.h>
+#include <common/memory.h>
+#include <common/standard.h>
+#include <common/time.h>
+#include <common/uri_auth.h>
+#include <common/namespace.h>
+
+#include <types/capture.h>
+#include <types/compression.h>
+#include <types/global.h>
+#include <types/obj_type.h>
+#include <types/peers.h>
+#include <types/mailers.h>
+#include <types/dns.h>
+
+#include <proto/acl.h>
+#include <proto/auth.h>
+#include <proto/backend.h>
+#include <proto/channel.h>
+#include <proto/checks.h>
+#include <proto/compression.h>
+#include <proto/dumpstats.h>
+#include <proto/frontend.h>
+#include <proto/hdr_idx.h>
+#include <proto/lb_chash.h>
+#include <proto/lb_fas.h>
+#include <proto/lb_fwlc.h>
+#include <proto/lb_fwrr.h>
+#include <proto/lb_map.h>
+#include <proto/listener.h>
+#include <proto/log.h>
+#include <proto/protocol.h>
+#include <proto/proto_tcp.h>
+#include <proto/proto_uxst.h>
+#include <proto/proto_http.h>
+#include <proto/proxy.h>
+#include <proto/peers.h>
+#include <proto/sample.h>
+#include <proto/session.h>
+#include <proto/server.h>
+#include <proto/stream.h>
+#include <proto/raw_sock.h>
+#include <proto/task.h>
+#include <proto/stick_table.h>
+
+#ifdef USE_OPENSSL
+#include <types/ssl_sock.h>
+#include <proto/ssl_sock.h>
+#include <proto/shctx.h>
+#endif /* USE_OPENSSL */
+
+/* This is the SSLv3 CLIENT HELLO packet used in conjunction with the
+ * ssl-hello-chk option to ensure that the remote server speaks SSL.
+ *
+ * Check RFC 2246 (TLSv1.0) sections A.3 and A.4 for details.
+ */
+const char sslv3_client_hello_pkt[] = {
+ "\x16" /* ContentType : 0x16 = Hanshake */
+ "\x03\x00" /* ProtocolVersion : 0x0300 = SSLv3 */
+ "\x00\x79" /* ContentLength : 0x79 bytes after this one */
+ "\x01" /* HanshakeType : 0x01 = CLIENT HELLO */
+ "\x00\x00\x75" /* HandshakeLength : 0x75 bytes after this one */
+ "\x03\x00" /* Hello Version : 0x0300 = v3 */
+ "\x00\x00\x00\x00" /* Unix GMT Time (s) : filled with <now> (@0x0B) */
+ "HAPROXYSSLCHK\nHAPROXYSSLCHK\n" /* Random : must be exactly 28 bytes */
+ "\x00" /* Session ID length : empty (no session ID) */
+ "\x00\x4E" /* Cipher Suite Length : 78 bytes after this one */
+ "\x00\x01" "\x00\x02" "\x00\x03" "\x00\x04" /* 39 most common ciphers : */
+ "\x00\x05" "\x00\x06" "\x00\x07" "\x00\x08" /* 0x01...0x1B, 0x2F...0x3A */
+ "\x00\x09" "\x00\x0A" "\x00\x0B" "\x00\x0C" /* This covers RSA/DH, */
+ "\x00\x0D" "\x00\x0E" "\x00\x0F" "\x00\x10" /* various bit lengths, */
+ "\x00\x11" "\x00\x12" "\x00\x13" "\x00\x14" /* SHA1/MD5, DES/3DES/AES... */
+ "\x00\x15" "\x00\x16" "\x00\x17" "\x00\x18"
+ "\x00\x19" "\x00\x1A" "\x00\x1B" "\x00\x2F"
+ "\x00\x30" "\x00\x31" "\x00\x32" "\x00\x33"
+ "\x00\x34" "\x00\x35" "\x00\x36" "\x00\x37"
+ "\x00\x38" "\x00\x39" "\x00\x3A"
+ "\x01" /* Compression Length : 0x01 = 1 byte for types */
+ "\x00" /* Compression Type : 0x00 = NULL compression */
+};
+
+/* various keyword modifiers */
+enum kw_mod {
+ KWM_STD = 0, /* normal */
+ KWM_NO, /* "no" prefixed before the keyword */
+ KWM_DEF, /* "default" prefixed before the keyword */
+};
+
+/* used to store a configuration section */
+struct cfg_section {
+ struct list list;
+ char *section_name;
+ int (*section_parser)(const char *, int, char **, int);
+};
+
+/* Used to chain configuration section definitions. This list
+ * stores struct cfg_section entries.
+ */
+struct list sections = LIST_HEAD_INIT(sections);
+
+/* some of the most common options which are also the easiest to handle */
+struct cfg_opt {
+ const char *name;
+ unsigned int val;
+ unsigned int cap;
+ unsigned int checks;
+ unsigned int mode;
+};
+
+/* proxy->options */
+static const struct cfg_opt cfg_opts[] =
+{
+ { "abortonclose", PR_O_ABRT_CLOSE, PR_CAP_BE, 0, 0 },
+ { "allbackups", PR_O_USE_ALL_BK, PR_CAP_BE, 0, 0 },
+ { "checkcache", PR_O_CHK_CACHE, PR_CAP_BE, 0, PR_MODE_HTTP },
+ { "clitcpka", PR_O_TCP_CLI_KA, PR_CAP_FE, 0, 0 },
+ { "contstats", PR_O_CONTSTATS, PR_CAP_FE, 0, 0 },
+ { "dontlognull", PR_O_NULLNOLOG, PR_CAP_FE, 0, 0 },
+ { "http_proxy", PR_O_HTTP_PROXY, PR_CAP_FE | PR_CAP_BE, 0, PR_MODE_HTTP },
+ { "http-buffer-request", PR_O_WREQ_BODY, PR_CAP_FE | PR_CAP_BE, 0, PR_MODE_HTTP },
+ { "http-ignore-probes", PR_O_IGNORE_PRB, PR_CAP_FE, 0, PR_MODE_HTTP },
+ { "prefer-last-server", PR_O_PREF_LAST, PR_CAP_BE, 0, PR_MODE_HTTP },
+ { "logasap", PR_O_LOGASAP, PR_CAP_FE, 0, 0 },
+ { "nolinger", PR_O_TCP_NOLING, PR_CAP_FE | PR_CAP_BE, 0, 0 },
+ { "persist", PR_O_PERSIST, PR_CAP_BE, 0, 0 },
+ { "srvtcpka", PR_O_TCP_SRV_KA, PR_CAP_BE, 0, 0 },
+#ifdef TPROXY
+ { "transparent", PR_O_TRANSP, PR_CAP_BE, 0, 0 },
+#else
+ { "transparent", 0, 0, 0, 0 },
+#endif
+
+ { NULL, 0, 0, 0, 0 }
+};
+
+/* proxy->options2 */
+static const struct cfg_opt cfg_opts2[] =
+{
+#ifdef CONFIG_HAP_LINUX_SPLICE
+ { "splice-request", PR_O2_SPLIC_REQ, PR_CAP_FE|PR_CAP_BE, 0, 0 },
+ { "splice-response", PR_O2_SPLIC_RTR, PR_CAP_FE|PR_CAP_BE, 0, 0 },
+ { "splice-auto", PR_O2_SPLIC_AUT, PR_CAP_FE|PR_CAP_BE, 0, 0 },
+#else
+ { "splice-request", 0, 0, 0, 0 },
+ { "splice-response", 0, 0, 0, 0 },
+ { "splice-auto", 0, 0, 0, 0 },
+#endif
+ { "accept-invalid-http-request", PR_O2_REQBUG_OK, PR_CAP_FE, 0, PR_MODE_HTTP },
+ { "accept-invalid-http-response", PR_O2_RSPBUG_OK, PR_CAP_BE, 0, PR_MODE_HTTP },
+ { "dontlog-normal", PR_O2_NOLOGNORM, PR_CAP_FE, 0, 0 },
+ { "log-separate-errors", PR_O2_LOGERRORS, PR_CAP_FE, 0, 0 },
+ { "log-health-checks", PR_O2_LOGHCHKS, PR_CAP_BE, 0, 0 },
+ { "socket-stats", PR_O2_SOCKSTAT, PR_CAP_FE, 0, 0 },
+ { "tcp-smart-accept", PR_O2_SMARTACC, PR_CAP_FE, 0, 0 },
+ { "tcp-smart-connect", PR_O2_SMARTCON, PR_CAP_BE, 0, 0 },
+ { "independant-streams", PR_O2_INDEPSTR, PR_CAP_FE|PR_CAP_BE, 0, 0 },
+ { "independent-streams", PR_O2_INDEPSTR, PR_CAP_FE|PR_CAP_BE, 0, 0 },
+ { "http-use-proxy-header", PR_O2_USE_PXHDR, PR_CAP_FE, 0, PR_MODE_HTTP },
+ { "http-pretend-keepalive", PR_O2_FAKE_KA, PR_CAP_FE|PR_CAP_BE, 0, PR_MODE_HTTP },
+ { "http-no-delay", PR_O2_NODELAY, PR_CAP_FE|PR_CAP_BE, 0, PR_MODE_HTTP },
+ { NULL, 0, 0, 0, 0 }
+};
+
+static char *cursection = NULL;
+static struct proxy defproxy; /* fake proxy used to assign default values on all instances */
+int cfg_maxpconn = DEFAULT_MAXCONN; /* # of simultaneous connections per proxy (-N) */
+int cfg_maxconn = 0; /* # of simultaneous connections, (-n) */
+
+/* List head of all known configuration keywords */
+static struct cfg_kw_list cfg_keywords = {
+ .list = LIST_HEAD_INIT(cfg_keywords.list)
+};
+
+/*
+ * Converts <str> into a list of dynamically allocated listeners.
+ * The format is "{addr|'*'}:port[-end][,{addr|'*'}:port[-end]]*", where :
+ * - <addr> can be empty or "*" to indicate INADDR_ANY ;
+ * - <port> is a numerical port from 1 to 65535 ;
+ * - <end> indicates to use the range from <port> to <end> instead (inclusive).
+ * This can be repeated as many times as necessary, separated by a comma.
+ * The function returns 1 on success or 0 on error. In case of error, if <err> is
+ * not NULL, it must be a valid pointer to either NULL or a freeable area that
+ * will be replaced with an error message.
+ */
+int str2listener(char *str, struct proxy *curproxy, struct bind_conf *bind_conf, const char *file, int line, char **err)
+{
+ struct listener *l;
+ char *next, *dupstr;
+ int port, end;
+
+ next = dupstr = strdup(str);
+
+ while (next && *next) {
+ struct sockaddr_storage ss, *ss2;
+ int fd = -1;
+
+ str = next;
+ /* 1) look for the end of the first address */
+ if ((next = strchr(str, ',')) != NULL) {
+ *next++ = 0;
+ }
+
+ ss2 = str2sa_range(str, &port, &end, err,
+ curproxy == global.stats_fe ? NULL : global.unix_bind.prefix,
+ NULL, 1);
+ if (!ss2)
+ goto fail;
+
+ if (ss2->ss_family == AF_INET || ss2->ss_family == AF_INET6) {
+ if (!port && !end) {
+ memprintf(err, "missing port number: '%s'\n", str);
+ goto fail;
+ }
+
+ if (!port || !end) {
+ memprintf(err, "port offsets are not allowed in 'bind': '%s'\n", str);
+ goto fail;
+ }
+
+ if (port < 1 || port > 65535) {
+ memprintf(err, "invalid port '%d' specified for address '%s'.\n", port, str);
+ goto fail;
+ }
+
+ if (end < 1 || end > 65535) {
+ memprintf(err, "invalid port '%d' specified for address '%s'.\n", end, str);
+ goto fail;
+ }
+ }
+ else if (ss2->ss_family == AF_UNSPEC) {
+ socklen_t addr_len;
+
+ /* We want to attach to an already bound fd whose number
+ * is in the addr part of ss2 when cast to sockaddr_in.
+ * Note that by definition there is a single listener.
+ * We still have to determine the address family to
+ * register the correct protocol.
+ */
+ fd = ((struct sockaddr_in *)ss2)->sin_addr.s_addr;
+ addr_len = sizeof(*ss2);
+ if (getsockname(fd, (struct sockaddr *)ss2, &addr_len) == -1) {
+ memprintf(err, "cannot use file descriptor '%d' : %s.\n", fd, strerror(errno));
+ goto fail;
+ }
+
+ port = end = get_host_port(ss2);
+ }
+
+ /* OK the address looks correct */
+ ss = *ss2;
+
+ for (; port <= end; port++) {
+ l = (struct listener *)calloc(1, sizeof(struct listener));
+ l->obj_type = OBJ_TYPE_LISTENER;
+ LIST_ADDQ(&curproxy->conf.listeners, &l->by_fe);
+ LIST_ADDQ(&bind_conf->listeners, &l->by_bind);
+ l->frontend = curproxy;
+ l->bind_conf = bind_conf;
+
+ l->fd = fd;
+ l->addr = ss;
+ l->xprt = &raw_sock;
+ l->state = LI_INIT;
+
+ if (ss.ss_family == AF_INET) {
+ ((struct sockaddr_in *)(&l->addr))->sin_port = htons(port);
+ tcpv4_add_listener(l);
+ }
+ else if (ss.ss_family == AF_INET6) {
+ ((struct sockaddr_in6 *)(&l->addr))->sin6_port = htons(port);
+ tcpv6_add_listener(l);
+ }
+ else {
+ uxst_add_listener(l);
+ }
+
+ jobs++;
+ listeners++;
+ } /* end for(port) */
+ } /* end while(next) */
+ free(dupstr);
+ return 1;
+ fail:
+ free(dupstr);
+ return 0;
+}
+
+/*
+ * Report a fatal Alert when there are too many arguments.
+ * The index is the current keyword in args.
+ * Returns 0 if the number of arguments is correct, otherwise emits an alert
+ * and returns 1, filling err_code with ERR_ALERT and ERR_FATAL.
+ */
+int alertif_too_many_args_idx(int maxarg, int index, const char *file, int linenum, char **args, int *err_code)
+{
+ char *kw = NULL;
+ int i;
+
+ if (!*args[index + maxarg + 1])
+ return 0;
+
+ memprintf(&kw, "%s", args[0]);
+ for (i = 1; i <= index; i++) {
+ memprintf(&kw, "%s %s", kw, args[i]);
+ }
+
+ Alert("parsing [%s:%d] : '%s' cannot handle unexpected argument '%s'.\n", file, linenum, kw, args[index + maxarg + 1]);
+ free(kw);
+ *err_code |= ERR_ALERT | ERR_FATAL;
+ return 1;
+}
+
+/*
+ * same as alertif_too_many_args_idx with a 0 index
+ */
+int alertif_too_many_args(int maxarg, const char *file, int linenum, char **args, int *err_code)
+{
+ return alertif_too_many_args_idx(maxarg, 0, file, linenum, args, err_code);
+}
+
+/* Report a warning if a rule is placed after a 'tcp-request content' rule.
+ * Return 1 if the warning has been emitted, otherwise 0.
+ */
+int warnif_rule_after_tcp_cont(struct proxy *proxy, const char *file, int line, const char *arg)
+{
+ if (!LIST_ISEMPTY(&proxy->tcp_req.inspect_rules)) {
+ Warning("parsing [%s:%d] : a '%s' rule placed after a 'tcp-request content' rule will still be processed before.\n",
+ file, line, arg);
+ return 1;
+ }
+ return 0;
+}
+
+/* Report a warning if a rule is placed after a 'block' rule.
+ * Return 1 if the warning has been emitted, otherwise 0.
+ */
+int warnif_rule_after_block(struct proxy *proxy, const char *file, int line, const char *arg)
+{
+ if (!LIST_ISEMPTY(&proxy->block_rules)) {
+ Warning("parsing [%s:%d] : a '%s' rule placed after a 'block' rule will still be processed before.\n",
+ file, line, arg);
+ return 1;
+ }
+ return 0;
+}
+
+/* Report a warning if a rule is placed after an 'http-request' rule.
+ * Return 1 if the warning has been emitted, otherwise 0.
+ */
+int warnif_rule_after_http_req(struct proxy *proxy, const char *file, int line, const char *arg)
+{
+ if (!LIST_ISEMPTY(&proxy->http_req_rules)) {
+ Warning("parsing [%s:%d] : a '%s' rule placed after an 'http-request' rule will still be processed before.\n",
+ file, line, arg);
+ return 1;
+ }
+ return 0;
+}
+
+/* Report a warning if a rule is placed after a reqrewrite rule.
+ * Return 1 if the warning has been emitted, otherwise 0.
+ */
+int warnif_rule_after_reqxxx(struct proxy *proxy, const char *file, int line, const char *arg)
+{
+ if (proxy->req_exp) {
+ Warning("parsing [%s:%d] : a '%s' rule placed after a 'reqxxx' rule will still be processed before.\n",
+ file, line, arg);
+ return 1;
+ }
+ return 0;
+}
+
+/* Report a warning if a rule is placed after a reqadd rule.
+ * Return 1 if the warning has been emitted, otherwise 0.
+ */
+int warnif_rule_after_reqadd(struct proxy *proxy, const char *file, int line, const char *arg)
+{
+ if (!LIST_ISEMPTY(&proxy->req_add)) {
+ Warning("parsing [%s:%d] : a '%s' rule placed after a 'reqadd' rule will still be processed before.\n",
+ file, line, arg);
+ return 1;
+ }
+ return 0;
+}
+
+/* Report a warning if a rule is placed after a redirect rule.
+ * Return 1 if the warning has been emitted, otherwise 0.
+ */
+int warnif_rule_after_redirect(struct proxy *proxy, const char *file, int line, const char *arg)
+{
+ if (!LIST_ISEMPTY(&proxy->redirect_rules)) {
+ Warning("parsing [%s:%d] : a '%s' rule placed after a 'redirect' rule will still be processed before.\n",
+ file, line, arg);
+ return 1;
+ }
+ return 0;
+}
+
+/* Report a warning if a rule is placed after a 'use_backend' rule.
+ * Return 1 if the warning has been emitted, otherwise 0.
+ */
+int warnif_rule_after_use_backend(struct proxy *proxy, const char *file, int line, const char *arg)
+{
+ if (!LIST_ISEMPTY(&proxy->switching_rules)) {
+ Warning("parsing [%s:%d] : a '%s' rule placed after a 'use_backend' rule will still be processed before.\n",
+ file, line, arg);
+ return 1;
+ }
+ return 0;
+}
+
+/* Report a warning if a rule is placed after a 'use-server' rule.
+ * Return 1 if the warning has been emitted, otherwise 0.
+ */
+int warnif_rule_after_use_server(struct proxy *proxy, const char *file, int line, const char *arg)
+{
+ if (!LIST_ISEMPTY(&proxy->server_rules)) {
+ Warning("parsing [%s:%d] : a '%s' rule placed after a 'use-server' rule will still be processed before.\n",
+ file, line, arg);
+ return 1;
+ }
+ return 0;
+}
+
+/* report a warning if a "tcp request connection" rule is dangerously placed */
+int warnif_misplaced_tcp_conn(struct proxy *proxy, const char *file, int line, const char *arg)
+{
+ return warnif_rule_after_tcp_cont(proxy, file, line, arg) ||
+ warnif_rule_after_block(proxy, file, line, arg) ||
+ warnif_rule_after_http_req(proxy, file, line, arg) ||
+ warnif_rule_after_reqxxx(proxy, file, line, arg) ||
+ warnif_rule_after_reqadd(proxy, file, line, arg) ||
+ warnif_rule_after_redirect(proxy, file, line, arg) ||
+ warnif_rule_after_use_backend(proxy, file, line, arg) ||
+ warnif_rule_after_use_server(proxy, file, line, arg);
+}
+
+/* report a warning if a "tcp request content" rule is dangerously placed */
+int warnif_misplaced_tcp_cont(struct proxy *proxy, const char *file, int line, const char *arg)
+{
+ return warnif_rule_after_block(proxy, file, line, arg) ||
+ warnif_rule_after_http_req(proxy, file, line, arg) ||
+ warnif_rule_after_reqxxx(proxy, file, line, arg) ||
+ warnif_rule_after_reqadd(proxy, file, line, arg) ||
+ warnif_rule_after_redirect(proxy, file, line, arg) ||
+ warnif_rule_after_use_backend(proxy, file, line, arg) ||
+ warnif_rule_after_use_server(proxy, file, line, arg);
+}
+
+/* report a warning if a block rule is dangerously placed */
+int warnif_misplaced_block(struct proxy *proxy, const char *file, int line, const char *arg)
+{
+ return warnif_rule_after_http_req(proxy, file, line, arg) ||
+ warnif_rule_after_reqxxx(proxy, file, line, arg) ||
+ warnif_rule_after_reqadd(proxy, file, line, arg) ||
+ warnif_rule_after_redirect(proxy, file, line, arg) ||
+ warnif_rule_after_use_backend(proxy, file, line, arg) ||
+ warnif_rule_after_use_server(proxy, file, line, arg);
+}
+
+/* report a warning if an http-request rule is dangerously placed */
+int warnif_misplaced_http_req(struct proxy *proxy, const char *file, int line, const char *arg)
+{
+ return warnif_rule_after_reqxxx(proxy, file, line, arg) ||
+ warnif_rule_after_reqadd(proxy, file, line, arg) ||
+ warnif_rule_after_redirect(proxy, file, line, arg) ||
+ warnif_rule_after_use_backend(proxy, file, line, arg) ||
+ warnif_rule_after_use_server(proxy, file, line, arg);
+}
+
+/* report a warning if a reqxxx rule is dangerously placed */
+int warnif_misplaced_reqxxx(struct proxy *proxy, const char *file, int line, const char *arg)
+{
+ return warnif_rule_after_reqadd(proxy, file, line, arg) ||
+ warnif_rule_after_redirect(proxy, file, line, arg) ||
+ warnif_rule_after_use_backend(proxy, file, line, arg) ||
+ warnif_rule_after_use_server(proxy, file, line, arg);
+}
+
+/* report a warning if a reqadd rule is dangerously placed */
+int warnif_misplaced_reqadd(struct proxy *proxy, const char *file, int line, const char *arg)
+{
+ return warnif_rule_after_redirect(proxy, file, line, arg) ||
+ warnif_rule_after_use_backend(proxy, file, line, arg) ||
+ warnif_rule_after_use_server(proxy, file, line, arg);
+}
+
+/* report a warning if a redirect rule is dangerously placed */
+int warnif_misplaced_redirect(struct proxy *proxy, const char *file, int line, const char *arg)
+{
+ return warnif_rule_after_use_backend(proxy, file, line, arg) ||
+ warnif_rule_after_use_server(proxy, file, line, arg);
+}
+
+/* Report a warning if a request ACL condition uses some keywords that are incompatible
+ * with the place where the ACL is used. It returns either 0 or ERR_WARN so that
+ * its result can be or'ed with err_code. Note that <cond> may be NULL and then
+ * will be ignored.
+ */
+static int warnif_cond_conflicts(const struct acl_cond *cond, unsigned int where, const char *file, int line)
+{
+ const struct acl *acl;
+ const char *kw;
+
+ if (!cond)
+ return 0;
+
+ acl = acl_cond_conflicts(cond, where);
+ if (acl) {
+ if (acl->name && *acl->name)
+ Warning("parsing [%s:%d] : acl '%s' will never match because it only involves keywords that are incompatible with '%s'\n",
+ file, line, acl->name, sample_ckp_names(where));
+ else
+ Warning("parsing [%s:%d] : anonymous acl will never match because it uses keyword '%s' which is incompatible with '%s'\n",
+ file, line, LIST_ELEM(acl->expr.n, struct acl_expr *, list)->kw, sample_ckp_names(where));
+ return ERR_WARN;
+ }
+ if (!acl_cond_kw_conflicts(cond, where, &acl, &kw))
+ return 0;
+
+ if (acl->name && *acl->name)
+ Warning("parsing [%s:%d] : acl '%s' involves keywords '%s' which is incompatible with '%s'\n",
+ file, line, acl->name, kw, sample_ckp_names(where));
+ else
+ Warning("parsing [%s:%d] : anonymous acl involves keyword '%s' which is incompatible with '%s'\n",
+ file, line, kw, sample_ckp_names(where));
+ return ERR_WARN;
+}
+
+/*
+ * parse a line in a <global> section. Returns the error code, 0 if OK, or
+ * any combination of :
+ * - ERR_ABORT: must abort ASAP
+ * - ERR_FATAL: we can continue parsing but not start the service
+ * - ERR_WARN: a warning has been emitted
+ * - ERR_ALERT: an alert has been emitted
+ * Only the first two can stop processing; the other two are just
+ * indicators.
+ */
+int cfg_parse_global(const char *file, int linenum, char **args, int kwm)
+{
+ int err_code = 0;
+ char *errmsg = NULL;
+
+ if (!strcmp(args[0], "global")) { /* new section */
+ /* no option, nothing special to do */
+ alertif_too_many_args(0, file, linenum, args, &err_code);
+ goto out;
+ }
+ else if (!strcmp(args[0], "ca-base")) {
+#ifdef USE_OPENSSL
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (global.ca_base != NULL) {
+ Alert("parsing [%s:%d] : '%s' already specified. Continuing.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects a directory path as an argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.ca_base = strdup(args[1]);
+#else
+ Alert("parsing [%s:%d] : '%s' is not implemented.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+#endif
+ }
+ else if (!strcmp(args[0], "crt-base")) {
+#ifdef USE_OPENSSL
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (global.crt_base != NULL) {
+ Alert("parsing [%s:%d] : '%s' already specified. Continuing.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects a directory path as an argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.crt_base = strdup(args[1]);
+#else
+ Alert("parsing [%s:%d] : '%s' is not implemented.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+#endif
+ }
+ else if (!strcmp(args[0], "daemon")) {
+ if (alertif_too_many_args(0, file, linenum, args, &err_code))
+ goto out;
+ global.mode |= MODE_DAEMON;
+ }
+ else if (!strcmp(args[0], "debug")) {
+ if (alertif_too_many_args(0, file, linenum, args, &err_code))
+ goto out;
+ global.mode |= MODE_DEBUG;
+ }
+ else if (!strcmp(args[0], "noepoll")) {
+ if (alertif_too_many_args(0, file, linenum, args, &err_code))
+ goto out;
+ global.tune.options &= ~GTUNE_USE_EPOLL;
+ }
+ else if (!strcmp(args[0], "nokqueue")) {
+ if (alertif_too_many_args(0, file, linenum, args, &err_code))
+ goto out;
+ global.tune.options &= ~GTUNE_USE_KQUEUE;
+ }
+ else if (!strcmp(args[0], "nopoll")) {
+ if (alertif_too_many_args(0, file, linenum, args, &err_code))
+ goto out;
+ global.tune.options &= ~GTUNE_USE_POLL;
+ }
+ else if (!strcmp(args[0], "nosplice")) {
+ if (alertif_too_many_args(0, file, linenum, args, &err_code))
+ goto out;
+ global.tune.options &= ~GTUNE_USE_SPLICE;
+ }
+ else if (!strcmp(args[0], "nogetaddrinfo")) {
+ if (alertif_too_many_args(0, file, linenum, args, &err_code))
+ goto out;
+ global.tune.options &= ~GTUNE_USE_GAI;
+ }
+ else if (!strcmp(args[0], "quiet")) {
+ if (alertif_too_many_args(0, file, linenum, args, &err_code))
+ goto out;
+ global.mode |= MODE_QUIET;
+ }
+ else if (!strcmp(args[0], "tune.maxpollevents")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (global.tune.maxpollevents != 0) {
+ Alert("parsing [%s:%d] : '%s' already specified. Continuing.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.tune.maxpollevents = atol(args[1]);
+ }
+ else if (!strcmp(args[0], "tune.maxaccept")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (global.tune.maxaccept != 0) {
+ Alert("parsing [%s:%d] : '%s' already specified. Continuing.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.tune.maxaccept = atol(args[1]);
+ }
+ else if (!strcmp(args[0], "tune.chksize")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.tune.chksize = atol(args[1]);
+ }
+#ifdef USE_OPENSSL
+ else if (!strcmp(args[0], "tune.ssl.force-private-cache")) {
+ if (alertif_too_many_args(0, file, linenum, args, &err_code))
+ goto out;
+ global.tune.sslprivatecache = 1;
+ }
+ else if (!strcmp(args[0], "tune.ssl.cachesize")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.tune.sslcachesize = atol(args[1]);
+ }
+ else if (!strcmp(args[0], "tune.ssl.lifetime")) {
+ unsigned int ssllifetime;
+ const char *res;
+
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects ssl sessions <lifetime> in seconds as argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ res = parse_time_err(args[1], &ssllifetime, TIME_UNIT_S);
+ if (res) {
+ Alert("parsing [%s:%d]: unexpected character '%c' in argument to <%s>.\n",
+ file, linenum, *res, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ global.tune.ssllifetime = ssllifetime;
+ }
+ else if (!strcmp(args[0], "tune.ssl.maxrecord")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.tune.ssl_max_record = atol(args[1]);
+ }
+#ifndef OPENSSL_NO_DH
+ else if (!strcmp(args[0], "tune.ssl.default-dh-param")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.tune.ssl_default_dh_param = atol(args[1]);
+ if (global.tune.ssl_default_dh_param < 1024) {
+ Alert("parsing [%s:%d] : '%s' expects a value >= 1024.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+#endif
+ else if (!strcmp(args[0], "tune.ssl.ssl-ctx-cache-size")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.tune.ssl_ctx_cache = atoi(args[1]);
+ if (global.tune.ssl_ctx_cache < 0) {
+ Alert("parsing [%s:%d] : '%s' expects a positive numeric value\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+#endif
+ else if (!strcmp(args[0], "tune.buffers.limit")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.tune.buf_limit = atol(args[1]);
+ if (global.tune.buf_limit) {
+ if (global.tune.buf_limit < 3)
+ global.tune.buf_limit = 3;
+ if (global.tune.buf_limit <= global.tune.reserved_bufs)
+ global.tune.buf_limit = global.tune.reserved_bufs + 1;
+ }
+ }
+ else if (!strcmp(args[0], "tune.buffers.reserve")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.tune.reserved_bufs = atol(args[1]);
+ if (global.tune.reserved_bufs < 2)
+ global.tune.reserved_bufs = 2;
+ if (global.tune.buf_limit && global.tune.buf_limit <= global.tune.reserved_bufs)
+ global.tune.buf_limit = global.tune.reserved_bufs + 1;
+ }
+ else if (!strcmp(args[0], "tune.bufsize")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.tune.bufsize = atol(args[1]);
+ if (global.tune.bufsize <= 0) {
+ Alert("parsing [%s:%d] : '%s' expects a positive integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ chunk_init(&trash, realloc(trash.str, global.tune.bufsize), global.tune.bufsize);
+ alloc_trash_buffers(global.tune.bufsize);
+ }
+ else if (!strcmp(args[0], "tune.maxrewrite")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.tune.maxrewrite = atol(args[1]);
+ if (global.tune.maxrewrite < 0) {
+ Alert("parsing [%s:%d] : '%s' expects a positive integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ else if (!strcmp(args[0], "tune.idletimer")) {
+ unsigned int idle;
+ const char *res;
+
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects a timer value between 0 and 65535 ms.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ res = parse_time_err(args[1], &idle, TIME_UNIT_MS);
+ if (res) {
+ Alert("parsing [%s:%d]: unexpected character '%c' in argument to <%s>.\n",
+ file, linenum, *res, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (idle > 65535) {
+ Alert("parsing [%s:%d] : '%s' expects a timer value between 0 and 65535 ms.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.tune.idle_timer = idle;
+ }
+ else if (!strcmp(args[0], "tune.rcvbuf.client")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (global.tune.client_rcvbuf != 0) {
+ Alert("parsing [%s:%d] : '%s' already specified. Continuing.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.tune.client_rcvbuf = atol(args[1]);
+ }
+ else if (!strcmp(args[0], "tune.rcvbuf.server")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (global.tune.server_rcvbuf != 0) {
+ Alert("parsing [%s:%d] : '%s' already specified. Continuing.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.tune.server_rcvbuf = atol(args[1]);
+ }
+ else if (!strcmp(args[0], "tune.sndbuf.client")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (global.tune.client_sndbuf != 0) {
+ Alert("parsing [%s:%d] : '%s' already specified. Continuing.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.tune.client_sndbuf = atol(args[1]);
+ }
+ else if (!strcmp(args[0], "tune.sndbuf.server")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (global.tune.server_sndbuf != 0) {
+ Alert("parsing [%s:%d] : '%s' already specified. Continuing.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.tune.server_sndbuf = atol(args[1]);
+ }
+ else if (!strcmp(args[0], "tune.pipesize")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.tune.pipesize = atol(args[1]);
+ }
+ else if (!strcmp(args[0], "tune.http.cookielen")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.tune.cookie_len = atol(args[1]) + 1;
+ }
+ else if (!strcmp(args[0], "tune.http.maxhdr")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.tune.max_http_hdr = atol(args[1]);
+ }
+ else if (!strcmp(args[0], "tune.zlib.memlevel")) {
+#ifdef USE_ZLIB
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*args[1]) {
+ global.tune.zlibmemlevel = atoi(args[1]);
+ if (global.tune.zlibmemlevel < 1 || global.tune.zlibmemlevel > 9) {
+ Alert("parsing [%s:%d] : '%s' expects a numeric value between 1 and 9\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ } else {
+ Alert("parsing [%s:%d] : '%s' expects a numeric value between 1 and 9\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+#else
+ Alert("parsing [%s:%d] : '%s' is not implemented.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+#endif
+ }
+ else if (!strcmp(args[0], "tune.zlib.windowsize")) {
+#ifdef USE_ZLIB
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*args[1]) {
+ global.tune.zlibwindowsize = atoi(args[1]);
+ if (global.tune.zlibwindowsize < 8 || global.tune.zlibwindowsize > 15) {
+ Alert("parsing [%s:%d] : '%s' expects a numeric value between 8 and 15\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ } else {
+ Alert("parsing [%s:%d] : '%s' expects a numeric value between 8 and 15\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+#else
+ Alert("parsing [%s:%d] : '%s' is not implemented.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+#endif
+ }
+ else if (!strcmp(args[0], "tune.comp.maxlevel")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*args[1]) {
+ global.tune.comp_maxlevel = atoi(args[1]);
+ if (global.tune.comp_maxlevel < 1 || global.tune.comp_maxlevel > 9) {
+ Alert("parsing [%s:%d] : '%s' expects a numeric value between 1 and 9\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ } else {
+ Alert("parsing [%s:%d] : '%s' expects a numeric value between 1 and 9\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ else if (!strcmp(args[0], "tune.pattern.cache-size")) {
+ if (*args[1]) {
+ global.tune.pattern_cache = atoi(args[1]);
+ if (global.tune.pattern_cache < 0) {
+ Alert("parsing [%s:%d] : '%s' expects a positive numeric value\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ } else {
+ Alert("parsing [%s:%d] : '%s' expects a positive numeric value\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ else if (!strcmp(args[0], "uid")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (global.uid != 0) {
+ Alert("parsing [%s:%d] : user/uid already specified. Continuing.\n", file, linenum);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.uid = atol(args[1]);
+ }
+ else if (!strcmp(args[0], "gid")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (global.gid != 0) {
+ Alert("parsing [%s:%d] : group/gid already specified. Continuing.\n", file, linenum);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.gid = atol(args[1]);
+ }
+ else if (!strcmp(args[0], "external-check")) {
+ if (alertif_too_many_args(0, file, linenum, args, &err_code))
+ goto out;
+ global.external_check = 1;
+ }
+ /* user/group name handling */
+ else if (!strcmp(args[0], "user")) {
+ struct passwd *ha_user;
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (global.uid != 0) {
+ Alert("parsing [%s:%d] : user/uid already specified. Continuing.\n", file, linenum);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+ errno = 0;
+ ha_user = getpwnam(args[1]);
+ if (ha_user != NULL) {
+ global.uid = (int)ha_user->pw_uid;
+ }
+ else {
+ Alert("parsing [%s:%d] : cannot find user id for '%s' (%d:%s)\n", file, linenum, args[1], errno, strerror(errno));
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+ }
+ else if (!strcmp(args[0], "group")) {
+ struct group *ha_group;
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (global.gid != 0) {
+ Alert("parsing [%s:%d] : gid/group was already specified. Continuing.\n", file, linenum);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+ errno = 0;
+ ha_group = getgrnam(args[1]);
+ if (ha_group != NULL) {
+ global.gid = (int)ha_group->gr_gid;
+ }
+ else {
+ Alert("parsing [%s:%d] : cannot find group id for '%s' (%d:%s)\n", file, linenum, args[1], errno, strerror(errno));
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+ }
+ /* end of user/group name handling */
+ else if (!strcmp(args[0], "nbproc")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.nbproc = atol(args[1]);
+ if (global.nbproc < 1 || global.nbproc > LONGBITS) {
+ Alert("parsing [%s:%d] : '%s' must be between 1 and %d (was %d).\n",
+ file, linenum, args[0], LONGBITS, global.nbproc);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ else if (!strcmp(args[0], "maxconn")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (global.maxconn != 0) {
+ Alert("parsing [%s:%d] : '%s' already specified. Continuing.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.maxconn = atol(args[1]);
+#ifdef SYSTEM_MAXCONN
+ if (global.maxconn > DEFAULT_MAXCONN && cfg_maxconn <= DEFAULT_MAXCONN) {
+ Alert("parsing [%s:%d] : maxconn value %d too high for this system.\nLimiting to %d. Please use '-n' to force the value.\n", file, linenum, global.maxconn, DEFAULT_MAXCONN);
+ global.maxconn = DEFAULT_MAXCONN;
+ err_code |= ERR_ALERT;
+ }
+#endif /* SYSTEM_MAXCONN */
+ }
+ else if (!strcmp(args[0], "maxsslconn")) {
+#ifdef USE_OPENSSL
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.maxsslconn = atol(args[1]);
+#else
+ Alert("parsing [%s:%d] : '%s' is not implemented.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+#endif
+ }
+ else if (!strcmp(args[0], "ssl-default-bind-ciphers")) {
+#ifdef USE_OPENSSL
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects a cipher suite as an argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ free(global.listen_default_ciphers);
+ global.listen_default_ciphers = strdup(args[1]);
+#else
+ Alert("parsing [%s:%d] : '%s' is not implemented.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+#endif
+ }
+ else if (!strcmp(args[0], "ssl-default-server-ciphers")) {
+#ifdef USE_OPENSSL
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects a cipher suite as an argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ free(global.connect_default_ciphers);
+ global.connect_default_ciphers = strdup(args[1]);
+#else
+ Alert("parsing [%s:%d] : '%s' is not implemented.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+#endif
+ }
+#ifdef USE_OPENSSL
+#ifndef OPENSSL_NO_DH
+ else if (!strcmp(args[0], "ssl-dh-param-file")) {
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects a file path as an argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ if (ssl_sock_load_global_dh_param_from_file(args[1])) {
+ Alert("parsing [%s:%d] : '%s': unable to load DH parameters from file <%s>.\n", file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+#endif
+#endif
+ else if (!strcmp(args[0], "ssl-server-verify")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ if (strcmp(args[1],"none") == 0)
+ global.ssl_server_verify = SSL_SERVER_VERIFY_NONE;
+ else if (strcmp(args[1],"required") == 0)
+ global.ssl_server_verify = SSL_SERVER_VERIFY_REQUIRED;
+ else {
+ Alert("parsing [%s:%d] : '%s' expects 'none' or 'required' as argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ else if (!strcmp(args[0], "maxconnrate")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (global.cps_lim != 0) {
+ Alert("parsing [%s:%d] : '%s' already specified. Continuing.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.cps_lim = atol(args[1]);
+ }
+ else if (!strcmp(args[0], "maxsessrate")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (global.sps_lim != 0) {
+ Alert("parsing [%s:%d] : '%s' already specified. Continuing.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.sps_lim = atol(args[1]);
+ }
+ else if (!strcmp(args[0], "maxsslrate")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (global.ssl_lim != 0) {
+ Alert("parsing [%s:%d] : '%s' already specified. Continuing.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.ssl_lim = atol(args[1]);
+ }
+ else if (!strcmp(args[0], "maxcomprate")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument in kb/s.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.comp_rate_lim = atoi(args[1]) * 1024;
+ }
+ else if (!strcmp(args[0], "maxpipes")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (global.maxpipes != 0) {
+ Alert("parsing [%s:%d] : '%s' already specified. Continuing.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.maxpipes = atol(args[1]);
+ }
+ else if (!strcmp(args[0], "maxzlibmem")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.maxzlibmem = atol(args[1]) * 1024L * 1024L;
+ }
+ else if (!strcmp(args[0], "maxcompcpuusage")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument between 0 and 100.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ compress_min_idle = 100 - atoi(args[1]);
+ if (compress_min_idle > 100) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument between 0 and 100.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+
+ else if (!strcmp(args[0], "ulimit-n")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (global.rlimit_nofile != 0) {
+ Alert("parsing [%s:%d] : '%s' already specified. Continuing.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.rlimit_nofile = atol(args[1]);
+ }
+ else if (!strcmp(args[0], "chroot")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (global.chroot != NULL) {
+ Alert("parsing [%s:%d] : '%s' already specified. Continuing.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects a directory as an argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.chroot = strdup(args[1]);
+ }
+ else if (!strcmp(args[0], "description")) {
+ int i, len=0;
+ char *d;
+
+ if (!*args[1]) {
+ Alert("parsing [%s:%d]: '%s' expects a string argument.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ for (i = 1; *args[i]; i++)
+ len += strlen(args[i]) + 1;
+
+ if (global.desc)
+ free(global.desc);
+
+ global.desc = d = (char *)calloc(1, len);
+
+ d += snprintf(d, global.desc + len - d, "%s", args[1]);
+ for (i = 2; *args[i]; i++)
+ d += snprintf(d, global.desc + len - d, " %s", args[i]);
+ }
+ else if (!strcmp(args[0], "node")) {
+ int i;
+ char c;
+
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+
+ for (i=0; args[1][i]; i++) {
+ c = args[1][i];
+ if (!isupper((unsigned char)c) && !islower((unsigned char)c) &&
+ !isdigit((unsigned char)c) && c != '_' && c != '-' && c != '.')
+ break;
+ }
+
+ if (!i || args[1][i]) {
+ Alert("parsing [%s:%d]: '%s' requires a valid node name - a non-empty string"
+ " with digits (0-9), letters (A-Z, a-z), dot (.), hyphen (-) or underscore (_).\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (global.node)
+ free(global.node);
+
+ global.node = strdup(args[1]);
+ }
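+ /*
+ * Illustrative configuration line for the "node" keyword parsed above
+ * (the name is a placeholder, not a default):
+ *   node lb1.example
+ */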
+ else if (!strcmp(args[0], "pidfile")) {
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (global.pidfile != NULL) {
+ Alert("parsing [%s:%d] : '%s' already specified. Continuing.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects a file name as an argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.pidfile = strdup(args[1]);
+ }
+ else if (!strcmp(args[0], "unix-bind")) {
+ int cur_arg = 1;
+ while (*(args[cur_arg])) {
+ if (!strcmp(args[cur_arg], "prefix")) {
+ if (global.unix_bind.prefix != NULL) {
+ Alert("parsing [%s:%d] : unix-bind '%s' already specified. Continuing.\n", file, linenum, args[cur_arg]);
+ err_code |= ERR_ALERT;
+ cur_arg += 2;
+ continue;
+ }
+
+ if (*(args[cur_arg+1]) == 0) {
+ Alert("parsing [%s:%d] : unix-bind '%s' expects a path as an argument.\n", file, linenum, args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.unix_bind.prefix = strdup(args[cur_arg+1]);
+ cur_arg += 2;
+ continue;
+ }
+
+ if (!strcmp(args[cur_arg], "mode")) {
+
+ global.unix_bind.ux.mode = strtol(args[cur_arg + 1], NULL, 8);
+ cur_arg += 2;
+ continue;
+ }
+
+ if (!strcmp(args[cur_arg], "uid")) {
+
+ global.unix_bind.ux.uid = atol(args[cur_arg + 1 ]);
+ cur_arg += 2;
+ continue;
+ }
+
+ if (!strcmp(args[cur_arg], "gid")) {
+
+ global.unix_bind.ux.gid = atol(args[cur_arg + 1 ]);
+ cur_arg += 2;
+ continue;
+ }
+
+ if (!strcmp(args[cur_arg], "user")) {
+ struct passwd *user;
+
+ user = getpwnam(args[cur_arg + 1]);
+ if (!user) {
+ Alert("parsing [%s:%d] : '%s' : '%s' unknown user.\n",
+ file, linenum, args[0], args[cur_arg + 1 ]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ global.unix_bind.ux.uid = user->pw_uid;
+ cur_arg += 2;
+ continue;
+ }
+
+ if (!strcmp(args[cur_arg], "group")) {
+ struct group *group;
+
+ group = getgrnam(args[cur_arg + 1]);
+ if (!group) {
+ Alert("parsing [%s:%d] : '%s' : '%s' unknown group.\n",
+ file, linenum, args[0], args[cur_arg + 1 ]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ global.unix_bind.ux.gid = group->gr_gid;
+ cur_arg += 2;
+ continue;
+ }
+
+ Alert("parsing [%s:%d] : '%s' only supports the 'prefix', 'mode', 'uid', 'gid', 'user' and 'group' options.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
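+ /*
+ * Illustrative configuration line for "unix-bind" (paths, names and the
+ * mode are placeholders, not defaults):
+ *   unix-bind prefix /var/run/haproxy mode 770 user haproxy group haproxy
+ */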
+ else if (!strcmp(args[0], "log") && kwm == KWM_NO) { /* no log */
+ /* delete previously inherited or defined syslog servers */
+ struct logsrv *back;
+ struct logsrv *tmp;
+
+ if (*(args[1]) != 0) {
+ Alert("parsing [%s:%d] : 'no log' does not expect any argument.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ list_for_each_entry_safe(tmp, back, &global.logsrvs, list) {
+ LIST_DEL(&tmp->list);
+ free(tmp);
+ }
+ }
+ else if (!strcmp(args[0], "log")) { /* syslog server address */
+ struct sockaddr_storage *sk;
+ int port1, port2;
+ struct logsrv *logsrv;
+ int arg = 0;
+ int len = 0;
+
+ if (alertif_too_many_args(8, file, linenum, args, &err_code)) /* does not strictly check optional arguments */
+ goto out;
+
+ if (*(args[1]) == 0 || *(args[2]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects <address> and <facility> as arguments.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ logsrv = calloc(1, sizeof(struct logsrv));
+
+ /* just after the address, a length may be specified */
+ if (strcmp(args[arg+2], "len") == 0) {
+ len = atoi(args[arg+3]);
+ if (len < 80 || len > 65535) {
+ Alert("parsing [%s:%d] : invalid log length '%s', must be between 80 and 65535.\n",
+ file, linenum, args[arg+3]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ logsrv->maxlen = len;
+
+ /* skip these two args */
+ arg += 2;
+ }
+ else
+ logsrv->maxlen = MAX_SYSLOG_LEN;
+
+ if (logsrv->maxlen > global.max_syslog_len) {
+ global.max_syslog_len = logsrv->maxlen;
+ logheader = realloc(logheader, global.max_syslog_len + 1);
+ logheader_rfc5424 = realloc(logheader_rfc5424, global.max_syslog_len + 1);
+ logline = realloc(logline, global.max_syslog_len + 1);
+ logline_rfc5424 = realloc(logline_rfc5424, global.max_syslog_len + 1);
+ }
+
+ /* after the length, a format may be specified */
+ if (strcmp(args[arg+2], "format") == 0) {
+ logsrv->format = get_log_format(args[arg+3]);
+ if (logsrv->format < 0) {
+ Alert("parsing [%s:%d] : unknown log format '%s'\n", file, linenum, args[arg+3]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ /* skip these two args */
+ arg += 2;
+ }
+
+ if (alertif_too_many_args_idx(3, arg + 1, file, linenum, args, &err_code))
+ goto out;
+
+ logsrv->facility = get_log_facility(args[arg+2]);
+ if (logsrv->facility < 0) {
+ Alert("parsing [%s:%d] : unknown log facility '%s'\n", file, linenum, args[arg+2]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ logsrv->facility = 0;
+ }
+
+ logsrv->level = 7; /* max syslog level = debug */
+ if (*(args[arg+3])) {
+ logsrv->level = get_log_level(args[arg+3]);
+ if (logsrv->level < 0) {
+ Alert("parsing [%s:%d] : unknown optional log level '%s'\n", file, linenum, args[arg+3]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ logsrv->level = 0;
+ }
+ }
+
+ logsrv->minlvl = 0; /* limit syslog level to this level (emerg) */
+ if (*(args[arg+4])) {
+ logsrv->minlvl = get_log_level(args[arg+4]);
+ if (logsrv->minlvl < 0) {
+ Alert("parsing [%s:%d] : unknown optional minimum log level '%s'\n", file, linenum, args[arg+4]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ logsrv->minlvl = 0;
+ }
+ }
+
+ sk = str2sa_range(args[1], &port1, &port2, &errmsg, NULL, NULL, 1);
+ if (!sk) {
+ Alert("parsing [%s:%d] : '%s': %s\n", file, linenum, args[0], errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ free(logsrv);
+ goto out;
+ }
+ logsrv->addr = *sk;
+
+ if (sk->ss_family == AF_INET || sk->ss_family == AF_INET6) {
+ if (port1 != port2) {
+ Alert("parsing [%s:%d] : '%s' : port ranges and offsets are not allowed in '%s'\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ free(logsrv);
+ goto out;
+ }
+
+ if (!port1)
+ set_host_port(&logsrv->addr, SYSLOG_PORT);
+ }
+
+ LIST_ADDQ(&global.logsrvs, &logsrv->list);
+ }
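+ /*
+ * Illustrative "log" lines accepted by the parser above (the address,
+ * length, facility and levels are placeholders):
+ *   log 127.0.0.1:514 local0 notice
+ *   log 127.0.0.1:514 len 4096 format rfc5424 local1 info err
+ */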
+ else if (!strcmp(args[0], "log-send-hostname")) { /* set the hostname in syslog header */
+ char *name;
+
+ if (global.log_send_hostname != NULL) {
+ Alert("parsing [%s:%d] : '%s' already specified. Continuing.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+
+ if (*(args[1]))
+ name = args[1];
+ else
+ name = hostname;
+
+ free(global.log_send_hostname);
+ global.log_send_hostname = strdup(name);
+ }
+ else if (!strcmp(args[0], "server-state-base")) { /* path base where HAProxy can find server state files */
+ if (global.server_state_base != NULL) {
+ Alert("parsing [%s:%d] : '%s' already specified. Continuing.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+
+ if (!*(args[1])) {
+ Alert("parsing [%s:%d] : '%s' expects one argument: a directory path.\n", file, linenum, args[0]);
+ err_code |= ERR_FATAL;
+ goto out;
+ }
+
+ global.server_state_base = strdup(args[1]);
+ }
+ else if (!strcmp(args[0], "server-state-file")) { /* path to the file where HAProxy can load the server states */
+ if (global.server_state_file != NULL) {
+ Alert("parsing [%s:%d] : '%s' already specified. Continuing.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+
+ if (!*(args[1])) {
+ Alert("parsing [%s:%d] : '%s' expects one argument: a file path.\n", file, linenum, args[0]);
+ err_code |= ERR_FATAL;
+ goto out;
+ }
+
+ global.server_state_file = strdup(args[1]);
+ }
+ else if (!strcmp(args[0], "log-tag")) { /* tag to report to syslog */
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects a tag for use in syslog.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ chunk_destroy(&global.log_tag);
+ chunk_initstr(&global.log_tag, strdup(args[1]));
+ }
+ else if (!strcmp(args[0], "spread-checks")) { /* random time between checks (0-50) */
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (global.spread_checks != 0) {
+ Alert("parsing [%s:%d]: spread-checks already specified. Continuing.\n", file, linenum);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d]: '%s' expects an integer argument (0..50).\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ global.spread_checks = atol(args[1]);
+ if (global.spread_checks < 0 || global.spread_checks > 50) {
+ Alert("parsing [%s:%d]: 'spread-checks' needs a positive value in range 0..50.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+ }
+ else if (!strcmp(args[0], "max-spread-checks")) { /* maximum time between first and last check */
+ const char *err;
+ unsigned int val;
+
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d]: '%s' expects a delay in milliseconds as an argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ err = parse_time_err(args[1], &val, TIME_UNIT_MS);
+ if (err) {
+ Alert("parsing [%s:%d]: unsupported character '%c' in '%s' (wants an integer delay).\n", file, linenum, *err, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+ global.max_spread_checks = val;
+ if (global.max_spread_checks < 0) {
+ Alert("parsing [%s:%d]: '%s' needs a positive delay in milliseconds.\n",file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+ }
+ else if (strcmp(args[0], "cpu-map") == 0) { /* map a process list to a CPU set */
+#ifdef USE_CPU_AFFINITY
+ int cur_arg, i;
+ unsigned long proc = 0;
+ unsigned long cpus = 0;
+
+ if (strcmp(args[1], "all") == 0)
+ proc = ~0UL;
+ else if (strcmp(args[1], "odd") == 0)
+ proc = ~0UL/3UL; /* 0x555....555 */
+ else if (strcmp(args[1], "even") == 0)
+ proc = (~0UL/3UL) << 1; /* 0xAAA...AAA */
+ else {
+ proc = atol(args[1]);
+ if (proc >= 1 && proc <= LONGBITS)
+ proc = 1UL << (proc - 1);
+ }
+
+ if (!proc || !*args[2]) {
+ Alert("parsing [%s:%d]: %s expects a process number including 'all', 'odd', 'even', or a number from 1 to %d, followed by a list of CPU ranges with numbers from 0 to %d.\n",
+ file, linenum, args[0], LONGBITS, LONGBITS - 1);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ cur_arg = 2;
+ while (*args[cur_arg]) {
+ unsigned int low, high;
+
+ if (isdigit((int)*args[cur_arg])) {
+ char *dash = strchr(args[cur_arg], '-');
+
+ low = high = str2uic(args[cur_arg]);
+ if (dash)
+ high = str2uic(dash + 1);
+
+ if (high < low) {
+ unsigned int swap = low;
+ low = high;
+ high = swap;
+ }
+
+ if (high >= LONGBITS) {
+ Alert("parsing [%s:%d]: %s supports CPU numbers from 0 to %d.\n",
+ file, linenum, args[0], LONGBITS - 1);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ while (low <= high)
+ cpus |= 1UL << low++;
+ }
+ else {
+ Alert("parsing [%s:%d]: %s : '%s' is not a CPU range.\n",
+ file, linenum, args[0], args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ cur_arg++;
+ }
+ for (i = 0; i < LONGBITS; i++)
+ if (proc & (1UL << i))
+ global.cpu_map[i] = cpus;
+#else
+ Alert("parsing [%s:%d] : '%s' is not enabled, please check build options for USE_CPU_AFFINITY.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+#endif
+ }
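+ /*
+ * Illustrative "cpu-map" lines for the parser above (process and CPU
+ * numbers are placeholders):
+ *   cpu-map all 0-3
+ *   cpu-map 1 0
+ *   cpu-map odd 0 2
+ */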
+ else {
+ struct cfg_kw_list *kwl;
+ int index;
+ int rc;
+
+ list_for_each_entry(kwl, &cfg_keywords.list, list) {
+ for (index = 0; kwl->kw[index].kw != NULL; index++) {
+ if (kwl->kw[index].section != CFG_GLOBAL)
+ continue;
+ if (strcmp(kwl->kw[index].kw, args[0]) == 0) {
+ rc = kwl->kw[index].parse(args, CFG_GLOBAL, NULL, NULL, file, linenum, &errmsg);
+ if (rc < 0) {
+ Alert("parsing [%s:%d] : %s\n", file, linenum, errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+ else if (rc > 0) {
+ Warning("parsing [%s:%d] : %s\n", file, linenum, errmsg);
+ err_code |= ERR_WARN;
+ goto out;
+ }
+ goto out;
+ }
+ }
+ }
+
+ Alert("parsing [%s:%d] : unknown keyword '%s' in '%s' section\n", file, linenum, args[0], "global");
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+
+ out:
+ free(errmsg);
+ return err_code;
+}
+
+void init_default_instance()
+{
+ init_new_proxy(&defproxy);
+ defproxy.mode = PR_MODE_TCP;
+ defproxy.state = PR_STNEW;
+ defproxy.maxconn = cfg_maxpconn;
+ defproxy.conn_retries = CONN_RETRIES;
+ defproxy.redispatch_after = 0;
+
+ defproxy.defsrv.check.inter = DEF_CHKINTR;
+ defproxy.defsrv.check.fastinter = 0;
+ defproxy.defsrv.check.downinter = 0;
+ defproxy.defsrv.agent.inter = DEF_CHKINTR;
+ defproxy.defsrv.agent.fastinter = 0;
+ defproxy.defsrv.agent.downinter = 0;
+ defproxy.defsrv.check.rise = DEF_RISETIME;
+ defproxy.defsrv.check.fall = DEF_FALLTIME;
+ defproxy.defsrv.agent.rise = DEF_AGENT_RISETIME;
+ defproxy.defsrv.agent.fall = DEF_AGENT_FALLTIME;
+ defproxy.defsrv.check.port = 0;
+ defproxy.defsrv.agent.port = 0;
+ defproxy.defsrv.maxqueue = 0;
+ defproxy.defsrv.minconn = 0;
+ defproxy.defsrv.maxconn = 0;
+ defproxy.defsrv.slowstart = 0;
+ defproxy.defsrv.onerror = DEF_HANA_ONERR;
+ defproxy.defsrv.consecutive_errors_limit = DEF_HANA_ERRLIMIT;
+ defproxy.defsrv.uweight = defproxy.defsrv.iweight = 1;
+
+ defproxy.email_alert.level = LOG_ALERT;
+ defproxy.load_server_state_from_file = PR_SRV_STATE_FILE_UNSPEC;
+}
+
+
+/* This function creates and adds a new req* or rsp* rule to the proxy. It compiles the
+ * regex and may return the ERR_WARN bit, and error bits such as ERR_ALERT and
+ * ERR_FATAL in case of error.
+ */
+static int create_cond_regex_rule(const char *file, int line,
+ struct proxy *px, int dir, int action, int flags,
+ const char *cmd, const char *reg, const char *repl,
+ const char **cond_start)
+{
+ struct my_regex *preg = NULL;
+ char *errmsg = NULL;
+ const char *err;
+ char *error;
+ int ret_code = 0;
+ struct acl_cond *cond = NULL;
+ int cs;
+ int cap;
+
+ if (px == &defproxy) {
+ Alert("parsing [%s:%d] : '%s' not allowed in 'defaults' section.\n", file, line, cmd);
+ ret_code |= ERR_ALERT | ERR_FATAL;
+ goto err;
+ }
+
+ if (*reg == 0) {
+ Alert("parsing [%s:%d] : '%s' expects <regex> as an argument.\n", file, line, cmd);
+ ret_code |= ERR_ALERT | ERR_FATAL;
+ goto err;
+ }
+
+ if (warnifnotcap(px, PR_CAP_RS, file, line, cmd, NULL))
+ ret_code |= ERR_WARN;
+
+ if (cond_start &&
+ (strcmp(*cond_start, "if") == 0 || strcmp(*cond_start, "unless") == 0)) {
+ if ((cond = build_acl_cond(file, line, px, cond_start, &errmsg)) == NULL) {
+ Alert("parsing [%s:%d] : error detected while parsing a '%s' condition : %s.\n",
+ file, line, cmd, errmsg);
+ ret_code |= ERR_ALERT | ERR_FATAL;
+ goto err;
+ }
+ }
+ else if (cond_start && **cond_start) {
+ Alert("parsing [%s:%d] : '%s' : Expecting nothing, 'if', or 'unless', got '%s'.\n",
+ file, line, cmd, *cond_start);
+ ret_code |= ERR_ALERT | ERR_FATAL;
+ goto err;
+ }
+
+ ret_code |= warnif_cond_conflicts(cond,
+ (dir == SMP_OPT_DIR_REQ) ?
+ ((px->cap & PR_CAP_FE) ? SMP_VAL_FE_HRQ_HDR : SMP_VAL_BE_HRQ_HDR) :
+ ((px->cap & PR_CAP_BE) ? SMP_VAL_BE_HRS_HDR : SMP_VAL_FE_HRS_HDR),
+ file, line);
+
+ preg = calloc(1, sizeof(*preg));
+ if (!preg) {
+ Alert("parsing [%s:%d] : '%s' : not enough memory to build regex.\n", file, line, cmd);
+ ret_code = ERR_ALERT | ERR_FATAL;
+ goto err;
+ }
+
+ cs = !(flags & REG_ICASE);
+ cap = !(flags & REG_NOSUB);
+ error = NULL;
+ if (!regex_comp(reg, preg, cs, cap, &error)) {
+ Alert("parsing [%s:%d] : '%s' : regular expression '%s' : %s\n", file, line, cmd, reg, error);
+ free(error);
+ ret_code = ERR_ALERT | ERR_FATAL;
+ goto err;
+ }
+
+ err = chain_regex((dir == SMP_OPT_DIR_REQ) ? &px->req_exp : &px->rsp_exp,
+ preg, action, repl ? strdup(repl) : NULL, cond);
+ if (repl && err) {
+ Alert("parsing [%s:%d] : '%s' : invalid character or unterminated sequence in replacement string near '%c'.\n",
+ file, line, cmd, *err);
+ ret_code |= ERR_ALERT | ERR_FATAL;
+ goto err_free;
+ }
+
+ if (dir == SMP_OPT_DIR_REQ && warnif_misplaced_reqxxx(px, file, line, cmd))
+ ret_code |= ERR_WARN;
+
+ return ret_code;
+
+ err_free:
+ regex_free(preg);
+ err:
+ free(preg);
+ free(errmsg);
+ return ret_code;
+}
+
+/*
+ * Parse a line in a "peers" section.
+ * Returns the error code, 0 if OK, or any combination of :
+ * - ERR_ABORT: must abort ASAP
+ * - ERR_FATAL: we can continue parsing but not start the service
+ * - ERR_WARN: a warning has been emitted
+ * - ERR_ALERT: an alert has been emitted
+ * Only the first two can stop processing; the other two are just
+ * indicators.
+ */
+int cfg_parse_peers(const char *file, int linenum, char **args, int kwm)
+{
+ static struct peers *curpeers = NULL;
+ struct peer *newpeer = NULL;
+ const char *err;
+ struct bind_conf *bind_conf;
+ struct listener *l;
+ int err_code = 0;
+ char *errmsg = NULL;
+
+ if (strcmp(args[0], "peers") == 0) { /* new peers section */
+ if (!*args[1]) {
+ Alert("parsing [%s:%d] : missing name for peers section.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+
+ err = invalid_char(args[1]);
+ if (err) {
+ Alert("parsing [%s:%d] : character '%c' is not permitted in '%s' name '%s'.\n",
+ file, linenum, *err, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ for (curpeers = peers; curpeers != NULL; curpeers = curpeers->next) {
+ /*
+ * Peers sections must have unique names: a second section with
+ * the same name is reported as a fatal error.
+ */
+ if (strcmp(curpeers->id, args[1]) == 0) {
+ Alert("Parsing [%s:%d]: peers section '%s' has the same name as another peers section declared at %s:%d.\n",
+ file, linenum, args[1], curpeers->conf.file, curpeers->conf.line);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+ }
+
+ if ((curpeers = (struct peers *)calloc(1, sizeof(struct peers))) == NULL) {
+ Alert("parsing [%s:%d] : out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ curpeers->next = peers;
+ peers = curpeers;
+ curpeers->conf.file = strdup(file);
+ curpeers->conf.line = linenum;
+ curpeers->last_change = now.tv_sec;
+ curpeers->id = strdup(args[1]);
+ curpeers->state = PR_STNEW;
+ }
+ else if (strcmp(args[0], "peer") == 0) { /* peer definition */
+ struct sockaddr_storage *sk;
+ int port1, port2;
+ struct protocol *proto;
+
+ if (!*args[2]) {
+ Alert("parsing [%s:%d] : '%s' expects <name> and <addr>[:<port>] as arguments.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ err = invalid_char(args[1]);
+ if (err) {
+ Alert("parsing [%s:%d] : character '%c' is not permitted in server name '%s'.\n",
+ file, linenum, *err, args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if ((newpeer = (struct peer *)calloc(1, sizeof(struct peer))) == NULL) {
+ Alert("parsing [%s:%d] : out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ /* the peers are linked backwards first */
+ curpeers->count++;
+ newpeer->next = curpeers->remote;
+ curpeers->remote = newpeer;
+ newpeer->conf.file = strdup(file);
+ newpeer->conf.line = linenum;
+
+ newpeer->last_change = now.tv_sec;
+ newpeer->id = strdup(args[1]);
+
+ sk = str2sa_range(args[2], &port1, &port2, &errmsg, NULL, NULL, 1);
+ if (!sk) {
+ Alert("parsing [%s:%d] : '%s %s' : %s\n", file, linenum, args[0], args[1], errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ proto = protocol_by_family(sk->ss_family);
+ if (!proto || !proto->connect) {
+ Alert("parsing [%s:%d] : '%s %s' : connect() not supported for this address family.\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (port1 != port2) {
+ Alert("parsing [%s:%d] : '%s %s' : port ranges and offsets are not allowed in '%s'\n",
+ file, linenum, args[0], args[1], args[2]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (!port1) {
+ Alert("parsing [%s:%d] : '%s %s' : missing or invalid port in '%s'\n",
+ file, linenum, args[0], args[1], args[2]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ newpeer->addr = *sk;
+ newpeer->proto = proto;
+ newpeer->xprt = &raw_sock;
+ newpeer->sock_init_arg = NULL;
+
+ if (strcmp(newpeer->id, localpeer) == 0) {
+ /* this is the local peer: it defines a frontend */
+ newpeer->local = 1;
+ peers->local = newpeer;
+
+ if (!curpeers->peers_fe) {
+ if ((curpeers->peers_fe = calloc(1, sizeof(struct proxy))) == NULL) {
+ Alert("parsing [%s:%d] : out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ init_new_proxy(curpeers->peers_fe);
+ curpeers->peers_fe->parent = curpeers;
+ curpeers->peers_fe->id = strdup(args[1]);
+ curpeers->peers_fe->conf.args.file = curpeers->peers_fe->conf.file = strdup(file);
+ curpeers->peers_fe->conf.args.line = curpeers->peers_fe->conf.line = linenum;
+ peers_setup_frontend(curpeers->peers_fe);
+
+ bind_conf = bind_conf_alloc(&curpeers->peers_fe->conf.bind, file, linenum, args[2]);
+
+ if (!str2listener(args[2], curpeers->peers_fe, bind_conf, file, linenum, &errmsg)) {
+ if (errmsg && *errmsg) {
+ indent_msg(&errmsg, 2);
+ Alert("parsing [%s:%d] : '%s %s' : %s\n", file, linenum, args[0], args[1], errmsg);
+ }
+ else
+ Alert("parsing [%s:%d] : '%s %s' : error encountered while parsing listening address %s.\n",
+ file, linenum, args[0], args[1], args[2]);
+ err_code |= ERR_FATAL;
+ goto out;
+ }
+
+ list_for_each_entry(l, &bind_conf->listeners, by_bind) {
+ l->maxaccept = 1;
+ l->maxconn = ((struct proxy *)curpeers->peers_fe)->maxconn;
+ l->backlog = ((struct proxy *)curpeers->peers_fe)->backlog;
+ l->accept = session_accept_fd;
+ l->handler = process_stream;
+ l->analysers |= ((struct proxy *)curpeers->peers_fe)->fe_req_ana;
+ l->default_target = ((struct proxy *)curpeers->peers_fe)->default_target;
+ l->options |= LI_O_UNLIMITED; /* don't make the peers subject to global limits */
+ global.maxsock += l->maxconn;
+ }
+ }
+ else {
+ Alert("parsing [%s:%d] : '%s %s' : local peer name already referenced at %s:%d.\n",
+ file, linenum, args[0], args[1],
+ curpeers->peers_fe->conf.file, curpeers->peers_fe->conf.line);
+ err_code |= ERR_FATAL;
+ goto out;
+ }
+ }
+ }
+ else if (!strcmp(args[0], "disabled")) { /* disables this peers section */
+ curpeers->state = PR_STSTOPPED;
+ }
+ else if (!strcmp(args[0], "enabled")) { /* enables this peers section (used to revert a disabled default) */
+ curpeers->state = PR_STNEW;
+ }
+ else if (*args[0] != 0) {
+ Alert("parsing [%s:%d] : unknown keyword '%s' in '%s' section\n", file, linenum, args[0], cursection);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+out:
+ free(errmsg);
+ return err_code;
+}
+
+/*
+ * Parse a <resolvers> section.
+ * Returns the error code, 0 if OK, or any combination of :
+ * - ERR_ABORT: must abort ASAP
+ * - ERR_FATAL: we can continue parsing but not start the service
+ * - ERR_WARN: a warning has been emitted
+ * - ERR_ALERT: an alert has been emitted
+ * Only the two first ones can stop processing, the two others are just
+ * indicators.
+ */
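+/* For reference, a typical (hypothetical) configuration fragment that this
+ * parser accepts looks like:
+ *
+ *     resolvers mydns
+ *         nameserver dns1 10.0.0.1:53
+ *         nameserver dns2 10.0.0.2:53
+ *         resolve_retries 3
+ *         timeout retry   1s
+ *         hold valid      10s
+ */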
+int cfg_parse_resolvers(const char *file, int linenum, char **args, int kwm)
+{
+ static struct dns_resolvers *curr_resolvers = NULL;
+ struct dns_nameserver *newnameserver = NULL;
+ const char *err;
+ int err_code = 0;
+ char *errmsg = NULL;
+
+ if (strcmp(args[0], "resolvers") == 0) { /* new resolvers section */
+ if (!*args[1]) {
+ Alert("parsing [%s:%d] : missing name for resolvers section.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ err = invalid_char(args[1]);
+ if (err) {
+ Alert("parsing [%s:%d] : character '%c' is not permitted in '%s' name '%s'.\n",
+ file, linenum, *err, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ list_for_each_entry(curr_resolvers, &dns_resolvers, list) {
+ /* Error if two resolvers sections share the same name */
+ if (strcmp(curr_resolvers->id, args[1]) == 0) {
+ Alert("Parsing [%s:%d]: resolvers '%s' has same name as another resolvers (declared at %s:%d).\n",
+ file, linenum, args[1], curr_resolvers->conf.file, curr_resolvers->conf.line);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ }
+ }
+
+ if ((curr_resolvers = (struct dns_resolvers *)calloc(1, sizeof(struct dns_resolvers))) == NULL) {
+ Alert("parsing [%s:%d] : out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ /* default values */
+ LIST_ADDQ(&dns_resolvers, &curr_resolvers->list);
+ curr_resolvers->conf.file = strdup(file);
+ curr_resolvers->conf.line = linenum;
+ curr_resolvers->id = strdup(args[1]);
+ curr_resolvers->query_ids = EB_ROOT;
+ /* default hold period for valid is 10s */
+ curr_resolvers->hold.valid = 10000;
+ curr_resolvers->timeout.retry = 1000;
+ curr_resolvers->resolve_retries = 3;
+ LIST_INIT(&curr_resolvers->nameserver_list);
+ LIST_INIT(&curr_resolvers->curr_resolution);
+ }
+ else if (strcmp(args[0], "nameserver") == 0) { /* nameserver definition */
+ struct sockaddr_storage *sk;
+ int port1, port2;
+ struct protocol *proto;
+
+ if (!*args[2]) {
+ Alert("parsing [%s:%d] : '%s' expects <name> and <addr>[:<port>] as arguments.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ err = invalid_char(args[1]);
+ if (err) {
+ Alert("parsing [%s:%d] : character '%c' is not permitted in server name '%s'.\n",
+ file, linenum, *err, args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ list_for_each_entry(newnameserver, &curr_resolvers->nameserver_list, list) {
+ /* Error if two nameservers share the same name */
+ if (strcmp(newnameserver->id, args[1]) == 0) {
+ Alert("Parsing [%s:%d]: nameserver '%s' has same name as another nameserver (declared at %s:%d).\n",
+ file, linenum, args[1], newnameserver->conf.file, newnameserver->conf.line);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+ }
+
+ if ((newnameserver = (struct dns_nameserver *)calloc(1, sizeof(struct dns_nameserver))) == NULL) {
+ Alert("parsing [%s:%d] : out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ /* append this nameserver to the resolvers' list */
+ LIST_ADDQ(&curr_resolvers->nameserver_list, &newnameserver->list);
+ curr_resolvers->count_nameservers++;
+ newnameserver->resolvers = curr_resolvers;
+ newnameserver->conf.file = strdup(file);
+ newnameserver->conf.line = linenum;
+ newnameserver->id = strdup(args[1]);
+
+ sk = str2sa_range(args[2], &port1, &port2, &errmsg, NULL, NULL, 1);
+ if (!sk) {
+ Alert("parsing [%s:%d] : '%s %s' : %s\n", file, linenum, args[0], args[1], errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ proto = protocol_by_family(sk->ss_family);
+ if (!proto || !proto->connect) {
+ Alert("parsing [%s:%d] : '%s %s' : connect() not supported for this address family.\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (port1 != port2) {
+ Alert("parsing [%s:%d] : '%s %s' : port ranges and offsets are not allowed in '%s'\n",
+ file, linenum, args[0], args[1], args[2]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ newnameserver->addr = *sk;
+ }
+ else if (strcmp(args[0], "hold") == 0) { /* hold periods */
+ const char *res;
+ unsigned int time;
+
+ if (!*args[2]) {
+ Alert("parsing [%s:%d] : '%s' expects an <event> and a <time> as arguments.\n",
+ file, linenum, args[0]);
+ Alert("<event> currently only supports 'valid'\n");
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ res = parse_time_err(args[2], &time, TIME_UNIT_MS);
+ if (res) {
+ Alert("parsing [%s:%d]: unexpected character '%c' in argument to <%s>.\n",
+ file, linenum, *res, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ if (strcmp(args[1], "valid") == 0)
+ curr_resolvers->hold.valid = time;
+ else {
+ Alert("parsing [%s:%d] : '%s' unknown <event>: '%s', expects 'valid'\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ }
+ else if (strcmp(args[0], "resolve_retries") == 0) {
+ if (!*args[1]) {
+ Alert("parsing [%s:%d] : '%s' expects <nb> as argument.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ curr_resolvers->resolve_retries = atoi(args[1]);
+ }
+ else if (strcmp(args[0], "timeout") == 0) {
+ const char *res;
+ unsigned int timeout_retry;
+
+ if (!*args[2]) {
+ Alert("parsing [%s:%d] : '%s' expects 'retry' and <time> as arguments.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ res = parse_time_err(args[2], &timeout_retry, TIME_UNIT_MS);
+ if (res) {
+ Alert("parsing [%s:%d]: unexpected character '%c' in argument to <%s>.\n",
+ file, linenum, *res, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ curr_resolvers->timeout.retry = timeout_retry;
+ }
+ else if (*args[0] != 0) {
+ Alert("parsing [%s:%d] : unknown keyword '%s' in '%s' section\n", file, linenum, args[0], cursection);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ out:
+ free(errmsg);
+ return err_code;
+}
+
+/*
+ * Parse a line in a <mailers> section.
+ * Returns the error code, 0 if OK, or any combination of :
+ * - ERR_ABORT: must abort ASAP
+ * - ERR_FATAL: we can continue parsing but not start the service
+ * - ERR_WARN: a warning has been emitted
+ * - ERR_ALERT: an alert has been emitted
+ * Only the two first ones can stop processing, the two others are just
+ * indicators.
+ */
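+/* For reference, a typical (hypothetical) configuration fragment that this
+ * parser accepts looks like:
+ *
+ *     mailers mymailers
+ *         mailer smtp1 192.168.0.1:25
+ *         mailer smtp2 192.168.0.2:25
+ */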
+int cfg_parse_mailers(const char *file, int linenum, char **args, int kwm)
+{
+ static struct mailers *curmailers = NULL;
+ struct mailer *newmailer = NULL;
+ const char *err;
+ int err_code = 0;
+ char *errmsg = NULL;
+
+ if (strcmp(args[0], "mailers") == 0) { /* new mailers section */
+ if (!*args[1]) {
+ Alert("parsing [%s:%d] : missing name for mailers section.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ err = invalid_char(args[1]);
+ if (err) {
+ Alert("parsing [%s:%d] : character '%c' is not permitted in '%s' name '%s'.\n",
+ file, linenum, *err, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ for (curmailers = mailers; curmailers != NULL; curmailers = curmailers->next) {
+ /* Error if two mailers sections share the same name. */
+ if (strcmp(curmailers->id, args[1]) == 0) {
+ Alert("Parsing [%s:%d]: mailers section '%s' has the same name as another mailers section declared at %s:%d.\n",
+ file, linenum, args[1], curmailers->conf.file, curmailers->conf.line);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+ }
+
+ if ((curmailers = (struct mailers *)calloc(1, sizeof(struct mailers))) == NULL) {
+ Alert("parsing [%s:%d] : out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ curmailers->next = mailers;
+ mailers = curmailers;
+ curmailers->conf.file = strdup(file);
+ curmailers->conf.line = linenum;
+ curmailers->id = strdup(args[1]);
+ }
+ else if (strcmp(args[0], "mailer") == 0) { /* mailer definition */
+ struct sockaddr_storage *sk;
+ int port1, port2;
+ struct protocol *proto;
+
+ if (!*args[2]) {
+ Alert("parsing [%s:%d] : '%s' expects <name> and <addr>[:<port>] as arguments.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ err = invalid_char(args[1]);
+ if (err) {
+ Alert("parsing [%s:%d] : character '%c' is not permitted in server name '%s'.\n",
+ file, linenum, *err, args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if ((newmailer = (struct mailer *)calloc(1, sizeof(struct mailer))) == NULL) {
+ Alert("parsing [%s:%d] : out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ /* the mailers are linked backwards first */
+ curmailers->count++;
+ newmailer->next = curmailers->mailer_list;
+ curmailers->mailer_list = newmailer;
+ newmailer->mailers = curmailers;
+ newmailer->conf.file = strdup(file);
+ newmailer->conf.line = linenum;
+
+ newmailer->id = strdup(args[1]);
+
+ sk = str2sa_range(args[2], &port1, &port2, &errmsg, NULL, NULL, 1);
+ if (!sk) {
+ Alert("parsing [%s:%d] : '%s %s' : %s\n", file, linenum, args[0], args[1], errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ proto = protocol_by_family(sk->ss_family);
+ if (!proto || !proto->connect || proto->sock_prot != IPPROTO_TCP) {
+ Alert("parsing [%s:%d] : '%s %s' : TCP not supported for this address family.\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (port1 != port2) {
+ Alert("parsing [%s:%d] : '%s %s' : port ranges and offsets are not allowed in '%s'\n",
+ file, linenum, args[0], args[1], args[2]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (!port1) {
+ Alert("parsing [%s:%d] : '%s %s' : missing or invalid port in '%s'\n",
+ file, linenum, args[0], args[1], args[2]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ newmailer->addr = *sk;
+ newmailer->proto = proto;
+ newmailer->xprt = &raw_sock;
+ newmailer->sock_init_arg = NULL;
+ } /* neither "mailer" nor "mailers" */
+ else if (*args[0] != 0) {
+ Alert("parsing [%s:%d] : unknown keyword '%s' in '%s' section\n", file, linenum, args[0], cursection);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+out:
+ free(errmsg);
+ return err_code;
+}
+
+static void free_email_alert(struct proxy *p)
+{
+ free(p->email_alert.mailers.name);
+ p->email_alert.mailers.name = NULL;
+ free(p->email_alert.from);
+ p->email_alert.from = NULL;
+ free(p->email_alert.to);
+ p->email_alert.to = NULL;
+ free(p->email_alert.myhostname);
+ p->email_alert.myhostname = NULL;
+}
+
+int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
+{
+ static struct proxy *curproxy = NULL;
+ const char *err;
+ char *error;
+ int rc;
+ unsigned val;
+ int err_code = 0;
+ struct acl_cond *cond = NULL;
+ struct logsrv *tmplogsrv;
+ char *errmsg = NULL;
+ struct bind_conf *bind_conf;
+
+ if (!strcmp(args[0], "listen"))
+ rc = PR_CAP_LISTEN;
+ else if (!strcmp(args[0], "frontend"))
+ rc = PR_CAP_FE | PR_CAP_RS;
+ else if (!strcmp(args[0], "backend"))
+ rc = PR_CAP_BE | PR_CAP_RS;
+ else
+ rc = PR_CAP_NONE;
+
+ if (rc != PR_CAP_NONE) { /* new proxy */
+ if (!*args[1]) {
+ Alert("parsing [%s:%d] : '%s' expects an <id> argument and\n"
+ " optionally supports [addr1]:port1[-end1]{,[addr]:port[-end]}...\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ err = invalid_char(args[1]);
+ if (err) {
+ Alert("parsing [%s:%d] : character '%c' is not permitted in '%s' name '%s'.\n",
+ file, linenum, *err, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+
+ curproxy = (rc & PR_CAP_FE) ? proxy_fe_by_name(args[1]) : proxy_be_by_name(args[1]);
+ if (curproxy) {
+ Alert("Parsing [%s:%d]: %s '%s' has the same name as %s '%s' declared at %s:%d.\n",
+ file, linenum, proxy_cap_str(rc), args[1], proxy_type_str(curproxy),
+ curproxy->id, curproxy->conf.file, curproxy->conf.line);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+
+ if ((curproxy = (struct proxy *)calloc(1, sizeof(struct proxy))) == NULL) {
+ Alert("parsing [%s:%d] : out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ init_new_proxy(curproxy);
+ curproxy->next = proxy;
+ proxy = curproxy;
+ curproxy->conf.args.file = curproxy->conf.file = strdup(file);
+ curproxy->conf.args.line = curproxy->conf.line = linenum;
+ curproxy->last_change = now.tv_sec;
+ curproxy->id = strdup(args[1]);
+ curproxy->cap = rc;
+ proxy_store_name(curproxy);
+
+ if (alertif_too_many_args(1, file, linenum, args, &err_code)) {
+ if (curproxy->cap & PR_CAP_FE)
+ Alert("parsing [%s:%d] : please use the 'bind' keyword for listening addresses.\n", file, linenum);
+ goto out;
+ }
+
+ /* set default values */
+ memcpy(&curproxy->defsrv, &defproxy.defsrv, sizeof(curproxy->defsrv));
+ curproxy->defsrv.id = "default-server";
+
+ curproxy->state = defproxy.state;
+ curproxy->options = defproxy.options;
+ curproxy->options2 = defproxy.options2;
+ curproxy->no_options = defproxy.no_options;
+ curproxy->no_options2 = defproxy.no_options2;
+ curproxy->bind_proc = defproxy.bind_proc;
+ curproxy->except_net = defproxy.except_net;
+ curproxy->except_mask = defproxy.except_mask;
+ curproxy->except_to = defproxy.except_to;
+ curproxy->except_mask_to = defproxy.except_mask_to;
+
+ if (defproxy.fwdfor_hdr_len) {
+ curproxy->fwdfor_hdr_len = defproxy.fwdfor_hdr_len;
+ curproxy->fwdfor_hdr_name = strdup(defproxy.fwdfor_hdr_name);
+ }
+
+ if (defproxy.orgto_hdr_len) {
+ curproxy->orgto_hdr_len = defproxy.orgto_hdr_len;
+ curproxy->orgto_hdr_name = strdup(defproxy.orgto_hdr_name);
+ }
+
+ if (defproxy.server_id_hdr_len) {
+ curproxy->server_id_hdr_len = defproxy.server_id_hdr_len;
+ curproxy->server_id_hdr_name = strdup(defproxy.server_id_hdr_name);
+ }
+
+ if (curproxy->cap & PR_CAP_FE) {
+ curproxy->maxconn = defproxy.maxconn;
+ curproxy->backlog = defproxy.backlog;
+ curproxy->fe_sps_lim = defproxy.fe_sps_lim;
+
+ /* initialize error relocations */
+ for (rc = 0; rc < HTTP_ERR_SIZE; rc++)
+ chunk_dup(&curproxy->errmsg[rc], &defproxy.errmsg[rc]);
+
+ curproxy->to_log = defproxy.to_log & ~LW_COOKIE & ~LW_REQHDR & ~LW_RSPHDR;
+ }
+
+ if (curproxy->cap & PR_CAP_BE) {
+ curproxy->lbprm.algo = defproxy.lbprm.algo;
+ curproxy->fullconn = defproxy.fullconn;
+ curproxy->conn_retries = defproxy.conn_retries;
+ curproxy->redispatch_after = defproxy.redispatch_after;
+ curproxy->max_ka_queue = defproxy.max_ka_queue;
+
+ if (defproxy.check_req) {
+ curproxy->check_req = calloc(1, defproxy.check_len);
+ memcpy(curproxy->check_req, defproxy.check_req, defproxy.check_len);
+ }
+ curproxy->check_len = defproxy.check_len;
+
+ if (defproxy.expect_str) {
+ curproxy->expect_str = strdup(defproxy.expect_str);
+ if (defproxy.expect_regex) {
+ /* note: this regex is known to be valid */
+ curproxy->expect_regex = calloc(1, sizeof(*curproxy->expect_regex));
+ regex_comp(defproxy.expect_str, curproxy->expect_regex, 1, 1, NULL);
+ }
+ }
+
+ curproxy->ck_opts = defproxy.ck_opts;
+ if (defproxy.cookie_name)
+ curproxy->cookie_name = strdup(defproxy.cookie_name);
+ curproxy->cookie_len = defproxy.cookie_len;
+ if (defproxy.cookie_domain)
+ curproxy->cookie_domain = strdup(defproxy.cookie_domain);
+
+ if (defproxy.cookie_maxidle)
+ curproxy->cookie_maxidle = defproxy.cookie_maxidle;
+
+ if (defproxy.cookie_maxlife)
+ curproxy->cookie_maxlife = defproxy.cookie_maxlife;
+
+ if (defproxy.rdp_cookie_name)
+ curproxy->rdp_cookie_name = strdup(defproxy.rdp_cookie_name);
+ curproxy->rdp_cookie_len = defproxy.rdp_cookie_len;
+
+ if (defproxy.url_param_name)
+ curproxy->url_param_name = strdup(defproxy.url_param_name);
+ curproxy->url_param_len = defproxy.url_param_len;
+
+ if (defproxy.hh_name)
+ curproxy->hh_name = strdup(defproxy.hh_name);
+ curproxy->hh_len = defproxy.hh_len;
+ curproxy->hh_match_domain = defproxy.hh_match_domain;
+
+ if (defproxy.conn_src.iface_name)
+ curproxy->conn_src.iface_name = strdup(defproxy.conn_src.iface_name);
+ curproxy->conn_src.iface_len = defproxy.conn_src.iface_len;
+ curproxy->conn_src.opts = defproxy.conn_src.opts;
+#if defined(CONFIG_HAP_TRANSPARENT)
+ curproxy->conn_src.tproxy_addr = defproxy.conn_src.tproxy_addr;
+#endif
+ curproxy->load_server_state_from_file = defproxy.load_server_state_from_file;
+ }
+
+ if (curproxy->cap & PR_CAP_FE) {
+ if (defproxy.capture_name)
+ curproxy->capture_name = strdup(defproxy.capture_name);
+ curproxy->capture_namelen = defproxy.capture_namelen;
+ curproxy->capture_len = defproxy.capture_len;
+ }
+
+ if (curproxy->cap & PR_CAP_FE) {
+ curproxy->timeout.client = defproxy.timeout.client;
+ curproxy->timeout.clientfin = defproxy.timeout.clientfin;
+ curproxy->timeout.tarpit = defproxy.timeout.tarpit;
+ curproxy->timeout.httpreq = defproxy.timeout.httpreq;
+ curproxy->timeout.httpka = defproxy.timeout.httpka;
+ curproxy->mon_net = defproxy.mon_net;
+ curproxy->mon_mask = defproxy.mon_mask;
+ if (defproxy.monitor_uri)
+ curproxy->monitor_uri = strdup(defproxy.monitor_uri);
+ curproxy->monitor_uri_len = defproxy.monitor_uri_len;
+ if (defproxy.defbe.name)
+ curproxy->defbe.name = strdup(defproxy.defbe.name);
+
+ /* get either a pointer to the logformat string or a copy of it */
+ curproxy->conf.logformat_string = defproxy.conf.logformat_string;
+ if (curproxy->conf.logformat_string &&
+ curproxy->conf.logformat_string != default_http_log_format &&
+ curproxy->conf.logformat_string != default_tcp_log_format &&
+ curproxy->conf.logformat_string != clf_http_log_format)
+ curproxy->conf.logformat_string = strdup(curproxy->conf.logformat_string);
+
+ if (defproxy.conf.lfs_file) {
+ curproxy->conf.lfs_file = strdup(defproxy.conf.lfs_file);
+ curproxy->conf.lfs_line = defproxy.conf.lfs_line;
+ }
+
+ /* get either a pointer to the logformat string for RFC5424 structured-data or a copy of it */
+ curproxy->conf.logformat_sd_string = defproxy.conf.logformat_sd_string;
+ if (curproxy->conf.logformat_sd_string &&
+ curproxy->conf.logformat_sd_string != default_rfc5424_sd_log_format)
+ curproxy->conf.logformat_sd_string = strdup(curproxy->conf.logformat_sd_string);
+
+ if (defproxy.conf.lfsd_file) {
+ curproxy->conf.lfsd_file = strdup(defproxy.conf.lfsd_file);
+ curproxy->conf.lfsd_line = defproxy.conf.lfsd_line;
+ }
+ }
+
+ if (curproxy->cap & PR_CAP_BE) {
+ curproxy->timeout.connect = defproxy.timeout.connect;
+ curproxy->timeout.server = defproxy.timeout.server;
+ curproxy->timeout.serverfin = defproxy.timeout.serverfin;
+ curproxy->timeout.check = defproxy.timeout.check;
+ curproxy->timeout.queue = defproxy.timeout.queue;
+ curproxy->timeout.tarpit = defproxy.timeout.tarpit;
+ curproxy->timeout.httpreq = defproxy.timeout.httpreq;
+ curproxy->timeout.httpka = defproxy.timeout.httpka;
+ curproxy->timeout.tunnel = defproxy.timeout.tunnel;
+ curproxy->conn_src.source_addr = defproxy.conn_src.source_addr;
+ }
+
+ curproxy->mode = defproxy.mode;
+ curproxy->uri_auth = defproxy.uri_auth; /* for stats */
+
+ /* copy default logsrvs to curproxy */
+ list_for_each_entry(tmplogsrv, &defproxy.logsrvs, list) {
+ struct logsrv *node = malloc(sizeof(struct logsrv));
+
+ if (node == NULL) {
+ Alert("parsing [%s:%d] : out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+ memcpy(node, tmplogsrv, sizeof(struct logsrv));
+ LIST_INIT(&node->list);
+ LIST_ADDQ(&curproxy->logsrvs, &node->list);
+ }
+
+ curproxy->conf.uniqueid_format_string = defproxy.conf.uniqueid_format_string;
+ if (curproxy->conf.uniqueid_format_string)
+ curproxy->conf.uniqueid_format_string = strdup(curproxy->conf.uniqueid_format_string);
+
+ chunk_dup(&curproxy->log_tag, &defproxy.log_tag);
+
+ if (defproxy.conf.uif_file) {
+ curproxy->conf.uif_file = strdup(defproxy.conf.uif_file);
+ curproxy->conf.uif_line = defproxy.conf.uif_line;
+ }
+
+ /* copy default header unique id */
+ if (defproxy.header_unique_id)
+ curproxy->header_unique_id = strdup(defproxy.header_unique_id);
+
+ /* default compression options */
+ if (defproxy.comp != NULL) {
+ curproxy->comp = calloc(1, sizeof(struct comp));
+ curproxy->comp->algos = defproxy.comp->algos;
+ curproxy->comp->types = defproxy.comp->types;
+ }
+
+ curproxy->grace = defproxy.grace;
+ curproxy->conf.used_listener_id = EB_ROOT;
+ curproxy->conf.used_server_id = EB_ROOT;
+
+ if (defproxy.check_path)
+ curproxy->check_path = strdup(defproxy.check_path);
+ if (defproxy.check_command)
+ curproxy->check_command = strdup(defproxy.check_command);
+
+ if (defproxy.email_alert.mailers.name)
+ curproxy->email_alert.mailers.name = strdup(defproxy.email_alert.mailers.name);
+ if (defproxy.email_alert.from)
+ curproxy->email_alert.from = strdup(defproxy.email_alert.from);
+ if (defproxy.email_alert.to)
+ curproxy->email_alert.to = strdup(defproxy.email_alert.to);
+ if (defproxy.email_alert.myhostname)
+ curproxy->email_alert.myhostname = strdup(defproxy.email_alert.myhostname);
+ curproxy->email_alert.level = defproxy.email_alert.level;
+ curproxy->email_alert.set = defproxy.email_alert.set;
+
+ goto out;
+ }
+ else if (!strcmp(args[0], "defaults")) { /* use this one to assign default values */
+ /* some variables may have already been initialized earlier */
+ /* FIXME-20070101: we should do this too at the end of the
+ * config parsing to free all default values.
+ */
+ if (alertif_too_many_args(1, file, linenum, args, &err_code)) {
+ err_code |= ERR_ABORT;
+ goto out;
+ }
+
+ free(defproxy.check_req);
+ free(defproxy.check_command);
+ free(defproxy.check_path);
+ free(defproxy.cookie_name);
+ free(defproxy.rdp_cookie_name);
+ free(defproxy.cookie_domain);
+ free(defproxy.url_param_name);
+ free(defproxy.hh_name);
+ free(defproxy.capture_name);
+ free(defproxy.monitor_uri);
+ free(defproxy.defbe.name);
+ free(defproxy.conn_src.iface_name);
+ free(defproxy.fwdfor_hdr_name);
+ defproxy.fwdfor_hdr_len = 0;
+ free(defproxy.orgto_hdr_name);
+ defproxy.orgto_hdr_len = 0;
+ free(defproxy.server_id_hdr_name);
+ defproxy.server_id_hdr_len = 0;
+ free(defproxy.expect_str);
+ if (defproxy.expect_regex) {
+ regex_free(defproxy.expect_regex);
+ free(defproxy.expect_regex);
+ defproxy.expect_regex = NULL;
+ }
+
+ if (defproxy.conf.logformat_string != default_http_log_format &&
+ defproxy.conf.logformat_string != default_tcp_log_format &&
+ defproxy.conf.logformat_string != clf_http_log_format)
+ free(defproxy.conf.logformat_string);
+
+ free(defproxy.conf.uniqueid_format_string);
+ free(defproxy.conf.lfs_file);
+ free(defproxy.conf.uif_file);
+ chunk_destroy(&defproxy.log_tag);
+ free_email_alert(&defproxy);
+
+ if (defproxy.conf.logformat_sd_string != default_rfc5424_sd_log_format)
+ free(defproxy.conf.logformat_sd_string);
+ free(defproxy.conf.lfsd_file);
+
+ for (rc = 0; rc < HTTP_ERR_SIZE; rc++)
+ chunk_destroy(&defproxy.errmsg[rc]);
+
+ /* we cannot free uri_auth because it might already be used */
+ init_default_instance();
+ curproxy = &defproxy;
+ curproxy->conf.args.file = curproxy->conf.file = strdup(file);
+ curproxy->conf.args.line = curproxy->conf.line = linenum;
+ defproxy.cap = PR_CAP_LISTEN; /* all caps for now */
+ goto out;
+ }
+ else if (curproxy == NULL) {
+ Alert("parsing [%s:%d] : 'listen' or 'defaults' expected.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ /* update the current file and line being parsed */
+ curproxy->conf.args.file = curproxy->conf.file;
+ curproxy->conf.args.line = linenum;
+
+ /* Now let's parse the proxy-specific keywords */
+ if (!strcmp(args[0], "server") || !strcmp(args[0], "default-server")) {
+ err_code |= parse_server(file, linenum, args, curproxy, &defproxy);
+ if (err_code & ERR_FATAL)
+ goto out;
+ }
+ else if (!strcmp(args[0], "bind")) { /* new listen addresses */
+ struct listener *l;
+ int cur_arg;
+
+ if (curproxy == &defproxy) {
+ Alert("parsing [%s:%d] : '%s' not allowed in 'defaults' section.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ if (warnifnotcap(curproxy, PR_CAP_FE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (!*(args[1])) {
+ Alert("parsing [%s:%d] : '%s' expects {<path>|[addr1]:port1[-end1]}{,[addr]:port[-end]}... as arguments.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ bind_conf = bind_conf_alloc(&curproxy->conf.bind, file, linenum, args[1]);
+
+ /* use default settings for unix sockets */
+ bind_conf->ux.uid = global.unix_bind.ux.uid;
+ bind_conf->ux.gid = global.unix_bind.ux.gid;
+ bind_conf->ux.mode = global.unix_bind.ux.mode;
+
+ /* NOTE: the following line might create several listeners if there
+ * are comma-separated IPs or port ranges. So all further processing
+ * will have to be applied to all listeners created after last_listen.
+ */
+ if (!str2listener(args[1], curproxy, bind_conf, file, linenum, &errmsg)) {
+ if (errmsg && *errmsg) {
+ indent_msg(&errmsg, 2);
+ Alert("parsing [%s:%d] : '%s' : %s\n", file, linenum, args[0], errmsg);
+ }
+ else
+ Alert("parsing [%s:%d] : '%s' : error encountered while parsing listening address '%s'.\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ list_for_each_entry(l, &bind_conf->listeners, by_bind) {
+ /* account for this new listener in the global socket count */
+ global.maxsock++;
+ }
+
+ cur_arg = 2;
+ while (*(args[cur_arg])) {
+ static int bind_dumped;
+ struct bind_kw *kw;
+ char *err;
+
+ kw = bind_find_kw(args[cur_arg]);
+ if (kw) {
+ err = NULL;
+ int code;
+
+ if (!kw->parse) {
+ Alert("parsing [%s:%d] : '%s %s' : '%s' option is not implemented in this version (check build options).\n",
+ file, linenum, args[0], args[1], args[cur_arg]);
+ cur_arg += 1 + kw->skip ;
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ code = kw->parse(args, cur_arg, curproxy, bind_conf, &err);
+ err_code |= code;
+
+ if (code) {
+ if (err && *err) {
+ indent_msg(&err, 2);
+ Alert("parsing [%s:%d] : '%s %s' : %s\n", file, linenum, args[0], args[1], err);
+ }
+ else
+ Alert("parsing [%s:%d] : '%s %s' : error encountered while processing '%s'.\n",
+ file, linenum, args[0], args[1], args[cur_arg]);
+ if (code & ERR_FATAL) {
+ free(err);
+ cur_arg += 1 + kw->skip;
+ goto out;
+ }
+ }
+ free(err);
+ cur_arg += 1 + kw->skip;
+ continue;
+ }
+
+ err = NULL;
+ if (!bind_dumped) {
+ bind_dump_kws(&err);
+ indent_msg(&err, 4);
+ bind_dumped = 1;
+ }
+
+ Alert("parsing [%s:%d] : '%s %s' unknown keyword '%s'.%s%s\n",
+ file, linenum, args[0], args[1], args[cur_arg],
+ err ? " Registered keywords :" : "", err ? err : "");
+ free(err);
+
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ goto out;
+ }
+ else if (!strcmp(args[0], "monitor-net")) { /* set the range of IPs to ignore */
+ if (!*args[1] || !str2net(args[1], 1, &curproxy->mon_net, &curproxy->mon_mask)) {
+ Alert("parsing [%s:%d] : '%s' expects address[/mask].\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ if (warnifnotcap(curproxy, PR_CAP_FE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
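+ /* Illustrative example (not taken from any shipped config): per the
+  * documented behaviour, a line such as "monitor-net 192.0.2.0/24"
+  * makes connections from that range be answered directly by the
+  * frontend without being logged or forwarded to a server.
+  */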
+ /* flush useless bits */
+ curproxy->mon_net.s_addr &= curproxy->mon_mask.s_addr;
+ goto out;
+ }
+ else if (!strcmp(args[0], "monitor-uri")) { /* set the URI to intercept */
+ if (warnifnotcap(curproxy, PR_CAP_FE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+
+ if (!*args[1]) {
+ Alert("parsing [%s:%d] : '%s' expects a URI.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ free(curproxy->monitor_uri);
+ curproxy->monitor_uri_len = strlen(args[1]);
+ curproxy->monitor_uri = (char *)calloc(1, curproxy->monitor_uri_len + 1);
+ memcpy(curproxy->monitor_uri, args[1], curproxy->monitor_uri_len);
+ curproxy->monitor_uri[curproxy->monitor_uri_len] = '\0';
+
+ goto out;
+ }
+ else if (!strcmp(args[0], "mode")) { /* sets the proxy mode */
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+
+ if (!strcmp(args[1], "http")) curproxy->mode = PR_MODE_HTTP;
+ else if (!strcmp(args[1], "tcp")) curproxy->mode = PR_MODE_TCP;
+ else if (!strcmp(args[1], "health")) curproxy->mode = PR_MODE_HEALTH;
+ else {
+ Alert("parsing [%s:%d] : unknown proxy mode '%s'.\n", file, linenum, args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ else if (!strcmp(args[0], "id")) {
+ struct eb32_node *node;
+
+ if (curproxy == &defproxy) {
+ Alert("parsing [%s:%d]: '%s' not allowed in 'defaults' section.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+
+ if (!*args[1]) {
+ Alert("parsing [%s:%d]: '%s' expects an integer argument.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ curproxy->uuid = atol(args[1]);
+ curproxy->conf.id.key = curproxy->uuid;
+ curproxy->options |= PR_O_FORCED_ID;
+
+ if (curproxy->uuid <= 0) {
+ Alert("parsing [%s:%d]: custom id has to be > 0.\n",
+ file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ node = eb32_lookup(&used_proxy_id, curproxy->uuid);
+ if (node) {
+ struct proxy *target = container_of(node, struct proxy, conf.id);
+ Alert("parsing [%s:%d]: %s %s reuses same custom id as %s %s (declared at %s:%d).\n",
+ file, linenum, proxy_type_str(curproxy), curproxy->id,
+ proxy_type_str(target), target->id, target->conf.file, target->conf.line);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ eb32_insert(&used_proxy_id, &curproxy->conf.id);
+ }
+ else if (!strcmp(args[0], "description")) {
+ int i, len=0;
+ char *d;
+
+ if (curproxy == &defproxy) {
+ Alert("parsing [%s:%d]: '%s' not allowed in 'defaults' section.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (!*args[1]) {
+ Alert("parsing [%s:%d]: '%s' expects a string argument.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ for (i = 1; *args[i]; i++)
+ len += strlen(args[i]) + 1;
+
+ d = (char *)calloc(1, len);
+ curproxy->desc = d;
+
+ d += snprintf(d, curproxy->desc + len - d, "%s", args[1]);
+ for (i = 2; *args[i]; i++)
+ d += snprintf(d, curproxy->desc + len - d, " %s", args[i]);
+
+ }
+ else if (!strcmp(args[0], "disabled")) { /* disables this proxy */
+ if (alertif_too_many_args(0, file, linenum, args, &err_code))
+ goto out;
+ curproxy->state = PR_STSTOPPED;
+ }
+ else if (!strcmp(args[0], "enabled")) { /* enables this proxy (used to revert a disabled default) */
+ if (alertif_too_many_args(0, file, linenum, args, &err_code))
+ goto out;
+ curproxy->state = PR_STNEW;
+ }
+ else if (!strcmp(args[0], "bind-process")) { /* enable this proxy only on some processes */
+ int cur_arg = 1;
+ unsigned long set = 0;
+
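+ /* Illustrative sketch (values invented for the example): a typical
+  * line handled by the loop below would be "bind-process odd" or
+  * "bind-process 1-4". The latter walks the range and sets bits 0..3
+  * of the mask, leaving set == 0xF, i.e. processes 1 to 4.
+  */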
+ while (*args[cur_arg]) {
+ unsigned int low, high;
+
+ if (strcmp(args[cur_arg], "all") == 0) {
+ set = 0;
+ break;
+ }
+ else if (strcmp(args[cur_arg], "odd") == 0) {
+ set |= ~0UL/3UL; /* 0x555....555 */
+ }
+ else if (strcmp(args[cur_arg], "even") == 0) {
+ set |= (~0UL/3UL) << 1; /* 0xAAA...AAA */
+ }
+ else if (isdigit((unsigned char)*args[cur_arg])) {
+ char *dash = strchr(args[cur_arg], '-');
+
+ low = high = str2uic(args[cur_arg]);
+ if (dash)
+ high = str2uic(dash + 1);
+
+ if (high < low) {
+ unsigned int swap = low;
+ low = high;
+ high = swap;
+ }
+
+ if (low < 1 || high > LONGBITS) {
+ Alert("parsing [%s:%d]: %s supports process numbers from 1 to %d.\n",
+ file, linenum, args[0], LONGBITS);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ while (low <= high)
+ set |= 1UL << (low++ - 1);
+ }
+ else {
+ Alert("parsing [%s:%d]: %s expects 'all', 'odd', 'even', or a list of process ranges with numbers from 1 to %d.\n",
+ file, linenum, args[0], LONGBITS);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ cur_arg++;
+ }
+ curproxy->bind_proc = set;
+ }
+ else if (!strcmp(args[0], "acl")) { /* add an ACL */
+ if (curproxy == &defproxy) {
+ Alert("parsing [%s:%d] : '%s' not allowed in 'defaults' section.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ err = invalid_char(args[1]);
+ if (err) {
+ Alert("parsing [%s:%d] : character '%c' is not permitted in acl name '%s'.\n",
+ file, linenum, *err, args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+
+ if (parse_acl((const char **)args + 1, &curproxy->acl, &errmsg, &curproxy->conf.args, file, linenum) == NULL) {
+ Alert("parsing [%s:%d] : error detected while parsing ACL '%s' : %s.\n",
+ file, linenum, args[1], errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ else if (!strcmp(args[0], "cookie")) { /* cookie name */
+ int cur_arg;
+
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects <cookie_name> as argument.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ curproxy->ck_opts = 0;
+ curproxy->cookie_maxidle = curproxy->cookie_maxlife = 0;
+ free(curproxy->cookie_domain); curproxy->cookie_domain = NULL;
+ free(curproxy->cookie_name);
+ curproxy->cookie_name = strdup(args[1]);
+ curproxy->cookie_len = strlen(curproxy->cookie_name);
+
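+ /* Hypothetical example (names invented for illustration): a line such
+  * as "cookie SERVERID insert indirect nocache" ends up setting
+  * PR_CK_INS | PR_CK_IND | PR_CK_NOC in ck_opts via the loop below.
+  */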
+ cur_arg = 2;
+ while (*(args[cur_arg])) {
+ if (!strcmp(args[cur_arg], "rewrite")) {
+ curproxy->ck_opts |= PR_CK_RW;
+ }
+ else if (!strcmp(args[cur_arg], "indirect")) {
+ curproxy->ck_opts |= PR_CK_IND;
+ }
+ else if (!strcmp(args[cur_arg], "insert")) {
+ curproxy->ck_opts |= PR_CK_INS;
+ }
+ else if (!strcmp(args[cur_arg], "nocache")) {
+ curproxy->ck_opts |= PR_CK_NOC;
+ }
+ else if (!strcmp(args[cur_arg], "postonly")) {
+ curproxy->ck_opts |= PR_CK_POST;
+ }
+ else if (!strcmp(args[cur_arg], "preserve")) {
+ curproxy->ck_opts |= PR_CK_PSV;
+ }
+ else if (!strcmp(args[cur_arg], "prefix")) {
+ curproxy->ck_opts |= PR_CK_PFX;
+ }
+ else if (!strcmp(args[cur_arg], "httponly")) {
+ curproxy->ck_opts |= PR_CK_HTTPONLY;
+ }
+ else if (!strcmp(args[cur_arg], "secure")) {
+ curproxy->ck_opts |= PR_CK_SECURE;
+ }
+ else if (!strcmp(args[cur_arg], "domain")) {
+ if (!*args[cur_arg + 1]) {
+ Alert("parsing [%s:%d]: '%s' expects <domain> as argument.\n",
+ file, linenum, args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (*args[cur_arg + 1] != '.' || !strchr(args[cur_arg + 1] + 1, '.')) {
+ /* rfc2109, 4.3.2 Rejecting Cookies */
+ Warning("parsing [%s:%d]: domain '%s' contains no embedded"
+ " dot or does not start with a dot."
+ " RFC 2109 forbids this, so the configuration may not work properly.\n",
+ file, linenum, args[cur_arg + 1]);
+ err_code |= ERR_WARN;
+ }
+
+ err = invalid_domainchar(args[cur_arg + 1]);
+ if (err) {
+ Alert("parsing [%s:%d]: character '%c' is not permitted in domain name '%s'.\n",
+ file, linenum, *err, args[cur_arg + 1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (!curproxy->cookie_domain) {
+ curproxy->cookie_domain = strdup(args[cur_arg + 1]);
+ } else {
+ /* one domain was already specified, add another one by
+ * building the string which will be returned along with
+ * the cookie.
+ */
+ char *new_ptr;
+ int new_len = strlen(curproxy->cookie_domain) +
+ strlen("; domain=") + strlen(args[cur_arg + 1]) + 1;
+ new_ptr = malloc(new_len);
+ snprintf(new_ptr, new_len, "%s; domain=%s", curproxy->cookie_domain, args[cur_arg+1]);
+ free(curproxy->cookie_domain);
+ curproxy->cookie_domain = new_ptr;
+ }
+ cur_arg++;
+ }
+ else if (!strcmp(args[cur_arg], "maxidle")) {
+ unsigned int maxidle;
+ const char *res;
+
+ if (!*args[cur_arg + 1]) {
+ Alert("parsing [%s:%d]: '%s' expects <idletime> in seconds as argument.\n",
+ file, linenum, args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
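+ /* parse_time_err() is expected to accept an optional unit suffix
+  * (e.g. "30s", "5m"); with TIME_UNIT_S a bare number is taken as
+  * seconds, so "maxidle 5m" would yield a maxidle of 300 seconds.
+  */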
+ res = parse_time_err(args[cur_arg + 1], &maxidle, TIME_UNIT_S);
+ if (res) {
+ Alert("parsing [%s:%d]: unexpected character '%c' in argument to <%s>.\n",
+ file, linenum, *res, args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ curproxy->cookie_maxidle = maxidle;
+ cur_arg++;
+ }
+ else if (!strcmp(args[cur_arg], "maxlife")) {
+ unsigned int maxlife;
+ const char *res;
+
+ if (!*args[cur_arg + 1]) {
+ Alert("parsing [%s:%d]: '%s' expects <lifetime> in seconds as argument.\n",
+ file, linenum, args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ res = parse_time_err(args[cur_arg + 1], &maxlife, TIME_UNIT_S);
+ if (res) {
+ Alert("parsing [%s:%d]: unexpected character '%c' in argument to <%s>.\n",
+ file, linenum, *res, args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ curproxy->cookie_maxlife = maxlife;
+ cur_arg++;
+ }
+ else {
+ Alert("parsing [%s:%d] : '%s' supports 'rewrite', 'insert', 'prefix', 'indirect', 'nocache', 'postonly', 'preserve', 'httponly', 'secure', 'domain', 'maxidle', and 'maxlife' options.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ cur_arg++;
+ }
+ if (!POWEROF2(curproxy->ck_opts & (PR_CK_RW|PR_CK_IND))) {
+ Alert("parsing [%s:%d] : cookie 'rewrite' and 'indirect' modes are incompatible.\n",
+ file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+
+ if (!POWEROF2(curproxy->ck_opts & (PR_CK_RW|PR_CK_INS|PR_CK_PFX))) {
+ Alert("parsing [%s:%d] : cookie 'rewrite', 'insert' and 'prefix' modes are incompatible.\n",
+ file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+
+ if ((curproxy->ck_opts & (PR_CK_PSV | PR_CK_INS | PR_CK_IND)) == PR_CK_PSV) {
+ Alert("parsing [%s:%d] : cookie 'preserve' requires at least 'insert' or 'indirect'.\n",
+ file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+ }/* end else if (!strcmp(args[0], "cookie")) */
+ else if (!strcmp(args[0], "email-alert")) {
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : missing argument after '%s'.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (!strcmp(args[1], "from")) {
+ if (*(args[2]) == 0) {
+ Alert("parsing [%s:%d] : missing argument after '%s'.\n",
+ file, linenum, args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ free(curproxy->email_alert.from);
+ curproxy->email_alert.from = strdup(args[2]);
+ }
+ else if (!strcmp(args[1], "mailers")) {
+ if (*(args[2]) == 0) {
+ Alert("parsing [%s:%d] : missing argument after '%s'.\n",
+ file, linenum, args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ free(curproxy->email_alert.mailers.name);
+ curproxy->email_alert.mailers.name = strdup(args[2]);
+ }
+ else if (!strcmp(args[1], "myhostname")) {
+ if (*(args[2]) == 0) {
+ Alert("parsing [%s:%d] : missing argument after '%s'.\n",
+ file, linenum, args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ free(curproxy->email_alert.myhostname);
+ curproxy->email_alert.myhostname = strdup(args[2]);
+ }
+ else if (!strcmp(args[1], "level")) {
+ curproxy->email_alert.level = get_log_level(args[2]);
+ if (curproxy->email_alert.level < 0) {
+ Alert("parsing [%s:%d] : unknown log level '%s' after '%s'.\n",
+ file, linenum, args[2], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ else if (!strcmp(args[1], "to")) {
+ if (*(args[2]) == 0) {
+ Alert("parsing [%s:%d] : missing argument after '%s'.\n",
+ file, linenum, args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ free(curproxy->email_alert.to);
+ curproxy->email_alert.to = strdup(args[2]);
+ }
+ else {
+ Alert("parsing [%s:%d] : email-alert: unknown argument '%s'.\n",
+ file, linenum, args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ /* Indicate that the email_alert is at least partially configured */
+ curproxy->email_alert.set = 1;
+ }/* end else if (!strcmp(args[0], "email-alert")) */
+ else if (!strcmp(args[0], "external-check")) {
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : missing argument after '%s'.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (!strcmp(args[1], "command")) {
+ if (alertif_too_many_args(2, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[2]) == 0) {
+ Alert("parsing [%s:%d] : missing argument after '%s'.\n",
+ file, linenum, args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ free(curproxy->check_command);
+ curproxy->check_command = strdup(args[2]);
+ }
+ else if (!strcmp(args[1], "path")) {
+ if (alertif_too_many_args(2, file, linenum, args, &err_code))
+ goto out;
+ if (*(args[2]) == 0) {
+ Alert("parsing [%s:%d] : missing argument after '%s'.\n",
+ file, linenum, args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ free(curproxy->check_path);
+ curproxy->check_path = strdup(args[2]);
+ }
+ else {
+ Alert("parsing [%s:%d] : external-check: unknown argument '%s'.\n",
+ file, linenum, args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }/* end else if (!strcmp(args[0], "external-check")) */
+ else if (!strcmp(args[0], "persist")) { /* persist */
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : missing persist method.\n",
+ file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (!strncmp(args[1], "rdp-cookie", 10)) {
+ curproxy->options2 |= PR_O2_RDPC_PRST;
+
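+ /* Sketch of the accepted forms (cookie name invented for the example):
+  *   "persist rdp-cookie"        -> default cookie name "msts"
+  *   "persist rdp-cookie(MYCK)"  -> explicit cookie name "MYCK"
+  */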
+ if (*(args[1] + 10) == '(') { /* cookie name */
+ const char *beg, *end;
+
+ beg = args[1] + 11;
+ end = strchr(beg, ')');
+
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+
+ if (!end || end == beg) {
+ Alert("parsing [%s:%d] : 'persist rdp-cookie(name)' requires an rdp cookie name.\n",
+ file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ free(curproxy->rdp_cookie_name);
+ curproxy->rdp_cookie_name = my_strndup(beg, end - beg);
+ curproxy->rdp_cookie_len = end - beg;
+ }
+ else if (*(args[1] + 10) == '\0') { /* default cookie name 'msts' */
+ free(curproxy->rdp_cookie_name);
+ curproxy->rdp_cookie_name = strdup("msts");
+ curproxy->rdp_cookie_len = strlen(curproxy->rdp_cookie_name);
+ }
+ else { /* syntax */
+ Alert("parsing [%s:%d] : 'persist rdp-cookie(name)' requires an rdp cookie name.\n",
+ file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ else {
+ Alert("parsing [%s:%d] : unknown persist method.\n",
+ file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ else if (!strcmp(args[0], "appsession")) { /* cookie name */
+ Alert("parsing [%s:%d] : '%s' is not supported anymore, please check the documentation.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ else if (!strcmp(args[0], "load-server-state-from-file")) {
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+ if (!strcmp(args[1], "global")) { /* use the file pointed to by global server-state-file directive */
+ curproxy->load_server_state_from_file = PR_SRV_STATE_FILE_GLOBAL;
+ }
+ else if (!strcmp(args[1], "local")) { /* use the server-state-file-name variable to locate the server-state file */
+ curproxy->load_server_state_from_file = PR_SRV_STATE_FILE_LOCAL;
+ }
+ else if (!strcmp(args[1], "none")) { /* don't use server-state-file directive for this backend */
+ curproxy->load_server_state_from_file = PR_SRV_STATE_FILE_NONE;
+ }
+ else {
+ Alert("parsing [%s:%d] : '%s' expects 'global', 'local' or 'none'. Got '%s'.\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ else if (!strcmp(args[0], "server-state-file-name")) {
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects 'use-backend-name' or a string. Got no argument\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ else if (!strcmp(args[1], "use-backend-name"))
+ curproxy->server_state_file_name = strdup(curproxy->id);
+ else
+ curproxy->server_state_file_name = strdup(args[1]);
+ }
+ else if (!strcmp(args[0], "capture")) {
+ if (warnifnotcap(curproxy, PR_CAP_FE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (!strcmp(args[1], "cookie")) { /* name of a cookie to capture */
+ if (curproxy == &defproxy) {
+ Alert("parsing [%s:%d] : '%s %s' not allowed in 'defaults' section.\n", file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (alertif_too_many_args_idx(4, 1, file, linenum, args, &err_code))
+ goto out;
+
+ if (*(args[4]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects 'cookie' <cookie_name> 'len' <len>.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ free(curproxy->capture_name);
+ curproxy->capture_name = strdup(args[2]);
+ curproxy->capture_namelen = strlen(curproxy->capture_name);
+ curproxy->capture_len = atol(args[4]);
+ curproxy->to_log |= LW_COOKIE;
+ }
+ else if (!strcmp(args[1], "request") && !strcmp(args[2], "header")) {
+ struct cap_hdr *hdr;
+
+ if (curproxy == &defproxy) {
+ Alert("parsing [%s:%d] : '%s %s' not allowed in 'defaults' section.\n", file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (alertif_too_many_args_idx(4, 1, file, linenum, args, &err_code))
+ goto out;
+
+ if (*(args[3]) == 0 || strcmp(args[4], "len") != 0 || *(args[5]) == 0) {
+ Alert("parsing [%s:%d] : '%s %s' expects 'header' <header_name> 'len' <len>.\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ hdr = calloc(1, sizeof(struct cap_hdr));
+ hdr->next = curproxy->req_cap;
+ hdr->name = strdup(args[3]);
+ hdr->namelen = strlen(args[3]);
+ hdr->len = atol(args[5]);
+ hdr->pool = create_pool("caphdr", hdr->len + 1, MEM_F_SHARED);
+ hdr->index = curproxy->nb_req_cap++;
+ curproxy->req_cap = hdr;
+ curproxy->to_log |= LW_REQHDR;
+ }
+ else if (!strcmp(args[1], "response") && !strcmp(args[2], "header")) {
+ struct cap_hdr *hdr;
+
+ if (curproxy == &defproxy) {
+ Alert("parsing [%s:%d] : '%s %s' not allowed in 'defaults' section.\n", file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (alertif_too_many_args_idx(4, 1, file, linenum, args, &err_code))
+ goto out;
+
+ if (*(args[3]) == 0 || strcmp(args[4], "len") != 0 || *(args[5]) == 0) {
+ Alert("parsing [%s:%d] : '%s %s' expects 'header' <header_name> 'len' <len>.\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ hdr = calloc(1, sizeof(struct cap_hdr));
+ hdr->next = curproxy->rsp_cap;
+ hdr->name = strdup(args[3]);
+ hdr->namelen = strlen(args[3]);
+ hdr->len = atol(args[5]);
+ hdr->pool = create_pool("caphdr", hdr->len + 1, MEM_F_SHARED);
+ hdr->index = curproxy->nb_rsp_cap++;
+ curproxy->rsp_cap = hdr;
+ curproxy->to_log |= LW_RSPHDR;
+ }
+ else {
+ Alert("parsing [%s:%d] : '%s' expects 'cookie' or 'request header' or 'response header'.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ else if (!strcmp(args[0], "retries")) { /* connection retries */
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument (dispatch counts for one).\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ curproxy->conn_retries = atol(args[1]);
+ }
+ else if (!strcmp(args[0], "http-request")) { /* request access control: allow/deny/auth */
+ struct act_rule *rule;
+
+ if (curproxy == &defproxy) {
+ Alert("parsing [%s:%d]: '%s' not allowed in 'defaults' section.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (!LIST_ISEMPTY(&curproxy->http_req_rules) &&
+ !LIST_PREV(&curproxy->http_req_rules, struct act_rule *, list)->cond &&
+ (LIST_PREV(&curproxy->http_req_rules, struct act_rule *, list)->action == ACT_ACTION_ALLOW ||
+ LIST_PREV(&curproxy->http_req_rules, struct act_rule *, list)->action == ACT_ACTION_DENY ||
+ LIST_PREV(&curproxy->http_req_rules, struct act_rule *, list)->action == ACT_HTTP_REDIR ||
+ LIST_PREV(&curproxy->http_req_rules, struct act_rule *, list)->action == ACT_HTTP_REQ_AUTH)) {
+ Warning("parsing [%s:%d]: previous '%s' action is final and has no condition attached, further entries are NOOP.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_WARN;
+ }
+
+ rule = parse_http_req_cond((const char **)args + 1, file, linenum, curproxy);
+
+ if (!rule) {
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ err_code |= warnif_misplaced_http_req(curproxy, file, linenum, args[0]);
+ err_code |= warnif_cond_conflicts(rule->cond,
+ (curproxy->cap & PR_CAP_FE) ? SMP_VAL_FE_HRQ_HDR : SMP_VAL_BE_HRQ_HDR,
+ file, linenum);
+
+ LIST_ADDQ(&curproxy->http_req_rules, &rule->list);
+ }
+ else if (!strcmp(args[0], "http-response")) { /* response access control */
+ struct act_rule *rule;
+
+ if (curproxy == &defproxy) {
+ Alert("parsing [%s:%d]: '%s' not allowed in 'defaults' section.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (!LIST_ISEMPTY(&curproxy->http_res_rules) &&
+ !LIST_PREV(&curproxy->http_res_rules, struct act_rule *, list)->cond &&
+ (LIST_PREV(&curproxy->http_res_rules, struct act_rule *, list)->action == ACT_ACTION_ALLOW ||
+ LIST_PREV(&curproxy->http_res_rules, struct act_rule *, list)->action == ACT_ACTION_DENY)) {
+ Warning("parsing [%s:%d]: previous '%s' action is final and has no condition attached, further entries are NOOP.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_WARN;
+ }
+
+ rule = parse_http_res_cond((const char **)args + 1, file, linenum, curproxy);
+
+ if (!rule) {
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ err_code |= warnif_cond_conflicts(rule->cond,
+ (curproxy->cap & PR_CAP_BE) ? SMP_VAL_BE_HRS_HDR : SMP_VAL_FE_HRS_HDR,
+ file, linenum);
+
+ LIST_ADDQ(&curproxy->http_res_rules, &rule->list);
+ }
+ else if (!strcmp(args[0], "http-send-name-header")) { /* send server name in request header */
+ /* set the header name and length into the proxy structure */
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (!*args[1]) {
+ Alert("parsing [%s:%d] : '%s' requires a header string.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ /* set the desired header name */
+ free(curproxy->server_id_hdr_name);
+ curproxy->server_id_hdr_name = strdup(args[1]);
+ curproxy->server_id_hdr_len = strlen(curproxy->server_id_hdr_name);
+ }
+ else if (!strcmp(args[0], "block")) { /* early blocking based on ACLs */
+ struct act_rule *rule;
+
+ if (curproxy == &defproxy) {
+ Alert("parsing [%s:%d] : '%s' not allowed in 'defaults' section.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ /* emulate "block" using "http-request block". Since these rules are supposed to
+ * be processed before all http-request rules, we put them into their own list
+ * and will insert them at the end.
+ */
+ rule = parse_http_req_cond((const char **)args, file, linenum, curproxy);
+ if (!rule) {
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+ err_code |= warnif_misplaced_block(curproxy, file, linenum, args[0]);
+ err_code |= warnif_cond_conflicts(rule->cond,
+ (curproxy->cap & PR_CAP_FE) ? SMP_VAL_FE_HRQ_HDR : SMP_VAL_BE_HRQ_HDR,
+ file, linenum);
+ LIST_ADDQ(&curproxy->block_rules, &rule->list);
+
+ if (!already_warned(WARN_BLOCK_DEPRECATED))
+ Warning("parsing [%s:%d] : The '%s' directive is now deprecated in favor of 'http-request deny' which uses the exact same syntax. The rules are translated but support might disappear in a future version.\n", file, linenum, args[0]);
+
+ }
+ else if (!strcmp(args[0], "redirect")) {
+ struct redirect_rule *rule;
+
+ if (curproxy == &defproxy) {
+ Alert("parsing [%s:%d] : '%s' not allowed in 'defaults' section.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if ((rule = http_parse_redirect_rule(file, linenum, curproxy, (const char **)args + 1, &errmsg, 0, 0)) == NULL) {
+ Alert("parsing [%s:%d] : error detected in %s '%s' while parsing redirect rule : %s.\n",
+ file, linenum, proxy_type_str(curproxy), curproxy->id, errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ LIST_ADDQ(&curproxy->redirect_rules, &rule->list);
+ err_code |= warnif_misplaced_redirect(curproxy, file, linenum, args[0]);
+ err_code |= warnif_cond_conflicts(rule->cond,
+ (curproxy->cap & PR_CAP_FE) ? SMP_VAL_FE_HRQ_HDR : SMP_VAL_BE_HRQ_HDR,
+ file, linenum);
+ }
+ else if (!strcmp(args[0], "use_backend")) {
+ struct switching_rule *rule;
+
+ if (curproxy == &defproxy) {
+ Alert("parsing [%s:%d] : '%s' not allowed in 'defaults' section.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (warnifnotcap(curproxy, PR_CAP_FE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects a backend name.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (strcmp(args[2], "if") == 0 || strcmp(args[2], "unless") == 0) {
+ if ((cond = build_acl_cond(file, linenum, curproxy, (const char **)args + 2, &errmsg)) == NULL) {
+ Alert("parsing [%s:%d] : error detected while parsing switching rule : %s.\n",
+ file, linenum, errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ err_code |= warnif_cond_conflicts(cond, SMP_VAL_FE_SET_BCK, file, linenum);
+ }
+
+ rule = (struct switching_rule *)calloc(1, sizeof(*rule));
+ rule->cond = cond;
+ rule->be.name = strdup(args[1]);
+ LIST_INIT(&rule->list);
+ LIST_ADDQ(&curproxy->switching_rules, &rule->list);
+ }
+ else if (strcmp(args[0], "use-server") == 0) {
+ struct server_rule *rule;
+
+ if (curproxy == &defproxy) {
+ Alert("parsing [%s:%d] : '%s' not allowed in 'defaults' section.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects a server name.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (strcmp(args[2], "if") != 0 && strcmp(args[2], "unless") != 0) {
+ Alert("parsing [%s:%d] : '%s' requires either 'if' or 'unless' followed by a condition.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if ((cond = build_acl_cond(file, linenum, curproxy, (const char **)args + 2, &errmsg)) == NULL) {
+ Alert("parsing [%s:%d] : error detected while parsing switching rule : %s.\n",
+ file, linenum, errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ err_code |= warnif_cond_conflicts(cond, SMP_VAL_BE_SET_SRV, file, linenum);
+
+ rule = (struct server_rule *)calloc(1, sizeof(*rule));
+ rule->cond = cond;
+ rule->srv.name = strdup(args[1]);
+ LIST_INIT(&rule->list);
+ LIST_ADDQ(&curproxy->server_rules, &rule->list);
+ curproxy->be_req_ana |= AN_REQ_SRV_RULES;
+ }
+ else if ((!strcmp(args[0], "force-persist")) ||
+ (!strcmp(args[0], "ignore-persist"))) {
+ struct persist_rule *rule;
+
+ if (curproxy == &defproxy) {
+ Alert("parsing [%s:%d] : '%s' not allowed in 'defaults' section.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (warnifnotcap(curproxy, PR_CAP_FE|PR_CAP_BE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (strcmp(args[1], "if") != 0 && strcmp(args[1], "unless") != 0) {
+ Alert("parsing [%s:%d] : '%s' requires either 'if' or 'unless' followed by a condition.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if ((cond = build_acl_cond(file, linenum, curproxy, (const char **)args + 1, &errmsg)) == NULL) {
+ Alert("parsing [%s:%d] : error detected while parsing a '%s' rule : %s.\n",
+ file, linenum, args[0], errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ /* note: BE_REQ_CNT is the first one after FE_SET_BCK, which is
+ * where force-persist is applied.
+ */
+ err_code |= warnif_cond_conflicts(cond, SMP_VAL_BE_REQ_CNT, file, linenum);
+
+ rule = (struct persist_rule *)calloc(1, sizeof(*rule));
+ rule->cond = cond;
+ if (!strcmp(args[0], "force-persist")) {
+ rule->type = PERSIST_TYPE_FORCE;
+ } else {
+ rule->type = PERSIST_TYPE_IGNORE;
+ }
+ LIST_INIT(&rule->list);
+ LIST_ADDQ(&curproxy->persist_rules, &rule->list);
+ }
+ else if (!strcmp(args[0], "stick-table")) {
+ int myidx = 1;
+ struct proxy *other;
+
+ other = proxy_tbl_by_name(curproxy->id);
+ if (other) {
+ Alert("parsing [%s:%d] : stick-table name '%s' conflicts with table declared in %s '%s' at %s:%d.\n",
+ file, linenum, curproxy->id, proxy_type_str(other), other->id, other->conf.file, other->conf.line);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ curproxy->table.id = curproxy->id;
+ curproxy->table.type = (unsigned int)-1;
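+ /* Illustrative configuration (an example, not exhaustive): a line such
+  * as "stick-table type ip size 200k expire 30m store conn_cur" is
+  * consumed keyword by keyword by the loop below.
+  */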
+ while (*args[myidx]) {
+ const char *err;
+
+ if (strcmp(args[myidx], "size") == 0) {
+ myidx++;
+ if (!*(args[myidx])) {
+ Alert("parsing [%s:%d] : stick-table: missing argument after '%s'.\n",
+ file, linenum, args[myidx-1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ if ((err = parse_size_err(args[myidx], &curproxy->table.size))) {
+ Alert("parsing [%s:%d] : stick-table: unexpected character '%c' in argument of '%s'.\n",
+ file, linenum, *err, args[myidx-1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ myidx++;
+ }
+ else if (strcmp(args[myidx], "peers") == 0) {
+ myidx++;
+ if (!*(args[myidx])) {
+ Alert("parsing [%s:%d] : stick-table: missing argument after '%s'.\n",
+ file, linenum, args[myidx-1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ curproxy->table.peers.name = strdup(args[myidx++]);
+ }
+ else if (strcmp(args[myidx], "expire") == 0) {
+ myidx++;
+ if (!*(args[myidx])) {
+ Alert("parsing [%s:%d] : stick-table: missing argument after '%s'.\n",
+ file, linenum, args[myidx-1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ err = parse_time_err(args[myidx], &val, TIME_UNIT_MS);
+ if (err) {
+ Alert("parsing [%s:%d] : stick-table: unexpected character '%c' in argument of '%s'.\n",
+ file, linenum, *err, args[myidx-1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ curproxy->table.expire = val;
+ myidx++;
+ }
+ else if (strcmp(args[myidx], "nopurge") == 0) {
+ curproxy->table.nopurge = 1;
+ myidx++;
+ }
+ else if (strcmp(args[myidx], "type") == 0) {
+ myidx++;
+ if (stktable_parse_type(args, &myidx, &curproxy->table.type, &curproxy->table.key_size) != 0) {
+ Alert("parsing [%s:%d] : stick-table: unknown type '%s'.\n",
+ file, linenum, args[myidx]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ /* myidx already points to next arg */
+ }
+ else if (strcmp(args[myidx], "store") == 0) {
+ int type, err;
+ char *cw, *nw, *sa;
+
+ myidx++;
+ nw = args[myidx];
+ while (*nw) {
+ /* the "store" keyword supports a comma-separated list */
+ cw = nw;
+ sa = NULL; /* store arg */
+ while (*nw && *nw != ',') {
+ if (*nw == '(') {
+ *nw = 0;
+ sa = ++nw;
+ while (*nw != ')') {
+ if (!*nw) {
+ Alert("parsing [%s:%d] : %s: missing closing parenthesis after store option '%s'.\n",
+ file, linenum, args[0], cw);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ nw++;
+ }
+ *nw = '\0';
+ }
+ nw++;
+ }
+ if (*nw)
+ *nw++ = '\0';
+ type = stktable_get_data_type(cw);
+ if (type < 0) {
+ Alert("parsing [%s:%d] : %s: unknown store option '%s'.\n",
+ file, linenum, args[0], cw);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ err = stktable_alloc_data_type(&curproxy->table, type, sa);
+ switch (err) {
+ case PE_NONE: break;
+ case PE_EXIST:
+ Warning("parsing [%s:%d]: %s: store option '%s' already enabled, ignored.\n",
+ file, linenum, args[0], cw);
+ err_code |= ERR_WARN;
+ break;
+
+ case PE_ARG_MISSING:
+ Alert("parsing [%s:%d] : %s: missing argument to store option '%s'.\n",
+ file, linenum, args[0], cw);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+
+ case PE_ARG_NOT_USED:
+ Alert("parsing [%s:%d] : %s: unexpected argument to store option '%s'.\n",
+ file, linenum, args[0], cw);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+
+ default:
+ Alert("parsing [%s:%d] : %s: error when processing store option '%s'.\n",
+ file, linenum, args[0], cw);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ myidx++;
+ }
+ else {
+ Alert("parsing [%s:%d] : stick-table: unknown argument '%s'.\n",
+ file, linenum, args[myidx]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+
+ if (!curproxy->table.size) {
+ Alert("parsing [%s:%d] : stick-table: missing size.\n",
+ file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (curproxy->table.type == (unsigned int)-1) {
+ Alert("parsing [%s:%d] : stick-table: missing type.\n",
+ file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
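+
+ /* Illustrative example of a line handled by the parser above (a sketch,
+ * not taken from a real configuration):
+ * stick-table type ip size 200k expire 30m store conn_cur,gpc0
+ * declares an IP-keyed table of about 200k entries, expiring after 30
+ * minutes of inactivity, each storing a concurrent-connection counter
+ * and a general-purpose counter. */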
+ else if (!strcmp(args[0], "stick")) {
+ struct sticking_rule *rule;
+ struct sample_expr *expr;
+ int myidx = 0;
+ const char *name = NULL;
+ int flags;
+
+ if (curproxy == &defproxy) {
+ Alert("parsing [%s:%d] : '%s' not allowed in 'defaults' section.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[0], NULL)) {
+ err_code |= ERR_WARN;
+ goto out;
+ }
+
+ myidx++;
+ if ((strcmp(args[myidx], "store") == 0) ||
+ (strcmp(args[myidx], "store-request") == 0)) {
+ myidx++;
+ flags = STK_IS_STORE;
+ }
+ else if (strcmp(args[myidx], "store-response") == 0) {
+ myidx++;
+ flags = STK_IS_STORE | STK_ON_RSP;
+ }
+ else if (strcmp(args[myidx], "match") == 0) {
+ myidx++;
+ flags = STK_IS_MATCH;
+ }
+ else if (strcmp(args[myidx], "on") == 0) {
+ myidx++;
+ flags = STK_IS_MATCH | STK_IS_STORE;
+ }
+ else {
+ Alert("parsing [%s:%d] : '%s' expects 'on', 'match', 'store', 'store-request' or 'store-response'.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (*(args[myidx]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects a fetch method.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ curproxy->conf.args.ctx = ARGC_STK;
+ expr = sample_parse_expr(args, &myidx, file, linenum, &errmsg, &curproxy->conf.args);
+ if (!expr) {
+ Alert("parsing [%s:%d] : '%s': %s\n", file, linenum, args[0], errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (flags & STK_ON_RSP) {
+ if (!(expr->fetch->val & SMP_VAL_BE_STO_RUL)) {
+ Alert("parsing [%s:%d] : '%s': fetch method '%s' extracts information from '%s', none of which is available for 'store-response'.\n",
+ file, linenum, args[0], expr->fetch->kw, sample_src_names(expr->fetch->use));
+ err_code |= ERR_ALERT | ERR_FATAL;
+ free(expr);
+ goto out;
+ }
+ } else {
+ if (!(expr->fetch->val & SMP_VAL_BE_SET_SRV)) {
+ Alert("parsing [%s:%d] : '%s': fetch method '%s' extracts information from '%s', none of which is available during request.\n",
+ file, linenum, args[0], expr->fetch->kw, sample_src_names(expr->fetch->use));
+ err_code |= ERR_ALERT | ERR_FATAL;
+ free(expr);
+ goto out;
+ }
+ }
+
+ /* check if we need to allocate an hdr_idx struct for HTTP parsing */
+ curproxy->http_needed |= !!(expr->fetch->use & SMP_USE_HTTP_ANY);
+
+ if (strcmp(args[myidx], "table") == 0) {
+ myidx++;
+ name = args[myidx++];
+ }
+
+ if (strcmp(args[myidx], "if") == 0 || strcmp(args[myidx], "unless") == 0) {
+ if ((cond = build_acl_cond(file, linenum, curproxy, (const char **)args + myidx, &errmsg)) == NULL) {
+ Alert("parsing [%s:%d] : '%s': error detected while parsing sticking condition : %s.\n",
+ file, linenum, args[0], errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ free(expr);
+ goto out;
+ }
+ }
+ else if (*(args[myidx])) {
+ Alert("parsing [%s:%d] : '%s': unknown keyword '%s'.\n",
+ file, linenum, args[0], args[myidx]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ free(expr);
+ goto out;
+ }
+ if (flags & STK_ON_RSP)
+ err_code |= warnif_cond_conflicts(cond, SMP_VAL_BE_STO_RUL, file, linenum);
+ else
+ err_code |= warnif_cond_conflicts(cond, SMP_VAL_BE_SET_SRV, file, linenum);
+
+ rule = (struct sticking_rule *)calloc(1, sizeof(*rule));
+ rule->cond = cond;
+ rule->expr = expr;
+ rule->flags = flags;
+ rule->table.name = name ? strdup(name) : NULL;
+ LIST_INIT(&rule->list);
+ if (flags & STK_ON_RSP)
+ LIST_ADDQ(&curproxy->storersp_rules, &rule->list);
+ else
+ LIST_ADDQ(&curproxy->sticking_rules, &rule->list);
+ }
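+
+ /* Illustrative examples of rules accepted by the parser above (sketch;
+ * "t1" and "is_local" are hypothetical names):
+ * stick on src -> match + store the source address
+ * stick match src table t1 if !is_local -> match only, in table 't1',
+ * guarded by an ACL condition
+ * stick store-request src -> store only, on the request path */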
+ else if (!strcmp(args[0], "stats")) {
+ if (curproxy != &defproxy && curproxy->uri_auth == defproxy.uri_auth)
+ curproxy->uri_auth = NULL; /* we must detach from the default config */
+
+ if (!*args[1]) {
+ goto stats_error_parsing;
+ } else if (!strcmp(args[1], "admin")) {
+ struct stats_admin_rule *rule;
+
+ if (curproxy == &defproxy) {
+ Alert("parsing [%s:%d]: '%s %s' not allowed in 'defaults' section.\n", file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (!stats_check_init_uri_auth(&curproxy->uri_auth)) {
+ Alert("parsing [%s:%d]: out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ if (strcmp(args[2], "if") != 0 && strcmp(args[2], "unless") != 0) {
+ Alert("parsing [%s:%d] : '%s %s' requires either 'if' or 'unless' followed by a condition.\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ if ((cond = build_acl_cond(file, linenum, curproxy, (const char **)args + 2, &errmsg)) == NULL) {
+ Alert("parsing [%s:%d] : error detected while parsing a '%s %s' rule : %s.\n",
+ file, linenum, args[0], args[1], errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ err_code |= warnif_cond_conflicts(cond,
+ (curproxy->cap & PR_CAP_FE) ? SMP_VAL_FE_HRQ_HDR : SMP_VAL_BE_HRQ_HDR,
+ file, linenum);
+
+ rule = (struct stats_admin_rule *)calloc(1, sizeof(*rule));
+ rule->cond = cond;
+ LIST_INIT(&rule->list);
+ LIST_ADDQ(&curproxy->uri_auth->admin_rules, &rule->list);
+ } else if (!strcmp(args[1], "uri")) {
+ if (*(args[2]) == 0) {
+ Alert("parsing [%s:%d] : 'uri' needs a URI prefix.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ } else if (!stats_set_uri(&curproxy->uri_auth, args[2])) {
+ Alert("parsing [%s:%d] : out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+ } else if (!strcmp(args[1], "realm")) {
+ if (*(args[2]) == 0) {
+ Alert("parsing [%s:%d] : 'realm' needs a realm name.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ } else if (!stats_set_realm(&curproxy->uri_auth, args[2])) {
+ Alert("parsing [%s:%d] : out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+ } else if (!strcmp(args[1], "refresh")) {
+ unsigned interval;
+
+ err = parse_time_err(args[2], &interval, TIME_UNIT_S);
+ if (err) {
+ Alert("parsing [%s:%d] : unexpected character '%c' in stats refresh interval.\n",
+ file, linenum, *err);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ } else if (!stats_set_refresh(&curproxy->uri_auth, interval)) {
+ Alert("parsing [%s:%d] : out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+ } else if (!strcmp(args[1], "http-request")) { /* request access control: allow/deny/auth */
+ struct act_rule *rule;
+
+ if (curproxy == &defproxy) {
+ Alert("parsing [%s:%d]: '%s %s' not allowed in 'defaults' section.\n", file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (!stats_check_init_uri_auth(&curproxy->uri_auth)) {
+ Alert("parsing [%s:%d]: out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ if (!LIST_ISEMPTY(&curproxy->uri_auth->http_req_rules) &&
+ !LIST_PREV(&curproxy->uri_auth->http_req_rules, struct act_rule *, list)->cond) {
+ Warning("parsing [%s:%d]: previous '%s' action has no condition attached, further entries are NOOP.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_WARN;
+ }
+
+ rule = parse_http_req_cond((const char **)args + 2, file, linenum, curproxy);
+
+ if (!rule) {
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ err_code |= warnif_cond_conflicts(rule->cond,
+ (curproxy->cap & PR_CAP_FE) ? SMP_VAL_FE_HRQ_HDR : SMP_VAL_BE_HRQ_HDR,
+ file, linenum);
+ LIST_ADDQ(&curproxy->uri_auth->http_req_rules, &rule->list);
+
+ } else if (!strcmp(args[1], "auth")) {
+ if (*(args[2]) == 0) {
+ Alert("parsing [%s:%d] : 'auth' needs a user:password account.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ } else if (!stats_add_auth(&curproxy->uri_auth, args[2])) {
+ Alert("parsing [%s:%d] : out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+ } else if (!strcmp(args[1], "scope")) {
+ if (*(args[2]) == 0) {
+ Alert("parsing [%s:%d] : 'scope' needs a proxy name.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ } else if (!stats_add_scope(&curproxy->uri_auth, args[2])) {
+ Alert("parsing [%s:%d] : out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+ } else if (!strcmp(args[1], "enable")) {
+ if (!stats_check_init_uri_auth(&curproxy->uri_auth)) {
+ Alert("parsing [%s:%d] : out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+ } else if (!strcmp(args[1], "hide-version")) {
+ if (!stats_set_flag(&curproxy->uri_auth, ST_HIDEVER)) {
+ Alert("parsing [%s:%d] : out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+ } else if (!strcmp(args[1], "show-legends")) {
+ if (!stats_set_flag(&curproxy->uri_auth, ST_SHLGNDS)) {
+ Alert("parsing [%s:%d]: out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+ } else if (!strcmp(args[1], "show-node")) {
+
+ if (*args[2]) {
+ int i;
+ char c;
+
+ for (i=0; args[2][i]; i++) {
+ c = args[2][i];
+ if (!isupper((unsigned char)c) && !islower((unsigned char)c) &&
+ !isdigit((unsigned char)c) && c != '_' && c != '-' && c != '.')
+ break;
+ }
+
+ if (!i || args[2][i]) {
+ Alert("parsing [%s:%d]: '%s %s' invalid node name - should be a string "
+ "with digits (0-9), letters (A-Z, a-z), hyphens (-), dots (.) or underscores (_).\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+
+ if (!stats_set_node(&curproxy->uri_auth, args[2])) {
+ Alert("parsing [%s:%d]: out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+ } else if (!strcmp(args[1], "show-desc")) {
+ char *desc = NULL;
+
+ if (*args[2]) {
+ int i, len=0;
+ char *d;
+
+ for (i = 2; *args[i]; i++)
+ len += strlen(args[i]) + 1;
+
+ desc = d = (char *)calloc(1, len);
+
+ d += snprintf(d, desc + len - d, "%s", args[2]);
+ for (i = 3; *args[i]; i++)
+ d += snprintf(d, desc + len - d, " %s", args[i]);
+ }
+
+ if (!*args[2] && !global.desc)
+ Warning("parsing [%s:%d]: '%s' requires a parameter or 'desc' to be set in the global section.\n",
+ file, linenum, args[1]);
+ else {
+ if (!stats_set_desc(&curproxy->uri_auth, desc)) {
+ free(desc);
+ Alert("parsing [%s:%d]: out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+ free(desc);
+ }
+ } else {
+stats_error_parsing:
+Alert("parsing [%s:%d]: %s '%s', expects 'admin', 'uri', 'realm', 'auth', 'scope', 'refresh', 'enable', 'hide-version', 'http-request', 'show-node', 'show-desc' or 'show-legends'.\n",
+ file, linenum, *args[1]?"unknown stats parameter":"missing keyword in", args[*args[1]?1:0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ else if (!strcmp(args[0], "option")) {
+ int optnum;
+
+ if (*(args[1]) == '\0') {
+ Alert("parsing [%s:%d]: '%s' expects an option name.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ for (optnum = 0; cfg_opts[optnum].name; optnum++) {
+ if (!strcmp(args[1], cfg_opts[optnum].name)) {
+ if (cfg_opts[optnum].cap == PR_CAP_NONE) {
+ Alert("parsing [%s:%d]: option '%s' is not supported due to build options.\n",
+ file, linenum, cfg_opts[optnum].name);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ if (alertif_too_many_args_idx(0, 1, file, linenum, args, &err_code))
+ goto out;
+
+ if (warnifnotcap(curproxy, cfg_opts[optnum].cap, file, linenum, args[1], NULL)) {
+ err_code |= ERR_WARN;
+ goto out;
+ }
+
+ curproxy->no_options &= ~cfg_opts[optnum].val;
+ curproxy->options &= ~cfg_opts[optnum].val;
+
+ switch (kwm) {
+ case KWM_STD:
+ curproxy->options |= cfg_opts[optnum].val;
+ break;
+ case KWM_NO:
+ curproxy->no_options |= cfg_opts[optnum].val;
+ break;
+ case KWM_DEF: /* already cleared */
+ break;
+ }
+
+ goto out;
+ }
+ }
+
+ for (optnum = 0; cfg_opts2[optnum].name; optnum++) {
+ if (!strcmp(args[1], cfg_opts2[optnum].name)) {
+ if (cfg_opts2[optnum].cap == PR_CAP_NONE) {
+ Alert("parsing [%s:%d]: option '%s' is not supported due to build options.\n",
+ file, linenum, cfg_opts2[optnum].name);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ if (alertif_too_many_args_idx(0, 1, file, linenum, args, &err_code))
+ goto out;
+ if (warnifnotcap(curproxy, cfg_opts2[optnum].cap, file, linenum, args[1], NULL)) {
+ err_code |= ERR_WARN;
+ goto out;
+ }
+
+ curproxy->no_options2 &= ~cfg_opts2[optnum].val;
+ curproxy->options2 &= ~cfg_opts2[optnum].val;
+
+ switch (kwm) {
+ case KWM_STD:
+ curproxy->options2 |= cfg_opts2[optnum].val;
+ break;
+ case KWM_NO:
+ curproxy->no_options2 |= cfg_opts2[optnum].val;
+ break;
+ case KWM_DEF: /* already cleared */
+ break;
+ }
+ goto out;
+ }
+ }
+
+ /* HTTP options override each other. They can be cancelled using
+ * "no option xxx" which only switches to default mode if the mode
+ * was this one (useful for cancelling options set in defaults
+ * sections).
+ */
+ if (strcmp(args[1], "httpclose") == 0) {
+ if (alertif_too_many_args_idx(0, 1, file, linenum, args, &err_code))
+ goto out;
+ if (kwm == KWM_STD) {
+ curproxy->options &= ~PR_O_HTTP_MODE;
+ curproxy->options |= PR_O_HTTP_PCL;
+ goto out;
+ }
+ else if (kwm == KWM_NO) {
+ if ((curproxy->options & PR_O_HTTP_MODE) == PR_O_HTTP_PCL)
+ curproxy->options &= ~PR_O_HTTP_MODE;
+ goto out;
+ }
+ }
+ else if (strcmp(args[1], "forceclose") == 0) {
+ if (alertif_too_many_args_idx(0, 1, file, linenum, args, &err_code))
+ goto out;
+ if (kwm == KWM_STD) {
+ curproxy->options &= ~PR_O_HTTP_MODE;
+ curproxy->options |= PR_O_HTTP_FCL;
+ goto out;
+ }
+ else if (kwm == KWM_NO) {
+ if ((curproxy->options & PR_O_HTTP_MODE) == PR_O_HTTP_FCL)
+ curproxy->options &= ~PR_O_HTTP_MODE;
+ goto out;
+ }
+ }
+ else if (strcmp(args[1], "http-server-close") == 0) {
+ if (alertif_too_many_args_idx(0, 1, file, linenum, args, &err_code))
+ goto out;
+ if (kwm == KWM_STD) {
+ curproxy->options &= ~PR_O_HTTP_MODE;
+ curproxy->options |= PR_O_HTTP_SCL;
+ goto out;
+ }
+ else if (kwm == KWM_NO) {
+ if ((curproxy->options & PR_O_HTTP_MODE) == PR_O_HTTP_SCL)
+ curproxy->options &= ~PR_O_HTTP_MODE;
+ goto out;
+ }
+ }
+ else if (strcmp(args[1], "http-keep-alive") == 0) {
+ if (alertif_too_many_args_idx(0, 1, file, linenum, args, &err_code))
+ goto out;
+ if (kwm == KWM_STD) {
+ curproxy->options &= ~PR_O_HTTP_MODE;
+ curproxy->options |= PR_O_HTTP_KAL;
+ goto out;
+ }
+ else if (kwm == KWM_NO) {
+ if ((curproxy->options & PR_O_HTTP_MODE) == PR_O_HTTP_KAL)
+ curproxy->options &= ~PR_O_HTTP_MODE;
+ goto out;
+ }
+ }
+ else if (strcmp(args[1], "http-tunnel") == 0) {
+ if (alertif_too_many_args_idx(0, 1, file, linenum, args, &err_code))
+ goto out;
+ if (kwm == KWM_STD) {
+ curproxy->options &= ~PR_O_HTTP_MODE;
+ curproxy->options |= PR_O_HTTP_TUN;
+ goto out;
+ }
+ else if (kwm == KWM_NO) {
+ if ((curproxy->options & PR_O_HTTP_MODE) == PR_O_HTTP_TUN)
+ curproxy->options &= ~PR_O_HTTP_MODE;
+ goto out;
+ }
+ }
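+
+ /* The HTTP mode options above are mutually exclusive: setting one first
+ * clears PR_O_HTTP_MODE, and "no option <name>" only resets the mode
+ * when it is currently the named one. E.g. (illustrative sketch):
+ * option http-server-close
+ * no option http-server-close -> back to the default mode */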
+
+ /* Redispatch can take an integer argument that controls when the
+ * redispatch occurs. All values are relative to the retries option.
+ * This can be cancelled using "no option xxx".
+ */
+ if (strcmp(args[1], "redispatch") == 0) {
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[1], NULL)) {
+ err_code |= ERR_WARN;
+ goto out;
+ }
+
+ curproxy->no_options &= ~PR_O_REDISP;
+ curproxy->options &= ~PR_O_REDISP;
+
+ switch (kwm) {
+ case KWM_STD:
+ curproxy->options |= PR_O_REDISP;
+ curproxy->redispatch_after = -1;
+ if(*args[2]) {
+ curproxy->redispatch_after = atol(args[2]);
+ }
+ break;
+ case KWM_NO:
+ curproxy->no_options |= PR_O_REDISP;
+ curproxy->redispatch_after = 0;
+ break;
+ case KWM_DEF: /* already cleared */
+ break;
+ }
+ goto out;
+ }
+
+ if (kwm != KWM_STD) {
+ Alert("parsing [%s:%d]: negation/default is not supported for option '%s'.\n",
+ file, linenum, args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (!strcmp(args[1], "httplog")) {
+ char *logformat;
+ /* generate a complete HTTP log */
+ logformat = default_http_log_format;
+ if (*(args[2]) != '\0') {
+ if (!strcmp(args[2], "clf")) {
+ curproxy->options2 |= PR_O2_CLFLOG;
+ logformat = clf_http_log_format;
+ } else {
+ Alert("parsing [%s:%d] : keyword '%s' only supports option 'clf'.\n", file, linenum, args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ if (alertif_too_many_args_idx(1, 1, file, linenum, args, &err_code))
+ goto out;
+ }
+ if (curproxy->conf.logformat_string != default_http_log_format &&
+ curproxy->conf.logformat_string != default_tcp_log_format &&
+ curproxy->conf.logformat_string != clf_http_log_format)
+ free(curproxy->conf.logformat_string);
+ curproxy->conf.logformat_string = logformat;
+
+ free(curproxy->conf.lfs_file);
+ curproxy->conf.lfs_file = strdup(curproxy->conf.args.file);
+ curproxy->conf.lfs_line = curproxy->conf.args.line;
+ }
+ else if (!strcmp(args[1], "tcplog")) {
+ /* generate a detailed TCP log */
+ if (curproxy->conf.logformat_string != default_http_log_format &&
+ curproxy->conf.logformat_string != default_tcp_log_format &&
+ curproxy->conf.logformat_string != clf_http_log_format)
+ free(curproxy->conf.logformat_string);
+ curproxy->conf.logformat_string = default_tcp_log_format;
+
+ free(curproxy->conf.lfs_file);
+ curproxy->conf.lfs_file = strdup(curproxy->conf.args.file);
+ curproxy->conf.lfs_line = curproxy->conf.args.line;
+
+ if (alertif_too_many_args_idx(0, 1, file, linenum, args, &err_code))
+ goto out;
+ }
+ else if (!strcmp(args[1], "tcpka")) {
+ /* enable TCP keep-alives on client and server streams */
+ if (warnifnotcap(curproxy, PR_CAP_BE | PR_CAP_FE, file, linenum, args[1], NULL))
+ err_code |= ERR_WARN;
+
+ if (alertif_too_many_args_idx(0, 1, file, linenum, args, &err_code))
+ goto out;
+
+ if (curproxy->cap & PR_CAP_FE)
+ curproxy->options |= PR_O_TCP_CLI_KA;
+ if (curproxy->cap & PR_CAP_BE)
+ curproxy->options |= PR_O_TCP_SRV_KA;
+ }
+ else if (!strcmp(args[1], "httpchk")) {
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[1], NULL))
+ err_code |= ERR_WARN;
+
+ /* use HTTP request to check servers' health */
+ free(curproxy->check_req);
+ curproxy->check_req = NULL;
+ curproxy->options2 &= ~PR_O2_CHK_ANY;
+ curproxy->options2 |= PR_O2_HTTP_CHK;
+ if (!*args[2]) { /* no argument */
+ curproxy->check_req = strdup(DEF_CHECK_REQ); /* default request */
+ curproxy->check_len = strlen(DEF_CHECK_REQ);
+ } else if (!*args[3]) { /* one argument : URI */
+ int reqlen = strlen(args[2]) + strlen("OPTIONS HTTP/1.0\r\n") + 1;
+ curproxy->check_req = (char *)malloc(reqlen);
+ curproxy->check_len = snprintf(curproxy->check_req, reqlen,
+ "OPTIONS %s HTTP/1.0\r\n", args[2]); /* URI to use */
+ } else { /* more arguments : METHOD URI [HTTP_VER] */
+ int reqlen = strlen(args[2]) + strlen(args[3]) + 3 + strlen("\r\n");
+ if (*args[4])
+ reqlen += strlen(args[4]);
+ else
+ reqlen += strlen("HTTP/1.0");
+
+ curproxy->check_req = (char *)malloc(reqlen);
+ curproxy->check_len = snprintf(curproxy->check_req, reqlen,
+ "%s %s %s\r\n", args[2], args[3], *args[4]?args[4]:"HTTP/1.0");
+ }
+ if (alertif_too_many_args_idx(3, 1, file, linenum, args, &err_code))
+ goto out;
+ }
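+
+ /* Given the three forms handled above, the generated check request is
+ * (illustrative):
+ * option httpchk -> DEF_CHECK_REQ (default)
+ * option httpchk /health -> "OPTIONS /health HTTP/1.0\r\n"
+ * option httpchk GET /health HTTP/1.1 -> "GET /health HTTP/1.1\r\n" */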
+ else if (!strcmp(args[1], "ssl-hello-chk")) {
+ /* use SSLv3 CLIENT HELLO to check servers' health */
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[1], NULL))
+ err_code |= ERR_WARN;
+
+ free(curproxy->check_req);
+ curproxy->check_req = NULL;
+ curproxy->options2 &= ~PR_O2_CHK_ANY;
+ curproxy->options2 |= PR_O2_SSL3_CHK;
+
+ if (alertif_too_many_args_idx(0, 1, file, linenum, args, &err_code))
+ goto out;
+ }
+ else if (!strcmp(args[1], "smtpchk")) {
+ /* use SMTP request to check servers' health */
+ free(curproxy->check_req);
+ curproxy->check_req = NULL;
+ curproxy->options2 &= ~PR_O2_CHK_ANY;
+ curproxy->options2 |= PR_O2_SMTP_CHK;
+
+ if (!*args[2] || !*args[3]) { /* no argument or incomplete EHLO host */
+ curproxy->check_req = strdup(DEF_SMTP_CHECK_REQ); /* default request */
+ curproxy->check_len = strlen(DEF_SMTP_CHECK_REQ);
+ } else { /* ESMTP EHLO, or SMTP HELO, and a hostname */
+ if (!strcmp(args[2], "EHLO") || !strcmp(args[2], "HELO")) {
+ int reqlen = strlen(args[2]) + strlen(args[3]) + strlen(" \r\n") + 1;
+ curproxy->check_req = (char *)malloc(reqlen);
+ curproxy->check_len = snprintf(curproxy->check_req, reqlen,
+ "%s %s\r\n", args[2], args[3]); /* HELO hostname */
+ } else {
+ /* Fall back to the default request. This could be expanded to
+ * support other commands, though anything other than EHLO or HELO
+ * is unlikely to be useful here. */
+ curproxy->check_req = strdup(DEF_SMTP_CHECK_REQ); /* default request */
+ curproxy->check_len = strlen(DEF_SMTP_CHECK_REQ);
+ }
+ }
+ if (alertif_too_many_args_idx(2, 1, file, linenum, args, &err_code))
+ goto out;
+ }
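+
+ /* Illustrative: "option smtpchk EHLO mail.example.com" produces the
+ * check string "EHLO mail.example.com\r\n"; a missing or unrecognized
+ * argument falls back to DEF_SMTP_CHECK_REQ. */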
+ else if (!strcmp(args[1], "pgsql-check")) {
+ /* use PostgreSQL request to check servers' health */
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[1], NULL))
+ err_code |= ERR_WARN;
+
+ free(curproxy->check_req);
+ curproxy->check_req = NULL;
+ curproxy->options2 &= ~PR_O2_CHK_ANY;
+ curproxy->options2 |= PR_O2_PGSQL_CHK;
+
+ if (*(args[2])) {
+ int cur_arg = 2;
+
+ while (*(args[cur_arg])) {
+ if (strcmp(args[cur_arg], "user") == 0) {
+ char * packet;
+ uint32_t packet_len;
+ uint32_t pv;
+
+ /* suboption user - needs an additional argument */
+ if (*(args[cur_arg+1]) == 0) {
+ Alert("parsing [%s:%d] : '%s %s %s' expects <username> as argument.\n",
+ file, linenum, args[0], args[1], args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ /* uint32_t + uint32_t + strlen("user")+1 + strlen(username)+1 + 1 */
+ packet_len = 4 + 4 + 5 + strlen(args[cur_arg + 1])+1 +1;
+ pv = htonl(0x30000); /* protocol version 3.0 */
+
+ packet = (char*) calloc(1, packet_len);
+
+ memcpy(packet + 4, &pv, 4);
+
+ /* copy "user" */
+ memcpy(packet + 8, "user", 4);
+
+ /* copy username */
+ memcpy(packet + 13, args[cur_arg+1], strlen(args[cur_arg+1]));
+
+ free(curproxy->check_req);
+ curproxy->check_req = packet;
+ curproxy->check_len = packet_len;
+
+ packet_len = htonl(packet_len);
+ memcpy(packet, &packet_len, 4);
+ cur_arg += 2;
+ } else {
+ /* unknown suboption - catchall */
+ Alert("parsing [%s:%d] : '%s %s' only supports optional values: 'user'.\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ } /* end while loop */
+ }
+ if (alertif_too_many_args_idx(2, 1, file, linenum, args, &err_code))
+ goto out;
+ }
+
+ else if (!strcmp(args[1], "redis-check")) {
+ /* use REDIS PING request to check servers' health */
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[1], NULL))
+ err_code |= ERR_WARN;
+
+ free(curproxy->check_req);
+ curproxy->check_req = NULL;
+ curproxy->options2 &= ~PR_O2_CHK_ANY;
+ curproxy->options2 |= PR_O2_REDIS_CHK;
+
+ curproxy->check_req = (char *) malloc(sizeof(DEF_REDIS_CHECK_REQ) - 1);
+ memcpy(curproxy->check_req, DEF_REDIS_CHECK_REQ, sizeof(DEF_REDIS_CHECK_REQ) - 1);
+ curproxy->check_len = sizeof(DEF_REDIS_CHECK_REQ) - 1;
+
+ if (alertif_too_many_args_idx(0, 1, file, linenum, args, &err_code))
+ goto out;
+ }
+
+ else if (!strcmp(args[1], "mysql-check")) {
+ /* use MYSQL request to check servers' health */
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[1], NULL))
+ err_code |= ERR_WARN;
+
+ free(curproxy->check_req);
+ curproxy->check_req = NULL;
+ curproxy->options2 &= ~PR_O2_CHK_ANY;
+ curproxy->options2 |= PR_O2_MYSQL_CHK;
+
+ /* This is an example of a MySQL >=4.0 client Authentication packet kindly provided by Cyril Bonte.
+ * const char mysql40_client_auth_pkt[] = {
+ * "\x0e\x00\x00" // packet length
+ * "\x01" // packet number
+ * "\x00\x00" // client capabilities
+ * "\x00\x00\x01" // max packet
+ * "haproxy\x00" // username (null terminated string)
+ * "\x00" // filler (always 0x00)
+ * "\x01\x00\x00" // packet length
+ * "\x00" // packet number
+ * "\x01" // COM_QUIT command
+ * };
+ */
+
+ /* This is an example of a MySQL >=4.1 client Authentication packet provided by Nenad Merdanovic.
+ * const char mysql41_client_auth_pkt[] = {
+ * "\x0e\x00\x00" // packet length
+ * "\x01" // packet number
+ * "\x00\x00\x00\x00" // client capabilities
+ * "\x00\x00\x00\x01" // max packet
+ * "\x21" // character set (UTF-8)
+ * char[23] // All zeroes
+ * "haproxy\x00" // username (null terminated string)
+ * "\x00" // filler (always 0x00)
+ * "\x01\x00\x00" // packet length
+ * "\x00" // packet number
+ * "\x01" // COM_QUIT command
+ * };
+ */
+
+
+ if (*(args[2])) {
+ int cur_arg = 2;
+
+ while (*(args[cur_arg])) {
+ if (strcmp(args[cur_arg], "user") == 0) {
+ char *mysqluser;
+ int packetlen, reqlen, userlen;
+
+ /* suboption user - needs an additional argument */
+ if (*(args[cur_arg+1]) == 0) {
+ Alert("parsing [%s:%d] : '%s %s %s' expects <username> as argument.\n",
+ file, linenum, args[0], args[1], args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ mysqluser = args[cur_arg + 1];
+ userlen = strlen(mysqluser);
+
+ if (*(args[cur_arg+2])) {
+ if (!strcmp(args[cur_arg+2], "post-41")) {
+ packetlen = userlen + 7 + 27;
+ reqlen = packetlen + 9;
+
+ free(curproxy->check_req);
+ curproxy->check_req = (char *)calloc(1, reqlen);
+ curproxy->check_len = reqlen;
+
+ snprintf(curproxy->check_req, 4, "%c%c%c",
+ ((unsigned char) packetlen & 0xff),
+ ((unsigned char) (packetlen >> 8) & 0xff),
+ ((unsigned char) (packetlen >> 16) & 0xff));
+
+ curproxy->check_req[3] = 1;
+ curproxy->check_req[5] = 130;
+ curproxy->check_req[11] = 1;
+ curproxy->check_req[12] = 33;
+ memcpy(&curproxy->check_req[36], mysqluser, userlen);
+ curproxy->check_req[36 + userlen + 1 + 1] = 1;
+ curproxy->check_req[36 + userlen + 1 + 1 + 4] = 1;
+ cur_arg += 3;
+ } else {
+ Alert("parsing [%s:%d] : unsupported option '%s', only 'post-41' is supported.\n", file, linenum, args[cur_arg+2]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ } else {
+ packetlen = userlen + 7;
+ reqlen = packetlen + 9;
+
+ free(curproxy->check_req);
+ curproxy->check_req = (char *)calloc(1, reqlen);
+ curproxy->check_len = reqlen;
+
+ snprintf(curproxy->check_req, 4, "%c%c%c",
+ ((unsigned char) packetlen & 0xff),
+ ((unsigned char) (packetlen >> 8) & 0xff),
+ ((unsigned char) (packetlen >> 16) & 0xff));
+
+ curproxy->check_req[3] = 1;
+ curproxy->check_req[5] = 128;
+ curproxy->check_req[8] = 1;
+ memcpy(&curproxy->check_req[9], mysqluser, userlen);
+ curproxy->check_req[9 + userlen + 1 + 1] = 1;
+ curproxy->check_req[9 + userlen + 1 + 1 + 4] = 1;
+ cur_arg += 2;
+ }
+ } else {
+ /* unknown suboption - catchall */
+ Alert("parsing [%s:%d] : '%s %s' only supports optional values: 'user'.\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ } /* end while loop */
+ }
+ }
+ else if (!strcmp(args[1], "ldap-check")) {
+ /* use LDAP request to check servers' health */
+ free(curproxy->check_req);
+ curproxy->check_req = NULL;
+ curproxy->options2 &= ~PR_O2_CHK_ANY;
+ curproxy->options2 |= PR_O2_LDAP_CHK;
+
+ curproxy->check_req = (char *) malloc(sizeof(DEF_LDAP_CHECK_REQ) - 1);
+ memcpy(curproxy->check_req, DEF_LDAP_CHECK_REQ, sizeof(DEF_LDAP_CHECK_REQ) - 1);
+ curproxy->check_len = sizeof(DEF_LDAP_CHECK_REQ) - 1;
+ if (alertif_too_many_args_idx(0, 1, file, linenum, args, &err_code))
+ goto out;
+ }
+ else if (!strcmp(args[1], "tcp-check")) {
+ /* use raw TCPCHK send/expect to check servers' health */
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[1], NULL))
+ err_code |= ERR_WARN;
+
+ free(curproxy->check_req);
+ curproxy->check_req = NULL;
+ curproxy->options2 &= ~PR_O2_CHK_ANY;
+ curproxy->options2 |= PR_O2_TCPCHK_CHK;
+ if (alertif_too_many_args_idx(0, 1, file, linenum, args, &err_code))
+ goto out;
+ }
+ else if (!strcmp(args[1], "external-check")) {
+ /* execute an external command to check servers' health */
+ free(curproxy->check_req);
+ curproxy->check_req = NULL;
+ curproxy->options2 &= ~PR_O2_CHK_ANY;
+ curproxy->options2 |= PR_O2_EXT_CHK;
+ if (alertif_too_many_args_idx(0, 1, file, linenum, args, &err_code))
+ goto out;
+ }
+ else if (!strcmp(args[1], "forwardfor")) {
+ int cur_arg;
+
+ /* insert the X-Forwarded-For header, except for the addresses
+ * covered by the 'except' argument.
+ * Set default options (i.e. bitfield, header name, etc.)
+ */
+
+ curproxy->options |= PR_O_FWDFOR | PR_O_FF_ALWAYS;
+
+ free(curproxy->fwdfor_hdr_name);
+ curproxy->fwdfor_hdr_name = strdup(DEF_XFORWARDFOR_HDR);
+ curproxy->fwdfor_hdr_len = strlen(DEF_XFORWARDFOR_HDR);
+
+ /* loop to go through arguments - start at 2, since 0+1 = "option" "forwardfor" */
+ cur_arg = 2;
+ while (*(args[cur_arg])) {
+ if (!strcmp(args[cur_arg], "except")) {
+ /* suboption except - needs additional argument for it */
+ if (!*(args[cur_arg+1]) || !str2net(args[cur_arg+1], 1, &curproxy->except_net, &curproxy->except_mask)) {
+ Alert("parsing [%s:%d] : '%s %s %s' expects <address>[/mask] as argument.\n",
+ file, linenum, args[0], args[1], args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ /* flush useless bits */
+ curproxy->except_net.s_addr &= curproxy->except_mask.s_addr;
+ cur_arg += 2;
+ } else if (!strcmp(args[cur_arg], "header")) {
+ /* suboption header - needs additional argument for it */
+ if (*(args[cur_arg+1]) == 0) {
+ Alert("parsing [%s:%d] : '%s %s %s' expects <header_name> as argument.\n",
+ file, linenum, args[0], args[1], args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ free(curproxy->fwdfor_hdr_name);
+ curproxy->fwdfor_hdr_name = strdup(args[cur_arg+1]);
+ curproxy->fwdfor_hdr_len = strlen(curproxy->fwdfor_hdr_name);
+ cur_arg += 2;
+ } else if (!strcmp(args[cur_arg], "if-none")) {
+ curproxy->options &= ~PR_O_FF_ALWAYS;
+ cur_arg += 1;
+ } else {
+ /* unknown suboption - catchall */
+ Alert("parsing [%s:%d] : '%s %s' only supports optional values: 'except', 'header' and 'if-none'.\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ } /* end while loop */
+ }
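+
+ /* Illustrative combination of the suboptions parsed above (sketch;
+ * "X-Client-IP" is a hypothetical header name):
+ * option forwardfor except 127.0.0.0/8 header X-Client-IP if-none
+ * adds the header only when not already present, and skips
+ * connections originating from 127.0.0.0/8. */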
+ else if (!strcmp(args[1], "originalto")) {
+ int cur_arg;
+
+ /* insert the X-Original-To header, except for the addresses covered
+ * by the "except" network. Set the default options (i.e. option
+ * bitfield, header name, etc).
+ */
+
+ curproxy->options |= PR_O_ORGTO;
+
+ free(curproxy->orgto_hdr_name);
+ curproxy->orgto_hdr_name = strdup(DEF_XORIGINALTO_HDR);
+ curproxy->orgto_hdr_len = strlen(DEF_XORIGINALTO_HDR);
+
+ /* loop to go through arguments - start at 2, since 0+1 = "option" "originalto" */
+ cur_arg = 2;
+ while (*(args[cur_arg])) {
+ if (!strcmp(args[cur_arg], "except")) {
+ /* suboption except - needs additional argument for it */
+ if (!*(args[cur_arg+1]) || !str2net(args[cur_arg+1], 1, &curproxy->except_to, &curproxy->except_mask_to)) {
+ Alert("parsing [%s:%d] : '%s %s %s' expects <address>[/mask] as argument.\n",
+ file, linenum, args[0], args[1], args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ /* flush useless bits */
+ curproxy->except_to.s_addr &= curproxy->except_mask_to.s_addr;
+ cur_arg += 2;
+ } else if (!strcmp(args[cur_arg], "header")) {
+ /* suboption header - needs additional argument for it */
+ if (*(args[cur_arg+1]) == 0) {
+ Alert("parsing [%s:%d] : '%s %s %s' expects <header_name> as argument.\n",
+ file, linenum, args[0], args[1], args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ free(curproxy->orgto_hdr_name);
+ curproxy->orgto_hdr_name = strdup(args[cur_arg+1]);
+ curproxy->orgto_hdr_len = strlen(curproxy->orgto_hdr_name);
+ cur_arg += 2;
+ } else {
+ /* unknown suboption - catchall */
+ Alert("parsing [%s:%d] : '%s %s' only supports optional values: 'except' and 'header'.\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ } /* end while loop */
+ }
+ else {
+ Alert("parsing [%s:%d] : unknown option '%s'.\n", file, linenum, args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ goto out;
+ }
+ else if (!strcmp(args[0], "default_backend")) {
+ if (warnifnotcap(curproxy, PR_CAP_FE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects a backend name.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ free(curproxy->defbe.name);
+ curproxy->defbe.name = strdup(args[1]);
+
+ if (alertif_too_many_args_idx(1, 0, file, linenum, args, &err_code))
+ goto out;
+ }
+ else if (!strcmp(args[0], "redispatch") || !strcmp(args[0], "redisp")) {
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (!already_warned(WARN_REDISPATCH_DEPRECATED))
+ Warning("parsing [%s:%d]: keyword '%s' is deprecated in favor of 'option redispatch', and will not be supported by future versions.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_WARN;
+ /* enable reconnections to dispatch */
+ curproxy->options |= PR_O_REDISP;
+
+ if (alertif_too_many_args_idx(1, 0, file, linenum, args, &err_code))
+ goto out;
+ }
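+ /* Illustrative configuration line handled by the "http-reuse" branch
+ * below:
+ *
+ *   http-reuse safe
+ */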
+ else if (!strcmp(args[0], "http-reuse")) {
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (strcmp(args[1], "never") == 0) {
+ /* never share idle server connections between sessions */
+ curproxy->options &= ~PR_O_REUSE_MASK;
+ curproxy->options |= PR_O_REUSE_NEVR;
+ if (alertif_too_many_args_idx(0, 1, file, linenum, args, &err_code))
+ goto out;
+ }
+ else if (strcmp(args[1], "safe") == 0) {
+ /* reuse idle server connections only when it is known to be safe */
+ curproxy->options &= ~PR_O_REUSE_MASK;
+ curproxy->options |= PR_O_REUSE_SAFE;
+ if (alertif_too_many_args_idx(0, 1, file, linenum, args, &err_code))
+ goto out;
+ }
+ else if (strcmp(args[1], "aggressive") == 0) {
+ curproxy->options &= ~PR_O_REUSE_MASK;
+ curproxy->options |= PR_O_REUSE_AGGR;
+ if (alertif_too_many_args_idx(0, 1, file, linenum, args, &err_code))
+ goto out;
+ }
+ else if (strcmp(args[1], "always") == 0) {
+ /* always try to reuse an available idle server connection */
+ curproxy->options &= ~PR_O_REUSE_MASK;
+ curproxy->options |= PR_O_REUSE_ALWS;
+ if (alertif_too_many_args_idx(0, 1, file, linenum, args, &err_code))
+ goto out;
+ }
+ else {
+ Alert("parsing [%s:%d] : '%s' only supports 'never', 'safe', 'aggressive', 'always'.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
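+ /* Illustrative configuration lines handled by the "http-check" branch
+ * below:
+ *
+ *   http-check disable-on-404
+ *   http-check expect ! rstatus ^5..
+ */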
+ else if (!strcmp(args[0], "http-check")) {
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (strcmp(args[1], "disable-on-404") == 0) {
+ /* enable a graceful server shutdown on an HTTP 404 response */
+ curproxy->options |= PR_O_DISABLE404;
+ if (alertif_too_many_args_idx(0, 1, file, linenum, args, &err_code))
+ goto out;
+ }
+ else if (strcmp(args[1], "send-state") == 0) {
+ /* enable emission of the apparent state of a server in HTTP checks */
+ curproxy->options2 |= PR_O2_CHK_SNDST;
+ if (alertif_too_many_args_idx(0, 1, file, linenum, args, &err_code))
+ goto out;
+ }
+ else if (strcmp(args[1], "expect") == 0) {
+ const char *ptr_arg;
+ int cur_arg;
+
+ if (curproxy->options2 & PR_O2_EXP_TYPE) {
+ Alert("parsing [%s:%d] : '%s %s' already specified.\n", file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ cur_arg = 2;
+ /* handle exclamation marks, either standalone or at the beginning of a word */
+ while (*(ptr_arg = args[cur_arg])) {
+ while (*ptr_arg == '!') {
+ curproxy->options2 ^= PR_O2_EXP_INV;
+ ptr_arg++;
+ }
+ if (*ptr_arg)
+ break;
+ cur_arg++;
+ }
+ /* now ptr_arg points to the beginning of a word past any possible
+ * exclamation mark, and cur_arg is the argument which holds this word.
+ */
+ if (strcmp(ptr_arg, "status") == 0) {
+ if (!*(args[cur_arg + 1])) {
+ Alert("parsing [%s:%d] : '%s %s %s' expects <string> as an argument.\n",
+ file, linenum, args[0], args[1], ptr_arg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ curproxy->options2 |= PR_O2_EXP_STS;
+ free(curproxy->expect_str);
+ curproxy->expect_str = strdup(args[cur_arg + 1]);
+ }
+ else if (strcmp(ptr_arg, "string") == 0) {
+ if (!*(args[cur_arg + 1])) {
+ Alert("parsing [%s:%d] : '%s %s %s' expects <string> as an argument.\n",
+ file, linenum, args[0], args[1], ptr_arg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ curproxy->options2 |= PR_O2_EXP_STR;
+ free(curproxy->expect_str);
+ curproxy->expect_str = strdup(args[cur_arg + 1]);
+ }
+ else if (strcmp(ptr_arg, "rstatus") == 0) {
+ if (!*(args[cur_arg + 1])) {
+ Alert("parsing [%s:%d] : '%s %s %s' expects <regex> as an argument.\n",
+ file, linenum, args[0], args[1], ptr_arg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ curproxy->options2 |= PR_O2_EXP_RSTS;
+ free(curproxy->expect_str);
+ if (curproxy->expect_regex) {
+ regex_free(curproxy->expect_regex);
+ free(curproxy->expect_regex);
+ curproxy->expect_regex = NULL;
+ }
+ curproxy->expect_str = strdup(args[cur_arg + 1]);
+ curproxy->expect_regex = calloc(1, sizeof(*curproxy->expect_regex));
+ error = NULL;
+ if (!regex_comp(args[cur_arg + 1], curproxy->expect_regex, 1, 1, &error)) {
+ Alert("parsing [%s:%d] : '%s %s %s' : bad regular expression '%s': %s.\n",
+ file, linenum, args[0], args[1], ptr_arg, args[cur_arg + 1], error);
+ free(error);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ else if (strcmp(ptr_arg, "rstring") == 0) {
+ if (!*(args[cur_arg + 1])) {
+ Alert("parsing [%s:%d] : '%s %s %s' expects <regex> as an argument.\n",
+ file, linenum, args[0], args[1], ptr_arg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ curproxy->options2 |= PR_O2_EXP_RSTR;
+ free(curproxy->expect_str);
+ if (curproxy->expect_regex) {
+ regex_free(curproxy->expect_regex);
+ free(curproxy->expect_regex);
+ curproxy->expect_regex = NULL;
+ }
+ curproxy->expect_str = strdup(args[cur_arg + 1]);
+ curproxy->expect_regex = calloc(1, sizeof(*curproxy->expect_regex));
+ error = NULL;
+ if (!regex_comp(args[cur_arg + 1], curproxy->expect_regex, 1, 1, &error)) {
+ Alert("parsing [%s:%d] : '%s %s %s' : bad regular expression '%s': %s.\n",
+ file, linenum, args[0], args[1], ptr_arg, args[cur_arg + 1], error);
+ free(error);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ else {
+ Alert("parsing [%s:%d] : '%s %s' only supports [!] 'status', 'string', 'rstatus', 'rstring', found '%s'.\n",
+ file, linenum, args[0], args[1], ptr_arg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ else {
+ Alert("parsing [%s:%d] : '%s' only supports 'disable-on-404', 'send-state', 'expect'.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
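+ /* Illustrative tcp-check ruleset handled by the branch below (a sketch
+ * of a redis health check):
+ *
+ *   tcp-check connect port 6379
+ *   tcp-check send PING\r\n
+ *   tcp-check expect string +PONG
+ */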
+ else if (!strcmp(args[0], "tcp-check")) {
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (strcmp(args[1], "comment") == 0) {
+ int cur_arg;
+ struct tcpcheck_rule *tcpcheck;
+
+ cur_arg = 1;
+ tcpcheck = (struct tcpcheck_rule *)calloc(1, sizeof(*tcpcheck));
+ tcpcheck->action = TCPCHK_ACT_COMMENT;
+
+ if (!*args[cur_arg + 1]) {
+ Alert("parsing [%s:%d] : '%s' expects a comment string.\n",
+ file, linenum, args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ tcpcheck->comment = strdup(args[cur_arg + 1]);
+
+ LIST_ADDQ(&curproxy->tcpcheck_rules, &tcpcheck->list);
+ if (alertif_too_many_args_idx(1, 1, file, linenum, args, &err_code))
+ goto out;
+ }
+ else if (strcmp(args[1], "connect") == 0) {
+ const char *ptr_arg;
+ int cur_arg;
+ struct tcpcheck_rule *tcpcheck;
+
+ /* check if first rule is also a 'connect' action */
+ tcpcheck = LIST_NEXT(&curproxy->tcpcheck_rules, struct tcpcheck_rule *, list);
+ while (&tcpcheck->list != &curproxy->tcpcheck_rules &&
+ tcpcheck->action == TCPCHK_ACT_COMMENT) {
+ tcpcheck = LIST_NEXT(&tcpcheck->list, struct tcpcheck_rule *, list);
+ }
+
+ if (&tcpcheck->list != &curproxy->tcpcheck_rules
+ && tcpcheck->action != TCPCHK_ACT_CONNECT) {
+ Alert("parsing [%s:%d] : first step MUST also be a 'connect' when there is a 'connect' step in the tcp-check ruleset.\n",
+ file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ cur_arg = 2;
+ tcpcheck = (struct tcpcheck_rule *)calloc(1, sizeof(*tcpcheck));
+ tcpcheck->action = TCPCHK_ACT_CONNECT;
+
+ /* parse each parameter to fill up the rule */
+ while (*(ptr_arg = args[cur_arg])) {
+ /* tcp port */
+ if (strcmp(args[cur_arg], "port") == 0) {
+ if ( (atol(args[cur_arg + 1]) > 65535) ||
+ (atol(args[cur_arg + 1]) < 1) ){
+ Alert("parsing [%s:%d] : '%s %s %s' expects a valid TCP port (in the range 1 to 65535), got %s.\n",
+ file, linenum, args[0], args[1], "port", args[cur_arg + 1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ tcpcheck->port = atol(args[cur_arg + 1]);
+ cur_arg += 2;
+ }
+ /* send proxy protocol */
+ else if (strcmp(args[cur_arg], "send-proxy") == 0) {
+ tcpcheck->conn_opts |= TCPCHK_OPT_SEND_PROXY;
+ cur_arg++;
+ }
+#ifdef USE_OPENSSL
+ else if (strcmp(args[cur_arg], "ssl") == 0) {
+ curproxy->options |= PR_O_TCPCHK_SSL;
+ tcpcheck->conn_opts |= TCPCHK_OPT_SSL;
+ cur_arg++;
+ }
+#endif /* USE_OPENSSL */
+ /* comment for this tcpcheck line */
+ else if (strcmp(args[cur_arg], "comment") == 0) {
+ if (!*args[cur_arg + 1]) {
+ Alert("parsing [%s:%d] : '%s' expects a comment string.\n",
+ file, linenum, args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ tcpcheck->comment = strdup(args[cur_arg + 1]);
+ cur_arg += 2;
+ }
+ else {
+#ifdef USE_OPENSSL
+ Alert("parsing [%s:%d] : '%s %s' expects 'comment', 'port', 'send-proxy' or 'ssl' but got '%s' as argument.\n",
+#else /* USE_OPENSSL */
+ Alert("parsing [%s:%d] : '%s %s' expects 'comment', 'port' or 'send-proxy', but got '%s' as argument.\n",
+#endif /* USE_OPENSSL */
+ file, linenum, args[0], args[1], args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ }
+
+ LIST_ADDQ(&curproxy->tcpcheck_rules, &tcpcheck->list);
+ }
+ else if (strcmp(args[1], "send") == 0) {
+ if (! *(args[2]) ) {
+ /* SEND string expected */
+ Alert("parsing [%s:%d] : '%s %s %s' expects <STRING> as argument.\n",
+ file, linenum, args[0], args[1], args[2]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ } else {
+ struct tcpcheck_rule *tcpcheck;
+
+ tcpcheck = (struct tcpcheck_rule *)calloc(1, sizeof(*tcpcheck));
+
+ tcpcheck->action = TCPCHK_ACT_SEND;
+ tcpcheck->string_len = strlen(args[2]);
+ tcpcheck->string = strdup(args[2]);
+ tcpcheck->expect_regex = NULL;
+
+ /* comment for this tcpcheck line */
+ if (strcmp(args[3], "comment") == 0) {
+ if (!*args[4]) {
+ Alert("parsing [%s:%d] : '%s' expects a comment string.\n",
+ file, linenum, args[3]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ tcpcheck->comment = strdup(args[4]);
+ }
+
+ LIST_ADDQ(&curproxy->tcpcheck_rules, &tcpcheck->list);
+ }
+ }
+ else if (strcmp(args[1], "send-binary") == 0) {
+ if (! *(args[2]) ) {
+ /* SEND binary string expected */
+ Alert("parsing [%s:%d] : '%s %s %s' expects <BINARY STRING> as argument.\n",
+ file, linenum, args[0], args[1], args[2]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ } else {
+ struct tcpcheck_rule *tcpcheck;
+ char *err = NULL;
+
+ tcpcheck = (struct tcpcheck_rule *)calloc(1, sizeof(*tcpcheck));
+
+ tcpcheck->action = TCPCHK_ACT_SEND;
+ if (parse_binary(args[2], &tcpcheck->string, &tcpcheck->string_len, &err) == 0) {
+ Alert("parsing [%s:%d] : '%s %s %s' expects <BINARY STRING> as argument, but %s\n",
+ file, linenum, args[0], args[1], args[2], err);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ tcpcheck->expect_regex = NULL;
+
+ /* comment for this tcpcheck line */
+ if (strcmp(args[3], "comment") == 0) {
+ if (!*args[4]) {
+ Alert("parsing [%s:%d] : '%s' expects a comment string.\n",
+ file, linenum, args[3]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ tcpcheck->comment = strdup(args[4]);
+ }
+
+ LIST_ADDQ(&curproxy->tcpcheck_rules, &tcpcheck->list);
+ }
+ }
+ else if (strcmp(args[1], "expect") == 0) {
+ const char *ptr_arg;
+ int cur_arg;
+ int inverse = 0;
+
+ if (curproxy->options2 & PR_O2_EXP_TYPE) {
+ Alert("parsing [%s:%d] : '%s %s' already specified.\n", file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ cur_arg = 2;
+ /* handle exclamation marks, either standalone or at the beginning of a word */
+ while (*(ptr_arg = args[cur_arg])) {
+ while (*ptr_arg == '!') {
+ inverse = !inverse;
+ ptr_arg++;
+ }
+ if (*ptr_arg)
+ break;
+ cur_arg++;
+ }
+ /* now ptr_arg points to the beginning of a word past any possible
+ * exclamation mark, and cur_arg is the argument which holds this word.
+ */
+ if (strcmp(ptr_arg, "binary") == 0) {
+ struct tcpcheck_rule *tcpcheck;
+ char *err = NULL;
+
+ if (!*(args[cur_arg + 1])) {
+ Alert("parsing [%s:%d] : '%s %s %s' expects <binary string> as an argument.\n",
+ file, linenum, args[0], args[1], ptr_arg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ tcpcheck = (struct tcpcheck_rule *)calloc(1, sizeof(*tcpcheck));
+
+ tcpcheck->action = TCPCHK_ACT_EXPECT;
+ if (parse_binary(args[cur_arg + 1], &tcpcheck->string, &tcpcheck->string_len, &err) == 0) {
+ Alert("parsing [%s:%d] : '%s %s %s' expects <BINARY STRING> as argument, but %s\n",
+ file, linenum, args[0], args[1], args[cur_arg + 1], err);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ tcpcheck->expect_regex = NULL;
+ tcpcheck->inverse = inverse;
+
+ /* tcpcheck comment */
+ cur_arg += 2;
+ if (strcmp(args[cur_arg], "comment") == 0) {
+ if (!*args[cur_arg + 1]) {
+ Alert("parsing [%s:%d] : '%s' expects a comment string.\n",
+ file, linenum, args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ tcpcheck->comment = strdup(args[cur_arg + 1]);
+ }
+
+ LIST_ADDQ(&curproxy->tcpcheck_rules, &tcpcheck->list);
+ }
+ else if (strcmp(ptr_arg, "string") == 0) {
+ struct tcpcheck_rule *tcpcheck;
+
+ if (!*(args[cur_arg + 1])) {
+ Alert("parsing [%s:%d] : '%s %s %s' expects <string> as an argument.\n",
+ file, linenum, args[0], args[1], ptr_arg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ tcpcheck = (struct tcpcheck_rule *)calloc(1, sizeof(*tcpcheck));
+
+ tcpcheck->action = TCPCHK_ACT_EXPECT;
+ tcpcheck->string_len = strlen(args[cur_arg + 1]);
+ tcpcheck->string = strdup(args[cur_arg + 1]);
+ tcpcheck->expect_regex = NULL;
+ tcpcheck->inverse = inverse;
+
+ /* tcpcheck comment */
+ cur_arg += 2;
+ if (strcmp(args[cur_arg], "comment") == 0) {
+ if (!*args[cur_arg + 1]) {
+ Alert("parsing [%s:%d] : '%s' expects a comment string.\n",
+ file, linenum, args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ tcpcheck->comment = strdup(args[cur_arg + 1]);
+ }
+
+ LIST_ADDQ(&curproxy->tcpcheck_rules, &tcpcheck->list);
+ }
+ else if (strcmp(ptr_arg, "rstring") == 0) {
+ struct tcpcheck_rule *tcpcheck;
+
+ if (!*(args[cur_arg + 1])) {
+ Alert("parsing [%s:%d] : '%s %s %s' expects <regex> as an argument.\n",
+ file, linenum, args[0], args[1], ptr_arg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ tcpcheck = (struct tcpcheck_rule *)calloc(1, sizeof(*tcpcheck));
+
+ tcpcheck->action = TCPCHK_ACT_EXPECT;
+ tcpcheck->string_len = 0;
+ tcpcheck->string = NULL;
+ tcpcheck->expect_regex = calloc(1, sizeof(*tcpcheck->expect_regex));
+ error = NULL;
+ if (!regex_comp(args[cur_arg + 1], tcpcheck->expect_regex, 1, 1, &error)) {
+ Alert("parsing [%s:%d] : '%s %s %s' : bad regular expression '%s': %s.\n",
+ file, linenum, args[0], args[1], ptr_arg, args[cur_arg + 1], error);
+ free(error);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ tcpcheck->inverse = inverse;
+
+ /* tcpcheck comment */
+ cur_arg += 2;
+ if (strcmp(args[cur_arg], "comment") == 0) {
+ if (!*args[cur_arg + 1]) {
+ Alert("parsing [%s:%d] : '%s' expects a comment string.\n",
+ file, linenum, args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ tcpcheck->comment = strdup(args[cur_arg + 1]);
+ }
+
+ LIST_ADDQ(&curproxy->tcpcheck_rules, &tcpcheck->list);
+ }
+ else {
+ Alert("parsing [%s:%d] : '%s %s' only supports [!] 'binary', 'string', 'rstring', found '%s'.\n",
+ file, linenum, args[0], args[1], ptr_arg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ else {
+ Alert("parsing [%s:%d] : '%s' only supports 'comment', 'connect', 'send' or 'expect'.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ else if (!strcmp(args[0], "monitor")) {
+ if (curproxy == &defproxy) {
+ Alert("parsing [%s:%d] : '%s' not allowed in 'defaults' section.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (warnifnotcap(curproxy, PR_CAP_FE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (strcmp(args[1], "fail") == 0) {
+ /* add a condition to fail monitor requests */
+ if (strcmp(args[2], "if") != 0 && strcmp(args[2], "unless") != 0) {
+ Alert("parsing [%s:%d] : '%s %s' requires either 'if' or 'unless' followed by a condition.\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if ((cond = build_acl_cond(file, linenum, curproxy, (const char **)args + 2, &errmsg)) == NULL) {
+ Alert("parsing [%s:%d] : error detected while parsing a '%s %s' condition : %s.\n",
+ file, linenum, args[0], args[1], errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ LIST_ADDQ(&curproxy->mon_fail_cond, &cond->list);
+ }
+ else {
+ Alert("parsing [%s:%d] : '%s' only supports 'fail'.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+#ifdef TPROXY
+ else if (!strcmp(args[0], "transparent")) {
+ /* enable transparent proxy connections */
+ curproxy->options |= PR_O_TRANSP;
+ if (alertif_too_many_args(0, file, linenum, args, &err_code))
+ goto out;
+ }
+#endif
+ else if (!strcmp(args[0], "maxconn")) { /* maxconn */
+ if (warnifnotcap(curproxy, PR_CAP_FE, file, linenum, args[0], " Maybe you want 'fullconn' instead ?"))
+ err_code |= ERR_WARN;
+
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ curproxy->maxconn = atol(args[1]);
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ }
+ else if (!strcmp(args[0], "backlog")) { /* backlog */
+ if (warnifnotcap(curproxy, PR_CAP_FE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ curproxy->backlog = atol(args[1]);
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ }
+ else if (!strcmp(args[0], "fullconn")) { /* fullconn */
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[0], " Maybe you want 'maxconn' instead ?"))
+ err_code |= ERR_WARN;
+
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects an integer argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ curproxy->fullconn = atol(args[1]);
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ }
+ else if (!strcmp(args[0], "grace")) { /* grace time (ms) */
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects a time in milliseconds.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ err = parse_time_err(args[1], &val, TIME_UNIT_MS);
+ if (err) {
+ Alert("parsing [%s:%d] : unexpected character '%c' in grace time.\n",
+ file, linenum, *err);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ curproxy->grace = val;
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+ }
+ else if (!strcmp(args[0], "dispatch")) { /* dispatch address */
+ struct sockaddr_storage *sk;
+ int port1, port2;
+ struct protocol *proto;
+
+ if (curproxy == &defproxy) {
+ Alert("parsing [%s:%d] : '%s' not allowed in 'defaults' section.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ else if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ sk = str2sa_range(args[1], &port1, &port2, &errmsg, NULL, NULL, 1);
+ if (!sk) {
+ Alert("parsing [%s:%d] : '%s' : %s\n", file, linenum, args[0], errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ proto = protocol_by_family(sk->ss_family);
+ if (!proto || !proto->connect) {
+ Alert("parsing [%s:%d] : '%s %s' : connect() not supported for this address family.\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (port1 != port2) {
+ Alert("parsing [%s:%d] : '%s' : port ranges and offsets are not allowed in '%s'.\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (!port1) {
+ Alert("parsing [%s:%d] : '%s' : missing port number in '%s', <addr:port> expected.\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+
+ curproxy->dispatch_addr = *sk;
+ curproxy->options |= PR_O_DISPATCH;
+ }
+ else if (!strcmp(args[0], "balance")) { /* set balancing with optional algorithm */
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (backend_parse_balance((const char **)args + 1, &errmsg, curproxy) < 0) {
+ Alert("parsing [%s:%d] : %s %s\n", file, linenum, args[0], errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
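+ /* Illustrative configuration line handled by the "hash-type" branch
+ * below:
+ *
+ *   hash-type consistent sdbm avalanche
+ */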
+ else if (!strcmp(args[0], "hash-type")) { /* set hashing method */
+ /**
+ * The syntax for hash-type config element is
+ * hash-type {map-based|consistent} [[<algo>] avalanche]
+ *
+ * The default hash function is sdbm for map-based and sdbm+avalanche for consistent.
+ */
+ curproxy->lbprm.algo &= ~(BE_LB_HASH_TYPE | BE_LB_HASH_FUNC | BE_LB_HASH_MOD);
+
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (strcmp(args[1], "consistent") == 0) { /* use consistent hashing */
+ curproxy->lbprm.algo |= BE_LB_HASH_CONS;
+ }
+ else if (strcmp(args[1], "map-based") == 0) { /* use map-based hashing */
+ curproxy->lbprm.algo |= BE_LB_HASH_MAP;
+ }
+ else if (strcmp(args[1], "avalanche") == 0) {
+ Alert("parsing [%s:%d] : experimental feature '%s %s' is not supported anymore, please use '%s map-based sdbm avalanche' instead.\n", file, linenum, args[0], args[1], args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ else {
+ Alert("parsing [%s:%d] : '%s' only supports 'consistent' and 'map-based'.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ /* set the hash function to use */
+ if (!*args[2]) {
+ /* the default algo is sdbm */
+ curproxy->lbprm.algo |= BE_LB_HFCN_SDBM;
+
+ /* if consistent with no argument, then avalanche modifier is also applied */
+ if ((curproxy->lbprm.algo & BE_LB_HASH_TYPE) == BE_LB_HASH_CONS)
+ curproxy->lbprm.algo |= BE_LB_HMOD_AVAL;
+ } else {
+ /* set the hash function */
+ if (!strcmp(args[2], "sdbm")) {
+ curproxy->lbprm.algo |= BE_LB_HFCN_SDBM;
+ }
+ else if (!strcmp(args[2], "djb2")) {
+ curproxy->lbprm.algo |= BE_LB_HFCN_DJB2;
+ }
+ else if (!strcmp(args[2], "wt6")) {
+ curproxy->lbprm.algo |= BE_LB_HFCN_WT6;
+ }
+ else if (!strcmp(args[2], "crc32")) {
+ curproxy->lbprm.algo |= BE_LB_HFCN_CRC32;
+ }
+ else {
+ Alert("parsing [%s:%d] : '%s' only supports 'sdbm', 'djb2', 'crc32', or 'wt6' hash functions.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ /* set the hash modifier */
+ if (!strcmp(args[3], "avalanche")) {
+ curproxy->lbprm.algo |= BE_LB_HMOD_AVAL;
+ }
+ else if (*args[3]) {
+ Alert("parsing [%s:%d] : '%s' only supports 'avalanche' as a modifier for hash functions.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ }
+ else if (strcmp(args[0], "unique-id-format") == 0) {
+ if (!*(args[1])) {
+ Alert("parsing [%s:%d] : %s expects an argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ if (*(args[2])) {
+ Alert("parsing [%s:%d] : %s expects only one argument, don't forget to escape spaces!\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ free(curproxy->conf.uniqueid_format_string);
+ curproxy->conf.uniqueid_format_string = strdup(args[1]);
+
+ free(curproxy->conf.uif_file);
+ curproxy->conf.uif_file = strdup(curproxy->conf.args.file);
+ curproxy->conf.uif_line = curproxy->conf.args.line;
+ }
+
+ else if (strcmp(args[0], "unique-id-header") == 0) {
+ if (!*(args[1])) {
+ Alert("parsing [%s:%d] : %s expects an argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ free(curproxy->header_unique_id);
+ curproxy->header_unique_id = strdup(args[1]);
+ }
+
+ else if (strcmp(args[0], "log-format") == 0) {
+ if (!*(args[1])) {
+ Alert("parsing [%s:%d] : %s expects an argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ if (*(args[2])) {
+ Alert("parsing [%s:%d] : %s expects only one argument, don't forget to escape spaces!\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (curproxy->conf.logformat_string != default_http_log_format &&
+ curproxy->conf.logformat_string != default_tcp_log_format &&
+ curproxy->conf.logformat_string != clf_http_log_format)
+ free(curproxy->conf.logformat_string);
+ curproxy->conf.logformat_string = strdup(args[1]);
+
+ free(curproxy->conf.lfs_file);
+ curproxy->conf.lfs_file = strdup(curproxy->conf.args.file);
+ curproxy->conf.lfs_line = curproxy->conf.args.line;
+
+ /* get a chance to improve log-format error reporting by
+ * reporting the correct line-number when possible.
+ */
+ if (curproxy != &defproxy && !(curproxy->cap & PR_CAP_FE)) {
+ Warning("parsing [%s:%d] : backend '%s' : 'log-format' directive is ignored in backends.\n",
+ file, linenum, curproxy->id);
+ err_code |= ERR_WARN;
+ }
+ }
+ else if (!strcmp(args[0], "log-format-sd")) {
+ if (!*(args[1])) {
+ Alert("parsing [%s:%d] : %s expects an argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ if (*(args[2])) {
+ Alert("parsing [%s:%d] : %s expects only one argument, don't forget to escape spaces!\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (curproxy->conf.logformat_sd_string != default_rfc5424_sd_log_format)
+ free(curproxy->conf.logformat_sd_string);
+ curproxy->conf.logformat_sd_string = strdup(args[1]);
+
+ free(curproxy->conf.lfsd_file);
+ curproxy->conf.lfsd_file = strdup(curproxy->conf.args.file);
+ curproxy->conf.lfsd_line = curproxy->conf.args.line;
+
+ /* get a chance to improve log-format-sd error reporting by
+ * reporting the correct line-number when possible.
+ */
+ if (curproxy != &defproxy && !(curproxy->cap & PR_CAP_FE)) {
+ Warning("parsing [%s:%d] : backend '%s' : 'log-format-sd' directive is ignored in backends.\n",
+ file, linenum, curproxy->id);
+ err_code |= ERR_WARN;
+ }
+ }
+ else if (!strcmp(args[0], "log-tag")) { /* tag to report to syslog */
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects a tag for use in syslog.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ chunk_destroy(&curproxy->log_tag);
+ chunk_initstr(&curproxy->log_tag, strdup(args[1]));
+ }
+ else if (!strcmp(args[0], "log") && kwm == KWM_NO) {
+ /* delete previously inherited or defined syslog servers */
+ struct logsrv *back;
+
+ if (*(args[1]) != 0) {
+ Alert("parsing [%s:%d] : 'no log' does not expect any argument, found '%s'.\n", file, linenum, args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ list_for_each_entry_safe(tmplogsrv, back, &curproxy->logsrvs, list) {
+ LIST_DEL(&tmplogsrv->list);
+ free(tmplogsrv);
+ }
+ }
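+ /* Illustrative configuration line handled by the "log" branch below
+ * (address and values are hypothetical examples):
+ *
+ *   log 127.0.0.1:514 len 1024 format rfc5424 local0 notice
+ */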
+ else if (!strcmp(args[0], "log")) { /* syslog server address */
+ struct logsrv *logsrv;
+
+ if (*(args[1]) && *(args[2]) == 0 && !strcmp(args[1], "global")) {
+ /* copy the global.logsrvs linked list to the end of curproxy->logsrvs */
+ list_for_each_entry(tmplogsrv, &global.logsrvs, list) {
+ struct logsrv *node = malloc(sizeof(struct logsrv));
+ memcpy(node, tmplogsrv, sizeof(struct logsrv));
+ LIST_INIT(&node->list);
+ LIST_ADDQ(&curproxy->logsrvs, &node->list);
+ }
+ }
+ else if (*(args[1]) && *(args[2])) {
+ struct sockaddr_storage *sk;
+ int port1, port2;
+ int arg = 0;
+ int len = 0;
+
+ logsrv = calloc(1, sizeof(struct logsrv));
+
+ /* just after the address, a length may be specified */
+ if (strcmp(args[arg+2], "len") == 0) {
+ len = atoi(args[arg+3]);
+ if (len < 80 || len > 65535) {
+ Alert("parsing [%s:%d] : invalid log length '%s', must be between 80 and 65535.\n",
+ file, linenum, args[arg+3]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ logsrv->maxlen = len;
+
+ /* skip these two args */
+ arg += 2;
+ }
+ else
+ logsrv->maxlen = MAX_SYSLOG_LEN;
+
+ if (logsrv->maxlen > global.max_syslog_len) {
+ global.max_syslog_len = logsrv->maxlen;
+ logheader = realloc(logheader, global.max_syslog_len + 1);
+ logheader_rfc5424 = realloc(logheader_rfc5424, global.max_syslog_len + 1);
+ logline = realloc(logline, global.max_syslog_len + 1);
+ logline_rfc5424 = realloc(logline_rfc5424, global.max_syslog_len + 1);
+ }
+
+ /* after the length, a format may be specified */
+ if (strcmp(args[arg+2], "format") == 0) {
+ logsrv->format = get_log_format(args[arg+3]);
+ if (logsrv->format < 0) {
+ Alert("parsing [%s:%d] : unknown log format '%s'\n", file, linenum, args[arg+3]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ /* skip these two args */
+ arg += 2;
+ }
+
+ if (alertif_too_many_args_idx(3, arg + 1, file, linenum, args, &err_code))
+ goto out;
+
+ logsrv->facility = get_log_facility(args[arg+2]);
+ if (logsrv->facility < 0) {
+ Alert("parsing [%s:%d] : unknown log facility '%s'\n", file, linenum, args[arg+2]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+
+ }
+
+ logsrv->level = 7; /* max syslog level = debug */
+ if (*(args[arg+3])) {
+ logsrv->level = get_log_level(args[arg+3]);
+ if (logsrv->level < 0) {
+ Alert("parsing [%s:%d] : unknown optional log level '%s'\n", file, linenum, args[arg+3]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+
+ logsrv->minlvl = 0; /* limit syslog level to this level (emerg) */
+ if (*(args[arg+4])) {
+ logsrv->minlvl = get_log_level(args[arg+4]);
+ if (logsrv->minlvl < 0) {
+ Alert("parsing [%s:%d] : unknown optional minimum log level '%s'\n", file, linenum, args[arg+4]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+
+ sk = str2sa_range(args[1], &port1, &port2, &errmsg, NULL, NULL, 1);
+ if (!sk) {
+ Alert("parsing [%s:%d] : '%s': %s\n", file, linenum, args[0], errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ logsrv->addr = *sk;
+
+ if (sk->ss_family == AF_INET || sk->ss_family == AF_INET6) {
+ if (port1 != port2) {
+ Alert("parsing [%s:%d] : '%s' : port ranges and offsets are not allowed in '%s'\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (!port1)
+ set_host_port(&logsrv->addr, SYSLOG_PORT);
+ }
+
+ LIST_ADDQ(&curproxy->logsrvs, &logsrv->list);
+ }
+ else {
+ Alert("parsing [%s:%d] : 'log' expects either <address[:port]> and <facility> or 'global' as arguments.\n",
+ file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ else if (!strcmp(args[0], "source")) { /* address to which we bind when connecting */
+ int cur_arg;
+ int port1, port2;
+ struct sockaddr_storage *sk;
+ struct protocol *proto;
+
+ if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (!*args[1]) {
+ Alert("parsing [%s:%d] : '%s' expects <addr>[:<port>], and optionally '%s' <addr>, and '%s' <name>.\n",
+ file, linenum, "source", "usesrc", "interface");
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ /* we must first clear any optional default setting */
+ curproxy->conn_src.opts &= ~CO_SRC_TPROXY_MASK;
+ free(curproxy->conn_src.iface_name);
+ curproxy->conn_src.iface_name = NULL;
+ curproxy->conn_src.iface_len = 0;
+
+ sk = str2sa_range(args[1], &port1, &port2, &errmsg, NULL, NULL, 1);
+ if (!sk) {
+ Alert("parsing [%s:%d] : '%s %s' : %s\n",
+ file, linenum, args[0], args[1], errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ proto = protocol_by_family(sk->ss_family);
+ if (!proto || !proto->connect) {
+ Alert("parsing [%s:%d] : '%s %s' : connect() not supported for this address family.\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (port1 != port2) {
+ Alert("parsing [%s:%d] : '%s' : port ranges and offsets are not allowed in '%s'\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ curproxy->conn_src.source_addr = *sk;
+ curproxy->conn_src.opts |= CO_SRC_BIND;
+
+ cur_arg = 2;
+ while (*(args[cur_arg])) {
+ if (!strcmp(args[cur_arg], "usesrc")) { /* address to use outside */
+#if defined(CONFIG_HAP_TRANSPARENT)
+ if (!*args[cur_arg + 1]) {
+ Alert("parsing [%s:%d] : '%s' expects <addr>[:<port>], 'client', or 'clientip' as argument.\n",
+ file, linenum, "usesrc");
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (!strcmp(args[cur_arg + 1], "client")) {
+ curproxy->conn_src.opts &= ~CO_SRC_TPROXY_MASK;
+ curproxy->conn_src.opts |= CO_SRC_TPROXY_CLI;
+ } else if (!strcmp(args[cur_arg + 1], "clientip")) {
+ curproxy->conn_src.opts &= ~CO_SRC_TPROXY_MASK;
+ curproxy->conn_src.opts |= CO_SRC_TPROXY_CIP;
+ } else if (!strncmp(args[cur_arg + 1], "hdr_ip(", 7)) {
+ char *name, *end;
+
+ name = args[cur_arg+1] + 7;
+ while (isspace((unsigned char)*name))
+ name++;
+
+ end = name;
+ while (*end && !isspace((unsigned char)*end) && *end != ',' && *end != ')')
+ end++;
+
+ curproxy->conn_src.opts &= ~CO_SRC_TPROXY_MASK;
+ curproxy->conn_src.opts |= CO_SRC_TPROXY_DYN;
+ curproxy->conn_src.bind_hdr_name = calloc(1, end - name + 1);
+ curproxy->conn_src.bind_hdr_len = end - name;
+ memcpy(curproxy->conn_src.bind_hdr_name, name, end - name);
+ curproxy->conn_src.bind_hdr_name[end-name] = '\0';
+ curproxy->conn_src.bind_hdr_occ = -1;
+
+ /* now look for an occurrence number */
+ while (isspace((unsigned char)*end))
+ end++;
+ if (*end == ',') {
+ end++;
+ name = end;
+ if (*end == '-')
+ end++;
+ while (isdigit((int)*end))
+ end++;
+ curproxy->conn_src.bind_hdr_occ = strl2ic(name, end-name);
+ }
+
+ if (curproxy->conn_src.bind_hdr_occ < -MAX_HDR_HISTORY) {
+ Alert("parsing [%s:%d] : usesrc hdr_ip(name,num) does not support negative"
+ " occurrence values smaller than -%d.\n",
+ file, linenum, MAX_HDR_HISTORY);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ } else {
+ struct sockaddr_storage *sk;
+
+ sk = str2sa_range(args[cur_arg + 1], &port1, &port2, &errmsg, NULL, NULL, 1);
+ if (!sk) {
+ Alert("parsing [%s:%d] : '%s %s' : %s\n",
+ file, linenum, args[cur_arg], args[cur_arg+1], errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ proto = protocol_by_family(sk->ss_family);
+ if (!proto || !proto->connect) {
+ Alert("parsing [%s:%d] : '%s %s' : connect() not supported for this address family.\n",
+ file, linenum, args[cur_arg], args[cur_arg+1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (port1 != port2) {
+ Alert("parsing [%s:%d] : '%s' : port ranges and offsets are not allowed in '%s'\n",
+ file, linenum, args[cur_arg], args[cur_arg + 1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ curproxy->conn_src.tproxy_addr = *sk;
+ curproxy->conn_src.opts |= CO_SRC_TPROXY_ADDR;
+ }
+ global.last_checks |= LSTCHK_NETADM;
+#else /* no TPROXY support */
+ Alert("parsing [%s:%d] : '%s' not allowed here because support for TPROXY was not compiled in.\n",
+ file, linenum, "usesrc");
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+#endif
+ cur_arg += 2;
+ continue;
+ }
+
+ if (!strcmp(args[cur_arg], "interface")) { /* specifically bind to this interface */
+#ifdef SO_BINDTODEVICE
+ if (!*args[cur_arg + 1]) {
+ Alert("parsing [%s:%d] : '%s' : missing interface name.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ free(curproxy->conn_src.iface_name);
+ curproxy->conn_src.iface_name = strdup(args[cur_arg + 1]);
+ curproxy->conn_src.iface_len = strlen(curproxy->conn_src.iface_name);
+ global.last_checks |= LSTCHK_NETADM;
+#else
+ Alert("parsing [%s:%d] : '%s' : '%s' option not implemented.\n",
+ file, linenum, args[0], args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+#endif
+ cur_arg += 2;
+ continue;
+ }
+ Alert("parsing [%s:%d] : '%s' only supports optional keywords '%s' and '%s'.\n",
+ file, linenum, args[0], "interface", "usesrc");
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ else if (!strcmp(args[0], "usesrc")) { /* address to use outside: needs "source" first */
+ Alert("parsing [%s:%d] : '%s' only allowed after a '%s' statement.\n",
+ file, linenum, "usesrc", "source");
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ else if (!strcmp(args[0], "cliexp") || !strcmp(args[0], "reqrep")) { /* replace request header from a regex */
+ if (*(args[2]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects <search> and <replace> as arguments.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ err_code |= create_cond_regex_rule(file, linenum, curproxy,
+ SMP_OPT_DIR_REQ, ACT_REPLACE, 0,
+ args[0], args[1], args[2], (const char **)args+3);
+ if (err_code & ERR_FATAL)
+ goto out;
+ }
+ else if (!strcmp(args[0], "reqdel")) { /* delete request header from a regex */
+ err_code |= create_cond_regex_rule(file, linenum, curproxy,
+ SMP_OPT_DIR_REQ, ACT_REMOVE, 0,
+ args[0], args[1], NULL, (const char **)args+2);
+ if (err_code & ERR_FATAL)
+ goto out;
+ }
+ else if (!strcmp(args[0], "reqdeny")) { /* deny a request if a header matches this regex */
+ err_code |= create_cond_regex_rule(file, linenum, curproxy,
+ SMP_OPT_DIR_REQ, ACT_DENY, 0,
+ args[0], args[1], NULL, (const char **)args+2);
+ if (err_code & ERR_FATAL)
+ goto out;
+ }
+ else if (!strcmp(args[0], "reqpass")) { /* pass this header without allowing or denying the request */
+ err_code |= create_cond_regex_rule(file, linenum, curproxy,
+ SMP_OPT_DIR_REQ, ACT_PASS, 0,
+ args[0], args[1], NULL, (const char **)args+2);
+ if (err_code & ERR_FATAL)
+ goto out;
+ }
+ else if (!strcmp(args[0], "reqallow")) { /* allow a request if a header matches this regex */
+ err_code |= create_cond_regex_rule(file, linenum, curproxy,
+ SMP_OPT_DIR_REQ, ACT_ALLOW, 0,
+ args[0], args[1], NULL, (const char **)args+2);
+ if (err_code & ERR_FATAL)
+ goto out;
+ }
+ else if (!strcmp(args[0], "reqtarpit")) { /* tarpit a request if a header matches this regex */
+ err_code |= create_cond_regex_rule(file, linenum, curproxy,
+ SMP_OPT_DIR_REQ, ACT_TARPIT, 0,
+ args[0], args[1], NULL, (const char **)args+2);
+ if (err_code & ERR_FATAL)
+ goto out;
+ }
+ else if (!strcmp(args[0], "reqirep")) { /* replace request header from a regex, ignoring case */
+ if (*(args[2]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects <search> and <replace> as arguments.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ err_code |= create_cond_regex_rule(file, linenum, curproxy,
+ SMP_OPT_DIR_REQ, ACT_REPLACE, REG_ICASE,
+ args[0], args[1], args[2], (const char **)args+3);
+ if (err_code & ERR_FATAL)
+ goto out;
+ }
+ else if (!strcmp(args[0], "reqidel")) { /* delete request header from a regex ignoring case */
+ err_code |= create_cond_regex_rule(file, linenum, curproxy,
+ SMP_OPT_DIR_REQ, ACT_REMOVE, REG_ICASE,
+ args[0], args[1], NULL, (const char **)args+2);
+ if (err_code & ERR_FATAL)
+ goto out;
+ }
+ else if (!strcmp(args[0], "reqideny")) { /* deny a request if a header matches this regex ignoring case */
+ err_code |= create_cond_regex_rule(file, linenum, curproxy,
+ SMP_OPT_DIR_REQ, ACT_DENY, REG_ICASE,
+ args[0], args[1], NULL, (const char **)args+2);
+ if (err_code & ERR_FATAL)
+ goto out;
+ }
+ else if (!strcmp(args[0], "reqipass")) { /* pass this header without allowing or denying the request */
+ err_code |= create_cond_regex_rule(file, linenum, curproxy,
+ SMP_OPT_DIR_REQ, ACT_PASS, REG_ICASE,
+ args[0], args[1], NULL, (const char **)args+2);
+ if (err_code & ERR_FATAL)
+ goto out;
+ }
+ else if (!strcmp(args[0], "reqiallow")) { /* allow a request if a header matches this regex ignoring case */
+ err_code |= create_cond_regex_rule(file, linenum, curproxy,
+ SMP_OPT_DIR_REQ, ACT_ALLOW, REG_ICASE,
+ args[0], args[1], NULL, (const char **)args+2);
+ if (err_code & ERR_FATAL)
+ goto out;
+ }
+ else if (!strcmp(args[0], "reqitarpit")) { /* tarpit a request if a header matches this regex ignoring case */
+ err_code |= create_cond_regex_rule(file, linenum, curproxy,
+ SMP_OPT_DIR_REQ, ACT_TARPIT, REG_ICASE,
+ args[0], args[1], NULL, (const char **)args+2);
+ if (err_code & ERR_FATAL)
+ goto out;
+ }
+ else if (!strcmp(args[0], "reqadd")) { /* add request header */
+ struct cond_wordlist *wl;
+
+ if (curproxy == &defproxy) {
+ Alert("parsing [%s:%d] : '%s' not allowed in 'defaults' section.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ else if (warnifnotcap(curproxy, PR_CAP_RS, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects <header> as an argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if ((strcmp(args[2], "if") == 0 || strcmp(args[2], "unless") == 0)) {
+ if ((cond = build_acl_cond(file, linenum, curproxy, (const char **)args+2, &errmsg)) == NULL) {
+ Alert("parsing [%s:%d] : error detected while parsing a '%s' condition : %s.\n",
+ file, linenum, args[0], errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ err_code |= warnif_cond_conflicts(cond,
+ (curproxy->cap & PR_CAP_FE) ? SMP_VAL_FE_HRQ_HDR : SMP_VAL_BE_HRQ_HDR,
+ file, linenum);
+ }
+ else if (*args[2]) {
+ Alert("parsing [%s:%d] : '%s' : Expecting nothing, 'if', or 'unless', got '%s'.\n",
+ file, linenum, args[0], args[2]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ wl = calloc(1, sizeof(*wl));
+ wl->cond = cond;
+ wl->s = strdup(args[1]);
+ LIST_ADDQ(&curproxy->req_add, &wl->list);
+ warnif_misplaced_reqadd(curproxy, file, linenum, args[0]);
+ }
+ else if (!strcmp(args[0], "srvexp") || !strcmp(args[0], "rsprep")) { /* replace response header from a regex */
+ if (*(args[2]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects <search> and <replace> as arguments.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ err_code |= create_cond_regex_rule(file, linenum, curproxy,
+ SMP_OPT_DIR_RES, ACT_REPLACE, 0,
+ args[0], args[1], args[2], (const char **)args+3);
+ if (err_code & ERR_FATAL)
+ goto out;
+ }
+ else if (!strcmp(args[0], "rspdel")) { /* delete response header from a regex */
+ err_code |= create_cond_regex_rule(file, linenum, curproxy,
+ SMP_OPT_DIR_RES, ACT_REMOVE, 0,
+ args[0], args[1], NULL, (const char **)args+2);
+ if (err_code & ERR_FATAL)
+ goto out;
+ }
+ else if (!strcmp(args[0], "rspdeny")) { /* block response header from a regex */
+ err_code |= create_cond_regex_rule(file, linenum, curproxy,
+ SMP_OPT_DIR_RES, ACT_DENY, 0,
+ args[0], args[1], NULL, (const char **)args+2);
+ if (err_code & ERR_FATAL)
+ goto out;
+ }
+ else if (!strcmp(args[0], "rspirep")) { /* replace response header from a regex ignoring case */
+ if (*(args[2]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects <search> and <replace> as arguments.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ err_code |= create_cond_regex_rule(file, linenum, curproxy,
+ SMP_OPT_DIR_RES, ACT_REPLACE, REG_ICASE,
+ args[0], args[1], args[2], (const char **)args+3);
+ if (err_code & ERR_FATAL)
+ goto out;
+ }
+ else if (!strcmp(args[0], "rspidel")) { /* delete response header from a regex ignoring case */
+ err_code |= create_cond_regex_rule(file, linenum, curproxy,
+ SMP_OPT_DIR_RES, ACT_REMOVE, REG_ICASE,
+ args[0], args[1], NULL, (const char **)args+2);
+ if (err_code & ERR_FATAL)
+ goto out;
+ }
+ else if (!strcmp(args[0], "rspideny")) { /* block response header from a regex ignoring case */
+ err_code |= create_cond_regex_rule(file, linenum, curproxy,
+ SMP_OPT_DIR_RES, ACT_DENY, REG_ICASE,
+ args[0], args[1], NULL, (const char **)args+2);
+ if (err_code & ERR_FATAL)
+ goto out;
+ }
+ else if (!strcmp(args[0], "rspadd")) { /* add response header */
+ struct cond_wordlist *wl;
+
+ if (curproxy == &defproxy) {
+ Alert("parsing [%s:%d] : '%s' not allowed in 'defaults' section.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ else if (warnifnotcap(curproxy, PR_CAP_RS, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (*(args[1]) == 0) {
+ Alert("parsing [%s:%d] : '%s' expects <header> as an argument.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if ((strcmp(args[2], "if") == 0 || strcmp(args[2], "unless") == 0)) {
+ if ((cond = build_acl_cond(file, linenum, curproxy, (const char **)args+2, &errmsg)) == NULL) {
+ Alert("parsing [%s:%d] : error detected while parsing a '%s' condition : %s.\n",
+ file, linenum, args[0], errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ err_code |= warnif_cond_conflicts(cond,
+ (curproxy->cap & PR_CAP_BE) ? SMP_VAL_BE_HRS_HDR : SMP_VAL_FE_HRS_HDR,
+ file, linenum);
+ }
+ else if (*args[2]) {
+ Alert("parsing [%s:%d] : '%s' : Expecting nothing, 'if', or 'unless', got '%s'.\n",
+ file, linenum, args[0], args[2]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ wl = calloc(1, sizeof(*wl));
+ wl->cond = cond;
+ wl->s = strdup(args[1]);
+ LIST_ADDQ(&curproxy->rsp_add, &wl->list);
+ }
+ else if (!strcmp(args[0], "errorloc") ||
+ !strcmp(args[0], "errorloc302") ||
+ !strcmp(args[0], "errorloc303")) { /* error location */
+ int errnum, errlen;
+ char *err;
+
+ if (warnifnotcap(curproxy, PR_CAP_FE | PR_CAP_BE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (*(args[2]) == 0) {
+ Alert("parsing [%s:%d] : <%s> expects <status_code> and <url> as arguments.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ errnum = atol(args[1]);
+ if (!strcmp(args[0], "errorloc303")) {
+ errlen = strlen(HTTP_303) + strlen(args[2]) + 5;
+ err = malloc(errlen);
+ errlen = snprintf(err, errlen, "%s%s\r\n\r\n", HTTP_303, args[2]);
+ } else {
+ errlen = strlen(HTTP_302) + strlen(args[2]) + 5;
+ err = malloc(errlen);
+ errlen = snprintf(err, errlen, "%s%s\r\n\r\n", HTTP_302, args[2]);
+ }
+
+ for (rc = 0; rc < HTTP_ERR_SIZE; rc++) {
+ if (http_err_codes[rc] == errnum) {
+ chunk_destroy(&curproxy->errmsg[rc]);
+ chunk_initlen(&curproxy->errmsg[rc], err, errlen, errlen);
+ break;
+ }
+ }
+
+ if (rc >= HTTP_ERR_SIZE) {
+ Warning("parsing [%s:%d] : status code %d not handled by '%s', error relocation will be ignored.\n",
+ file, linenum, errnum, args[0]);
+ free(err);
+ }
+ }
+ else if (!strcmp(args[0], "errorfile")) { /* error message from a file */
+ int errnum, errlen, fd;
+ char *err;
+ struct stat stat;
+
+ if (warnifnotcap(curproxy, PR_CAP_FE | PR_CAP_BE, file, linenum, args[0], NULL))
+ err_code |= ERR_WARN;
+
+ if (*(args[2]) == 0) {
+ Alert("parsing [%s:%d] : <%s> expects <status_code> and <file> as arguments.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ fd = open(args[2], O_RDONLY);
+ if ((fd < 0) || (fstat(fd, &stat) < 0)) {
+ Alert("parsing [%s:%d] : error opening file <%s> for custom error message <%s>.\n",
+ file, linenum, args[2], args[1]);
+ if (fd >= 0)
+ close(fd);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (stat.st_size <= global.tune.bufsize) {
+ errlen = stat.st_size;
+ } else {
+ Warning("parsing [%s:%d] : custom error message file <%s> larger than %d bytes. Truncating.\n",
+ file, linenum, args[2], global.tune.bufsize);
+ err_code |= ERR_WARN;
+ errlen = global.tune.bufsize;
+ }
+
+ err = malloc(errlen); /* malloc() must succeed during parsing */
+ errnum = read(fd, err, errlen);
+ if (errnum != errlen) {
+ Alert("parsing [%s:%d] : error reading file <%s> for custom error message <%s>.\n",
+ file, linenum, args[2], args[1]);
+ close(fd);
+ free(err);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ close(fd);
+
+ errnum = atol(args[1]);
+ for (rc = 0; rc < HTTP_ERR_SIZE; rc++) {
+ if (http_err_codes[rc] == errnum) {
+ chunk_destroy(&curproxy->errmsg[rc]);
+ chunk_initlen(&curproxy->errmsg[rc], err, errlen, errlen);
+ break;
+ }
+ }
+
+ if (rc >= HTTP_ERR_SIZE) {
+ Warning("parsing [%s:%d] : status code %d not handled by '%s', error customization will be ignored.\n",
+ file, linenum, errnum, args[0]);
+ err_code |= ERR_WARN;
+ free(err);
+ }
+ }
+ else if (!strcmp(args[0], "compression")) {
+ struct comp *comp;
+ if (curproxy->comp == NULL) {
+ comp = calloc(1, sizeof(struct comp));
+ curproxy->comp = comp;
+ } else {
+ comp = curproxy->comp;
+ }
+
+ if (!strcmp(args[1], "algo")) {
+ int cur_arg;
+ struct comp_ctx *ctx;
+
+ cur_arg = 2;
+ if (!*args[cur_arg]) {
+ Alert("parsing [%s:%d] : '%s' expects <algorithm>\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ while (*(args[cur_arg])) {
+ if (comp_append_algo(comp, args[cur_arg]) < 0) {
+ Alert("parsing [%s:%d] : '%s' : '%s' is not a supported algorithm.\n",
+ file, linenum, args[0], args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ if (curproxy->comp->algos->init(&ctx, 9) == 0) {
+ curproxy->comp->algos->end(&ctx);
+ } else {
+ Alert("parsing [%s:%d] : '%s' : Can't init '%s' algorithm.\n",
+ file, linenum, args[0], args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ cur_arg ++;
+ continue;
+ }
+ }
+ else if (!strcmp(args[1], "offload")) {
+ comp->offload = 1;
+ }
+ else if (!strcmp(args[1], "type")) {
+ int cur_arg;
+ cur_arg = 2;
+ if (!*args[cur_arg]) {
+ Alert("parsing [%s:%d] : '%s' expects <type>\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ while (*(args[cur_arg])) {
+ comp_append_type(comp, args[cur_arg]);
+ cur_arg ++;
+ continue;
+ }
+ }
+ else {
+ Alert("parsing [%s:%d] : '%s' expects 'algo', 'type' or 'offload'\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ else {
+ struct cfg_kw_list *kwl;
+ int index;
+
+ list_for_each_entry(kwl, &cfg_keywords.list, list) {
+ for (index = 0; kwl->kw[index].kw != NULL; index++) {
+ if (kwl->kw[index].section != CFG_LISTEN)
+ continue;
+ if (strcmp(kwl->kw[index].kw, args[0]) == 0) {
+ /* prepare error message just in case */
+ rc = kwl->kw[index].parse(args, CFG_LISTEN, curproxy, &defproxy, file, linenum, &errmsg);
+ if (rc < 0) {
+ Alert("parsing [%s:%d] : %s\n", file, linenum, errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ else if (rc > 0) {
+ Warning("parsing [%s:%d] : %s\n", file, linenum, errmsg);
+ err_code |= ERR_WARN;
+ goto out;
+ }
+ goto out;
+ }
+ }
+ }
+
+ Alert("parsing [%s:%d] : unknown keyword '%s' in '%s' section\n", file, linenum, args[0], cursection);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ out:
+ free(errmsg);
+ return err_code;
+}
+
+int
+cfg_parse_netns(const char *file, int linenum, char **args, int kwm)
+{
+#ifdef CONFIG_HAP_NS
+ const char *err;
+ const char *item = args[0];
+
+ if (!strcmp(item, "namespace_list")) {
+ return 0;
+ }
+ else if (!strcmp(item, "namespace")) {
+ size_t idx = 1;
+ const char *current;
+ while (*(current = args[idx++])) {
+ err = invalid_char(current);
+ if (err) {
+ Alert("parsing [%s:%d]: character '%c' is not permitted in '%s' name '%s'.\n",
+ file, linenum, *err, item, current);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ if (netns_store_lookup(current, strlen(current))) {
+ Alert("parsing [%s:%d]: Namespace '%s' is already added.\n",
+ file, linenum, current);
+ return ERR_ALERT | ERR_FATAL;
+ }
+ if (!netns_store_insert(current)) {
+ Alert("parsing [%s:%d]: Cannot open namespace '%s'.\n",
+ file, linenum, current);
+ return ERR_ALERT | ERR_FATAL;
+ }
+ }
+ }
+
+ return 0;
+#else
+ Alert("parsing [%s:%d]: namespace support is not compiled in.\n",
+ file, linenum);
+ return ERR_ALERT | ERR_FATAL;
+#endif
+}
+
+int
+cfg_parse_users(const char *file, int linenum, char **args, int kwm)
+{
+ int err_code = 0;
+ const char *err;
+
+ if (!strcmp(args[0], "userlist")) { /* new userlist */
+ struct userlist *newul;
+
+ if (!*args[1]) {
+ Alert("parsing [%s:%d]: '%s' expects <name> as arguments.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ if (alertif_too_many_args(1, file, linenum, args, &err_code))
+ goto out;
+
+ err = invalid_char(args[1]);
+ if (err) {
+ Alert("parsing [%s:%d]: character '%c' is not permitted in '%s' name '%s'.\n",
+ file, linenum, *err, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ for (newul = userlist; newul; newul = newul->next)
+ if (!strcmp(newul->name, args[1])) {
+ Warning("parsing [%s:%d]: ignoring duplicated userlist '%s'.\n",
+ file, linenum, args[1]);
+ err_code |= ERR_WARN;
+ goto out;
+ }
+
+ newul = (struct userlist *)calloc(1, sizeof(struct userlist));
+ if (!newul) {
+ Alert("parsing [%s:%d]: out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ newul->name = strdup(args[1]);
+ if (!newul->name) {
+ Alert("parsing [%s:%d]: out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ newul->next = userlist;
+ userlist = newul;
+
+ } else if (!strcmp(args[0], "group")) { /* new group */
+ int cur_arg;
+ const char *err;
+ struct auth_groups *ag;
+
+ if (!*args[1]) {
+ Alert("parsing [%s:%d]: '%s' expects <name> as arguments.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ err = invalid_char(args[1]);
+ if (err) {
+ Alert("parsing [%s:%d]: character '%c' is not permitted in '%s' name '%s'.\n",
+ file, linenum, *err, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (!userlist)
+ goto out;
+
+ for (ag = userlist->groups; ag; ag = ag->next)
+ if (!strcmp(ag->name, args[1])) {
+ Warning("parsing [%s:%d]: ignoring duplicated group '%s' in userlist '%s'.\n",
+ file, linenum, args[1], userlist->name);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+
+ ag = calloc(1, sizeof(*ag));
+ if (!ag) {
+ Alert("parsing [%s:%d]: out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ ag->name = strdup(args[1]);
+ if (!ag->name) {
+ Alert("parsing [%s:%d]: out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ cur_arg = 2;
+
+ while (*args[cur_arg]) {
+ if (!strcmp(args[cur_arg], "users")) {
+ ag->groupusers = strdup(args[cur_arg + 1]);
+ cur_arg += 2;
+ continue;
+ } else {
+ Alert("parsing [%s:%d]: '%s' only supports 'users' option.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+
+ ag->next = userlist->groups;
+ userlist->groups = ag;
+
+ } else if (!strcmp(args[0], "user")) { /* new user */
+ struct auth_users *newuser;
+ int cur_arg;
+
+ if (!*args[1]) {
+ Alert("parsing [%s:%d]: '%s' expects <name> as arguments.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ if (!userlist)
+ goto out;
+
+ for (newuser = userlist->users; newuser; newuser = newuser->next)
+ if (!strcmp(newuser->user, args[1])) {
+ Warning("parsing [%s:%d]: ignoring duplicated user '%s' in userlist '%s'.\n",
+ file, linenum, args[1], userlist->name);
+ err_code |= ERR_ALERT;
+ goto out;
+ }
+
+ newuser = (struct auth_users *)calloc(1, sizeof(struct auth_users));
+ if (!newuser) {
+ Alert("parsing [%s:%d]: out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ newuser->user = strdup(args[1]);
+
+ newuser->next = userlist->users;
+ userlist->users = newuser;
+
+ cur_arg = 2;
+
+ while (*args[cur_arg]) {
+ if (!strcmp(args[cur_arg], "password")) {
+#ifdef CONFIG_HAP_CRYPT
+ if (!crypt("", args[cur_arg + 1])) {
+ Alert("parsing [%s:%d]: the encrypted password used for user '%s' is not supported by crypt(3).\n",
+ file, linenum, newuser->user);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+#else
+ Warning("parsing [%s:%d]: no crypt(3) support compiled, encrypted passwords will not work.\n",
+ file, linenum);
+ err_code |= ERR_ALERT;
+#endif
+ newuser->pass = strdup(args[cur_arg + 1]);
+ cur_arg += 2;
+ continue;
+ } else if (!strcmp(args[cur_arg], "insecure-password")) {
+ newuser->pass = strdup(args[cur_arg + 1]);
+ newuser->flags |= AU_O_INSECURE;
+ cur_arg += 2;
+ continue;
+ } else if (!strcmp(args[cur_arg], "groups")) {
+ newuser->u.groups_names = strdup(args[cur_arg + 1]);
+ cur_arg += 2;
+ continue;
+ } else {
+ Alert("parsing [%s:%d]: '%s' only supports 'password', 'insecure-password' and 'groups' options.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ } else {
+ Alert("parsing [%s:%d]: unknown keyword '%s' in '%s' section\n", file, linenum, args[0], "users");
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+
+out:
+ return err_code;
+}
+
+/*
+ * This function reads and parses the configuration file given in the argument.
+ * Returns the error code, 0 if OK, or any combination of :
+ * - ERR_ABORT: must abort ASAP
+ * - ERR_FATAL: we can continue parsing but not start the service
+ * - ERR_WARN: a warning has been emitted
+ * - ERR_ALERT: an alert has been emitted
+ * Only the two first ones can stop processing, the two others are just
+ * indicators.
+ */
+int readcfgfile(const char *file)
+{
+ char *thisline;
+ int linesize = LINESIZE;
+ FILE *f;
+ int linenum = 0;
+ int err_code = 0;
+ struct cfg_section *cs = NULL;
+ struct cfg_section *ics;
+ int readbytes = 0;
+
+ if ((thisline = malloc(sizeof(*thisline) * linesize)) == NULL) {
+ Alert("parsing [%s] : out of memory.\n", file);
+ return -1;
+ }
+
+ /* Register internal sections */
+ if (!cfg_register_section("listen", cfg_parse_listen) ||
+ !cfg_register_section("frontend", cfg_parse_listen) ||
+ !cfg_register_section("backend", cfg_parse_listen) ||
+ !cfg_register_section("defaults", cfg_parse_listen) ||
+ !cfg_register_section("global", cfg_parse_global) ||
+ !cfg_register_section("userlist", cfg_parse_users) ||
+ !cfg_register_section("peers", cfg_parse_peers) ||
+ !cfg_register_section("mailers", cfg_parse_mailers) ||
+ !cfg_register_section("namespace_list", cfg_parse_netns) ||
+ !cfg_register_section("resolvers", cfg_parse_resolvers))
+ return -1;
+
+ if ((f=fopen(file,"r")) == NULL)
+ return -1;
+
+next_line:
+ while (fgets(thisline + readbytes, linesize - readbytes, f) != NULL) {
+ int arg, kwm = KWM_STD;
+ char *end;
+ char *args[MAX_LINE_ARGS + 1];
+ char *line = thisline;
+ int dquote = 0; /* double quote */
+ int squote = 0; /* simple quote */
+
+ linenum++;
+
+ end = line + strlen(line);
+
+ if (end-line == linesize-1 && *(end-1) != '\n') {
+ /* Check if we reached the limit and the last char is not \n.
+ * Watch out for the last line without the terminating '\n'!
+ */
+ char *newline;
+ int newlinesize = linesize * 2;
+
+ newline = realloc(thisline, sizeof(*thisline) * newlinesize);
+ if (newline == NULL) {
+ Alert("parsing [%s:%d]: line too long, cannot allocate memory.\n",
+ file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ continue;
+ }
+
+ readbytes = linesize - 1;
+ linesize = newlinesize;
+ thisline = newline;
+ continue;
+ }
+
+ readbytes = 0;
+
+ /* skip leading spaces */
+ while (isspace((unsigned char)*line))
+ line++;
+
+ arg = 0;
+ args[arg] = line;
+
+ while (*line && arg < MAX_LINE_ARGS) {
+ if (*line == '"' && !squote) { /* double quote outside single quotes */
+ if (dquote)
+ dquote = 0;
+ else
+ dquote = 1;
+ memmove(line, line + 1, end - line);
+ end--;
+ }
+ else if (*line == '\'' && !dquote) { /* single quote outside double quotes */
+ if (squote)
+ squote = 0;
+ else
+ squote = 1;
+ memmove(line, line + 1, end - line);
+ end--;
+ }
+ else if (*line == '\\' && !squote) {
+ /* first, we'll replace \\, \<space>, \#, \r, \n, \t, \xXX with their
+ * C equivalent value. Other combinations left unchanged (eg: \1).
+ */
+ int skip = 0;
+ if (line[1] == ' ' || line[1] == '\\' || line[1] == '#') {
+ *line = line[1];
+ skip = 1;
+ }
+ else if (line[1] == 'r') {
+ *line = '\r';
+ skip = 1;
+ }
+ else if (line[1] == 'n') {
+ *line = '\n';
+ skip = 1;
+ }
+ else if (line[1] == 't') {
+ *line = '\t';
+ skip = 1;
+ }
+ else if (line[1] == 'x') {
+ if ((line + 3 < end) && ishex(line[2]) && ishex(line[3])) {
+ unsigned char hex1, hex2;
+ hex1 = toupper(line[2]) - '0';
+ hex2 = toupper(line[3]) - '0';
+ if (hex1 > 9) hex1 -= 'A' - '9' - 1;
+ if (hex2 > 9) hex2 -= 'A' - '9' - 1;
+ *line = (hex1<<4) + hex2;
+ skip = 3;
+ }
+ else {
+ Alert("parsing [%s:%d] : invalid or incomplete '\\x' sequence in '%s'.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+ } else if (line[1] == '"') {
+ *line = '"';
+ skip = 1;
+ } else if (line[1] == '\'') {
+ *line = '\'';
+ skip = 1;
+ } else if (line[1] == '$' && dquote) { /* escaping of $ only inside double quotes */
+ *line = '$';
+ skip = 1;
+ }
+ if (skip) {
+ memmove(line + 1, line + 1 + skip, end - (line + skip));
+ end -= skip;
+ }
+ line++;
+ }
+ else if ((!squote && !dquote && *line == '#') || *line == '\n' || *line == '\r') {
+ /* end of string, end of loop */
+ *line = 0;
+ break;
+ }
+ else if (!squote && !dquote && isspace((unsigned char)*line)) {
+ /* a non-escaped space is an argument separator */
+ *line++ = '\0';
+ while (isspace((unsigned char)*line))
+ line++;
+ args[++arg] = line;
+ }
+ else if (dquote && *line == '$') {
+ /* environment variables are evaluated inside double quotes */
+ char *var_beg;
+ char *var_end;
+ char save_char;
+ char *value;
+ int val_len;
+ int newlinesize;
+ int braces = 0;
+
+ var_beg = line + 1;
+ var_end = var_beg;
+
+ if (*var_beg == '{') {
+ var_beg++;
+ var_end++;
+ braces = 1;
+ }
+
+ if (!isalpha((int)(unsigned char)*var_beg) && *var_beg != '_') {
+ Alert("parsing [%s:%d] : Variable expansion: Unrecognized character '%c' in variable name.\n", file, linenum, *var_beg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto next_line; /* skip current line */
+ }
+
+ while (isalnum((int)(unsigned char)*var_end) || *var_end == '_')
+ var_end++;
+
+ save_char = *var_end;
+ *var_end = '\0';
+ value = getenv(var_beg);
+ *var_end = save_char;
+ val_len = value ? strlen(value) : 0;
+
+ if (braces) {
+ if (*var_end == '}') {
+ var_end++;
+ braces = 0;
+ } else {
+ Alert("parsing [%s:%d] : Variable expansion: Mismatched braces.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto next_line; /* skip current line */
+ }
+ }
+
+ newlinesize = (end - thisline) - (var_end - line) + val_len + 1;
+
+ /* if not enough space in thisline */
+ if (newlinesize > linesize) {
+ char *newline;
+
+ newline = realloc(thisline, newlinesize * sizeof(*thisline));
+ if (newline == NULL) {
+ Alert("parsing [%s:%d] : Variable expansion: Not enough memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+					goto next_line; /* skip current line */
+ }
+ /* recompute pointers if realloc returns a new pointer */
+ if (newline != thisline) {
+ int i;
+ int diff;
+
+ for (i = 0; i <= arg; i++) {
+ diff = args[i] - thisline;
+ args[i] = newline + diff;
+ }
+
+ diff = var_end - thisline;
+ var_end = newline + diff;
+ diff = end - thisline;
+ end = newline + diff;
+ diff = line - thisline;
+ line = newline + diff;
+ thisline = newline;
+ }
+ linesize = newlinesize;
+ }
+
+ /* insert value inside the line */
+ memmove(line + val_len, var_end, end - var_end + 1);
+ memcpy(line, value, val_len);
+ end += val_len - (var_end - line);
+ line += val_len;
+ }
+ else {
+ line++;
+ }
+ }
+
+ if (dquote) {
+ Alert("parsing [%s:%d] : Mismatched double quotes.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+
+ if (squote) {
+ Alert("parsing [%s:%d] : Mismatched simple quotes.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+
+ /* empty line */
+ if (!**args)
+ continue;
+
+ if (*line) {
+ /* we had to stop due to too many args.
+ * Let's terminate the string, print the offending part then cut the
+ * last arg.
+ */
+ while (*line && *line != '#' && *line != '\n' && *line != '\r')
+ line++;
+ *line = '\0';
+
+ Alert("parsing [%s:%d]: line too long, truncating at word %d, position %ld: <%s>.\n",
+ file, linenum, arg + 1, (long)(args[arg] - thisline + 1), args[arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ args[arg] = line;
+ }
+
+	/* make all remaining args point to the trailing zero so that
+	 * callers always find at least one empty arg after the last one.
+	 */
+ while (++arg <= MAX_LINE_ARGS) {
+ args[arg] = line;
+ }
+
+ /* check for keyword modifiers "no" and "default" */
+ if (!strcmp(args[0], "no")) {
+ char *tmp;
+
+ kwm = KWM_NO;
+ tmp = args[0];
+ for (arg=0; *args[arg+1]; arg++)
+ args[arg] = args[arg+1]; // shift args after inversion
+ *tmp = '\0'; // fix the next arg to \0
+ args[arg] = tmp;
+ }
+ else if (!strcmp(args[0], "default")) {
+ kwm = KWM_DEF;
+ for (arg=0; *args[arg+1]; arg++)
+ args[arg] = args[arg+1]; // shift args after inversion
+ }
+
+	if (kwm != KWM_STD && strcmp(args[0], "option") != 0 &&
+	    strcmp(args[0], "log") != 0) {
+ Alert("parsing [%s:%d]: negation/default currently supported only for options and log.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+
+ /* detect section start */
+	list_for_each_entry(ics, &sections, list) {
+ if (strcmp(args[0], ics->section_name) == 0) {
+ cursection = ics->section_name;
+ cs = ics;
+ break;
+ }
+ }
+
+ /* else it's a section keyword */
+ if (cs)
+ err_code |= cs->section_parser(file, linenum, args, kwm);
+ else {
+ Alert("parsing [%s:%d]: unknown keyword '%s' out of section.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+
+ if (err_code & ERR_ABORT)
+ break;
+ }
+ cursection = NULL;
+ free(thisline);
+ fclose(f);
+ return err_code;
+}
+
+/* This function propagates processes from frontend <from> to backend <to> so
+ * that it is always guaranteed that a backend pointed to by a frontend is
+ * bound to all of its processes. After that, if the target is a "listen"
+ * instance, the function recursively descends the target's own targets along
+ * default_backend and use_backend rules. Since the process bits are
+ * checked first to ensure that <to> is already bound to all processes of
+ * <from>, there is no risk of looping and we are guaranteed to follow the
+ * shortest path to the destination.
+ *
+ * It is possible to set <to> to NULL for the first call so that the function
+ * takes care of visiting the initial frontend in <from>.
+ *
+ * It is important to note that the function relies on the fact that all names
+ * have already been resolved.
+ */
+void propagate_processes(struct proxy *from, struct proxy *to)
+{
+ struct switching_rule *rule;
+
+ if (to) {
+ /* check whether we need to go down */
+ if (from->bind_proc &&
+ (from->bind_proc & to->bind_proc) == from->bind_proc)
+ return;
+
+ if (!from->bind_proc && !to->bind_proc)
+ return;
+
+ to->bind_proc = from->bind_proc ?
+ (to->bind_proc | from->bind_proc) : 0;
+
+ /* now propagate down */
+ from = to;
+ }
+
+ if (!(from->cap & PR_CAP_FE))
+ return;
+
+ if (from->state == PR_STSTOPPED)
+ return;
+
+ /* default_backend */
+ if (from->defbe.be)
+ propagate_processes(from, from->defbe.be);
+
+ /* use_backend */
+ list_for_each_entry(rule, &from->switching_rules, list) {
+ if (rule->dynamic)
+ continue;
+ to = rule->be.backend;
+ propagate_processes(from, to);
+ }
+}
+
+/*
+ * Returns the error code, 0 if OK, or any combination of :
+ * - ERR_ABORT: must abort ASAP
+ * - ERR_FATAL: we can continue parsing but not start the service
+ * - ERR_WARN: a warning has been emitted
+ * - ERR_ALERT: an alert has been emitted
+ * Only the first two can stop processing; the other two are just
+ * indicators.
+ */
+int check_config_validity()
+{
+ int cfgerr = 0;
+ struct proxy *curproxy = NULL;
+ struct server *newsrv = NULL;
+ int err_code = 0;
+ unsigned int next_pxid = 1;
+ struct bind_conf *bind_conf;
+
+ bind_conf = NULL;
+ /*
+ * Now, check for the integrity of all that we have collected.
+ */
+
+ /* will be needed further to delay some tasks */
+ tv_update_date(0,1);
+
+ if (!global.tune.max_http_hdr)
+ global.tune.max_http_hdr = MAX_HTTP_HDR;
+
+ if (!global.tune.cookie_len)
+ global.tune.cookie_len = CAPTURE_LEN;
+
+ pool2_capture = create_pool("capture", global.tune.cookie_len, MEM_F_SHARED);
+
+ /* Post initialisation of the users and groups lists. */
+ err_code = userlist_postinit();
+ if (err_code != ERR_NONE)
+ goto out;
+
+ /* first, we will invert the proxy list order */
+ curproxy = NULL;
+ while (proxy) {
+ struct proxy *next;
+
+ next = proxy->next;
+ proxy->next = curproxy;
+ curproxy = proxy;
+ if (!next)
+ break;
+ proxy = next;
+ }
+
+ for (curproxy = proxy; curproxy; curproxy = curproxy->next) {
+ struct switching_rule *rule;
+ struct server_rule *srule;
+ struct sticking_rule *mrule;
+ struct act_rule *trule;
+ struct act_rule *hrqrule;
+ struct logsrv *tmplogsrv;
+ unsigned int next_id;
+ int nbproc;
+
+ if (curproxy->uuid < 0) {
+ /* proxy ID not set, use automatic numbering with first
+ * spare entry starting with next_pxid.
+ */
+ next_pxid = get_next_id(&used_proxy_id, next_pxid);
+ curproxy->conf.id.key = curproxy->uuid = next_pxid;
+ eb32_insert(&used_proxy_id, &curproxy->conf.id);
+ }
+ next_pxid++;
+
+ if (curproxy->state == PR_STSTOPPED) {
+ /* ensure we don't keep listeners uselessly bound */
+ stop_proxy(curproxy);
+ free((void *)curproxy->table.peers.name);
+ curproxy->table.peers.p = NULL;
+ continue;
+ }
+
+ /* Check multi-process mode compatibility for the current proxy */
+
+ if (curproxy->bind_proc) {
+ /* an explicit bind-process was specified, let's check how many
+ * processes remain.
+ */
+ nbproc = my_popcountl(curproxy->bind_proc);
+
+ curproxy->bind_proc &= nbits(global.nbproc);
+ if (!curproxy->bind_proc && nbproc == 1) {
+ Warning("Proxy '%s': the process specified on the 'bind-process' directive refers to a process number that is higher than global.nbproc. The proxy has been forced to run on process 1 only.\n", curproxy->id);
+ curproxy->bind_proc = 1;
+ }
+ else if (!curproxy->bind_proc && nbproc > 1) {
+ Warning("Proxy '%s': all processes specified on the 'bind-process' directive refer to numbers that are all higher than global.nbproc. The directive was ignored and the proxy will run on all processes.\n", curproxy->id);
+ curproxy->bind_proc = 0;
+ }
+ }
+
+ /* check and reduce the bind-proc of each listener */
+ list_for_each_entry(bind_conf, &curproxy->conf.bind, by_fe) {
+ unsigned long mask;
+
+ if (!bind_conf->bind_proc)
+ continue;
+
+ mask = nbits(global.nbproc);
+ if (curproxy->bind_proc)
+ mask &= curproxy->bind_proc;
+ /* mask cannot be null here thanks to the previous checks */
+
+ nbproc = my_popcountl(bind_conf->bind_proc);
+ bind_conf->bind_proc &= mask;
+
+ if (!bind_conf->bind_proc && nbproc == 1) {
+ Warning("Proxy '%s': the process number specified on the 'process' directive of 'bind %s' at [%s:%d] refers to a process not covered by the proxy. This has been fixed by forcing it to run on the proxy's first process only.\n",
+ curproxy->id, bind_conf->arg, bind_conf->file, bind_conf->line);
+ bind_conf->bind_proc = mask & ~(mask - 1);
+ }
+ else if (!bind_conf->bind_proc && nbproc > 1) {
+ Warning("Proxy '%s': the process range specified on the 'process' directive of 'bind %s' at [%s:%d] only refers to processes not covered by the proxy. The directive was ignored so that all of the proxy's processes are used.\n",
+ curproxy->id, bind_conf->arg, bind_conf->file, bind_conf->line);
+ bind_conf->bind_proc = 0;
+ }
+ }
+
+ switch (curproxy->mode) {
+ case PR_MODE_HEALTH:
+ cfgerr += proxy_cfg_ensure_no_http(curproxy);
+ if (!(curproxy->cap & PR_CAP_FE)) {
+ Alert("config : %s '%s' cannot be in health mode as it has no frontend capability.\n",
+ proxy_type_str(curproxy), curproxy->id);
+ cfgerr++;
+ }
+
+ if (curproxy->srv != NULL)
+ Warning("config : servers will be ignored for %s '%s'.\n",
+ proxy_type_str(curproxy), curproxy->id);
+ break;
+
+ case PR_MODE_TCP:
+ cfgerr += proxy_cfg_ensure_no_http(curproxy);
+ break;
+
+ case PR_MODE_HTTP:
+ curproxy->http_needed = 1;
+ break;
+ }
+
+ if ((curproxy->cap & PR_CAP_FE) && LIST_ISEMPTY(&curproxy->conf.listeners)) {
+ Warning("config : %s '%s' has no 'bind' directive. Please declare it as a backend if this was intended.\n",
+ proxy_type_str(curproxy), curproxy->id);
+ err_code |= ERR_WARN;
+ }
+
+ if ((curproxy->cap & PR_CAP_BE) && (curproxy->mode != PR_MODE_HEALTH)) {
+ if (curproxy->lbprm.algo & BE_LB_KIND) {
+ if (curproxy->options & PR_O_TRANSP) {
+ Alert("config : %s '%s' cannot use both transparent and balance mode.\n",
+ proxy_type_str(curproxy), curproxy->id);
+ cfgerr++;
+ }
+#ifdef WE_DONT_SUPPORT_SERVERLESS_LISTENERS
+ else if (curproxy->srv == NULL) {
+ Alert("config : %s '%s' needs at least 1 server in balance mode.\n",
+ proxy_type_str(curproxy), curproxy->id);
+ cfgerr++;
+ }
+#endif
+ else if (curproxy->options & PR_O_DISPATCH) {
+ Warning("config : dispatch address of %s '%s' will be ignored in balance mode.\n",
+ proxy_type_str(curproxy), curproxy->id);
+ err_code |= ERR_WARN;
+ }
+ }
+ else if (!(curproxy->options & (PR_O_TRANSP | PR_O_DISPATCH | PR_O_HTTP_PROXY))) {
+ /* If no LB algo is set in a backend, and we're not in
+ * transparent mode, dispatch mode nor proxy mode, we
+ * want to use balance roundrobin by default.
+ */
+ curproxy->lbprm.algo &= ~BE_LB_ALGO;
+ curproxy->lbprm.algo |= BE_LB_ALGO_RR;
+ }
+ }
+
+ if (curproxy->options & PR_O_DISPATCH)
+ curproxy->options &= ~(PR_O_TRANSP | PR_O_HTTP_PROXY);
+ else if (curproxy->options & PR_O_HTTP_PROXY)
+ curproxy->options &= ~(PR_O_DISPATCH | PR_O_TRANSP);
+ else if (curproxy->options & PR_O_TRANSP)
+ curproxy->options &= ~(PR_O_DISPATCH | PR_O_HTTP_PROXY);
+
+ if ((curproxy->options2 & PR_O2_CHK_ANY) != PR_O2_HTTP_CHK) {
+ if (curproxy->options & PR_O_DISABLE404) {
+ Warning("config : '%s' will be ignored for %s '%s' (requires 'option httpchk').\n",
+ "disable-on-404", proxy_type_str(curproxy), curproxy->id);
+ err_code |= ERR_WARN;
+ curproxy->options &= ~PR_O_DISABLE404;
+ }
+ if (curproxy->options2 & PR_O2_CHK_SNDST) {
+ Warning("config : '%s' will be ignored for %s '%s' (requires 'option httpchk').\n",
+ "send-state", proxy_type_str(curproxy), curproxy->id);
+ err_code |= ERR_WARN;
+ curproxy->options &= ~PR_O2_CHK_SNDST;
+ }
+ }
+
+ if ((curproxy->options2 & PR_O2_CHK_ANY) == PR_O2_EXT_CHK) {
+ if (!global.external_check) {
+ Alert("Proxy '%s' : '%s' unable to find required 'global.external-check'.\n",
+ curproxy->id, "option external-check");
+ cfgerr++;
+ }
+ if (!curproxy->check_command) {
+ Alert("Proxy '%s' : '%s' unable to find required 'external-check command'.\n",
+ curproxy->id, "option external-check");
+ cfgerr++;
+ }
+ }
+
+ if (curproxy->email_alert.set) {
+ if (!(curproxy->email_alert.mailers.name && curproxy->email_alert.from && curproxy->email_alert.to)) {
+				Warning("config : 'email-alert' will be ignored for %s '%s' (the presence of any of "
+					"'email-alert from', 'email-alert level', 'email-alert mailers', "
+ "'email-alert myhostname', or 'email-alert to' "
+ "requires each of 'email-alert from', 'email-alert mailers' and 'email-alert to' "
+ "to be present).\n",
+ proxy_type_str(curproxy), curproxy->id);
+ err_code |= ERR_WARN;
+ free_email_alert(curproxy);
+ }
+ if (!curproxy->email_alert.myhostname)
+ curproxy->email_alert.myhostname = strdup(hostname);
+ }
+
+ if (curproxy->check_command) {
+ int clear = 0;
+ if ((curproxy->options2 & PR_O2_CHK_ANY) != PR_O2_EXT_CHK) {
+ Warning("config : '%s' will be ignored for %s '%s' (requires 'option external-check').\n",
+ "external-check command", proxy_type_str(curproxy), curproxy->id);
+ err_code |= ERR_WARN;
+ clear = 1;
+ }
+ if (curproxy->check_command[0] != '/' && !curproxy->check_path) {
+ Alert("Proxy '%s': '%s' does not have a leading '/' and 'external-check path' is not set.\n",
+ curproxy->id, "external-check command");
+ cfgerr++;
+ }
+ if (clear) {
+ free(curproxy->check_command);
+ curproxy->check_command = NULL;
+ }
+ }
+
+ if (curproxy->check_path) {
+ if ((curproxy->options2 & PR_O2_CHK_ANY) != PR_O2_EXT_CHK) {
+ Warning("config : '%s' will be ignored for %s '%s' (requires 'option external-check').\n",
+ "external-check path", proxy_type_str(curproxy), curproxy->id);
+ err_code |= ERR_WARN;
+ free(curproxy->check_path);
+ curproxy->check_path = NULL;
+ }
+ }
+
+ /* if a default backend was specified, let's find it */
+ if (curproxy->defbe.name) {
+ struct proxy *target;
+
+ target = proxy_be_by_name(curproxy->defbe.name);
+ if (!target) {
+ Alert("Proxy '%s': unable to find required default_backend: '%s'.\n",
+ curproxy->id, curproxy->defbe.name);
+ cfgerr++;
+ } else if (target == curproxy) {
+ Alert("Proxy '%s': loop detected for default_backend: '%s'.\n",
+ curproxy->id, curproxy->defbe.name);
+ cfgerr++;
+ } else if (target->mode != curproxy->mode &&
+ !(curproxy->mode == PR_MODE_TCP && target->mode == PR_MODE_HTTP)) {
+
+ Alert("%s %s '%s' (%s:%d) tries to use incompatible %s %s '%s' (%s:%d) as its default backend (see 'mode').\n",
+ proxy_mode_str(curproxy->mode), proxy_type_str(curproxy), curproxy->id,
+ curproxy->conf.file, curproxy->conf.line,
+ proxy_mode_str(target->mode), proxy_type_str(target), target->id,
+ target->conf.file, target->conf.line);
+ cfgerr++;
+ } else {
+ free(curproxy->defbe.name);
+ curproxy->defbe.be = target;
+
+ /* Emit a warning if this proxy also has some servers */
+ if (curproxy->srv) {
+ Warning("In proxy '%s', the 'default_backend' rule always has precedence over the servers, which will never be used.\n",
+ curproxy->id);
+ err_code |= ERR_WARN;
+ }
+ }
+ }
+
+ /* find the target proxy for 'use_backend' rules */
+ list_for_each_entry(rule, &curproxy->switching_rules, list) {
+ struct proxy *target;
+ struct logformat_node *node;
+ char *pxname;
+
+ /* Try to parse the string as a log format expression. If the result
+ * of the parsing is only one entry containing a simple string, then
+ * it's a standard string corresponding to a static rule, thus the
+ * parsing is cancelled and be.name is restored to be resolved.
+ */
+ pxname = rule->be.name;
+ LIST_INIT(&rule->be.expr);
+ parse_logformat_string(pxname, curproxy, &rule->be.expr, 0, SMP_VAL_FE_HRQ_HDR,
+ curproxy->conf.args.file, curproxy->conf.args.line);
+ node = LIST_NEXT(&rule->be.expr, struct logformat_node *, list);
+
+ if (!LIST_ISEMPTY(&rule->be.expr)) {
+ if (node->type != LOG_FMT_TEXT || node->list.n != &rule->be.expr) {
+ rule->dynamic = 1;
+ free(pxname);
+ continue;
+ }
+ /* simple string: free the expression and fall back to static rule */
+ free(node->arg);
+ free(node);
+ }
+
+ rule->dynamic = 0;
+ rule->be.name = pxname;
+
+ target = proxy_be_by_name(rule->be.name);
+ if (!target) {
+ Alert("Proxy '%s': unable to find required use_backend: '%s'.\n",
+ curproxy->id, rule->be.name);
+ cfgerr++;
+ } else if (target == curproxy) {
+ Alert("Proxy '%s': loop detected for use_backend: '%s'.\n",
+ curproxy->id, rule->be.name);
+ cfgerr++;
+ } else if (target->mode != curproxy->mode &&
+ !(curproxy->mode == PR_MODE_TCP && target->mode == PR_MODE_HTTP)) {
+
+ Alert("%s %s '%s' (%s:%d) tries to use incompatible %s %s '%s' (%s:%d) in a 'use_backend' rule (see 'mode').\n",
+ proxy_mode_str(curproxy->mode), proxy_type_str(curproxy), curproxy->id,
+ curproxy->conf.file, curproxy->conf.line,
+ proxy_mode_str(target->mode), proxy_type_str(target), target->id,
+ target->conf.file, target->conf.line);
+ cfgerr++;
+ } else {
+ free((void *)rule->be.name);
+ rule->be.backend = target;
+ }
+ }
+
+ /* find the target server for 'use_server' rules */
+ list_for_each_entry(srule, &curproxy->server_rules, list) {
+ struct server *target = findserver(curproxy, srule->srv.name);
+
+ if (!target) {
+ Alert("config : %s '%s' : unable to find server '%s' referenced in a 'use-server' rule.\n",
+ proxy_type_str(curproxy), curproxy->id, srule->srv.name);
+ cfgerr++;
+ continue;
+ }
+ free((void *)srule->srv.name);
+ srule->srv.ptr = target;
+ }
+
+ /* find the target table for 'stick' rules */
+ list_for_each_entry(mrule, &curproxy->sticking_rules, list) {
+ struct proxy *target;
+
+ curproxy->be_req_ana |= AN_REQ_STICKING_RULES;
+ if (mrule->flags & STK_IS_STORE)
+ curproxy->be_rsp_ana |= AN_RES_STORE_RULES;
+
+ if (mrule->table.name)
+ target = proxy_tbl_by_name(mrule->table.name);
+ else
+ target = curproxy;
+
+ if (!target) {
+ Alert("Proxy '%s': unable to find stick-table '%s'.\n",
+ curproxy->id, mrule->table.name);
+ cfgerr++;
+ }
+ else if (target->table.size == 0) {
+ Alert("Proxy '%s': stick-table '%s' used but not configured.\n",
+ curproxy->id, mrule->table.name ? mrule->table.name : curproxy->id);
+ cfgerr++;
+ }
+ else if (!stktable_compatible_sample(mrule->expr, target->table.type)) {
+ Alert("Proxy '%s': type of fetch not usable with type of stick-table '%s'.\n",
+ curproxy->id, mrule->table.name ? mrule->table.name : curproxy->id);
+ cfgerr++;
+ }
+ else {
+ free((void *)mrule->table.name);
+ mrule->table.t = &(target->table);
+ stktable_alloc_data_type(&target->table, STKTABLE_DT_SERVER_ID, NULL);
+ }
+ }
+
+ /* find the target table for 'store response' rules */
+ list_for_each_entry(mrule, &curproxy->storersp_rules, list) {
+ struct proxy *target;
+
+ curproxy->be_rsp_ana |= AN_RES_STORE_RULES;
+
+ if (mrule->table.name)
+ target = proxy_tbl_by_name(mrule->table.name);
+ else
+ target = curproxy;
+
+ if (!target) {
+ Alert("Proxy '%s': unable to find store table '%s'.\n",
+ curproxy->id, mrule->table.name);
+ cfgerr++;
+ }
+ else if (target->table.size == 0) {
+ Alert("Proxy '%s': stick-table '%s' used but not configured.\n",
+ curproxy->id, mrule->table.name ? mrule->table.name : curproxy->id);
+ cfgerr++;
+ }
+ else if (!stktable_compatible_sample(mrule->expr, target->table.type)) {
+ Alert("Proxy '%s': type of fetch not usable with type of stick-table '%s'.\n",
+ curproxy->id, mrule->table.name ? mrule->table.name : curproxy->id);
+ cfgerr++;
+ }
+ else {
+ free((void *)mrule->table.name);
+ mrule->table.t = &(target->table);
+ stktable_alloc_data_type(&target->table, STKTABLE_DT_SERVER_ID, NULL);
+ }
+ }
+
+ /* find the target table for 'tcp-request' layer 4 rules */
+ list_for_each_entry(trule, &curproxy->tcp_req.l4_rules, list) {
+ struct proxy *target;
+
+ if (trule->action < ACT_ACTION_TRK_SC0 || trule->action > ACT_ACTION_TRK_SCMAX)
+ continue;
+
+ if (trule->arg.trk_ctr.table.n)
+ target = proxy_tbl_by_name(trule->arg.trk_ctr.table.n);
+ else
+ target = curproxy;
+
+ if (!target) {
+ Alert("Proxy '%s': unable to find table '%s' referenced by track-sc%d.\n",
+ curproxy->id, trule->arg.trk_ctr.table.n,
+ tcp_trk_idx(trule->action));
+ cfgerr++;
+ }
+ else if (target->table.size == 0) {
+ Alert("Proxy '%s': table '%s' used but not configured.\n",
+ curproxy->id, trule->arg.trk_ctr.table.n ? trule->arg.trk_ctr.table.n : curproxy->id);
+ cfgerr++;
+ }
+ else if (!stktable_compatible_sample(trule->arg.trk_ctr.expr, target->table.type)) {
+ Alert("Proxy '%s': stick-table '%s' uses a type incompatible with the 'track-sc%d' rule.\n",
+ curproxy->id, trule->arg.trk_ctr.table.n ? trule->arg.trk_ctr.table.n : curproxy->id,
+ tcp_trk_idx(trule->action));
+ cfgerr++;
+ }
+ else {
+ free(trule->arg.trk_ctr.table.n);
+ trule->arg.trk_ctr.table.t = &target->table;
+ /* Note: if we decide to enhance the track-sc syntax, we may be able
+ * to pass a list of counters to track and allocate them right here using
+ * stktable_alloc_data_type().
+ */
+ }
+ }
+
+ /* find the target table for 'tcp-request' layer 6 rules */
+ list_for_each_entry(trule, &curproxy->tcp_req.inspect_rules, list) {
+ struct proxy *target;
+
+ if (trule->action < ACT_ACTION_TRK_SC0 || trule->action > ACT_ACTION_TRK_SCMAX)
+ continue;
+
+ if (trule->arg.trk_ctr.table.n)
+ target = proxy_tbl_by_name(trule->arg.trk_ctr.table.n);
+ else
+ target = curproxy;
+
+ if (!target) {
+ Alert("Proxy '%s': unable to find table '%s' referenced by track-sc%d.\n",
+ curproxy->id, trule->arg.trk_ctr.table.n,
+ tcp_trk_idx(trule->action));
+ cfgerr++;
+ }
+ else if (target->table.size == 0) {
+ Alert("Proxy '%s': table '%s' used but not configured.\n",
+ curproxy->id, trule->arg.trk_ctr.table.n ? trule->arg.trk_ctr.table.n : curproxy->id);
+ cfgerr++;
+ }
+ else if (!stktable_compatible_sample(trule->arg.trk_ctr.expr, target->table.type)) {
+ Alert("Proxy '%s': stick-table '%s' uses a type incompatible with the 'track-sc%d' rule.\n",
+ curproxy->id, trule->arg.trk_ctr.table.n ? trule->arg.trk_ctr.table.n : curproxy->id,
+ tcp_trk_idx(trule->action));
+ cfgerr++;
+ }
+ else {
+ free(trule->arg.trk_ctr.table.n);
+ trule->arg.trk_ctr.table.t = &target->table;
+ /* Note: if we decide to enhance the track-sc syntax, we may be able
+ * to pass a list of counters to track and allocate them right here using
+ * stktable_alloc_data_type().
+ */
+ }
+ }
+
+ /* parse http-request capture rules to ensure id really exists */
+ list_for_each_entry(hrqrule, &curproxy->http_req_rules, list) {
+ if (hrqrule->action != ACT_CUSTOM ||
+ hrqrule->action_ptr != http_action_req_capture_by_id)
+ continue;
+
+ if (hrqrule->arg.capid.idx >= curproxy->nb_req_cap) {
+ Alert("Proxy '%s': unable to find capture id '%d' referenced by http-request capture rule.\n",
+ curproxy->id, hrqrule->arg.capid.idx);
+ cfgerr++;
+ }
+ }
+
+ /* parse http-response capture rules to ensure id really exists */
+ list_for_each_entry(hrqrule, &curproxy->http_res_rules, list) {
+ if (hrqrule->action != ACT_CUSTOM ||
+ hrqrule->action_ptr != http_action_res_capture_by_id)
+ continue;
+
+ if (hrqrule->arg.capid.idx >= curproxy->nb_rsp_cap) {
+ Alert("Proxy '%s': unable to find capture id '%d' referenced by http-response capture rule.\n",
+ curproxy->id, hrqrule->arg.capid.idx);
+ cfgerr++;
+ }
+ }
+
+ /* find the target table for 'http-request' layer 7 rules */
+ list_for_each_entry(hrqrule, &curproxy->http_req_rules, list) {
+ struct proxy *target;
+
+ if (hrqrule->action < ACT_ACTION_TRK_SC0 || hrqrule->action > ACT_ACTION_TRK_SCMAX)
+ continue;
+
+ if (hrqrule->arg.trk_ctr.table.n)
+ target = proxy_tbl_by_name(hrqrule->arg.trk_ctr.table.n);
+ else
+ target = curproxy;
+
+ if (!target) {
+ Alert("Proxy '%s': unable to find table '%s' referenced by track-sc%d.\n",
+ curproxy->id, hrqrule->arg.trk_ctr.table.n,
+ http_req_trk_idx(hrqrule->action));
+ cfgerr++;
+ }
+ else if (target->table.size == 0) {
+ Alert("Proxy '%s': table '%s' used but not configured.\n",
+ curproxy->id, hrqrule->arg.trk_ctr.table.n ? hrqrule->arg.trk_ctr.table.n : curproxy->id);
+ cfgerr++;
+ }
+ else if (!stktable_compatible_sample(hrqrule->arg.trk_ctr.expr, target->table.type)) {
+ Alert("Proxy '%s': stick-table '%s' uses a type incompatible with the 'track-sc%d' rule.\n",
+ curproxy->id, hrqrule->arg.trk_ctr.table.n ? hrqrule->arg.trk_ctr.table.n : curproxy->id,
+ http_req_trk_idx(hrqrule->action));
+ cfgerr++;
+ }
+ else {
+ free(hrqrule->arg.trk_ctr.table.n);
+ hrqrule->arg.trk_ctr.table.t = &target->table;
+ /* Note: if we decide to enhance the track-sc syntax, we may be able
+ * to pass a list of counters to track and allocate them right here using
+ * stktable_alloc_data_type().
+ */
+ }
+ }
+
+		/* move any "block" rules to the beginning of the http-request rules */
+ if (!LIST_ISEMPTY(&curproxy->block_rules)) {
+ /* insert block_rules into http_req_rules at the beginning */
+ curproxy->block_rules.p->n = curproxy->http_req_rules.n;
+ curproxy->http_req_rules.n->p = curproxy->block_rules.p;
+ curproxy->block_rules.n->p = &curproxy->http_req_rules;
+ curproxy->http_req_rules.n = curproxy->block_rules.n;
+ LIST_INIT(&curproxy->block_rules);
+ }
+
+ if (curproxy->table.peers.name) {
+ struct peers *curpeers = peers;
+
+ for (curpeers = peers; curpeers; curpeers = curpeers->next) {
+ if (strcmp(curpeers->id, curproxy->table.peers.name) == 0) {
+ free((void *)curproxy->table.peers.name);
+ curproxy->table.peers.p = curpeers;
+ break;
+ }
+ }
+
+ if (!curpeers) {
+ Alert("Proxy '%s': unable to find sync peers '%s'.\n",
+ curproxy->id, curproxy->table.peers.name);
+ free((void *)curproxy->table.peers.name);
+ curproxy->table.peers.p = NULL;
+ cfgerr++;
+ }
+ else if (curpeers->state == PR_STSTOPPED) {
+ /* silently disable this peers section */
+ curproxy->table.peers.p = NULL;
+ }
+ else if (!curpeers->peers_fe) {
+ Alert("Proxy '%s': unable to find local peer '%s' in peers section '%s'.\n",
+ curproxy->id, localpeer, curpeers->id);
+ curproxy->table.peers.p = NULL;
+ cfgerr++;
+ }
+ }
+
+ if (curproxy->email_alert.mailers.name) {
+ struct mailers *curmailers = mailers;
+
+ for (curmailers = mailers; curmailers; curmailers = curmailers->next) {
+ if (strcmp(curmailers->id, curproxy->email_alert.mailers.name) == 0) {
+ free(curproxy->email_alert.mailers.name);
+ curproxy->email_alert.mailers.m = curmailers;
+ curmailers->users++;
+ break;
+ }
+ }
+
+ if (!curmailers) {
+ Alert("Proxy '%s': unable to find mailers '%s'.\n",
+ curproxy->id, curproxy->email_alert.mailers.name);
+ free_email_alert(curproxy);
+ cfgerr++;
+ }
+ }
+
+ if (curproxy->uri_auth && !(curproxy->uri_auth->flags & ST_CONVDONE) &&
+ !LIST_ISEMPTY(&curproxy->uri_auth->http_req_rules) &&
+ (curproxy->uri_auth->userlist || curproxy->uri_auth->auth_realm )) {
+ Alert("%s '%s': stats 'auth'/'realm' and 'http-request' can't be used at the same time.\n",
+ "proxy", curproxy->id);
+ cfgerr++;
+ goto out_uri_auth_compat;
+ }
+
+ if (curproxy->uri_auth && curproxy->uri_auth->userlist && !(curproxy->uri_auth->flags & ST_CONVDONE)) {
+ const char *uri_auth_compat_req[10];
+ struct act_rule *rule;
+ int i = 0;
+
+ /* build the ACL condition from scratch. We're relying on anonymous ACLs for that */
+ uri_auth_compat_req[i++] = "auth";
+
+ if (curproxy->uri_auth->auth_realm) {
+ uri_auth_compat_req[i++] = "realm";
+ uri_auth_compat_req[i++] = curproxy->uri_auth->auth_realm;
+ }
+
+ uri_auth_compat_req[i++] = "unless";
+ uri_auth_compat_req[i++] = "{";
+ uri_auth_compat_req[i++] = "http_auth(.internal-stats-userlist)";
+ uri_auth_compat_req[i++] = "}";
+ uri_auth_compat_req[i++] = "";
+
+ rule = parse_http_req_cond(uri_auth_compat_req, "internal-stats-auth-compat", 0, curproxy);
+ if (!rule) {
+ cfgerr++;
+ break;
+ }
+
+ LIST_ADDQ(&curproxy->uri_auth->http_req_rules, &rule->list);
+
+ if (curproxy->uri_auth->auth_realm) {
+ free(curproxy->uri_auth->auth_realm);
+ curproxy->uri_auth->auth_realm = NULL;
+ }
+
+ curproxy->uri_auth->flags |= ST_CONVDONE;
+ }
+out_uri_auth_compat:
+
+ /* check whether we have a log server that uses RFC5424 log format */
+ list_for_each_entry(tmplogsrv, &curproxy->logsrvs, list) {
+ if (tmplogsrv->format == LOG_FORMAT_RFC5424) {
+ if (!curproxy->conf.logformat_sd_string) {
+ /* set the default logformat_sd_string */
+ curproxy->conf.logformat_sd_string = default_rfc5424_sd_log_format;
+ }
+ break;
+ }
+ }
+
+ /* compile the log format */
+ if (!(curproxy->cap & PR_CAP_FE)) {
+ if (curproxy->conf.logformat_string != default_http_log_format &&
+ curproxy->conf.logformat_string != default_tcp_log_format &&
+ curproxy->conf.logformat_string != clf_http_log_format)
+ free(curproxy->conf.logformat_string);
+ curproxy->conf.logformat_string = NULL;
+ free(curproxy->conf.lfs_file);
+ curproxy->conf.lfs_file = NULL;
+ curproxy->conf.lfs_line = 0;
+
+ if (curproxy->conf.logformat_sd_string != default_rfc5424_sd_log_format)
+ free(curproxy->conf.logformat_sd_string);
+ curproxy->conf.logformat_sd_string = NULL;
+ free(curproxy->conf.lfsd_file);
+ curproxy->conf.lfsd_file = NULL;
+ curproxy->conf.lfsd_line = 0;
+ }
+
+ if (curproxy->conf.logformat_string) {
+ curproxy->conf.args.ctx = ARGC_LOG;
+ curproxy->conf.args.file = curproxy->conf.lfs_file;
+ curproxy->conf.args.line = curproxy->conf.lfs_line;
+ parse_logformat_string(curproxy->conf.logformat_string, curproxy, &curproxy->logformat, LOG_OPT_MANDATORY,
+ SMP_VAL_FE_LOG_END, curproxy->conf.lfs_file, curproxy->conf.lfs_line);
+ curproxy->conf.args.file = NULL;
+ curproxy->conf.args.line = 0;
+ }
+
+ if (curproxy->conf.logformat_sd_string) {
+ curproxy->conf.args.ctx = ARGC_LOGSD;
+ curproxy->conf.args.file = curproxy->conf.lfsd_file;
+ curproxy->conf.args.line = curproxy->conf.lfsd_line;
+ parse_logformat_string(curproxy->conf.logformat_sd_string, curproxy, &curproxy->logformat_sd, LOG_OPT_MANDATORY,
+ SMP_VAL_FE_LOG_END, curproxy->conf.lfsd_file, curproxy->conf.lfsd_line);
+ add_to_logformat_list(NULL, NULL, LF_SEPARATOR, &curproxy->logformat_sd);
+ curproxy->conf.args.file = NULL;
+ curproxy->conf.args.line = 0;
+ }
+
+ if (curproxy->conf.uniqueid_format_string) {
+ curproxy->conf.args.ctx = ARGC_UIF;
+ curproxy->conf.args.file = curproxy->conf.uif_file;
+ curproxy->conf.args.line = curproxy->conf.uif_line;
+ parse_logformat_string(curproxy->conf.uniqueid_format_string, curproxy, &curproxy->format_unique_id, LOG_OPT_HTTP,
+ (curproxy->cap & PR_CAP_FE) ? SMP_VAL_FE_HRQ_HDR : SMP_VAL_BE_HRQ_HDR,
+ curproxy->conf.uif_file, curproxy->conf.uif_line);
+ curproxy->conf.args.file = NULL;
+ curproxy->conf.args.line = 0;
+ }
+
+ /* only now we can check if some args remain unresolved.
+ * This must be done after the users and groups resolution.
+ */
+ cfgerr += smp_resolve_args(curproxy);
+ if (!cfgerr)
+ cfgerr += acl_find_targets(curproxy);
+
+ if ((curproxy->mode == PR_MODE_TCP || curproxy->mode == PR_MODE_HTTP) &&
+ (((curproxy->cap & PR_CAP_FE) && !curproxy->timeout.client) ||
+ ((curproxy->cap & PR_CAP_BE) && (curproxy->srv) &&
+ (!curproxy->timeout.connect ||
+ (!curproxy->timeout.server && (curproxy->mode == PR_MODE_HTTP || !curproxy->timeout.tunnel)))))) {
+ Warning("config : missing timeouts for %s '%s'.\n"
+			" | While not strictly invalid, you will certainly encounter various problems\n"
+ " | with such a configuration. To fix this, please ensure that all following\n"
+ " | timeouts are set to a non-zero value: 'client', 'connect', 'server'.\n",
+ proxy_type_str(curproxy), curproxy->id);
+ err_code |= ERR_WARN;
+ }
+
+ /* Historically, the tarpit and queue timeouts were inherited from contimeout.
+ * We must still support older configurations, so let's find out whether those
+ * parameters have been set or must be copied from contimeout.
+ */
+ if (curproxy != &defproxy) {
+ if (!curproxy->timeout.tarpit ||
+ curproxy->timeout.tarpit == defproxy.timeout.tarpit) {
+ /* tarpit timeout not set. We search in the following order:
+ * default.tarpit, curr.connect, default.connect.
+ */
+ if (defproxy.timeout.tarpit)
+ curproxy->timeout.tarpit = defproxy.timeout.tarpit;
+ else if (curproxy->timeout.connect)
+ curproxy->timeout.tarpit = curproxy->timeout.connect;
+ else if (defproxy.timeout.connect)
+ curproxy->timeout.tarpit = defproxy.timeout.connect;
+ }
+ if ((curproxy->cap & PR_CAP_BE) &&
+ (!curproxy->timeout.queue ||
+ curproxy->timeout.queue == defproxy.timeout.queue)) {
+ /* queue timeout not set. We search in the following order:
+ * default.queue, curr.connect, default.connect.
+ */
+ if (defproxy.timeout.queue)
+ curproxy->timeout.queue = defproxy.timeout.queue;
+ else if (curproxy->timeout.connect)
+ curproxy->timeout.queue = curproxy->timeout.connect;
+ else if (defproxy.timeout.connect)
+ curproxy->timeout.queue = defproxy.timeout.connect;
+ }
+ }
+
+ if ((curproxy->options2 & PR_O2_CHK_ANY) == PR_O2_SSL3_CHK) {
+ curproxy->check_len = sizeof(sslv3_client_hello_pkt) - 1;
+ curproxy->check_req = (char *)malloc(curproxy->check_len);
+ memcpy(curproxy->check_req, sslv3_client_hello_pkt, curproxy->check_len);
+ }
+
+ if (!LIST_ISEMPTY(&curproxy->tcpcheck_rules) &&
+ (curproxy->options2 & PR_O2_CHK_ANY) != PR_O2_TCPCHK_CHK) {
+ Warning("config : %s '%s' uses tcp-check rules without 'option tcp-check', so the rules are ignored.\n",
+ proxy_type_str(curproxy), curproxy->id);
+ err_code |= ERR_WARN;
+ }
+
+ /* ensure that cookie capture length is not too large */
+ if (curproxy->capture_len >= global.tune.cookie_len) {
+ Warning("config : truncating capture length to %d bytes for %s '%s'.\n",
+ global.tune.cookie_len - 1, proxy_type_str(curproxy), curproxy->id);
+ err_code |= ERR_WARN;
+ curproxy->capture_len = global.tune.cookie_len - 1;
+ }
+
+ /* The small pools required for the capture lists */
+ if (curproxy->nb_req_cap) {
+ curproxy->req_cap_pool = create_pool("ptrcap",
+ curproxy->nb_req_cap * sizeof(char *),
+ MEM_F_SHARED);
+ }
+
+ if (curproxy->nb_rsp_cap) {
+ curproxy->rsp_cap_pool = create_pool("ptrcap",
+ curproxy->nb_rsp_cap * sizeof(char *),
+ MEM_F_SHARED);
+ }
+
+ switch (curproxy->load_server_state_from_file) {
+ case PR_SRV_STATE_FILE_UNSPEC:
+ curproxy->load_server_state_from_file = PR_SRV_STATE_FILE_NONE;
+ break;
+ case PR_SRV_STATE_FILE_GLOBAL:
+ if (!global.server_state_file) {
+ Warning("config : backend '%s' configured to load server state file from global section 'server-state-file' directive. Unfortunately, 'server-state-file' is not set!\n",
+ curproxy->id);
+ err_code |= ERR_WARN;
+ }
+ break;
+ }
+
+ /* first, we will invert the servers list order */
+ newsrv = NULL;
+ while (curproxy->srv) {
+ struct server *next;
+
+ next = curproxy->srv->next;
+ curproxy->srv->next = newsrv;
+ newsrv = curproxy->srv;
+ if (!next)
+ break;
+ curproxy->srv = next;
+ }
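The inversion loop above is a classic in-place reversal of a singly-linked list: each server is popped off the old head and pushed onto the new one. A minimal standalone sketch, with a hypothetical `struct node` standing in for `struct server`:

```c
#include <stddef.h>

/* Hypothetical stand-in for struct server: only the 'next' link matters. */
struct node {
	int id;
	struct node *next;
};

/* Reverse the list in place, exactly like the parser's loop: detach the
 * current head and push it onto the growing reversed list.
 */
static struct node *reverse(struct node *head)
{
	struct node *newhead = NULL;

	while (head) {
		struct node *next = head->next;

		head->next = newhead;
		newhead = head;
		head = next;
	}
	return newhead;
}
```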
+
+ /* Check that server names do not conflict, as conflicts cause trouble
+ * in the stats. We only emit a warning for the first conflict affecting
+ * each server, to avoid a combinatorial explosion if all servers share
+ * the same name. We do this only for servers without an explicit ID,
+ * because explicit IDs also serve to distinguish servers and we don't
+ * want to annoy people who correctly manage them.
+ */
+ for (newsrv = curproxy->srv; newsrv; newsrv = newsrv->next) {
+ struct server *other_srv;
+
+ if (newsrv->puid)
+ continue;
+
+ for (other_srv = curproxy->srv; other_srv && other_srv != newsrv; other_srv = other_srv->next) {
+ if (!other_srv->puid && strcmp(other_srv->id, newsrv->id) == 0) {
+ Warning("parsing [%s:%d] : %s '%s', another server named '%s' was defined without an explicit ID at line %d, this is not recommended.\n",
+ newsrv->conf.file, newsrv->conf.line,
+ proxy_type_str(curproxy), curproxy->id,
+ newsrv->id, other_srv->conf.line);
+ break;
+ }
+ }
+ }
+
+ /* assign automatic UIDs to servers which don't have one yet */
+ next_id = 1;
+ newsrv = curproxy->srv;
+ while (newsrv != NULL) {
+ if (!newsrv->puid) {
+ /* server ID not set, use automatic numbering with first
+ * spare entry starting with next_svid.
+ */
+ next_id = get_next_id(&curproxy->conf.used_server_id, next_id);
+ newsrv->conf.id.key = newsrv->puid = next_id;
+ eb32_insert(&curproxy->conf.used_server_id, &newsrv->conf.id);
+ }
+ next_id++;
+ newsrv = newsrv->next;
+ }
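The automatic numbering above relies on get_next_id() returning the first unused ID at or after the requested one in the eb32 tree of used IDs. A simplified sketch of the same search over a sorted array (next_free_id() is a hypothetical illustration helper, not HAProxy's implementation):

```c
#include <stddef.h>

/* Return the first ID >= wanted that is not present in the sorted array
 * of IDs already in use. HAProxy performs this lookup in an eb32 tree;
 * an array keeps the idea visible.
 */
static unsigned int next_free_id(const unsigned int *used, size_t n,
                                 unsigned int wanted)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (used[i] < wanted)
			continue;       /* not yet up to the candidate */
		if (used[i] > wanted)
			break;          /* a hole exists before used[i] */
		wanted++;               /* candidate taken, try the next one */
	}
	return wanted;
}
```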
+
+ curproxy->lbprm.wmult = 1; /* default weight multiplier */
+ curproxy->lbprm.wdiv = 1; /* default weight divider */
+
+ /*
+ * If this server supports a maxconn parameter, it needs a dedicated
+ * task to fill the emptied slots when a connection leaves.
+ * Also, resolve deferred tracking dependency if needed.
+ */
+ newsrv = curproxy->srv;
+ while (newsrv != NULL) {
+ if (newsrv->minconn > newsrv->maxconn) {
+ /* Only 'minconn' was specified, or it was higher than or equal
+ * to 'maxconn'. Let's turn this into maxconn and clean it, as
+ * this will avoid further useless expensive computations.
+ */
+ newsrv->maxconn = newsrv->minconn;
+ } else if (newsrv->maxconn && !newsrv->minconn) {
+ /* minconn was not specified, so we set it to maxconn */
+ newsrv->minconn = newsrv->maxconn;
+ }
+
+#ifdef USE_OPENSSL
+ if (newsrv->use_ssl || newsrv->check.use_ssl)
+ cfgerr += ssl_sock_prepare_srv_ctx(newsrv, curproxy);
+#endif /* USE_OPENSSL */
+
+ /* set the check type on the server */
+ newsrv->check.type = curproxy->options2 & PR_O2_CHK_ANY;
+
+ if (newsrv->trackit) {
+ struct proxy *px;
+ struct server *srv, *loop;
+ char *pname, *sname;
+
+ pname = newsrv->trackit;
+ sname = strrchr(pname, '/');
+
+ if (sname)
+ *sname++ = '\0';
+ else {
+ sname = pname;
+ pname = NULL;
+ }
+
+ if (pname) {
+ px = proxy_be_by_name(pname);
+ if (!px) {
+ Alert("config : %s '%s', server '%s': unable to find required proxy '%s' for tracking.\n",
+ proxy_type_str(curproxy), curproxy->id,
+ newsrv->id, pname);
+ cfgerr++;
+ goto next_srv;
+ }
+ } else
+ px = curproxy;
+
+ srv = findserver(px, sname);
+ if (!srv) {
+ Alert("config : %s '%s', server '%s': unable to find required server '%s' for tracking.\n",
+ proxy_type_str(curproxy), curproxy->id,
+ newsrv->id, sname);
+ cfgerr++;
+ goto next_srv;
+ }
+
+ if (!(srv->check.state & CHK_ST_CONFIGURED) &&
+ !(srv->agent.state & CHK_ST_CONFIGURED) &&
+ !srv->track && !srv->trackit) {
+ Alert("config : %s '%s', server '%s': unable to use %s/%s for "
+ "tracking as it does not have any check nor agent enabled.\n",
+ proxy_type_str(curproxy), curproxy->id,
+ newsrv->id, px->id, srv->id);
+ cfgerr++;
+ goto next_srv;
+ }
+
+ for (loop = srv->track; loop && loop != newsrv; loop = loop->track);
+
+ if (loop) {
+ Alert("config : %s '%s', server '%s': unable to track %s/%s as it "
+ "belongs to a tracking chain looping back to %s/%s.\n",
+ proxy_type_str(curproxy), curproxy->id,
+ newsrv->id, px->id, srv->id, px->id, loop->id);
+ cfgerr++;
+ goto next_srv;
+ }
+
+ if (curproxy != px &&
+ (curproxy->options & PR_O_DISABLE404) != (px->options & PR_O_DISABLE404)) {
+ Alert("config : %s '%s', server '%s': unable to use %s/%s for"
+ "tracking: disable-on-404 option inconsistency.\n",
+ proxy_type_str(curproxy), curproxy->id,
+ newsrv->id, px->id, srv->id);
+ cfgerr++;
+ goto next_srv;
+ }
+
+ /* if the other server is forced disabled, we have to do the same here */
+ if (srv->admin & SRV_ADMF_MAINT) {
+ newsrv->admin |= SRV_ADMF_IMAINT;
+ newsrv->state = SRV_ST_STOPPED;
+ newsrv->check.health = 0;
+ }
+
+ newsrv->track = srv;
+ newsrv->tracknext = srv->trackers;
+ srv->trackers = newsrv;
+
+ free(newsrv->trackit);
+ newsrv->trackit = NULL;
+ }
+
+ /*
+ * resolve server's resolvers name and update the resolvers pointer
+ * accordingly
+ */
+ if (newsrv->resolvers_id) {
+ struct dns_resolvers *curr_resolvers;
+ int found;
+
+ found = 0;
+ list_for_each_entry(curr_resolvers, &dns_resolvers, list) {
+ if (!strcmp(curr_resolvers->id, newsrv->resolvers_id)) {
+ found = 1;
+ break;
+ }
+ }
+
+ if (!found) {
+ Alert("config : %s '%s', server '%s': unable to find required resolvers '%s'\n",
+ proxy_type_str(curproxy), curproxy->id,
+ newsrv->id, newsrv->resolvers_id);
+ cfgerr++;
+ } else {
+ free(newsrv->resolvers_id);
+ newsrv->resolvers_id = NULL;
+ if (newsrv->resolution)
+ newsrv->resolution->resolvers = curr_resolvers;
+ }
+ }
+ else {
+ /* if no resolvers section associated to this server
+ * we can clean up the associated resolution structure
+ */
+ if (newsrv->resolution) {
+ free(newsrv->resolution->hostname_dn);
+ newsrv->resolution->hostname_dn = NULL;
+ free(newsrv->resolution);
+ newsrv->resolution = NULL;
+ }
+ }
+
+ next_srv:
+ newsrv = newsrv->next;
+ }
+
+ /* We have to initialize the server lookup mechanism depending
+ * on what LB algorithm was chosen.
+ */
+
+ curproxy->lbprm.algo &= ~(BE_LB_LKUP | BE_LB_PROP_DYN);
+ switch (curproxy->lbprm.algo & BE_LB_KIND) {
+ case BE_LB_KIND_RR:
+ if ((curproxy->lbprm.algo & BE_LB_PARM) == BE_LB_RR_STATIC) {
+ curproxy->lbprm.algo |= BE_LB_LKUP_MAP;
+ init_server_map(curproxy);
+ } else {
+ curproxy->lbprm.algo |= BE_LB_LKUP_RRTREE | BE_LB_PROP_DYN;
+ fwrr_init_server_groups(curproxy);
+ }
+ break;
+
+ case BE_LB_KIND_CB:
+ if ((curproxy->lbprm.algo & BE_LB_PARM) == BE_LB_CB_LC) {
+ curproxy->lbprm.algo |= BE_LB_LKUP_LCTREE | BE_LB_PROP_DYN;
+ fwlc_init_server_tree(curproxy);
+ } else {
+ curproxy->lbprm.algo |= BE_LB_LKUP_FSTREE | BE_LB_PROP_DYN;
+ fas_init_server_tree(curproxy);
+ }
+ break;
+
+ case BE_LB_KIND_HI:
+ if ((curproxy->lbprm.algo & BE_LB_HASH_TYPE) == BE_LB_HASH_CONS) {
+ curproxy->lbprm.algo |= BE_LB_LKUP_CHTREE | BE_LB_PROP_DYN;
+ chash_init_server_tree(curproxy);
+ } else {
+ curproxy->lbprm.algo |= BE_LB_LKUP_MAP;
+ init_server_map(curproxy);
+ }
+ break;
+ }
+
+ if (curproxy->options & PR_O_LOGASAP)
+ curproxy->to_log &= ~LW_BYTES;
+
+ if ((curproxy->mode == PR_MODE_TCP || curproxy->mode == PR_MODE_HTTP) &&
+ (curproxy->cap & PR_CAP_FE) && LIST_ISEMPTY(&curproxy->logsrvs) &&
+ (!LIST_ISEMPTY(&curproxy->logformat) || !LIST_ISEMPTY(&curproxy->logformat_sd))) {
+ Warning("config : log format ignored for %s '%s' since it has no log address.\n",
+ proxy_type_str(curproxy), curproxy->id);
+ err_code |= ERR_WARN;
+ }
+
+ if (curproxy->mode != PR_MODE_HTTP) {
+ int optnum;
+
+ if (curproxy->uri_auth) {
+ Warning("config : 'stats' statement ignored for %s '%s' as it requires HTTP mode.\n",
+ proxy_type_str(curproxy), curproxy->id);
+ err_code |= ERR_WARN;
+ curproxy->uri_auth = NULL;
+ }
+
+ if (curproxy->options & (PR_O_FWDFOR | PR_O_FF_ALWAYS)) {
+ Warning("config : 'option %s' ignored for %s '%s' as it requires HTTP mode.\n",
+ "forwardfor", proxy_type_str(curproxy), curproxy->id);
+ err_code |= ERR_WARN;
+ curproxy->options &= ~(PR_O_FWDFOR | PR_O_FF_ALWAYS);
+ }
+
+ if (curproxy->options & PR_O_ORGTO) {
+ Warning("config : 'option %s' ignored for %s '%s' as it requires HTTP mode.\n",
+ "originalto", proxy_type_str(curproxy), curproxy->id);
+ err_code |= ERR_WARN;
+ curproxy->options &= ~PR_O_ORGTO;
+ }
+
+ for (optnum = 0; cfg_opts[optnum].name; optnum++) {
+ if (cfg_opts[optnum].mode == PR_MODE_HTTP &&
+ (curproxy->cap & cfg_opts[optnum].cap) &&
+ (curproxy->options & cfg_opts[optnum].val)) {
+ Warning("config : 'option %s' ignored for %s '%s' as it requires HTTP mode.\n",
+ cfg_opts[optnum].name, proxy_type_str(curproxy), curproxy->id);
+ err_code |= ERR_WARN;
+ curproxy->options &= ~cfg_opts[optnum].val;
+ }
+ }
+
+ for (optnum = 0; cfg_opts2[optnum].name; optnum++) {
+ if (cfg_opts2[optnum].mode == PR_MODE_HTTP &&
+ (curproxy->cap & cfg_opts2[optnum].cap) &&
+ (curproxy->options2 & cfg_opts2[optnum].val)) {
+ Warning("config : 'option %s' ignored for %s '%s' as it requires HTTP mode.\n",
+ cfg_opts2[optnum].name, proxy_type_str(curproxy), curproxy->id);
+ err_code |= ERR_WARN;
+ curproxy->options2 &= ~cfg_opts2[optnum].val;
+ }
+ }
+
+#if defined(CONFIG_HAP_TRANSPARENT)
+ if (curproxy->conn_src.bind_hdr_occ) {
+ curproxy->conn_src.bind_hdr_occ = 0;
+ Warning("config : %s '%s' : ignoring use of header %s as source IP in non-HTTP mode.\n",
+ proxy_type_str(curproxy), curproxy->id, curproxy->conn_src.bind_hdr_name);
+ err_code |= ERR_WARN;
+ }
+#endif
+ }
+
+ /*
+ * ensure that we're not cross-dressing a TCP server into HTTP.
+ */
+ newsrv = curproxy->srv;
+ while (newsrv != NULL) {
+ if ((curproxy->mode != PR_MODE_HTTP) && newsrv->rdr_len) {
+ Alert("config : %s '%s' : server cannot have cookie or redirect prefix in non-HTTP mode.\n",
+ proxy_type_str(curproxy), curproxy->id);
+ cfgerr++;
+ }
+
+ if ((curproxy->mode != PR_MODE_HTTP) && newsrv->cklen) {
+ Warning("config : %s '%s' : ignoring cookie for server '%s' as HTTP mode is disabled.\n",
+ proxy_type_str(curproxy), curproxy->id, newsrv->id);
+ err_code |= ERR_WARN;
+ }
+
+ if ((newsrv->flags & SRV_F_MAPPORTS) && (curproxy->options2 & PR_O2_RDPC_PRST)) {
+ Warning("config : %s '%s' : RDP cookie persistence will not work for server '%s' because it lacks an explicit port number.\n",
+ proxy_type_str(curproxy), curproxy->id, newsrv->id);
+ err_code |= ERR_WARN;
+ }
+
+#if defined(CONFIG_HAP_TRANSPARENT)
+ if (curproxy->mode != PR_MODE_HTTP && newsrv->conn_src.bind_hdr_occ) {
+ newsrv->conn_src.bind_hdr_occ = 0;
+ Warning("config : %s '%s' : server %s cannot use header %s as source IP in non-HTTP mode.\n",
+ proxy_type_str(curproxy), curproxy->id, newsrv->id, newsrv->conn_src.bind_hdr_name);
+ err_code |= ERR_WARN;
+ }
+#endif
+ newsrv = newsrv->next;
+ }
+
+ /* check if we have a frontend with "tcp-request content" looking at L7
+ * with no inspect-delay
+ */
+ if ((curproxy->cap & PR_CAP_FE) && !curproxy->tcp_req.inspect_delay) {
+ list_for_each_entry(trule, &curproxy->tcp_req.inspect_rules, list) {
+ if (trule->action == ACT_TCP_CAPTURE &&
+ !(trule->arg.cap.expr->fetch->val & SMP_VAL_FE_SES_ACC))
+ break;
+ if ((trule->action >= ACT_ACTION_TRK_SC0 && trule->action <= ACT_ACTION_TRK_SCMAX) &&
+ !(trule->arg.trk_ctr.expr->fetch->val & SMP_VAL_FE_SES_ACC))
+ break;
+ }
+
+ if (&trule->list != &curproxy->tcp_req.inspect_rules) {
+ Warning("config : %s '%s' : some 'tcp-request content' rules explicitly depending on request"
+ " contents were found in a frontend without any 'tcp-request inspect-delay' setting."
+ " This means that these rules will randomly find their contents. This can be fixed by"
+ " setting the tcp-request inspect-delay.\n",
+ proxy_type_str(curproxy), curproxy->id);
+ err_code |= ERR_WARN;
+ }
+ }
+
+ if (curproxy->cap & PR_CAP_FE) {
+ if (!curproxy->accept)
+ curproxy->accept = frontend_accept;
+
+ if (curproxy->tcp_req.inspect_delay ||
+ !LIST_ISEMPTY(&curproxy->tcp_req.inspect_rules))
+ curproxy->fe_req_ana |= AN_REQ_INSPECT_FE;
+
+ if (curproxy->mode == PR_MODE_HTTP) {
+ curproxy->fe_req_ana |= AN_REQ_WAIT_HTTP | AN_REQ_HTTP_PROCESS_FE;
+ curproxy->fe_rsp_ana |= AN_RES_WAIT_HTTP | AN_RES_HTTP_PROCESS_FE;
+ }
+
+ /* both TCP and HTTP must check switching rules */
+ curproxy->fe_req_ana |= AN_REQ_SWITCHING_RULES;
+ }
+
+ if (curproxy->cap & PR_CAP_BE) {
+ if (curproxy->tcp_req.inspect_delay ||
+ !LIST_ISEMPTY(&curproxy->tcp_req.inspect_rules))
+ curproxy->be_req_ana |= AN_REQ_INSPECT_BE;
+
+ if (!LIST_ISEMPTY(&curproxy->tcp_rep.inspect_rules))
+ curproxy->be_rsp_ana |= AN_RES_INSPECT;
+
+ if (curproxy->mode == PR_MODE_HTTP) {
+ curproxy->be_req_ana |= AN_REQ_WAIT_HTTP | AN_REQ_HTTP_INNER | AN_REQ_HTTP_PROCESS_BE;
+ curproxy->be_rsp_ana |= AN_RES_WAIT_HTTP | AN_RES_HTTP_PROCESS_BE;
+ }
+
+ /* If the backend requires RDP cookie persistence, we have to
+ * enable the corresponding analyser.
+ */
+ if (curproxy->options2 & PR_O2_RDPC_PRST)
+ curproxy->be_req_ana |= AN_REQ_PRST_RDP_COOKIE;
+ }
+ }
+
+ /***********************************************************/
+ /* At this point, target names have already been resolved. */
+ /***********************************************************/
+
+ /* Check multi-process mode compatibility */
+
+ if (global.nbproc > 1 && global.stats_fe) {
+ list_for_each_entry(bind_conf, &global.stats_fe->conf.bind, by_fe) {
+ unsigned long mask;
+
+ mask = nbits(global.nbproc);
+ if (global.stats_fe->bind_proc)
+ mask &= global.stats_fe->bind_proc;
+
+ if (bind_conf->bind_proc)
+ mask &= bind_conf->bind_proc;
+
+ /* stop here if more than one process is used */
+ if (my_popcountl(mask) > 1)
+ break;
+ }
+ if (&bind_conf->by_fe != &global.stats_fe->conf.bind) {
+ Warning("stats socket will not work as expected in multi-process mode (nbproc > 1), you should force process binding globally using 'stats bind-process' or per socket using the 'process' attribute.\n");
+ }
+ }
+
+ /* Make each frontend inherit bind-process from its listeners when not specified. */
+ for (curproxy = proxy; curproxy; curproxy = curproxy->next) {
+ if (curproxy->bind_proc)
+ continue;
+
+ list_for_each_entry(bind_conf, &curproxy->conf.bind, by_fe) {
+ unsigned long mask;
+
+ mask = bind_conf->bind_proc ? bind_conf->bind_proc : nbits(global.nbproc);
+ curproxy->bind_proc |= mask;
+ }
+
+ if (!curproxy->bind_proc)
+ curproxy->bind_proc = nbits(global.nbproc);
+ }
+
+ if (global.stats_fe) {
+ list_for_each_entry(bind_conf, &global.stats_fe->conf.bind, by_fe) {
+ unsigned long mask;
+
+ mask = bind_conf->bind_proc ? bind_conf->bind_proc : nbits(global.nbproc);
+ global.stats_fe->bind_proc |= mask;
+ }
+ if (!global.stats_fe->bind_proc)
+ global.stats_fe->bind_proc = nbits(global.nbproc);
+ }
+
+ /* propagate bindings from frontends to backends. Don't do it if there
+ * are any fatal errors as we must not call it with unresolved proxies.
+ */
+ if (!cfgerr) {
+ for (curproxy = proxy; curproxy; curproxy = curproxy->next) {
+ if (curproxy->cap & PR_CAP_FE)
+ propagate_processes(curproxy, NULL);
+ }
+ }
+
+ /* Bind each unbound backend to all processes when not specified. */
+ for (curproxy = proxy; curproxy; curproxy = curproxy->next) {
+ if (curproxy->bind_proc)
+ continue;
+ curproxy->bind_proc = nbits(global.nbproc);
+ }
+
+ /*******************************************************/
+ /* At this step, all proxies have a non-null bind_proc */
+ /*******************************************************/
+
+ /* perform the final checks before creating tasks */
+
+ for (curproxy = proxy; curproxy; curproxy = curproxy->next) {
+ struct listener *listener;
+ unsigned int next_id;
+ int nbproc;
+
+ nbproc = my_popcountl(curproxy->bind_proc & nbits(global.nbproc));
+
+#ifdef USE_OPENSSL
+ /* Configure SSL for each bind line.
+ * Note: if configuration fails at some point, the ->ctx member
+ * remains NULL so that listeners can later detach.
+ */
+ list_for_each_entry(bind_conf, &curproxy->conf.bind, by_fe) {
+ int alloc_ctx;
+
+ if (!bind_conf->is_ssl) {
+ if (bind_conf->default_ctx) {
+ Warning("Proxy '%s': A certificate was specified but SSL was not enabled on bind '%s' at [%s:%d] (use 'ssl').\n",
+ curproxy->id, bind_conf->arg, bind_conf->file, bind_conf->line);
+ }
+ continue;
+ }
+ if (!bind_conf->default_ctx) {
+ Alert("Proxy '%s': no SSL certificate specified for bind '%s' at [%s:%d] (use 'crt').\n",
+ curproxy->id, bind_conf->arg, bind_conf->file, bind_conf->line);
+ cfgerr++;
+ continue;
+ }
+
+ alloc_ctx = shared_context_init(global.tune.sslcachesize, (!global.tune.sslprivatecache && (global.nbproc > 1)) ? 1 : 0);
+ if (alloc_ctx < 0) {
+ if (alloc_ctx == SHCTX_E_INIT_LOCK)
+ Alert("Unable to initialize the lock for the shared SSL session cache. You can retry using the global statement 'tune.ssl.force-private-cache' but it could increase CPU usage due to renegotiations if nbproc > 1.\n");
+ else
+ Alert("Unable to allocate SSL session cache.\n");
+ cfgerr++;
+ continue;
+ }
+
+ /* initialize all certificate contexts */
+ cfgerr += ssl_sock_prepare_all_ctx(bind_conf, curproxy);
+
+ /* initialize CA variables if the certificates generation is enabled */
+ cfgerr += ssl_sock_load_ca(bind_conf, curproxy);
+ }
+#endif /* USE_OPENSSL */
+
+ /* adjust this proxy's listeners */
+ next_id = 1;
+ list_for_each_entry(listener, &curproxy->conf.listeners, by_fe) {
+ if (!listener->luid) {
+ /* listener ID not set, use automatic numbering with first
+ * spare entry starting with next_luid.
+ */
+ next_id = get_next_id(&curproxy->conf.used_listener_id, next_id);
+ listener->conf.id.key = listener->luid = next_id;
+ eb32_insert(&curproxy->conf.used_listener_id, &listener->conf.id);
+ }
+ next_id++;
+
+ /* enable separate counters */
+ if (curproxy->options2 & PR_O2_SOCKSTAT) {
+ listener->counters = (struct licounters *)calloc(1, sizeof(struct licounters));
+ if (!listener->name)
+ memprintf(&listener->name, "sock-%d", listener->luid);
+ }
+
+ if (curproxy->options & PR_O_TCP_NOLING)
+ listener->options |= LI_O_NOLINGER;
+ if (!listener->maxconn)
+ listener->maxconn = curproxy->maxconn;
+ if (!listener->backlog)
+ listener->backlog = curproxy->backlog;
+ if (!listener->maxaccept)
+ listener->maxaccept = global.tune.maxaccept ? global.tune.maxaccept : 64;
+
+ /* we want to have an optimal behaviour on single process mode to
+ * maximize the work at once, but in multi-process we want to keep
+ * some fairness between processes, so we target half of the max
+ * number of events to be balanced over all the processes the proxy
+ * is bound to. Remember that maxaccept = -1 must be kept as it is
+ * used to disable the limit.
+ */
+ if (listener->maxaccept > 0) {
+ if (nbproc > 1)
+ listener->maxaccept = (listener->maxaccept + 1) / 2;
+ listener->maxaccept = (listener->maxaccept + nbproc - 1) / nbproc;
+ }
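The scaling above can be isolated into a small helper to make the arithmetic visible: halve the per-wakeup accept budget for fairness when several processes run, then split it across the nbproc processes with a rounding-up division, leaving the unlimited marker (a non-positive value) untouched. This is a sketch for illustration, not HAProxy code:

```c
/* Mirror of the listener maxaccept scaling: fairness halving in
 * multi-process mode, then a ceiling division across processes.
 * maxaccept <= 0 (the "no limit" marker) passes through unchanged.
 */
static int scale_maxaccept(int maxaccept, int nbproc)
{
	if (maxaccept <= 0)
		return maxaccept;
	if (nbproc > 1)
		maxaccept = (maxaccept + 1) / 2;
	return (maxaccept + nbproc - 1) / nbproc;
}
```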
+
+ listener->accept = session_accept_fd;
+ listener->handler = process_stream;
+ listener->analysers |= curproxy->fe_req_ana;
+ listener->default_target = curproxy->default_target;
+
+ if (!LIST_ISEMPTY(&curproxy->tcp_req.l4_rules))
+ listener->options |= LI_O_TCP_RULES;
+
+ if (curproxy->mon_mask.s_addr)
+ listener->options |= LI_O_CHK_MONNET;
+
+ /* smart accept mode is automatic in HTTP mode */
+ if ((curproxy->options2 & PR_O2_SMARTACC) ||
+ ((curproxy->mode == PR_MODE_HTTP || listener->bind_conf->is_ssl) &&
+ !(curproxy->no_options2 & PR_O2_SMARTACC)))
+ listener->options |= LI_O_NOQUICKACK;
+ }
+
+ /* Release unused SSL configs */
+ list_for_each_entry(bind_conf, &curproxy->conf.bind, by_fe) {
+ if (bind_conf->is_ssl)
+ continue;
+#ifdef USE_OPENSSL
+ ssl_sock_free_ca(bind_conf);
+ ssl_sock_free_all_ctx(bind_conf);
+ free(bind_conf->ca_file);
+ free(bind_conf->ca_sign_file);
+ free(bind_conf->ca_sign_pass);
+ free(bind_conf->ciphers);
+ free(bind_conf->ecdhe);
+ free(bind_conf->crl_file);
+ if(bind_conf->keys_ref) {
+ free(bind_conf->keys_ref->filename);
+ free(bind_conf->keys_ref->tlskeys);
+ free(bind_conf->keys_ref);
+ }
+#endif /* USE_OPENSSL */
+ }
+
+ if (nbproc > 1) {
+ if (curproxy->uri_auth) {
+ int count, maxproc = 0;
+
+ list_for_each_entry(bind_conf, &curproxy->conf.bind, by_fe) {
+ count = my_popcountl(bind_conf->bind_proc);
+ if (count > maxproc)
+ maxproc = count;
+ }
+ /* backends have 0, frontends have 1 or more */
+ if (maxproc != 1)
+ Warning("Proxy '%s': in multi-process mode, stats will be"
+ " limited to process assigned to the current request.\n",
+ curproxy->id);
+
+ if (!LIST_ISEMPTY(&curproxy->uri_auth->admin_rules)) {
+ Warning("Proxy '%s': stats admin will not work correctly in multi-process mode.\n",
+ curproxy->id);
+ }
+ }
+ if (!LIST_ISEMPTY(&curproxy->sticking_rules)) {
+ Warning("Proxy '%s': sticking rules will not work correctly in multi-process mode.\n",
+ curproxy->id);
+ }
+ }
+
+ /* create the task associated with the proxy */
+ curproxy->task = task_new();
+ if (curproxy->task) {
+ curproxy->task->context = curproxy;
+ curproxy->task->process = manage_proxy;
+ /* no need to queue, it will be done automatically if some
+ * listener gets limited.
+ */
+ curproxy->task->expire = TICK_ETERNITY;
+ } else {
+ Alert("Proxy '%s': no more memory when trying to allocate the management task\n",
+ curproxy->id);
+ cfgerr++;
+ }
+ }
+
+ /* automatically compute fullconn if not set. We must not do it in the
+ * loop above because cross-references are not yet fully resolved.
+ */
+ for (curproxy = proxy; curproxy; curproxy = curproxy->next) {
+ /* If <fullconn> is not set, let's set it to 10% of the sum of
+ * the possible incoming frontend's maxconns.
+ */
+ if (!curproxy->fullconn && (curproxy->cap & PR_CAP_BE)) {
+ struct proxy *fe;
+ int total = 0;
+
+ /* sum up the number of maxconns of frontends which
+ * reference this backend at least once or which are
+ * the same one ('listen').
+ */
+ for (fe = proxy; fe; fe = fe->next) {
+ struct switching_rule *rule;
+ int found = 0;
+
+ if (!(fe->cap & PR_CAP_FE))
+ continue;
+
+ if (fe == curproxy) /* we're on a "listen" instance */
+ found = 1;
+
+ if (fe->defbe.be == curproxy) /* "default_backend" */
+ found = 1;
+
+ /* check if a "use_backend" rule matches */
+ if (!found) {
+ list_for_each_entry(rule, &fe->switching_rules, list) {
+ if (!rule->dynamic && rule->be.backend == curproxy) {
+ found = 1;
+ break;
+ }
+ }
+ }
+
+ /* now we've checked all possible ways to reference a backend
+ * from a frontend.
+ */
+ if (!found)
+ continue;
+ total += fe->maxconn;
+ }
+ /* we have the sum of the maxconns in <total>. We only
+ * keep 10% of that sum to set the default fullconn, with
+ * a hard minimum of 1 (to avoid a divide by zero).
+ */
+ curproxy->fullconn = (total + 9) / 10;
+ if (!curproxy->fullconn)
+ curproxy->fullconn = 1;
+ }
+ }
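The default-fullconn arithmetic above, in isolation: 10% of the summed frontend maxconns, rounded up, with a hard floor of 1. A hypothetical helper for illustration:

```c
/* Compute the default fullconn from the total of the referencing
 * frontends' maxconn values: ceil(total / 10), never less than 1.
 */
static int default_fullconn(int total_maxconn)
{
	int fullconn = (total_maxconn + 9) / 10;

	return fullconn ? fullconn : 1;
}
```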
+
+ /*
+ * Recount currently required checks.
+ */
+
+ for (curproxy=proxy; curproxy; curproxy=curproxy->next) {
+ int optnum;
+
+ for (optnum = 0; cfg_opts[optnum].name; optnum++)
+ if (curproxy->options & cfg_opts[optnum].val)
+ global.last_checks |= cfg_opts[optnum].checks;
+
+ for (optnum = 0; cfg_opts2[optnum].name; optnum++)
+ if (curproxy->options2 & cfg_opts2[optnum].val)
+ global.last_checks |= cfg_opts2[optnum].checks;
+ }
+
+ /* compute the required process bindings for the peers */
+ for (curproxy = proxy; curproxy; curproxy = curproxy->next)
+ if (curproxy->table.peers.p)
+ curproxy->table.peers.p->peers_fe->bind_proc |= curproxy->bind_proc;
+
+ if (peers) {
+ struct peers *curpeers = peers, **last;
+ struct peer *p, *pb;
+
+ /* Remove all peers sections which don't have a valid listener,
+ * which are not used by any table, or which are bound to more
+ * than one process.
+ */
+ last = &peers;
+ while (*last) {
+ curpeers = *last;
+
+ if (curpeers->state == PR_STSTOPPED) {
+ /* the "disabled" keyword was present */
+ if (curpeers->peers_fe)
+ stop_proxy(curpeers->peers_fe);
+ curpeers->peers_fe = NULL;
+ }
+ else if (!curpeers->peers_fe) {
+ Warning("Removing incomplete section 'peers %s' (no peer named '%s').\n",
+ curpeers->id, localpeer);
+ }
+ else if (my_popcountl(curpeers->peers_fe->bind_proc) != 1) {
+ /* either fully stopped or bound to more than one process */
+ if (curpeers->peers_fe->bind_proc) {
+ Alert("Peers section '%s': peers referenced by sections "
+ "running in different processes (%d different ones). "
+ "Check global.nbproc and all tables' bind-process "
+ "settings.\n", curpeers->id, my_popcountl(curpeers->peers_fe->bind_proc));
+ cfgerr++;
+ }
+ stop_proxy(curpeers->peers_fe);
+ curpeers->peers_fe = NULL;
+ }
+ else {
+ peers_init_sync(curpeers);
+ last = &curpeers->next;
+ continue;
+ }
+
+ /* clean what has been detected above */
+ p = curpeers->remote;
+ while (p) {
+ pb = p->next;
+ free(p->id);
+ free(p);
+ p = pb;
+ }
+
+ /* Destroy and unlink this curpeers section.
+ * Note: curpeers is backed up into *last.
+ */
+ free(curpeers->id);
+ curpeers = curpeers->next;
+ free(*last);
+ *last = curpeers;
+ }
+ }
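Both this peers cleanup and the mailers cleanup below use the pointer-to-pointer removal idiom: `last` holds the address of the previous `next` field, so unlinking the current node is just `*last = cur->next` with no special case for the list head. A minimal sketch with a hypothetical `struct sect` (the real code also frees the removed node; this version only unlinks):

```c
#include <stddef.h>

/* Hypothetical section descriptor; 'valid' stands in for the checks the
 * real cleanup performs (listener present, single process binding, ...).
 */
struct sect {
	int id;
	int valid;
	struct sect *next;
};

/* Remove all invalid nodes. 'last' always points at the link to fix up,
 * so head removal needs no special case.
 */
static void prune(struct sect **head)
{
	struct sect **last = head;

	while (*last) {
		struct sect *cur = *last;

		if (cur->valid)
			last = &cur->next;   /* keep: advance the link cursor */
		else
			*last = cur->next;   /* unlink (the real code also frees) */
	}
}
```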
+
+ /* initialize stick-tables on backend capable proxies. This must not
+ * be done earlier because the data size may be discovered while parsing
+ * other proxies.
+ */
+ for (curproxy = proxy; curproxy; curproxy = curproxy->next) {
+ if (curproxy->state == PR_STSTOPPED)
+ continue;
+
+ if (!stktable_init(&curproxy->table)) {
+ Alert("Proxy '%s': failed to initialize stick-table.\n", curproxy->id);
+ cfgerr++;
+ }
+ }
+
+ if (mailers) {
+ struct mailers *curmailers = mailers, **last;
+ struct mailer *m, *mb;
+
+ /* Remove all mailers sections which don't have a valid listener.
+ * This can happen when a mailers section is never referenced.
+ */
+ last = &mailers;
+ while (*last) {
+ curmailers = *last;
+ if (curmailers->users) {
+ last = &curmailers->next;
+ continue;
+ }
+
+ Warning("Removing incomplete section 'mailers %s'.\n",
+ curmailers->id);
+
+ m = curmailers->mailer_list;
+ while (m) {
+ mb = m->next;
+ free(m->id);
+ free(m);
+ m = mb;
+ }
+
+ /* Destroy and unlink this curmailers section.
+ * Note: curmailers is backed up into *last.
+ */
+ free(curmailers->id);
+ curmailers = curmailers->next;
+ free(*last);
+ *last = curmailers;
+ }
+ }
+
+ /* Update server_state_file_name to backend name if backend is supposed to use
+ * a server-state file locally defined and none has been provided */
+ for (curproxy = proxy; curproxy; curproxy = curproxy->next) {
+ if (curproxy->load_server_state_from_file == PR_SRV_STATE_FILE_LOCAL &&
+ curproxy->server_state_file_name == NULL)
+ curproxy->server_state_file_name = strdup(curproxy->id);
+ }
+
+ pool2_hdr_idx = create_pool("hdr_idx",
+ global.tune.max_http_hdr * sizeof(struct hdr_idx_elem),
+ MEM_F_SHARED);
+
+ if (cfgerr > 0)
+ err_code |= ERR_ALERT | ERR_FATAL;
+ out:
+ return err_code;
+}
+
+/*
+ * Registers the CFG keyword list <kwl> as a list of valid keywords for next
+ * parsing sessions.
+ */
+void cfg_register_keywords(struct cfg_kw_list *kwl)
+{
+ LIST_ADDQ(&cfg_keywords.list, &kwl->list);
+}
+
+/*
+ * Unregisters the CFG keyword list <kwl> from the list of valid keywords.
+ */
+void cfg_unregister_keywords(struct cfg_kw_list *kwl)
+{
+ LIST_DEL(&kwl->list);
+ LIST_INIT(&kwl->list);
+}
+
+/* This function registers a new section in the haproxy configuration file.
+ * <section_name> is the name of the new section and <section_parser> is
+ * the parser called for it. If two section declarations share the same
+ * name, only the first one declared is used.
+ */
+int cfg_register_section(char *section_name,
+ int (*section_parser)(const char *, int, char **, int))
+{
+ struct cfg_section *cs;
+
+ cs = calloc(1, sizeof(*cs));
+ if (!cs) {
+ Alert("register section '%s': out of memory.\n", section_name);
+ return 0;
+ }
+
+ cs->section_name = section_name;
+ cs->section_parser = section_parser;
+
+ LIST_ADDQ(&sections, &cs->list);
+
+ return 1;
+}
+
+/*
+ * free all config section entries
+ */
+void cfg_unregister_sections(void)
+{
+ struct cfg_section *cs, *ics;
+
+ list_for_each_entry_safe(cs, ics, &sections, list) {
+ LIST_DEL(&cs->list);
+ free(cs);
+ }
+}
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Channel management functions.
+ *
+ * Copyright 2000-2014 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <ctype.h>
+#include <stdarg.h>
+#include <stdio.h>
+#include <string.h>
+
+#include <common/config.h>
+#include <common/buffer.h>
+
+#include <proto/channel.h>
+
+
+/* Schedule up to <bytes> more bytes to be forwarded via the channel without
+ * notifying the owner task. Any data pending in the buffer are scheduled to be
+ * sent as well, in the limit of the number of bytes to forward. This must be
+ * the only method to use to schedule bytes to be forwarded. If the requested
+ * number is too large, it is automatically adjusted. The number of bytes taken
+ * into account is returned. Directly touching ->to_forward will cause lockups
+ * when buf->o goes down to zero if nobody is ready to push the remaining data.
+ */
+unsigned long long __channel_forward(struct channel *chn, unsigned long long bytes)
+{
+ unsigned int new_forward;
+ unsigned int forwarded;
+
+ forwarded = chn->buf->i;
+ b_adv(chn->buf, chn->buf->i);
+
+ /* Note: the case below is the only case where we may return
+ * a byte count that does not fit into a 32-bit number.
+ */
+ if (likely(chn->to_forward == CHN_INFINITE_FORWARD))
+ return bytes;
+
+ if (likely(bytes == CHN_INFINITE_FORWARD)) {
+ chn->to_forward = bytes;
+ return bytes;
+ }
+
+ new_forward = chn->to_forward + bytes - forwarded;
+ bytes = forwarded; /* at least those bytes were scheduled */
+
+ if (new_forward <= chn->to_forward) {
+ /* integer overflow detected, let's assume no more than 2G at once */
+ new_forward = MID_RANGE(new_forward);
+ }
+
+ if (new_forward > chn->to_forward) {
+ bytes += new_forward - chn->to_forward;
+ chn->to_forward = new_forward;
+ }
+ return bytes;
+}
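The `new_forward <= chn->to_forward` test above is a wrap-around check: unsigned addition is modulo 2^N, so an overflowed sum comes out no larger than an operand. A small self-contained sketch of the same trick (HAProxy clamps with `MID_RANGE()` to roughly 2G; this sketch saturates to `UINT_MAX` instead):

```c
#include <limits.h>

/* Unsigned addition wraps modulo 2^N, so overflow of a + b (b > 0) is
 * detected when the truncated sum is smaller than a. Saturate rather
 * than let the counter wrap, as the forwarding code does. */
static unsigned int sat_add(unsigned int a, unsigned int b)
{
	unsigned int sum = a + b;

	if (sum < a)            /* wrapped around */
		sum = UINT_MAX; /* clamp instead of corrupting the counter */
	return sum;
}
```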
+
+/* writes <len> bytes from message <msg> to the channel's buffer. Returns -1 in
+ * case of success, -2 if the message is larger than the buffer size, or the
+ * number of bytes available otherwise. The send limit is automatically
+ * adjusted to the amount of data written. FIXME-20060521: handle unaligned
+ * data. Note: this function appends data to the buffer's output and possibly
+ * overwrites any pending input data which are assumed not to exist.
+ */
+int bo_inject(struct channel *chn, const char *msg, int len)
+{
+ int max;
+
+ if (len == 0)
+ return -1;
+
+ if (len > chn->buf->size) {
+ /* we can't write this chunk and will never be able to, because
+ * it is larger than the buffer. This must be reported as an
+ * error. Then we return -2 so that writers that don't care can
+ * ignore it and go on, and others can check for this value.
+ */
+ return -2;
+ }
+
+ max = buffer_realign(chn->buf);
+
+ if (len > max)
+ return max;
+
+ memcpy(chn->buf->p, msg, len);
+ chn->buf->o += len;
+ chn->buf->p = b_ptr(chn->buf, len);
+ chn->total += len;
+ return -1;
+}
+
+/* Tries to copy character <c> into the channel's buffer after some length
+ * controls. The chn->o and to_forward pointers are updated. If the channel
+ * input is closed, -2 is returned. If there is not enough room left in the
+ * buffer, -1 is returned. Otherwise the number of bytes copied is returned
+ * (1). Channel flag READ_PARTIAL is updated if some data can be transferred.
+ */
+int bi_putchr(struct channel *chn, char c)
+{
+ if (unlikely(channel_input_closed(chn)))
+ return -2;
+
+ if (!channel_may_recv(chn))
+ return -1;
+
+ *bi_end(chn->buf) = c;
+
+ chn->buf->i++;
+ chn->flags |= CF_READ_PARTIAL;
+
+ if (chn->to_forward >= 1) {
+ if (chn->to_forward != CHN_INFINITE_FORWARD)
+ chn->to_forward--;
+ b_adv(chn->buf, 1);
+ }
+
+ chn->total++;
+ return 1;
+}
+
+/* Tries to copy block <blk> at once into the channel's buffer after length
+ * controls. The chn->o and to_forward pointers are updated. If the channel
+ * input is closed, -2 is returned. If the block is too large for this buffer,
+ * -3 is returned. If there is not enough room left in the buffer, -1 is
+ * returned. Otherwise the number of bytes copied is returned (0 being a valid
+ * number). Channel flag READ_PARTIAL is updated if some data can be
+ * transferred.
+ */
+int bi_putblk(struct channel *chn, const char *blk, int len)
+{
+ int max;
+
+ if (unlikely(channel_input_closed(chn)))
+ return -2;
+
+ max = channel_recv_limit(chn);
+ if (unlikely(len > max - buffer_len(chn->buf))) {
+ /* we can't write this chunk right now because the buffer is
+ * almost full or because the block is too large. Return -1 if
+ * some room may come back later, or -3 if it never will.
+ */
+ if (len > max)
+ return -3;
+
+ return -1;
+ }
+
+ if (unlikely(len == 0))
+ return 0;
+
+ /* OK so the data fits in the buffer in one or two blocks */
+ max = buffer_contig_space(chn->buf);
+ memcpy(bi_end(chn->buf), blk, MIN(len, max));
+ if (len > max)
+ memcpy(chn->buf->data, blk + max, len - max);
+
+ chn->buf->i += len;
+ chn->total += len;
+ if (chn->to_forward) {
+ unsigned long fwd = len;
+ if (chn->to_forward != CHN_INFINITE_FORWARD) {
+ if (fwd > chn->to_forward)
+ fwd = chn->to_forward;
+ chn->to_forward -= fwd;
+ }
+ b_adv(chn->buf, fwd);
+ }
+
+ /* notify that some data was read from the SI into the buffer */
+ chn->flags |= CF_READ_PARTIAL;
+ return len;
+}
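bi_putblk() above copies a block into the circular buffer with at most two memcpy() calls: up to the physical end of storage first, then the wrapped remainder at the start. A standalone sketch of that write path, under illustrative names (this is not HAProxy's buffer API):

```c
#include <stddef.h>
#include <string.h>

struct ring {
	char data[8];
	size_t head;  /* next write position */
	size_t len;   /* bytes currently stored */
};

/* Copy <len> bytes in one or two memcpy() calls, wrapping at the end
 * of storage. Returns the byte count, or -1 if the data doesn't fit. */
static int ring_put(struct ring *r, const char *blk, size_t len)
{
	size_t contig;

	if (len > sizeof(r->data) - r->len)
		return -1; /* not enough room */

	contig = sizeof(r->data) - r->head;
	memcpy(r->data + r->head, blk, len < contig ? len : contig);
	if (len > contig)
		memcpy(r->data, blk + contig, len - contig);

	r->head = (r->head + len) % sizeof(r->data);
	r->len += len;
	return (int)len;
}
```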
+
+/* Tries to copy the whole buffer <buf> into the channel's buffer after length
+ * controls. It will only succeed if the target buffer is empty, in which case
+ * it will simply swap the buffers. The buffer not attached to the channel is
+ * returned so that the caller can store it locally. The chn->buf->o and
+ * to_forward pointers are updated. If the output buffer is a dummy buffer or
+ * if it still contains data, <buf> is returned, indicating that nothing could
+ * be done. Channel flag READ_PARTIAL is updated if some data can be transferred.
+ * The chunk's length is updated with the number of bytes sent. On errors, NULL
+ * is returned. Note that only buf->i is considered.
+ */
+struct buffer *bi_swpbuf(struct channel *chn, struct buffer *buf)
+{
+ struct buffer *old;
+
+ if (unlikely(channel_input_closed(chn)))
+ return NULL;
+
+ if (!chn->buf->size || !buffer_empty(chn->buf))
+ return buf;
+
+ old = chn->buf;
+ chn->buf = buf;
+
+ if (!buf->i)
+ return old;
+
+ chn->total += buf->i;
+
+ if (chn->to_forward) {
+ unsigned long fwd = buf->i;
+ if (chn->to_forward != CHN_INFINITE_FORWARD) {
+ if (fwd > chn->to_forward)
+ fwd = chn->to_forward;
+ chn->to_forward -= fwd;
+ }
+ b_adv(chn->buf, fwd);
+ }
+
+ /* notify that some data was read from the SI into the buffer */
+ chn->flags |= CF_READ_PARTIAL;
+ return old;
+}
+
+/* Gets one text line out of a channel's buffer from a stream interface.
+ * Return values :
+ * >0 : number of bytes read. Includes the \n if present before len or end.
+ * =0 : no '\n' before end found. <str> is left undefined.
+ * <0 : no more bytes readable because output is shut.
+ * The channel status is not changed. The caller must call bo_skip() to
+ * update it. The '\n' is waited for as long as neither the buffer nor the
+ * output are full. If either of them is full, the string may be returned
+ * as is, without the '\n'.
+ */
+int bo_getline(struct channel *chn, char *str, int len)
+{
+ int ret, max;
+ char *p;
+
+ ret = 0;
+ max = len;
+
+ /* closed or empty + imminent close = -1; empty = 0 */
+ if (unlikely((chn->flags & CF_SHUTW) || channel_is_empty(chn))) {
+ if (chn->flags & (CF_SHUTW|CF_SHUTW_NOW))
+ ret = -1;
+ goto out;
+ }
+
+ p = bo_ptr(chn->buf);
+
+ if (max > chn->buf->o) {
+ max = chn->buf->o;
+ str[max-1] = 0;
+ }
+ while (max) {
+ *str++ = *p;
+ ret++;
+ max--;
+
+ if (*p == '\n')
+ break;
+ p = buffer_wrap_add(chn->buf, p + 1);
+ }
+ if (ret > 0 && ret < len &&
+ (ret < chn->buf->o || channel_may_recv(chn)) &&
+ *(str-1) != '\n' &&
+ !(chn->flags & (CF_SHUTW|CF_SHUTW_NOW)))
+ ret = 0;
+ out:
+ if (max)
+ *str = 0;
+ return ret;
+}
+
+/* Gets one full block of data at once from a channel's buffer, optionally from
+ * a specific offset. Return values :
+ * >0 : number of bytes read, equal to requested size.
+ * =0 : not enough data available. <blk> is left undefined.
+ * <0 : no more bytes readable because output is shut.
+ * The channel status is not changed. The caller must call bo_skip() to
+ * update it.
+ */
+int bo_getblk(struct channel *chn, char *blk, int len, int offset)
+{
+ int firstblock;
+
+ if (chn->flags & CF_SHUTW)
+ return -1;
+
+ if (len + offset > chn->buf->o) {
+ if (chn->flags & (CF_SHUTW|CF_SHUTW_NOW))
+ return -1;
+ return 0;
+ }
+
+ firstblock = chn->buf->data + chn->buf->size - bo_ptr(chn->buf);
+ if (firstblock > offset) {
+ if (firstblock >= len + offset) {
+ memcpy(blk, bo_ptr(chn->buf) + offset, len);
+ return len;
+ }
+
+ memcpy(blk, bo_ptr(chn->buf) + offset, firstblock - offset);
+ memcpy(blk + firstblock - offset, chn->buf->data, len - firstblock + offset);
+ return len;
+ }
+
+ memcpy(blk, chn->buf->data + offset - firstblock, len);
+ return len;
+}
+
+/* Gets one or two blocks of data at once from a channel's output buffer.
+ * Return values :
+ * >0 : number of blocks filled (1 or 2). blk1 is always filled before blk2.
+ * =0 : not enough data available. <blk*> are left undefined.
+ * <0 : no more bytes readable because output is shut.
+ * The channel status is not changed. The caller must call bo_skip() to
+ * update it. Unused buffers are left in an undefined state.
+ */
+int bo_getblk_nc(struct channel *chn, char **blk1, int *len1, char **blk2, int *len2)
+{
+ if (unlikely(chn->buf->o == 0)) {
+ if (chn->flags & CF_SHUTW)
+ return -1;
+ return 0;
+ }
+
+ if (unlikely(chn->buf->p - chn->buf->o < chn->buf->data)) {
+ *blk1 = chn->buf->p - chn->buf->o + chn->buf->size;
+ *len1 = chn->buf->data + chn->buf->size - *blk1;
+ *blk2 = chn->buf->data;
+ *len2 = chn->buf->p - chn->buf->data;
+ return 2;
+ }
+
+ *blk1 = chn->buf->p - chn->buf->o;
+ *len1 = chn->buf->o;
+ return 1;
+}
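The "_nc" (no-copy) helpers above hand back pointers into the buffer's own storage: one segment when the pending data is contiguous, two when it wraps past the end. A self-contained sketch of that zero-copy peek, with illustrative names only:

```c
#include <stddef.h>

struct oring {
	const char *data;
	size_t size;
	size_t tail;  /* index of the oldest byte */
	size_t len;   /* bytes stored */
};

/* Return 0 if empty, 1 if the data is one contiguous segment, or 2 if
 * it wraps; fills the segment pointers/lengths without copying. */
static int ring_peek(const struct oring *r,
                     const char **blk1, size_t *len1,
                     const char **blk2, size_t *len2)
{
	if (r->len == 0)
		return 0;

	if (r->tail + r->len > r->size) {
		/* wraps: first segment runs to the end of storage */
		*blk1 = r->data + r->tail;
		*len1 = r->size - r->tail;
		*blk2 = r->data;
		*len2 = r->len - *len1;
		return 2;
	}

	*blk1 = r->data + r->tail;
	*len1 = r->len;
	return 1;
}
```

The caller consumes the segments in order and then advances the tail itself, which is exactly the contract bo_getblk_nc() documents with bo_skip().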
+
+/* Gets one text line out of a channel's output buffer from a stream interface.
+ * Return values :
+ * >0 : number of blocks returned (1 or 2). blk1 is always filled before blk2.
+ * =0 : not enough data available.
+ * <0 : no more bytes readable because output is shut.
+ * The '\n' is waited for as long as neither the buffer nor the output are
+ * full. If either of them is full, the string may be returned as is, without
+ * the '\n'. Unused buffers are left in an undefined state.
+ */
+int bo_getline_nc(struct channel *chn,
+ char **blk1, int *len1,
+ char **blk2, int *len2)
+{
+ int retcode;
+ int l;
+
+ retcode = bo_getblk_nc(chn, blk1, len1, blk2, len2);
+ if (unlikely(retcode <= 0))
+ return retcode;
+
+ for (l = 0; l < *len1 && (*blk1)[l] != '\n'; l++);
+ if (l < *len1 && (*blk1)[l] == '\n') {
+ *len1 = l + 1;
+ return 1;
+ }
+
+ if (retcode >= 2) {
+ for (l = 0; l < *len2 && (*blk2)[l] != '\n'; l++);
+ if (l < *len2 && (*blk2)[l] == '\n') {
+ *len2 = l + 1;
+ return 2;
+ }
+ }
+
+ if (chn->flags & CF_SHUTW) {
+ /* If we have found no LF and the buffer is shut, then
+ * the resulting string is made of the concatenation of
+ * the pending blocks (1 or 2).
+ */
+ return retcode;
+ }
+
+ /* No LF yet and not shut yet */
+ return 0;
+}
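Both *_getline_nc() helpers run the same scan: look for the LF in the first segment, then in the second, and report how many segments the line spans. A standalone sketch of that scan, with assumed names:

```c
#include <stddef.h>

/* Returns 1 if a full line ends in the first segment, 2 if it ends in
 * the second, 0 if no LF was found. <out1>/<out2> receive the number
 * of bytes of each segment that belong to the line, LF included. */
static int find_line(const char *b1, size_t l1,
                     const char *b2, size_t l2,
                     size_t *out1, size_t *out2)
{
	size_t i;

	for (i = 0; i < l1; i++)
		if (b1[i] == '\n') {
			*out1 = i + 1;
			*out2 = 0;
			return 1; /* line fits in the first segment */
		}
	for (i = 0; i < l2; i++)
		if (b2[i] == '\n') {
			*out1 = l1;
			*out2 = i + 1;
			return 2; /* line spans both segments */
		}
	return 0; /* no LF yet */
}
```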
+
+/* Gets one full block of data at once from a channel's input buffer.
+ * This function can return the data split into one or two blocks.
+ * Return values :
+ * >0 : number of blocks returned (1 or 2). blk1 is always filled before blk2.
+ * =0 : not enough data available.
+ * <0 : no more bytes readable because input is shut.
+ */
+int bi_getblk_nc(struct channel *chn,
+ char **blk1, int *len1,
+ char **blk2, int *len2)
+{
+ if (unlikely(chn->buf->i == 0)) {
+ if (chn->flags & CF_SHUTR)
+ return -1;
+ return 0;
+ }
+
+ if (unlikely(chn->buf->p + chn->buf->i > chn->buf->data + chn->buf->size)) {
+ *blk1 = chn->buf->p;
+ *len1 = chn->buf->data + chn->buf->size - chn->buf->p;
+ *blk2 = chn->buf->data;
+ *len2 = chn->buf->i - *len1;
+ return 2;
+ }
+
+ *blk1 = chn->buf->p;
+ *len1 = chn->buf->i;
+ return 1;
+}
+
+/* Gets one text line out of a channel's input buffer from a stream interface.
+ * Return values :
+ * >0 : number of blocks returned (1 or 2). blk1 is always filled before blk2.
+ * =0 : not enough data available.
+ * <0 : no more bytes readable because input is shut.
+ * The '\n' is waited for as long as neither the buffer nor the input are
+ * full. If either of them is full, the string may be returned as is, without
+ * the '\n'. Unused buffers are left in an undefined state.
+ */
+int bi_getline_nc(struct channel *chn,
+ char **blk1, int *len1,
+ char **blk2, int *len2)
+{
+ int retcode;
+ int l;
+
+ retcode = bi_getblk_nc(chn, blk1, len1, blk2, len2);
+ if (unlikely(retcode <= 0))
+ return retcode;
+
+ for (l = 0; l < *len1 && (*blk1)[l] != '\n'; l++);
+ if (l < *len1 && (*blk1)[l] == '\n') {
+ *len1 = l + 1;
+ return 1;
+ }
+
+ if (retcode >= 2) {
+ for (l = 0; l < *len2 && (*blk2)[l] != '\n'; l++);
+ if (l < *len2 && (*blk2)[l] == '\n') {
+ *len2 = l + 1;
+ return 2;
+ }
+ }
+
+ if (chn->flags & CF_SHUTW) {
+ /* If we have found no LF and the buffer is shut, then
+ * the resulting string is made of the concatenation of
+ * the pending blocks (1 or 2).
+ */
+ return retcode;
+ }
+
+ /* No LF yet and not shut yet */
+ return 0;
+}
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Health-checks functions.
+ *
+ * Copyright 2000-2009 Willy Tarreau <w@1wt.eu>
+ * Copyright 2007-2009 Krzysztof Piotr Oledzki <ole@ans.pl>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <assert.h>
+#include <ctype.h>
+#include <errno.h>
+#include <fcntl.h>
+#include <signal.h>
+#include <stdarg.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <time.h>
+#include <unistd.h>
+#include <sys/socket.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <netinet/in.h>
+#include <netinet/tcp.h>
+#include <arpa/inet.h>
+
+#include <common/chunk.h>
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/mini-clist.h>
+#include <common/standard.h>
+#include <common/time.h>
+
+#include <types/global.h>
+#include <types/mailers.h>
+#include <types/dns.h>
+
+#ifdef USE_OPENSSL
+#include <types/ssl_sock.h>
+#include <proto/ssl_sock.h>
+#endif /* USE_OPENSSL */
+
+#include <proto/backend.h>
+#include <proto/checks.h>
+#include <proto/dumpstats.h>
+#include <proto/fd.h>
+#include <proto/log.h>
+#include <proto/queue.h>
+#include <proto/port_range.h>
+#include <proto/proto_http.h>
+#include <proto/proto_tcp.h>
+#include <proto/protocol.h>
+#include <proto/proxy.h>
+#include <proto/raw_sock.h>
+#include <proto/server.h>
+#include <proto/stream_interface.h>
+#include <proto/task.h>
+#include <proto/log.h>
+#include <proto/dns.h>
+#include <proto/proto_udp.h>
+
+static int httpchk_expect(struct server *s, int done);
+static int tcpcheck_get_step_id(struct check *);
+static char * tcpcheck_get_step_comment(struct check *, int);
+static void tcpcheck_main(struct connection *);
+
+static const struct check_status check_statuses[HCHK_STATUS_SIZE] = {
+ [HCHK_STATUS_UNKNOWN] = { CHK_RES_UNKNOWN, "UNK", "Unknown" },
+ [HCHK_STATUS_INI] = { CHK_RES_UNKNOWN, "INI", "Initializing" },
+ [HCHK_STATUS_START] = { /* SPECIAL STATUS*/ },
+
+ /* Below we have finished checks */
+ [HCHK_STATUS_CHECKED] = { CHK_RES_NEUTRAL, "CHECKED", "No status change" },
+ [HCHK_STATUS_HANA] = { CHK_RES_FAILED, "HANA", "Health analyze" },
+
+ [HCHK_STATUS_SOCKERR] = { CHK_RES_FAILED, "SOCKERR", "Socket error" },
+
+ [HCHK_STATUS_L4OK] = { CHK_RES_PASSED, "L4OK", "Layer4 check passed" },
+ [HCHK_STATUS_L4TOUT] = { CHK_RES_FAILED, "L4TOUT", "Layer4 timeout" },
+ [HCHK_STATUS_L4CON] = { CHK_RES_FAILED, "L4CON", "Layer4 connection problem" },
+
+ [HCHK_STATUS_L6OK] = { CHK_RES_PASSED, "L6OK", "Layer6 check passed" },
+ [HCHK_STATUS_L6TOUT] = { CHK_RES_FAILED, "L6TOUT", "Layer6 timeout" },
+ [HCHK_STATUS_L6RSP] = { CHK_RES_FAILED, "L6RSP", "Layer6 invalid response" },
+
+ [HCHK_STATUS_L7TOUT] = { CHK_RES_FAILED, "L7TOUT", "Layer7 timeout" },
+ [HCHK_STATUS_L7RSP] = { CHK_RES_FAILED, "L7RSP", "Layer7 invalid response" },
+
+ [HCHK_STATUS_L57DATA] = { /* DUMMY STATUS */ },
+
+ [HCHK_STATUS_L7OKD] = { CHK_RES_PASSED, "L7OK", "Layer7 check passed" },
+ [HCHK_STATUS_L7OKCD] = { CHK_RES_CONDPASS, "L7OKC", "Layer7 check conditionally passed" },
+ [HCHK_STATUS_L7STS] = { CHK_RES_FAILED, "L7STS", "Layer7 wrong status" },
+
+ [HCHK_STATUS_PROCERR] = { CHK_RES_FAILED, "PROCERR", "External check error" },
+ [HCHK_STATUS_PROCTOUT] = { CHK_RES_FAILED, "PROCTOUT", "External check timeout" },
+ [HCHK_STATUS_PROCOK] = { CHK_RES_PASSED, "PROCOK", "External check passed" },
+};
+
+const struct extcheck_env extcheck_envs[EXTCHK_SIZE] = {
+ [EXTCHK_PATH] = { "PATH", EXTCHK_SIZE_EVAL_INIT },
+ [EXTCHK_HAPROXY_PROXY_NAME] = { "HAPROXY_PROXY_NAME", EXTCHK_SIZE_EVAL_INIT },
+ [EXTCHK_HAPROXY_PROXY_ID] = { "HAPROXY_PROXY_ID", EXTCHK_SIZE_EVAL_INIT },
+ [EXTCHK_HAPROXY_PROXY_ADDR] = { "HAPROXY_PROXY_ADDR", EXTCHK_SIZE_EVAL_INIT },
+ [EXTCHK_HAPROXY_PROXY_PORT] = { "HAPROXY_PROXY_PORT", EXTCHK_SIZE_EVAL_INIT },
+ [EXTCHK_HAPROXY_SERVER_NAME] = { "HAPROXY_SERVER_NAME", EXTCHK_SIZE_EVAL_INIT },
+ [EXTCHK_HAPROXY_SERVER_ID] = { "HAPROXY_SERVER_ID", EXTCHK_SIZE_EVAL_INIT },
+ [EXTCHK_HAPROXY_SERVER_ADDR] = { "HAPROXY_SERVER_ADDR", EXTCHK_SIZE_EVAL_INIT },
+ [EXTCHK_HAPROXY_SERVER_PORT] = { "HAPROXY_SERVER_PORT", EXTCHK_SIZE_EVAL_INIT },
+ [EXTCHK_HAPROXY_SERVER_MAXCONN] = { "HAPROXY_SERVER_MAXCONN", EXTCHK_SIZE_EVAL_INIT },
+ [EXTCHK_HAPROXY_SERVER_CURCONN] = { "HAPROXY_SERVER_CURCONN", EXTCHK_SIZE_ULONG },
+};
+
+static const struct analyze_status analyze_statuses[HANA_STATUS_SIZE] = { /* 0: ignore, 1: error, 2: OK */
+ [HANA_STATUS_UNKNOWN] = { "Unknown", { 0, 0 }},
+
+ [HANA_STATUS_L4_OK] = { "L4 successful connection", { 2, 0 }},
+ [HANA_STATUS_L4_ERR] = { "L4 unsuccessful connection", { 1, 1 }},
+
+ [HANA_STATUS_HTTP_OK] = { "Correct http response", { 0, 2 }},
+ [HANA_STATUS_HTTP_STS] = { "Wrong http response", { 0, 1 }},
+ [HANA_STATUS_HTTP_HDRRSP] = { "Invalid http response (headers)", { 0, 1 }},
+ [HANA_STATUS_HTTP_RSP] = { "Invalid http response", { 0, 1 }},
+
+ [HANA_STATUS_HTTP_READ_ERROR] = { "Read error (http)", { 0, 1 }},
+ [HANA_STATUS_HTTP_READ_TIMEOUT] = { "Read timeout (http)", { 0, 1 }},
+ [HANA_STATUS_HTTP_BROKEN_PIPE] = { "Close from server (http)", { 0, 1 }},
+};
+
+/*
+ * Convert check_status code to description
+ */
+const char *get_check_status_description(short check_status) {
+
+ const char *desc;
+
+ if (check_status < HCHK_STATUS_SIZE)
+ desc = check_statuses[check_status].desc;
+ else
+ desc = NULL;
+
+ if (desc && *desc)
+ return desc;
+ else
+ return check_statuses[HCHK_STATUS_UNKNOWN].desc;
+}
+
+/*
+ * Convert check_status code to short info
+ */
+const char *get_check_status_info(short check_status) {
+
+ const char *info;
+
+ if (check_status < HCHK_STATUS_SIZE)
+ info = check_statuses[check_status].info;
+ else
+ info = NULL;
+
+ if (info && *info)
+ return info;
+ else
+ return check_statuses[HCHK_STATUS_UNKNOWN].info;
+}
+
+const char *get_analyze_status(short analyze_status) {
+
+ const char *desc;
+
+ if (analyze_status < HANA_STATUS_SIZE)
+ desc = analyze_statuses[analyze_status].desc;
+ else
+ desc = NULL;
+
+ if (desc && *desc)
+ return desc;
+ else
+ return analyze_statuses[HANA_STATUS_UNKNOWN].desc;
+}
+
+/* Builds a string containing some information about the health check's result.
+ * The output string is allocated from the trash chunks. If the check is NULL,
+ * NULL is returned. This is designed to be used when emitting logs about health
+ * checks.
+ */
+static const char *check_reason_string(struct check *check)
+{
+ struct chunk *msg;
+
+ if (!check)
+ return NULL;
+
+ msg = get_trash_chunk();
+ chunk_printf(msg, "reason: %s", get_check_status_description(check->status));
+
+ if (check->status >= HCHK_STATUS_L57DATA)
+ chunk_appendf(msg, ", code: %d", check->code);
+
+ if (*check->desc) {
+ struct chunk src;
+
+ chunk_appendf(msg, ", info: \"");
+
+ chunk_initlen(&src, check->desc, 0, strlen(check->desc));
+ chunk_asciiencode(msg, &src, '"');
+
+ chunk_appendf(msg, "\"");
+ }
+
+ if (check->duration >= 0)
+ chunk_appendf(msg, ", check duration: %ldms", check->duration);
+
+ return msg->str;
+}
+
+/*
+ * Set check->status, update check->duration and fill check->result with
+ * an adequate CHK_RES_* value. The new check->health is computed based
+ * on the result.
+ *
+ * Failed health checks are logged while the server is UP, and
+ * successful ones while the server is DOWN.
+ */
+static void set_server_check_status(struct check *check, short status, const char *desc)
+{
+ struct server *s = check->server;
+ short prev_status = check->status;
+ int report = 0;
+
+ if (status == HCHK_STATUS_START) {
+ check->result = CHK_RES_UNKNOWN; /* no result yet */
+ check->desc[0] = '\0';
+ check->start = now;
+ return;
+ }
+
+ if (!check->status)
+ return;
+
+ if (desc && *desc) {
+ strncpy(check->desc, desc, HCHK_DESC_LEN-1);
+ check->desc[HCHK_DESC_LEN-1] = '\0';
+ } else
+ check->desc[0] = '\0';
+
+ check->status = status;
+ if (check_statuses[status].result)
+ check->result = check_statuses[status].result;
+
+ if (status == HCHK_STATUS_HANA)
+ check->duration = -1;
+ else if (!tv_iszero(&check->start)) {
+ /* set_server_check_status() may be called more than once */
+ check->duration = tv_ms_elapsed(&check->start, &now);
+ tv_zero(&check->start);
+ }
+
+ /* no change is expected if no state change occurred */
+ if (check->result == CHK_RES_NEUTRAL)
+ return;
+
+ report = 0;
+
+ switch (check->result) {
+ case CHK_RES_FAILED:
+ /* Failure to connect to the agent as a secondary check should not
+ * cause the server to be marked down.
+ */
+ if ((!(check->state & CHK_ST_AGENT) ||
+ (check->status >= HCHK_STATUS_L57DATA)) &&
+ (check->health >= check->rise)) {
+ s->counters.failed_checks++;
+ report = 1;
+ check->health--;
+ if (check->health < check->rise)
+ check->health = 0;
+ }
+ break;
+
+ case CHK_RES_PASSED:
+ case CHK_RES_CONDPASS: /* "condpass" cannot make the first step but is OK after a "passed" */
+ if ((check->health < check->rise + check->fall - 1) &&
+ (check->result == CHK_RES_PASSED || check->health > 0)) {
+ report = 1;
+ check->health++;
+
+ if (check->health >= check->rise)
+ check->health = check->rise + check->fall - 1; /* OK now */
+ }
+
+ /* clear consecutive_errors if observing is enabled */
+ if (s->onerror)
+ s->consecutive_errors = 0;
+ break;
+
+ default:
+ break;
+ }
+
+ if (s->proxy->options2 & PR_O2_LOGHCHKS &&
+ (status != prev_status || report)) {
+ chunk_printf(&trash,
+ "%s check for %sserver %s/%s %s%s",
+ (check->state & CHK_ST_AGENT) ? "Agent" : "Health",
+ s->flags & SRV_F_BACKUP ? "backup " : "",
+ s->proxy->id, s->id,
+ (check->result == CHK_RES_CONDPASS) ? "conditionally ":"",
+ (check->result >= CHK_RES_PASSED) ? "succeeded" : "failed");
+
+ srv_append_status(&trash, s, check_reason_string(check), -1, 0);
+
+ chunk_appendf(&trash, ", status: %d/%d %s",
+ (check->health >= check->rise) ? check->health - check->rise + 1 : check->health,
+ (check->health >= check->rise) ? check->fall : check->rise,
+ (check->health >= check->rise) ? (s->uweight ? "UP" : "DRAIN") : "DOWN");
+
+ Warning("%s.\n", trash.str);
+ send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
+ send_email_alert(s, LOG_INFO, "%s", trash.str);
+ }
+}
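The rise/fall accounting above is a hysteresis counter: health lives in [0, rise + fall - 1], the server counts as UP when health >= rise, a pass at or above rise snaps health to the maximum, and a failure that drops below rise snaps it to 0. A simplified sketch under assumed names (it ignores the conditional-pass and agent-check nuances handled above):

```c
struct health {
	int value;
	int rise;
	int fall;
};

/* A successful check moves health up; once at or above rise, jump to
 * the top of the range so "fall" consecutive failures are needed to
 * go back down. */
static void health_pass(struct health *h)
{
	if (h->value < h->rise + h->fall - 1)
		h->value++;
	if (h->value >= h->rise)
		h->value = h->rise + h->fall - 1; /* fully OK */
}

/* A failed check consumes the "fall" budget; dropping below rise
 * zeroes health so "rise" consecutive passes are needed to come up. */
static void health_fail(struct health *h)
{
	if (h->value >= h->rise)
		h->value--;
	if (h->value < h->rise)
		h->value = 0;
}

static int health_is_up(const struct health *h)
{
	return h->value >= h->rise;
}
```

The snapping in both directions is what prevents a flapping server from oscillating on every single check result.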
+
+/* Marks the check <check>'s server down if the current check is already failed
+ * and the server is not down yet nor in maintenance.
+ */
+static void check_notify_failure(struct check *check)
+{
+ struct server *s = check->server;
+
+ /* The agent secondary check should only cause a server to be marked
+ * as down if check->status is HCHK_STATUS_L7STS, which indicates
+ * that the agent returned "fail", "stopped" or "down".
+ * The implication here is that failure to connect to the agent
+ * as a secondary check should not cause the server to be marked
+ * down. */
+ if ((check->state & CHK_ST_AGENT) && check->status != HCHK_STATUS_L7STS)
+ return;
+
+ if (check->health > 0)
+ return;
+
+ /* We only report a reason for the check if we did not do so previously */
+ srv_set_stopped(s, (!s->track && !(s->proxy->options2 & PR_O2_LOGHCHKS)) ? check_reason_string(check) : NULL);
+}
+
+/* Marks the check <check> as valid and tries to set its server up, provided
+ * it isn't in maintenance, it is not tracking a down server and other checks
+ * comply. The rule is simple : by default, a server is up, unless any of the
+ * following conditions is true :
+ * - health check failed (check->health < rise)
+ * - agent check failed (agent->health < rise)
+ * - the server tracks a down server (track && track->state == STOPPED)
+ * Note that if the server has a slowstart, it will switch to STARTING instead
+ * of RUNNING. Also, only the health checks support the nolb mode, so the
+ * agent's success may not take the server out of this mode.
+ */
+static void check_notify_success(struct check *check)
+{
+ struct server *s = check->server;
+
+ if (s->admin & SRV_ADMF_MAINT)
+ return;
+
+ if (s->track && s->track->state == SRV_ST_STOPPED)
+ return;
+
+ if ((s->check.state & CHK_ST_ENABLED) && (s->check.health < s->check.rise))
+ return;
+
+ if ((s->agent.state & CHK_ST_ENABLED) && (s->agent.health < s->agent.rise))
+ return;
+
+ if ((check->state & CHK_ST_AGENT) && s->state == SRV_ST_STOPPING)
+ return;
+
+ srv_set_running(s, (!s->track && !(s->proxy->options2 & PR_O2_LOGHCHKS)) ? check_reason_string(check) : NULL);
+}
+
+/* Marks the check <check> as valid and tries to set its server into stopping mode
+ * if it was running or starting, and provided it isn't in maintenance and other
+ * checks comply. The conditions for the server to be marked in stopping mode are
+ * the same as for it to be turned up. Also, only the health checks support the
+ * nolb mode.
+ */
+static void check_notify_stopping(struct check *check)
+{
+ struct server *s = check->server;
+
+ if (s->admin & SRV_ADMF_MAINT)
+ return;
+
+ if (check->state & CHK_ST_AGENT)
+ return;
+
+ if (s->track && s->track->state == SRV_ST_STOPPED)
+ return;
+
+ if ((s->check.state & CHK_ST_ENABLED) && (s->check.health < s->check.rise))
+ return;
+
+ if ((s->agent.state & CHK_ST_ENABLED) && (s->agent.health < s->agent.rise))
+ return;
+
+ srv_set_stopping(s, (!s->track && !(s->proxy->options2 & PR_O2_LOGHCHKS)) ? check_reason_string(check) : NULL);
+}
+
+/* note: use health_adjust() only, which first checks that the observe mode is
+ * enabled.
+ */
+void __health_adjust(struct server *s, short status)
+{
+ int failed;
+ int expire;
+
+ if (s->observe >= HANA_OBS_SIZE)
+ return;
+
+ if (status >= HANA_STATUS_SIZE || !analyze_statuses[status].desc)
+ return;
+
+ switch (analyze_statuses[status].lr[s->observe - 1]) {
+ case 1:
+ failed = 1;
+ break;
+
+ case 2:
+ failed = 0;
+ break;
+
+ default:
+ return;
+ }
+
+ if (!failed) {
+ /* good: clear consecutive_errors */
+ s->consecutive_errors = 0;
+ return;
+ }
+
+ s->consecutive_errors++;
+
+ if (s->consecutive_errors < s->consecutive_errors_limit)
+ return;
+
+ chunk_printf(&trash, "Detected %d consecutive errors, last one was: %s",
+ s->consecutive_errors, get_analyze_status(status));
+
+ switch (s->onerror) {
+ case HANA_ONERR_FASTINTER:
+ /* force fastinter - nothing to do here as all modes force it */
+ break;
+
+ case HANA_ONERR_SUDDTH:
+ /* simulate a pre-fatal failed health check */
+ if (s->check.health > s->check.rise)
+ s->check.health = s->check.rise + 1;
+
+ /* no break - fall through */
+
+ case HANA_ONERR_FAILCHK:
+ /* simulate a failed health check */
+ set_server_check_status(&s->check, HCHK_STATUS_HANA, trash.str);
+ check_notify_failure(&s->check);
+ break;
+
+ case HANA_ONERR_MARKDWN:
+ /* mark server down */
+ s->check.health = s->check.rise;
+ set_server_check_status(&s->check, HCHK_STATUS_HANA, trash.str);
+ check_notify_failure(&s->check);
+ break;
+
+ default:
+ /* write a warning? */
+ break;
+ }
+
+ s->consecutive_errors = 0;
+ s->counters.failed_hana++;
+
+ if (s->check.fastinter) {
+ expire = tick_add(now_ms, MS_TO_TICKS(s->check.fastinter));
+ if (s->check.task->expire > expire) {
+ s->check.task->expire = expire;
+ /* requeue check task with new expire */
+ task_queue(s->check.task);
+ }
+ }
+}
+
+static int httpchk_build_status_header(struct server *s, char *buffer, int size)
+{
+ int sv_state;
+ int ratio;
+ int hlen = 0;
+ char addr[46];
+ char port[6];
+ const char *srv_hlt_st[7] = { "DOWN", "DOWN %d/%d",
+ "UP %d/%d", "UP",
+ "NOLB %d/%d", "NOLB",
+ "no check" };
+
+ memcpy(buffer + hlen, "X-Haproxy-Server-State: ", 24);
+ hlen += 24;
+
+ if (!(s->check.state & CHK_ST_ENABLED))
+ sv_state = 6;
+ else if (s->state != SRV_ST_STOPPED) {
+ if (s->check.health == s->check.rise + s->check.fall - 1)
+ sv_state = 3; /* UP */
+ else
+ sv_state = 2; /* going down */
+
+ if (s->state == SRV_ST_STOPPING)
+ sv_state += 2;
+ } else {
+ if (s->check.health)
+ sv_state = 1; /* going up */
+ else
+ sv_state = 0; /* DOWN */
+ }
+
+ hlen += snprintf(buffer + hlen, size - hlen,
+ srv_hlt_st[sv_state],
+ (s->state != SRV_ST_STOPPED) ? (s->check.health - s->check.rise + 1) : (s->check.health),
+ (s->state != SRV_ST_STOPPED) ? (s->check.fall) : (s->check.rise));
+
+ addr_to_str(&s->addr, addr, sizeof(addr));
+ port_to_str(&s->addr, port, sizeof(port));
+
+ hlen += snprintf(buffer + hlen, size - hlen, "; address=%s; port=%s; name=%s/%s; node=%s; weight=%d/%d; scur=%d/%d; qcur=%d",
+ addr, port, s->proxy->id, s->id,
+ global.node,
+ (s->eweight * s->proxy->lbprm.wmult + s->proxy->lbprm.wdiv - 1) / s->proxy->lbprm.wdiv,
+ (s->proxy->lbprm.tot_weight * s->proxy->lbprm.wmult + s->proxy->lbprm.wdiv - 1) / s->proxy->lbprm.wdiv,
+ s->cur_sess, s->proxy->beconn - s->proxy->nbpend,
+ s->nbpend);
+
+ if ((s->state == SRV_ST_STARTING) &&
+ now.tv_sec < s->last_change + s->slowstart &&
+ now.tv_sec >= s->last_change) {
+ ratio = MAX(1, 100 * (now.tv_sec - s->last_change) / s->slowstart);
+ hlen += snprintf(buffer + hlen, size - hlen, "; throttle=%d%%", ratio);
+ }
+
+ buffer[hlen++] = '\r';
+ buffer[hlen++] = '\n';
+
+ return hlen;
+}
+
+/* Check the connection. If an error has already been reported or the socket is
+ * closed, keep errno intact as it is supposed to contain the valid error code.
+ * If no error is reported, check the socket's error queue using getsockopt().
+ * Warning, this must be done only once when returning from poll, and never
+ * after an I/O operation was attempted, otherwise the error queue might contain
+ * inconsistent errors. If an error is detected, the CO_FL_ERROR is set on the
+ * socket. Returns non-zero if an error was reported, zero if everything is
+ * clean (including a properly closed socket).
+ */
+static int retrieve_errno_from_socket(struct connection *conn)
+{
+ int skerr;
+ socklen_t lskerr = sizeof(skerr);
+
+ if (conn->flags & CO_FL_ERROR && ((errno && errno != EAGAIN) || !conn->ctrl))
+ return 1;
+
+ if (!conn_ctrl_ready(conn))
+ return 0;
+
+ if (getsockopt(conn->t.sock.fd, SOL_SOCKET, SO_ERROR, &skerr, &lskerr) == 0)
+ errno = skerr;
+
+ if (errno == EAGAIN)
+ errno = 0;
+
+ if (!errno) {
+ /* we could not retrieve an error, that does not mean there is
+ * none. Just don't change anything and only report the prior
+ * error if any.
+ */
+ if (conn->flags & CO_FL_ERROR)
+ return 1;
+ else
+ return 0;
+ }
+
+ conn->flags |= CO_FL_ERROR | CO_FL_SOCK_WR_SH | CO_FL_SOCK_RD_SH;
+ return 1;
+}
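The heart of the function above is the standard SO_ERROR pattern: reading the pending error with getsockopt() also clears it from the socket, which is why it must only be done once per poll wakeup. A minimal standalone version (the helper name is illustrative; the getsockopt() call itself is the real POSIX API):

```c
#include <errno.h>
#include <sys/socket.h>
#include <unistd.h>

/* Return the socket's pending error (0 if none). Note that reading
 * SO_ERROR clears the pending error, so call this only once per
 * readiness notification. */
static int socket_pending_error(int fd)
{
	int skerr = 0;
	socklen_t len = sizeof(skerr);

	if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &skerr, &len) < 0)
		return errno; /* getsockopt itself failed (e.g. EBADF) */
	return skerr;
}
```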
+
+/* Try to collect as much information as possible on the connection status,
+ * and adjust the server status accordingly. It may make use of <errno_bck>
+ * if non-null when the caller is absolutely certain of its validity (eg:
+ * checked just after a syscall). If the caller doesn't have a valid errno,
+ * it can pass zero, and retrieve_errno_from_socket() will be called to try
+ * to extract errno from the socket. If no error is reported, it will consider
+ * the <expired> flag. This is intended to be used when a connection error was
+ * reported in conn->flags or when a timeout was reported in <expired>. The
+ * function takes care of not updating a server status which was already set.
+ * All situations where at least one of <expired> or CO_FL_ERROR are set
+ * produce a status.
+ */
+static void chk_report_conn_err(struct connection *conn, int errno_bck, int expired)
+{
+ struct check *check = conn->owner;
+ const char *err_msg;
+ struct chunk *chk;
+ int step;
+ char *comment;
+
+ if (check->result != CHK_RES_UNKNOWN)
+ return;
+
+ errno = errno_bck;
+ if (!errno || errno == EAGAIN)
+ retrieve_errno_from_socket(conn);
+
+ if (!(conn->flags & CO_FL_ERROR) && !expired)
+ return;
+
+ /* we'll try to build a meaningful error message depending on the
+ * context of the error possibly present in conn->err_code, and the
+ * socket error possibly collected above. This is useful to know the
+ * exact step of the L6 layer (eg: SSL handshake).
+ */
+ chk = get_trash_chunk();
+
+ if (check->type == PR_O2_TCPCHK_CHK) {
+ step = tcpcheck_get_step_id(check);
+ if (!step)
+ chunk_printf(chk, " at initial connection step of tcp-check");
+ else {
+ chunk_printf(chk, " at step %d of tcp-check", step);
+ /* detail the type of the last step that was started */
+ if (check->last_started_step && check->last_started_step->action == TCPCHK_ACT_CONNECT) {
+ if (check->last_started_step->port)
+ chunk_appendf(chk, " (connect port %d)", check->last_started_step->port);
+ else
+ chunk_appendf(chk, " (connect)");
+ }
+ else if (check->last_started_step && check->last_started_step->action == TCPCHK_ACT_EXPECT) {
+ if (check->last_started_step->string)
+ chunk_appendf(chk, " (expect string '%s')", check->last_started_step->string);
+ else if (check->last_started_step->expect_regex)
+ chunk_appendf(chk, " (expect regex)");
+ }
+ else if (check->last_started_step && check->last_started_step->action == TCPCHK_ACT_SEND) {
+ chunk_appendf(chk, " (send)");
+ }
+
+ comment = tcpcheck_get_step_comment(check, step);
+ if (comment)
+ chunk_appendf(chk, " comment: '%s'", comment);
+ }
+ }
+
+ if (conn->err_code) {
+ if (errno && errno != EAGAIN)
+ chunk_printf(&trash, "%s (%s)%s", conn_err_code_str(conn), strerror(errno), chk->str);
+ else
+ chunk_printf(&trash, "%s%s", conn_err_code_str(conn), chk->str);
+ err_msg = trash.str;
+ }
+ else {
+ if (errno && errno != EAGAIN) {
+ chunk_printf(&trash, "%s%s", strerror(errno), chk->str);
+ err_msg = trash.str;
+ }
+ else {
+ err_msg = chk->str;
+ }
+ }
+
+ if ((conn->flags & (CO_FL_CONNECTED|CO_FL_WAIT_L4_CONN)) == CO_FL_WAIT_L4_CONN) {
+ /* L4 not established (yet) */
+ if (conn->flags & CO_FL_ERROR)
+ set_server_check_status(check, HCHK_STATUS_L4CON, err_msg);
+ else if (expired)
+ set_server_check_status(check, HCHK_STATUS_L4TOUT, err_msg);
+
+ /*
+ * might be due to a server IP change.
+ * Let's trigger a DNS resolution if none are currently running.
+ */
+ if ((check->server->resolution) && (check->server->resolution->step == RSLV_STEP_NONE))
+ trigger_resolution(check->server);
+
+ }
+ else if ((conn->flags & (CO_FL_CONNECTED|CO_FL_WAIT_L6_CONN)) == CO_FL_WAIT_L6_CONN) {
+ /* L6 not established (yet) */
+ if (conn->flags & CO_FL_ERROR)
+ set_server_check_status(check, HCHK_STATUS_L6RSP, err_msg);
+ else if (expired)
+ set_server_check_status(check, HCHK_STATUS_L6TOUT, err_msg);
+ }
+ else if (conn->flags & CO_FL_ERROR) {
+ /* I/O error after connection was established and before we could diagnose */
+ set_server_check_status(check, HCHK_STATUS_SOCKERR, err_msg);
+ }
+ else if (expired) {
+ /* connection established but expired check */
+ if (check->type == PR_O2_SSL3_CHK)
+ set_server_check_status(check, HCHK_STATUS_L6TOUT, err_msg);
+ else /* HTTP, SMTP, ... */
+ set_server_check_status(check, HCHK_STATUS_L7TOUT, err_msg);
+ }
+
+ return;
+}
+
+/*
+ * This function is used only for server health-checks. It handles
+ * the connection acknowledgement. If the proxy requires L7 health-checks,
+ * it sends the request. In other cases, it calls set_server_check_status()
+ * to set check->status, check->duration and check->result.
+ */
+static void event_srv_chk_w(struct connection *conn)
+{
+ struct check *check = conn->owner;
+ struct server *s = check->server;
+ struct task *t = check->task;
+
+ if (unlikely(check->result == CHK_RES_FAILED))
+ goto out_wakeup;
+
+ if (conn->flags & CO_FL_HANDSHAKE)
+ return;
+
+ if (retrieve_errno_from_socket(conn)) {
+ chk_report_conn_err(conn, errno, 0);
+ __conn_data_stop_both(conn);
+ goto out_wakeup;
+ }
+
+ if (conn->flags & (CO_FL_SOCK_WR_SH | CO_FL_DATA_WR_SH)) {
+ /* if the output is closed, we can't do anything */
+ conn->flags |= CO_FL_ERROR;
+ chk_report_conn_err(conn, 0, 0);
+ goto out_wakeup;
+ }
+
+ /* here, we know that the connection is established. That's enough for
+ * a pure TCP check.
+ */
+ if (!check->type)
+ goto out_wakeup;
+
+ if (check->type == PR_O2_TCPCHK_CHK) {
+ tcpcheck_main(conn);
+ return;
+ }
+
+ if (check->bo->o) {
+ conn->xprt->snd_buf(conn, check->bo, 0);
+ if (conn->flags & CO_FL_ERROR) {
+ chk_report_conn_err(conn, errno, 0);
+ __conn_data_stop_both(conn);
+ goto out_wakeup;
+ }
+ if (check->bo->o)
+ return;
+ }
+
+ /* full request sent, we allow up to <timeout.check> if nonzero for a response */
+ if (s->proxy->timeout.check) {
+ t->expire = tick_add_ifset(now_ms, s->proxy->timeout.check);
+ task_queue(t);
+ }
+ goto out_nowake;
+
+ out_wakeup:
+ task_wakeup(t, TASK_WOKEN_IO);
+ out_nowake:
+ __conn_data_stop_send(conn); /* nothing more to write */
+}
+
+/*
+ * This function is used only for server health-checks. It handles the server's
+ * reply to an HTTP request, SSL HELLO or MySQL client Auth. It calls
+ * set_server_check_status() to update check->status, check->duration
+ * and check->result.
+ *
+ * The set_server_check_status function is called with HCHK_STATUS_L7OKD if
+ * an HTTP server replies HTTP 2xx or 3xx (valid responses) or if an SMTP
+ * server returns 2xx, and with HCHK_STATUS_L6OK if an SSL server returns at
+ * least 5 bytes in response to an SSL HELLO (the principle being that this is
+ * enough to distinguish between an SSL server and a pure TCP relay). All other
+ * cases will call it with a proper error status like HCHK_STATUS_L7STS,
+ * HCHK_STATUS_L6RSP, etc.
+ */
+static void event_srv_chk_r(struct connection *conn)
+{
+ struct check *check = conn->owner;
+ struct server *s = check->server;
+ struct task *t = check->task;
+ char *desc;
+ int done;
+ unsigned short msglen;
+
+ if (unlikely(check->result == CHK_RES_FAILED))
+ goto out_wakeup;
+
+ if (conn->flags & CO_FL_HANDSHAKE)
+ return;
+
+ if (check->type == PR_O2_TCPCHK_CHK) {
+ tcpcheck_main(conn);
+ return;
+ }
+
+ /* Warning! Linux returns EAGAIN on SO_ERROR if data are still available
+ * but the connection was closed on the remote end. Fortunately, recv still
+ * works correctly and we don't need to do the getsockopt() on Linux.
+ */
+
+ /* Set buffer to point to the end of the data already read, and check
+ * that there is free space remaining. If the buffer is full, proceed
+ * with running the checks without attempting another socket read.
+ */
+
+ done = 0;
+
+ conn->xprt->rcv_buf(conn, check->bi, check->bi->size);
+ if (conn->flags & (CO_FL_ERROR | CO_FL_SOCK_RD_SH | CO_FL_DATA_RD_SH)) {
+ done = 1;
+ if ((conn->flags & CO_FL_ERROR) && !check->bi->i) {
+ /* Report network errors only if we got no other data. Otherwise
+ * we'll let the upper layers decide whether the response is OK
+ * or not. It is very common that an RST sent by the server is
+ * reported as an error just after the last data chunk.
+ */
+ chk_report_conn_err(conn, errno, 0);
+ goto out_wakeup;
+ }
+ }
+
+ /* Intermediate or complete response received.
+ * Terminate string in check->bi->data buffer.
+ */
+ if (check->bi->i < check->bi->size)
+ check->bi->data[check->bi->i] = '\0';
+ else {
+ check->bi->data[check->bi->i - 1] = '\0';
+ done = 1; /* buffer full, don't wait for more data */
+ }
+
+ /* Run the checks... */
+ switch (check->type) {
+ case PR_O2_HTTP_CHK:
+ if (!done && check->bi->i < strlen("HTTP/1.0 000\r"))
+ goto wait_more_data;
+
+ /* Check if the server speaks HTTP 1.X */
+ if ((check->bi->i < strlen("HTTP/1.0 000\r")) ||
+ (memcmp(check->bi->data, "HTTP/1.", 7) != 0 ||
+ (*(check->bi->data + 12) != ' ' && *(check->bi->data + 12) != '\r')) ||
+ !isdigit((unsigned char) *(check->bi->data + 9)) || !isdigit((unsigned char) *(check->bi->data + 10)) ||
+ !isdigit((unsigned char) *(check->bi->data + 11))) {
+ cut_crlf(check->bi->data);
+ set_server_check_status(check, HCHK_STATUS_L7RSP, check->bi->data);
+
+ goto out_wakeup;
+ }
+
+ check->code = str2uic(check->bi->data + 9);
+ desc = ltrim(check->bi->data + 12, ' ');
+
+ if ((s->proxy->options & PR_O_DISABLE404) &&
+ (s->state != SRV_ST_STOPPED) && (check->code == 404)) {
+ /* 404 may be accepted as "stopping" only if the server was up */
+ cut_crlf(desc);
+ set_server_check_status(check, HCHK_STATUS_L7OKCD, desc);
+ }
+ else if (s->proxy->options2 & PR_O2_EXP_TYPE) {
+ /* Run content verification check... We know we have at least 13 chars */
+ if (!httpchk_expect(s, done))
+ goto wait_more_data;
+ }
+ /* check the reply : HTTP/1.X 2xx and 3xx are OK */
+ else if (*(check->bi->data + 9) == '2' || *(check->bi->data + 9) == '3') {
+ cut_crlf(desc);
+ set_server_check_status(check, HCHK_STATUS_L7OKD, desc);
+ }
+ else {
+ cut_crlf(desc);
+ set_server_check_status(check, HCHK_STATUS_L7STS, desc);
+ }
+ break;
+
+ case PR_O2_SSL3_CHK:
+ if (!done && check->bi->i < 5)
+ goto wait_more_data;
+
+ /* Check for SSLv3 alert or handshake */
+ if ((check->bi->i >= 5) && (*check->bi->data == 0x15 || *check->bi->data == 0x16))
+ set_server_check_status(check, HCHK_STATUS_L6OK, NULL);
+ else
+ set_server_check_status(check, HCHK_STATUS_L6RSP, NULL);
+ break;
+
+ case PR_O2_SMTP_CHK:
+ if (!done && check->bi->i < strlen("000\r"))
+ goto wait_more_data;
+
+ /* Check if the server speaks SMTP */
+ if ((check->bi->i < strlen("000\r")) ||
+ (*(check->bi->data + 3) != ' ' && *(check->bi->data + 3) != '\r') ||
+ !isdigit((unsigned char) *check->bi->data) || !isdigit((unsigned char) *(check->bi->data + 1)) ||
+ !isdigit((unsigned char) *(check->bi->data + 2))) {
+ cut_crlf(check->bi->data);
+ set_server_check_status(check, HCHK_STATUS_L7RSP, check->bi->data);
+
+ goto out_wakeup;
+ }
+
+ check->code = str2uic(check->bi->data);
+
+ desc = ltrim(check->bi->data + 3, ' ');
+ cut_crlf(desc);
+
+ /* Check for SMTP code 2xx (should be 250) */
+ if (*check->bi->data == '2')
+ set_server_check_status(check, HCHK_STATUS_L7OKD, desc);
+ else
+ set_server_check_status(check, HCHK_STATUS_L7STS, desc);
+ break;
+
+ case PR_O2_LB_AGENT_CHK: {
+ int status = HCHK_STATUS_CHECKED;
+ const char *hs = NULL; /* health status */
+ const char *as = NULL; /* admin status */
+ const char *ps = NULL; /* performance status */
+ const char *err = NULL; /* first error to report */
+ const char *wrn = NULL; /* first warning to report */
+ char *cmd, *p;
+
+ /* We're getting an agent check response. The agent could
+ * have been disabled in the mean time with a long check
+ * still pending. It is important that we ignore the whole
+ * response.
+ */
+ if (!(check->server->agent.state & CHK_ST_ENABLED))
+ break;
+
+ /* The agent supports strings made of a single line ended by the
+ * first CR ('\r') or LF ('\n'). This line is composed of words
+ * delimited by spaces (' '), tabs ('\t'), or commas (','). The
+ * line may optionally contain a description of a state change
+ * after a sharp ('#'), which is only considered if a health state
+ * is announced.
+ *
+ * Words may be composed of :
+ * - a numeric weight suffixed by the percent character ('%').
+ * - a health status among "up", "down", "stopped", and "fail".
+ * - an admin status among "ready", "drain", "maint".
+ *
+ * These words may appear in any order. If multiple words of the
+ * same category appear, the last one wins.
+ */
+
+ p = check->bi->data;
+ while (*p && *p != '\n' && *p != '\r')
+ p++;
+
+ if (!*p) {
+ if (!done)
+ goto wait_more_data;
+
+ /* at least inform the admin that the agent is mis-behaving */
+ set_server_check_status(check, check->status, "Ignoring incomplete line from agent");
+ break;
+ }
+
+ *p = 0;
+ cmd = check->bi->data;
+
+ while (*cmd) {
+ /* look for next word */
+ if (*cmd == ' ' || *cmd == '\t' || *cmd == ',') {
+ cmd++;
+ continue;
+ }
+
+ if (*cmd == '#') {
+ /* this is the beginning of a health status description,
+ * skip the sharp and blanks.
+ */
+ cmd++;
+ while (*cmd == '\t' || *cmd == ' ')
+ cmd++;
+ break;
+ }
+
+ /* find the end of the word so that we have a null-terminated
+ * word between <cmd> and <p>.
+ */
+ p = cmd + 1;
+ while (*p && *p != '\t' && *p != ' ' && *p != '\n' && *p != ',')
+ p++;
+ if (*p)
+ *p++ = 0;
+
+ /* first, health statuses */
+ if (strcasecmp(cmd, "up") == 0) {
+ check->health = check->rise + check->fall - 1;
+ status = HCHK_STATUS_L7OKD;
+ hs = cmd;
+ }
+ else if (strcasecmp(cmd, "down") == 0) {
+ check->health = 0;
+ status = HCHK_STATUS_L7STS;
+ hs = cmd;
+ }
+ else if (strcasecmp(cmd, "stopped") == 0) {
+ check->health = 0;
+ status = HCHK_STATUS_L7STS;
+ hs = cmd;
+ }
+ else if (strcasecmp(cmd, "fail") == 0) {
+ check->health = 0;
+ status = HCHK_STATUS_L7STS;
+ hs = cmd;
+ }
+ /* admin statuses */
+ else if (strcasecmp(cmd, "ready") == 0) {
+ as = cmd;
+ }
+ else if (strcasecmp(cmd, "drain") == 0) {
+ as = cmd;
+ }
+ else if (strcasecmp(cmd, "maint") == 0) {
+ as = cmd;
+ }
+ /* else try to parse a weight here and keep the last one */
+ else if (isdigit((unsigned char)*cmd) && strchr(cmd, '%') != NULL) {
+ ps = cmd;
+ }
+ else {
+ /* keep a copy of the first error */
+ if (!err)
+ err = cmd;
+ }
+ /* skip to next word */
+ cmd = p;
+ }
+ /* here, cmd points either to \0 or to the beginning of a
+ * description. Skip possible leading spaces.
+ */
+ while (*cmd == ' ' || *cmd == '\n')
+ cmd++;
+
+ /* First, update the admin status so that we avoid sending other
+ * possibly useless warnings and can also update the health if
+ * present after going back up.
+ */
+ if (as) {
+ if (strcasecmp(as, "drain") == 0)
+ srv_adm_set_drain(check->server);
+ else if (strcasecmp(as, "maint") == 0)
+ srv_adm_set_maint(check->server);
+ else
+ srv_adm_set_ready(check->server);
+ }
+
+ /* now change weights */
+ if (ps) {
+ const char *msg;
+
+ msg = server_parse_weight_change_request(s, ps);
+ if (!wrn || !*wrn)
+ wrn = msg;
+ }
+
+ /* and finally health status */
+ if (hs) {
+ /* We'll report some of the warnings and errors we have
+ * here. Down reports are critical, we leave them untouched.
+ * Lack of report, or report of 'UP' leaves the room for
+ * ERR first, then WARN.
+ */
+ const char *msg = cmd;
+ struct chunk *t;
+
+ if (!*msg || status == HCHK_STATUS_L7OKD) {
+ if (err && *err)
+ msg = err;
+ else if (wrn && *wrn)
+ msg = wrn;
+ }
+
+ t = get_trash_chunk();
+ chunk_printf(t, "via agent : %s%s%s%s",
+ hs, *msg ? " (" : "",
+ msg, *msg ? ")" : "");
+
+ set_server_check_status(check, status, t->str);
+ }
+ else if (err && *err) {
+ /* No status change but we'd like to report something odd.
+ * Just report the current state and copy the message.
+ */
+ chunk_printf(&trash, "agent reports an error : %s", err);
+ set_server_check_status(check, status/*check->status*/, trash.str);
+
+ }
+ else if (wrn && *wrn) {
+ /* No status change but we'd like to report something odd.
+ * Just report the current state and copy the message.
+ */
+ chunk_printf(&trash, "agent warns : %s", wrn);
+ set_server_check_status(check, status/*check->status*/, trash.str);
+ }
+ else
+ set_server_check_status(check, status, NULL);
+ break;
+ }
+
+ case PR_O2_PGSQL_CHK:
+ if (!done && check->bi->i < 9)
+ goto wait_more_data;
+
+ if (check->bi->data[0] == 'R') {
+ set_server_check_status(check, HCHK_STATUS_L7OKD, "PostgreSQL server is ok");
+ }
+ else {
+ if ((check->bi->data[0] == 'E') && (check->bi->data[5]!=0) && (check->bi->data[6]!=0))
+ desc = &check->bi->data[6];
+ else
+ desc = "PostgreSQL unknown error";
+
+ set_server_check_status(check, HCHK_STATUS_L7STS, desc);
+ }
+ break;
+
+ case PR_O2_REDIS_CHK:
+ if (!done && check->bi->i < 7)
+ goto wait_more_data;
+
+ if (strcmp(check->bi->data, "+PONG\r\n") == 0) {
+ set_server_check_status(check, HCHK_STATUS_L7OKD, "Redis server is ok");
+ }
+ else {
+ set_server_check_status(check, HCHK_STATUS_L7STS, check->bi->data);
+ }
+ break;
+
+ case PR_O2_MYSQL_CHK:
+ if (!done && check->bi->i < 5)
+ goto wait_more_data;
+
+ if (s->proxy->check_len == 0) { // old mode
+ if (*(check->bi->data + 4) != '\xff') {
+ /* We set the MySQL Version in description for information purpose
+ * FIXME : it can be cool to use MySQL Version for other purpose,
+ * like mark as down old MySQL server.
+ */
+ if (check->bi->i > 51) {
+ desc = ltrim(check->bi->data + 5, ' ');
+ set_server_check_status(check, HCHK_STATUS_L7OKD, desc);
+ }
+ else {
+ if (!done)
+ goto wait_more_data;
+ /* it seems we have a OK packet but without a valid length,
+ * it must be a protocol error
+ */
+ set_server_check_status(check, HCHK_STATUS_L7RSP, check->bi->data);
+ }
+ }
+ else {
+ /* An error message is attached in the Error packet */
+ desc = ltrim(check->bi->data + 7, ' ');
+ set_server_check_status(check, HCHK_STATUS_L7STS, desc);
+ }
+ } else {
+ unsigned int first_packet_len = ((unsigned int) *check->bi->data) +
+ (((unsigned int) *(check->bi->data + 1)) << 8) +
+ (((unsigned int) *(check->bi->data + 2)) << 16);
+
+ if (check->bi->i == first_packet_len + 4) {
+ /* MySQL Error packet always begin with field_count = 0xff */
+ if (*(check->bi->data + 4) != '\xff') {
+ /* We have only one MySQL packet and it is a Handshake Initialization packet
+ * but we need to have a second packet to know if it is alright
+ */
+ if (!done && check->bi->i < first_packet_len + 5)
+ goto wait_more_data;
+ }
+ else {
+ /* We have only one packet and it is an Error packet,
+ * an error message is attached, so we can display it
+ */
+ desc = &check->bi->data[7];
+ //Warning("onlyoneERR: %s\n", desc);
+ set_server_check_status(check, HCHK_STATUS_L7STS, desc);
+ }
+ } else if (check->bi->i > first_packet_len + 4) {
+ unsigned int second_packet_len = ((unsigned int) *(check->bi->data + first_packet_len + 4)) +
+ (((unsigned int) *(check->bi->data + first_packet_len + 5)) << 8) +
+ (((unsigned int) *(check->bi->data + first_packet_len + 6)) << 16);
+
+ if (check->bi->i == first_packet_len + 4 + second_packet_len + 4) {
+ /* We have 2 packets and that's good */
+ /* Check if the second packet is a MySQL Error packet or not */
+ if (*(check->bi->data + first_packet_len + 8) != '\xff') {
+ /* No error packet */
+ /* We set the MySQL Version in description for information purpose */
+ desc = &check->bi->data[5];
+ //Warning("2packetOK: %s\n", desc);
+ set_server_check_status(check, HCHK_STATUS_L7OKD, desc);
+ }
+ else {
+ /* An error message is attached in the Error packet
+ * so we can display it ! :)
+ */
+ desc = &check->bi->data[first_packet_len+11];
+ //Warning("2packetERR: %s\n", desc);
+ set_server_check_status(check, HCHK_STATUS_L7STS, desc);
+ }
+ }
+ }
+ else {
+ if (!done)
+ goto wait_more_data;
+ /* it seems we have a Handshake Initialization packet but without a valid length,
+ * it must be a protocol error
+ */
+ desc = &check->bi->data[5];
+ //Warning("protoerr: %s\n", desc);
+ set_server_check_status(check, HCHK_STATUS_L7RSP, desc);
+ }
+ }
+ break;
+
+ case PR_O2_LDAP_CHK:
+ if (!done && check->bi->i < 14)
+ goto wait_more_data;
+
+ /* Check if the server speaks LDAP (ASN.1/BER)
+ * http://en.wikipedia.org/wiki/Basic_Encoding_Rules
+ * http://tools.ietf.org/html/rfc4511
+ */
+
+ /* http://tools.ietf.org/html/rfc4511#section-4.1.1
+ * LDAPMessage: 0x30: SEQUENCE
+ */
+ if ((check->bi->i < 14) || (*(check->bi->data) != '\x30')) {
+ set_server_check_status(check, HCHK_STATUS_L7RSP, "Not LDAPv3 protocol");
+ }
+ else {
+ /* size of LDAPMessage */
+ msglen = (*(check->bi->data + 1) & 0x80) ? (*(check->bi->data + 1) & 0x7f) : 0;
+
+ /* http://tools.ietf.org/html/rfc4511#section-4.2.2
+ * messageID: 0x02 0x01 0x01: INTEGER 1
+ * protocolOp: 0x61: bindResponse
+ */
+ if ((msglen > 2) ||
+ (memcmp(check->bi->data + 2 + msglen, "\x02\x01\x01\x61", 4) != 0)) {
+ set_server_check_status(check, HCHK_STATUS_L7RSP, "Not LDAPv3 protocol");
+
+ goto out_wakeup;
+ }
+
+ /* size of bindResponse */
+ msglen += (*(check->bi->data + msglen + 6) & 0x80) ? (*(check->bi->data + msglen + 6) & 0x7f) : 0;
+
+ /* http://tools.ietf.org/html/rfc4511#section-4.1.9
+ * ldapResult: 0x0a 0x01: ENUMERATION
+ */
+ if ((msglen > 4) ||
+ (memcmp(check->bi->data + 7 + msglen, "\x0a\x01", 2) != 0)) {
+ set_server_check_status(check, HCHK_STATUS_L7RSP, "Not LDAPv3 protocol");
+
+ goto out_wakeup;
+ }
+
+ /* http://tools.ietf.org/html/rfc4511#section-4.1.9
+ * resultCode
+ */
+ check->code = *(check->bi->data + msglen + 9);
+ if (check->code) {
+ set_server_check_status(check, HCHK_STATUS_L7STS, "See RFC: http://tools.ietf.org/html/rfc4511#section-4.1.9");
+ } else {
+ set_server_check_status(check, HCHK_STATUS_L7OKD, "Success");
+ }
+ }
+ break;
+
+ default:
+ /* for other checks (eg: pure TCP), delegate to the main task */
+ break;
+ } /* switch */
+
+ out_wakeup:
+ /* collect possible new errors */
+ if (conn->flags & CO_FL_ERROR)
+ chk_report_conn_err(conn, 0, 0);
+
+ /* Reset the check buffer... */
+ *check->bi->data = '\0';
+ check->bi->i = 0;
+
+ /* Close the connection... We absolutely want to perform a hard close
+ * and reset the connection if some data are pending, otherwise we end
+ * up with many TIME_WAITs and eat all the source port range quickly.
+ * To avoid sending RSTs all the time, we first try to drain pending
+ * data.
+ */
+ __conn_data_stop_both(conn);
+ conn_data_shutw_hard(conn);
+
+ /* OK, let's not stay here forever */
+ if (check->result == CHK_RES_FAILED)
+ conn->flags |= CO_FL_ERROR;
+
+ task_wakeup(t, TASK_WOKEN_IO);
+ return;
+
+ wait_more_data:
+ __conn_data_want_recv(conn);
+}
+
+/*
+ * This function is used only for server health-checks. It handles connection
+ * status updates including errors. If necessary, it wakes the check task up.
+ * It always returns 0.
+ */
+static int wake_srv_chk(struct connection *conn)
+{
+ struct check *check = conn->owner;
+
+ if (unlikely(conn->flags & CO_FL_ERROR)) {
+ /* We may get error reports bypassing the I/O handlers, typically
+ * the case when sending a pure TCP check which fails, then the I/O
+ * handlers above are not called. This is completely handled by the
+ * main processing task so let's simply wake it up. If we get here,
+ * we expect errno to still be valid.
+ */
+ chk_report_conn_err(conn, errno, 0);
+
+ __conn_data_stop_both(conn);
+ task_wakeup(check->task, TASK_WOKEN_IO);
+ }
+ else if (!(conn->flags & (CO_FL_DATA_RD_ENA|CO_FL_DATA_WR_ENA|CO_FL_HANDSHAKE))) {
+ /* we may get here if only a connection probe was required : we
+ * don't have any data to send nor anything expected in response,
+ * so the completion of the connection establishment is enough.
+ */
+ task_wakeup(check->task, TASK_WOKEN_IO);
+ }
+
+ if (check->result != CHK_RES_UNKNOWN) {
+ /* We're here because nobody wants to handle the error, so we
+ * sure want to abort the hard way.
+ */
+ conn_sock_drain(conn);
+ conn_force_close(conn);
+ }
+ return 0;
+}
+
+struct data_cb check_conn_cb = {
+ .recv = event_srv_chk_r,
+ .send = event_srv_chk_w,
+ .wake = wake_srv_chk,
+};
+
+/*
+ * updates the server's weight during a warmup stage. Once the final weight is
+ * reached, the task automatically stops. Note that any server status change
+ * must have updated s->last_change accordingly.
+ */
+static struct task *server_warmup(struct task *t)
+{
+ struct server *s = t->context;
+
+ /* by default, plan on stopping the task */
+ t->expire = TICK_ETERNITY;
+ if ((s->admin & SRV_ADMF_MAINT) ||
+ (s->state != SRV_ST_STARTING))
+ return t;
+
+ /* recalculate the weights and update the state */
+ server_recalc_eweight(s);
+
+ /* we can probably refill this server with a few more connections */
+ pendconn_grab_from_px(s);
+
+ /* get back there in 1 second or 1/20th of the slowstart interval,
+ * whichever is greater, resulting in small 5% steps.
+ */
+ if (s->state == SRV_ST_STARTING)
+ t->expire = tick_add(now_ms, MS_TO_TICKS(MAX(1000, s->slowstart / 20)));
+ return t;
+}
+
+/*
+ * establish a server health-check that makes use of a connection.
+ *
+ * It can return one of :
+ * - SF_ERR_NONE if everything's OK and tcpcheck_main() was not called
+ * - SF_ERR_UP if everything's OK and tcpcheck_main() was called
+ * - SF_ERR_SRVTO if there are no more servers
+ * - SF_ERR_SRVCL if the connection was refused by the server
+ * - SF_ERR_PRXCOND if the connection has been limited by the proxy (maxconn)
+ * - SF_ERR_RESOURCE if a system resource is lacking (eg: fd limits, ports, ...)
+ * - SF_ERR_INTERNAL for any other purely internal errors
+ * Additionally, in the case of SF_ERR_RESOURCE, an emergency log will be emitted.
+ * Note that we try to prevent the network stack from sending the ACK during the
+ * connect() when a pure TCP check is used (without PROXY protocol).
+ */
+static int connect_conn_chk(struct task *t)
+{
+ struct check *check = t->context;
+ struct server *s = check->server;
+ struct connection *conn = check->conn;
+ struct protocol *proto;
+ int ret;
+ int quickack;
+
+ /* tcpcheck send/expect initialisation */
+ if (check->type == PR_O2_TCPCHK_CHK)
+ check->current_step = NULL;
+
+ /* prepare the check buffer.
+ * This should not be used if check is the secondary agent check
+ * of a server as s->proxy->check_req will relate to the
+ * configuration of the primary check. Similarly, tcp-check uses
+ * its own strings.
+ */
+ if (check->type && check->type != PR_O2_TCPCHK_CHK && !(check->state & CHK_ST_AGENT)) {
+ bo_putblk(check->bo, s->proxy->check_req, s->proxy->check_len);
+
+ /* we want to check if this host replies to HTTP or SSLv3 requests
+ * so we'll send the request, and won't wake the checker up now.
+ */
+ if ((check->type) == PR_O2_SSL3_CHK) {
+ /* SSL requires that we put Unix time in the request */
+ int gmt_time = htonl(date.tv_sec);
+ memcpy(check->bo->data + 11, &gmt_time, 4);
+ }
+ else if ((check->type) == PR_O2_HTTP_CHK) {
+ if (s->proxy->options2 & PR_O2_CHK_SNDST)
+ bo_putblk(check->bo, trash.str, httpchk_build_status_header(s, trash.str, trash.size));
+ /* prevent HTTP keep-alive when "http-check expect" is used */
+ if (s->proxy->options2 & PR_O2_EXP_TYPE)
+ bo_putstr(check->bo, "Connection: close\r\n");
+ bo_putstr(check->bo, "\r\n");
+ *check->bo->p = '\0'; /* to make gdb output easier to read */
+ }
+ }
+
+ /* prepare a new connection */
+ conn_init(conn);
+
+ if (is_addr(&check->addr)) {
+ /* we'll connect to the check addr specified on the server */
+ conn->addr.to = check->addr;
+ }
+ else {
+ /* we'll connect to the addr on the server */
+ conn->addr.to = s->addr;
+ }
+
+ if (check->port) {
+ set_host_port(&conn->addr.to, check->port);
+ }
+
+ proto = protocol_by_family(conn->addr.to.ss_family);
+
+ conn_prepare(conn, proto, check->xprt);
+ conn_attach(conn, check, &check_conn_cb);
+ conn->target = &s->obj_type;
+
+ /* no client address */
+ clear_addr(&conn->addr.from);
+
+ /* only plain tcp-check supports quick ACK */
+ quickack = check->type == 0 || check->type == PR_O2_TCPCHK_CHK;
+
+ if (check->type == PR_O2_TCPCHK_CHK && !LIST_ISEMPTY(check->tcpcheck_rules)) {
+ struct tcpcheck_rule *r;
+
+ r = LIST_NEXT(check->tcpcheck_rules, struct tcpcheck_rule *, list);
+
+ /* if first step is a 'connect', then tcpcheck_main must run it */
+ if (r->action == TCPCHK_ACT_CONNECT) {
+ tcpcheck_main(conn);
+ return SF_ERR_UP;
+ }
+ if (r->action == TCPCHK_ACT_EXPECT)
+ quickack = 0;
+ }
+
+ ret = SF_ERR_INTERNAL;
+ if (proto->connect)
+ ret = proto->connect(conn, check->type, quickack ? 2 : 0);
+ conn->flags |= CO_FL_WAKE_DATA;
+ if (s->check.send_proxy) {
+ conn->send_proxy_ofs = 1;
+ conn->flags |= CO_FL_SEND_PROXY;
+ }
+
+ return ret;
+}
+
+static struct list pid_list = LIST_HEAD_INIT(pid_list);
+static struct pool_head *pool2_pid_list;
+
+void block_sigchld(void)
+{
+ sigset_t set;
+ sigemptyset(&set);
+ sigaddset(&set, SIGCHLD);
+ assert(sigprocmask(SIG_SETMASK, &set, NULL) == 0);
+}
+
+void unblock_sigchld(void)
+{
+ sigset_t set;
+ sigemptyset(&set);
+ assert(sigprocmask(SIG_SETMASK, &set, NULL) == 0);
+}
+
+/* Call with SIGCHLD blocked */
+static struct pid_list *pid_list_add(pid_t pid, struct task *t)
+{
+ struct pid_list *elem;
+ struct check *check = t->context;
+
+ elem = pool_alloc2(pool2_pid_list);
+ if (!elem)
+ return NULL;
+ elem->pid = pid;
+ elem->t = t;
+ elem->exited = 0;
+ check->curpid = elem;
+ LIST_INIT(&elem->list);
+ LIST_ADD(&pid_list, &elem->list);
+ return elem;
+}
+
+/* Blocks and then unblocks SIGCHLD */
+static void pid_list_del(struct pid_list *elem)
+{
+ struct check *check;
+
+ if (!elem)
+ return;
+
+ block_sigchld();
+ LIST_DEL(&elem->list);
+ unblock_sigchld();
+ if (!elem->exited)
+ kill(elem->pid, SIGTERM);
+
+ check = elem->t->context;
+ check->curpid = NULL;
+ pool_free2(pool2_pid_list, elem);
+}
+
+/* Called from inside SIGCHLD handler, SIGCHLD is blocked */
+static void pid_list_expire(pid_t pid, int status)
+{
+ struct pid_list *elem;
+
+ list_for_each_entry(elem, &pid_list, list) {
+ if (elem->pid == pid) {
+ elem->t->expire = now_ms;
+ elem->status = status;
+ elem->exited = 1;
+ task_wakeup(elem->t, TASK_WOKEN_IO);
+ return;
+ }
+ }
+}
+
+static void sigchld_handler(int signal)
+{
+ pid_t pid;
+ int status;
+ while ((pid = waitpid(0, &status, WNOHANG)) > 0)
+ pid_list_expire(pid, status);
+}
+
+static int init_pid_list(void)
+{
+ struct sigaction action = {
+ .sa_handler = sigchld_handler,
+ .sa_flags = SA_NOCLDSTOP
+ };
+
+ if (pool2_pid_list != NULL)
+ /* Nothing to do */
+ return 0;
+
+ if (sigaction(SIGCHLD, &action, NULL)) {
+ Alert("Failed to set signal handler for external health checks: %s. Aborting.\n",
+ strerror(errno));
+ return 1;
+ }
+
+ pool2_pid_list = create_pool("pid_list", sizeof(struct pid_list), MEM_F_SHARED);
+ if (pool2_pid_list == NULL) {
+ Alert("Failed to allocate memory pool for external health checks: %s. Aborting.\n",
+ strerror(errno));
+ return 1;
+ }
+
+ return 0;
+}
+
+/* helper macro to set an environment variable and jump to a specific label on failure. */
+#define EXTCHK_SETENV(check, envidx, value, fail) { if (extchk_setenv(check, envidx, value)) goto fail; }
+
+/*
+ * helper function to allocate enough memory to store an environment variable.
+ * It will also check that the environment variable is updatable, and silently
+ * fail if not.
+ */
+static int extchk_setenv(struct check *check, int idx, const char *value)
+{
+ int len, ret;
+ char *envname;
+ int vmaxlen;
+
+ if (idx < 0 || idx >= EXTCHK_SIZE) {
+ Alert("Illegal environment variable index %d. Aborting.\n", idx);
+ return 1;
+ }
+
+ envname = extcheck_envs[idx].name;
+ vmaxlen = extcheck_envs[idx].vmaxlen;
+
+ /* Check if the environment variable is already set, and silently reject
+ * the update if this one is not updatable. */
+ if ((vmaxlen == EXTCHK_SIZE_EVAL_INIT) && (check->envp[idx]))
+ return 0;
+
+ /* Instead of sending NOT_USED, sending an empty value is preferable */
+ if (strcmp(value, "NOT_USED") == 0) {
+ value = "";
+ }
+
+ len = strlen(envname) + 1;
+ if (vmaxlen == EXTCHK_SIZE_EVAL_INIT)
+ len += strlen(value);
+ else
+ len += vmaxlen;
+
+ if (!check->envp[idx])
+ check->envp[idx] = malloc(len + 1);
+
+ if (!check->envp[idx]) {
+ Alert("Failed to allocate memory for the environment variable '%s'. Aborting.\n", envname);
+ return 1;
+ }
+ ret = snprintf(check->envp[idx], len + 1, "%s=%s", envname, value);
+ if (ret < 0) {
+ Alert("Failed to store the environment variable '%s'. Reason : %s. Aborting.\n", envname, strerror(errno));
+ return 1;
+ }
+ else if (ret > len) {
+ Alert("Environment variable '%s' was truncated. Aborting.\n", envname);
+ return 1;
+ }
+ return 0;
+}
+
+static int prepare_external_check(struct check *check)
+{
+ struct server *s = check->server;
+ struct proxy *px = s->proxy;
+ struct listener *listener = NULL, *l;
+ int i;
+ const char *path = px->check_path ? px->check_path : DEF_CHECK_PATH;
+ char buf[256];
+
+ list_for_each_entry(l, &px->conf.listeners, by_fe)
+ /* Use the first INET, INET6 or UNIX listener */
+ if (l->addr.ss_family == AF_INET ||
+ l->addr.ss_family == AF_INET6 ||
+ l->addr.ss_family == AF_UNIX) {
+ listener = l;
+ break;
+ }
+
+ check->curpid = NULL;
+ check->envp = calloc((EXTCHK_SIZE + 1), sizeof(char *));
+ if (!check->envp) {
+ Alert("Failed to allocate memory for environment variables. Aborting.\n");
+ goto err;
+ }
+
+ check->argv = calloc(6, sizeof(char *));
+ if (!check->argv) {
+ Alert("Starting [%s:%s] check: out of memory.\n", px->id, s->id);
+ goto err;
+ }
+
+ check->argv[0] = px->check_command;
+
+ if (!listener) {
+ check->argv[1] = strdup("NOT_USED");
+ check->argv[2] = strdup("NOT_USED");
+ }
+ else if (listener->addr.ss_family == AF_INET ||
+ listener->addr.ss_family == AF_INET6) {
+ addr_to_str(&listener->addr, buf, sizeof(buf));
+ check->argv[1] = strdup(buf);
+ port_to_str(&listener->addr, buf, sizeof(buf));
+ check->argv[2] = strdup(buf);
+ }
+ else if (listener->addr.ss_family == AF_UNIX) {
+ const struct sockaddr_un *un;
+
+ un = (struct sockaddr_un *)&listener->addr;
+ check->argv[1] = strdup(un->sun_path);
+ check->argv[2] = strdup("NOT_USED");
+ }
+ else {
+ Alert("Starting [%s:%s] check: unsupported address family.\n", px->id, s->id);
+ goto err;
+ }
+
+ addr_to_str(&s->addr, buf, sizeof(buf));
+ check->argv[3] = strdup(buf);
+ port_to_str(&s->addr, buf, sizeof(buf));
+ check->argv[4] = strdup(buf);
+
+ for (i = 0; i < 5; i++) {
+ if (!check->argv[i]) {
+ Alert("Starting [%s:%s] check: out of memory.\n", px->id, s->id);
+ goto err;
+ }
+ }
+
+ EXTCHK_SETENV(check, EXTCHK_PATH, path, err);
+ /* Add proxy environment variables */
+ EXTCHK_SETENV(check, EXTCHK_HAPROXY_PROXY_NAME, px->id, err);
+ EXTCHK_SETENV(check, EXTCHK_HAPROXY_PROXY_ID, ultoa_r(px->uuid, buf, sizeof(buf)), err);
+ EXTCHK_SETENV(check, EXTCHK_HAPROXY_PROXY_ADDR, check->argv[1], err);
+ EXTCHK_SETENV(check, EXTCHK_HAPROXY_PROXY_PORT, check->argv[2], err);
+ /* Add server environment variables */
+ EXTCHK_SETENV(check, EXTCHK_HAPROXY_SERVER_NAME, s->id, err);
+ EXTCHK_SETENV(check, EXTCHK_HAPROXY_SERVER_ID, ultoa_r(s->puid, buf, sizeof(buf)), err);
+ EXTCHK_SETENV(check, EXTCHK_HAPROXY_SERVER_ADDR, check->argv[3], err);
+ EXTCHK_SETENV(check, EXTCHK_HAPROXY_SERVER_PORT, check->argv[4], err);
+ EXTCHK_SETENV(check, EXTCHK_HAPROXY_SERVER_MAXCONN, ultoa_r(s->maxconn, buf, sizeof(buf)), err);
+ EXTCHK_SETENV(check, EXTCHK_HAPROXY_SERVER_CURCONN, ultoa_r(s->cur_sess, buf, sizeof(buf)), err);
+
+ /* Ensure that we don't leave any hole in check->envp */
+ for (i = 0; i < EXTCHK_SIZE; i++)
+ if (!check->envp[i])
+ EXTCHK_SETENV(check, i, "", err);
+
+ return 1;
+err:
+ if (check->envp) {
+ for (i = 0; i < EXTCHK_SIZE; i++)
+ free(check->envp[i]);
+ free(check->envp);
+ check->envp = NULL;
+ }
+
+ if (check->argv) {
+ for (i = 1; i < 5; i++)
+ free(check->argv[i]);
+ free(check->argv);
+ check->argv = NULL;
+ }
+ return 0;
+}
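+
+/*
+ * Illustrative usage (not part of this file; names are examples): the
+ * argv/envp contract prepared above corresponds to an "external-check"
+ * configuration such as:
+ *
+ *   backend be_app
+ *       option external-check
+ *       external-check command /usr/local/bin/check_backend.sh
+ *       server srv1 192.0.2.10:80 check
+ *
+ * The command is executed as:
+ *   <command> <proxy_address> <proxy_port> <server_address> <server_port>
+ * with the HAPROXY_* variables set above (e.g. HAPROXY_SERVER_ADDR,
+ * HAPROXY_SERVER_PORT) available in its environment. An exit status of
+ * zero reports the server as up, anything else as down.
+ */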
+
+/*
+ * Establishes a server health-check that makes use of a process.
+ *
+ * It can return one of:
+ * - SF_ERR_NONE if everything's OK
+ * - SF_ERR_SRVTO if there are no more servers
+ * - SF_ERR_SRVCL if the connection was refused by the server
+ * - SF_ERR_PRXCOND if the connection has been limited by the proxy (maxconn)
+ * - SF_ERR_RESOURCE if a system resource is lacking (eg: fd limits, ports, ...)
+ * - SF_ERR_INTERNAL for any other purely internal errors
+ * Additionally, in the case of SF_ERR_RESOURCE, an emergency log will be emitted.
+ *
+ * Blocks and then unblocks SIGCHLD.
+ */
+static int connect_proc_chk(struct task *t)
+{
+ char buf[256];
+ struct check *check = t->context;
+ struct server *s = check->server;
+ struct proxy *px = s->proxy;
+ int status;
+ pid_t pid;
+
+ status = SF_ERR_RESOURCE;
+
+ block_sigchld();
+
+ pid = fork();
+ if (pid < 0) {
+ Alert("Failed to fork process for external health check: %s. Aborting.\n",
+ strerror(errno));
+ set_server_check_status(check, HCHK_STATUS_SOCKERR, strerror(errno));
+ goto out;
+ }
+ if (pid == 0) {
+ /* Child */
+ extern char **environ;
+ environ = check->envp;
+ extchk_setenv(check, EXTCHK_HAPROXY_SERVER_CURCONN, ultoa_r(s->cur_sess, buf, sizeof(buf)));
+ execvp(px->check_command, check->argv);
+ Alert("Failed to exec process for external health check: %s. Aborting.\n",
+ strerror(errno));
+ exit(-1);
+ }
+
+ /* Parent */
+ if (check->result == CHK_RES_UNKNOWN) {
+ if (pid_list_add(pid, t) != NULL) {
+ t->expire = tick_add(now_ms, MS_TO_TICKS(check->inter));
+
+ if (px->timeout.check && px->timeout.connect) {
+ int t_con = tick_add(now_ms, px->timeout.connect);
+ t->expire = tick_first(t->expire, t_con);
+ }
+ status = SF_ERR_NONE;
+ goto out;
+ }
+ else {
+ set_server_check_status(check, HCHK_STATUS_SOCKERR, strerror(errno));
+ }
+ kill(pid, SIGTERM); /* process creation error */
+ }
+ else
+ set_server_check_status(check, HCHK_STATUS_SOCKERR, strerror(errno));
+
+out:
+ unblock_sigchld();
+ return status;
+}
+
+/*
+ * manages a server health-check that uses a process. Returns
+ * the time the task accepts to wait, or TIME_ETERNITY for infinity.
+ */
+static struct task *process_chk_proc(struct task *t)
+{
+ struct check *check = t->context;
+ struct server *s = check->server;
+ struct connection *conn = check->conn;
+ int rv;
+ int ret;
+ int expired = tick_is_expired(t->expire, now_ms);
+
+ if (!(check->state & CHK_ST_INPROGRESS)) {
+ /* no check currently running */
+ if (!expired) /* woke up too early */
+ return t;
+
+ /* we don't send any health-checks when the proxy is
+ * stopped, when the server should not be checked, or when
+ * the check is disabled.
+ */
+ if (((check->state & (CHK_ST_ENABLED | CHK_ST_PAUSED)) != CHK_ST_ENABLED) ||
+ s->proxy->state == PR_STSTOPPED)
+ goto reschedule;
+
+ /* we'll initiate a new check */
+ set_server_check_status(check, HCHK_STATUS_START, NULL);
+
+ check->state |= CHK_ST_INPROGRESS;
+
+ ret = connect_proc_chk(t);
+ switch (ret) {
+ case SF_ERR_UP:
+ return t;
+ case SF_ERR_NONE:
+ /* we allow up to min(inter, timeout.connect) for a connection
+ * to establish but only when timeout.check is set
+ * as it may be too short for a full check otherwise
+ */
+ t->expire = tick_add(now_ms, MS_TO_TICKS(check->inter));
+
+ if (s->proxy->timeout.check && s->proxy->timeout.connect) {
+ int t_con = tick_add(now_ms, s->proxy->timeout.connect);
+ t->expire = tick_first(t->expire, t_con);
+ }
+
+ goto reschedule;
+
+ case SF_ERR_SRVTO: /* ETIMEDOUT */
+ case SF_ERR_SRVCL: /* ECONNREFUSED, ENETUNREACH, ... */
+ conn->flags |= CO_FL_ERROR;
+ chk_report_conn_err(conn, errno, 0);
+ break;
+ case SF_ERR_PRXCOND:
+ case SF_ERR_RESOURCE:
+ case SF_ERR_INTERNAL:
+ conn->flags |= CO_FL_ERROR;
+ chk_report_conn_err(conn, 0, 0);
+ break;
+ }
+
+ /* here, we have seen a synchronous error, no fd was allocated */
+
+ check->state &= ~CHK_ST_INPROGRESS;
+ check_notify_failure(check);
+
+ /* we allow up to min(inter, timeout.connect) for a connection
+ * to establish but only when timeout.check is set
+ * as it may be too short for a full check otherwise
+ */
+ while (tick_is_expired(t->expire, now_ms)) {
+ int t_con;
+
+ t_con = tick_add(t->expire, s->proxy->timeout.connect);
+ t->expire = tick_add(t->expire, MS_TO_TICKS(check->inter));
+
+ if (s->proxy->timeout.check)
+ t->expire = tick_first(t->expire, t_con);
+ }
+ }
+ else {
+ /* there was a test running.
+ * First, let's check whether there was an uncaught error,
+ * which can happen on connect timeout or error.
+ */
+ if (check->result == CHK_RES_UNKNOWN) {
+ /* the exit status of the external check process determines the result */
+ struct pid_list *elem = check->curpid;
+ int status = HCHK_STATUS_UNKNOWN;
+
+ if (elem->exited) {
+ status = elem->status; /* save now: the pid_list entry may change before the tests below */
+ if (!WIFEXITED(status))
+ check->code = -1;
+ else
+ check->code = WEXITSTATUS(status);
+ if (!WIFEXITED(status) || WEXITSTATUS(status))
+ status = HCHK_STATUS_PROCERR;
+ else
+ status = HCHK_STATUS_PROCOK;
+ } else if (expired) {
+ status = HCHK_STATUS_PROCTOUT;
+ Warning("External check process %d timed out, sending SIGTERM.\n", (int)elem->pid);
+ kill(elem->pid, SIGTERM);
+ }
+ set_server_check_status(check, status, NULL);
+ }
+
+ if (check->result == CHK_RES_FAILED) {
+ /* a failure or timeout detected */
+ check_notify_failure(check);
+ }
+ else if (check->result == CHK_RES_CONDPASS) {
+ /* check is OK but asks for stopping mode */
+ check_notify_stopping(check);
+ }
+ else if (check->result == CHK_RES_PASSED) {
+ /* a success was detected */
+ check_notify_success(check);
+ }
+ check->state &= ~CHK_ST_INPROGRESS;
+
+ pid_list_del(check->curpid);
+
+ rv = 0;
+ if (global.spread_checks > 0) {
+ rv = srv_getinter(check) * global.spread_checks / 100;
+ rv -= (int) (2 * rv * (rand() / (RAND_MAX + 1.0)));
+ }
+ t->expire = tick_add(now_ms, MS_TO_TICKS(srv_getinter(check) + rv));
+ }
+
+ reschedule:
+ while (tick_is_expired(t->expire, now_ms))
+ t->expire = tick_add(t->expire, MS_TO_TICKS(check->inter));
+ return t;
+}
+
+/*
+ * manages a server health-check that uses a connection. Returns
+ * the time the task accepts to wait, or TIME_ETERNITY for infinity.
+ */
+static struct task *process_chk_conn(struct task *t)
+{
+ struct check *check = t->context;
+ struct server *s = check->server;
+ struct connection *conn = check->conn;
+ int rv;
+ int ret;
+ int expired = tick_is_expired(t->expire, now_ms);
+
+ if (!(check->state & CHK_ST_INPROGRESS)) {
+ /* no check currently running */
+ if (!expired) /* woke up too early */
+ return t;
+
+ /* we don't send any health-checks when the proxy is
+ * stopped, when the server should not be checked, or when
+ * the check is disabled.
+ */
+ if (((check->state & (CHK_ST_ENABLED | CHK_ST_PAUSED)) != CHK_ST_ENABLED) ||
+ s->proxy->state == PR_STSTOPPED)
+ goto reschedule;
+
+ /* we'll initiate a new check */
+ set_server_check_status(check, HCHK_STATUS_START, NULL);
+
+ check->state |= CHK_ST_INPROGRESS;
+ check->bi->p = check->bi->data;
+ check->bi->i = 0;
+ check->bo->p = check->bo->data;
+ check->bo->o = 0;
+
+ ret = connect_conn_chk(t);
+ switch (ret) {
+ case SF_ERR_UP:
+ return t;
+ case SF_ERR_NONE:
+ /* we allow up to min(inter, timeout.connect) for a connection
+ * to establish but only when timeout.check is set
+ * as it may be too short for a full check otherwise
+ */
+ t->expire = tick_add(now_ms, MS_TO_TICKS(check->inter));
+
+ if (s->proxy->timeout.check && s->proxy->timeout.connect) {
+ int t_con = tick_add(now_ms, s->proxy->timeout.connect);
+ t->expire = tick_first(t->expire, t_con);
+ }
+
+ if (check->type)
+ conn_data_want_recv(conn); /* prepare for reading a possible reply */
+
+ goto reschedule;
+
+ case SF_ERR_SRVTO: /* ETIMEDOUT */
+ case SF_ERR_SRVCL: /* ECONNREFUSED, ENETUNREACH, ... */
+ conn->flags |= CO_FL_ERROR;
+ chk_report_conn_err(conn, errno, 0);
+ break;
+ case SF_ERR_PRXCOND:
+ case SF_ERR_RESOURCE:
+ case SF_ERR_INTERNAL:
+ conn->flags |= CO_FL_ERROR;
+ chk_report_conn_err(conn, 0, 0);
+ break;
+ }
+
+ /* here, we have seen a synchronous error, no fd was allocated */
+
+ check->state &= ~CHK_ST_INPROGRESS;
+ check_notify_failure(check);
+
+ /* we allow up to min(inter, timeout.connect) for a connection
+ * to establish but only when timeout.check is set
+ * as it may be too short for a full check otherwise
+ */
+ while (tick_is_expired(t->expire, now_ms)) {
+ int t_con;
+
+ t_con = tick_add(t->expire, s->proxy->timeout.connect);
+ t->expire = tick_add(t->expire, MS_TO_TICKS(check->inter));
+
+ if (s->proxy->timeout.check)
+ t->expire = tick_first(t->expire, t_con);
+ }
+ }
+ else {
+ /* there was a test running.
+ * First, let's check whether there was an uncaught error,
+ * which can happen on connect timeout or error.
+ */
+ if (check->result == CHK_RES_UNKNOWN) {
+ /* good connection is enough for pure TCP check */
+ if ((conn->flags & CO_FL_CONNECTED) && !check->type) {
+ if (check->use_ssl)
+ set_server_check_status(check, HCHK_STATUS_L6OK, NULL);
+ else
+ set_server_check_status(check, HCHK_STATUS_L4OK, NULL);
+ }
+ else if ((conn->flags & CO_FL_ERROR) || expired) {
+ chk_report_conn_err(conn, 0, expired);
+ }
+ else
+ goto out_wait; /* timeout not reached, wait again */
+ }
+
+ /* check complete or aborted */
+ if (conn->xprt) {
+ /* The check was aborted and the connection was not yet closed.
+ * This can happen upon timeout, or when an external event such
+ * as a failed response coupled with "observe layer7" caused the
+ * server state to be suddenly changed.
+ */
+ conn_sock_drain(conn);
+ conn_force_close(conn);
+ }
+
+ if (check->result == CHK_RES_FAILED) {
+ /* a failure or timeout detected */
+ check_notify_failure(check);
+ }
+ else if (check->result == CHK_RES_CONDPASS) {
+ /* check is OK but asks for stopping mode */
+ check_notify_stopping(check);
+ }
+ else if (check->result == CHK_RES_PASSED) {
+ /* a success was detected */
+ check_notify_success(check);
+ }
+ check->state &= ~CHK_ST_INPROGRESS;
+
+ rv = 0;
+ if (global.spread_checks > 0) {
+ rv = srv_getinter(check) * global.spread_checks / 100;
+ rv -= (int) (2 * rv * (rand() / (RAND_MAX + 1.0)));
+ }
+ t->expire = tick_add(now_ms, MS_TO_TICKS(srv_getinter(check) + rv));
+ }
+
+ reschedule:
+ while (tick_is_expired(t->expire, now_ms))
+ t->expire = tick_add(t->expire, MS_TO_TICKS(check->inter));
+ out_wait:
+ return t;
+}
+
+/*
+ * manages a server health-check. Returns
+ * the time the task accepts to wait, or TIME_ETERNITY for infinity.
+ */
+static struct task *process_chk(struct task *t)
+{
+ struct check *check = t->context;
+ struct server *s = check->server;
+ struct dns_resolution *resolution = s->resolution;
+
+ /* trigger name resolution */
+ if ((s->check.state & CHK_ST_ENABLED) && (resolution)) {
+ /* check that no resolution is currently running for this server */
+ if (resolution->step == RSLV_STEP_NONE) {
+ /*
+ * if no name resolution has been performed for longer than
+ * hold.valid, let's trigger a new one.
+ */
+ if (!resolution->last_resolution || tick_is_expired(tick_add(resolution->last_resolution, resolution->resolvers->hold.valid), now_ms)) {
+ trigger_resolution(s);
+ }
+ }
+ }
+
+ if (check->type == PR_O2_EXT_CHK)
+ return process_chk_proc(t);
+ return process_chk_conn(t);
+
+}
+
+/*
+ * Initiates a new name resolution:
+ * - generates a query id
+ * - configures the resolution structure
+ * - starts the resolvers task if required
+ *
+ * returns:
+ * - 0 in case of error or if resolution already running
+ * - 1 if everything started properly
+ */
+int trigger_resolution(struct server *s)
+{
+ struct dns_resolution *resolution;
+ struct dns_resolvers *resolvers;
+ int query_id;
+ int i;
+
+ resolution = s->resolution;
+ resolvers = resolution->resolvers;
+
+ /*
+ * if a resolution has already been started for this server,
+ * return directly to avoid a resolution pile-up
+ */
+ if (resolution->step != RSLV_STEP_NONE)
+ return 0;
+
+ /* generates a query id */
+ i = 0;
+ do {
+ query_id = dns_rnd16();
+ /* we only try 100 times to find a free query id */
+ if (i++ > 100) {
+ chunk_printf(&trash, "could not generate a query id for %s/%s, in resolvers %s",
+ s->proxy->id, s->id, resolvers->id);
+
+ send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
+ return 0;
+ }
+ } while (eb32_lookup(&resolvers->query_ids, query_id));
+
+ LIST_ADDQ(&resolvers->curr_resolution, &resolution->list);
+
+ /* now update resolution parameters */
+ resolution->query_id = query_id;
+ resolution->qid.key = query_id;
+ resolution->step = RSLV_STEP_RUNNING;
+ resolution->resolver_family_priority = s->resolver_family_priority;
+ if (resolution->resolver_family_priority == AF_INET) {
+ resolution->query_type = DNS_RTYPE_A;
+ } else {
+ resolution->query_type = DNS_RTYPE_AAAA;
+ }
+ resolution->try = resolvers->resolve_retries;
+ resolution->try_cname = 0;
+ resolution->nb_responses = 0;
+ eb32_insert(&resolvers->query_ids, &resolution->qid);
+
+ dns_send_query(resolution);
+ resolution->try -= 1;
+
+ /* update wakeup date if this resolution is the only one in the FIFO list */
+ if (dns_check_resolution_queue(resolvers) == 1) {
+ /* update task timeout */
+ dns_update_resolvers_timeout(resolvers);
+ task_queue(resolvers->t);
+ }
+
+ return 1;
+}
+
+static int start_check_task(struct check *check, int mininter,
+ int nbcheck, int srvpos)
+{
+ struct task *t;
+ /* task for the check */
+ if ((t = task_new()) == NULL) {
+ Alert("Starting [%s:%s] check: out of memory.\n",
+ check->server->proxy->id, check->server->id);
+ return 0;
+ }
+
+ check->task = t;
+ t->process = process_chk;
+ t->context = check;
+
+ if (mininter < srv_getinter(check))
+ mininter = srv_getinter(check);
+
+ if (global.max_spread_checks && mininter > global.max_spread_checks)
+ mininter = global.max_spread_checks;
+
+ /* spread the initial checks across the mininter window */
+ t->expire = tick_add(now_ms, MS_TO_TICKS(mininter * srvpos / nbcheck));
+ check->start = now;
+ task_queue(t);
+
+ return 1;
+}
+
+/*
+ * Start health-check.
+ * Returns 0 if OK, -1 if error, and prints the error in this case.
+ */
+int start_checks() {
+
+ struct proxy *px;
+ struct server *s;
+ struct task *t;
+ int nbcheck=0, mininter=0, srvpos=0;
+
+ /* 1- count the checkers to run simultaneously.
+ * We also determine the minimum interval among all of those which
+ * have an interval larger than SRV_CHK_INTER_THRES. This interval
+ * will be used to spread their start-up date. Those which have
+ * a shorter interval will start independently and will not dictate
+ * too short an interval for all others.
+ */
+ for (px = proxy; px; px = px->next) {
+ for (s = px->srv; s; s = s->next) {
+ if (s->slowstart) {
+ if ((t = task_new()) == NULL) {
+ Alert("Starting [%s:%s] check: out of memory.\n", px->id, s->id);
+ return -1;
+ }
+ /* We need a warmup task that will be called when the server
+ * state switches from down to up.
+ */
+ s->warmup = t;
+ t->process = server_warmup;
+ t->context = s;
+ t->expire = TICK_ETERNITY;
+ /* if the server is still starting, schedule its warmup right away */
+ if (s->state == SRV_ST_STARTING)
+ task_schedule(s->warmup, tick_add(now_ms, MS_TO_TICKS(MAX(1000, (now.tv_sec - s->last_change)) / 20)));
+ }
+
+ if (s->check.state & CHK_ST_CONFIGURED) {
+ nbcheck++;
+ if ((srv_getinter(&s->check) >= SRV_CHK_INTER_THRES) &&
+ (!mininter || mininter > srv_getinter(&s->check)))
+ mininter = srv_getinter(&s->check);
+ }
+
+ if (s->agent.state & CHK_ST_CONFIGURED) {
+ nbcheck++;
+ if ((srv_getinter(&s->agent) >= SRV_CHK_INTER_THRES) &&
+ (!mininter || mininter > srv_getinter(&s->agent)))
+ mininter = srv_getinter(&s->agent);
+ }
+ }
+ }
+
+ if (!nbcheck)
+ return 0;
+
+ srand((unsigned)time(NULL));
+
+ /*
+ * 2- start them as far apart from each other as possible. For this, each
+ * check is delayed by the min interval multiplied by its position in
+ * the list and divided by the total number of checks.
+ */
+ for (px = proxy; px; px = px->next) {
+ if ((px->options2 & PR_O2_CHK_ANY) == PR_O2_EXT_CHK) {
+ if (init_pid_list()) {
+ Alert("Starting [%s] check: out of memory.\n", px->id);
+ return -1;
+ }
+ }
+
+ for (s = px->srv; s; s = s->next) {
+ /* A task for the main check */
+ if (s->check.state & CHK_ST_CONFIGURED) {
+ if (s->check.type == PR_O2_EXT_CHK) {
+ if (!prepare_external_check(&s->check))
+ return -1;
+ }
+ if (!start_check_task(&s->check, mininter, nbcheck, srvpos))
+ return -1;
+ srvpos++;
+ }
+
+ /* A task for an auxiliary agent check */
+ if (s->agent.state & CHK_ST_CONFIGURED) {
+ if (!start_check_task(&s->agent, mininter, nbcheck, srvpos)) {
+ return -1;
+ }
+ srvpos++;
+ }
+ }
+ }
+ return 0;
+}
+
+/*
+ * Perform content verification check on data in s->check.buffer buffer.
+ * The buffer MUST be terminated by a null byte before calling this function.
+ * Sets server status appropriately. The caller is responsible for ensuring
+ * that the buffer contains at least 13 characters. If <done> is zero, we may
+ * return 0 to indicate that more data is required to decide on a match.
+ */
+static int httpchk_expect(struct server *s, int done)
+{
+ static char status_msg[] = "HTTP status check returned code <000>";
+ char status_code[] = "000";
+ char *contentptr;
+ int crlf;
+ int ret;
+
+ switch (s->proxy->options2 & PR_O2_EXP_TYPE) {
+ case PR_O2_EXP_STS:
+ case PR_O2_EXP_RSTS:
+ memcpy(status_code, s->check.bi->data + 9, 3);
+ memcpy(status_msg + strlen(status_msg) - 4, s->check.bi->data + 9, 3);
+
+ if ((s->proxy->options2 & PR_O2_EXP_TYPE) == PR_O2_EXP_STS)
+ ret = strncmp(s->proxy->expect_str, status_code, 3) == 0;
+ else
+ ret = regex_exec(s->proxy->expect_regex, status_code);
+
+ /* we necessarily have the response, so there are no partial failures */
+ if (s->proxy->options2 & PR_O2_EXP_INV)
+ ret = !ret;
+
+ set_server_check_status(&s->check, ret ? HCHK_STATUS_L7OKD : HCHK_STATUS_L7STS, status_msg);
+ break;
+
+ case PR_O2_EXP_STR:
+ case PR_O2_EXP_RSTR:
+ /* very simple response parser: ignore CR and only count consecutive LFs,
+ * stop with contentptr pointing to first char after the double CRLF or
+ * to '\0' if crlf < 2.
+ */
+ crlf = 0;
+ for (contentptr = s->check.bi->data; *contentptr; contentptr++) {
+ if (crlf >= 2)
+ break;
+ if (*contentptr == '\r')
+ continue;
+ else if (*contentptr == '\n')
+ crlf++;
+ else
+ crlf = 0;
+ }
+
+ /* Check that response contains a body... */
+ if (crlf < 2) {
+ if (!done)
+ return 0;
+
+ set_server_check_status(&s->check, HCHK_STATUS_L7RSP,
+ "HTTP content check could not find a response body");
+ return 1;
+ }
+
+ /* Check that response body is not empty... */
+ if (*contentptr == '\0') {
+ if (!done)
+ return 0;
+
+ set_server_check_status(&s->check, HCHK_STATUS_L7RSP,
+ "HTTP content check found empty response body");
+ return 1;
+ }
+
+ /* Check the response content against the supplied string
+ * or regex... */
+ if ((s->proxy->options2 & PR_O2_EXP_TYPE) == PR_O2_EXP_STR)
+ ret = strstr(contentptr, s->proxy->expect_str) != NULL;
+ else
+ ret = regex_exec(s->proxy->expect_regex, contentptr);
+
+ /* if we don't match, we may need to wait more */
+ if (!ret && !done)
+ return 0;
+
+ if (ret) {
+ /* content matched */
+ if (s->proxy->options2 & PR_O2_EXP_INV)
+ set_server_check_status(&s->check, HCHK_STATUS_L7RSP,
+ "HTTP check matched unwanted content");
+ else
+ set_server_check_status(&s->check, HCHK_STATUS_L7OKD,
+ "HTTP content check matched");
+ }
+ else {
+ if (s->proxy->options2 & PR_O2_EXP_INV)
+ set_server_check_status(&s->check, HCHK_STATUS_L7OKD,
+ "HTTP check did not match unwanted content");
+ else
+ set_server_check_status(&s->check, HCHK_STATUS_L7RSP,
+ "HTTP content check did not match");
+ }
+ break;
+ }
+ return 1;
+}
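+
+/*
+ * Illustrative usage (not part of this file): the expect modes handled
+ * above map to "http-check expect" configurations such as:
+ *
+ *   option httpchk GET /health
+ *   http-check expect status 200           <- PR_O2_EXP_STS
+ *   http-check expect rstatus ^2[0-9][0-9] <- PR_O2_EXP_RSTS
+ *   http-check expect string OK            <- PR_O2_EXP_STR
+ *   http-check expect ! string error       <- PR_O2_EXP_STR + PR_O2_EXP_INV
+ */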
+
+/*
+ * return the id of a step in a send/expect session
+ */
+static int tcpcheck_get_step_id(struct check *check)
+{
+ struct tcpcheck_rule *cur = NULL, *next = NULL;
+ int i = 0;
+
+ /* not even started anything yet => step 0 = initial connect */
+ if (!check->current_step)
+ return 0;
+
+ cur = check->last_started_step;
+
+ /* no step => first step */
+ if (cur == NULL)
+ return 1;
+
+ /* increment i until current step */
+ list_for_each_entry(next, check->tcpcheck_rules, list) {
+ if (next->list.p == &cur->list)
+ break;
+ ++i;
+ }
+
+ return i;
+}
+
+/*
+ * returns the latest known comment before (and including) the given stepid,
+ * or NULL if no comment was found
+ */
+static char * tcpcheck_get_step_comment(struct check *check, int stepid)
+{
+ struct tcpcheck_rule *cur = NULL;
+ char *ret = NULL;
+ int i = 0;
+
+ /* not even started anything yet, return latest comment found before any action */
+ if (!check->current_step) {
+ list_for_each_entry(cur, check->tcpcheck_rules, list) {
+ if (cur->action == TCPCHK_ACT_COMMENT)
+ ret = cur->comment;
+ else
+ goto return_comment;
+ }
+ }
+
+ i = 1;
+ list_for_each_entry(cur, check->tcpcheck_rules, list) {
+ if (cur->comment)
+ ret = cur->comment;
+
+ if (i >= stepid)
+ goto return_comment;
+
+ ++i;
+ }
+
+ return_comment:
+ return ret;
+}
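+
+/*
+ * Illustrative usage (not part of this file): the comments looked up above
+ * come from "tcp-check comment" rules in a rule set such as:
+ *
+ *   option tcp-check
+ *   tcp-check comment "redis ping"
+ *   tcp-check connect port 6379
+ *   tcp-check send PING\r\n
+ *   tcp-check expect string +PONG
+ *
+ * When a step fails, the latest comment seen at or before that step is
+ * appended to the reported error.
+ */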
+
+static void tcpcheck_main(struct connection *conn)
+{
+ char *contentptr, *comment;
+ struct tcpcheck_rule *next;
+ int done = 0, ret = 0, step = 0;
+ struct check *check = conn->owner;
+ struct server *s = check->server;
+ struct task *t = check->task;
+ struct list *head = check->tcpcheck_rules;
+
+ /* here, we know that the check is complete or that it failed */
+ if (check->result != CHK_RES_UNKNOWN)
+ goto out_end_tcpcheck;
+
+ /* We have 4 possibilities here:
+ * 1. we've not yet attempted step 1, and step 1 is a connect, so no
+ * connection attempt was made yet;
+ * 2. we've not yet attempted step 1, and step 1 is not a connect or
+ * does not exist (no rule), so a connection attempt was made
+ * before coming here;
+ * 3. we're coming back after having started with step 1, so we may
+ * be waiting for a connection attempt to complete;
+ * 4. the connection + handshake are complete.
+ *
+ * #2 and #3 are quite similar: we want both the connection and the
+ * handshake to complete before going any further. Thus we must always
+ * wait for a connection to complete unless we're before an existing
+ * step 1.
+ */
+
+ /* find first rule and skip comments */
+ next = LIST_NEXT(head, struct tcpcheck_rule *, list);
+ while (&next->list != head && next->action == TCPCHK_ACT_COMMENT)
+ next = LIST_NEXT(&next->list, struct tcpcheck_rule *, list);
+
+ if ((!(conn->flags & CO_FL_CONNECTED) || (conn->flags & CO_FL_HANDSHAKE)) &&
+ (check->current_step || &next->list == head)) {
+ /* we allow up to min(inter, timeout.connect) for a connection
+ * to establish but only when timeout.check is set
+ * as it may be too short for a full check otherwise
+ */
+ while (tick_is_expired(t->expire, now_ms)) {
+ int t_con;
+
+ t_con = tick_add(t->expire, s->proxy->timeout.connect);
+ t->expire = tick_add(t->expire, MS_TO_TICKS(check->inter));
+
+ if (s->proxy->timeout.check)
+ t->expire = tick_first(t->expire, t_con);
+ }
+ return;
+ }
+
+ /* special case: option tcp-check with no rule, a connect is enough */
+ if (&next->list == head) {
+ set_server_check_status(check, HCHK_STATUS_L4OK, NULL);
+ goto out_end_tcpcheck;
+ }
+
+ /* no step means first step initialisation */
+ if (check->current_step == NULL) {
+ check->last_started_step = NULL;
+ check->bo->p = check->bo->data;
+ check->bo->o = 0;
+ check->bi->p = check->bi->data;
+ check->bi->i = 0;
+ check->current_step = next;
+ t->expire = tick_add(now_ms, MS_TO_TICKS(check->inter));
+ if (s->proxy->timeout.check)
+ t->expire = tick_add_ifset(now_ms, s->proxy->timeout.check);
+ }
+
+ /* It's only the rules which will enable send/recv */
+ __conn_data_stop_both(conn);
+
+ while (1) {
+ /* We have to try to flush the output buffer before reading, at
+ * the end, or if we're about to send a string that does not fit
+ * in the remaining space. That explains why we break out of the
+ * loop after this control.
+ */
+ if (check->bo->o &&
+ (&check->current_step->list == head ||
+ check->current_step->action != TCPCHK_ACT_SEND ||
+ check->current_step->string_len >= buffer_total_space(check->bo))) {
+
+ if (conn->xprt->snd_buf(conn, check->bo, 0) <= 0) {
+ if (conn->flags & CO_FL_ERROR) {
+ chk_report_conn_err(conn, errno, 0);
+ __conn_data_stop_both(conn);
+ goto out_end_tcpcheck;
+ }
+ break;
+ }
+ }
+
+ if (&check->current_step->list == head)
+ break;
+
+ /* have 'next' point to the next rule or NULL if we're on the
+ * last one, connect() needs this.
+ */
+ next = LIST_NEXT(&check->current_step->list, struct tcpcheck_rule *, list);
+
+ /* bypass all comment rules */
+ while (&next->list != head && next->action == TCPCHK_ACT_COMMENT)
+ next = LIST_NEXT(&next->list, struct tcpcheck_rule *, list);
+
+ /* NULL if we're on the last rule */
+ if (&next->list == head)
+ next = NULL;
+
+ if (check->current_step->action == TCPCHK_ACT_CONNECT) {
+ struct protocol *proto;
+ struct xprt_ops *xprt;
+
+ /* mark the step as started */
+ check->last_started_step = check->current_step;
+ /* first, shut existing connection */
+ conn_force_close(conn);
+
+ /* prepare new connection */
+ /* initialization */
+ conn_init(conn);
+ conn_attach(conn, check, &check_conn_cb);
+ conn->target = &s->obj_type;
+
+ /* no client address */
+ clear_addr(&conn->addr.from);
+
+ if (is_addr(&check->addr)) {
+ /* we'll connect to the check addr specified on the server */
+ conn->addr.to = check->addr;
+ }
+ else {
+ /* we'll connect to the addr on the server */
+ conn->addr.to = s->addr;
+ }
+ proto = protocol_by_family(conn->addr.to.ss_family);
+
+ /* port */
+ if (check->current_step->port)
+ set_host_port(&conn->addr.to, check->current_step->port);
+ else if (check->port)
+ set_host_port(&conn->addr.to, check->port);
+
+#ifdef USE_OPENSSL
+ if (check->current_step->conn_opts & TCPCHK_OPT_SSL) {
+ xprt = &ssl_sock;
+ }
+ else {
+ xprt = &raw_sock;
+ }
+#else /* USE_OPENSSL */
+ xprt = &raw_sock;
+#endif /* USE_OPENSSL */
+ conn_prepare(conn, proto, xprt);
+
+ ret = SF_ERR_INTERNAL;
+ if (proto->connect)
+ ret = proto->connect(conn,
+ 1 /* I/O polling is always needed */,
+ (next && next->action == TCPCHK_ACT_EXPECT) ? 0 : 2);
+ conn->flags |= CO_FL_WAKE_DATA;
+ if (check->current_step->conn_opts & TCPCHK_OPT_SEND_PROXY) {
+ conn->send_proxy_ofs = 1;
+ conn->flags |= CO_FL_SEND_PROXY;
+ }
+
+ /* It can return one of :
+ * - SF_ERR_NONE if everything's OK
+ * - SF_ERR_SRVTO if there are no more servers
+ * - SF_ERR_SRVCL if the connection was refused by the server
+ * - SF_ERR_PRXCOND if the connection has been limited by the proxy (maxconn)
+ * - SF_ERR_RESOURCE if a system resource is lacking (eg: fd limits, ports, ...)
+ * - SF_ERR_INTERNAL for any other purely internal errors
+ * Additionally, in the case of SF_ERR_RESOURCE, an emergency log will be emitted.
+ * Note that we try to prevent the network stack from sending the ACK during the
+ * connect() when a pure TCP check is used (without PROXY protocol).
+ */
+ switch (ret) {
+ case SF_ERR_NONE:
+ /* we allow up to min(inter, timeout.connect) for a connection
+ * to establish but only when timeout.check is set
+ * as it may be too short for a full check otherwise
+ */
+ t->expire = tick_add(now_ms, MS_TO_TICKS(check->inter));
+
+ if (s->proxy->timeout.check && s->proxy->timeout.connect) {
+ int t_con = tick_add(now_ms, s->proxy->timeout.connect);
+ t->expire = tick_first(t->expire, t_con);
+ }
+ break;
+ case SF_ERR_SRVTO: /* ETIMEDOUT */
+ case SF_ERR_SRVCL: /* ECONNREFUSED, ENETUNREACH, ... */
+ step = tcpcheck_get_step_id(check);
+ chunk_printf(&trash, "TCPCHK error establishing connection at step %d: %s",
+ step, strerror(errno));
+ comment = tcpcheck_get_step_comment(check, step);
+ if (comment)
+ chunk_appendf(&trash, " comment: '%s'", comment);
+ set_server_check_status(check, HCHK_STATUS_L4CON, trash.str);
+ goto out_end_tcpcheck;
+ case SF_ERR_PRXCOND:
+ case SF_ERR_RESOURCE:
+ case SF_ERR_INTERNAL:
+ step = tcpcheck_get_step_id(check);
+ chunk_printf(&trash, "TCPCHK error establishing connection at step %d", step);
+ comment = tcpcheck_get_step_comment(check, step);
+ if (comment)
+ chunk_appendf(&trash, " comment: '%s'", comment);
+ set_server_check_status(check, HCHK_STATUS_SOCKERR, trash.str);
+ goto out_end_tcpcheck;
+ }
+
+ /* allow next rule */
+ check->current_step = LIST_NEXT(&check->current_step->list, struct tcpcheck_rule *, list);
+
+ /* bypass all comment rules */
+ while (&check->current_step->list != head &&
+ check->current_step->action == TCPCHK_ACT_COMMENT)
+ check->current_step = LIST_NEXT(&check->current_step->list, struct tcpcheck_rule *, list);
+
+ if (&check->current_step->list == head)
+ break;
+
+ /* don't do anything until the connection is established */
+ if (!(conn->flags & CO_FL_CONNECTED)) {
+ /* update expire time, should be done by process_chk */
+ /* we allow up to min(inter, timeout.connect) for a connection
+ * to establish but only when timeout.check is set
+ * as it may be too short for a full check otherwise
+ */
+ while (tick_is_expired(t->expire, now_ms)) {
+ int t_con;
+
+ t_con = tick_add(t->expire, s->proxy->timeout.connect);
+ t->expire = tick_add(t->expire, MS_TO_TICKS(check->inter));
+
+ if (s->proxy->timeout.check)
+ t->expire = tick_first(t->expire, t_con);
+ }
+ return;
+ }
+
+ } /* end 'connect' */
+ else if (check->current_step->action == TCPCHK_ACT_SEND) {
+ /* mark the step as started */
+ check->last_started_step = check->current_step;
+
+ /* reset the read buffer */
+ if (*check->bi->data != '\0') {
+ *check->bi->data = '\0';
+ check->bi->i = 0;
+ }
+
+ if (conn->flags & (CO_FL_SOCK_WR_SH | CO_FL_DATA_WR_SH)) {
+ conn->flags |= CO_FL_ERROR;
+ chk_report_conn_err(conn, 0, 0);
+ goto out_end_tcpcheck;
+ }
+
+ if (check->current_step->string_len >= check->bo->size) {
+ chunk_printf(&trash, "tcp-check send : string too large (%d) for buffer size (%d) at step %d",
+ check->current_step->string_len, check->bo->size,
+ tcpcheck_get_step_id(check));
+ set_server_check_status(check, HCHK_STATUS_L7RSP, trash.str);
+ goto out_end_tcpcheck;
+ }
+
+ /* do not try to send if there is no space */
+ if (check->current_step->string_len >= buffer_total_space(check->bo))
+ continue;
+
+ bo_putblk(check->bo, check->current_step->string, check->current_step->string_len);
+ *check->bo->p = '\0'; /* to make gdb output easier to read */
+
+ /* go to next rule and try to send */
+ check->current_step = LIST_NEXT(&check->current_step->list, struct tcpcheck_rule *, list);
+
+ /* bypass all comment rules */
+ while (&check->current_step->list != head &&
+ check->current_step->action == TCPCHK_ACT_COMMENT)
+ check->current_step = LIST_NEXT(&check->current_step->list, struct tcpcheck_rule *, list);
+
+ if (&check->current_step->list == head)
+ break;
+ } /* end 'send' */
+ else if (check->current_step->action == TCPCHK_ACT_EXPECT) {
+ if (unlikely(check->result == CHK_RES_FAILED))
+ goto out_end_tcpcheck;
+
+ if (conn->xprt->rcv_buf(conn, check->bi, check->bi->size) <= 0) {
+ if (conn->flags & (CO_FL_ERROR | CO_FL_SOCK_RD_SH | CO_FL_DATA_RD_SH)) {
+ done = 1;
+ if ((conn->flags & CO_FL_ERROR) && !check->bi->i) {
+ /* Report network errors only if we got no other data. Otherwise
+ * we'll let the upper layers decide whether the response is OK
+ * or not. It is very common that an RST sent by the server is
+ * reported as an error just after the last data chunk.
+ */
+ chk_report_conn_err(conn, errno, 0);
+ goto out_end_tcpcheck;
+ }
+ }
+ else
+ break;
+ }
+
+ /* mark the step as started */
+ check->last_started_step = check->current_step;
+
+ /* Intermediate or complete response received.
+ * Terminate string in check->bi->data buffer.
+ */
+ if (check->bi->i < check->bi->size) {
+ check->bi->data[check->bi->i] = '\0';
+ }
+ else {
+ check->bi->data[check->bi->i - 1] = '\0';
+ done = 1; /* buffer full, don't wait for more data */
+ }
+
+ contentptr = check->bi->data;
+
+ /* Check that response body is not empty... */
+ if (!check->bi->i) {
+ if (!done)
+ continue;
+
+ /* empty response */
+ step = tcpcheck_get_step_id(check);
+ chunk_printf(&trash, "TCPCHK got an empty response at step %d", step);
+ comment = tcpcheck_get_step_comment(check, step);
+ if (comment)
+ chunk_appendf(&trash, " comment: '%s'", comment);
+ set_server_check_status(check, HCHK_STATUS_L7RSP, trash.str);
+
+ goto out_end_tcpcheck;
+ }
+
+ if (!done && (check->current_step->string != NULL) && (check->bi->i < check->current_step->string_len) )
+ continue; /* try to read more */
+
+ tcpcheck_expect:
+ if (check->current_step->string != NULL)
+ ret = my_memmem(contentptr, check->bi->i, check->current_step->string, check->current_step->string_len) != NULL;
+ else if (check->current_step->expect_regex != NULL)
+ ret = regex_exec(check->current_step->expect_regex, contentptr);
+
+ if (!ret && !done)
+ continue; /* try to read more */
+
+ /* matched */
+ step = tcpcheck_get_step_id(check);
+ if (ret) {
+ /* matched but we did not want to => ERROR */
+ if (check->current_step->inverse) {
+ /* we were looking for a string */
+ if (check->current_step->string != NULL) {
+ chunk_printf(&trash, "TCPCHK matched unwanted content '%s' at step %d",
+ check->current_step->string, step);
+ }
+ else {
+ /* we were looking for a regex */
+ chunk_printf(&trash, "TCPCHK matched unwanted content (regex) at step %d", step);
+ }
+ comment = tcpcheck_get_step_comment(check, step);
+ if (comment)
+ chunk_appendf(&trash, " comment: '%s'", comment);
+ set_server_check_status(check, HCHK_STATUS_L7RSP, trash.str);
+ goto out_end_tcpcheck;
+ }
+ /* matched and was supposed to => OK, next step */
+ else {
+ /* allow next rule */
+ check->current_step = LIST_NEXT(&check->current_step->list, struct tcpcheck_rule *, list);
+
+ /* bypass all comment rules */
+ while (&check->current_step->list != head &&
+ check->current_step->action == TCPCHK_ACT_COMMENT)
+ check->current_step = LIST_NEXT(&check->current_step->list, struct tcpcheck_rule *, list);
+
+ if (&check->current_step->list == head)
+ break;
+
+ if (check->current_step->action == TCPCHK_ACT_EXPECT)
+ goto tcpcheck_expect;
+ __conn_data_stop_recv(conn);
+ }
+ }
+ else {
+ /* not matched */
+ /* not matched and was not supposed to => OK, next step */
+ if (check->current_step->inverse) {
+ /* allow next rule */
+ check->current_step = LIST_NEXT(&check->current_step->list, struct tcpcheck_rule *, list);
+
+ /* bypass all comment rules */
+ while (&check->current_step->list != head &&
+ check->current_step->action == TCPCHK_ACT_COMMENT)
+ check->current_step = LIST_NEXT(&check->current_step->list, struct tcpcheck_rule *, list);
+
+ if (&check->current_step->list == head)
+ break;
+
+ if (check->current_step->action == TCPCHK_ACT_EXPECT)
+ goto tcpcheck_expect;
+ __conn_data_stop_recv(conn);
+ }
+ /* not matched but was supposed to => ERROR */
+ else {
+ /* we were looking for a string */
+ if (check->current_step->string != NULL) {
+ chunk_printf(&trash, "TCPCHK did not match content '%s' at step %d",
+ check->current_step->string, step);
+ }
+ else {
+ /* we were looking for a regex */
+ chunk_printf(&trash, "TCPCHK did not match content (regex) at step %d",
+ step);
+ }
+ comment = tcpcheck_get_step_comment(check, step);
+ if (comment)
+ chunk_appendf(&trash, " comment: '%s'", comment);
+ set_server_check_status(check, HCHK_STATUS_L7RSP, trash.str);
+ goto out_end_tcpcheck;
+ }
+ }
+ } /* end expect */
+ } /* end loop over double chained step list */
+
+ /* We're waiting for some I/O to complete, we've reached the end of the
+ * rules, or both. Do what we have to do, otherwise we're done.
+ */
+ if (&check->current_step->list == head && !check->bo->o) {
+ set_server_check_status(check, HCHK_STATUS_L7OKD, "(tcp-check)");
+ goto out_end_tcpcheck;
+ }
+
+ /* warning, current_step may now point to the head */
+ if (check->bo->o)
+ __conn_data_want_send(conn);
+
+ if (&check->current_step->list != head &&
+ check->current_step->action == TCPCHK_ACT_EXPECT)
+ __conn_data_want_recv(conn);
+ return;
+
+ out_end_tcpcheck:
+ /* collect possible new errors */
+ if (conn->flags & CO_FL_ERROR)
+ chk_report_conn_err(conn, 0, 0);
+
+ /* cleanup before leaving */
+ check->current_step = NULL;
+
+ if (check->result == CHK_RES_FAILED)
+ conn->flags |= CO_FL_ERROR;
+
+ __conn_data_stop_both(conn);
+ return;
+}
+
+const char *init_check(struct check *check, int type)
+{
+ check->type = type;
+
+ /* Allocate the input buffer used to store responses... */
+ if ((check->bi = calloc(sizeof(struct buffer) + global.tune.chksize, sizeof(char))) == NULL) {
+ return "out of memory while allocating check buffer";
+ }
+ check->bi->size = global.tune.chksize;
+
+ /* Allocate the output buffer used to build requests... */
+ if ((check->bo = calloc(sizeof(struct buffer) + global.tune.chksize, sizeof(char))) == NULL) {
+ return "out of memory while allocating check buffer";
+ }
+ check->bo->size = global.tune.chksize;
+
+ /* Allocate the connection used by the check... */
+ if ((check->conn = calloc(1, sizeof(struct connection))) == NULL) {
+ return "out of memory while allocating check connection";
+ }
+
+ check->conn->t.sock.fd = -1; /* no agent in progress yet */
+
+ return NULL;
+}
+
+void free_check(struct check *check)
+{
+ free(check->bi);
+ free(check->bo);
+ free(check->conn);
+}
+
+void email_alert_free(struct email_alert *alert)
+{
+ struct tcpcheck_rule *rule, *back;
+
+ if (!alert)
+ return;
+
+ list_for_each_entry_safe(rule, back, &alert->tcpcheck_rules, list)
+ free(rule);
+ free(alert);
+}
+
+static struct task *process_email_alert(struct task *t)
+{
+ struct check *check = t->context;
+ struct email_alertq *q;
+
+ q = container_of(check, typeof(*q), check);
+
+ if (!(check->state & CHK_ST_ENABLED)) {
+ if (LIST_ISEMPTY(&q->email_alerts)) {
+ /* All alerts processed, delete check */
+ task_delete(t);
+ task_free(t);
+ check->task = NULL;
+ return NULL;
+ } else {
+ struct email_alert *alert;
+
+ alert = LIST_NEXT(&q->email_alerts, typeof(alert), list);
+ check->tcpcheck_rules = &alert->tcpcheck_rules;
+ LIST_DEL(&alert->list);
+
+ check->state |= CHK_ST_ENABLED;
+ }
+
+ }
+
+ process_chk(t);
+
+ if (!(check->state & CHK_ST_INPROGRESS) && check->tcpcheck_rules) {
+ struct email_alert *alert;
+
+ alert = container_of(check->tcpcheck_rules, typeof(*alert), tcpcheck_rules);
+ email_alert_free(alert);
+
+ check->tcpcheck_rules = NULL;
+ check->state &= ~CHK_ST_ENABLED;
+ }
+ return t;
+}
+
+static int init_email_alert_checks(struct server *s)
+{
+ int i;
+ struct mailer *mailer;
+ const char *err_str;
+ struct proxy *p = s->proxy;
+
+ if (p->email_alert.queues)
+ /* Already initialised, nothing to do */
+ return 1;
+
+ p->email_alert.queues = calloc(p->email_alert.mailers.m->count, sizeof *p->email_alert.queues);
+ if (!p->email_alert.queues) {
+ err_str = "out of memory while allocating checks array";
+ goto error_alert;
+ }
+
+ for (i = 0, mailer = p->email_alert.mailers.m->mailer_list;
+ i < p->email_alert.mailers.m->count; i++, mailer = mailer->next) {
+ struct email_alertq *q = &p->email_alert.queues[i];
+ struct check *check = &q->check;
+
+ LIST_INIT(&q->email_alerts);
+
+ check->inter = DEF_CHKINTR; /* XXX: Would like to skip to the next alert, if any, ASAP.
+ * But we need enough time so that timeouts don't occur
+ * during tcp check processing. For now just use an arbitrary default. */
+ check->rise = DEF_AGENT_RISETIME;
+ check->fall = DEF_AGENT_FALLTIME;
+ err_str = init_check(check, PR_O2_TCPCHK_CHK);
+ if (err_str) {
+ goto error_free;
+ }
+
+ check->xprt = mailer->xprt;
+ if (!get_host_port(&mailer->addr))
+ /* Default to submission port */
+ check->port = 587;
+ check->addr = mailer->addr;
+ check->server = s;
+ }
+
+ return 1;
+
+error_free:
+ while (i--)
+ task_free(p->email_alert.queues[i].check.task);
+ free(p->email_alert.queues);
+ p->email_alert.queues = NULL;
+error_alert:
+ Alert("Email alert [%s] could not be initialised: %s\n", p->id, err_str);
+ return 0;
+}
+
+
+static int add_tcpcheck_expect_str(struct list *list, const char *str)
+{
+ struct tcpcheck_rule *tcpcheck;
+
+ tcpcheck = calloc(1, sizeof *tcpcheck);
+ if (!tcpcheck)
+ return 0;
+
+ tcpcheck->action = TCPCHK_ACT_EXPECT;
+ tcpcheck->string = strdup(str);
+ if (!tcpcheck->string) {
+ free(tcpcheck);
+ return 0;
+ }
+
+ LIST_ADDQ(list, &tcpcheck->list);
+ return 1;
+}
+
+static int add_tcpcheck_send_strs(struct list *list, const char * const *strs)
+{
+ struct tcpcheck_rule *tcpcheck;
+ int i;
+
+ tcpcheck = calloc(1, sizeof *tcpcheck);
+ if (!tcpcheck)
+ return 0;
+
+ tcpcheck->action = TCPCHK_ACT_SEND;
+
+ tcpcheck->string_len = 0;
+ for (i = 0; strs[i]; i++)
+ tcpcheck->string_len += strlen(strs[i]);
+
+ tcpcheck->string = malloc(tcpcheck->string_len + 1);
+ if (!tcpcheck->string) {
+ free(tcpcheck);
+ return 0;
+ }
+ tcpcheck->string[0] = '\0';
+
+ for (i = 0; strs[i]; i++)
+ strcat(tcpcheck->string, strs[i]);
+
+ LIST_ADDQ(list, &tcpcheck->list);
+ return 1;
+}
+
+static int enqueue_one_email_alert(struct email_alertq *q, const char *msg)
+{
+ struct email_alert *alert = NULL;
+ struct tcpcheck_rule *tcpcheck;
+ struct check *check = &q->check;
+ struct proxy *p = check->server->proxy;
+
+ alert = calloc(1, sizeof *alert);
+ if (!alert) {
+ goto error;
+ }
+ LIST_INIT(&alert->tcpcheck_rules);
+
+ tcpcheck = calloc(1, sizeof *tcpcheck);
+ if (!tcpcheck)
+ goto error;
+ tcpcheck->action = TCPCHK_ACT_CONNECT;
+ LIST_ADDQ(&alert->tcpcheck_rules, &tcpcheck->list);
+
+ if (!add_tcpcheck_expect_str(&alert->tcpcheck_rules, "220 "))
+ goto error;
+
+ {
+ const char * const strs[4] = { "EHLO ", p->email_alert.myhostname, "\r\n" };
+ if (!add_tcpcheck_send_strs(&alert->tcpcheck_rules, strs))
+ goto error;
+ }
+
+ if (!add_tcpcheck_expect_str(&alert->tcpcheck_rules, "250 "))
+ goto error;
+
+ {
+ const char * const strs[4] = { "MAIL FROM:<", p->email_alert.from, ">\r\n" };
+ if (!add_tcpcheck_send_strs(&alert->tcpcheck_rules, strs))
+ goto error;
+ }
+
+ if (!add_tcpcheck_expect_str(&alert->tcpcheck_rules, "250 "))
+ goto error;
+
+ {
+ const char * const strs[4] = { "RCPT TO:<", p->email_alert.to, ">\r\n" };
+ if (!add_tcpcheck_send_strs(&alert->tcpcheck_rules, strs))
+ goto error;
+ }
+
+ if (!add_tcpcheck_expect_str(&alert->tcpcheck_rules, "250 "))
+ goto error;
+
+ {
+ const char * const strs[2] = { "DATA\r\n" };
+ if (!add_tcpcheck_send_strs(&alert->tcpcheck_rules, strs))
+ goto error;
+ }
+
+ if (!add_tcpcheck_expect_str(&alert->tcpcheck_rules, "354 "))
+ goto error;
+
+ {
+ struct tm tm;
+ char datestr[48];
+ const char * const strs[18] = {
+ "From: ", p->email_alert.from, "\n",
+ "To: ", p->email_alert.to, "\n",
+ "Date: ", datestr, "\n",
+ "Subject: [HAproxy Alert] ", msg, "\n",
+ "\n",
+ msg, "\n",
+ "\r\n",
+ ".\r\n",
+ NULL
+ };
+
+ get_localtime(date.tv_sec, &tm);
+
+ if (strftime(datestr, sizeof(datestr), "%a, %d %b %Y %T %z (%Z)", &tm) == 0) {
+ goto error;
+ }
+
+ if (!add_tcpcheck_send_strs(&alert->tcpcheck_rules, strs))
+ goto error;
+ }
+
+ if (!add_tcpcheck_expect_str(&alert->tcpcheck_rules, "250 "))
+ goto error;
+
+ {
+ const char * const strs[2] = { "QUIT\r\n" };
+ if (!add_tcpcheck_send_strs(&alert->tcpcheck_rules, strs))
+ goto error;
+ }
+
+ if (!add_tcpcheck_expect_str(&alert->tcpcheck_rules, "221 "))
+ goto error;
+
+ if (!check->task) {
+ struct task *t;
+
+ if ((t = task_new()) == NULL)
+ goto error;
+
+ check->task = t;
+ t->process = process_email_alert;
+ t->context = check;
+
+ /* check this in one ms */
+ t->expire = tick_add(now_ms, MS_TO_TICKS(1));
+ check->start = now;
+ task_queue(t);
+ }
+
+ LIST_ADDQ(&q->email_alerts, &alert->list);
+
+ return 1;
+
+error:
+ email_alert_free(alert);
+ return 0;
+}
+
+static void enqueue_email_alert(struct proxy *p, const char *msg)
+{
+ int i;
+ struct mailer *mailer;
+
+ for (i = 0, mailer = p->email_alert.mailers.m->mailer_list;
+ i < p->email_alert.mailers.m->count; i++, mailer = mailer->next) {
+ if (!enqueue_one_email_alert(&p->email_alert.queues[i], msg)) {
+ Alert("Email alert [%s] could not be enqueued: out of memory\n", p->id);
+ return;
+ }
+ }
+
+ return;
+}
+
+/*
+ * Send email alert if configured.
+ */
+void send_email_alert(struct server *s, int level, const char *format, ...)
+{
+ va_list argp;
+ char buf[1024];
+ int len;
+ struct proxy *p = s->proxy;
+
+ if (!p->email_alert.mailers.m || level > p->email_alert.level ||
+ format == NULL || !init_email_alert_checks(s))
+ return;
+
+ va_start(argp, format);
+ len = vsnprintf(buf, sizeof(buf), format, argp);
+ va_end(argp);
+
+ if (len < 0) {
+ Alert("Email alert [%s] could not format message\n", p->id);
+ return;
+ }
+
+ enqueue_email_alert(p, buf);
+}
+
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Chunk management functions.
+ *
+ * Copyright 2000-2012 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <ctype.h>
+#include <stdarg.h>
+#include <stdio.h>
+#include <string.h>
+
+#include <common/config.h>
+#include <common/chunk.h>
+
+/* trash chunks used for various conversions */
+static struct chunk *trash_chunk;
+static struct chunk trash_chunk1;
+static struct chunk trash_chunk2;
+
+/* trash buffers used for various conversions */
+static int trash_size;
+static char *trash_buf1;
+static char *trash_buf2;
+
+/*
+ * Returns a pre-allocated and initialized trash chunk that can be used for any
+ * type of conversion. Two chunks and their respective buffers are returned
+ * alternately so that it is always possible to iterate data transformations
+ * without losing the data being transformed. The blocks are initialized to the
+ * size of a standard buffer, so they should be enough for everything. For
+ * convenience, a zero is always emitted at the beginning of the string so that
+ * it may be used as an empty string as well.
+ */
+struct chunk *get_trash_chunk(void)
+{
+ char *trash_buf;
+
+ if (trash_chunk == &trash_chunk1) {
+ trash_chunk = &trash_chunk2;
+ trash_buf = trash_buf2;
+ }
+ else {
+ trash_chunk = &trash_chunk1;
+ trash_buf = trash_buf1;
+ }
+ *trash_buf = 0;
+ chunk_init(trash_chunk, trash_buf, trash_size);
+ return trash_chunk;
+}
+
+/* (re)allocates the trash buffers. Returns 0 in case of failure. It is
+ * possible to call this function multiple times if the trash size changes.
+ */
+int alloc_trash_buffers(int bufsize)
+{
+ trash_size = bufsize;
+ trash_buf1 = (char *)realloc(trash_buf1, bufsize);
+ trash_buf2 = (char *)realloc(trash_buf2, bufsize);
+ return trash_buf1 && trash_buf2;
+}
+
+/*
+ * free the trash buffers
+ */
+void free_trash_buffers(void)
+{
+ free(trash_buf2);
+ free(trash_buf1);
+ trash_buf2 = NULL;
+ trash_buf1 = NULL;
+}
+
+/*
+ * Does an snprintf() at the beginning of chunk <chk>, respecting the limit of
+ * at most chk->size chars. If the chk->len is over, nothing is added. Returns
+ * the new chunk size, or < 0 in case of failure.
+ */
+int chunk_printf(struct chunk *chk, const char *fmt, ...)
+{
+ va_list argp;
+ int ret;
+
+ if (!chk->str || !chk->size)
+ return 0;
+
+ va_start(argp, fmt);
+ ret = vsnprintf(chk->str, chk->size, fmt, argp);
+ va_end(argp);
+
+ if (ret >= chk->size)
+ ret = -1;
+
+ chk->len = ret;
+ return chk->len;
+}
+
+/*
+ * Does an snprintf() at the end of chunk <chk>, respecting the limit of
+ * at most chk->size chars. If the chk->len is over, nothing is added. Returns
+ * the new chunk size.
+ */
+int chunk_appendf(struct chunk *chk, const char *fmt, ...)
+{
+ va_list argp;
+ int ret;
+
+ if (!chk->str || !chk->size)
+ return 0;
+
+ va_start(argp, fmt);
+ ret = vsnprintf(chk->str + chk->len, chk->size - chk->len, fmt, argp);
+ if (ret >= chk->size - chk->len)
+ /* do not copy anything in case of truncation */
+ chk->str[chk->len] = 0;
+ else
+ chk->len += ret;
+ va_end(argp);
+ return chk->len;
+}
+
+/*
+ * Encode chunk <src> into chunk <dst>, respecting the limit of at most
+ * <dst>->size chars. Replace non-printable or special characters with "&#%u;".
+ * If the encoded output does not fit, <dst> is restored to its original length
+ * and nothing is added. Returns the new chunk size.
+ */
+int chunk_htmlencode(struct chunk *dst, struct chunk *src)
+{
+ int i, l;
+ int olen, free;
+ char c;
+
+ olen = dst->len;
+
+ for (i = 0; i < src->len; i++) {
+ free = dst->size - dst->len;
+
+ if (!free) {
+ dst->len = olen;
+ return dst->len;
+ }
+
+ c = src->str[i];
+
+ if (!isascii(c) || !isprint((unsigned char)c) || c == '&' || c == '"' || c == '\'' || c == '<' || c == '>') {
+ l = snprintf(dst->str + dst->len, free, "&#%u;", (unsigned char)c);
+
+ if (free < l) {
+ dst->len = olen;
+ return dst->len;
+ }
+
+ dst->len += l;
+ } else {
+ dst->str[dst->len] = c;
+ dst->len++;
+ }
+ }
+
+ return dst->len;
+}
+
+/*
+ * Encode chunk <src> into chunk <dst>, respecting the limit of at most
+ * <dst>->size chars. Replace non-printable characters, '<', '>' and the
+ * character passed in <qc> with "<%02X>". If the encoded output does not fit,
+ * <dst> is restored to its original length and nothing is added. Returns the
+ * new chunk size.
+ */
+int chunk_asciiencode(struct chunk *dst, struct chunk *src, char qc)
+{
+ int i, l;
+ int olen, free;
+ char c;
+
+ olen = dst->len;
+
+ for (i = 0; i < src->len; i++) {
+ free = dst->size - dst->len;
+
+ if (!free) {
+ dst->len = olen;
+ return dst->len;
+ }
+
+ c = src->str[i];
+
+ if (!isascii(c) || !isprint((unsigned char)c) || c == '<' || c == '>' || c == qc) {
+ l = snprintf(dst->str + dst->len, free, "<%02X>", (unsigned char)c);
+
+ if (free < l) {
+ dst->len = olen;
+ return dst->len;
+ }
+
+ dst->len += l;
+ } else {
+ dst->str[dst->len] = c;
+ dst->len++;
+ }
+ }
+
+ return dst->len;
+}
+
+/* Compares the string in chunk <chk> with the string in <str> which must be
+ * zero-terminated. Return is the same as with strcmp(). Neither is allowed
+ * to be null.
+ */
+int chunk_strcmp(const struct chunk *chk, const char *str)
+{
+ const char *s1 = chk->str;
+ int len = chk->len;
+ int diff = 0;
+
+ do {
+ if (--len < 0) {
+ diff = (unsigned char)0 - (unsigned char)*str;
+ break;
+ }
+ diff = (unsigned char)*(s1++) - (unsigned char)*(str++);
+ } while (!diff);
+ return diff;
+}
+
+/* Case-insensitively compares the string in chunk <chk> with the string in
+ * <str> which must be zero-terminated. Return is the same as with strcmp().
+ * Neither is allowed to be null.
+ */
+int chunk_strcasecmp(const struct chunk *chk, const char *str)
+{
+ const char *s1 = chk->str;
+ int len = chk->len;
+ int diff = 0;
+
+ do {
+ if (--len < 0) {
+ diff = (unsigned char)0 - (unsigned char)*str;
+ break;
+ }
+ diff = (unsigned char)*s1 - (unsigned char)*str;
+ if (unlikely(diff)) {
+ unsigned int l = (unsigned char)*s1;
+ unsigned int r = (unsigned char)*str;
+
+ l -= 'a';
+ r -= 'a';
+
+ if (likely(l <= (unsigned char)'z' - 'a'))
+ l -= 'a' - 'A';
+ if (likely(r <= (unsigned char)'z' - 'a'))
+ r -= 'a' - 'A';
+ diff = l - r;
+ }
+ s1++; str++;
+ } while (!diff);
+ return diff;
+}
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * HTTP compression.
+ *
+ * Copyright 2012 Exceliance, David Du Colombier <dducolombier@exceliance.fr>
+ * William Lallemand <wlallemand@exceliance.fr>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <stdio.h>
+
+#if defined(USE_SLZ)
+#include <slz.h>
+#elif defined(USE_ZLIB)
+/* Note: the crappy zlib and openssl libs both define the "free_func" type.
+ * That's a very clever idea to use such a generic name in general purpose
+ * libraries, really... The zlib one is easier to redefine than openssl's,
+ * so let's only fix this one.
+ */
+#define free_func zlib_free_func
+#include <zlib.h>
+#undef free_func
+#endif /* USE_ZLIB */
+
+#include <common/compat.h>
+#include <common/memory.h>
+
+#include <types/global.h>
+#include <types/compression.h>
+
+#include <proto/acl.h>
+#include <proto/compression.h>
+#include <proto/freq_ctr.h>
+#include <proto/proto_http.h>
+#include <proto/stream.h>
+
+
+#ifdef USE_ZLIB
+
+static void *alloc_zlib(void *opaque, unsigned int items, unsigned int size);
+static void free_zlib(void *opaque, void *ptr);
+
+/* zlib allocation */
+static struct pool_head *zlib_pool_deflate_state = NULL;
+static struct pool_head *zlib_pool_window = NULL;
+static struct pool_head *zlib_pool_prev = NULL;
+static struct pool_head *zlib_pool_head = NULL;
+static struct pool_head *zlib_pool_pending_buf = NULL;
+
+long zlib_used_memory = 0;
+
+#endif
+
+unsigned int compress_min_idle = 0;
+static struct pool_head *pool_comp_ctx = NULL;
+
+static int identity_init(struct comp_ctx **comp_ctx, int level);
+static int identity_add_data(struct comp_ctx *comp_ctx, const char *in_data, int in_len, struct buffer *out);
+static int identity_flush(struct comp_ctx *comp_ctx, struct buffer *out);
+static int identity_finish(struct comp_ctx *comp_ctx, struct buffer *out);
+static int identity_end(struct comp_ctx **comp_ctx);
+
+#if defined(USE_SLZ)
+
+static int rfc1950_init(struct comp_ctx **comp_ctx, int level);
+static int rfc1951_init(struct comp_ctx **comp_ctx, int level);
+static int rfc1952_init(struct comp_ctx **comp_ctx, int level);
+static int rfc195x_add_data(struct comp_ctx *comp_ctx, const char *in_data, int in_len, struct buffer *out);
+static int rfc195x_flush(struct comp_ctx *comp_ctx, struct buffer *out);
+static int rfc195x_finish(struct comp_ctx *comp_ctx, struct buffer *out);
+static int rfc195x_end(struct comp_ctx **comp_ctx);
+
+#elif defined(USE_ZLIB)
+
+static int gzip_init(struct comp_ctx **comp_ctx, int level);
+static int raw_def_init(struct comp_ctx **comp_ctx, int level);
+static int deflate_init(struct comp_ctx **comp_ctx, int level);
+static int deflate_add_data(struct comp_ctx *comp_ctx, const char *in_data, int in_len, struct buffer *out);
+static int deflate_flush(struct comp_ctx *comp_ctx, struct buffer *out);
+static int deflate_finish(struct comp_ctx *comp_ctx, struct buffer *out);
+static int deflate_end(struct comp_ctx **comp_ctx);
+
+#endif /* USE_ZLIB */
+
+
+const struct comp_algo comp_algos[] =
+{
+ { "identity", 8, "identity", 8, identity_init, identity_add_data, identity_flush, identity_finish, identity_end },
+#if defined(USE_SLZ)
+ { "deflate", 7, "deflate", 7, rfc1950_init, rfc195x_add_data, rfc195x_flush, rfc195x_finish, rfc195x_end },
+ { "raw-deflate", 11, "deflate", 7, rfc1951_init, rfc195x_add_data, rfc195x_flush, rfc195x_finish, rfc195x_end },
+ { "gzip", 4, "gzip", 4, rfc1952_init, rfc195x_add_data, rfc195x_flush, rfc195x_finish, rfc195x_end },
+#elif defined(USE_ZLIB)
+ { "deflate", 7, "deflate", 7, deflate_init, deflate_add_data, deflate_flush, deflate_finish, deflate_end },
+ { "raw-deflate", 11, "deflate", 7, raw_def_init, deflate_add_data, deflate_flush, deflate_finish, deflate_end },
+ { "gzip", 4, "gzip", 4, gzip_init, deflate_add_data, deflate_flush, deflate_finish, deflate_end },
+#endif /* USE_ZLIB */
+ { NULL, 0, NULL, 0, NULL , NULL, NULL, NULL, NULL }
+};
+
+/*
+ * Add a content-type in the configuration
+ */
+int comp_append_type(struct comp *comp, const char *type)
+{
+ struct comp_type *comp_type;
+
+ comp_type = calloc(1, sizeof(struct comp_type));
+ if (!comp_type)
+ return -1;
+ comp_type->name_len = strlen(type);
+ comp_type->name = strdup(type);
+ comp_type->next = comp->types;
+ comp->types = comp_type;
+ return 0;
+}
+
+/*
+ * Add an algorithm in the configuration
+ */
+int comp_append_algo(struct comp *comp, const char *algo)
+{
+ struct comp_algo *comp_algo;
+ int i;
+
+ for (i = 0; comp_algos[i].cfg_name; i++) {
+ if (!strcmp(algo, comp_algos[i].cfg_name)) {
+ comp_algo = calloc(1, sizeof(struct comp_algo));
+ memmove(comp_algo, &comp_algos[i], sizeof(struct comp_algo));
+ comp_algo->next = comp->algos;
+ comp->algos = comp_algo;
+ return 0;
+ }
+ }
+ return -1;
+}
+
+/* emit the chunksize followed by a CRLF on the output and return the number of
+ * bytes written. It goes backwards and starts with the byte before <end>. It
+ * returns the number of bytes written which will not exceed 10 (8 digits, CR,
+ * and LF). The caller is responsible for ensuring there is enough room left in
+ * the output buffer for the string.
+ */
+int http_emit_chunk_size(char *end, unsigned int chksz)
+{
+ char *beg = end;
+
+ *--beg = '\n';
+ *--beg = '\r';
+ do {
+ *--beg = hextab[chksz & 0xF];
+ } while (chksz >>= 4);
+ return end - beg;
+}
+
+/*
+ * Init HTTP compression
+ */
+int http_compression_buffer_init(struct stream *s, struct buffer *in, struct buffer *out)
+{
+ /* output stream requires at least 10 bytes for the gzip header, plus
+ * at least 8 bytes for the gzip trailer (crc+len), plus at most
+ * 5 bytes per 32kB block and 2 bytes to close the stream.
+ */
+ if (in->size - buffer_len(in) < 20 + 5 * ((in->i + 32767) >> 15))
+ return -1;
+
+ /* prepare an empty output buffer in which we reserve enough room for
+ * copying the output bytes from <in>, plus 10 extra bytes to write
+ * the chunk size. We don't copy the bytes yet so that if we have to
+ * cancel the operation later, it's cheap.
+ */
+ b_reset(out);
+ out->o = in->o;
+ out->p += out->o;
+ out->i = 10;
+ return 0;
+}
+
+/*
+ * Add data to compress
+ */
+int http_compression_buffer_add_data(struct stream *s, struct buffer *in, struct buffer *out)
+{
+ struct http_msg *msg = &s->txn->rsp;
+ int consumed_data = 0;
+ int data_process_len;
+ int block1, block2;
+
+ /*
+ * Temporarily skip already parsed data and chunks to jump to the
+ * actual data block. It is fixed before leaving.
+ */
+ b_adv(in, msg->next);
+
+ /*
+ * select the smallest size between the announced chunk size, the input
+ * data, and the available output buffer size. The compressors are
+ * assumed to be able to process all the bytes we pass to them at once.
+ */
+ data_process_len = MIN(in->i, msg->chunk_len);
+ data_process_len = MIN(out->size - buffer_len(out), data_process_len);
+
+ block1 = data_process_len;
+ if (block1 > bi_contig_data(in))
+ block1 = bi_contig_data(in);
+ block2 = data_process_len - block1;
+
+ /* compressors return < 0 upon error or the amount of bytes read */
+ consumed_data = s->comp_algo->add_data(s->comp_ctx, bi_ptr(in), block1, out);
+ if (consumed_data >= 0 && block2 > 0) {
+ consumed_data = s->comp_algo->add_data(s->comp_ctx, in->data, block2, out);
+ if (consumed_data >= 0)
+ consumed_data += block1;
+ }
+
+ /* restore original buffer pointer */
+ b_rew(in, msg->next);
+
+ if (consumed_data > 0) {
+ msg->next += consumed_data;
+ msg->chunk_len -= consumed_data;
+ }
+ return consumed_data;
+}
+
+/*
+ * Flush data in process, and write the header and footer of the chunk. Upon
+ * success, in and out buffers are swapped to avoid a copy.
+ */
+int http_compression_buffer_end(struct stream *s, struct buffer **in, struct buffer **out, int end)
+{
+ int to_forward;
+ int left;
+ struct http_msg *msg = &s->txn->rsp;
+ struct buffer *ib = *in, *ob = *out;
+ char *tail;
+
+#if defined(USE_SLZ) || defined(USE_ZLIB)
+ int ret;
+
+ /* flush data here */
+
+ if (end)
+ ret = s->comp_algo->finish(s->comp_ctx, ob); /* end of data */
+ else
+ ret = s->comp_algo->flush(s->comp_ctx, ob); /* end of buffer */
+
+ if (ret < 0)
+ return -1; /* flush failed */
+
+#endif /* USE_SLZ || USE_ZLIB */
+
+ if (ob->i == 10) {
+ /* No data were appended, let's drop the output buffer and
+ * keep the input buffer unchanged.
+ */
+ return 0;
+ }
+
+ /* OK so at this stage, we have an output buffer <ob> looking like this :
+ *
+ * <-- o --> <------ i ----->
+ * +---------+---+------------+-----------+
+ * | out | c | comp_in | empty |
+ * +---------+---+------------+-----------+
+ * data p size
+ *
+ * <out> is the room reserved to copy ib->o. It starts at ob->data and
+ * has not yet been filled. <c> is the room reserved to write the chunk
+ * size (10 bytes). <comp_in> is the compressed equivalent of the data
+ * part of ib->i. <empty> is the amount of empty bytes at the end of
+ * the buffer, into which we may have to copy the remaining bytes from
+ * ib->i after the data (chunk size, trailers, ...).
+ */
+
+ /* Write the real size at the beginning of the chunk; no need for wrapping.
+ * We write the chunk using a dynamic length and adjust ob->p and ob->i
+ * accordingly afterwards. That will move <out> away from <data>.
+ */
+ left = 10 - http_emit_chunk_size(ob->p + 10, ob->i - 10);
+ ob->p += left;
+ ob->i -= left;
+
+ /* Copy previous data from ib->o into ob->o */
+ if (ib->o > 0) {
+ left = bo_contig_data(ib);
+ memcpy(ob->p - ob->o, bo_ptr(ib), left);
+ if (ib->o - left) /* second part of the buffer */
+ memcpy(ob->p - ob->o + left, ib->data, ib->o - left);
+ }
+
+ /* chunked encoding requires CRLF after data */
+ tail = ob->p + ob->i;
+ *tail++ = '\r';
+ *tail++ = '\n';
+
+ /* At the end of data, we must write the empty chunk 0<CRLF>,
+ * and terminate the trailers section with a last <CRLF>. If
+ * we're forwarding a chunked-encoded response, we'll have a
+ * trailers section after the empty chunk which needs to be
+ * forwarded and which will provide the last CRLF. Otherwise
+ * we write it ourselves.
+ */
+ if (msg->msg_state >= HTTP_MSG_TRAILERS) {
+ memcpy(tail, "0\r\n", 3);
+ tail += 3;
+ if (msg->msg_state >= HTTP_MSG_DONE) {
+ memcpy(tail, "\r\n", 2);
+ tail += 2;
+ }
+ }
+ ob->i = tail - ob->p;
+
+ to_forward = ob->i;
+
+ /* update input rate */
+ if (s->comp_ctx && s->comp_ctx->cur_lvl > 0) {
+ update_freq_ctr(&global.comp_bps_in, msg->next);
+ strm_fe(s)->fe_counters.comp_in += msg->next;
+ s->be->be_counters.comp_in += msg->next;
+ } else {
+ strm_fe(s)->fe_counters.comp_byp += msg->next;
+ s->be->be_counters.comp_byp += msg->next;
+ }
+
+ /* copy the remaining data in the tmp buffer. */
+ b_adv(ib, msg->next);
+ msg->next = 0;
+
+ if (ib->i > 0) {
+ left = bi_contig_data(ib);
+ memcpy(ob->p + ob->i, bi_ptr(ib), left);
+ ob->i += left;
+ if (ib->i - left) {
+ memcpy(ob->p + ob->i, ib->data, ib->i - left);
+ ob->i += ib->i - left;
+ }
+ }
+
+ /* swap the buffers */
+ *in = ob;
+ *out = ib;
+
+ if (s->comp_ctx && s->comp_ctx->cur_lvl > 0) {
+ update_freq_ctr(&global.comp_bps_out, to_forward);
+ strm_fe(s)->fe_counters.comp_out += to_forward;
+ s->be->be_counters.comp_out += to_forward;
+ }
+
+ /* forward the new chunk without remaining data */
+ b_adv(ob, to_forward);
+
+ return to_forward;
+}
+
+/*
+ * Alloc the comp_ctx
+ */
+static inline int init_comp_ctx(struct comp_ctx **comp_ctx)
+{
+#ifdef USE_ZLIB
+ z_stream *strm;
+
+ if (global.maxzlibmem > 0 && (global.maxzlibmem - zlib_used_memory) < sizeof(struct comp_ctx))
+ return -1;
+#endif
+
+ if (unlikely(pool_comp_ctx == NULL))
+ pool_comp_ctx = create_pool("comp_ctx", sizeof(struct comp_ctx), MEM_F_SHARED);
+
+ *comp_ctx = pool_alloc2(pool_comp_ctx);
+ if (*comp_ctx == NULL)
+ return -1;
+#if defined(USE_SLZ)
+ (*comp_ctx)->direct_ptr = NULL;
+ (*comp_ctx)->direct_len = 0;
+ (*comp_ctx)->queued = NULL;
+#elif defined(USE_ZLIB)
+ zlib_used_memory += sizeof(struct comp_ctx);
+
+ strm = &(*comp_ctx)->strm;
+ strm->zalloc = alloc_zlib;
+ strm->zfree = free_zlib;
+ strm->opaque = *comp_ctx;
+#endif
+ return 0;
+}
+
+/*
+ * Dealloc the comp_ctx
+ */
+static inline int deinit_comp_ctx(struct comp_ctx **comp_ctx)
+{
+ if (!*comp_ctx)
+ return 0;
+
+ pool_free2(pool_comp_ctx, *comp_ctx);
+ *comp_ctx = NULL;
+
+#ifdef USE_ZLIB
+ zlib_used_memory -= sizeof(struct comp_ctx);
+#endif
+ return 0;
+}
+
+
+/****************************
+ **** Identity algorithm ****
+ ****************************/
+
+/*
+ * Init the identity algorithm
+ */
+static int identity_init(struct comp_ctx **comp_ctx, int level)
+{
+ return 0;
+}
+
+/*
+ * Process data
+ * Return size of consumed data or -1 on error
+ */
+static int identity_add_data(struct comp_ctx *comp_ctx, const char *in_data, int in_len, struct buffer *out)
+{
+ char *out_data = bi_end(out);
+ int out_len = out->size - buffer_len(out);
+
+ if (out_len < in_len)
+ return -1;
+
+ memcpy(out_data, in_data, in_len);
+
+ out->i += in_len;
+
+ return in_len;
+}
+
+static int identity_flush(struct comp_ctx *comp_ctx, struct buffer *out)
+{
+ return 0;
+}
+
+static int identity_finish(struct comp_ctx *comp_ctx, struct buffer *out)
+{
+ return 0;
+}
+
+/*
+ * Deinit the algorithm
+ */
+static int identity_end(struct comp_ctx **comp_ctx)
+{
+ return 0;
+}
+
+
+#ifdef USE_SLZ
+
+/* SLZ's gzip format (RFC1952). Returns < 0 on error. */
+static int rfc1952_init(struct comp_ctx **comp_ctx, int level)
+{
+ if (init_comp_ctx(comp_ctx) < 0)
+ return -1;
+
+ (*comp_ctx)->cur_lvl = !!level;
+ return slz_rfc1952_init(&(*comp_ctx)->strm, !!level);
+}
+
+/* SLZ's raw deflate format (RFC1951). Returns < 0 on error. */
+static int rfc1951_init(struct comp_ctx **comp_ctx, int level)
+{
+ if (init_comp_ctx(comp_ctx) < 0)
+ return -1;
+
+ (*comp_ctx)->cur_lvl = !!level;
+ return slz_rfc1951_init(&(*comp_ctx)->strm, !!level);
+}
+
+/* SLZ's zlib format (RFC1950). Returns < 0 on error. */
+static int rfc1950_init(struct comp_ctx **comp_ctx, int level)
+{
+ if (init_comp_ctx(comp_ctx) < 0)
+ return -1;
+
+ (*comp_ctx)->cur_lvl = !!level;
+ return slz_rfc1950_init(&(*comp_ctx)->strm, !!level);
+}
+
+/* Return the size of consumed data or -1. The output buffer is unused at this
+ * point, we only keep a reference to the input data or a copy of them if the
+ * reference is already used.
+ */
+static int rfc195x_add_data(struct comp_ctx *comp_ctx, const char *in_data, int in_len, struct buffer *out)
+{
+ static struct buffer *tmpbuf = &buf_empty;
+
+ if (in_len <= 0)
+ return 0;
+
+ if (comp_ctx->direct_ptr && !comp_ctx->queued) {
+ /* data is already being pointed to, so we are facing fragmented
+ * input and need a buffer now. We reuse the same buffer, as it's
+ * not used outside the scope of a series of add_data()*, end().
+ */
+ if (unlikely(!tmpbuf->size)) {
+ /* this is the first time we need the compression buffer */
+ if (b_alloc(&tmpbuf) == NULL)
+ return -1; /* no memory */
+ }
+ b_reset(tmpbuf);
+ memcpy(bi_end(tmpbuf), comp_ctx->direct_ptr, comp_ctx->direct_len);
+ tmpbuf->i += comp_ctx->direct_len;
+ comp_ctx->direct_ptr = NULL;
+ comp_ctx->direct_len = 0;
+ comp_ctx->queued = tmpbuf;
+ /* fall through buffer copy */
+ }
+
+ if (comp_ctx->queued) {
+ /* data already pending */
+ memcpy(bi_end(comp_ctx->queued), in_data, in_len);
+ comp_ctx->queued->i += in_len;
+ return in_len;
+ }
+
+ comp_ctx->direct_ptr = in_data;
+ comp_ctx->direct_len = in_len;
+ return in_len;
+}
+
+/* Compresses the data accumulated using add_data(), and optionally sends the
+ * format-specific trailer if <finish> is non-null. <out> is expected to have a
+ * large enough free non-wrapping space as verified by http_comp_buffer_init().
+ * The number of bytes emitted is reported.
+ */
+static int rfc195x_flush_or_finish(struct comp_ctx *comp_ctx, struct buffer *out, int finish)
+{
+ struct slz_stream *strm = &comp_ctx->strm;
+ const char *in_ptr;
+ int in_len;
+ int out_len;
+
+ in_ptr = comp_ctx->direct_ptr;
+ in_len = comp_ctx->direct_len;
+
+ if (comp_ctx->queued) {
+ in_ptr = comp_ctx->queued->p;
+ in_len = comp_ctx->queued->i;
+ }
+
+ out_len = out->i;
+
+ if (in_ptr)
+ out->i += slz_encode(strm, bi_end(out), in_ptr, in_len, !finish);
+
+ if (finish)
+ out->i += slz_finish(strm, bi_end(out));
+
+ out_len = out->i - out_len;
+
+ /* very important, we must wipe the data we've just flushed */
+ comp_ctx->direct_len = 0;
+ comp_ctx->direct_ptr = NULL;
+ comp_ctx->queued = NULL;
+
+ /* Verify compression rate limiting and CPU usage */
+ if ((global.comp_rate_lim > 0 && (read_freq_ctr(&global.comp_bps_out) > global.comp_rate_lim)) || /* rate */
+ (idle_pct < compress_min_idle)) { /* idle */
+ if (comp_ctx->cur_lvl > 0)
+ strm->level = --comp_ctx->cur_lvl;
+ }
+ else if (comp_ctx->cur_lvl < global.tune.comp_maxlevel && comp_ctx->cur_lvl < 1) {
+ strm->level = ++comp_ctx->cur_lvl;
+ }
+
+ /* and that's all */
+ return out_len;
+}
+
+static int rfc195x_flush(struct comp_ctx *comp_ctx, struct buffer *out)
+{
+ return rfc195x_flush_or_finish(comp_ctx, out, 0);
+}
+
+static int rfc195x_finish(struct comp_ctx *comp_ctx, struct buffer *out)
+{
+ return rfc195x_flush_or_finish(comp_ctx, out, 1);
+}
+
+/* we just need to free the comp_ctx here, nothing was allocated */
+static int rfc195x_end(struct comp_ctx **comp_ctx)
+{
+ deinit_comp_ctx(comp_ctx);
+ return 0;
+}
+
+#elif defined(USE_ZLIB) /* ! USE_SLZ */
+
+/*
+ * This is a tricky allocation function using the zlib.
+ * This is based on the allocation order in deflateInit2.
+ */
+static void *alloc_zlib(void *opaque, unsigned int items, unsigned int size)
+{
+ struct comp_ctx *ctx = opaque;
+ static char round = 0; /* order in deflateInit2 */
+ void *buf = NULL;
+ struct pool_head *pool = NULL;
+
+ if (global.maxzlibmem > 0 && (global.maxzlibmem - zlib_used_memory) < (long)(items * size))
+ goto end;
+
+ switch (round) {
+ case 0:
+ if (zlib_pool_deflate_state == NULL)
+ zlib_pool_deflate_state = create_pool("zlib_state", size * items, MEM_F_SHARED);
+ pool = zlib_pool_deflate_state;
+ ctx->zlib_deflate_state = buf = pool_alloc2(pool);
+ break;
+
+ case 1:
+ if (zlib_pool_window == NULL)
+ zlib_pool_window = create_pool("zlib_window", size * items, MEM_F_SHARED);
+ pool = zlib_pool_window;
+ ctx->zlib_window = buf = pool_alloc2(pool);
+ break;
+
+ case 2:
+ if (zlib_pool_prev == NULL)
+ zlib_pool_prev = create_pool("zlib_prev", size * items, MEM_F_SHARED);
+ pool = zlib_pool_prev;
+ ctx->zlib_prev = buf = pool_alloc2(pool);
+ break;
+
+ case 3:
+ if (zlib_pool_head == NULL)
+ zlib_pool_head = create_pool("zlib_head", size * items, MEM_F_SHARED);
+ pool = zlib_pool_head;
+ ctx->zlib_head = buf = pool_alloc2(pool);
+ break;
+
+ case 4:
+ if (zlib_pool_pending_buf == NULL)
+ zlib_pool_pending_buf = create_pool("zlib_pending_buf", size * items, MEM_F_SHARED);
+ pool = zlib_pool_pending_buf;
+ ctx->zlib_pending_buf = buf = pool_alloc2(pool);
+ break;
+ }
+ if (buf != NULL)
+ zlib_used_memory += pool->size;
+
+end:
+
+ /* deflateInit2() first allocates and checks the deflate_state, then,
+ * if it succeeds, it allocates the 4 other areas at once and checks
+ * them at the end. So we want to count the rounds correctly, depending
+ * on where zlib may abort.
+ */
+ if (buf || round)
+ round = (round + 1) % 5;
+ return buf;
+}
+
+static void free_zlib(void *opaque, void *ptr)
+{
+ struct comp_ctx *ctx = opaque;
+ struct pool_head *pool = NULL;
+
+ if (ptr == ctx->zlib_window)
+ pool = zlib_pool_window;
+ else if (ptr == ctx->zlib_deflate_state)
+ pool = zlib_pool_deflate_state;
+ else if (ptr == ctx->zlib_prev)
+ pool = zlib_pool_prev;
+ else if (ptr == ctx->zlib_head)
+ pool = zlib_pool_head;
+ else if (ptr == ctx->zlib_pending_buf)
+ pool = zlib_pool_pending_buf;
+
+ pool_free2(pool, ptr);
+ zlib_used_memory -= pool->size;
+}
+
+/**************************
+ ****  gzip algorithm  ****
+ **************************/
+static int gzip_init(struct comp_ctx **comp_ctx, int level)
+{
+ z_stream *strm;
+
+ if (init_comp_ctx(comp_ctx) < 0)
+ return -1;
+
+ strm = &(*comp_ctx)->strm;
+
+ if (deflateInit2(strm, level, Z_DEFLATED, global.tune.zlibwindowsize + 16, global.tune.zlibmemlevel, Z_DEFAULT_STRATEGY) != Z_OK) {
+ deinit_comp_ctx(comp_ctx);
+ return -1;
+ }
+
+ (*comp_ctx)->cur_lvl = level;
+
+ return 0;
+}
+
+/* Raw deflate algorithm */
+static int raw_def_init(struct comp_ctx **comp_ctx, int level)
+{
+ z_stream *strm;
+
+ if (init_comp_ctx(comp_ctx) < 0)
+ return -1;
+
+ strm = &(*comp_ctx)->strm;
+
+ if (deflateInit2(strm, level, Z_DEFLATED, -global.tune.zlibwindowsize, global.tune.zlibmemlevel, Z_DEFAULT_STRATEGY) != Z_OK) {
+ deinit_comp_ctx(comp_ctx);
+ return -1;
+ }
+
+ (*comp_ctx)->cur_lvl = level;
+ return 0;
+}
+
+/**************************
+**** Deflate algorithm ****
+***************************/
+
+static int deflate_init(struct comp_ctx **comp_ctx, int level)
+{
+ z_stream *strm;
+
+ if (init_comp_ctx(comp_ctx) < 0)
+ return -1;
+
+ strm = &(*comp_ctx)->strm;
+
+ if (deflateInit2(strm, level, Z_DEFLATED, global.tune.zlibwindowsize, global.tune.zlibmemlevel, Z_DEFAULT_STRATEGY) != Z_OK) {
+ deinit_comp_ctx(comp_ctx);
+ return -1;
+ }
+
+ (*comp_ctx)->cur_lvl = level;
+
+ return 0;
+}
+
+/* Return the size of consumed data or -1 */
+static int deflate_add_data(struct comp_ctx *comp_ctx, const char *in_data, int in_len, struct buffer *out)
+{
+ int ret;
+ z_stream *strm = &comp_ctx->strm;
+ char *out_data = bi_end(out);
+ int out_len = out->size - buffer_len(out);
+
+ if (in_len <= 0)
+ return 0;
+
+ if (out_len <= 0)
+ return -1;
+
+ strm->next_in = (unsigned char *)in_data;
+ strm->avail_in = in_len;
+ strm->next_out = (unsigned char *)out_data;
+ strm->avail_out = out_len;
+
+ ret = deflate(strm, Z_NO_FLUSH);
+ if (ret != Z_OK)
+ return -1;
+
+ /* deflate updates the amount of output data available */
+ out->i += out_len - strm->avail_out;
+
+ return in_len - strm->avail_in;
+}
+
+static int deflate_flush_or_finish(struct comp_ctx *comp_ctx, struct buffer *out, int flag)
+{
+ int ret;
+ int out_len = 0;
+ z_stream *strm = &comp_ctx->strm;
+
+ strm->next_out = (unsigned char *)bi_end(out);
+ strm->avail_out = out->size - buffer_len(out);
+
+ ret = deflate(strm, flag);
+ if (ret != Z_OK && ret != Z_STREAM_END)
+ return -1;
+
+ out_len = (out->size - buffer_len(out)) - strm->avail_out;
+ out->i += out_len;
+
+ /* compression limit */
+ if ((global.comp_rate_lim > 0 && (read_freq_ctr(&global.comp_bps_out) > global.comp_rate_lim)) || /* rate */
+ (idle_pct < compress_min_idle)) { /* idle */
+ /* decrease level */
+ if (comp_ctx->cur_lvl > 0) {
+ comp_ctx->cur_lvl--;
+ deflateParams(&comp_ctx->strm, comp_ctx->cur_lvl, Z_DEFAULT_STRATEGY);
+ }
+
+ } else if (comp_ctx->cur_lvl < global.tune.comp_maxlevel) {
+ /* increase level */
+ comp_ctx->cur_lvl++;
+ deflateParams(&comp_ctx->strm, comp_ctx->cur_lvl, Z_DEFAULT_STRATEGY);
+ }
+
+ return out_len;
+}
+
+static int deflate_flush(struct comp_ctx *comp_ctx, struct buffer *out)
+{
+ return deflate_flush_or_finish(comp_ctx, out, Z_SYNC_FLUSH);
+}
+
+static int deflate_finish(struct comp_ctx *comp_ctx, struct buffer *out)
+{
+ return deflate_flush_or_finish(comp_ctx, out, Z_FINISH);
+}
+
+static int deflate_end(struct comp_ctx **comp_ctx)
+{
+ z_stream *strm = &(*comp_ctx)->strm;
+ int ret;
+
+ ret = deflateEnd(strm);
+
+ deinit_comp_ctx(comp_ctx);
+
+ return ret;
+}
+
+#endif /* USE_ZLIB */
+
+/* boolean, returns true if compression is used (either gzip or deflate) in the response */
+static int
+smp_fetch_res_comp(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->data.type = SMP_T_BOOL;
+ smp->data.u.sint = (smp->strm->comp_algo != NULL);
+ return 1;
+}
+
+/* string, returns algo */
+static int
+smp_fetch_res_comp_algo(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ if (!smp->strm->comp_algo)
+ return 0;
+
+ smp->data.type = SMP_T_STR;
+ smp->flags = SMP_F_CONST;
+ smp->data.u.str.str = smp->strm->comp_algo->cfg_name;
+ smp->data.u.str.len = smp->strm->comp_algo->cfg_name_len;
+ return 1;
+}
+
+/* Note: must not be declared <const> as its list will be overwritten */
+static struct acl_kw_list acl_kws = {ILH, {
+ { /* END */ },
+}};
+
+/* Note: must not be declared <const> as its list will be overwritten */
+static struct sample_fetch_kw_list sample_fetch_keywords = {ILH, {
+ { "res.comp", smp_fetch_res_comp, 0, NULL, SMP_T_BOOL, SMP_USE_HRSHP },
+ { "res.comp_algo", smp_fetch_res_comp_algo, 0, NULL, SMP_T_STR, SMP_USE_HRSHP },
+ { /* END */ },
+}};
+
+__attribute__((constructor))
+static void __comp_fetch_init(void)
+{
+#ifdef USE_SLZ
+ slz_make_crc_table();
+ slz_prepare_dist_table();
+#endif
+ acl_register_keywords(&acl_kws);
+ sample_register_fetches(&sample_fetch_keywords);
+}
--- /dev/null
+/*
+ * Connection management functions
+ *
+ * Copyright 2000-2012 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <errno.h>
+
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/namespace.h>
+
+#include <proto/connection.h>
+#include <proto/fd.h>
+#include <proto/frontend.h>
+#include <proto/proto_tcp.h>
+#include <proto/stream_interface.h>
+
+#ifdef USE_OPENSSL
+#include <proto/ssl_sock.h>
+#endif
+
+struct pool_head *pool2_connection;
+
+/* perform minimal initializations; report 0 in case of error, 1 if OK. */
+int init_connection()
+{
+ pool2_connection = create_pool("connection", sizeof (struct connection), MEM_F_SHARED);
+ return pool2_connection != NULL;
+}
+
+/* I/O callback for fd-based connections. It calls the read/write handlers
+ * provided by the connection's sock_ops, which must be valid. It returns 0.
+ */
+int conn_fd_handler(int fd)
+{
+ struct connection *conn = fdtab[fd].owner;
+ unsigned int flags;
+
+ if (unlikely(!conn))
+ return 0;
+
+ conn_refresh_polling_flags(conn);
+ flags = conn->flags & ~CO_FL_ERROR; /* ensure to call the wake handler upon error */
+
+ process_handshake:
+ /* The handshake callbacks are called in sequence. If either of them is
+ * missing data, it must enable the required polling at the socket
+ * layer of the connection. Polling state is not guaranteed when entering
+ * these handlers, so any handshake handler which does not complete its
+ * work must explicitly disable events it's not interested in. Error
+ * handling is also performed here in order to reduce the number of tests
+ * around.
+ */
+ while (unlikely(conn->flags & (CO_FL_HANDSHAKE | CO_FL_ERROR))) {
+ if (unlikely(conn->flags & CO_FL_ERROR))
+ goto leave;
+
+ if (conn->flags & CO_FL_ACCEPT_PROXY)
+ if (!conn_recv_proxy(conn, CO_FL_ACCEPT_PROXY))
+ goto leave;
+
+ if (conn->flags & CO_FL_SEND_PROXY)
+ if (!conn_si_send_proxy(conn, CO_FL_SEND_PROXY))
+ goto leave;
+#ifdef USE_OPENSSL
+ if (conn->flags & CO_FL_SSL_WAIT_HS)
+ if (!ssl_sock_handshake(conn, CO_FL_SSL_WAIT_HS))
+ goto leave;
+#endif
+ }
+
+ /* Once we're purely in the data phase, we disable handshake polling */
+ if (!(conn->flags & CO_FL_POLL_SOCK))
+ __conn_sock_stop_both(conn);
+
+ /* The data layer might not be ready yet (eg: when using embryonic
+ * sessions). If we're about to move data, we must initialize it first.
+ * The function may fail and cause the connection to be destroyed, thus
+ * we must not use it anymore and should immediately leave instead.
+ */
+ if ((conn->flags & CO_FL_INIT_DATA) && conn->data->init(conn) < 0)
+ return 0;
+
+ /* The data transfer starts here and stops on error and handshakes. Note
+ * that we must absolutely test conn->xprt at each step in case it suddenly
+ * changes due to a quick unexpected close().
+ */
+ if (conn->xprt && fd_recv_ready(fd) &&
+ ((conn->flags & (CO_FL_DATA_RD_ENA|CO_FL_WAIT_ROOM|CO_FL_ERROR|CO_FL_HANDSHAKE)) == CO_FL_DATA_RD_ENA)) {
+ /* force detection of a flag change : it's impossible to have both
+ * CONNECTED and WAIT_CONN so we're certain to trigger a change.
+ */
+ flags = CO_FL_WAIT_L4_CONN | CO_FL_CONNECTED;
+ conn->data->recv(conn);
+ }
+
+ if (conn->xprt && fd_send_ready(fd) &&
+ ((conn->flags & (CO_FL_DATA_WR_ENA|CO_FL_WAIT_DATA|CO_FL_ERROR|CO_FL_HANDSHAKE)) == CO_FL_DATA_WR_ENA)) {
+ /* force detection of a flag change : it's impossible to have both
+ * CONNECTED and WAIT_CONN so we're certain to trigger a change.
+ */
+ flags = CO_FL_WAIT_L4_CONN | CO_FL_CONNECTED;
+ conn->data->send(conn);
+ }
+
+ /* It may happen during the data phase that a handshake is
+ * enabled again (eg: SSL)
+ */
+ if (unlikely(conn->flags & (CO_FL_HANDSHAKE | CO_FL_ERROR)))
+ goto process_handshake;
+
+ if (unlikely(conn->flags & CO_FL_WAIT_L4_CONN)) {
+ /* still waiting for a connection to establish and nothing was
+ * attempted yet to probe the connection. Then let's retry the
+ * connect().
+ */
+ if (!tcp_connect_probe(conn))
+ goto leave;
+ }
+
+ leave:
+ /* The wake callback may be used to process a critical error and abort the
+ * connection. If so, we don't want to go further as the connection will
+ * have been released and the FD destroyed.
+ */
+ if ((conn->flags & CO_FL_WAKE_DATA) &&
+ ((conn->flags ^ flags) & CO_FL_CONN_STATE) &&
+ conn->data->wake(conn) < 0)
+ return 0;
+
+ /* Last check, verify if the connection just established */
+ if (unlikely(!(conn->flags & (CO_FL_WAIT_L4_CONN | CO_FL_WAIT_L6_CONN | CO_FL_CONNECTED))))
+ conn->flags |= CO_FL_CONNECTED;
+
+ /* remove the events before leaving */
+ fdtab[fd].ev &= FD_POLL_STICKY;
+
+ /* commit polling changes */
+ conn_cond_update_polling(conn);
+ return 0;
+}
+
+/* Update polling on connection <c>'s file descriptor depending on its current
+ * state as reported in the connection's CO_FL_CURR_* flags, reports of EAGAIN
+ * in CO_FL_WAIT_*, and the data layer expectations indicated by CO_FL_DATA_*.
+ * The connection flags are updated with the new flags at the end of the
+ * operation. Polling is totally disabled if an error was reported.
+ */
+void conn_update_data_polling(struct connection *c)
+{
+ unsigned int f = c->flags;
+
+ if (!conn_ctrl_ready(c))
+ return;
+
+ /* update read status if needed */
+ if (unlikely((f & (CO_FL_CURR_RD_ENA|CO_FL_DATA_RD_ENA)) == CO_FL_DATA_RD_ENA)) {
+ fd_want_recv(c->t.sock.fd);
+ f |= CO_FL_CURR_RD_ENA;
+ }
+ else if (unlikely((f & (CO_FL_CURR_RD_ENA|CO_FL_DATA_RD_ENA)) == CO_FL_CURR_RD_ENA)) {
+ fd_stop_recv(c->t.sock.fd);
+ f &= ~CO_FL_CURR_RD_ENA;
+ }
+
+ /* update write status if needed */
+ if (unlikely((f & (CO_FL_CURR_WR_ENA|CO_FL_DATA_WR_ENA)) == CO_FL_DATA_WR_ENA)) {
+ fd_want_send(c->t.sock.fd);
+ f |= CO_FL_CURR_WR_ENA;
+ }
+ else if (unlikely((f & (CO_FL_CURR_WR_ENA|CO_FL_DATA_WR_ENA)) == CO_FL_CURR_WR_ENA)) {
+ fd_stop_send(c->t.sock.fd);
+ f &= ~CO_FL_CURR_WR_ENA;
+ }
+ c->flags = f;
+}
+
+/* Update polling on connection <c>'s file descriptor depending on its current
+ * state as reported in the connection's CO_FL_CURR_* flags, reports of EAGAIN
+ * in CO_FL_WAIT_*, and the sock layer expectations indicated by CO_FL_SOCK_*.
+ * The connection flags are updated with the new flags at the end of the
+ * operation. Polling is totally disabled if an error was reported.
+ */
+void conn_update_sock_polling(struct connection *c)
+{
+ unsigned int f = c->flags;
+
+ if (!conn_ctrl_ready(c))
+ return;
+
+ /* update read status if needed */
+ if (unlikely((f & (CO_FL_CURR_RD_ENA|CO_FL_SOCK_RD_ENA)) == CO_FL_SOCK_RD_ENA)) {
+ fd_want_recv(c->t.sock.fd);
+ f |= CO_FL_CURR_RD_ENA;
+ }
+ else if (unlikely((f & (CO_FL_CURR_RD_ENA|CO_FL_SOCK_RD_ENA)) == CO_FL_CURR_RD_ENA)) {
+ fd_stop_recv(c->t.sock.fd);
+ f &= ~CO_FL_CURR_RD_ENA;
+ }
+
+ /* update write status if needed */
+ if (unlikely((f & (CO_FL_CURR_WR_ENA|CO_FL_SOCK_WR_ENA)) == CO_FL_SOCK_WR_ENA)) {
+ fd_want_send(c->t.sock.fd);
+ f |= CO_FL_CURR_WR_ENA;
+ }
+ else if (unlikely((f & (CO_FL_CURR_WR_ENA|CO_FL_SOCK_WR_ENA)) == CO_FL_CURR_WR_ENA)) {
+ fd_stop_send(c->t.sock.fd);
+ f &= ~CO_FL_CURR_WR_ENA;
+ }
+ c->flags = f;
+}
+
+/* Send a message over an established connection. It makes use of send() and
+ * returns the same return code and errno. If the socket layer is not ready yet
+ * then -1 is returned and ENOTSOCK is set into errno. If the fd is not marked
+ * as ready, or if EAGAIN or ENOTCONN is returned, then we return 0. If called
+ * with a zero-length message, -1 is returned with errno set to EMSGSIZE. The
+ * purpose is to simplify some rare attempts to write directly on the socket
+ * from above the connection layer (typically send_proxy). In case of EAGAIN,
+ * the fd is marked as "cant_send".
+ * It automatically retries on EINTR. Other errors cause the connection to be
+ * marked as in error state. It takes similar arguments as send() except the
+ * first one which is the connection instead of the file descriptor. Note,
+ * MSG_DONTWAIT and MSG_NOSIGNAL are forced on the flags.
+ */
+int conn_sock_send(struct connection *conn, const void *buf, int len, int flags)
+{
+ int ret;
+
+ ret = -1;
+ errno = ENOTSOCK;
+
+ if (conn->flags & CO_FL_SOCK_WR_SH)
+ goto fail;
+
+ if (!conn_ctrl_ready(conn))
+ goto fail;
+
+ errno = EMSGSIZE;
+ if (!len)
+ goto fail;
+
+ if (!fd_send_ready(conn->t.sock.fd))
+ goto wait;
+
+ do {
+ ret = send(conn->t.sock.fd, buf, len, flags | MSG_DONTWAIT | MSG_NOSIGNAL);
+ } while (ret < 0 && errno == EINTR);
+
+
+ if (ret > 0)
+ return ret;
+
+ if (ret == 0 || errno == EAGAIN || errno == ENOTCONN) {
+ wait:
+ fd_cant_send(conn->t.sock.fd);
+ return 0;
+ }
+ fail:
+ conn->flags |= CO_FL_SOCK_RD_SH | CO_FL_SOCK_WR_SH | CO_FL_ERROR;
+ return ret;
+}
+
+/* Drains possibly pending incoming data on the file descriptor attached to the
+ * connection and update the connection's flags accordingly. This is used to
+ * know whether we need to disable lingering on close. Returns non-zero if it
+ * is safe to close without disabling lingering, otherwise zero. The SOCK_RD_SH
+ * flag may also be updated if the incoming shutdown was reported by the drain()
+ * function.
+ */
+int conn_sock_drain(struct connection *conn)
+{
+ if (!conn_ctrl_ready(conn))
+ return 1;
+
+ if (conn->flags & (CO_FL_ERROR | CO_FL_SOCK_RD_SH))
+ return 1;
+
+ if (fdtab[conn->t.sock.fd].ev & (FD_POLL_ERR|FD_POLL_HUP)) {
+ fdtab[conn->t.sock.fd].linger_risk = 0;
+ }
+ else {
+ if (!fd_recv_ready(conn->t.sock.fd))
+ return 0;
+
+ /* disable draining if we were called and have no drain function */
+ if (!conn->ctrl->drain) {
+ __conn_data_stop_recv(conn);
+ return 0;
+ }
+
+ if (conn->ctrl->drain(conn->t.sock.fd) <= 0)
+ return 0;
+ }
+
+ conn->flags |= CO_FL_SOCK_RD_SH;
+ return 1;
+}
+
+/*
+ * Get data length from tlv
+ */
+static int get_tlv_length(const struct tlv *src)
+{
+ return (src->length_hi << 8) | src->length_lo;
+}
+
+/* This handshake handler waits for a PROXY protocol header at the beginning of
+ * the raw data stream. The header looks like this :
+ *
+ * "PROXY" <SP> PROTO <SP> SRC3 <SP> DST3 <SP> SRC4 <SP> DST4 "\r\n"
+ *
+ * There must be exactly one space between each field. Fields are :
+ * - PROTO : layer 4 protocol, which must be "TCP4" or "TCP6".
+ * - SRC3 : layer 3 (eg: IP) source address in standard text form
+ * - DST3 : layer 3 (eg: IP) destination address in standard text form
+ * - SRC4 : layer 4 (eg: TCP port) source address in standard text form
+ * - DST4 : layer 4 (eg: TCP port) destination address in standard text form
+ *
+ * This line MUST be at the beginning of the buffer and MUST NOT wrap.
+ *
+ * The header line is small and in all cases smaller than the smallest normal
+ * TCP MSS. So it MUST always be delivered as one segment, which ensures we
+ * can safely use MSG_PEEK and avoid buffering.
+ *
+ * Once the data is fetched, the values are set in the connection's address
+ * fields, and data are removed from the socket's buffer. The function returns
+ * zero if it needs to wait for more data or if it fails, or 1 if it completed
+ * and removed itself.
+ */
+int conn_recv_proxy(struct connection *conn, int flag)
+{
+ char *line, *end;
+ struct proxy_hdr_v2 *hdr_v2;
+ const char v2sig[] = PP2_SIGNATURE;
+ int tlv_length = 0;
+ int tlv_offset = 0;
+
+ /* we might have been called just after an asynchronous shutr */
+ if (conn->flags & CO_FL_SOCK_RD_SH)
+ goto fail;
+
+ if (!conn_ctrl_ready(conn))
+ goto fail;
+
+ if (!fd_recv_ready(conn->t.sock.fd))
+ return 0;
+
+ do {
+ trash.len = recv(conn->t.sock.fd, trash.str, trash.size, MSG_PEEK);
+ if (trash.len < 0) {
+ if (errno == EINTR)
+ continue;
+ if (errno == EAGAIN) {
+ fd_cant_recv(conn->t.sock.fd);
+ return 0;
+ }
+ goto recv_abort;
+ }
+ } while (0);
+
+ if (!trash.len) {
+ /* client shutdown */
+ conn->err_code = CO_ER_PRX_EMPTY;
+ goto fail;
+ }
+
+ if (trash.len < 6)
+ goto missing;
+
+ line = trash.str;
+ end = trash.str + trash.len;
+
+ /* Decode a possible proxy request, fail early if it does not match */
+ if (strncmp(line, "PROXY ", 6) != 0)
+ goto not_v1;
+
+ line += 6;
+ if (trash.len < 9) /* shortest possible line */
+ goto missing;
+
+ if (memcmp(line, "TCP4 ", 5) == 0) {
+ u32 src3, dst3, sport, dport;
+
+ line += 5;
+
+ src3 = inetaddr_host_lim_ret(line, end, &line);
+ if (line == end)
+ goto missing;
+ if (*line++ != ' ')
+ goto bad_header;
+
+ dst3 = inetaddr_host_lim_ret(line, end, &line);
+ if (line == end)
+ goto missing;
+ if (*line++ != ' ')
+ goto bad_header;
+
+ sport = read_uint((const char **)&line, end);
+ if (line == end)
+ goto missing;
+ if (*line++ != ' ')
+ goto bad_header;
+
+ dport = read_uint((const char **)&line, end);
+ if (line > end - 2)
+ goto missing;
+ if (*line++ != '\r')
+ goto bad_header;
+ if (*line++ != '\n')
+ goto bad_header;
+
+ /* update the session's addresses and mark them set */
+ ((struct sockaddr_in *)&conn->addr.from)->sin_family = AF_INET;
+ ((struct sockaddr_in *)&conn->addr.from)->sin_addr.s_addr = htonl(src3);
+ ((struct sockaddr_in *)&conn->addr.from)->sin_port = htons(sport);
+
+ ((struct sockaddr_in *)&conn->addr.to)->sin_family = AF_INET;
+ ((struct sockaddr_in *)&conn->addr.to)->sin_addr.s_addr = htonl(dst3);
+ ((struct sockaddr_in *)&conn->addr.to)->sin_port = htons(dport);
+ conn->flags |= CO_FL_ADDR_FROM_SET | CO_FL_ADDR_TO_SET;
+ }
+ else if (memcmp(line, "TCP6 ", 5) == 0) {
+ u32 sport, dport;
+ char *src_s;
+ char *dst_s, *sport_s, *dport_s;
+ struct in6_addr src3, dst3;
+
+ line += 5;
+
+ src_s = line;
+ dst_s = sport_s = dport_s = NULL;
+ while (1) {
+ if (line > end - 2) {
+ goto missing;
+ }
+ else if (*line == '\r') {
+ *line = 0;
+ line++;
+ if (*line++ != '\n')
+ goto bad_header;
+ break;
+ }
+
+ if (*line == ' ') {
+ *line = 0;
+ if (!dst_s)
+ dst_s = line + 1;
+ else if (!sport_s)
+ sport_s = line + 1;
+ else if (!dport_s)
+ dport_s = line + 1;
+ }
+ line++;
+ }
+
+ if (!dst_s || !sport_s || !dport_s)
+ goto bad_header;
+
+ sport = read_uint((const char **)&sport_s,dport_s - 1);
+ if (*sport_s != 0)
+ goto bad_header;
+
+ dport = read_uint((const char **)&dport_s,line - 2);
+ if (*dport_s != 0)
+ goto bad_header;
+
+ if (inet_pton(AF_INET6, src_s, (void *)&src3) != 1)
+ goto bad_header;
+
+ if (inet_pton(AF_INET6, dst_s, (void *)&dst3) != 1)
+ goto bad_header;
+
+ /* update the session's addresses and mark them set */
+ ((struct sockaddr_in6 *)&conn->addr.from)->sin6_family = AF_INET6;
+ memcpy(&((struct sockaddr_in6 *)&conn->addr.from)->sin6_addr, &src3, sizeof(struct in6_addr));
+ ((struct sockaddr_in6 *)&conn->addr.from)->sin6_port = htons(sport);
+
+ ((struct sockaddr_in6 *)&conn->addr.to)->sin6_family = AF_INET6;
+ memcpy(&((struct sockaddr_in6 *)&conn->addr.to)->sin6_addr, &dst3, sizeof(struct in6_addr));
+ ((struct sockaddr_in6 *)&conn->addr.to)->sin6_port = htons(dport);
+ conn->flags |= CO_FL_ADDR_FROM_SET | CO_FL_ADDR_TO_SET;
+ }
+ else if (memcmp(line, "UNKNOWN\r\n", 9) == 0) {
+ /* This can be a UNIX socket connection forwarded by an upstream HAProxy */
+ line += 9;
+ }
+ else {
+ /* The protocol does not match something known (TCP4/TCP6/UNKNOWN) */
+ conn->err_code = CO_ER_PRX_BAD_PROTO;
+ goto fail;
+ }
+
+ trash.len = line - trash.str;
+ goto eat_header;
+
+ not_v1:
+ /* try PPv2 */
+ if (trash.len < PP2_HEADER_LEN)
+ goto missing;
+
+ hdr_v2 = (struct proxy_hdr_v2 *)trash.str;
+
+ if (memcmp(hdr_v2->sig, v2sig, PP2_SIGNATURE_LEN) != 0 ||
+ (hdr_v2->ver_cmd & PP2_VERSION_MASK) != PP2_VERSION) {
+ conn->err_code = CO_ER_PRX_NOT_HDR;
+ goto fail;
+ }
+
+ if (trash.len < PP2_HEADER_LEN + ntohs(hdr_v2->len))
+ goto missing;
+
+ switch (hdr_v2->ver_cmd & PP2_CMD_MASK) {
+ case 0x01: /* PROXY command */
+ switch (hdr_v2->fam) {
+ case 0x11: /* TCPv4 */
+ if (ntohs(hdr_v2->len) < PP2_ADDR_LEN_INET)
+ goto bad_header;
+
+ ((struct sockaddr_in *)&conn->addr.from)->sin_family = AF_INET;
+ ((struct sockaddr_in *)&conn->addr.from)->sin_addr.s_addr = hdr_v2->addr.ip4.src_addr;
+ ((struct sockaddr_in *)&conn->addr.from)->sin_port = hdr_v2->addr.ip4.src_port;
+ ((struct sockaddr_in *)&conn->addr.to)->sin_family = AF_INET;
+ ((struct sockaddr_in *)&conn->addr.to)->sin_addr.s_addr = hdr_v2->addr.ip4.dst_addr;
+ ((struct sockaddr_in *)&conn->addr.to)->sin_port = hdr_v2->addr.ip4.dst_port;
+ conn->flags |= CO_FL_ADDR_FROM_SET | CO_FL_ADDR_TO_SET;
+ tlv_offset = PP2_HEADER_LEN + PP2_ADDR_LEN_INET;
+ tlv_length = ntohs(hdr_v2->len) - PP2_ADDR_LEN_INET;
+ break;
+ case 0x21: /* TCPv6 */
+ if (ntohs(hdr_v2->len) < PP2_ADDR_LEN_INET6)
+ goto bad_header;
+
+ ((struct sockaddr_in6 *)&conn->addr.from)->sin6_family = AF_INET6;
+ memcpy(&((struct sockaddr_in6 *)&conn->addr.from)->sin6_addr, hdr_v2->addr.ip6.src_addr, 16);
+ ((struct sockaddr_in6 *)&conn->addr.from)->sin6_port = hdr_v2->addr.ip6.src_port;
+ ((struct sockaddr_in6 *)&conn->addr.to)->sin6_family = AF_INET6;
+ memcpy(&((struct sockaddr_in6 *)&conn->addr.to)->sin6_addr, hdr_v2->addr.ip6.dst_addr, 16);
+ ((struct sockaddr_in6 *)&conn->addr.to)->sin6_port = hdr_v2->addr.ip6.dst_port;
+ conn->flags |= CO_FL_ADDR_FROM_SET | CO_FL_ADDR_TO_SET;
+ tlv_offset = PP2_HEADER_LEN + PP2_ADDR_LEN_INET6;
+ tlv_length = ntohs(hdr_v2->len) - PP2_ADDR_LEN_INET6;
+ break;
+ }
+
+ /* TLV parsing */
+ if (tlv_length > 0) {
+ while (tlv_offset + TLV_HEADER_SIZE <= trash.len) {
+ const struct tlv *tlv_packet = (struct tlv *) &trash.str[tlv_offset];
+ const int tlv_len = get_tlv_length(tlv_packet);
+ tlv_offset += tlv_len + TLV_HEADER_SIZE;
+
+ switch (tlv_packet->type) {
+#ifdef CONFIG_HAP_NS
+ case PP2_TYPE_NETNS: {
+ const struct netns_entry *ns;
+ ns = netns_store_lookup((char*)tlv_packet->value, tlv_len);
+ if (ns)
+ conn->proxy_netns = ns;
+ break;
+ }
+#endif
+ default:
+ break;
+ }
+ }
+ }
+
+ /* for an unsupported transport family above, keep the local connection address */
+ break;
+ case 0x00: /* LOCAL command */
+ /* keep local connection address for LOCAL */
+ break;
+ default:
+ goto bad_header; /* not a supported command */
+ }
+
+ trash.len = PP2_HEADER_LEN + ntohs(hdr_v2->len);
+ goto eat_header;
+
+ eat_header:
+ /* remove the PROXY line from the request. For this we re-read the
+ * exact line at once. If we don't get the exact same result, we
+ * fail.
+ */
+ {
+ int len2;
+ do {
+ len2 = recv(conn->t.sock.fd, trash.str, trash.len, 0);
+ } while (len2 == -1 && errno == EINTR);
+ if (len2 != trash.len)
+ goto recv_abort;
+ }
+
+ conn->flags &= ~flag;
+ return 1;
+
+ missing:
+ /* Missing data. Since we're using MSG_PEEK, we can only poll again if
+ * we have not read anything. Otherwise we need to fail because we won't
+ * be able to poll anymore.
+ */
+ conn->err_code = CO_ER_PRX_TRUNCATED;
+ goto fail;
+
+ bad_header:
+ /* This is not a valid proxy protocol header */
+ conn->err_code = CO_ER_PRX_BAD_HDR;
+ goto fail;
+
+ recv_abort:
+ conn->err_code = CO_ER_PRX_ABORT;
+ conn->flags |= CO_FL_SOCK_RD_SH | CO_FL_SOCK_WR_SH;
+ goto fail;
+
+ fail:
+ __conn_sock_stop_both(conn);
+ conn->flags |= CO_FL_ERROR;
+ return 0;
+}
+
+int make_proxy_line(char *buf, int buf_len, struct server *srv, struct connection *remote)
+{
+ int ret = 0;
+
+ if (srv && (srv->pp_opts & SRV_PP_V2)) {
+ ret = make_proxy_line_v2(buf, buf_len, srv, remote);
+ }
+ else {
+ if (remote)
+ ret = make_proxy_line_v1(buf, buf_len, &remote->addr.from, &remote->addr.to);
+ else
+ ret = make_proxy_line_v1(buf, buf_len, NULL, NULL);
+ }
+
+ return ret;
+}
+
+/* Makes a PROXY protocol line from the two addresses. The output is sent to
+ * buffer <buf> for a maximum size of <buf_len> (including the trailing zero).
+ * It returns the number of bytes composing this line (including the trailing
+ * LF), or zero in case of failure (eg: not enough space). It supports TCP4,
+ * TCP6 and "UNKNOWN" formats. If any of <src> or <dst> is null, UNKNOWN is
+ * emitted as well.
+ */
+int make_proxy_line_v1(char *buf, int buf_len, struct sockaddr_storage *src, struct sockaddr_storage *dst)
+{
+ int ret = 0;
+
+ if (src && dst && src->ss_family == dst->ss_family && src->ss_family == AF_INET) {
+ ret = snprintf(buf + ret, buf_len - ret, "PROXY TCP4 ");
+ if (ret >= buf_len)
+ return 0;
+
+ /* IPv4 src */
+ if (!inet_ntop(src->ss_family, &((struct sockaddr_in *)src)->sin_addr, buf + ret, buf_len - ret))
+ return 0;
+
+ ret += strlen(buf + ret);
+ if (ret >= buf_len)
+ return 0;
+
+ buf[ret++] = ' ';
+
+ /* IPv4 dst */
+ if (!inet_ntop(dst->ss_family, &((struct sockaddr_in *)dst)->sin_addr, buf + ret, buf_len - ret))
+ return 0;
+
+ ret += strlen(buf + ret);
+ if (ret >= buf_len)
+ return 0;
+
+ /* source and destination ports */
+ ret += snprintf(buf + ret, buf_len - ret, " %u %u\r\n",
+ ntohs(((struct sockaddr_in *)src)->sin_port),
+ ntohs(((struct sockaddr_in *)dst)->sin_port));
+ if (ret >= buf_len)
+ return 0;
+ }
+ else if (src && dst && src->ss_family == dst->ss_family && src->ss_family == AF_INET6) {
+ ret = snprintf(buf + ret, buf_len - ret, "PROXY TCP6 ");
+ if (ret >= buf_len)
+ return 0;
+
+ /* IPv6 src */
+ if (!inet_ntop(src->ss_family, &((struct sockaddr_in6 *)src)->sin6_addr, buf + ret, buf_len - ret))
+ return 0;
+
+ ret += strlen(buf + ret);
+ if (ret >= buf_len)
+ return 0;
+
+ buf[ret++] = ' ';
+
+ /* IPv6 dst */
+ if (!inet_ntop(dst->ss_family, &((struct sockaddr_in6 *)dst)->sin6_addr, buf + ret, buf_len - ret))
+ return 0;
+
+ ret += strlen(buf + ret);
+ if (ret >= buf_len)
+ return 0;
+
+ /* source and destination ports */
+ ret += snprintf(buf + ret, buf_len - ret, " %u %u\r\n",
+ ntohs(((struct sockaddr_in6 *)src)->sin6_port),
+ ntohs(((struct sockaddr_in6 *)dst)->sin6_port));
+ if (ret >= buf_len)
+ return 0;
+ }
+ else {
+ /* unknown family combination */
+ ret = snprintf(buf, buf_len, "PROXY UNKNOWN\r\n");
+ if (ret >= buf_len)
+ return 0;
+ }
+ return ret;
+}
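For reference, the v1 line built above is a single text line such as `PROXY TCP4 192.0.2.1 198.51.100.2 56324 80\r\n`. A minimal standalone sketch of the TCP4 branch (the `write_v1_line` helper is hypothetical, not part of HAProxy; it relies on snprintf's return value for the same truncation check the code above performs piecewise):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: format a PROXY protocol v1 line for IPv4 the way
 * make_proxy_line_v1() does, returning the line length (including the
 * trailing CRLF) or 0 if the buffer is too small.
 */
static int write_v1_line(char *buf, int buf_len, const char *src, const char *dst,
                         unsigned sport, unsigned dport)
{
	int ret = snprintf(buf, buf_len, "PROXY TCP4 %s %s %u %u\r\n",
	                   src, dst, sport, dport);
	return (ret < 0 || ret >= buf_len) ? 0 : ret;
}
```

A 108-byte buffer is always sufficient here, since the v1 specification bounds the line at 107 bytes plus the terminating zero.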
+
+#if defined(USE_OPENSSL) || defined(CONFIG_HAP_NS)
+static int make_tlv(char *dest, int dest_len, char type, uint16_t length, const char *value)
+{
+ struct tlv *tlv;
+
+ if (!dest || (length + sizeof(*tlv) > dest_len))
+ return 0;
+
+ tlv = (struct tlv *)dest;
+
+ tlv->type = type;
+ tlv->length_hi = length >> 8;
+ tlv->length_lo = length & 0x00ff;
+ memcpy(tlv->value, value, length);
+ return length + sizeof(*tlv);
+}
+#endif
+
+int make_proxy_line_v2(char *buf, int buf_len, struct server *srv, struct connection *remote)
+{
+ const char pp2_signature[] = PP2_SIGNATURE;
+ int ret = 0;
+ struct proxy_hdr_v2 *hdr = (struct proxy_hdr_v2 *)buf;
+ struct sockaddr_storage null_addr = {0};
+ struct sockaddr_storage *src = &null_addr;
+ struct sockaddr_storage *dst = &null_addr;
+
+#ifdef USE_OPENSSL
+ char *value = NULL;
+ struct tlv_ssl *tlv;
+ int ssl_tlv_len = 0;
+ struct chunk *cn_trash;
+#endif
+
+ if (buf_len < PP2_HEADER_LEN)
+ return 0;
+ memcpy(hdr->sig, pp2_signature, PP2_SIGNATURE_LEN);
+
+ if (remote) {
+ src = &remote->addr.from;
+ dst = &remote->addr.to;
+ }
+
+ if (src && dst && src->ss_family == dst->ss_family && src->ss_family == AF_INET) {
+ if (buf_len < PP2_HDR_LEN_INET)
+ return 0;
+ hdr->ver_cmd = PP2_VERSION | PP2_CMD_PROXY;
+ hdr->fam = PP2_FAM_INET | PP2_TRANS_STREAM;
+ hdr->addr.ip4.src_addr = ((struct sockaddr_in *)src)->sin_addr.s_addr;
+ hdr->addr.ip4.dst_addr = ((struct sockaddr_in *)dst)->sin_addr.s_addr;
+ hdr->addr.ip4.src_port = ((struct sockaddr_in *)src)->sin_port;
+ hdr->addr.ip4.dst_port = ((struct sockaddr_in *)dst)->sin_port;
+ ret = PP2_HDR_LEN_INET;
+ }
+ else if (src && dst && src->ss_family == dst->ss_family && src->ss_family == AF_INET6) {
+ if (buf_len < PP2_HDR_LEN_INET6)
+ return 0;
+ hdr->ver_cmd = PP2_VERSION | PP2_CMD_PROXY;
+ hdr->fam = PP2_FAM_INET6 | PP2_TRANS_STREAM;
+ memcpy(hdr->addr.ip6.src_addr, &((struct sockaddr_in6 *)src)->sin6_addr, 16);
+ memcpy(hdr->addr.ip6.dst_addr, &((struct sockaddr_in6 *)dst)->sin6_addr, 16);
+ hdr->addr.ip6.src_port = ((struct sockaddr_in6 *)src)->sin6_port;
+ hdr->addr.ip6.dst_port = ((struct sockaddr_in6 *)dst)->sin6_port;
+ ret = PP2_HDR_LEN_INET6;
+ }
+ else {
+ if (buf_len < PP2_HDR_LEN_UNSPEC)
+ return 0;
+ hdr->ver_cmd = PP2_VERSION | PP2_CMD_LOCAL;
+ hdr->fam = PP2_FAM_UNSPEC | PP2_TRANS_UNSPEC;
+ ret = PP2_HDR_LEN_UNSPEC;
+ }
+
+#ifdef USE_OPENSSL
+ if (srv && (srv->pp_opts & SRV_PP_V2_SSL)) {
+ if ((buf_len - ret) < sizeof(struct tlv_ssl))
+ return 0;
+ tlv = (struct tlv_ssl *)&buf[ret];
+ memset(tlv, 0, sizeof(struct tlv_ssl));
+ ssl_tlv_len += sizeof(struct tlv_ssl);
+ tlv->tlv.type = PP2_TYPE_SSL;
+ if (ssl_sock_is_ssl(remote)) {
+ tlv->client |= PP2_CLIENT_SSL;
+ value = ssl_sock_get_version(remote);
+ if (value) {
+ ssl_tlv_len += make_tlv(&buf[ret+ssl_tlv_len], (buf_len-ret-ssl_tlv_len), PP2_TYPE_SSL_VERSION, strlen(value), value);
+ }
+ if (ssl_sock_get_cert_used_sess(remote)) {
+ tlv->client |= PP2_CLIENT_CERT_SESS;
+ tlv->verify = htonl(ssl_sock_get_verify_result(remote));
+ if (ssl_sock_get_cert_used_conn(remote))
+ tlv->client |= PP2_CLIENT_CERT_CONN;
+ }
+ if (srv->pp_opts & SRV_PP_V2_SSL_CN) {
+ cn_trash = get_trash_chunk();
+ if (ssl_sock_get_remote_common_name(remote, cn_trash) > 0) {
+ ssl_tlv_len += make_tlv(&buf[ret+ssl_tlv_len], (buf_len - ret - ssl_tlv_len), PP2_TYPE_SSL_CN, cn_trash->len, cn_trash->str);
+ }
+ }
+ }
+ tlv->tlv.length_hi = (uint16_t)(ssl_tlv_len - sizeof(struct tlv)) >> 8;
+ tlv->tlv.length_lo = (uint16_t)(ssl_tlv_len - sizeof(struct tlv)) & 0x00ff;
+ ret += ssl_tlv_len;
+ }
+#endif
+
+#ifdef CONFIG_HAP_NS
+ if (remote && (remote->proxy_netns)) {
+ if ((buf_len - ret) < sizeof(struct tlv))
+ return 0;
+ ret += make_tlv(&buf[ret], buf_len - ret, PP2_TYPE_NETNS, remote->proxy_netns->name_len, remote->proxy_netns->node.key);
+ }
+#endif
+
+ hdr->len = htons((uint16_t)(ret - PP2_HEADER_LEN));
+
+ return ret;
+}
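The v2 header's `len` field, written with htons() above, is a 16-bit network-order count of the bytes that follow the 16-byte fixed header. A small sketch of that encoding in isolation (the `pp2_len_*` helper names are made up for illustration; they behave like htons()/ntohs() applied to the wire bytes):

```c
#include <assert.h>
#include <stdint.h>

/* Write a 16-bit value in big-endian (network) byte order, as done
 * for the PROXY v2 header length field. */
static void pp2_len_put(unsigned char *p, uint16_t len)
{
	p[0] = len >> 8;
	p[1] = len & 0xff;
}

/* Read it back, mirroring what ntohs(hdr_v2->len) does in the parser. */
static uint16_t pp2_len_get(const unsigned char *p)
{
	return (uint16_t)((p[0] << 8) | p[1]);
}
```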
--- /dev/null
+#include <stdio.h>
+
+#include <common/cfgparse.h>
+#include <proto/arg.h>
+#include <proto/log.h>
+#include <proto/proto_http.h>
+#include <proto/sample.h>
+#include <import/da.h>
+
+static int da_json_file(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ if (*(args[1]) == 0) {
+ memprintf(err, "deviceatlas json file : expects a json path.\n");
+ return -1;
+ }
+ global.deviceatlas.jsonpath = strdup(args[1]);
+ return 0;
+}
+
+static int da_log_level(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ int loglevel;
+ if (*(args[1]) == 0) {
+ memprintf(err, "deviceatlas log level : expects an integer argument.\n");
+ return -1;
+ }
+
+ loglevel = atol(args[1]);
+ if (loglevel < 0 || loglevel > 3) {
+ memprintf(err, "deviceatlas log level : expects a log level between 0 and 3, %s given.\n", args[1]);
+ return -1;
+ }
+
+ global.deviceatlas.loglevel = (da_severity_t)loglevel;
+
+ return 0;
+}
+
+static int da_property_separator(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ if (*(args[1]) == 0) {
+ memprintf(err, "deviceatlas property separator : expects a character argument.\n");
+ return -1;
+ }
+ global.deviceatlas.separator = *args[1];
+ return 0;
+}
+
+static int da_properties_cookie(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ if (*(args[1]) == 0) {
+ memprintf(err, "deviceatlas cookie name : expects a string argument.\n");
+ return -1;
+ } else {
+ global.deviceatlas.cookiename = strdup(args[1]);
+ }
+ global.deviceatlas.cookienamelen = strlen(global.deviceatlas.cookiename);
+ return 0;
+}
+
+static size_t da_haproxy_read(void *ctx, size_t len, char *buf)
+{
+ return fread(buf, 1, len, ctx);
+}
+
+static da_status_t da_haproxy_seek(void *ctx, off_t off)
+{
+ return fseek(ctx, off, SEEK_SET) != -1 ? DA_OK : DA_SYS;
+}
+
+static void da_haproxy_log(da_severity_t severity, da_status_t status,
+ const char *fmt, va_list args)
+{
+ if (global.deviceatlas.loglevel && severity <= global.deviceatlas.loglevel) {
+ char logbuf[256];
+ vsnprintf(logbuf, sizeof(logbuf), fmt, args);
+ Warning("deviceatlas : %s.\n", logbuf);
+ }
+}
+
+#define DA_COOKIENAME_DEFAULT "DAPROPS"
+
+int init_deviceatlas(void)
+{
+ da_status_t status = DA_SYS;
+ if (global.deviceatlas.jsonpath != 0) {
+ FILE *jsonp;
+ da_property_decl_t extraprops[] = {{0, 0}};
+ size_t atlasimglen;
+
+ jsonp = fopen(global.deviceatlas.jsonpath, "r");
+ if (jsonp == 0) {
+ Alert("deviceatlas : '%s' json file has invalid path or is not readable.\n",
+ global.deviceatlas.jsonpath);
+ goto out;
+ }
+
+ da_init();
+ da_seterrorfunc(da_haproxy_log);
+ status = da_atlas_compile(jsonp, da_haproxy_read, da_haproxy_seek,
+ &global.deviceatlas.atlasimgptr, &atlasimglen);
+ fclose(jsonp);
+ if (status != DA_OK) {
+ Alert("deviceatlas : '%s' json file is invalid.\n",
+ global.deviceatlas.jsonpath);
+ goto out;
+ }
+
+ status = da_atlas_open(&global.deviceatlas.atlas, extraprops,
+ global.deviceatlas.atlasimgptr, atlasimglen);
+
+ if (status != DA_OK) {
+ Alert("deviceatlas : data could not be compiled.\n");
+ goto out;
+ }
+
+ if (global.deviceatlas.cookiename == 0) {
+ global.deviceatlas.cookiename = strdup(DA_COOKIENAME_DEFAULT);
+ global.deviceatlas.cookienamelen = strlen(global.deviceatlas.cookiename);
+ }
+
+ global.deviceatlas.useragentid = da_atlas_header_evidence_id(&global.deviceatlas.atlas,
+ "user-agent");
+ global.deviceatlas.daset = 1;
+
+ fprintf(stdout, "Deviceatlas module loaded.\n");
+ }
+
+out:
+ return status == DA_OK;
+}
+
+void deinit_deviceatlas(void)
+{
+ if (global.deviceatlas.jsonpath != 0) {
+ free(global.deviceatlas.jsonpath);
+ }
+
+ if (global.deviceatlas.daset == 1) {
+ free(global.deviceatlas.cookiename);
+ da_atlas_close(&global.deviceatlas.atlas);
+ free(global.deviceatlas.atlasimgptr);
+ }
+
+ da_fini();
+}
+
+static int da_haproxy(const struct arg *args, struct sample *smp, da_deviceinfo_t *devinfo)
+{
+ struct chunk *tmp;
+ da_propid_t prop, *pprop;
+ da_status_t status;
+ da_type_t proptype;
+ const char *propname;
+ int i;
+
+ tmp = get_trash_chunk();
+ chunk_reset(tmp);
+
+ propname = (const char *)args[0].data.str.str;
+ i = 0;
+
+ for (; propname != 0; i ++, propname = (const char *)args[i].data.str.str) {
+ status = da_atlas_getpropid(&global.deviceatlas.atlas,
+ propname, &prop);
+ if (status != DA_OK) {
+ chunk_appendf(tmp, "%c", global.deviceatlas.separator);
+ continue;
+ }
+ pprop = ∝
+ da_atlas_getproptype(&global.deviceatlas.atlas, *pprop, &proptype);
+
+ switch (proptype) {
+ case DA_TYPE_BOOLEAN: {
+ bool val;
+ status = da_getpropboolean(devinfo, *pprop, &val);
+ if (status == DA_OK) {
+ chunk_appendf(tmp, "%d", val);
+ }
+ break;
+ }
+ case DA_TYPE_INTEGER:
+ case DA_TYPE_NUMBER: {
+ long val;
+ status = da_getpropinteger(devinfo, *pprop, &val);
+ if (status == DA_OK) {
+ chunk_appendf(tmp, "%ld", val);
+ }
+ break;
+ }
+ case DA_TYPE_STRING: {
+ const char *val;
+ status = da_getpropstring(devinfo, *pprop, &val);
+ if (status == DA_OK) {
+ chunk_appendf(tmp, "%s", val);
+ }
+ break;
+ }
+ default:
+ break;
+ }
+
+ chunk_appendf(tmp, "%c", global.deviceatlas.separator);
+ }
+
+ da_close(devinfo);
+
+ if (tmp->len) {
+ --tmp->len;
+ tmp->str[tmp->len] = 0;
+ }
+
+ smp->data.u.str.str = tmp->str;
+ smp->data.u.str.len = tmp->len;
+
+ return 1;
+}
+
+static int da_haproxy_conv(const struct arg *args, struct sample *smp, void *private)
+{
+ da_deviceinfo_t devinfo;
+ da_status_t status;
+ const char *useragent;
+ char useragentbuf[1024] = { 0 };
+ int i;
+
+ if (global.deviceatlas.daset == 0 || smp->data.u.str.len == 0) {
+ return 1;
+ }
+
+ i = smp->data.u.str.len >= sizeof(useragentbuf) ? sizeof(useragentbuf) - 1 : smp->data.u.str.len;
+ memcpy(useragentbuf, smp->data.u.str.str, i);
+ useragentbuf[i] = 0;
+
+ useragent = (const char *)useragentbuf;
+
+ status = da_search(&global.deviceatlas.atlas, &devinfo,
+ global.deviceatlas.useragentid, useragent, 0);
+
+ return status != DA_OK ? 0 : da_haproxy(args, smp, &devinfo);
+}
+
+#define DA_MAX_HEADERS 24
+
+static int da_haproxy_fetch(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct hdr_idx *hidx;
+ struct hdr_ctx hctx;
+ const struct http_msg *hmsg;
+ da_evidence_t ev[DA_MAX_HEADERS];
+ da_deviceinfo_t devinfo;
+ da_status_t status;
+ char vbuf[DA_MAX_HEADERS][1024] = {{ 0 }};
+ int i, nbh = 0;
+
+ if (global.deviceatlas.daset == 0) {
+ return 1;
+ }
+
+ CHECK_HTTP_MESSAGE_FIRST();
+ smp->data.type = SMP_T_STR;
+
+ /**
+ * Go through the whole list of headers from the start;
+ * they are filtered by the DeviceAtlas API itself
+ */
+ hctx.idx = 0;
+ hidx = &smp->strm->txn->hdr_idx;
+ hmsg = &smp->strm->txn->req;
+
+ while (http_find_next_header(hmsg->chn->buf->p, hidx, &hctx) == 1 &&
+ nbh < DA_MAX_HEADERS) {
+ char *pval;
+ size_t vlen;
+ da_evidence_id_t evid = -1;
+ char hbuf[24] = { 0 };
+
+ /* skip headers whose name does not fit in hbuf; the header names used by the DeviceAtlas API are all shorter */
+ if (hctx.del >= sizeof(hbuf)) {
+ continue;
+ }
+
+ vlen = hctx.vlen;
+ memcpy(hbuf, hctx.line, hctx.del);
+ hbuf[hctx.del] = 0;
+ pval = (hctx.line + hctx.val);
+
+ if (strcmp(hbuf, "Accept-Language") == 0) {
+ evid = da_atlas_accept_language_evidence_id(&global.deviceatlas.
+ atlas);
+ } else if (strcmp(hbuf, "Cookie") == 0) {
+ char *p, *eval;
+ int pl;
+
+ eval = pval + hctx.vlen;
+ /**
+ * The cookie value, if it exists, is located between the current header's
+ * value position and the next one
+ */
+ if (extract_cookie_value(pval, eval, global.deviceatlas.cookiename,
+ global.deviceatlas.cookienamelen, 1, &p, &pl) == NULL) {
+ continue;
+ }
+
+ vlen = (size_t)pl;
+ pval = p;
+ evid = da_atlas_clientprop_evidence_id(&global.deviceatlas.atlas);
+ } else {
+ evid = da_atlas_header_evidence_id(&global.deviceatlas.atlas,
+ hbuf);
+ }
+
+ if (evid == -1) {
+ continue;
+ }
+
+ i = vlen >= sizeof(vbuf[nbh]) ? sizeof(vbuf[nbh]) - 1 : vlen;
+ memcpy(vbuf[nbh], pval, i);
+ vbuf[nbh][i] = 0;
+ ev[nbh].key = evid;
+ ev[nbh].value = vbuf[nbh];
+ ++ nbh;
+ }
+
+ status = da_searchv(&global.deviceatlas.atlas, &devinfo,
+ ev, nbh);
+
+ return status != DA_OK ? 0 : da_haproxy(args, smp, &devinfo);
+}
+
+static struct cfg_kw_list dacfg_kws = {{ }, {
+ { CFG_GLOBAL, "deviceatlas-json-file", da_json_file },
+ { CFG_GLOBAL, "deviceatlas-log-level", da_log_level },
+ { CFG_GLOBAL, "deviceatlas-property-separator", da_property_separator },
+ { CFG_GLOBAL, "deviceatlas-properties-cookie", da_properties_cookie },
+ { 0, NULL, NULL },
+}};
+
+/* Note: must not be declared <const> as its list will be overwritten */
+static struct sample_fetch_kw_list fetch_kws = {ILH, {
+ { "da-csv-fetch", da_haproxy_fetch, ARG5(1,STR,STR,STR,STR,STR), NULL, SMP_T_STR, SMP_USE_HRQHV },
+ { NULL, NULL, 0, 0, 0 },
+}};
+
+/* Note: must not be declared <const> as its list will be overwritten */
+static struct sample_conv_kw_list conv_kws = {ILH, {
+ { "da-csv-conv", da_haproxy_conv, ARG5(1,STR,STR,STR,STR,STR), NULL, SMP_T_STR, SMP_T_STR },
+ { NULL, NULL, 0, 0, 0 },
+}};
+
+__attribute__((constructor))
+static void __da_init(void)
+{
+ /* register sample fetch and format conversion keywords */
+ sample_register_fetches(&fetch_kws);
+ sample_register_convs(&conv_kws);
+ cfg_register_keywords(&dacfg_kws);
+}
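Both da_haproxy_conv() and da_haproxy_fetch() copy variable-length input into fixed stack buffers with truncation and NUL termination. The bound computation can be isolated as a small helper (hypothetical, shown only to illustrate the pattern):

```c
#include <assert.h>
#include <string.h>

/* Copy at most dstsize-1 bytes of src into dst and always NUL-terminate,
 * returning the number of bytes actually copied. */
static size_t bounded_copy(char *dst, size_t dstsize, const char *src, size_t srclen)
{
	size_t n = srclen >= dstsize ? dstsize - 1 : srclen;

	memcpy(dst, src, n);
	dst[n] = 0;
	return n;
}
```

The key point is that the comparison must be `>=` and the copy length reduced by one only when truncating, so that inputs shorter than the buffer are copied in full.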
--- /dev/null
+/*
+ * Name server resolution
+ *
+ * Copyright 2014 Baptiste Assmann <bedis9@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <errno.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+
+#include <sys/types.h>
+
+#include <common/time.h>
+#include <common/ticks.h>
+
+#include <types/global.h>
+#include <types/dns.h>
+#include <types/proto_udp.h>
+
+#include <proto/checks.h>
+#include <proto/dns.h>
+#include <proto/fd.h>
+#include <proto/log.h>
+#include <proto/server.h>
+#include <proto/task.h>
+#include <proto/proto_udp.h>
+
+struct list dns_resolvers = LIST_HEAD_INIT(dns_resolvers);
+struct dns_resolution *resolution = NULL;
+
+static int64_t dns_query_id_seed; /* random seed */
+
+/* proto_udp callback functions for a DNS resolution */
+struct dgram_data_cb resolve_dgram_cb = {
+ .recv = dns_resolve_recv,
+ .send = dns_resolve_send,
+};
+
+#if DEBUG
+/*
+ * go through the resolutions associated to a resolvers section and print the ID and hostname in
+ * domain name format
+ * should be used for debugging purposes only
+ */
+void dns_print_current_resolutions(struct dns_resolvers *resolvers)
+{
+ list_for_each_entry(resolution, &resolvers->curr_resolution, list) {
+ printf(" resolution %d for %s\n", resolution->query_id, resolution->hostname_dn);
+ }
+}
+#endif
+
+/*
+ * check if there is more than 1 resolution in the resolver's resolution list
+ * return value:
+ * 0: empty list
+ * 1: exactly one entry in the list
+ * 2: more than one entry in the list
+ */
+int dns_check_resolution_queue(struct dns_resolvers *resolvers)
+{
+
+ if (LIST_ISEMPTY(&resolvers->curr_resolution))
+ return 0;
+
+ /* exactly one entry: the first element points back to itself */
+ if (resolvers->curr_resolution.n == resolvers->curr_resolution.p)
+ return 1;
+
+ /* the list is neither empty nor a single entry */
+ return 2;
+}
+
+/*
+ * reset all parameters of a DNS resolution to 0 (or equivalent)
+ * and clean it up from all associated lists (resolution->qid and resolution->list)
+ */
+void dns_reset_resolution(struct dns_resolution *resolution)
+{
+ /* update resolution status */
+ resolution->step = RSLV_STEP_NONE;
+
+ resolution->try = 0;
+ resolution->try_cname = 0;
+ resolution->last_resolution = now_ms;
+ resolution->nb_responses = 0;
+
+ /* clean up query id */
+ eb32_delete(&resolution->qid);
+ resolution->query_id = 0;
+ resolution->qid.key = 0;
+
+ /* default values */
+ if (resolution->resolver_family_priority == AF_INET) {
+ resolution->query_type = DNS_RTYPE_A;
+ } else {
+ resolution->query_type = DNS_RTYPE_AAAA;
+ }
+
+ /* the second resolution in the queue becomes the first one */
+ LIST_DEL(&resolution->list);
+}
+
+/*
+ * function called when a network IO is generated on a name server socket for an incoming packet
+ * It performs the following actions:
+ * - check if the packet requires processing (not outdated resolution)
+ * - ensure the DNS packet received is valid and call requester's callback
+ * - call requester's error callback if invalid response
+ */
+void dns_resolve_recv(struct dgram_conn *dgram)
+{
+ struct dns_nameserver *nameserver;
+ struct dns_resolvers *resolvers;
+ struct dns_resolution *resolution;
+ unsigned char buf[DNS_MAX_UDP_MESSAGE + 1];
+ unsigned char *bufend;
+ int fd, buflen, ret;
+ unsigned short query_id;
+ struct eb32_node *eb;
+
+ fd = dgram->t.sock.fd;
+
+ /* check if ready for reading */
+ if (!fd_recv_ready(fd))
+ return;
+
+ /* no need to go further if we can't retrieve the nameserver */
+ if ((nameserver = (struct dns_nameserver *)dgram->owner) == NULL)
+ return;
+
+ resolvers = nameserver->resolvers;
+
+ /* process all pending input messages */
+ while (1) {
+ /* read message received */
+ memset(buf, '\0', DNS_MAX_UDP_MESSAGE + 1);
+ if ((buflen = recv(fd, (char*)buf , DNS_MAX_UDP_MESSAGE, 0)) < 0) {
+ /* FIXME : for now we consider EAGAIN only */
+ fd_cant_recv(fd);
+ break;
+ }
+
+ /* message too big */
+ if (buflen > DNS_MAX_UDP_MESSAGE) {
+ nameserver->counters.too_big += 1;
+ continue;
+ }
+
+ /* initializing variables */
+ bufend = buf + buflen; /* pointer to mark the end of the buffer */
+
+ /* read the query id from the packet (16 bits) */
+ if (buf + 2 > bufend) {
+ nameserver->counters.invalid += 1;
+ continue;
+ }
+ query_id = dns_response_get_query_id(buf);
+
+ /* search the query_id in the pending resolution tree */
+ eb = eb32_lookup(&resolvers->query_ids, query_id);
+ if (eb == NULL) {
+ /* unknown query id means an outdated response and can be safely ignored */
+ nameserver->counters.outdated += 1;
+ continue;
+ }
+
+ /* known query id means a resolution in progress */
+ resolution = eb32_entry(eb, struct dns_resolution, qid);
+
+ if (!resolution) {
+ nameserver->counters.outdated += 1;
+ continue;
+ }
+
+ /* number of responses received */
+ resolution->nb_responses += 1;
+
+ ret = dns_validate_dns_response(buf, bufend, resolution->hostname_dn, resolution->hostname_dn_len);
+
+ /* treat only errors */
+ switch (ret) {
+ case DNS_RESP_INVALID:
+ case DNS_RESP_WRONG_NAME:
+ nameserver->counters.invalid += 1;
+ resolution->requester_error_cb(resolution, DNS_RESP_INVALID);
+ continue;
+
+ case DNS_RESP_ERROR:
+ nameserver->counters.other += 1;
+ resolution->requester_error_cb(resolution, DNS_RESP_ERROR);
+ continue;
+
+ case DNS_RESP_ANCOUNT_ZERO:
+ nameserver->counters.any_err += 1;
+ resolution->requester_error_cb(resolution, DNS_RESP_ANCOUNT_ZERO);
+ continue;
+
+ case DNS_RESP_NX_DOMAIN:
+ nameserver->counters.nx += 1;
+ resolution->requester_error_cb(resolution, DNS_RESP_NX_DOMAIN);
+ continue;
+
+ case DNS_RESP_REFUSED:
+ nameserver->counters.refused += 1;
+ resolution->requester_error_cb(resolution, DNS_RESP_REFUSED);
+ continue;
+
+ case DNS_RESP_CNAME_ERROR:
+ nameserver->counters.cname_error += 1;
+ resolution->requester_error_cb(resolution, DNS_RESP_CNAME_ERROR);
+ continue;
+
+ case DNS_RESP_TRUNCATED:
+ nameserver->counters.truncated += 1;
+ resolution->requester_error_cb(resolution, DNS_RESP_TRUNCATED);
+ continue;
+
+ case DNS_RESP_NO_EXPECTED_RECORD:
+ nameserver->counters.other += 1;
+ resolution->requester_error_cb(resolution, DNS_RESP_NO_EXPECTED_RECORD);
+ continue;
+ }
+
+ nameserver->counters.valid += 1;
+ resolution->requester_cb(resolution, nameserver, buf, buflen);
+ }
+}
+
+/*
+ * function called when a resolvers network socket is ready to send data
+ * It sends the query of the first pending resolution and updates the resolvers' task timeout
+ */
+void dns_resolve_send(struct dgram_conn *dgram)
+{
+ int fd;
+ struct dns_nameserver *nameserver;
+ struct dns_resolvers *resolvers;
+ struct dns_resolution *resolution;
+
+ fd = dgram->t.sock.fd;
+
+ /* check if ready for sending */
+ if (!fd_send_ready(fd))
+ return;
+
+ /* we don't want/need to be woken up any more for sending */
+ fd_stop_send(fd);
+
+ /* no need to go further if we can't retrieve the nameserver */
+ if ((nameserver = (struct dns_nameserver *)dgram->owner) == NULL)
+ return;
+
+ resolvers = nameserver->resolvers;
+ resolution = LIST_NEXT(&resolvers->curr_resolution, struct dns_resolution *, list);
+
+ dns_send_query(resolution);
+ dns_update_resolvers_timeout(resolvers);
+}
+
+/*
+ * forge and send a DNS query to the resolvers associated to a resolution
+ * returns:
+ * 0 in case of error or if the query could not be built
+ * 1 if no error
+ */
+int dns_send_query(struct dns_resolution *resolution)
+{
+ struct dns_resolvers *resolvers;
+ struct dns_nameserver *nameserver;
+ int ret, send_error, bufsize, fd;
+
+ resolvers = resolution->resolvers;
+
+ ret = send_error = 0;
+ bufsize = dns_build_query(resolution->query_id, resolution->query_type, resolution->hostname_dn,
+ resolution->hostname_dn_len, trash.str, trash.size);
+
+ if (bufsize == -1)
+ return 0;
+
+ list_for_each_entry(nameserver, &resolvers->nameserver_list, list) {
+ fd = nameserver->dgram->t.sock.fd;
+ errno = 0;
+
+ ret = send(fd, trash.str, bufsize, 0);
+
+ if (ret > 0)
+ nameserver->counters.sent += 1;
+
+ if (ret == 0 || errno == EAGAIN) {
+ /* nothing written, let's update the poller that we wanted to send
+ * but we were not able to */
+ fd_want_send(fd);
+ fd_cant_send(fd);
+ }
+ }
+
+ /* update resolution */
+ resolution->nb_responses = 0;
+ resolution->last_sent_packet = now_ms;
+
+ return 1;
+}
+
+/*
+ * update a resolvers' task timeout for next wake up
+ */
+void dns_update_resolvers_timeout(struct dns_resolvers *resolvers)
+{
+ struct dns_resolution *resolution;
+
+ if (LIST_ISEMPTY(&resolvers->curr_resolution)) {
+ /* no more resolution pending, so no wakeup anymore */
+ resolvers->t->expire = TICK_ETERNITY;
+ }
+ else {
+ resolution = LIST_NEXT(&resolvers->curr_resolution, struct dns_resolution *, list);
+ resolvers->t->expire = tick_add(resolution->last_sent_packet, resolvers->timeout.retry);
+ }
+}
+
+/*
+ * Function to validate that the buffer DNS response provided in <resp> and
+ * finishing before <bufend> is valid from a DNS protocol point of view.
+ * The caller can also ask the function to check if the response contains data
+ * for a domain name <dn_name> whose length is <dn_name_len> returns one of the
+ * DNS_RESP_* code.
+ */
+int dns_validate_dns_response(unsigned char *resp, unsigned char *bufend, char *dn_name, int dn_name_len)
+{
+ unsigned char *reader, *cname, *ptr;
+ int i, len, flags, type, ancount, cnamelen, expected_record;
+
+ reader = resp;
+ cname = NULL;
+ cnamelen = 0;
+ len = 0;
+ expected_record = 0; /* flag to report if at least one expected record type is found in the response.
+ * For now, only records containing an IP address (A and AAAA) are
+ * considered as expected.
+ * Later, this function may be updated to let the caller decide what type
+ * of record is expected to consider the response as valid. (SRV or TXT types)
+ */
+
+ /* move forward 2 bytes for the query id */
+ reader += 2;
+ if (reader >= bufend)
+ return DNS_RESP_INVALID;
+
+ /*
+ * flags are stored over 2 bytes
+ * First byte contains:
+ * - response flag (1 bit)
+ * - opcode (4 bits)
+ * - authoritative (1 bit)
+ * - truncated (1 bit)
+ * - recursion desired (1 bit)
+ */
+ if (reader + 2 >= bufend)
+ return DNS_RESP_INVALID;
+
+ flags = reader[0] * 256 + reader[1];
+
+ if (flags & DNS_FLAG_TRUNCATED)
+ return DNS_RESP_TRUNCATED;
+
+ if ((flags & DNS_FLAG_REPLYCODE) != DNS_RCODE_NO_ERROR) {
+ if ((flags & DNS_FLAG_REPLYCODE) == DNS_RCODE_NX_DOMAIN)
+ return DNS_RESP_NX_DOMAIN;
+ else if ((flags & DNS_FLAG_REPLYCODE) == DNS_RCODE_REFUSED)
+ return DNS_RESP_REFUSED;
+
+ return DNS_RESP_ERROR;
+ }
+
+ /* move forward 2 bytes for flags */
+ reader += 2;
+ if (reader >= bufend)
+ return DNS_RESP_INVALID;
+
+ /* move forward 2 bytes for question count */
+ reader += 2;
+ if (reader >= bufend)
+ return DNS_RESP_INVALID;
+
+ /* analyzing answer count */
+ if (reader + 2 > bufend)
+ return DNS_RESP_INVALID;
+ ancount = reader[0] * 256 + reader[1];
+
+ if (ancount == 0)
+ return DNS_RESP_ANCOUNT_ZERO;
+
+ /* move forward 2 bytes for answer count */
+ reader += 2;
+ if (reader >= bufend)
+ return DNS_RESP_INVALID;
+
+ /* move forward 4 bytes authority and additional count */
+ reader += 4;
+ if (reader >= bufend)
+ return DNS_RESP_INVALID;
+
+ /* check if the name can stand in response */
+ if (dn_name && ((reader + dn_name_len + 1) > bufend))
+ return DNS_RESP_INVALID;
+
+ /* check hostname */
+ if (dn_name && (memcmp(reader, dn_name, dn_name_len) != 0))
+ return DNS_RESP_WRONG_NAME;
+
+ /* move forward hostname len bytes + 1 for NULL byte */
+ if (dn_name) {
+ reader = reader + dn_name_len + 1;
+ }
+ else {
+ ptr = reader;
+ while (*ptr) {
+ ptr++;
+ if (ptr >= bufend)
+ return DNS_RESP_INVALID;
+ }
+ reader = ptr + 1;
+ }
+
+ /* move forward 4 bytes for question type and question class */
+ reader += 4;
+ if (reader >= bufend)
+ return DNS_RESP_INVALID;
+
+ /* now parsing response records */
+ for (i = 1; i <= ancount; i++) {
+ if (reader >= bufend)
+ return DNS_RESP_INVALID;
+
+ /*
+ * name can be a pointer, so move forward reader cursor accordingly
+ * if 1st byte is '11XXXXXX', it means name is a pointer
+ * and 2nd byte gives the offset from resp where the hostname can
+ * be found
+ */
+ if ((*reader & 0xc0) == 0xc0) {
+ /*
+ * pointer, hostname can be found at resp + *(reader + 1)
+ */
+ if (reader + 1 >= bufend)
+ return DNS_RESP_INVALID;
+
+ ptr = resp + *(reader + 1);
+
+ /* check if the pointer points inside the buffer */
+ if (ptr >= bufend)
+ return DNS_RESP_INVALID;
+ }
+ else {
+ /*
+ * name is a string which starts at first byte
+ * checking against last cname when recursing through the response
+ */
+ /* look for the end of the string and ensure it's in the buffer */
+ ptr = reader;
+ len = 0;
+ while (*ptr) {
+ ++len;
+ ++ptr;
+ if (ptr >= bufend)
+ return DNS_RESP_INVALID;
+ }
+
+ /* if cname is set, it means a CNAME recursion is in progress */
+ ptr = reader;
+ }
+
+ /* ptr now points to the name */
+ if ((*reader & 0xc0) != 0xc0) {
+ /* if cname is set, it means a CNAME recursion is in progress */
+ if (cname) {
+ /* check if the name can stand in response */
+ if ((reader + cnamelen) > bufend)
+ return DNS_RESP_INVALID;
+ /* compare cname and current name */
+ if (memcmp(ptr, cname, cnamelen) != 0)
+ return DNS_RESP_CNAME_ERROR;
+
+ cname = reader;
+ cnamelen = dns_str_to_dn_label_len((const char *)cname);
+
+ /* move forward cnamelen bytes + NULL byte */
+ reader += (cnamelen + 1);
+ }
+ /* compare server hostname to current name */
+ else if (dn_name) {
+ /* check if the name can stand in response */
+ if ((reader + dn_name_len) > bufend)
+ return DNS_RESP_INVALID;
+ if (memcmp(ptr, dn_name, dn_name_len) != 0)
+ return DNS_RESP_WRONG_NAME;
+
+ reader += (dn_name_len + 1);
+ }
+ else {
+ reader += (len + 1);
+ }
+ }
+ else {
+ /* shortname in progress */
+ /* move forward 2 bytes for information pointer and address pointer */
+ reader += 2;
+ }
+
+ if (reader >= bufend)
+ return DNS_RESP_INVALID;
+
+ /*
+ * we know the record is either for our server hostname
+ * or a valid CNAME in a recursion
+ */
+
+ /* now reading record type (A, AAAA, CNAME, etc...) */
+ if (reader + 2 > bufend)
+ return DNS_RESP_INVALID;
+ type = reader[0] * 256 + reader[1];
+
+ /* move forward 2 bytes for type (2) */
+ reader += 2;
+
+ /* move forward 6 bytes for class (2) and ttl (4) */
+ reader += 6;
+ if (reader >= bufend)
+ return DNS_RESP_INVALID;
+
+ /* now reading data len */
+ if (reader + 2 > bufend)
+ return DNS_RESP_INVALID;
+ len = reader[0] * 256 + reader[1];
+
+ /* move forward 2 bytes for data len */
+ reader += 2;
+
+ /* analyzing record content */
+ switch (type) {
+ case DNS_RTYPE_A:
+ /* ipv4 is stored on 4 bytes */
+ if (len != 4)
+ return DNS_RESP_INVALID;
+ expected_record = 1;
+ break;
+
+ case DNS_RTYPE_CNAME:
+ cname = reader;
+ cnamelen = len;
+ break;
+
+ case DNS_RTYPE_AAAA:
+ /* ipv6 is stored on 16 bytes */
+ if (len != 16)
+ return DNS_RESP_INVALID;
+ expected_record = 1;
+ break;
+ } /* switch (record type) */
+
+ /* move forward len for analyzing next record in the response */
+ reader += len;
+ } /* for i 0 to ancount */
+
+ if (expected_record == 0)
+ return DNS_RESP_NO_EXPECTED_RECORD;
+
+ return DNS_RESP_VALID;
+}
+
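+/*
+ * For reference (RFC 1035, section 4.1.4): a compressed name is a 2-byte
+ * field whose first two bits are '11'; the remaining 14 bits are an offset
+ * from the start of the message. For example:
+ *
+ *	0xc0 0x0c  ->  offset 12, i.e. the name located right after the
+ *	               12-byte DNS header (usually the question name)
+ *
+ * The code above only reads the second byte (*(reader + 1)) as the offset,
+ * which is sufficient as long as the target lies within the first 256
+ * bytes of the response.
+ */
+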
+/*
+ * Search for <dn_name>'s resolution in <resp>.
+ * If the server's current IP is found, it is preferred; otherwise return the
+ * first IP matching <family_priority>, and failing that, the first IP found.
+ * The following points are the responsibility of the caller:
+ * - <resp> contains an error-free DNS response
+ * - the response matches <dn_name>
+ * For both points above, dns_validate_dns_response() must be called first.
+ * returns one of the DNS_UPD_* codes
+ */
+int dns_get_ip_from_response(unsigned char *resp, unsigned char *resp_end,
+ char *dn_name, int dn_name_len, void *currentip, short currentip_sin_family,
+ int family_priority, void **newip, short *newip_sin_family)
+{
+ int i, ancount, cnamelen, type, data_len, currentip_found;
+ unsigned char *reader, *cname, *ptr, *newip4, *newip6;
+
+ cname = *newip = newip4 = newip6 = NULL;
+ cnamelen = currentip_found = 0;
+ *newip_sin_family = AF_UNSPEC;
+ ancount = ntohs(((struct dns_header *)resp)->ancount);
+
+ /* bypass DNS response header */
+ reader = resp + sizeof(struct dns_header);
+
+ /* bypass DNS query section */
+ /* move forward hostname len bytes + 1 for NULL byte */
+ reader = reader + dn_name_len + 1;
+
+ /* move forward 4 bytes for question type and question class */
+ reader += 4;
+
+ /* now parsing response records */
+ for (i = 1; i <= ancount; i++) {
+ /*
+ * name can be a pointer, so move the reader cursor forward accordingly:
+ * if the 1st byte is '11XXXXXX', the name is a pointer and the 2nd
+ * byte gives the offset from resp where the hostname can be found
+ */
+ if ((*reader & 0xc0) == 0xc0)
+ ptr = resp + *(reader + 1);
+ else
+ ptr = reader;
+
+ if (cname) {
+ if (memcmp(ptr, cname, cnamelen)) {
+ return DNS_UPD_NAME_ERROR;
+ }
+ }
+ else if (memcmp(ptr, dn_name, dn_name_len))
+ return DNS_UPD_NAME_ERROR;
+
+ if ((*reader & 0xc0) == 0xc0) {
+ /* compressed name: move past the 2-byte pointer */
+ reader += 2;
+ }
+ else {
+ if (cname) {
+ cname = reader;
+ cnamelen = dns_str_to_dn_label_len((char *)cname);
+
+ /* move forward cnamelen bytes + NULL byte */
+ reader += (cnamelen + 1);
+ }
+ else {
+ /* move forward dn_name_len bytes + NULL byte */
+ reader += (dn_name_len + 1);
+ }
+ }
+
+ /*
+ * we know the record is either for our server hostname
+ * or a valid CNAME in a CNAME recursion
+ */
+
+ /* now reading record type (A, AAAA, CNAME, etc...) */
+ type = reader[0] * 256 + reader[1];
+
+ /* move forward 2 bytes for type (2) */
+ reader += 2;
+
+ /* move forward 6 bytes for class (2) and ttl (4) */
+ reader += 6;
+
+ /* now reading data len */
+ data_len = reader[0] * 256 + reader[1];
+
+ /* move forward 2 bytes for data len */
+ reader += 2;
+
+ /* analyzing record content */
+ switch (type) {
+ case DNS_RTYPE_A:
+ /* check if the current record's IP is the same as the server's one */
+ if ((currentip_sin_family == AF_INET)
+ && (*(uint32_t *)reader == *(uint32_t *)currentip)) {
+ currentip_found = 1;
+ newip4 = reader;
+ /* we can stop now if server's family preference is IPv4
+ * and its current IP is found in the response list */
+ if (family_priority == AF_INET)
+ return DNS_UPD_NO; /* DNS_UPD matrix #1 */
+ }
+ else if (!newip4) {
+ newip4 = reader;
+ }
+
+ /* move forward data_len for analyzing next record in the response */
+ reader += data_len;
+ break;
+
+ case DNS_RTYPE_CNAME:
+ cname = reader;
+ cnamelen = data_len;
+
+ reader += data_len;
+ break;
+
+ case DNS_RTYPE_AAAA:
+ /* check if the current record's IP is the same as the server's one */
+ if ((currentip_sin_family == AF_INET6) && (memcmp(reader, currentip, 16) == 0)) {
+ currentip_found = 1;
+ newip6 = reader;
+ /* we can stop now if the server's preference is IPv6 or is not
+ * set (which implies we prioritize IPv6 over IPv4) */
+ if (family_priority == AF_INET6)
+ return DNS_UPD_NO;
+ }
+ else if (!newip6) {
+ newip6 = reader;
+ }
+
+ /* move forward data_len for analyzing next record in the response */
+ reader += data_len;
+ break;
+
+ default:
+ /* not supported record type */
+ /* move forward data_len for analyzing next record in the response */
+ reader += data_len;
+ } /* switch (record type) */
+ } /* for i 0 to ancount */
+
+ /* only CNAMEs in the response, no IP found */
+ if (cname && !newip4 && !newip6) {
+ return DNS_UPD_CNAME;
+ }
+
+ /* no IP found in the response */
+ if (!newip4 && !newip6) {
+ return DNS_UPD_NO_IP_FOUND;
+ }
+
+ /* case when the caller looks first for an IPv4 address */
+ if (family_priority == AF_INET) {
+ if (newip4) {
+ *newip = newip4;
+ *newip_sin_family = AF_INET;
+ if (currentip_found == 1)
+ return DNS_UPD_NO;
+ return DNS_UPD_SRVIP_NOT_FOUND;
+ }
+ else if (newip6) {
+ *newip = newip6;
+ *newip_sin_family = AF_INET6;
+ if (currentip_found == 1)
+ return DNS_UPD_NO;
+ return DNS_UPD_SRVIP_NOT_FOUND;
+ }
+ }
+ /* case when the caller looks first for an IPv6 address */
+ else if (family_priority == AF_INET6) {
+ if (newip6) {
+ *newip = newip6;
+ *newip_sin_family = AF_INET6;
+ if (currentip_found == 1)
+ return DNS_UPD_NO;
+ return DNS_UPD_SRVIP_NOT_FOUND;
+ }
+ else if (newip4) {
+ *newip = newip4;
+ *newip_sin_family = AF_INET;
+ if (currentip_found == 1)
+ return DNS_UPD_NO;
+ return DNS_UPD_SRVIP_NOT_FOUND;
+ }
+ }
+ /* case when the caller has no preference (we prefer IPv6) */
+ else if (family_priority == AF_UNSPEC) {
+ if (newip6) {
+ *newip = newip6;
+ *newip_sin_family = AF_INET6;
+ if (currentip_found == 1)
+ return DNS_UPD_NO;
+ return DNS_UPD_SRVIP_NOT_FOUND;
+ }
+ else if (newip4) {
+ *newip = newip4;
+ *newip_sin_family = AF_INET;
+ if (currentip_found == 1)
+ return DNS_UPD_NO;
+ return DNS_UPD_SRVIP_NOT_FOUND;
+ }
+ }
+
+ /* no reason why we should change the server's IP address */
+ return DNS_UPD_NO;
+}
+
+/*
+ * returns the query id contained in a DNS response
+ */
+int dns_response_get_query_id(unsigned char *resp)
+{
+ /* read the query id from the response */
+ return resp[0] * 256 + resp[1];
+}
+
+/*
+ * used during haproxy's init phase
+ * parses resolvers sections and initializes:
+ * - task (time events) for each resolvers section
+ * - the datagram layer (network IO events) for each nameserver
+ * returns:
+ * 0 in case of error
+ * 1 when no error
+ */
+int dns_init_resolvers(void)
+{
+ struct dns_resolvers *curr_resolvers;
+ struct dns_nameserver *curnameserver;
+ struct dgram_conn *dgram;
+ struct task *t;
+ int fd;
+
+ /* give a first random value to our dns query_id seed */
+ dns_query_id_seed = random();
+
+ /* run through the resolvers section list */
+ list_for_each_entry(curr_resolvers, &dns_resolvers, list) {
+ /* create the task associated to the resolvers section */
+ if ((t = task_new()) == NULL) {
+ Alert("Starting [%s] resolvers: out of memory.\n", curr_resolvers->id);
+ return 0;
+ }
+
+ /* update task's parameters */
+ t->process = dns_process_resolve;
+ t->context = curr_resolvers;
+ t->expire = TICK_ETERNITY;
+
+ curr_resolvers->t = t;
+
+ list_for_each_entry(curnameserver, &curr_resolvers->nameserver_list, list) {
+ if ((dgram = calloc(1, sizeof(struct dgram_conn))) == NULL) {
+ Alert("Starting [%s/%s] nameserver: out of memory.\n", curr_resolvers->id,
+ curnameserver->id);
+ return 0;
+ }
+ /* update datagram's parameters */
+ dgram->owner = (void *)curnameserver;
+ dgram->data = &resolve_dgram_cb;
+
+ /* create network UDP socket for this nameserver */
+ if ((fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)) == -1) {
+ Alert("Starting [%s/%s] nameserver: can't create socket.\n", curr_resolvers->id,
+ curnameserver->id);
+ free(dgram);
+ dgram = NULL;
+ return 0;
+ }
+
+ /* "connect" the UDP socket to the name server IP */
+ if (connect(fd, (struct sockaddr*)&curnameserver->addr, get_addr_len(&curnameserver->addr)) == -1) {
+ Alert("Starting [%s/%s] nameserver: can't connect socket.\n", curr_resolvers->id,
+ curnameserver->id);
+ close(fd);
+ free(dgram);
+ dgram = NULL;
+ return 0;
+ }
+
+ /* make the socket non blocking */
+ fcntl(fd, F_SETFL, O_NONBLOCK);
+
+ /* add the fd in the fd list and update its parameters */
+ fd_insert(fd);
+ fdtab[fd].owner = dgram;
+ fdtab[fd].iocb = dgram_fd_handler;
+ fd_want_recv(fd);
+ dgram->t.sock.fd = fd;
+
+ /* update nameserver's datagram property */
+ curnameserver->dgram = dgram;
+ }
+
+ /* task can be queued */
+ task_queue(t);
+ }
+
+ return 1;
+}
+
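+/*
+ * The resolvers sections handled here come from the configuration file;
+ * an illustrative (hypothetical) example:
+ *
+ *	resolvers mydns
+ *		nameserver dns1 10.0.0.1:53
+ *		nameserver dns2 10.0.0.2:53
+ *		timeout retry 1s
+ *
+ * Each "nameserver" line above results in one connected UDP socket created
+ * by dns_init_resolvers().
+ */
+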
+/*
+ * Forge a DNS query. It needs the following information from the caller:
+ * - <query_id>: the DNS query id corresponding to this query
+ * - <query_type>: DNS_RTYPE_* request DNS record type (A, AAAA, ANY, etc...)
+ * - <hostname_dn>: hostname in domain name format
+ * - <hostname_dn_len>: length of <hostname_dn>
+ * To store the query, the caller must pass a buffer <buf> and its size <bufsize>
+ *
+ * the DNS query is stored in <buf>
+ * returns:
+ * the query length on success, or -1 if <buf> is too short
+ */
+int dns_build_query(int query_id, int query_type, char *hostname_dn, int hostname_dn_len, char *buf, int bufsize)
+{
+ struct dns_header *dns;
+ struct dns_question *qinfo;
+ char *ptr, *bufend;
+
+ memset(buf, '\0', bufsize);
+ ptr = buf;
+ bufend = buf + bufsize;
+
+ /* check if there is enough room for DNS headers */
+ if (ptr + sizeof(struct dns_header) >= bufend)
+ return -1;
+
+ /* set dns query headers */
+ dns = (struct dns_header *)ptr;
+ dns->id = (unsigned short) htons(query_id);
+ dns->qr = 0; /* query */
+ dns->opcode = 0;
+ dns->aa = 0;
+ dns->tc = 0;
+ dns->rd = 1; /* recursion desired */
+ dns->ra = 0;
+ dns->z = 0;
+ dns->rcode = 0;
+ dns->qdcount = htons(1); /* 1 question */
+ dns->ancount = 0;
+ dns->nscount = 0;
+ dns->arcount = 0;
+
+ /* move forward ptr */
+ ptr += sizeof(struct dns_header);
+
+ /* check if there is enough room for query hostname */
+ if ((ptr + hostname_dn_len) >= bufend)
+ return -1;
+
+ /* set up query hostname */
+ memcpy(ptr, hostname_dn, hostname_dn_len);
+ ptr[hostname_dn_len] = '\0';
+
+ /* move forward ptr */
+ ptr += (hostname_dn_len + 1);
+
+ /* check if there is enough room for the question (type and class) */
+ if (ptr + sizeof(struct dns_question) >= bufend)
+ return -1;
+
+ /* set up query info (type and class) */
+ qinfo = (struct dns_question *)ptr;
+ qinfo->qtype = htons(query_type);
+ qinfo->qclass = htons(DNS_RCLASS_IN);
+
+ ptr += sizeof(struct dns_question);
+
+ return ptr - buf;
+}
+
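+/*
+ * Illustrative sketch of a caller (names and buffer size are assumptions,
+ * not part of this file):
+ *
+ *	char buf[512];
+ *	int len;
+ *
+ *	len = dns_build_query(dns_rnd16(), DNS_RTYPE_A,
+ *	                      hostname_dn, hostname_dn_len, buf, sizeof(buf));
+ *	if (len == -1)
+ *		return 0;	// buffer too short
+ *	send(fd, buf, len, 0);
+ */
+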
+/*
+ * turn a string into a domain name label:
+ * www.haproxy.org becomes 3www7haproxy3org
+ * <dn> must point to pre-allocated memory and <dn_len> must hold its size.
+ * returns NULL in case of error, otherwise a pointer to <dn>
+ */
+char *dns_str_to_dn_label(const char *string, char *dn, int dn_len)
+{
+ char *c, *d;
+ int i, offset;
+
+ /* offset between the string size and the theoretical dn size */
+ offset = 1;
+
+ /*
+ * first, get the size of the string once turned into its domain name
+ * version. This also validates that the string complies with the RFC.
+ */
+ if ((i = dns_str_to_dn_label_len(string)) == -1)
+ return NULL;
+
+ /* check there is enough room in dn */
+ if (dn_len < i + offset)
+ return NULL;
+
+ i = strlen(string);
+ memcpy(dn + offset, string, i);
+ dn[i + offset] = '\0';
+ /* avoid a '\0' at the beginning of dn string which may prevent the for loop
+ * below from working.
+ * Actually, this is the reason of the offset. */
+ dn[0] = '0';
+
+ for (c = dn; *c ; ++c) {
+ /* c points to the first '0' char or a dot, which we don't want to read */
+ d = c + offset;
+ i = 0;
+ while (*d != '.' && *d) {
+ i++;
+ d++;
+ }
+ *c = i;
+
+ c = d - 1; /* because of c++ of the for loop */
+ }
+
+ return dn;
+}
+
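+/*
+ * For example (illustrative), with a 17-byte destination buffer:
+ *
+ *	char dn[17];
+ *	dns_str_to_dn_label("www.haproxy.org", dn, sizeof(dn));
+ *	// dn now holds "\x03www\x07haproxy\x03org" followed by '\0'
+ */
+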
+/*
+ * compute and return the length of <string> if it were translated into a
+ * domain name label:
+ * www.haproxy.org into 3www7haproxy3org would return 16
+ * NOTE: add +1 for the trailing '\0' when allocating memory
+ */
+int dns_str_to_dn_label_len(const char *string)
+{
+ return strlen(string) + 1;
+}
+
+/*
+ * validates host name:
+ * - total size
+ * - each label size individually
+ * returns:
+ * 0 in case of error. If <err> is not NULL, an error message is stored there.
+ * 1 when no error. <err> is left unaffected.
+ */
+int dns_hostname_validation(const char *string, char **err)
+{
+ const char *c, *d;
+ int i;
+
+ if (strlen(string) > DNS_MAX_NAME_SIZE) {
+ if (err)
+ *err = DNS_TOO_LONG_FQDN;
+ return 0;
+ }
+
+ c = string;
+ while (*c) {
+ d = c;
+
+ i = 0;
+ while (*d != '.' && *d && i <= DNS_MAX_LABEL_SIZE) {
+ i++;
+ if (!((*d == '-') || (*d == '_') ||
+ ((*d >= 'a') && (*d <= 'z')) ||
+ ((*d >= 'A') && (*d <= 'Z')) ||
+ ((*d >= '0') && (*d <= '9')))) {
+ if (err)
+ *err = DNS_INVALID_CHARACTER;
+ return 0;
+ }
+ d++;
+ }
+
+ if (i > DNS_MAX_LABEL_SIZE) {
+ if (err)
+ *err = DNS_LABEL_TOO_LONG;
+ return 0;
+ }
+
+ if (*d == '\0')
+ goto out;
+
+ c = ++d;
+ }
+ out:
+ return 1;
+}
+
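+/*
+ * For example (illustrative):
+ *	dns_hostname_validation("srv-1.example.org", NULL) returns 1
+ *	dns_hostname_validation("bad!name", NULL) returns 0 (invalid character)
+ */
+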
+/*
+ * 2-byte random generator (xorshift-style) used for DNS query IDs
+ */
+uint16_t dns_rnd16(void)
+{
+ dns_query_id_seed ^= dns_query_id_seed << 13;
+ dns_query_id_seed ^= dns_query_id_seed >> 7;
+ dns_query_id_seed ^= dns_query_id_seed << 17;
+ return dns_query_id_seed;
+}
+
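+/*
+ * For reference: the three xor/shift steps above are a Marsaglia-style
+ * xorshift generator. Each step is an invertible linear transform, so a
+ * non-zero seed never degenerates to zero; only the low 16 bits of the
+ * state are returned.
+ */
+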
+
+/*
+ * function called when a timeout occurs during the name resolution process.
+ * If the maximum number of tries is reached, the resolution is stopped,
+ * otherwise it is retried.
+ */
+struct task *dns_process_resolve(struct task *t)
+{
+ struct dns_resolvers *resolvers = t->context;
+ struct dns_resolution *resolution, *res_back;
+
+ /* timeout occurs inevitably for the first element of the FIFO queue */
+ if (LIST_ISEMPTY(&resolvers->curr_resolution)) {
+ /* no first entry, so wake up was useless */
+ t->expire = TICK_ETERNITY;
+ return t;
+ }
+
+ /* look for the first resolution which is not expired */
+ list_for_each_entry_safe(resolution, res_back, &resolvers->curr_resolution, list) {
+ /* when we find the first resolution in the future, then we can stop here */
+ if (tick_is_le(now_ms, resolution->last_sent_packet))
+ goto out;
+
+ /*
+ * if the current resolution has been tried too many times and ended
+ * in a timeout, update its status and remove it from the list
+ */
+ if (resolution->try <= 0) {
+ /* clean up resolution information and remove from the list */
+ dns_reset_resolution(resolution);
+
+ /* notify the result to the requester */
+ resolution->requester_error_cb(resolution, DNS_RESP_TIMEOUT);
+ }
+
+ resolution->try -= 1;
+
+ /* check current resolution status */
+ if (resolution->step == RSLV_STEP_RUNNING) {
+ /* resend the DNS query */
+ dns_send_query(resolution);
+
+ /* check if we have more than one resolution in the list */
+ if (dns_check_resolution_queue(resolvers) > 1) {
+ /* move the resolution to the end of the list */
+ LIST_DEL(&resolution->list);
+ LIST_ADDQ(&resolvers->curr_resolution, &resolution->list);
+ }
+ }
+ }
+
+ out:
+ dns_update_resolvers_timeout(resolvers);
+ return t;
+}
--- /dev/null
+/*
+ * Functions dedicated to statistics output and the stats socket
+ *
+ * Copyright 2000-2012 Willy Tarreau <w@1wt.eu>
+ * Copyright 2007-2009 Krzysztof Piotr Oledzki <ole@ans.pl>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <ctype.h>
+#include <errno.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <pwd.h>
+#include <grp.h>
+
+#include <sys/socket.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+
+#include <common/cfgparse.h>
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/debug.h>
+#include <common/memory.h>
+#include <common/mini-clist.h>
+#include <common/standard.h>
+#include <common/ticks.h>
+#include <common/time.h>
+#include <common/uri_auth.h>
+#include <common/version.h>
+#include <common/base64.h>
+
+#include <types/applet.h>
+#include <types/global.h>
+#include <types/dns.h>
+
+#include <proto/backend.h>
+#include <proto/channel.h>
+#include <proto/checks.h>
+#include <proto/compression.h>
+#include <proto/dumpstats.h>
+#include <proto/fd.h>
+#include <proto/freq_ctr.h>
+#include <proto/frontend.h>
+#include <proto/log.h>
+#include <proto/pattern.h>
+#include <proto/pipe.h>
+#include <proto/listener.h>
+#include <proto/map.h>
+#include <proto/proto_http.h>
+#include <proto/proto_uxst.h>
+#include <proto/proxy.h>
+#include <proto/sample.h>
+#include <proto/session.h>
+#include <proto/stream.h>
+#include <proto/server.h>
+#include <proto/raw_sock.h>
+#include <proto/stream_interface.h>
+#include <proto/task.h>
+
+#ifdef USE_OPENSSL
+#include <proto/ssl_sock.h>
+#include <types/ssl_sock.h>
+#endif
+
+/* stats socket states */
+enum {
+ STAT_CLI_INIT = 0, /* initial state, must leave to zero ! */
+ STAT_CLI_END, /* final state, let's close */
+ STAT_CLI_GETREQ, /* wait for a request */
+ STAT_CLI_OUTPUT, /* all states after this one are responses */
+ STAT_CLI_PROMPT, /* display the prompt (first output, same code) */
+ STAT_CLI_PRINT, /* display message in cli->msg */
+ STAT_CLI_PRINT_FREE, /* display message in cli->msg. After the display, free the pointer */
+ STAT_CLI_O_INFO, /* dump info */
+ STAT_CLI_O_SESS, /* dump streams */
+ STAT_CLI_O_ERR, /* dump errors */
+ STAT_CLI_O_TAB, /* dump tables */
+ STAT_CLI_O_CLR, /* clear tables */
+ STAT_CLI_O_SET, /* set entries in tables */
+ STAT_CLI_O_STAT, /* dump stats */
+ STAT_CLI_O_PATS, /* list all available pattern references */
+ STAT_CLI_O_PAT, /* list all entries of a pattern */
+ STAT_CLI_O_MLOOK, /* lookup a map entry */
+ STAT_CLI_O_POOLS, /* dump memory pools */
+ STAT_CLI_O_TLSK, /* list all TLS ticket keys references */
+ STAT_CLI_O_RESOLVERS,/* dump a resolvers section's nameserver counters */
+ STAT_CLI_O_SERVERS_STATE, /* dump server state and changing information */
+ STAT_CLI_O_BACKEND, /* dump backend list */
+};
+
+/* Actions available for the stats admin forms */
+enum {
+ ST_ADM_ACTION_NONE = 0,
+
+ /* enable/disable health checks */
+ ST_ADM_ACTION_DHLTH,
+ ST_ADM_ACTION_EHLTH,
+
+ /* force health check status */
+ ST_ADM_ACTION_HRUNN,
+ ST_ADM_ACTION_HNOLB,
+ ST_ADM_ACTION_HDOWN,
+
+ /* enable/disable agent checks */
+ ST_ADM_ACTION_DAGENT,
+ ST_ADM_ACTION_EAGENT,
+
+ /* force agent check status */
+ ST_ADM_ACTION_ARUNN,
+ ST_ADM_ACTION_ADOWN,
+
+ /* set admin state */
+ ST_ADM_ACTION_READY,
+ ST_ADM_ACTION_DRAIN,
+ ST_ADM_ACTION_MAINT,
+ ST_ADM_ACTION_SHUTDOWN,
+ /* these are the ancient actions, still available for compatibility */
+ ST_ADM_ACTION_DISABLE,
+ ST_ADM_ACTION_ENABLE,
+ ST_ADM_ACTION_STOP,
+ ST_ADM_ACTION_START,
+};
+
+static int stats_dump_backend_to_buffer(struct stream_interface *si);
+static int stats_dump_info_to_buffer(struct stream_interface *si);
+static int stats_dump_servers_state_to_buffer(struct stream_interface *si);
+static int stats_dump_pools_to_buffer(struct stream_interface *si);
+static int stats_dump_full_sess_to_buffer(struct stream_interface *si, struct stream *sess);
+static int stats_dump_sess_to_buffer(struct stream_interface *si);
+static int stats_dump_errors_to_buffer(struct stream_interface *si);
+static int stats_table_request(struct stream_interface *si, int show);
+static int stats_dump_proxy_to_buffer(struct stream_interface *si, struct proxy *px, struct uri_auth *uri);
+static int stats_dump_stat_to_buffer(struct stream_interface *si, struct uri_auth *uri);
+static int stats_dump_resolvers_to_buffer(struct stream_interface *si);
+static int stats_pats_list(struct stream_interface *si);
+static int stats_pat_list(struct stream_interface *si);
+static int stats_map_lookup(struct stream_interface *si);
+#if (defined SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB && TLS_TICKETS_NO > 0)
+static int stats_tlskeys_list(struct stream_interface *si);
+#endif
+static void cli_release_handler(struct appctx *appctx);
+
+static void dump_servers_state(struct proxy *backend, struct chunk *buf);
+
+/*
+ * cli_io_handler()
+ * -> stats_dump_sess_to_buffer() // "show sess"
+ * -> stats_dump_errors_to_buffer() // "show errors"
+ * -> stats_dump_info_to_buffer() // "show info"
+ * -> stats_dump_backend_to_buffer() // "show backend"
+ * -> stats_dump_servers_state_to_buffer() // "show servers state [<backend name>]"
+ * -> stats_dump_stat_to_buffer() // "show stat"
+ * -> stats_dump_resolvers_to_buffer() // "show stat resolver <id>"
+ * -> stats_dump_csv_header()
+ * -> stats_dump_proxy_to_buffer()
+ * -> stats_dump_fe_stats()
+ * -> stats_dump_li_stats()
+ * -> stats_dump_sv_stats()
+ * -> stats_dump_be_stats()
+ *
+ * http_stats_io_handler()
+ * -> stats_dump_stat_to_buffer() // same as above, but used for CSV or HTML
+ * -> stats_dump_csv_header() // emits the CSV headers (same as above)
+ * -> stats_dump_html_head() // emits the HTML headers
+ * -> stats_dump_html_info() // emits the equivalent of "show info" at the top
+ * -> stats_dump_proxy_to_buffer() // same as above, valid for CSV and HTML
+ * -> stats_dump_html_px_hdr()
+ * -> stats_dump_fe_stats()
+ * -> stats_dump_li_stats()
+ * -> stats_dump_sv_stats()
+ * -> stats_dump_be_stats()
+ * -> stats_dump_html_px_end()
+ * -> stats_dump_html_end() // emits HTML trailer
+ */
+
+static struct applet cli_applet;
+
+static const char stats_sock_usage_msg[] =
+ "Unknown command. Please enter one of the following commands only :\n"
+ " clear counters : clear max statistics counters (add 'all' for all counters)\n"
+ " clear table : remove an entry from a table\n"
+ " help : this message\n"
+ " prompt : toggle interactive mode with prompt\n"
+ " quit : disconnect\n"
+ " show backend : list backends in the current running config\n"
+ " show info : report information about the running process\n"
+ " show pools : report information about the memory pools usage\n"
+ " show stat : report counters for each proxy and server\n"
+ " show errors : report last request and response errors for each proxy\n"
+ " show sess [id] : report the list of current sessions or dump this session\n"
+ " show table [id]: report table usage stats or dump this table's contents\n"
+ " show servers state [id]: dump volatile server information (for backend <id>)\n"
+ " get weight : report a server's current weight\n"
+ " set weight : change a server's weight\n"
+ " set server : change a server's state, weight or address\n"
+ " set table [id] : update or create a table entry's data\n"
+ " set timeout : change a timeout setting\n"
+ " set maxconn : change a maxconn setting\n"
+ " set rate-limit : change a rate limiting value\n"
+ " disable : put a server or frontend in maintenance mode\n"
+ " enable : re-enable a server or frontend which is in maintenance mode\n"
+ " shutdown : kill a session or a frontend (e.g. to release listening ports)\n"
+ " show acl [id] : report available acls or dump an acl's contents\n"
+ " get acl : reports the patterns matching a sample for an ACL\n"
+ " add acl : add acl entry\n"
+ " del acl : delete acl entry\n"
+ " clear acl <id> : clear the content of this acl\n"
+ " show map [id] : report available maps or dump a map's contents\n"
+ " get map : reports the keys and values matching a sample for a map\n"
+ " set map : modify map entry\n"
+ " add map : add map entry\n"
+ " del map : delete map entry\n"
+ " clear map <id> : clear the content of this map\n"
+ " set ssl <stmt> : set statement for ssl\n"
+ "";
+
+static const char stats_permission_denied_msg[] =
+ "Permission denied\n"
+ "";
+
+/* data transmission states for the stats responses */
+enum {
+ STAT_ST_INIT = 0,
+ STAT_ST_HEAD,
+ STAT_ST_INFO,
+ STAT_ST_LIST,
+ STAT_ST_END,
+ STAT_ST_FIN,
+};
+
+/* data transmission states for the stats responses inside a proxy */
+enum {
+ STAT_PX_ST_INIT = 0,
+ STAT_PX_ST_TH,
+ STAT_PX_ST_FE,
+ STAT_PX_ST_LI,
+ STAT_PX_ST_SV,
+ STAT_PX_ST_BE,
+ STAT_PX_ST_END,
+ STAT_PX_ST_FIN,
+};
+
+extern const char *stat_status_codes[];
+
+/* allocate a new stats frontend named <name>, and return it
+ * (or NULL in case of lack of memory).
+ */
+static struct proxy *alloc_stats_fe(const char *name, const char *file, int line)
+{
+ struct proxy *fe;
+
+ fe = (struct proxy *)calloc(1, sizeof(struct proxy));
+ if (!fe)
+ return NULL;
+
+ init_new_proxy(fe);
+ fe->next = proxy;
+ proxy = fe;
+ fe->last_change = now.tv_sec;
+ fe->id = strdup("GLOBAL");
+ fe->cap = PR_CAP_FE;
+ fe->maxconn = 10; /* default to 10 concurrent connections */
+ fe->timeout.client = MS_TO_TICKS(10000); /* default timeout of 10 seconds */
+ fe->conf.file = strdup(file);
+ fe->conf.line = line;
+ fe->accept = frontend_accept;
+ fe->default_target = &cli_applet.obj_type;
+
+ /* the stats frontend is the only one able to assign ID #0 */
+ fe->conf.id.key = fe->uuid = 0;
+ eb32_insert(&used_proxy_id, &fe->conf.id);
+ return fe;
+}
+
+/* This function parses a "stats" statement in the "global" section. It returns
+ * -1 if there is any error, otherwise zero. If it returns -1, it will write an
+ * error message into the <err> buffer which will be preallocated. The trailing
+ * '\n' must not be written. The function must be called with <args> pointing to
+ * the first word after "stats".
+ */
+static int stats_parse_global(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ struct bind_conf *bind_conf;
+ struct listener *l;
+
+ if (!strcmp(args[1], "socket")) {
+ int cur_arg;
+
+ if (*args[2] == 0) {
+ memprintf(err, "'%s %s' in global section expects an address or a path to a UNIX socket", args[0], args[1]);
+ return -1;
+ }
+
+ if (!global.stats_fe) {
+ if ((global.stats_fe = alloc_stats_fe("GLOBAL", file, line)) == NULL) {
+ memprintf(err, "'%s %s' : out of memory trying to allocate a frontend", args[0], args[1]);
+ return -1;
+ }
+ }
+
+ bind_conf = bind_conf_alloc(&global.stats_fe->conf.bind, file, line, args[2]);
+ bind_conf->level = ACCESS_LVL_OPER; /* default access level */
+
+ if (!str2listener(args[2], global.stats_fe, bind_conf, file, line, err)) {
+ memprintf(err, "parsing [%s:%d] : '%s %s' : %s\n",
+ file, line, args[0], args[1], err && *err ? *err : "error");
+ return -1;
+ }
+
+ cur_arg = 3;
+ while (*args[cur_arg]) {
+ static int bind_dumped;
+ struct bind_kw *kw;
+
+ kw = bind_find_kw(args[cur_arg]);
+ if (kw) {
+ if (!kw->parse) {
+ memprintf(err, "'%s %s' : '%s' option is not implemented in this version (check build options).",
+ args[0], args[1], args[cur_arg]);
+ return -1;
+ }
+
+ if (kw->parse(args, cur_arg, global.stats_fe, bind_conf, err) != 0) {
+ if (err && *err)
+ memprintf(err, "'%s %s' : '%s'", args[0], args[1], *err);
+ else
+ memprintf(err, "'%s %s' : error encountered while processing '%s'",
+ args[0], args[1], args[cur_arg]);
+ return -1;
+ }
+
+ cur_arg += 1 + kw->skip;
+ continue;
+ }
+
+ if (!bind_dumped) {
+ bind_dump_kws(err);
+ indent_msg(err, 4);
+ bind_dumped = 1;
+ }
+
+ memprintf(err, "'%s %s' : unknown keyword '%s'.%s%s",
+ args[0], args[1], args[cur_arg],
+ err && *err ? " Registered keywords :" : "", err && *err ? *err : "");
+ return -1;
+ }
+
+ list_for_each_entry(l, &bind_conf->listeners, by_bind) {
+ l->maxconn = global.stats_fe->maxconn;
+ l->backlog = global.stats_fe->backlog;
+ l->accept = session_accept_fd;
+ l->handler = process_stream;
+ l->default_target = global.stats_fe->default_target;
+ l->options |= LI_O_UNLIMITED; /* don't make this socket subject to global limits */
+ l->nice = -64; /* we want to boost priority for local stats */
+ global.maxsock += l->maxconn;
+ }
+ }
+ else if (!strcmp(args[1], "timeout")) {
+ unsigned timeout;
+ const char *res = parse_time_err(args[2], &timeout, TIME_UNIT_MS);
+
+ if (res) {
+ memprintf(err, "'%s %s' : unexpected character '%c'", args[0], args[1], *res);
+ return -1;
+ }
+
+ if (!timeout) {
+ memprintf(err, "'%s %s' expects a positive value", args[0], args[1]);
+ return -1;
+ }
+ if (!global.stats_fe) {
+ if ((global.stats_fe = alloc_stats_fe("GLOBAL", file, line)) == NULL) {
+ memprintf(err, "'%s %s' : out of memory trying to allocate a frontend", args[0], args[1]);
+ return -1;
+ }
+ }
+ global.stats_fe->timeout.client = MS_TO_TICKS(timeout);
+ }
+ else if (!strcmp(args[1], "maxconn")) {
+ int maxconn = atol(args[2]);
+
+ if (maxconn <= 0) {
+ memprintf(err, "'%s %s' expects a positive value", args[0], args[1]);
+ return -1;
+ }
+
+ if (!global.stats_fe) {
+ if ((global.stats_fe = alloc_stats_fe("GLOBAL", file, line)) == NULL) {
+ memprintf(err, "'%s %s' : out of memory trying to allocate a frontend", args[0], args[1]);
+ return -1;
+ }
+ }
+ global.stats_fe->maxconn = maxconn;
+ }
+ else if (!strcmp(args[1], "bind-process")) { /* enable the socket only on some processes */
+ int cur_arg = 2;
+ unsigned long set = 0;
+
+ if (!global.stats_fe) {
+ if ((global.stats_fe = alloc_stats_fe("GLOBAL", file, line)) == NULL) {
+ memprintf(err, "'%s %s' : out of memory trying to allocate a frontend", args[0], args[1]);
+ return -1;
+ }
+ }
+
+ while (*args[cur_arg]) {
+ unsigned int low, high;
+
+ if (strcmp(args[cur_arg], "all") == 0) {
+ set = 0;
+ break;
+ }
+ else if (strcmp(args[cur_arg], "odd") == 0) {
+ set |= ~0UL/3UL; /* 0x555....555 */
+ }
+ else if (strcmp(args[cur_arg], "even") == 0) {
+ set |= (~0UL/3UL) << 1; /* 0xAAA...AAA */
+ }
+ else if (isdigit((int)*args[cur_arg])) {
+ char *dash = strchr(args[cur_arg], '-');
+
+ low = high = str2uic(args[cur_arg]);
+ if (dash)
+ high = str2uic(dash + 1);
+
+ if (high < low) {
+ unsigned int swap = low;
+ low = high;
+ high = swap;
+ }
+
+ if (low < 1 || high > LONGBITS) {
+ memprintf(err, "'%s %s' supports process numbers from 1 to %d.\n",
+ args[0], args[1], LONGBITS);
+ return -1;
+ }
+ while (low <= high)
+ set |= 1UL << (low++ - 1);
+ }
+ else {
+ memprintf(err,
+ "'%s %s' expects 'all', 'odd', 'even', or a list of process ranges with numbers from 1 to %d.\n",
+ args[0], args[1], LONGBITS);
+ return -1;
+ }
+ cur_arg++;
+ }
+ global.stats_fe->bind_proc = set;
+ }
+ else {
+ memprintf(err, "'%s' only supports 'socket', 'maxconn', 'bind-process' and 'timeout' (got '%s')", args[0], args[1]);
+ return -1;
+ }
+ return 0;
+}
+
+/* Dumps the stats CSV header to the trash buffer. The caller is responsible
+ * for clearing it first if needed.
+ * NOTE: Some tools happen to rely on the field position instead of its name,
+ * so please only append new fields at the end, never in the middle.
+ */
+static void stats_dump_csv_header()
+{
+ chunk_appendf(&trash,
+ "# pxname,svname,"
+ "qcur,qmax,"
+ "scur,smax,slim,stot,"
+ "bin,bout,"
+ "dreq,dresp,"
+ "ereq,econ,eresp,"
+ "wretr,wredis,"
+ "status,weight,act,bck,"
+ "chkfail,chkdown,lastchg,downtime,qlimit,"
+ "pid,iid,sid,throttle,lbtot,tracked,type,"
+ "rate,rate_lim,rate_max,"
+ "check_status,check_code,check_duration,"
+ "hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,"
+ "req_rate,req_rate_max,req_tot,"
+ "cli_abrt,srv_abrt,"
+ "comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,"
+ "\n");
+}
+
+/* print a string from a text buffer to <out>. The format is :
+ * Non-printable chars \t, \n, \r and \e are encoded in C format.
+ * Other non-printable chars are encoded "\xHH". Space and '\' are also escaped.
+ * Printing stops on a null char, when <bsize> is reached, or when there is no
+ * more room in the chunk.
+ */
+static int dump_text(struct chunk *out, const char *buf, int bsize)
+{
+ unsigned char c;
+ int ptr = 0;
+
+ while (buf[ptr] && ptr < bsize) {
+ c = buf[ptr];
+ if (isprint(c) && isascii(c) && c != '\\' && c != ' ') {
+ if (out->len > out->size - 1)
+ break;
+ out->str[out->len++] = c;
+ }
+ else if (c == '\t' || c == '\n' || c == '\r' || c == '\e' || c == '\\' || c == ' ') {
+ if (out->len > out->size - 2)
+ break;
+ out->str[out->len++] = '\\';
+ switch (c) {
+ case ' ': c = ' '; break;
+ case '\t': c = 't'; break;
+ case '\n': c = 'n'; break;
+ case '\r': c = 'r'; break;
+ case '\e': c = 'e'; break;
+ case '\\': c = '\\'; break;
+ }
+ out->str[out->len++] = c;
+ }
+ else {
+ if (out->len > out->size - 4)
+ break;
+ out->str[out->len++] = '\\';
+ out->str[out->len++] = 'x';
+ out->str[out->len++] = hextab[(c >> 4) & 0xF];
+ out->str[out->len++] = hextab[c & 0xF];
+ }
+ ptr++;
+ }
+
+ return ptr;
+}
+
+/* Print a buffer in hex. Printing stops after <bsize> chars or when no more
+ * room is left in the chunk.
+ */
+static int dump_binary(struct chunk *out, const char *buf, int bsize)
+{
+ unsigned char c;
+ int ptr = 0;
+
+ while (ptr < bsize) {
+ c = buf[ptr];
+
+ if (out->len > out->size - 2)
+ break;
+ out->str[out->len++] = hextab[(c >> 4) & 0xF];
+ out->str[out->len++] = hextab[c & 0xF];
+
+ ptr++;
+ }
+ return ptr;
+}
+
+/* Dump the status of a table to a stream interface's
+ * read buffer. It returns 0 if the output buffer is full
+ * and needs to be called again, otherwise non-zero.
+ */
+static int stats_dump_table_head_to_buffer(struct chunk *msg, struct stream_interface *si,
+ struct proxy *proxy, struct proxy *target)
+{
+ struct stream *s = si_strm(si);
+
+ chunk_appendf(msg, "# table: %s, type: %s, size:%d, used:%d\n",
+ proxy->id, stktable_types[proxy->table.type].kw, proxy->table.size, proxy->table.current);
+
+ /* any other information should be dumped here */
+
+ if (target && strm_li(s)->bind_conf->level < ACCESS_LVL_OPER)
+ chunk_appendf(msg, "# contents not dumped due to insufficient privileges\n");
+
+ if (bi_putchk(si_ic(si), msg) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+
+ return 1;
+}
+
+/* Dump a table entry to a stream interface's
+ * read buffer. It returns 0 if the output buffer is full
+ * and needs to be called again, otherwise non-zero.
+ */
+static int stats_dump_table_entry_to_buffer(struct chunk *msg, struct stream_interface *si,
+ struct proxy *proxy, struct stksess *entry)
+{
+ int dt;
+
+ chunk_appendf(msg, "%p:", entry);
+
+ if (proxy->table.type == SMP_T_IPV4) {
+ char addr[INET_ADDRSTRLEN];
+ inet_ntop(AF_INET, (const void *)&entry->key.key, addr, sizeof(addr));
+ chunk_appendf(msg, " key=%s", addr);
+ }
+ else if (proxy->table.type == SMP_T_IPV6) {
+ char addr[INET6_ADDRSTRLEN];
+ inet_ntop(AF_INET6, (const void *)&entry->key.key, addr, sizeof(addr));
+ chunk_appendf(msg, " key=%s", addr);
+ }
+ else if (proxy->table.type == SMP_T_SINT) {
+ chunk_appendf(msg, " key=%u", *(unsigned int *)entry->key.key);
+ }
+ else if (proxy->table.type == SMP_T_STR) {
+ chunk_appendf(msg, " key=");
+ dump_text(msg, (const char *)entry->key.key, proxy->table.key_size);
+ }
+ else {
+ chunk_appendf(msg, " key=");
+ dump_binary(msg, (const char *)entry->key.key, proxy->table.key_size);
+ }
+
+ chunk_appendf(msg, " use=%d exp=%d", entry->ref_cnt - 1, tick_remain(now_ms, entry->expire));
+
+ for (dt = 0; dt < STKTABLE_DATA_TYPES; dt++) {
+ void *ptr;
+
+ if (proxy->table.data_ofs[dt] == 0)
+ continue;
+ if (stktable_data_types[dt].arg_type == ARG_T_DELAY)
+ chunk_appendf(msg, " %s(%d)=", stktable_data_types[dt].name, proxy->table.data_arg[dt].u);
+ else
+ chunk_appendf(msg, " %s=", stktable_data_types[dt].name);
+
+ ptr = stktable_data_ptr(&proxy->table, entry, dt);
+ switch (stktable_data_types[dt].std_type) {
+ case STD_T_SINT:
+ chunk_appendf(msg, "%d", stktable_data_cast(ptr, std_t_sint));
+ break;
+ case STD_T_UINT:
+ chunk_appendf(msg, "%u", stktable_data_cast(ptr, std_t_uint));
+ break;
+ case STD_T_ULL:
+ chunk_appendf(msg, "%lld", stktable_data_cast(ptr, std_t_ull));
+ break;
+ case STD_T_FRQP:
+ chunk_appendf(msg, "%d",
+ read_freq_ctr_period(&stktable_data_cast(ptr, std_t_frqp),
+ proxy->table.data_arg[dt].u));
+ break;
+ }
+ }
+ chunk_appendf(msg, "\n");
+
+ if (bi_putchk(si_ic(si), msg) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+
+ return 1;
+}
+
+static void stats_sock_table_key_request(struct stream_interface *si, char **args, int action)
+{
+ struct stream *s = si_strm(si);
+ struct appctx *appctx = __objt_appctx(si->end);
+ struct proxy *px = appctx->ctx.table.target;
+ struct stksess *ts;
+ uint32_t uint32_key;
+ unsigned char ip6_key[sizeof(struct in6_addr)];
+ long long value;
+ int data_type;
+ int cur_arg;
+ void *ptr;
+ struct freq_ctr_period *frqp;
+
+ appctx->st0 = STAT_CLI_OUTPUT;
+
+ if (!*args[4]) {
+ appctx->ctx.cli.msg = "Key value expected\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return;
+ }
+
+ switch (px->table.type) {
+ case SMP_T_IPV4:
+ uint32_key = htonl(inetaddr_host(args[4]));
+ static_table_key->key = &uint32_key;
+ break;
+ case SMP_T_IPV6:
+ inet_pton(AF_INET6, args[4], ip6_key);
+ static_table_key->key = &ip6_key;
+ break;
+ case SMP_T_SINT:
+ {
+ char *endptr;
+ unsigned long val;
+ errno = 0;
+ val = strtoul(args[4], &endptr, 10);
+ if ((errno == ERANGE && val == ULONG_MAX) ||
+ (errno != 0 && val == 0) || endptr == args[4] ||
+ val > 0xffffffff) {
+ appctx->ctx.cli.msg = "Invalid key\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return;
+ }
+ uint32_key = (uint32_t) val;
+ static_table_key->key = &uint32_key;
+ break;
+ }
+ case SMP_T_STR:
+ static_table_key->key = args[4];
+ static_table_key->key_len = strlen(args[4]);
+ break;
+ default:
+ switch (action) {
+ case STAT_CLI_O_TAB:
+ appctx->ctx.cli.msg = "Showing keys from tables of type other than ip, ipv6, string and integer is not supported\n";
+ break;
+ case STAT_CLI_O_CLR:
+ appctx->ctx.cli.msg = "Removing keys from tables of type other than ip, ipv6, string and integer is not supported\n";
+ break;
+ default:
+ appctx->ctx.cli.msg = "Unknown action\n";
+ break;
+ }
+ appctx->st0 = STAT_CLI_PRINT;
+ return;
+ }
+
+ /* check permissions */
+ if (strm_li(s)->bind_conf->level < ACCESS_LVL_OPER) {
+ appctx->ctx.cli.msg = stats_permission_denied_msg;
+ appctx->st0 = STAT_CLI_PRINT;
+ return;
+ }
+
+ ts = stktable_lookup_key(&px->table, static_table_key);
+
+ switch (action) {
+ case STAT_CLI_O_TAB:
+ if (!ts)
+ return;
+ chunk_reset(&trash);
+ if (!stats_dump_table_head_to_buffer(&trash, si, px, px))
+ return;
+ stats_dump_table_entry_to_buffer(&trash, si, px, ts);
+ return;
+
+ case STAT_CLI_O_CLR:
+ if (!ts)
+ return;
+ if (ts->ref_cnt) {
+ /* don't delete an entry which is currently referenced */
+ appctx->ctx.cli.msg = "Entry currently in use, cannot remove\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return;
+ }
+ stksess_kill(&px->table, ts);
+ break;
+
+ case STAT_CLI_O_SET:
+ if (ts)
+ stktable_touch(&px->table, ts, 1);
+ else {
+ ts = stksess_new(&px->table, static_table_key);
+ if (!ts) {
+ /* memory allocation failure */
+ appctx->ctx.cli.msg = "Unable to allocate a new entry\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return;
+ }
+ stktable_store(&px->table, ts, 1);
+ }
+
+ for (cur_arg = 5; *args[cur_arg]; cur_arg += 2) {
+ if (strncmp(args[cur_arg], "data.", 5) != 0) {
+ appctx->ctx.cli.msg = "\"data.<type>\" followed by a value expected\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return;
+ }
+
+ data_type = stktable_get_data_type(args[cur_arg] + 5);
+ if (data_type < 0) {
+ appctx->ctx.cli.msg = "Unknown data type\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return;
+ }
+
+ if (!px->table.data_ofs[data_type]) {
+ appctx->ctx.cli.msg = "Data type not stored in this table\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return;
+ }
+
+ if (!*args[cur_arg+1] || strl2llrc(args[cur_arg+1], strlen(args[cur_arg+1]), &value) != 0) {
+ appctx->ctx.cli.msg = "Require a valid integer value to store\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return;
+ }
+
+ ptr = stktable_data_ptr(&px->table, ts, data_type);
+
+ switch (stktable_data_types[data_type].std_type) {
+ case STD_T_SINT:
+ stktable_data_cast(ptr, std_t_sint) = value;
+ break;
+ case STD_T_UINT:
+ stktable_data_cast(ptr, std_t_uint) = value;
+ break;
+ case STD_T_ULL:
+ stktable_data_cast(ptr, std_t_ull) = value;
+ break;
+ case STD_T_FRQP:
+ /* We set both the current and previous values. That way
+ * the reported frequency is stable during all the period
+ * then slowly fades out. This allows external tools to
+ * push measures without having to update them too often.
+ */
+ frqp = &stktable_data_cast(ptr, std_t_frqp);
+ frqp->curr_tick = now_ms;
+ frqp->prev_ctr = 0;
+ frqp->curr_ctr = value;
+ break;
+ }
+ }
+ break;
+
+ default:
+ appctx->ctx.cli.msg = "Unknown action\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ break;
+ }
+}
+
+static void stats_sock_table_data_request(struct stream_interface *si, char **args, int action)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+
+ if (action != STAT_CLI_O_TAB && action != STAT_CLI_O_CLR) {
+ appctx->ctx.cli.msg = "content-based lookup is only supported with the \"show\" and \"clear\" actions\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return;
+ }
+
+ /* condition on stored data value */
+ appctx->ctx.table.data_type = stktable_get_data_type(args[3] + 5);
+ if (appctx->ctx.table.data_type < 0) {
+ appctx->ctx.cli.msg = "Unknown data type\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return;
+ }
+
+ if (!((struct proxy *)appctx->ctx.table.target)->table.data_ofs[appctx->ctx.table.data_type]) {
+ appctx->ctx.cli.msg = "Data type not stored in this table\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return;
+ }
+
+ appctx->ctx.table.data_op = get_std_op(args[4]);
+ if (appctx->ctx.table.data_op < 0) {
+ appctx->ctx.cli.msg = "Require an operator among \"eq\", \"ne\", \"le\", \"ge\", \"lt\", \"gt\"\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return;
+ }
+
+ if (!*args[5] || strl2llrc(args[5], strlen(args[5]), &appctx->ctx.table.value) != 0) {
+ appctx->ctx.cli.msg = "Require a valid integer value to compare against\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return;
+ }
+}
+
+static void stats_sock_table_request(struct stream_interface *si, char **args, int action)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+
+ appctx->ctx.table.data_type = -1;
+ appctx->st2 = STAT_ST_INIT;
+ appctx->ctx.table.target = NULL;
+ appctx->ctx.table.proxy = NULL;
+ appctx->ctx.table.entry = NULL;
+ appctx->st0 = action;
+
+ if (*args[2]) {
+ appctx->ctx.table.target = proxy_tbl_by_name(args[2]);
+ if (!appctx->ctx.table.target) {
+ appctx->ctx.cli.msg = "No such table\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return;
+ }
+ }
+ else {
+ if (action != STAT_CLI_O_TAB)
+ goto err_args;
+ return;
+ }
+
+ if (strcmp(args[3], "key") == 0)
+ stats_sock_table_key_request(si, args, action);
+ else if (strncmp(args[3], "data.", 5) == 0)
+ stats_sock_table_data_request(si, args, action);
+ else if (*args[3])
+ goto err_args;
+
+ return;
+
+err_args:
+ switch (action) {
+ case STAT_CLI_O_TAB:
+ appctx->ctx.cli.msg = "Optional argument only supports \"data.<store_data_type>\" <operator> <value> or key <key>\n";
+ break;
+ case STAT_CLI_O_CLR:
+ appctx->ctx.cli.msg = "Required arguments: <table> \"data.<store_data_type>\" <operator> <value> or <table> key <key>\n";
+ break;
+ default:
+ appctx->ctx.cli.msg = "Unknown action\n";
+ break;
+ }
+ appctx->st0 = STAT_CLI_PRINT;
+}
+
+/* Expects to find a frontend named <arg> and returns it, otherwise displays an
+ * adequate error message and returns NULL. This function also expects the
+ * stream level to be admin.
+ */
+static struct proxy *expect_frontend_admin(struct stream *s, struct stream_interface *si, const char *arg)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+ struct proxy *px;
+
+ if (strm_li(s)->bind_conf->level < ACCESS_LVL_ADMIN) {
+ appctx->ctx.cli.msg = stats_permission_denied_msg;
+ appctx->st0 = STAT_CLI_PRINT;
+ return NULL;
+ }
+
+ if (!*arg) {
+ appctx->ctx.cli.msg = "A frontend name is expected.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return NULL;
+ }
+
+ px = proxy_fe_by_name(arg);
+ if (!px) {
+ appctx->ctx.cli.msg = "No such frontend.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return NULL;
+ }
+ return px;
+}
+
+/* Expects to find a backend and a server in <arg> under the form <backend>/<server>,
+ * and returns a pointer to the server. Otherwise it displays an adequate error
+ * message and returns NULL. This function also expects the stream level to be
+ * admin. Note: <arg> is modified to remove the '/'.
+ */
+static struct server *expect_server_admin(struct stream *s, struct stream_interface *si, char *arg)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+ struct proxy *px;
+ struct server *sv;
+ char *line;
+
+ if (strm_li(s)->bind_conf->level < ACCESS_LVL_ADMIN) {
+ appctx->ctx.cli.msg = stats_permission_denied_msg;
+ appctx->st0 = STAT_CLI_PRINT;
+ return NULL;
+ }
+
+ /* split "backend/server" and make <line> point to server */
+ for (line = arg; *line; line++)
+ if (*line == '/') {
+ *line++ = '\0';
+ break;
+ }
+
+ if (!*line || !*arg) {
+ appctx->ctx.cli.msg = "Require 'backend/server'.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return NULL;
+ }
+
+ if (!get_backend_server(arg, line, &px, &sv)) {
+ appctx->ctx.cli.msg = px ? "No such server.\n" : "No such backend.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return NULL;
+ }
+
+ if (px->state == PR_STSTOPPED) {
+ appctx->ctx.cli.msg = "Proxy is disabled.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return NULL;
+ }
+
+ return sv;
+}
+
+/* This function is used for TLS ticket key management. It allows browsing
+ * each reference. The variable <getnext> must contain the current node and
+ * <end> must point to the root node.
+ */
+#if (defined SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB && TLS_TICKETS_NO > 0)
+static inline
+struct tls_keys_ref *tlskeys_list_get_next(struct tls_keys_ref *getnext, struct list *end)
+{
+ struct tls_keys_ref *ref = getnext;
+
+ while (1) {
+
+ /* Get next list entry. */
+ ref = LIST_NEXT(&ref->list, struct tls_keys_ref *, list);
+
+ /* If the entry is the last of the list, return NULL. */
+ if (&ref->list == end)
+ return NULL;
+
+ return ref;
+ }
+}
+
+static inline
+struct tls_keys_ref *tlskeys_ref_lookup_ref(const char *reference)
+{
+ int id;
+ char *error;
+
+ /* If the reference starts with a '#', it is a numeric id. */
+ if (reference[0] == '#') {
+ /* Try to convert the numeric id. If the conversion fails, the lookup fails. */
+ id = strtol(reference + 1, &error, 10);
+ if (*error != '\0')
+ return NULL;
+
+ /* Perform the unique id lookup. */
+ return tlskeys_ref_lookupid(id);
+ }
+
+ /* Perform the string lookup. */
+ return tlskeys_ref_lookup(reference);
+}
+#endif
+
+/* This function is used for map and ACL management. It allows browsing each
+ * reference. The variable <getnext> must contain the current node, <end> must
+ * point to the root node, and <flags> filters the required nodes.
+ */
+static inline
+struct pat_ref *pat_list_get_next(struct pat_ref *getnext, struct list *end,
+ unsigned int flags)
+{
+ struct pat_ref *ref = getnext;
+
+ while (1) {
+
+ /* Get next list entry. */
+ ref = LIST_NEXT(&ref->list, struct pat_ref *, list);
+
+ /* If the entry is the last of the list, return NULL. */
+ if (&ref->list == end)
+ return NULL;
+
+ /* If the entry matches the flags, return it. */
+ if (ref->flags & flags)
+ return ref;
+ }
+}
+
+static inline
+struct pat_ref *pat_ref_lookup_ref(const char *reference)
+{
+ int id;
+ char *error;
+
+ /* If the reference starts with a '#', it is a numeric id. */
+ if (reference[0] == '#') {
+ /* Try to convert the numeric id. If the conversion fails, the lookup fails. */
+ id = strtol(reference + 1, &error, 10);
+ if (*error != '\0')
+ return NULL;
+
+ /* Perform the unique id lookup. */
+ return pat_ref_lookupid(id);
+ }
+
+ /* Perform the string lookup. */
+ return pat_ref_lookup(reference);
+}
+
+/* This function is used for map and ACL management. It allows browsing each
+ * reference.
+ */
+static inline
+struct pattern_expr *pat_expr_get_next(struct pattern_expr *getnext, struct list *end)
+{
+ struct pattern_expr *expr;
+ expr = LIST_NEXT(&getnext->list, struct pattern_expr *, list);
+ if (&expr->list == end)
+ return NULL;
+ return expr;
+}
+
+/* Processes the stats interpreter on the statistics socket. This function is
+ * called from an applet running in a stream interface. The function returns 1
+ * if the request was understood, otherwise zero. It sets appctx->st0 to a value
+ * designating the function which will have to process the request, which can
+ * also be the print function to display the return message set into cli.msg.
+ */
+static int stats_sock_parse_request(struct stream_interface *si, char *line)
+{
+ struct stream *s = si_strm(si);
+ struct appctx *appctx = __objt_appctx(si->end);
+ char *args[MAX_STATS_ARGS + 1];
+ int arg;
+ int i, j;
+
+ while (isspace((unsigned char)*line))
+ line++;
+
+ arg = 0;
+ args[arg] = line;
+
+ while (*line && arg < MAX_STATS_ARGS) {
+ if (*line == '\\') {
+ line++;
+ if (*line == '\0')
+ break;
+ }
+ else if (isspace((unsigned char)*line)) {
+ *line++ = '\0';
+
+ while (isspace((unsigned char)*line))
+ line++;
+
+ args[++arg] = line;
+ continue;
+ }
+
+ line++;
+ }
+
+ while (++arg <= MAX_STATS_ARGS)
+ args[arg] = line;
+
+ /* remove \ */
+ arg = 0;
+ while (*args[arg] != '\0') {
+ j = 0;
+ for (i=0; args[arg][i] != '\0'; i++) {
+ if (args[arg][i] == '\\')
+ continue;
+ args[arg][j] = args[arg][i];
+ j++;
+ }
+ args[arg][j] = '\0';
+ arg++;
+ }
+
+ appctx->ctx.stats.scope_str = 0;
+ appctx->ctx.stats.scope_len = 0;
+ appctx->ctx.stats.flags = 0;
+ if (strcmp(args[0], "show") == 0) {
+ if (strcmp(args[1], "backend") == 0) {
+ appctx->st2 = STAT_ST_INIT;
+ appctx->st0 = STAT_CLI_O_BACKEND;
+ }
+ else if (strcmp(args[1], "stat") == 0) {
+ if (strcmp(args[2], "resolvers") == 0) {
+ struct dns_resolvers *presolvers;
+
+ if (*args[3]) {
+ appctx->ctx.resolvers.ptr = NULL;
+ list_for_each_entry(presolvers, &dns_resolvers, list) {
+ if (strcmp(presolvers->id, args[3]) == 0) {
+ appctx->ctx.resolvers.ptr = presolvers;
+ break;
+ }
+ }
+ if (appctx->ctx.resolvers.ptr == NULL) {
+ appctx->ctx.cli.msg = "Can't find that resolvers section\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+
+ appctx->st2 = STAT_ST_INIT;
+ appctx->st0 = STAT_CLI_O_RESOLVERS;
+ return 1;
+ }
+ else if (*args[2] && *args[3] && *args[4]) {
+ appctx->ctx.stats.flags |= STAT_BOUND;
+ appctx->ctx.stats.iid = atoi(args[2]);
+ appctx->ctx.stats.type = atoi(args[3]);
+ appctx->ctx.stats.sid = atoi(args[4]);
+ }
+
+ appctx->st2 = STAT_ST_INIT;
+ appctx->st0 = STAT_CLI_O_STAT; // stats_dump_stat_to_buffer
+ }
+ else if (strcmp(args[1], "info") == 0) {
+ appctx->st2 = STAT_ST_INIT;
+ appctx->st0 = STAT_CLI_O_INFO; // stats_dump_info_to_buffer
+ }
+ else if (strcmp(args[1], "servers") == 0 && strcmp(args[2], "state") == 0) {
+ appctx->ctx.server_state.backend = NULL;
+
+ /* check if a backend name has been provided */
+ if (*args[3]) {
+ /* read server state from local file */
+ appctx->ctx.server_state.backend = proxy_be_by_name(args[3]);
+
+ if (appctx->ctx.server_state.backend == NULL) {
+ appctx->ctx.cli.msg = "Can't find backend.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+ appctx->st2 = STAT_ST_INIT;
+ appctx->st0 = STAT_CLI_O_SERVERS_STATE; // stats_dump_servers_state_to_buffer
+ return 1;
+ }
+ else if (strcmp(args[1], "pools") == 0) {
+ appctx->st2 = STAT_ST_INIT;
+ appctx->st0 = STAT_CLI_O_POOLS; // stats_dump_pools_to_buffer
+ }
+ else if (strcmp(args[1], "sess") == 0) {
+ appctx->st2 = STAT_ST_INIT;
+ if (strm_li(s)->bind_conf->level < ACCESS_LVL_OPER) {
+ appctx->ctx.cli.msg = stats_permission_denied_msg;
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ if (*args[2] && strcmp(args[2], "all") == 0)
+ appctx->ctx.sess.target = (void *)-1;
+ else if (*args[2])
+ appctx->ctx.sess.target = (void *)strtoul(args[2], NULL, 0);
+ else
+ appctx->ctx.sess.target = NULL;
+ appctx->ctx.sess.section = 0; /* start with stream status */
+ appctx->ctx.sess.pos = 0;
+ appctx->st0 = STAT_CLI_O_SESS; // stats_dump_sess_to_buffer
+ }
+ else if (strcmp(args[1], "errors") == 0) {
+ if (strm_li(s)->bind_conf->level < ACCESS_LVL_OPER) {
+ appctx->ctx.cli.msg = stats_permission_denied_msg;
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ if (*args[2])
+ appctx->ctx.errors.iid = atoi(args[2]);
+ else
+ appctx->ctx.errors.iid = -1;
+ appctx->ctx.errors.px = NULL;
+ appctx->st2 = STAT_ST_INIT;
+ appctx->st0 = STAT_CLI_O_ERR; // stats_dump_errors_to_buffer
+ }
+ else if (strcmp(args[1], "table") == 0) {
+ stats_sock_table_request(si, args, STAT_CLI_O_TAB);
+ }
+ else if (strcmp(args[1], "tls-keys") == 0) {
+#if (defined SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB && TLS_TICKETS_NO > 0)
+ appctx->st2 = STAT_ST_INIT;
+ appctx->st0 = STAT_CLI_O_TLSK;
+#else
+ appctx->ctx.cli.msg = "HAProxy was compiled against a version of OpenSSL "
+ "that doesn't support specifying TLS ticket keys\n";
+ appctx->st0 = STAT_CLI_PRINT;
+#endif
+ return 1;
+ }
+ else if (strcmp(args[1], "map") == 0 ||
+ strcmp(args[1], "acl") == 0) {
+
+ /* Set ACL or MAP flags. */
+ if (args[1][0] == 'm')
+ appctx->ctx.map.display_flags = PAT_REF_MAP;
+ else
+ appctx->ctx.map.display_flags = PAT_REF_ACL;
+
+ /* no parameter: display all available maps */
+ if (!*args[2]) {
+ appctx->st2 = STAT_ST_INIT;
+ appctx->st0 = STAT_CLI_O_PATS;
+ return 1;
+ }
+
+ /* lookup into the refs and check the map flag */
+ appctx->ctx.map.ref = pat_ref_lookup_ref(args[2]);
+ if (!appctx->ctx.map.ref ||
+ !(appctx->ctx.map.ref->flags & appctx->ctx.map.display_flags)) {
+ if (appctx->ctx.map.display_flags == PAT_REF_MAP)
+ appctx->ctx.cli.msg = "Unknown map identifier. Please use #<id> or <file>.\n";
+ else
+ appctx->ctx.cli.msg = "Unknown ACL identifier. Please use #<id> or <file>.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ appctx->st2 = STAT_ST_INIT;
+ appctx->st0 = STAT_CLI_O_PAT;
+ }
+ else { /* unknown "show" argument */
+ return 0;
+ }
+ }
+ else if (strcmp(args[0], "clear") == 0) {
+ if (strcmp(args[1], "counters") == 0) {
+ struct proxy *px;
+ struct server *sv;
+ struct listener *li;
+ int clrall = 0;
+
+ if (strcmp(args[2], "all") == 0)
+ clrall = 1;
+
+ /* check permissions */
+ if (strm_li(s)->bind_conf->level < ACCESS_LVL_OPER ||
+ (clrall && strm_li(s)->bind_conf->level < ACCESS_LVL_ADMIN)) {
+ appctx->ctx.cli.msg = stats_permission_denied_msg;
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ for (px = proxy; px; px = px->next) {
+ if (clrall) {
+ memset(&px->be_counters, 0, sizeof(px->be_counters));
+ memset(&px->fe_counters, 0, sizeof(px->fe_counters));
+ }
+ else {
+ px->be_counters.conn_max = 0;
+ px->be_counters.p.http.rps_max = 0;
+ px->be_counters.sps_max = 0;
+ px->be_counters.cps_max = 0;
+ px->be_counters.nbpend_max = 0;
+
+ px->fe_counters.conn_max = 0;
+ px->fe_counters.p.http.rps_max = 0;
+ px->fe_counters.sps_max = 0;
+ px->fe_counters.cps_max = 0;
+ px->fe_counters.nbpend_max = 0;
+ }
+
+ for (sv = px->srv; sv; sv = sv->next)
+ if (clrall)
+ memset(&sv->counters, 0, sizeof(sv->counters));
+ else {
+ sv->counters.cur_sess_max = 0;
+ sv->counters.nbpend_max = 0;
+ sv->counters.sps_max = 0;
+ }
+
+ list_for_each_entry(li, &px->conf.listeners, by_fe)
+ if (li->counters) {
+ if (clrall)
+ memset(li->counters, 0, sizeof(*li->counters));
+ else
+ li->counters->conn_max = 0;
+ }
+ }
+
+ global.cps_max = 0;
+ global.sps_max = 0;
+ return 1;
+ }
+ else if (strcmp(args[1], "table") == 0) {
+ stats_sock_table_request(si, args, STAT_CLI_O_CLR);
+ /* end of processing */
+ return 1;
+ }
+ else if (strcmp(args[1], "map") == 0 || strcmp(args[1], "acl") == 0) {
+ /* Set ACL or MAP flags. */
+ if (args[1][0] == 'm')
+ appctx->ctx.map.display_flags = PAT_REF_MAP;
+ else
+ appctx->ctx.map.display_flags = PAT_REF_ACL;
+
+ /* no parameter */
+ if (!*args[2]) {
+ if (appctx->ctx.map.display_flags == PAT_REF_MAP)
+ appctx->ctx.cli.msg = "Missing map identifier.\n";
+ else
+ appctx->ctx.cli.msg = "Missing ACL identifier.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ /* lookup into the refs and check the map flag */
+ appctx->ctx.map.ref = pat_ref_lookup_ref(args[2]);
+ if (!appctx->ctx.map.ref ||
+ !(appctx->ctx.map.ref->flags & appctx->ctx.map.display_flags)) {
+ if (appctx->ctx.map.display_flags == PAT_REF_MAP)
+ appctx->ctx.cli.msg = "Unknown map identifier. Please use #<id> or <file>.\n";
+ else
+ appctx->ctx.cli.msg = "Unknown ACL identifier. Please use #<id> or <file>.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ /* Clear all. */
+ pat_ref_prune(appctx->ctx.map.ref);
+
+ /* return response */
+ appctx->st0 = STAT_CLI_PROMPT;
+ return 1;
+ }
+ else {
+ /* unknown "clear" argument */
+ return 0;
+ }
+ }
+ else if (strcmp(args[0], "get") == 0) {
+ if (strcmp(args[1], "weight") == 0) {
+ struct proxy *px;
+ struct server *sv;
+
+ /* split "backend/server" and make <line> point to server */
+ for (line = args[2]; *line; line++)
+ if (*line == '/') {
+ *line++ = '\0';
+ break;
+ }
+
+ if (!*line) {
+ appctx->ctx.cli.msg = "Require 'backend/server'.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ if (!get_backend_server(args[2], line, &px, &sv)) {
+ appctx->ctx.cli.msg = px ? "No such server.\n" : "No such backend.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ /* return server's effective weight at the moment */
+ snprintf(trash.str, trash.size, "%d (initial %d)\n", sv->uweight, sv->iweight);
+ if (bi_putstr(si_ic(si), trash.str) == -1)
+ si_applet_cant_put(si);
+
+ return 1;
+ }
+ else if (strcmp(args[1], "map") == 0 || strcmp(args[1], "acl") == 0) {
+ /* Set flags. */
+ if (args[1][0] == 'm')
+ appctx->ctx.map.display_flags = PAT_REF_MAP;
+ else
+ appctx->ctx.map.display_flags = PAT_REF_ACL;
+
+ /* No parameter. */
+ if (!*args[2] || !*args[3]) {
+ if (appctx->ctx.map.display_flags == PAT_REF_MAP)
+ appctx->ctx.cli.msg = "Missing map identifier and/or key.\n";
+ else
+ appctx->ctx.cli.msg = "Missing ACL identifier and/or key.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ /* lookup into the maps */
+ appctx->ctx.map.ref = pat_ref_lookup_ref(args[2]);
+ if (!appctx->ctx.map.ref) {
+ if (appctx->ctx.map.display_flags == PAT_REF_MAP)
+ appctx->ctx.cli.msg = "Unknown map identifier. Please use #<id> or <file>.\n";
+ else
+ appctx->ctx.cli.msg = "Unknown ACL identifier. Please use #<id> or <file>.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ /* copy input string. The string must be allocated because
+ * it may be used over multiple iterations. It's released
+ * at the end and upon abort anyway.
+ */
+ appctx->ctx.map.chunk.len = strlen(args[3]);
+ appctx->ctx.map.chunk.size = appctx->ctx.map.chunk.len + 1;
+ appctx->ctx.map.chunk.str = strdup(args[3]);
+ if (!appctx->ctx.map.chunk.str) {
+ appctx->ctx.cli.msg = "Out of memory error.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ /* prepare response */
+ appctx->st2 = STAT_ST_INIT;
+ appctx->st0 = STAT_CLI_O_MLOOK;
+ }
+ else { /* neither "get weight" nor "get map/acl" */
+ return 0;
+ }
+ }
+ else if (strcmp(args[0], "set") == 0) {
+ if (strcmp(args[1], "weight") == 0) {
+ struct server *sv;
+ const char *warning;
+
+ sv = expect_server_admin(s, si, args[2]);
+ if (!sv)
+ return 1;
+
+ warning = server_parse_weight_change_request(sv, args[3]);
+ if (warning) {
+ appctx->ctx.cli.msg = warning;
+ appctx->st0 = STAT_CLI_PRINT;
+ }
+ return 1;
+ }
+ else if (strcmp(args[1], "server") == 0) {
+ struct server *sv;
+ const char *warning;
+
+ sv = expect_server_admin(s, si, args[2]);
+ if (!sv)
+ return 1;
+
+ if (strcmp(args[3], "weight") == 0) {
+ warning = server_parse_weight_change_request(sv, args[4]);
+ if (warning) {
+ appctx->ctx.cli.msg = warning;
+ appctx->st0 = STAT_CLI_PRINT;
+ }
+ }
+ else if (strcmp(args[3], "state") == 0) {
+ if (strcmp(args[4], "ready") == 0)
+ srv_adm_set_ready(sv);
+ else if (strcmp(args[4], "drain") == 0)
+ srv_adm_set_drain(sv);
+ else if (strcmp(args[4], "maint") == 0)
+ srv_adm_set_maint(sv);
+ else {
+ appctx->ctx.cli.msg = "'set server <srv> state' expects 'ready', 'drain' or 'maint'.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ }
+ }
+ else if (strcmp(args[3], "health") == 0) {
+ if (sv->track) {
+ appctx->ctx.cli.msg = "cannot change health on a tracking server.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ }
+ else if (strcmp(args[4], "up") == 0) {
+ sv->check.health = sv->check.rise + sv->check.fall - 1;
+ srv_set_running(sv, "changed from CLI");
+ }
+ else if (strcmp(args[4], "stopping") == 0) {
+ sv->check.health = sv->check.rise + sv->check.fall - 1;
+ srv_set_stopping(sv, "changed from CLI");
+ }
+ else if (strcmp(args[4], "down") == 0) {
+ sv->check.health = 0;
+ srv_set_stopped(sv, "changed from CLI");
+ }
+ else {
+ appctx->ctx.cli.msg = "'set server <srv> health' expects 'up', 'stopping', or 'down'.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ }
+ }
+ else if (strcmp(args[3], "agent") == 0) {
+ if (!(sv->agent.state & CHK_ST_ENABLED)) {
+ appctx->ctx.cli.msg = "agent checks are not enabled on this server.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ }
+ else if (strcmp(args[4], "up") == 0) {
+ sv->agent.health = sv->agent.rise + sv->agent.fall - 1;
+ srv_set_running(sv, "changed from CLI");
+ }
+ else if (strcmp(args[4], "down") == 0) {
+ sv->agent.health = 0;
+ srv_set_stopped(sv, "changed from CLI");
+ }
+ else {
+ appctx->ctx.cli.msg = "'set server <srv> agent' expects 'up' or 'down'.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ }
+ }
+ else if (strcmp(args[3], "addr") == 0) {
+ warning = server_parse_addr_change_request(sv, args[4]);
+ if (warning) {
+ appctx->ctx.cli.msg = warning;
+ appctx->st0 = STAT_CLI_PRINT;
+ }
+ }
+ else {
+ appctx->ctx.cli.msg = "'set server <srv>' only supports 'agent', 'health', 'state', 'weight' and 'addr'.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ }
+ return 1;
+ }
+ else if (strcmp(args[1], "timeout") == 0) {
+ if (strcmp(args[2], "cli") == 0) {
+ unsigned timeout;
+ const char *res;
+
+ if (!*args[3]) {
+ appctx->ctx.cli.msg = "Expects an integer value.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ res = parse_time_err(args[3], &timeout, TIME_UNIT_S);
+ if (res || timeout < 1) {
+ appctx->ctx.cli.msg = "Invalid timeout value.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ s->req.rto = s->res.wto = 1 + MS_TO_TICKS(timeout*1000);
+ return 1;
+ }
+ else {
+ appctx->ctx.cli.msg = "'set timeout' only supports 'cli'.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+ else if (strcmp(args[1], "maxconn") == 0) {
+ if (strcmp(args[2], "frontend") == 0) {
+ struct proxy *px;
+ struct listener *l;
+ int v;
+
+ px = expect_frontend_admin(s, si, args[3]);
+ if (!px)
+ return 1;
+
+ if (!*args[4]) {
+ appctx->ctx.cli.msg = "Integer value expected.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ v = atoi(args[4]);
+ if (v < 0) {
+ appctx->ctx.cli.msg = "Value out of range.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ /* OK, the value is fine, so we assign it to the proxy and to all of
+ * its listeners. The blocked ones will be dequeued.
+ */
+ px->maxconn = v;
+ list_for_each_entry(l, &px->conf.listeners, by_fe) {
+ l->maxconn = v;
+ if (l->state == LI_FULL)
+ resume_listener(l);
+ }
+
+ if (px->maxconn > px->feconn && !LIST_ISEMPTY(&strm_fe(s)->listener_queue))
+ dequeue_all_listeners(&strm_fe(s)->listener_queue);
+
+ return 1;
+ }
+ else if (strcmp(args[2], "global") == 0) {
+ int v;
+
+ if (strm_li(s)->bind_conf->level < ACCESS_LVL_ADMIN) {
+ appctx->ctx.cli.msg = stats_permission_denied_msg;
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ if (!*args[3]) {
+ appctx->ctx.cli.msg = "Expects an integer value.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ v = atoi(args[3]);
+ if (v > global.hardmaxconn) {
+ appctx->ctx.cli.msg = "Value out of range.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ /* check for unlimited values */
+ if (v <= 0)
+ v = global.hardmaxconn;
+
+ global.maxconn = v;
+
+ /* Dequeues all of the listeners waiting for a resource */
+ if (!LIST_ISEMPTY(&global_listener_queue))
+ dequeue_all_listeners(&global_listener_queue);
+
+ return 1;
+ }
+ else {
+ appctx->ctx.cli.msg = "'set maxconn' only supports 'frontend' and 'global'.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+ else if (strcmp(args[1], "rate-limit") == 0) {
+ if (strcmp(args[2], "connections") == 0) {
+ if (strcmp(args[3], "global") == 0) {
+ int v;
+
+ if (strm_li(s)->bind_conf->level < ACCESS_LVL_ADMIN) {
+ appctx->ctx.cli.msg = stats_permission_denied_msg;
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ if (!*args[4]) {
+ appctx->ctx.cli.msg = "Expects an integer value.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ v = atoi(args[4]);
+ if (v < 0) {
+ appctx->ctx.cli.msg = "Value out of range.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ global.cps_lim = v;
+
+ /* Dequeues all of the listeners waiting for a resource */
+ if (!LIST_ISEMPTY(&global_listener_queue))
+ dequeue_all_listeners(&global_listener_queue);
+
+ return 1;
+ }
+ else {
+ appctx->ctx.cli.msg = "'set rate-limit connections' only supports 'global'.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+ else if (strcmp(args[2], "sessions") == 0) {
+ if (strcmp(args[3], "global") == 0) {
+ int v;
+
+ if (strm_li(s)->bind_conf->level < ACCESS_LVL_ADMIN) {
+ appctx->ctx.cli.msg = stats_permission_denied_msg;
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ if (!*args[4]) {
+ appctx->ctx.cli.msg = "Expects an integer value.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ v = atoi(args[4]);
+ if (v < 0) {
+ appctx->ctx.cli.msg = "Value out of range.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ global.sps_lim = v;
+
+ /* Dequeues all of the listeners waiting for a resource */
+ if (!LIST_ISEMPTY(&global_listener_queue))
+ dequeue_all_listeners(&global_listener_queue);
+
+ return 1;
+ }
+ else {
+ appctx->ctx.cli.msg = "'set rate-limit sessions' only supports 'global'.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+#ifdef USE_OPENSSL
+ else if (strcmp(args[2], "ssl-sessions") == 0) {
+ if (strcmp(args[3], "global") == 0) {
+ int v;
+
+ if (strm_li(s)->bind_conf->level < ACCESS_LVL_ADMIN) {
+ appctx->ctx.cli.msg = stats_permission_denied_msg;
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ if (!*args[4]) {
+ appctx->ctx.cli.msg = "Expects an integer value.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ v = atoi(args[4]);
+ if (v < 0) {
+ appctx->ctx.cli.msg = "Value out of range.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ global.ssl_lim = v;
+
+ /* Dequeues all of the listeners waiting for a resource */
+ if (!LIST_ISEMPTY(&global_listener_queue))
+ dequeue_all_listeners(&global_listener_queue);
+
+ return 1;
+ }
+ else {
+ appctx->ctx.cli.msg = "'set rate-limit ssl-sessions' only supports 'global'.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+#endif
+ else if (strcmp(args[2], "http-compression") == 0) {
+ if (strcmp(args[3], "global") == 0) {
+ int v;
+
+ if (strm_li(s)->bind_conf->level < ACCESS_LVL_ADMIN) {
+ appctx->ctx.cli.msg = stats_permission_denied_msg;
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ if (!*args[4]) {
+ appctx->ctx.cli.msg = "Expects a maximum input byte rate in kB/s.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ v = atoi(args[4]);
+ global.comp_rate_lim = v * 1024; /* Kilo to bytes. */
+ }
+ else {
+ appctx->ctx.cli.msg = "'set rate-limit http-compression' only supports 'global'.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+ else {
+ appctx->ctx.cli.msg = "'set rate-limit' supports 'connections', 'sessions', 'ssl-sessions', and 'http-compression'.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+ else if (strcmp(args[1], "table") == 0) {
+ stats_sock_table_request(si, args, STAT_CLI_O_SET);
+ }
+ else if (strcmp(args[1], "map") == 0) {
+ char *err;
+
+ /* Set flags. */
+ appctx->ctx.map.display_flags = PAT_REF_MAP;
+
+ /* Expect three parameters: map name, key and new value. */
+ if (!*args[2] || !*args[3] || !*args[4]) {
+ appctx->ctx.cli.msg = "'set map' expects three parameters: map identifier, key and value.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ /* Lookup the reference in the maps. */
+ appctx->ctx.map.ref = pat_ref_lookup_ref(args[2]);
+ if (!appctx->ctx.map.ref) {
+ appctx->ctx.cli.msg = "Unknown map identifier. Please use #<id> or <file>.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ /* If the entry identifier starts with a '#', it is considered a
+ * pointer id.
+ */
+ if (args[3][0] == '#' && args[3][1] == '0' && args[3][2] == 'x') {
+ struct pat_ref_elt *ref;
+ long long int conv;
+ char *error;
+
+ /* Convert argument to integer value. */
+ conv = strtoll(&args[3][1], &error, 16);
+ if (*error != '\0') {
+ appctx->ctx.cli.msg = "Malformed identifier. Please use #<id> or <file>.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ /* Convert and check integer to pointer. */
+ ref = (struct pat_ref_elt *)(long)conv;
+ if ((long long int)(long)ref != conv) {
+ appctx->ctx.cli.msg = "Malformed identifier. Please use #<id> or <file>.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ /* Try to delete the entry. */
+ err = NULL;
+ if (!pat_ref_set_by_id(appctx->ctx.map.ref, ref, args[4], &err)) {
+ if (err)
+ memprintf(&err, "%s.\n", err);
+ appctx->ctx.cli.err = err;
+ appctx->st0 = STAT_CLI_PRINT_FREE;
+ return 1;
+ }
+ }
+ else {
+ /* Else, use the entry identifier as pattern
+ * string, and update the value.
+ */
+ err = NULL;
+ if (!pat_ref_set(appctx->ctx.map.ref, args[3], args[4], &err)) {
+ if (err)
+ memprintf(&err, "%s.\n", err);
+ appctx->ctx.cli.err = err;
+ appctx->st0 = STAT_CLI_PRINT_FREE;
+ return 1;
+ }
+ }
+
+ /* The set is done, send message. */
+ appctx->st0 = STAT_CLI_PROMPT;
+ return 1;
+ }
+#ifdef USE_OPENSSL
+ else if (strcmp(args[1], "ssl") == 0) {
+ if (strcmp(args[2], "ocsp-response") == 0) {
+#if (defined SSL_CTRL_SET_TLSEXT_STATUS_REQ_CB && !defined OPENSSL_NO_OCSP)
+ char *err = NULL;
+
+ /* Expect one parameter: the new response in base64 encoding */
+ if (!*args[3]) {
+ appctx->ctx.cli.msg = "'set ssl ocsp-response' expects response in base64 encoding.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ trash.len = base64dec(args[3], strlen(args[3]), trash.str, trash.size);
+ if (trash.len < 0) {
+ appctx->ctx.cli.msg = "'set ssl ocsp-response' received invalid base64 encoded response.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ if (ssl_sock_update_ocsp_response(&trash, &err)) {
+ if (err) {
+ memprintf(&err, "%s.\n", err);
+ appctx->ctx.cli.err = err;
+ appctx->st0 = STAT_CLI_PRINT_FREE;
+ }
+ return 1;
+ }
+ appctx->ctx.cli.msg = "OCSP Response updated!";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+#else
+ appctx->ctx.cli.msg = "HAProxy was compiled against a version of OpenSSL that doesn't support OCSP stapling.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+#endif
+ }
+ else if (strcmp(args[2], "tls-key") == 0) {
+#if (defined SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB && TLS_TICKETS_NO > 0)
+ /* Expect two parameters: the filename and the new TLS key in base64 encoding */
+ if (!*args[3] || !*args[4]) {
+ appctx->ctx.cli.msg = "'set ssl tls-key' expects a filename and the new TLS key in base64 encoding.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ appctx->ctx.tlskeys.ref = tlskeys_ref_lookup_ref(args[3]);
+ if (!appctx->ctx.tlskeys.ref) {
+ appctx->ctx.cli.msg = "'set ssl tls-key' unable to locate the referenced filename.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ trash.len = base64dec(args[4], strlen(args[4]), trash.str, trash.size);
+ if (trash.len != sizeof(struct tls_sess_key)) {
+ appctx->ctx.cli.msg = "'set ssl tls-key' received invalid base64 encoded TLS key.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ memcpy(appctx->ctx.tlskeys.ref->tlskeys + ((appctx->ctx.tlskeys.ref->tls_ticket_enc_index + 2) % TLS_TICKETS_NO), trash.str, trash.len);
+ appctx->ctx.tlskeys.ref->tls_ticket_enc_index = (appctx->ctx.tlskeys.ref->tls_ticket_enc_index + 1) % TLS_TICKETS_NO;
+
+ appctx->ctx.cli.msg = "TLS ticket key updated!";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+#else
+ appctx->ctx.cli.msg = "HAProxy was compiled against a version of OpenSSL "
+ "that doesn't support specifying TLS ticket keys\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+#endif
+ }
+ else {
+ appctx->ctx.cli.msg = "'set ssl' only supports 'ocsp-response' and 'tls-key'.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+#endif
+ else { /* unknown "set" parameter */
+ return 0;
+ }
+ }
+ else if (strcmp(args[0], "enable") == 0) {
+ if (strcmp(args[1], "agent") == 0) {
+ struct server *sv;
+
+ sv = expect_server_admin(s, si, args[2]);
+ if (!sv)
+ return 1;
+
+ if (!(sv->agent.state & CHK_ST_CONFIGURED)) {
+ appctx->ctx.cli.msg = "Agent was not configured on this server, cannot enable.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ sv->agent.state |= CHK_ST_ENABLED;
+ return 1;
+ }
+ else if (strcmp(args[1], "health") == 0) {
+ struct server *sv;
+
+ sv = expect_server_admin(s, si, args[2]);
+ if (!sv)
+ return 1;
+
+ if (!(sv->check.state & CHK_ST_CONFIGURED)) {
+ appctx->ctx.cli.msg = "Health checks are not configured on this server, cannot enable.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ sv->check.state |= CHK_ST_ENABLED;
+ return 1;
+ }
+ else if (strcmp(args[1], "server") == 0) {
+ struct server *sv;
+
+ sv = expect_server_admin(s, si, args[2]);
+ if (!sv)
+ return 1;
+
+ srv_adm_set_ready(sv);
+ return 1;
+ }
+ else if (strcmp(args[1], "frontend") == 0) {
+ struct proxy *px;
+
+ px = expect_frontend_admin(s, si, args[2]);
+ if (!px)
+ return 1;
+
+ if (px->state == PR_STSTOPPED) {
+ appctx->ctx.cli.msg = "Frontend was previously shut down, cannot enable.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ if (px->state != PR_STPAUSED) {
+ appctx->ctx.cli.msg = "Frontend is already enabled.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ if (!resume_proxy(px)) {
+ appctx->ctx.cli.msg = "Failed to resume frontend, check logs for precise cause (port conflict?).\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ return 1;
+ }
+ else { /* unknown "enable" parameter */
+ appctx->ctx.cli.msg = "'enable' only supports 'agent', 'frontend', 'health', and 'server'.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+ else if (strcmp(args[0], "disable") == 0) {
+ if (strcmp(args[1], "agent") == 0) {
+ struct server *sv;
+
+ sv = expect_server_admin(s, si, args[2]);
+ if (!sv)
+ return 1;
+
+ sv->agent.state &= ~CHK_ST_ENABLED;
+ return 1;
+ }
+ else if (strcmp(args[1], "health") == 0) {
+ struct server *sv;
+
+ sv = expect_server_admin(s, si, args[2]);
+ if (!sv)
+ return 1;
+
+ sv->check.state &= ~CHK_ST_ENABLED;
+ return 1;
+ }
+ else if (strcmp(args[1], "server") == 0) {
+ struct server *sv;
+
+ sv = expect_server_admin(s, si, args[2]);
+ if (!sv)
+ return 1;
+
+ srv_adm_set_maint(sv);
+ return 1;
+ }
+ else if (strcmp(args[1], "frontend") == 0) {
+ struct proxy *px;
+
+ px = expect_frontend_admin(s, si, args[2]);
+ if (!px)
+ return 1;
+
+ if (px->state == PR_STSTOPPED) {
+ appctx->ctx.cli.msg = "Frontend was previously shut down, cannot disable.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ if (px->state == PR_STPAUSED) {
+ appctx->ctx.cli.msg = "Frontend is already disabled.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ if (!pause_proxy(px)) {
+ appctx->ctx.cli.msg = "Failed to pause frontend, check logs for precise cause.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ return 1;
+ }
+ else { /* unknown "disable" parameter */
+ appctx->ctx.cli.msg = "'disable' only supports 'agent', 'frontend', 'health', and 'server'.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+ else if (strcmp(args[0], "shutdown") == 0) {
+ if (strcmp(args[1], "frontend") == 0) {
+ struct proxy *px;
+
+ px = expect_frontend_admin(s, si, args[2]);
+ if (!px)
+ return 1;
+
+ if (px->state == PR_STSTOPPED) {
+ appctx->ctx.cli.msg = "Frontend was already shut down.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ Warning("Proxy %s stopped (FE: %lld conns, BE: %lld conns).\n",
+ px->id, px->fe_counters.cum_conn, px->be_counters.cum_conn);
+ send_log(px, LOG_WARNING, "Proxy %s stopped (FE: %lld conns, BE: %lld conns).\n",
+ px->id, px->fe_counters.cum_conn, px->be_counters.cum_conn);
+ stop_proxy(px);
+ return 1;
+ }
+ else if (strcmp(args[1], "session") == 0) {
+ struct stream *sess, *ptr;
+
+ if (strm_li(s)->bind_conf->level < ACCESS_LVL_ADMIN) {
+ appctx->ctx.cli.msg = stats_permission_denied_msg;
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ if (!*args[2]) {
+ appctx->ctx.cli.msg = "Session pointer expected (use 'show sess').\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ ptr = (void *)strtoul(args[2], NULL, 0);
+
+ /* first, look for the requested stream in the stream table */
+ list_for_each_entry(sess, &streams, list) {
+ if (sess == ptr)
+ break;
+ }
+
+ /* do we have the stream ? */
+ if (sess != ptr) {
+ appctx->ctx.cli.msg = "No such session (use 'show sess').\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ stream_shutdown(sess, SF_ERR_KILLED);
+ return 1;
+ }
+ else if (strcmp(args[1], "sessions") == 0) {
+ if (strcmp(args[2], "server") == 0) {
+ struct server *sv;
+ struct stream *sess, *sess_bck;
+
+ sv = expect_server_admin(s, si, args[3]);
+ if (!sv)
+ return 1;
+
+ /* kill all the streams that are on this server */
+ list_for_each_entry_safe(sess, sess_bck, &sv->actconns, by_srv)
+ if (sess->srv_conn == sv)
+ stream_shutdown(sess, SF_ERR_KILLED);
+
+ return 1;
+ }
+ else {
+ appctx->ctx.cli.msg = "'shutdown sessions' only supports 'server'.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+ else { /* unknown "shutdown" parameter */
+ appctx->ctx.cli.msg = "'shutdown' only supports 'frontend', 'session' and 'sessions'.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+ else if (strcmp(args[0], "del") == 0) {
+ if (strcmp(args[1], "map") == 0 || strcmp(args[1], "acl") == 0) {
+ if (args[1][0] == 'm')
+ appctx->ctx.map.display_flags = PAT_REF_MAP;
+ else
+ appctx->ctx.map.display_flags = PAT_REF_ACL;
+
+ /* Expect two parameters: map name and key. */
+ if (appctx->ctx.map.display_flags == PAT_REF_MAP) {
+ if (!*args[2] || !*args[3]) {
+ appctx->ctx.cli.msg = "This command expects two parameters: map identifier and key.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+
+ else {
+ if (!*args[2] || !*args[3]) {
+ appctx->ctx.cli.msg = "This command expects two parameters: ACL identifier and key.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+
+ /* Lookup the reference in the maps. */
+ appctx->ctx.map.ref = pat_ref_lookup_ref(args[2]);
+ if (!appctx->ctx.map.ref ||
+ !(appctx->ctx.map.ref->flags & appctx->ctx.map.display_flags)) {
+ appctx->ctx.cli.msg = "Unknown map identifier. Please use #<id> or <file>.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ /* If the entry identifier starts with a '#', it is considered a
+ * pointer id.
+ */
+ if (args[3][0] == '#' && args[3][1] == '0' && args[3][2] == 'x') {
+ struct pat_ref_elt *ref;
+ long long int conv;
+ char *error;
+
+ /* Convert argument to integer value. */
+ conv = strtoll(&args[3][1], &error, 16);
+ if (*error != '\0') {
+ appctx->ctx.cli.msg = "Malformed identifier. Please use #<id> or <file>.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ /* Convert and check integer to pointer. */
+ ref = (struct pat_ref_elt *)(long)conv;
+ if ((long long int)(long)ref != conv) {
+ appctx->ctx.cli.msg = "Malformed identifier. Please use #<id> or <file>.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ /* Try to delete the entry. */
+ if (!pat_ref_delete_by_id(appctx->ctx.map.ref, ref)) {
+ /* The entry is not found, send message. */
+ appctx->ctx.cli.msg = "Key not found.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+ else {
+ /* Else, use the entry identifier as pattern
+ * string and try to delete the entry.
+ */
+ if (!pat_ref_delete(appctx->ctx.map.ref, args[3])) {
+ /* The entry is not found, send message. */
+ appctx->ctx.cli.msg = "Key not found.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+
+ /* The deletion is done, send message. */
+ appctx->st0 = STAT_CLI_PROMPT;
+ return 1;
+ }
+ else { /* unknown "del" parameter */
+ appctx->ctx.cli.msg = "'del' only supports 'map' or 'acl'.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+ else if (strcmp(args[0], "add") == 0) {
+ if (strcmp(args[1], "map") == 0 ||
+ strcmp(args[1], "acl") == 0) {
+ int ret;
+ char *err;
+
+ /* Set flags. */
+ if (args[1][0] == 'm')
+ appctx->ctx.map.display_flags = PAT_REF_MAP;
+ else
+ appctx->ctx.map.display_flags = PAT_REF_ACL;
+
+ /* If the keyword is "map", we expect three parameters; if it
+ * is "acl", we expect only two parameters.
+ */
+ if (appctx->ctx.map.display_flags == PAT_REF_MAP) {
+ if (!*args[2] || !*args[3] || !*args[4]) {
+ appctx->ctx.cli.msg = "'add map' expects three parameters: map identifier, key and value.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+ else {
+ if (!*args[2] || !*args[3]) {
+ appctx->ctx.cli.msg = "'add acl' expects two parameters: ACL identifier and pattern.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+
+ /* Lookup for the reference. */
+ appctx->ctx.map.ref = pat_ref_lookup_ref(args[2]);
+ if (!appctx->ctx.map.ref) {
+ if (appctx->ctx.map.display_flags == PAT_REF_MAP)
+ appctx->ctx.cli.msg = "Unknown map identifier. Please use #<id> or <file>.\n";
+ else
+ appctx->ctx.cli.msg = "Unknown ACL identifier. Please use #<id> or <file>.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ /* The command "add acl" is prohibited if the reference
+ * uses samples.
+ */
+ if ((appctx->ctx.map.display_flags & PAT_REF_ACL) &&
+ (appctx->ctx.map.ref->flags & PAT_REF_SMP)) {
+ appctx->ctx.cli.msg = "This ACL is shared with a map containing samples. "
+ "You must use the command 'add map' to add values.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ /* Add value. */
+ err = NULL;
+ if (appctx->ctx.map.display_flags == PAT_REF_MAP)
+ ret = pat_ref_add(appctx->ctx.map.ref, args[3], args[4], &err);
+ else
+ ret = pat_ref_add(appctx->ctx.map.ref, args[3], NULL, &err);
+ if (!ret) {
+ if (err)
+ memprintf(&err, "%s.\n", err);
+ appctx->ctx.cli.err = err;
+ appctx->st0 = STAT_CLI_PRINT_FREE;
+ return 1;
+ }
+
+ /* The add is done, send message. */
+ appctx->st0 = STAT_CLI_PROMPT;
+ return 1;
+ }
+ else { /* unknown "add" parameter */
+ appctx->ctx.cli.msg = "'add' only supports 'map' and 'acl'.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+ }
+ else { /* not "show", "clear", "get", "set", "enable", "disable", "shutdown", "del" nor "add" */
+ return 0;
+ }
+ return 1;
+}
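+
+/* Example interactive session exercising the commands parsed above
+ * (illustrative only: the socket path, proxy and server names depend
+ * entirely on the local configuration):
+ *
+ *   $ socat readline /var/run/haproxy.sock
+ *   > set maxconn global 10000
+ *   > set rate-limit connections global 500
+ *   > disable server bk_web/srv1
+ *   > shutdown sessions server bk_web/srv1
+ */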
+
+/* This I/O handler runs as an applet embedded in a stream interface. It is
+ * used to process I/O from/to the stats unix socket. The system relies on a
+ * state machine handling requests and various responses. We read a request,
+ * then we process it and send the response, and we possibly display a prompt.
+ * Then we can read again. The state is stored in appctx->st0 and is one of the
+ * STAT_CLI_* constants. appctx->st1 is used to indicate whether prompt is enabled
+ * or not.
+ */
+static void cli_io_handler(struct appctx *appctx)
+{
+ struct stream_interface *si = appctx->owner;
+ struct channel *req = si_oc(si);
+ struct channel *res = si_ic(si);
+ int reql;
+ int len;
+
+ if (unlikely(si->state == SI_ST_DIS || si->state == SI_ST_CLO))
+ goto out;
+
+ while (1) {
+ if (appctx->st0 == STAT_CLI_INIT) {
+ /* Stats output not initialized yet */
+ memset(&appctx->ctx.stats, 0, sizeof(appctx->ctx.stats));
+ appctx->st0 = STAT_CLI_GETREQ;
+ }
+ else if (appctx->st0 == STAT_CLI_END) {
+ /* Let's close for real now. We just close the request
+ * side, the conditions below will complete if needed.
+ */
+ si_shutw(si);
+ break;
+ }
+ else if (appctx->st0 == STAT_CLI_GETREQ) {
+ /* ensure we have some output room left in the event we
+ * would want to return some info right after parsing.
+ */
+ if (buffer_almost_full(si_ib(si))) {
+ si_applet_cant_put(si);
+ break;
+ }
+
+ reql = bo_getline(si_oc(si), trash.str, trash.size);
+ if (reql <= 0) { /* closed or EOL not found */
+ if (reql == 0)
+ break;
+ appctx->st0 = STAT_CLI_END;
+ continue;
+ }
+
+ /* look for a possible semicolon. If we find one, we
+ * replace it with an LF and skip only this part.
+ */
+ for (len = 0; len < reql; len++)
+ if (trash.str[len] == ';') {
+ trash.str[len] = '\n';
+ reql = len + 1;
+ break;
+ }
+
+ /* now it is time to check that we have a full line,
+ * remove the trailing \n and possibly \r, then cut the
+ * line.
+ */
+ len = reql - 1;
+ if (trash.str[len] != '\n') {
+ appctx->st0 = STAT_CLI_END;
+ continue;
+ }
+
+ if (len && trash.str[len-1] == '\r')
+ len--;
+
+ trash.str[len] = '\0';
+
+ appctx->st0 = STAT_CLI_PROMPT;
+ if (len) {
+ if (strcmp(trash.str, "quit") == 0) {
+ appctx->st0 = STAT_CLI_END;
+ continue;
+ }
+ else if (strcmp(trash.str, "prompt") == 0)
+ appctx->st1 = !appctx->st1;
+ else if (strcmp(trash.str, "help") == 0 ||
+ !stats_sock_parse_request(si, trash.str)) {
+ appctx->ctx.cli.msg = stats_sock_usage_msg;
+ appctx->st0 = STAT_CLI_PRINT;
+ }
+ /* NB: stats_sock_parse_request() may have put
+ * another STAT_CLI_O_* into appctx->st0.
+ */
+ }
+ else if (!appctx->st1) {
+ /* if prompt is disabled, print help on empty lines,
+ * so that the user at least knows how to enable
+ * prompt and find help.
+ */
+ appctx->ctx.cli.msg = stats_sock_usage_msg;
+ appctx->st0 = STAT_CLI_PRINT;
+ }
+
+ /* re-adjust req buffer */
+ bo_skip(si_oc(si), reql);
+ req->flags |= CF_READ_DONTWAIT; /* we plan to read small requests */
+ }
+ else { /* output functions */
+ switch (appctx->st0) {
+ case STAT_CLI_PROMPT:
+ break;
+ case STAT_CLI_PRINT:
+ if (bi_putstr(si_ic(si), appctx->ctx.cli.msg) != -1)
+ appctx->st0 = STAT_CLI_PROMPT;
+ else
+ si_applet_cant_put(si);
+ break;
+ case STAT_CLI_PRINT_FREE:
+ if (bi_putstr(si_ic(si), appctx->ctx.cli.err) != -1) {
+ free(appctx->ctx.cli.err);
+ appctx->st0 = STAT_CLI_PROMPT;
+ }
+ else
+ si_applet_cant_put(si);
+ break;
+ case STAT_CLI_O_BACKEND:
+ if (stats_dump_backend_to_buffer(si))
+ appctx->st0 = STAT_CLI_PROMPT;
+ break;
+ case STAT_CLI_O_INFO:
+ if (stats_dump_info_to_buffer(si))
+ appctx->st0 = STAT_CLI_PROMPT;
+ break;
+ case STAT_CLI_O_SERVERS_STATE:
+ if (stats_dump_servers_state_to_buffer(si))
+ appctx->st0 = STAT_CLI_PROMPT;
+ break;
+ case STAT_CLI_O_STAT:
+ if (stats_dump_stat_to_buffer(si, NULL))
+ appctx->st0 = STAT_CLI_PROMPT;
+ break;
+ case STAT_CLI_O_RESOLVERS:
+ if (stats_dump_resolvers_to_buffer(si))
+ appctx->st0 = STAT_CLI_PROMPT;
+ break;
+ case STAT_CLI_O_SESS:
+ if (stats_dump_sess_to_buffer(si))
+ appctx->st0 = STAT_CLI_PROMPT;
+ break;
+ case STAT_CLI_O_ERR: /* errors dump */
+ if (stats_dump_errors_to_buffer(si))
+ appctx->st0 = STAT_CLI_PROMPT;
+ break;
+ case STAT_CLI_O_TAB:
+ case STAT_CLI_O_CLR:
+ if (stats_table_request(si, appctx->st0))
+ appctx->st0 = STAT_CLI_PROMPT;
+ break;
+ case STAT_CLI_O_PATS:
+ if (stats_pats_list(si))
+ appctx->st0 = STAT_CLI_PROMPT;
+ break;
+ case STAT_CLI_O_PAT:
+ if (stats_pat_list(si))
+ appctx->st0 = STAT_CLI_PROMPT;
+ break;
+ case STAT_CLI_O_MLOOK:
+ if (stats_map_lookup(si))
+ appctx->st0 = STAT_CLI_PROMPT;
+ break;
+ case STAT_CLI_O_POOLS:
+ if (stats_dump_pools_to_buffer(si))
+ appctx->st0 = STAT_CLI_PROMPT;
+ break;
+#if (defined SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB && TLS_TICKETS_NO > 0)
+ case STAT_CLI_O_TLSK:
+ if (stats_tlskeys_list(si))
+ appctx->st0 = STAT_CLI_PROMPT;
+ break;
+#endif
+ default: /* abnormal state */
+ si->flags |= SI_FL_ERR;
+ break;
+ }
+
+ /* The post-command prompt is either LF alone or LF + '> ' in interactive mode */
+ if (appctx->st0 == STAT_CLI_PROMPT) {
+ if (bi_putstr(si_ic(si), appctx->st1 ? "\n> " : "\n") != -1)
+ appctx->st0 = STAT_CLI_GETREQ;
+ else
+ si_applet_cant_put(si);
+ }
+
+ /* If the output functions are still there, it means they require more room. */
+ if (appctx->st0 >= STAT_CLI_OUTPUT)
+ break;
+
+ /* Now we close the output if one of the writers did so,
+ * or if we're not in interactive mode and the request
+ * buffer is empty. This still allows pipelined requests
+ * to be sent in non-interactive mode.
+ */
+ if ((res->flags & (CF_SHUTW|CF_SHUTW_NOW)) || (!appctx->st1 && !req->buf->o)) {
+ appctx->st0 = STAT_CLI_END;
+ continue;
+ }
+
+ /* switch state back to GETREQ to read next requests */
+ appctx->st0 = STAT_CLI_GETREQ;
+ }
+ }
+
+ if ((res->flags & CF_SHUTR) && (si->state == SI_ST_EST)) {
+ DPRINTF(stderr, "%s@%d: si to buf closed. req=%08x, res=%08x, st=%d\n",
+ __FUNCTION__, __LINE__, req->flags, res->flags, si->state);
+ /* Other side has closed, let's abort if we have no more processing to do
+ * and nothing more to consume. This is comparable to a broken pipe, so
+ * we forward the close to the request side so that it flows upstream to
+ * the client.
+ */
+ si_shutw(si);
+ }
+
+ if ((req->flags & CF_SHUTW) && (si->state == SI_ST_EST) && (appctx->st0 < STAT_CLI_OUTPUT)) {
+ DPRINTF(stderr, "%s@%d: buf to si closed. req=%08x, res=%08x, st=%d\n",
+ __FUNCTION__, __LINE__, req->flags, res->flags, si->state);
+ /* We have no more processing to do, and nothing more to send, and
+ * the client side has closed. So we'll forward this state downstream
+ * on the response buffer.
+ */
+ si_shutr(si);
+ res->flags |= CF_READ_NULL;
+ }
+
+ out:
+ DPRINTF(stderr, "%s@%d: st=%d, rqf=%x, rpf=%x, rqh=%d, rqs=%d, rh=%d, rs=%d\n",
+ __FUNCTION__, __LINE__,
+ si->state, req->flags, res->flags, req->buf->i, req->buf->o, res->buf->i, res->buf->o);
+}
+
+/* This function dumps information onto the stream interface's read buffer.
+ * It returns 0 as long as it does not complete, non-zero upon completion.
+ * No state is used.
+ */
+static int stats_dump_info_to_buffer(struct stream_interface *si)
+{
+ unsigned int up = (now.tv_sec - start_date.tv_sec);
+
+#ifdef USE_OPENSSL
+ int ssl_sess_rate = read_freq_ctr(&global.ssl_per_sec);
+ int ssl_key_rate = read_freq_ctr(&global.ssl_fe_keys_per_sec);
+ int ssl_reuse = 0;
+
+ if (ssl_key_rate < ssl_sess_rate) {
+ /* count the ssl reuse ratio and avoid overflows in both directions */
+ ssl_reuse = 100 - (100 * ssl_key_rate + (ssl_sess_rate - 1) / 2) / ssl_sess_rate;
+ }
+#endif
+
+ chunk_printf(&trash,
+ "Name: " PRODUCT_NAME "\n"
+ "Version: " HAPROXY_VERSION "\n"
+ "Release_date: " HAPROXY_DATE "\n"
+ "Nbproc: %d\n"
+ "Process_num: %d\n"
+ "Pid: %d\n"
+ "Uptime: %dd %dh%02dm%02ds\n"
+ "Uptime_sec: %d\n"
+ "Memmax_MB: %d\n"
+ "Ulimit-n: %d\n"
+ "Maxsock: %d\n"
+ "Maxconn: %d\n"
+ "Hard_maxconn: %d\n"
+ "CurrConns: %d\n"
+ "CumConns: %d\n"
+ "CumReq: %u\n"
+#ifdef USE_OPENSSL
+ "MaxSslConns: %d\n"
+ "CurrSslConns: %d\n"
+ "CumSslConns: %d\n"
+#endif
+ "Maxpipes: %d\n"
+ "PipesUsed: %d\n"
+ "PipesFree: %d\n"
+ "ConnRate: %d\n"
+ "ConnRateLimit: %d\n"
+ "MaxConnRate: %d\n"
+ "SessRate: %d\n"
+ "SessRateLimit: %d\n"
+ "MaxSessRate: %d\n"
+#ifdef USE_OPENSSL
+ "SslRate: %d\n"
+ "SslRateLimit: %d\n"
+ "MaxSslRate: %d\n"
+ "SslFrontendKeyRate: %d\n"
+ "SslFrontendMaxKeyRate: %d\n"
+ "SslFrontendSessionReuse_pct: %d\n"
+ "SslBackendKeyRate: %d\n"
+ "SslBackendMaxKeyRate: %d\n"
+ "SslCacheLookups: %u\n"
+ "SslCacheMisses: %u\n"
+#endif
+ "CompressBpsIn: %u\n"
+ "CompressBpsOut: %u\n"
+ "CompressBpsRateLim: %u\n"
+#ifdef USE_ZLIB
+ "ZlibMemUsage: %ld\n"
+ "MaxZlibMemUsage: %ld\n"
+#endif
+ "Tasks: %d\n"
+ "Run_queue: %d\n"
+ "Idle_pct: %d\n"
+ "node: %s\n"
+ "description: %s\n"
+ "",
+ global.nbproc,
+ relative_pid,
+ pid,
+ up / 86400, (up % 86400) / 3600, (up % 3600) / 60, (up % 60),
+ up,
+ global.rlimit_memmax,
+ global.rlimit_nofile,
+ global.maxsock, global.maxconn, global.hardmaxconn,
+ actconn, totalconn, global.req_count,
+#ifdef USE_OPENSSL
+ global.maxsslconn, sslconns, totalsslconns,
+#endif
+ global.maxpipes, pipes_used, pipes_free,
+ read_freq_ctr(&global.conn_per_sec), global.cps_lim, global.cps_max,
+ read_freq_ctr(&global.sess_per_sec), global.sps_lim, global.sps_max,
+#ifdef USE_OPENSSL
+ ssl_sess_rate, global.ssl_lim, global.ssl_max,
+ ssl_key_rate, global.ssl_fe_keys_max,
+ ssl_reuse,
+ read_freq_ctr(&global.ssl_be_keys_per_sec), global.ssl_be_keys_max,
+ global.shctx_lookups, global.shctx_misses,
+#endif
+ read_freq_ctr(&global.comp_bps_in), read_freq_ctr(&global.comp_bps_out),
+ global.comp_rate_lim,
+#ifdef USE_ZLIB
+ zlib_used_memory, global.maxzlibmem,
+#endif
+ nb_tasks_cur, run_queue_cur, idle_pct,
+ global.node, global.desc ? global.desc : ""
+ );
+
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+
+ return 1;
+}
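+
+/* Sample "show info" output produced by the function above (truncated;
+ * all values are illustrative):
+ *
+ *   Name: HAProxy
+ *   Nbproc: 1
+ *   Process_num: 1
+ *   Uptime: 0d 0h02m31s
+ *   Maxconn: 2000
+ *   CurrConns: 3
+ *   ConnRate: 12
+ *   Idle_pct: 99
+ */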
+
+/* Dumps server state information into <buf> for all the servers found in <backend>.
+ * This information covers all the parameters which may change during HAProxy's
+ * runtime. By default, we only export to the last known server state file format.
+ * This information can be used at next startup to recover the same level of
+ * server state.
+ */
+static void dump_servers_state(struct proxy *backend, struct chunk *buf)
+{
+ struct server *srv;
+ char srv_addr[INET6_ADDRSTRLEN + 1];
+ time_t srv_time_since_last_change;
+ int bk_f_forced_id, srv_f_forced_id;
+
+ /* we don't want to report any state if the backend is not enabled on this process */
+ if (backend->bind_proc && !(backend->bind_proc & (1UL << (relative_pid - 1))))
+ return;
+
+ srv = backend->srv;
+
+ while (srv) {
+ srv_addr[0] = '\0';
+ srv_time_since_last_change = 0;
+ bk_f_forced_id = 0;
+ srv_f_forced_id = 0;
+
+ switch (srv->addr.ss_family) {
+ case AF_INET:
+ inet_ntop(srv->addr.ss_family, &((struct sockaddr_in *)&srv->addr)->sin_addr,
+ srv_addr, INET_ADDRSTRLEN + 1);
+ break;
+ case AF_INET6:
+ inet_ntop(srv->addr.ss_family, &((struct sockaddr_in6 *)&srv->addr)->sin6_addr,
+ srv_addr, INET6_ADDRSTRLEN + 1);
+ break;
+ }
+ srv_time_since_last_change = now.tv_sec - srv->last_change;
+ bk_f_forced_id = backend->options & PR_O_FORCED_ID ? 1 : 0;
+ srv_f_forced_id = srv->flags & SRV_F_FORCED_ID ? 1 : 0;
+
+ chunk_appendf(buf,
+ "%d %s "
+ "%d %s %s "
+ "%d %d %d %d %ld "
+ "%d %d %d %d %d "
+ "%d %d"
+ "\n",
+ backend->uuid, backend->id,
+ srv->puid, srv->id, srv_addr,
+ srv->state, srv->admin, srv->uweight, srv->iweight, (long int)srv_time_since_last_change,
+ srv->check.status, srv->check.result, srv->check.health, srv->check.state, srv->agent.state,
+ bk_f_forced_id, srv_f_forced_id);
+
+ srv = srv->next;
+ }
+}
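+
+/* A dumped line follows the SRV_STATE_FILE_FIELD_NAMES order, e.g.
+ * (illustrative values):
+ *
+ *   3 bk_web 1 srv1 192.0.2.10 2 0 1 1 3600 6 3 4 6 0 0 0
+ *
+ * i.e. backend uuid and name, server puid, name and address, operational
+ * and admin states, user/initial weights, seconds since the last state
+ * change, check and agent states, and the two forced-id flags.
+ */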
+
+/* Parses the backend list and simply reports backend names */
+static int stats_dump_backend_to_buffer(struct stream_interface *si)
+{
+ extern struct proxy *proxy;
+ struct proxy *curproxy;
+
+ chunk_reset(&trash);
+ chunk_printf(&trash, "# name\n");
+
+ for (curproxy = proxy; curproxy != NULL; curproxy = curproxy->next) {
+ /* looking for backends only */
+ if (!(curproxy->cap & PR_CAP_BE))
+ continue;
+
+ /* we don't want to list a backend which is not bound to this process */
+ if (curproxy->bind_proc && !(curproxy->bind_proc & (1UL << (relative_pid - 1))))
+ continue;
+
+ chunk_appendf(&trash, "%s\n", curproxy->id);
+ }
+
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+
+ return 1;
+}
+
+/* Parses the backend list, or uses the backend name provided by the user, to
+ * dump the states of servers to the buffer.
+ */
+static int stats_dump_servers_state_to_buffer(struct stream_interface *si)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+ extern struct proxy *proxy;
+ struct proxy *curproxy;
+
+ chunk_reset(&trash);
+
+ chunk_printf(&trash, "%d\n# %s\n", SRV_STATE_FILE_VERSION, SRV_STATE_FILE_FIELD_NAMES);
+
+ if (appctx->ctx.server_state.backend) {
+ dump_servers_state(appctx->ctx.server_state.backend, &trash);
+ }
+ else {
+ for (curproxy = proxy; curproxy != NULL; curproxy = curproxy->next) {
+ /* servers are only in backends */
+ if (!(curproxy->cap & PR_CAP_BE))
+ continue;
+
+ dump_servers_state(curproxy, &trash);
+ }
+ }
+
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+
+ return 1;
+}
+
+/* This function dumps memory usage information onto the stream interface's
+ * read buffer. It returns 0 as long as it does not complete, non-zero upon
+ * completion. No state is used.
+ */
+static int stats_dump_pools_to_buffer(struct stream_interface *si)
+{
+ dump_pools_to_trash();
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+ return 1;
+}
+
+/* Dumps a frontend's line to the trash for the current proxy <px> and uses
+ * the state from stream interface <si>. The caller is responsible for clearing
+ * the trash if needed. Returns non-zero if it emits anything, zero otherwise.
+ */
+static int stats_dump_fe_stats(struct stream_interface *si, struct proxy *px)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+ int i;
+
+ if (!(px->cap & PR_CAP_FE))
+ return 0;
+
+ if ((appctx->ctx.stats.flags & STAT_BOUND) && !(appctx->ctx.stats.type & (1 << STATS_TYPE_FE)))
+ return 0;
+
+ if (appctx->ctx.stats.flags & STAT_FMT_HTML) {
+ chunk_appendf(&trash,
+ /* name, queue */
+ "<tr class=\"frontend\">");
+
+ if (px->cap & PR_CAP_BE && px->srv && (appctx->ctx.stats.flags & STAT_ADMIN)) {
+ /* Column sub-heading for Enable or Disable server */
+ chunk_appendf(&trash, "<td></td>");
+ }
+
+ chunk_appendf(&trash,
+ "<td class=ac>"
+ "<a name=\"%s/Frontend\"></a>"
+ "<a class=lfsb href=\"#%s/Frontend\">Frontend</a></td>"
+ "<td colspan=3></td>"
+ "",
+ px->id, px->id);
+
+ chunk_appendf(&trash,
+ /* sessions rate : current */
+ "<td><u>%s<div class=tips><table class=det>"
+ "<tr><th>Current connection rate:</th><td>%s/s</td></tr>"
+ "<tr><th>Current session rate:</th><td>%s/s</td></tr>"
+ "",
+ U2H(read_freq_ctr(&px->fe_sess_per_sec)),
+ U2H(read_freq_ctr(&px->fe_conn_per_sec)),
+ U2H(read_freq_ctr(&px->fe_sess_per_sec)));
+
+ if (px->mode == PR_MODE_HTTP)
+ chunk_appendf(&trash,
+ "<tr><th>Current request rate:</th><td>%s/s</td></tr>",
+ U2H(read_freq_ctr(&px->fe_req_per_sec)));
+
+ chunk_appendf(&trash,
+ "</table></div></u></td>"
+ /* sessions rate : max */
+ "<td><u>%s<div class=tips><table class=det>"
+ "<tr><th>Max connection rate:</th><td>%s/s</td></tr>"
+ "<tr><th>Max session rate:</th><td>%s/s</td></tr>"
+ "",
+ U2H(px->fe_counters.sps_max),
+ U2H(px->fe_counters.cps_max),
+ U2H(px->fe_counters.sps_max));
+
+ if (px->mode == PR_MODE_HTTP)
+ chunk_appendf(&trash,
+ "<tr><th>Max request rate:</th><td>%s/s</td></tr>",
+ U2H(px->fe_counters.p.http.rps_max));
+
+ chunk_appendf(&trash,
+ "</table></div></u></td>"
+ /* sessions rate : limit */
+ "<td>%s</td>",
+ LIM2A(px->fe_sps_lim, "-"));
+
+ chunk_appendf(&trash,
+ /* sessions: current, max, limit, total */
+ "<td>%s</td><td>%s</td><td>%s</td>"
+ "<td><u>%s<div class=tips><table class=det>"
+ "<tr><th>Cum. connections:</th><td>%s</td></tr>"
+ "<tr><th>Cum. sessions:</th><td>%s</td></tr>"
+ "",
+ U2H(px->feconn), U2H(px->fe_counters.conn_max), U2H(px->maxconn),
+ U2H(px->fe_counters.cum_sess),
+ U2H(px->fe_counters.cum_conn),
+ U2H(px->fe_counters.cum_sess));
+
+ /* http response (via hover): 1xx, 2xx, 3xx, 4xx, 5xx, other */
+ if (px->mode == PR_MODE_HTTP) {
+ chunk_appendf(&trash,
+ "<tr><th>Cum. HTTP requests:</th><td>%s</td></tr>"
+ "<tr><th>- HTTP 1xx responses:</th><td>%s</td></tr>"
+ "<tr><th>- HTTP 2xx responses:</th><td>%s</td></tr>"
+ "<tr><th> Compressed 2xx:</th><td>%s</td><td>(%d%%)</td></tr>"
+ "<tr><th>- HTTP 3xx responses:</th><td>%s</td></tr>"
+ "<tr><th>- HTTP 4xx responses:</th><td>%s</td></tr>"
+ "<tr><th>- HTTP 5xx responses:</th><td>%s</td></tr>"
+ "<tr><th>- other responses:</th><td>%s</td></tr>"
+ "<tr><th>Intercepted requests:</th><td>%s</td></tr>"
+ "",
+ U2H(px->fe_counters.p.http.cum_req),
+ U2H(px->fe_counters.p.http.rsp[1]),
+ U2H(px->fe_counters.p.http.rsp[2]),
+ U2H(px->fe_counters.p.http.comp_rsp),
+ px->fe_counters.p.http.rsp[2] ?
+ (int)(100*px->fe_counters.p.http.comp_rsp/px->fe_counters.p.http.rsp[2]) : 0,
+ U2H(px->fe_counters.p.http.rsp[3]),
+ U2H(px->fe_counters.p.http.rsp[4]),
+ U2H(px->fe_counters.p.http.rsp[5]),
+ U2H(px->fe_counters.p.http.rsp[0]),
+ U2H(px->fe_counters.intercepted_req));
+ }
+
+ chunk_appendf(&trash,
+ "</table></div></u></td>"
+ /* sessions: lbtot, lastsess */
+ "<td></td><td></td>"
+ /* bytes : in */
+ "<td>%s</td>"
+ "",
+ U2H(px->fe_counters.bytes_in));
+
+ chunk_appendf(&trash,
+ /* bytes:out + compression stats (via hover): comp_in, comp_out, comp_byp */
+ "<td>%s%s<div class=tips><table class=det>"
+ "<tr><th>Response bytes in:</th><td>%s</td></tr>"
+ "<tr><th>Compression in:</th><td>%s</td></tr>"
+ "<tr><th>Compression out:</th><td>%s</td><td>(%d%%)</td></tr>"
+ "<tr><th>Compression bypass:</th><td>%s</td></tr>"
+ "<tr><th>Total bytes saved:</th><td>%s</td><td>(%d%%)</td></tr>"
+ "</table></div>%s</td>",
+ (px->fe_counters.comp_in || px->fe_counters.comp_byp) ? "<u>":"",
+ U2H(px->fe_counters.bytes_out),
+ U2H(px->fe_counters.bytes_out),
+ U2H(px->fe_counters.comp_in),
+ U2H(px->fe_counters.comp_out),
+ px->fe_counters.comp_in ? (int)(px->fe_counters.comp_out * 100 / px->fe_counters.comp_in) : 0,
+ U2H(px->fe_counters.comp_byp),
+ U2H(px->fe_counters.comp_in - px->fe_counters.comp_out),
+ px->fe_counters.bytes_out ? (int)((px->fe_counters.comp_in - px->fe_counters.comp_out) * 100 / px->fe_counters.bytes_out) : 0,
+ (px->fe_counters.comp_in || px->fe_counters.comp_byp) ? "</u>":"");
+
+ chunk_appendf(&trash,
+ /* denied: req, resp */
+ "<td>%s</td><td>%s</td>"
+ /* errors : request, connect, response */
+ "<td>%s</td><td></td><td></td>"
+ /* warnings: retries, redispatches */
+ "<td></td><td></td>"
+ /* server status : reflect frontend status */
+ "<td class=ac>%s</td>"
+ /* rest of server: nothing */
+ "<td class=ac colspan=8></td></tr>"
+ "",
+ U2H(px->fe_counters.denied_req), U2H(px->fe_counters.denied_resp),
+ U2H(px->fe_counters.failed_req),
+ px->state == PR_STREADY ? "OPEN" :
+ px->state == PR_STFULL ? "FULL" : "STOP");
+ }
+ else { /* CSV mode */
+ chunk_appendf(&trash,
+ /* pxid, name, queue cur, queue max, */
+ "%s,FRONTEND,,,"
+ /* sessions : current, max, limit, total */
+ "%d,%d,%d,%lld,"
+ /* bytes : in, out */
+ "%lld,%lld,"
+ /* denied: req, resp */
+ "%lld,%lld,"
+ /* errors : request, connect, response */
+ "%lld,,,"
+ /* warnings: retries, redispatches */
+ ",,"
+ /* server status : reflect frontend status */
+ "%s,"
+ /* rest of server: nothing */
+ ",,,,,,,,"
+ /* pid, iid, sid, throttle, lbtot, tracked, type */
+ "%d,%d,0,,,,%d,"
+ /* rate, rate_lim, rate_max */
+ "%u,%u,%u,"
+ /* check_status, check_code, check_duration */
+ ",,,",
+ px->id,
+ px->feconn, px->fe_counters.conn_max, px->maxconn, px->fe_counters.cum_sess,
+ px->fe_counters.bytes_in, px->fe_counters.bytes_out,
+ px->fe_counters.denied_req, px->fe_counters.denied_resp,
+ px->fe_counters.failed_req,
+ px->state == PR_STREADY ? "OPEN" :
+ px->state == PR_STFULL ? "FULL" : "STOP",
+ relative_pid, px->uuid, STATS_TYPE_FE,
+ read_freq_ctr(&px->fe_sess_per_sec),
+ px->fe_sps_lim, px->fe_counters.sps_max);
+
+ /* http response: 1xx, 2xx, 3xx, 4xx, 5xx, other */
+ if (px->mode == PR_MODE_HTTP) {
+ for (i=1; i<6; i++)
+ chunk_appendf(&trash, "%lld,", px->fe_counters.p.http.rsp[i]);
+ chunk_appendf(&trash, "%lld,", px->fe_counters.p.http.rsp[0]);
+ }
+ else
+ chunk_appendf(&trash, ",,,,,,");
+
+ /* failed health analyses */
+ chunk_appendf(&trash, ",");
+
+ /* requests : req_rate, req_rate_max, req_tot, */
+ chunk_appendf(&trash, "%u,%u,%lld,",
+ read_freq_ctr(&px->fe_req_per_sec),
+ px->fe_counters.p.http.rps_max, px->fe_counters.p.http.cum_req);
+
+ /* errors: cli_aborts, srv_aborts */
+ chunk_appendf(&trash, ",,");
+
+ /* compression: in, out, bypassed */
+ chunk_appendf(&trash, "%lld,%lld,%lld,",
+ px->fe_counters.comp_in, px->fe_counters.comp_out, px->fe_counters.comp_byp);
+
+ /* compression: comp_rsp */
+ chunk_appendf(&trash, "%lld,",
+ px->fe_counters.p.http.comp_rsp);
+
+ /* lastsess, last_chk, last_agt, qtime, ctime, rtime, ttime, */
+ chunk_appendf(&trash, ",,,,,,,");
+
+ /* finish with EOL */
+ chunk_appendf(&trash, "\n");
+ }
+ return 1;
+}
+
+/* Dumps a line for listener <l> and proxy <px> to the trash and uses the state
+ * from stream interface <si>, and stats flags <flags>. The caller is responsible
+ * for clearing the trash if needed. Returns non-zero if it emits anything, zero
+ * otherwise.
+ */
+static int stats_dump_li_stats(struct stream_interface *si, struct proxy *px, struct listener *l, int flags)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+
+ if (appctx->ctx.stats.flags & STAT_FMT_HTML) {
+ chunk_appendf(&trash, "<tr class=socket>");
+ if (px->cap & PR_CAP_BE && px->srv && (appctx->ctx.stats.flags & STAT_ADMIN)) {
+ /* Column sub-heading for Enable or Disable server */
+ chunk_appendf(&trash, "<td></td>");
+ }
+ chunk_appendf(&trash,
+ /* frontend name, listener name */
+ "<td class=ac><a name=\"%s/+%s\"></a>%s"
+ "<a class=lfsb href=\"#%s/+%s\">%s</a>"
+ "",
+ px->id, l->name,
+ (flags & ST_SHLGNDS)?"<u>":"",
+ px->id, l->name, l->name);
+
+ if (flags & ST_SHLGNDS) {
+ char str[INET6_ADDRSTRLEN];
+ int port;
+
+ chunk_appendf(&trash, "<div class=tips>");
+
+ port = get_host_port(&l->addr);
+ switch (addr_to_str(&l->addr, str, sizeof(str))) {
+ case AF_INET:
+ chunk_appendf(&trash, "IPv4: %s:%d, ", str, port);
+ break;
+ case AF_INET6:
+ chunk_appendf(&trash, "IPv6: [%s]:%d, ", str, port);
+ break;
+ case AF_UNIX:
+ chunk_appendf(&trash, "unix, ");
+ break;
+ case -1:
+ chunk_appendf(&trash, "(%s), ", strerror(errno));
+ break;
+ }
+
+ /* id */
+ chunk_appendf(&trash, "id: %d</div>", l->luid);
+ }
+
+ chunk_appendf(&trash,
+ /* queue */
+ "%s</td><td colspan=3></td>"
+ /* sessions rate: current, max, limit */
+ "<td colspan=3> </td>"
+ /* sessions: current, max, limit, total, lbtot, lastsess */
+ "<td>%s</td><td>%s</td><td>%s</td>"
+ "<td>%s</td><td> </td><td> </td>"
+ /* bytes: in, out */
+ "<td>%s</td><td>%s</td>"
+ "",
+ (flags & ST_SHLGNDS)?"</u>":"",
+ U2H(l->nbconn), U2H(l->counters->conn_max), U2H(l->maxconn),
+ U2H(l->counters->cum_conn), U2H(l->counters->bytes_in), U2H(l->counters->bytes_out));
+
+ chunk_appendf(&trash,
+ /* denied: req, resp */
+ "<td>%s</td><td>%s</td>"
+ /* errors: request, connect, response */
+ "<td>%s</td><td></td><td></td>"
+ /* warnings: retries, redispatches */
+ "<td></td><td></td>"
+ /* server status: reflect listener status */
+ "<td class=ac>%s</td>"
+ /* rest of server: nothing */
+ "<td class=ac colspan=8></td></tr>"
+ "",
+ U2H(l->counters->denied_req), U2H(l->counters->denied_resp),
+ U2H(l->counters->failed_req),
+ (l->nbconn < l->maxconn) ? (l->state == LI_LIMITED) ? "WAITING" : "OPEN" : "FULL");
+ }
+ else { /* CSV mode */
+ chunk_appendf(&trash,
+ /* pxid, name, queue cur, queue max, */
+ "%s,%s,,,"
+ /* sessions: current, max, limit, total */
+ "%d,%d,%d,%lld,"
+ /* bytes: in, out */
+ "%lld,%lld,"
+ /* denied: req, resp */
+ "%lld,%lld,"
+ /* errors: request, connect, response */
+ "%lld,,,"
+ /* warnings: retries, redispatches */
+ ",,"
+ /* server status: reflect listener status */
+ "%s,"
+ /* rest of server: nothing */
+ ",,,,,,,,"
+ /* pid, iid, sid, throttle, lbtot, tracked, type */
+ "%d,%d,%d,,,,%d,"
+ /* rate, rate_lim, rate_max */
+ ",,,"
+ /* check_status, check_code, check_duration */
+ ",,,"
+ /* http response: 1xx, 2xx, 3xx, 4xx, 5xx, other */
+ ",,,,,,"
+ /* failed health analyses */
+ ","
+ /* requests : req_rate, req_rate_max, req_tot, */
+ ",,,"
+ /* errors: cli_aborts, srv_aborts */
+ ",,"
+ /* compression: in, out, bypassed, comp_rsp */
+ ",,,,"
+ /* lastsess, last_chk, last_agt, qtime, ctime, rtime, ttime, */
+ ",,,,,,,"
+ "\n",
+ px->id, l->name,
+ l->nbconn, l->counters->conn_max,
+ l->maxconn, l->counters->cum_conn,
+ l->counters->bytes_in, l->counters->bytes_out,
+ l->counters->denied_req, l->counters->denied_resp,
+ l->counters->failed_req,
+ (l->nbconn < l->maxconn) ? "OPEN" : "FULL",
+ relative_pid, px->uuid, l->luid, STATS_TYPE_SO);
+ }
+ return 1;
+}
+
+enum srv_stats_state {
+ SRV_STATS_STATE_DOWN = 0,
+ SRV_STATS_STATE_DOWN_AGENT,
+ SRV_STATS_STATE_GOING_UP,
+ SRV_STATS_STATE_UP_GOING_DOWN,
+ SRV_STATS_STATE_UP,
+ SRV_STATS_STATE_NOLB_GOING_DOWN,
+ SRV_STATS_STATE_NOLB,
+ SRV_STATS_STATE_DRAIN_GOING_DOWN,
+ SRV_STATS_STATE_DRAIN,
+ SRV_STATS_STATE_DRAIN_AGENT,
+ SRV_STATS_STATE_NO_CHECK,
+
+ SRV_STATS_STATE_COUNT, /* Must be last */
+};
+
+enum srv_stats_colour {
+ SRV_STATS_COLOUR_DOWN = 0,
+ SRV_STATS_COLOUR_GOING_UP,
+ SRV_STATS_COLOUR_GOING_DOWN,
+ SRV_STATS_COLOUR_UP,
+ SRV_STATS_COLOUR_NOLB,
+ SRV_STATS_COLOUR_DRAINING,
+ SRV_STATS_COLOUR_NO_CHECK,
+
+ SRV_STATS_COLOUR_COUNT, /* Must be last */
+};
+
+static const char *srv_stats_colour_st[SRV_STATS_COLOUR_COUNT] = {
+ [SRV_STATS_COLOUR_DOWN] = "down",
+ [SRV_STATS_COLOUR_GOING_UP] = "going_up",
+ [SRV_STATS_COLOUR_GOING_DOWN] = "going_down",
+ [SRV_STATS_COLOUR_UP] = "up",
+ [SRV_STATS_COLOUR_NOLB] = "nolb",
+ [SRV_STATS_COLOUR_DRAINING] = "draining",
+ [SRV_STATS_COLOUR_NO_CHECK] = "no_check",
+};
+
+/* Dumps a line for server <sv> and proxy <px> to the trash and uses the state
+ * from stream interface <si>, stats flags <flags>, and server state <state>.
+ * The caller is responsible for clearing the trash if needed. Returns non-zero
+ * if it emits anything, zero otherwise.
+ */
+static int stats_dump_sv_stats(struct stream_interface *si, struct proxy *px, int flags, struct server *sv,
+ enum srv_stats_state state, enum srv_stats_colour colour)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+ struct server *via, *ref;
+ char str[INET6_ADDRSTRLEN];
+ struct chunk src;
+ int i;
+
+ /* we have "via" which is the tracked server as described in the configuration,
+ * and "ref" which is the checked server at the end of the chain.
+ */
+ via = sv->track ? sv->track : sv;
+ ref = via;
+ while (ref->track)
+ ref = ref->track;
+
+ if (appctx->ctx.stats.flags & STAT_FMT_HTML) {
+ static char *srv_hlt_st[SRV_STATS_STATE_COUNT] = {
+ [SRV_STATS_STATE_DOWN] = "DOWN",
+ [SRV_STATS_STATE_DOWN_AGENT] = "DOWN (agent)",
+ [SRV_STATS_STATE_GOING_UP] = "DN %d/%d ↑",
+ [SRV_STATS_STATE_UP_GOING_DOWN] = "UP %d/%d ↓",
+ [SRV_STATS_STATE_UP] = "UP",
+ [SRV_STATS_STATE_NOLB_GOING_DOWN] = "NOLB %d/%d ↓",
+ [SRV_STATS_STATE_NOLB] = "NOLB",
+ [SRV_STATS_STATE_DRAIN_GOING_DOWN] = "DRAIN %d/%d ↓",
+ [SRV_STATS_STATE_DRAIN] = "DRAIN",
+ [SRV_STATS_STATE_DRAIN_AGENT] = "DRAIN (agent)",
+ [SRV_STATS_STATE_NO_CHECK] = "<i>no check</i>",
+ };
+
+ if (sv->admin & SRV_ADMF_MAINT)
+ chunk_appendf(&trash, "<tr class=\"maintain\">");
+ else
+ chunk_appendf(&trash,
+ "<tr class=\"%s_%s\">",
+ (sv->flags & SRV_F_BACKUP) ? "backup" : "active", srv_stats_colour_st[colour]);
+
+ if ((px->cap & PR_CAP_BE) && px->srv && (appctx->ctx.stats.flags & STAT_ADMIN))
+ chunk_appendf(&trash,
+ "<td><input type=\"checkbox\" name=\"s\" value=\"%s\"></td>",
+ sv->id);
+
+ chunk_appendf(&trash,
+ "<td class=ac><a name=\"%s/%s\"></a>%s"
+ "<a class=lfsb href=\"#%s/%s\">%s</a>"
+ "",
+ px->id, sv->id,
+ (flags & ST_SHLGNDS) ? "<u>" : "",
+ px->id, sv->id, sv->id);
+
+ if (flags & ST_SHLGNDS) {
+ chunk_appendf(&trash, "<div class=tips>");
+
+ switch (addr_to_str(&sv->addr, str, sizeof(str))) {
+ case AF_INET:
+ chunk_appendf(&trash, "IPv4: %s:%d, ", str, get_host_port(&sv->addr));
+ break;
+ case AF_INET6:
+ chunk_appendf(&trash, "IPv6: [%s]:%d, ", str, get_host_port(&sv->addr));
+ break;
+ case AF_UNIX:
+ chunk_appendf(&trash, "unix, ");
+ break;
+ case -1:
+ chunk_appendf(&trash, "(%s), ", strerror(errno));
+ break;
+ default: /* address family not supported */
+ break;
+ }
+
+ /* id */
+ chunk_appendf(&trash, "id: %d", sv->puid);
+
+ /* cookie */
+ if (sv->cookie) {
+ chunk_appendf(&trash, ", cookie: '");
+
+ chunk_initlen(&src, sv->cookie, 0, strlen(sv->cookie));
+ chunk_htmlencode(&trash, &src);
+
+ chunk_appendf(&trash, "'");
+ }
+
+ chunk_appendf(&trash, "</div>");
+ }
+
+ chunk_appendf(&trash,
+ /* queue : current, max, limit */
+ "%s</td><td>%s</td><td>%s</td><td>%s</td>"
+ /* sessions rate : current, max, limit */
+ "<td>%s</td><td>%s</td><td></td>"
+ "",
+ (flags & ST_SHLGNDS) ? "</u>" : "",
+ U2H(sv->nbpend), U2H(sv->counters.nbpend_max), LIM2A(sv->maxqueue, "-"),
+ U2H(read_freq_ctr(&sv->sess_per_sec)), U2H(sv->counters.sps_max));
+
+
+ chunk_appendf(&trash,
+ /* sessions: current, max, limit, total */
+ "<td>%s</td><td>%s</td><td>%s</td>"
+ "<td><u>%s<div class=tips><table class=det>"
+ "<tr><th>Cum. sessions:</th><td>%s</td></tr>"
+ "",
+ U2H(sv->cur_sess), U2H(sv->counters.cur_sess_max), LIM2A(sv->maxconn, "-"),
+ U2H(sv->counters.cum_sess),
+ U2H(sv->counters.cum_sess));
+
+ /* http response (via hover): 1xx, 2xx, 3xx, 4xx, 5xx, other */
+ if (px->mode == PR_MODE_HTTP) {
+ unsigned long long tot;
+ for (tot = i = 0; i < 6; i++)
+ tot += sv->counters.p.http.rsp[i];
+
+ chunk_appendf(&trash,
+ "<tr><th>Cum. HTTP responses:</th><td>%s</td></tr>"
+ "<tr><th>- HTTP 1xx responses:</th><td>%s</td><td>(%d%%)</td></tr>"
+ "<tr><th>- HTTP 2xx responses:</th><td>%s</td><td>(%d%%)</td></tr>"
+ "<tr><th>- HTTP 3xx responses:</th><td>%s</td><td>(%d%%)</td></tr>"
+ "<tr><th>- HTTP 4xx responses:</th><td>%s</td><td>(%d%%)</td></tr>"
+ "<tr><th>- HTTP 5xx responses:</th><td>%s</td><td>(%d%%)</td></tr>"
+ "<tr><th>- other responses:</th><td>%s</td><td>(%d%%)</td></tr>"
+ "",
+ U2H(tot),
+ U2H(sv->counters.p.http.rsp[1]), tot ? (int)(100*sv->counters.p.http.rsp[1] / tot) : 0,
+ U2H(sv->counters.p.http.rsp[2]), tot ? (int)(100*sv->counters.p.http.rsp[2] / tot) : 0,
+ U2H(sv->counters.p.http.rsp[3]), tot ? (int)(100*sv->counters.p.http.rsp[3] / tot) : 0,
+ U2H(sv->counters.p.http.rsp[4]), tot ? (int)(100*sv->counters.p.http.rsp[4] / tot) : 0,
+ U2H(sv->counters.p.http.rsp[5]), tot ? (int)(100*sv->counters.p.http.rsp[5] / tot) : 0,
+ U2H(sv->counters.p.http.rsp[0]), tot ? (int)(100*sv->counters.p.http.rsp[0] / tot) : 0);
+ }
+
+ chunk_appendf(&trash, "<tr><th colspan=3>Avg over last 1024 success. conn.</th></tr>");
+ chunk_appendf(&trash, "<tr><th>- Queue time:</th><td>%s</td><td>ms</td></tr>", U2H(swrate_avg(sv->counters.q_time, TIME_STATS_SAMPLES)));
+ chunk_appendf(&trash, "<tr><th>- Connect time:</th><td>%s</td><td>ms</td></tr>", U2H(swrate_avg(sv->counters.c_time, TIME_STATS_SAMPLES)));
+ if (px->mode == PR_MODE_HTTP)
+ chunk_appendf(&trash, "<tr><th>- Response time:</th><td>%s</td><td>ms</td></tr>", U2H(swrate_avg(sv->counters.d_time, TIME_STATS_SAMPLES)));
+ chunk_appendf(&trash, "<tr><th>- Total time:</th><td>%s</td><td>ms</td></tr>", U2H(swrate_avg(sv->counters.t_time, TIME_STATS_SAMPLES)));
+
+ chunk_appendf(&trash,
+ "</table></div></u></td>"
+ /* sessions: lbtot, last */
+ "<td>%s</td><td>%s</td>",
+ U2H(sv->counters.cum_lbconn),
+ human_time(srv_lastsession(sv), 1));
+
+ chunk_appendf(&trash,
+ /* bytes : in, out */
+ "<td>%s</td><td>%s</td>"
+ /* denied: req, resp */
+ "<td></td><td>%s</td>"
+ /* errors : request, connect */
+ "<td></td><td>%s</td>"
+ /* errors : response */
+ "<td><u>%s<div class=tips>Connection resets during transfers: %lld client, %lld server</div></u></td>"
+ /* warnings: retries, redispatches */
+ "<td>%lld</td><td>%lld</td>"
+ "",
+ U2H(sv->counters.bytes_in), U2H(sv->counters.bytes_out),
+ U2H(sv->counters.failed_secu),
+ U2H(sv->counters.failed_conns),
+ U2H(sv->counters.failed_resp),
+ sv->counters.cli_aborts,
+ sv->counters.srv_aborts,
+ sv->counters.retries, sv->counters.redispatches);
+
+ /* status, last check */
+ chunk_appendf(&trash, "<td class=ac>");
+
+ if (sv->admin & SRV_ADMF_MAINT) {
+ chunk_appendf(&trash, "%s ", human_time(now.tv_sec - sv->last_change, 1));
+ chunk_appendf(&trash, "MAINT");
+ }
+ else if ((ref->agent.state & CHK_ST_ENABLED) && !(sv->agent.health) && (ref->state == SRV_ST_STOPPED)) {
+ chunk_appendf(&trash, "%s ", human_time(now.tv_sec - ref->last_change, 1));
+ /* DOWN (agent) */
+ chunk_appendf(&trash, srv_hlt_st[1], "GCC: your -Werror=format-security is bogus, annoying, and hides real bugs, I don't thank you, really!");
+ }
+ else if (ref->check.state & CHK_ST_ENABLED) {
+ chunk_appendf(&trash, "%s ", human_time(now.tv_sec - ref->last_change, 1));
+ chunk_appendf(&trash,
+ srv_hlt_st[state],
+ (ref->state != SRV_ST_STOPPED) ? (ref->check.health - ref->check.rise + 1) : (ref->check.health),
+ (ref->state != SRV_ST_STOPPED) ? (ref->check.fall) : (ref->check.rise));
+ }
+
+ if ((sv->state == SRV_ST_STOPPED) &&
+ ((sv->agent.state & (CHK_ST_ENABLED|CHK_ST_PAUSED)) == CHK_ST_ENABLED) && !(sv->agent.health)) {
+ chunk_appendf(&trash,
+ "</td><td class=ac><u> %s%s",
+ (sv->agent.state & CHK_ST_INPROGRESS) ? "* " : "",
+ get_check_status_info(sv->agent.status));
+
+ if (sv->agent.status >= HCHK_STATUS_L57DATA)
+ chunk_appendf(&trash, "/%d", sv->agent.code);
+
+ if (sv->agent.status >= HCHK_STATUS_CHECKED && sv->agent.duration >= 0)
+ chunk_appendf(&trash, " in %lums", sv->agent.duration);
+
+ chunk_appendf(&trash, "<div class=tips>%s",
+ get_check_status_description(sv->agent.status));
+ if (*sv->agent.desc) {
+ chunk_appendf(&trash, ": ");
+ chunk_initlen(&src, sv->agent.desc, 0, strlen(sv->agent.desc));
+ chunk_htmlencode(&trash, &src);
+ }
+ chunk_appendf(&trash, "</div></u>");
+ }
+ else if ((sv->check.state & (CHK_ST_ENABLED|CHK_ST_PAUSED)) == CHK_ST_ENABLED) {
+ chunk_appendf(&trash,
+ "</td><td class=ac><u> %s%s",
+ (sv->check.state & CHK_ST_INPROGRESS) ? "* " : "",
+ get_check_status_info(sv->check.status));
+
+ if (sv->check.status >= HCHK_STATUS_L57DATA)
+ chunk_appendf(&trash, "/%d", sv->check.code);
+
+ if (sv->check.status >= HCHK_STATUS_CHECKED && sv->check.duration >= 0)
+ chunk_appendf(&trash, " in %lums", sv->check.duration);
+
+ chunk_appendf(&trash, "<div class=tips>%s",
+ get_check_status_description(sv->check.status));
+ if (*sv->check.desc) {
+ chunk_appendf(&trash, ": ");
+ chunk_initlen(&src, sv->check.desc, 0, strlen(sv->check.desc));
+ chunk_htmlencode(&trash, &src);
+ }
+ chunk_appendf(&trash, "</div></u>");
+ }
+ else
+ chunk_appendf(&trash, "</td><td>");
+
+ chunk_appendf(&trash,
+ /* weight */
+ "</td><td class=ac>%d</td>"
+ /* act, bck */
+ "<td class=ac>%s</td><td class=ac>%s</td>"
+ "",
+ (sv->eweight * px->lbprm.wmult + px->lbprm.wdiv - 1) / px->lbprm.wdiv,
+ (sv->flags & SRV_F_BACKUP) ? "-" : "Y",
+ (sv->flags & SRV_F_BACKUP) ? "Y" : "-");
+
+ /* check failures: unique, fatal, down time */
+ if (sv->check.state & CHK_ST_ENABLED) {
+ chunk_appendf(&trash, "<td><u>%lld", ref->counters.failed_checks);
+
+ if (ref->observe)
+ chunk_appendf(&trash, "/%lld", ref->counters.failed_hana);
+
+ chunk_appendf(&trash,
+ "<div class=tips>Failed Health Checks%s</div></u></td>"
+ "<td>%lld</td><td>%s</td>"
+ "",
+ ref->observe ? "/Health Analyses" : "",
+ ref->counters.down_trans, human_time(srv_downtime(sv), 1));
+ }
+ else if (!(sv->admin & SRV_ADMF_FMAINT) && sv != ref) {
+ /* tracking a server */
+ chunk_appendf(&trash,
+ "<td class=ac colspan=3><a class=lfsb href=\"#%s/%s\">via %s/%s</a></td>",
+ via->proxy->id, via->id, via->proxy->id, via->id);
+ }
+ else
+ chunk_appendf(&trash, "<td colspan=3></td>");
+
+ /* throttle */
+ if (sv->state == SRV_ST_STARTING && !server_is_draining(sv))
+ chunk_appendf(&trash, "<td class=ac>%d %%</td></tr>\n", server_throttle_rate(sv));
+ else
+ chunk_appendf(&trash, "<td class=ac>-</td></tr>\n");
+ }
+ else { /* CSV mode */
+ struct chunk *out = get_trash_chunk();
+ static char *srv_hlt_st[SRV_STATS_STATE_COUNT] = {
+ [SRV_STATS_STATE_DOWN] = "DOWN,",
+ [SRV_STATS_STATE_DOWN_AGENT] = "DOWN (agent),",
+ [SRV_STATS_STATE_GOING_UP] = "DOWN %d/%d,",
+ [SRV_STATS_STATE_UP_GOING_DOWN] = "UP %d/%d,",
+ [SRV_STATS_STATE_UP] = "UP,",
+ [SRV_STATS_STATE_NOLB_GOING_DOWN] = "NOLB %d/%d,",
+ [SRV_STATS_STATE_NOLB] = "NOLB,",
+ [SRV_STATS_STATE_DRAIN_GOING_DOWN] = "DRAIN %d/%d,",
+ [SRV_STATS_STATE_DRAIN] = "DRAIN,",
+ [SRV_STATS_STATE_DRAIN_AGENT] = "DRAIN (agent),",
+ [SRV_STATS_STATE_NO_CHECK] = "no check,"
+ };
+
+ chunk_appendf(&trash,
+ /* pxid, name */
+ "%s,%s,"
+ /* queue : current, max */
+ "%d,%d,"
+ /* sessions : current, max, limit, total */
+ "%d,%d,%s,%lld,"
+ /* bytes : in, out */
+ "%lld,%lld,"
+ /* denied: req, resp */
+ ",%lld,"
+ /* errors : request, connect, response */
+ ",%lld,%lld,"
+ /* warnings: retries, redispatches */
+ "%lld,%lld,"
+ "",
+ px->id, sv->id,
+ sv->nbpend, sv->counters.nbpend_max,
+ sv->cur_sess, sv->counters.cur_sess_max, LIM2A(sv->maxconn, ""), sv->counters.cum_sess,
+ sv->counters.bytes_in, sv->counters.bytes_out,
+ sv->counters.failed_secu,
+ sv->counters.failed_conns, sv->counters.failed_resp,
+ sv->counters.retries, sv->counters.redispatches);
+
+ /* status */
+ if (sv->admin & SRV_ADMF_IMAINT)
+ chunk_appendf(&trash, "MAINT (via %s/%s),", via->proxy->id, via->id);
+ else if (sv->admin & SRV_ADMF_MAINT)
+ chunk_appendf(&trash, "MAINT,");
+ else
+ chunk_appendf(&trash,
+ srv_hlt_st[state],
+ (ref->state != SRV_ST_STOPPED) ? (ref->check.health - ref->check.rise + 1) : (ref->check.health),
+ (ref->state != SRV_ST_STOPPED) ? (ref->check.fall) : (ref->check.rise));
+
+ chunk_appendf(&trash,
+ /* weight, active, backup */
+ "%d,%d,%d,"
+ "",
+ (sv->eweight * px->lbprm.wmult + px->lbprm.wdiv - 1) / px->lbprm.wdiv,
+ (sv->flags & SRV_F_BACKUP) ? 0 : 1,
+ (sv->flags & SRV_F_BACKUP) ? 1 : 0);
+
+ /* check failures: unique, fatal; last change, total downtime */
+ if (sv->check.state & CHK_ST_ENABLED)
+ chunk_appendf(&trash,
+ "%lld,%lld,%d,%d,",
+ sv->counters.failed_checks, sv->counters.down_trans,
+ (int)(now.tv_sec - sv->last_change), srv_downtime(sv));
+ else
+ chunk_appendf(&trash, ",,,,");
+
+ /* queue limit, pid, iid, sid, */
+ chunk_appendf(&trash,
+ "%s,"
+ "%d,%d,%d,",
+ LIM2A(sv->maxqueue, ""),
+ relative_pid, px->uuid, sv->puid);
+
+ /* throttle */
+ if (sv->state == SRV_ST_STARTING && !server_is_draining(sv))
+ chunk_appendf(&trash, "%d", server_throttle_rate(sv));
+
+ /* sessions: lbtot */
+ chunk_appendf(&trash, ",%lld,", sv->counters.cum_lbconn);
+
+ /* tracked */
+ if (sv->track)
+ chunk_appendf(&trash, "%s/%s,",
+ sv->track->proxy->id, sv->track->id);
+ else
+ chunk_appendf(&trash, ",");
+
+ /* type */
+ chunk_appendf(&trash, "%d,", STATS_TYPE_SV);
+
+ /* rate */
+ chunk_appendf(&trash, "%u,,%u,",
+ read_freq_ctr(&sv->sess_per_sec),
+ sv->counters.sps_max);
+
+ if (sv->check.state & CHK_ST_ENABLED) {
+ /* check_status */
+ chunk_appendf(&trash, "%s,", csv_enc(get_check_status_info(sv->check.status), 1, out));
+
+ /* check_code */
+ if (sv->check.status >= HCHK_STATUS_L57DATA)
+ chunk_appendf(&trash, "%u,", sv->check.code);
+ else
+ chunk_appendf(&trash, ",");
+
+ /* check_duration */
+ if (sv->check.status >= HCHK_STATUS_CHECKED)
+ chunk_appendf(&trash, "%lu,", sv->check.duration);
+ else
+ chunk_appendf(&trash, ",");
+
+ }
+ else
+ chunk_appendf(&trash, ",,,");
+
+ /* http response: 1xx, 2xx, 3xx, 4xx, 5xx, other */
+ if (px->mode == PR_MODE_HTTP) {
+ for (i=1; i<6; i++)
+ chunk_appendf(&trash, "%lld,", sv->counters.p.http.rsp[i]);
+
+ chunk_appendf(&trash, "%lld,", sv->counters.p.http.rsp[0]);
+ }
+ else
+ chunk_appendf(&trash, ",,,,,,");
+
+ /* failed health analyses */
+ chunk_appendf(&trash, "%lld,", sv->counters.failed_hana);
+
+ /* requests : req_rate, req_rate_max, req_tot, */
+ chunk_appendf(&trash, ",,,");
+
+ /* errors: cli_aborts, srv_aborts */
+ chunk_appendf(&trash, "%lld,%lld,",
+ sv->counters.cli_aborts, sv->counters.srv_aborts);
+
+ /* compression: in, out, bypassed, comp_rsp */
+ chunk_appendf(&trash, ",,,,");
+
+ /* lastsess */
+ chunk_appendf(&trash, "%d,", srv_lastsession(sv));
+
+ /* capture of last check and agent statuses */
+ chunk_appendf(&trash, "%s,", ((sv->check.state & (CHK_ST_ENABLED|CHK_ST_PAUSED)) == CHK_ST_ENABLED) ? csv_enc(cstr(sv->check.desc), 1, out) : "");
+ chunk_appendf(&trash, "%s,", ((sv->agent.state & (CHK_ST_ENABLED|CHK_ST_PAUSED)) == CHK_ST_ENABLED) ? csv_enc(cstr(sv->agent.desc), 1, out) : "");
+
+ /* qtime, ctime, rtime, ttime, */
+ chunk_appendf(&trash, "%u,%u,%u,%u,",
+ swrate_avg(sv->counters.q_time, TIME_STATS_SAMPLES),
+ swrate_avg(sv->counters.c_time, TIME_STATS_SAMPLES),
+ swrate_avg(sv->counters.d_time, TIME_STATS_SAMPLES),
+ swrate_avg(sv->counters.t_time, TIME_STATS_SAMPLES));
+
+ /* finish with EOL */
+ chunk_appendf(&trash, "\n");
+ }
+ return 1;
+}
+
+/* Dumps a line for backend <px> to the trash and uses the state from stream
+ * interface <si> and stats flags <flags>. The caller is responsible for clearing
+ * the trash if needed. Returns non-zero if it emits anything, zero otherwise.
+ */
+ */
+static int stats_dump_be_stats(struct stream_interface *si, struct proxy *px, int flags)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+ struct chunk src;
+ int i;
+
+ if (!(px->cap & PR_CAP_BE))
+ return 0;
+
+ if ((appctx->ctx.stats.flags & STAT_BOUND) && !(appctx->ctx.stats.type & (1 << STATS_TYPE_BE)))
+ return 0;
+
+ if (appctx->ctx.stats.flags & STAT_FMT_HTML) {
+ chunk_appendf(&trash, "<tr class=\"backend\">");
+ if (px->srv && (appctx->ctx.stats.flags & STAT_ADMIN)) {
+ /* Column sub-heading for Enable or Disable server */
+ chunk_appendf(&trash, "<td></td>");
+ }
+ chunk_appendf(&trash,
+ "<td class=ac>"
+ /* name */
+ "%s<a name=\"%s/Backend\"></a>"
+ "<a class=lfsb href=\"#%s/Backend\">Backend</a>"
+ "",
+ (flags & ST_SHLGNDS)?"<u>":"",
+ px->id, px->id);
+
+ if (flags & ST_SHLGNDS) {
+ /* balancing */
+ chunk_appendf(&trash, "<div class=tips>balancing: %s",
+ backend_lb_algo_str(px->lbprm.algo & BE_LB_ALGO));
+
+ /* cookie */
+ if (px->cookie_name) {
+ chunk_appendf(&trash, ", cookie: '");
+ chunk_initlen(&src, px->cookie_name, 0, strlen(px->cookie_name));
+ chunk_htmlencode(&trash, &src);
+ chunk_appendf(&trash, "'");
+ }
+ chunk_appendf(&trash, "</div>");
+ }
+
+ chunk_appendf(&trash,
+ "%s</td>"
+ /* queue : current, max */
+ "<td>%s</td><td>%s</td><td></td>"
+ /* sessions rate : current, max, limit */
+ "<td>%s</td><td>%s</td><td></td>"
+ "",
+ (flags & ST_SHLGNDS)?"</u>":"",
+ U2H(px->nbpend) /* or px->totpend ? */, U2H(px->be_counters.nbpend_max),
+ U2H(read_freq_ctr(&px->be_sess_per_sec)), U2H(px->be_counters.sps_max));
+
+ chunk_appendf(&trash,
+ /* sessions: current, max, limit, total */
+ "<td>%s</td><td>%s</td><td>%s</td>"
+ "<td><u>%s<div class=tips><table class=det>"
+ "<tr><th>Cum. sessions:</th><td>%s</td></tr>"
+ "",
+ U2H(px->beconn), U2H(px->be_counters.conn_max), U2H(px->fullconn),
+ U2H(px->be_counters.cum_conn),
+ U2H(px->be_counters.cum_conn));
+
+ /* http response (via hover): 1xx, 2xx, 3xx, 4xx, 5xx, other */
+ if (px->mode == PR_MODE_HTTP) {
+ chunk_appendf(&trash,
+ "<tr><th>Cum. HTTP requests:</th><td>%s</td></tr>"
+ "<tr><th>- HTTP 1xx responses:</th><td>%s</td></tr>"
+ "<tr><th>- HTTP 2xx responses:</th><td>%s</td></tr>"
+ "<tr><th> Compressed 2xx:</th><td>%s</td><td>(%d%%)</td></tr>"
+ "<tr><th>- HTTP 3xx responses:</th><td>%s</td></tr>"
+ "<tr><th>- HTTP 4xx responses:</th><td>%s</td></tr>"
+ "<tr><th>- HTTP 5xx responses:</th><td>%s</td></tr>"
+ "<tr><th>- other responses:</th><td>%s</td></tr>"
+ "<tr><th>Intercepted requests:</th><td>%s</td></tr>"
+ "<tr><th colspan=3>Avg over last 1024 success. conn.</th></tr>"
+ "",
+ U2H(px->be_counters.p.http.cum_req),
+ U2H(px->be_counters.p.http.rsp[1]),
+ U2H(px->be_counters.p.http.rsp[2]),
+ U2H(px->be_counters.p.http.comp_rsp),
+ px->be_counters.p.http.rsp[2] ?
+ (int)(100*px->be_counters.p.http.comp_rsp/px->be_counters.p.http.rsp[2]) : 0,
+ U2H(px->be_counters.p.http.rsp[3]),
+ U2H(px->be_counters.p.http.rsp[4]),
+ U2H(px->be_counters.p.http.rsp[5]),
+ U2H(px->be_counters.p.http.rsp[0]),
+ U2H(px->be_counters.intercepted_req));
+ }
+
+ chunk_appendf(&trash, "<tr><th>- Queue time:</th><td>%s</td><td>ms</td></tr>", U2H(swrate_avg(px->be_counters.q_time, TIME_STATS_SAMPLES)));
+ chunk_appendf(&trash, "<tr><th>- Connect time:</th><td>%s</td><td>ms</td></tr>", U2H(swrate_avg(px->be_counters.c_time, TIME_STATS_SAMPLES)));
+ if (px->mode == PR_MODE_HTTP)
+ chunk_appendf(&trash, "<tr><th>- Response time:</th><td>%s</td><td>ms</td></tr>", U2H(swrate_avg(px->be_counters.d_time, TIME_STATS_SAMPLES)));
+ chunk_appendf(&trash, "<tr><th>- Total time:</th><td>%s</td><td>ms</td></tr>", U2H(swrate_avg(px->be_counters.t_time, TIME_STATS_SAMPLES)));
+
+ chunk_appendf(&trash,
+ "</table></div></u></td>"
+ /* sessions: lbtot, last */
+ "<td>%s</td><td>%s</td>"
+ /* bytes: in */
+ "<td>%s</td>"
+ "",
+ U2H(px->be_counters.cum_lbconn),
+ human_time(be_lastsession(px), 1),
+ U2H(px->be_counters.bytes_in));
+
+ chunk_appendf(&trash,
+ /* bytes:out + compression stats (via hover): comp_in, comp_out, comp_byp */
+ "<td>%s%s<div class=tips><table class=det>"
+ "<tr><th>Response bytes in:</th><td>%s</td></tr>"
+ "<tr><th>Compression in:</th><td>%s</td></tr>"
+ "<tr><th>Compression out:</th><td>%s</td><td>(%d%%)</td></tr>"
+ "<tr><th>Compression bypass:</th><td>%s</td></tr>"
+ "<tr><th>Total bytes saved:</th><td>%s</td><td>(%d%%)</td></tr>"
+ "</table></div>%s</td>",
+ (px->be_counters.comp_in || px->be_counters.comp_byp) ? "<u>":"",
+ U2H(px->be_counters.bytes_out),
+ U2H(px->be_counters.bytes_out),
+ U2H(px->be_counters.comp_in),
+ U2H(px->be_counters.comp_out),
+ px->be_counters.comp_in ? (int)(px->be_counters.comp_out * 100 / px->be_counters.comp_in) : 0,
+ U2H(px->be_counters.comp_byp),
+ U2H(px->be_counters.comp_in - px->be_counters.comp_out),
+ px->be_counters.bytes_out ? (int)((px->be_counters.comp_in - px->be_counters.comp_out) * 100 / px->be_counters.bytes_out) : 0,
+ (px->be_counters.comp_in || px->be_counters.comp_byp) ? "</u>":"");
+
+ chunk_appendf(&trash,
+ /* denied: req, resp */
+ "<td>%s</td><td>%s</td>"
+ /* errors : request, connect */
+ "<td></td><td>%s</td>"
+ /* errors : response */
+ "<td><u>%s<div class=tips>Connection resets during transfers: %lld client, %lld server</div></u></td>"
+ /* warnings: retries, redispatches */
+ "<td>%lld</td><td>%lld</td>"
+ /* backend status: reflect backend status (up/down): we display UP
+ * if the backend has known working servers or if it has no server at
+ * all (eg: for stats). Then we display the total weight, number of
+ * active and backups. */
+ "<td class=ac>%s %s</td><td class=ac> </td><td class=ac>%d</td>"
+ "<td class=ac>%d</td><td class=ac>%d</td>"
+ "",
+ U2H(px->be_counters.denied_req), U2H(px->be_counters.denied_resp),
+ U2H(px->be_counters.failed_conns),
+ U2H(px->be_counters.failed_resp),
+ px->be_counters.cli_aborts,
+ px->be_counters.srv_aborts,
+ px->be_counters.retries, px->be_counters.redispatches,
+ human_time(now.tv_sec - px->last_change, 1),
+ (px->lbprm.tot_weight > 0 || !px->srv) ? "UP" :
+ "<font color=\"red\"><b>DOWN</b></font>",
+ (px->lbprm.tot_weight * px->lbprm.wmult + px->lbprm.wdiv - 1) / px->lbprm.wdiv,
+ px->srv_act, px->srv_bck);
+
+ chunk_appendf(&trash,
+ /* rest of backend: nothing, down transitions, total downtime, throttle */
+ "<td class=ac> </td><td>%d</td>"
+ "<td>%s</td>"
+ "<td></td>"
+ "</tr>",
+ px->down_trans,
+ px->srv?human_time(be_downtime(px), 1):" ");
+ }
+ else { /* CSV mode */
+ chunk_appendf(&trash,
+ /* pxid, name */
+ "%s,BACKEND,"
+ /* queue : current, max */
+ "%d,%d,"
+ /* sessions : current, max, limit, total */
+ "%d,%d,%d,%lld,"
+ /* bytes : in, out */
+ "%lld,%lld,"
+ /* denied: req, resp */
+ "%lld,%lld,"
+ /* errors : request, connect, response */
+ ",%lld,%lld,"
+ /* warnings: retries, redispatches */
+ "%lld,%lld,"
+ /* backend status: reflect backend status (up/down): we display UP
+ * if the backend has known working servers or if it has no server at
+ * all (eg: for stats). Then we display the total weight, number of
+ * active and backups. */
+ "%s,"
+ "%d,%d,%d,"
+ /* rest of backend: nothing, down transitions, last change, total downtime */
+ ",%d,%d,%d,,"
+ /* pid, iid, sid, throttle, lbtot, tracked, type */
+ "%d,%d,0,,%lld,,%d,"
+ /* rate, rate_lim, rate_max, */
+ "%u,,%u,"
+ /* check_status, check_code, check_duration */
+ ",,,",
+ px->id,
+ px->nbpend /* or px->totpend ? */, px->be_counters.nbpend_max,
+ px->beconn, px->be_counters.conn_max, px->fullconn, px->be_counters.cum_conn,
+ px->be_counters.bytes_in, px->be_counters.bytes_out,
+ px->be_counters.denied_req, px->be_counters.denied_resp,
+ px->be_counters.failed_conns, px->be_counters.failed_resp,
+ px->be_counters.retries, px->be_counters.redispatches,
+ (px->lbprm.tot_weight > 0 || !px->srv) ? "UP" : "DOWN",
+ (px->lbprm.tot_weight * px->lbprm.wmult + px->lbprm.wdiv - 1) / px->lbprm.wdiv,
+ px->srv_act, px->srv_bck,
+ px->down_trans, (int)(now.tv_sec - px->last_change),
+ px->srv?be_downtime(px):0,
+ relative_pid, px->uuid,
+ px->be_counters.cum_lbconn, STATS_TYPE_BE,
+ read_freq_ctr(&px->be_sess_per_sec),
+ px->be_counters.sps_max);
+
+ /* http response: 1xx, 2xx, 3xx, 4xx, 5xx, other */
+ if (px->mode == PR_MODE_HTTP) {
+ for (i=1; i<6; i++)
+ chunk_appendf(&trash, "%lld,", px->be_counters.p.http.rsp[i]);
+ chunk_appendf(&trash, "%lld,", px->be_counters.p.http.rsp[0]);
+ }
+ else
+ chunk_appendf(&trash, ",,,,,,");
+
+ /* failed health analyses */
+ chunk_appendf(&trash, ",");
+
+ /* requests : req_rate, req_rate_max, req_tot, */
+ chunk_appendf(&trash, ",,,");
+
+ /* errors: cli_aborts, srv_aborts */
+ chunk_appendf(&trash, "%lld,%lld,",
+ px->be_counters.cli_aborts, px->be_counters.srv_aborts);
+
+ /* compression: in, out, bypassed */
+ chunk_appendf(&trash, "%lld,%lld,%lld,",
+ px->be_counters.comp_in, px->be_counters.comp_out, px->be_counters.comp_byp);
+
+ /* compression: comp_rsp */
+ chunk_appendf(&trash, "%lld,", px->be_counters.p.http.comp_rsp);
+
+ /* lastsess, last_chk, last_agt, */
+ chunk_appendf(&trash, "%d,,,", be_lastsession(px));
+
+ /* qtime, ctime, rtime, ttime, */
+ chunk_appendf(&trash, "%u,%u,%u,%u,",
+ swrate_avg(px->be_counters.q_time, TIME_STATS_SAMPLES),
+ swrate_avg(px->be_counters.c_time, TIME_STATS_SAMPLES),
+ swrate_avg(px->be_counters.d_time, TIME_STATS_SAMPLES),
+ swrate_avg(px->be_counters.t_time, TIME_STATS_SAMPLES));
+
+ /* finish with EOL */
+ chunk_appendf(&trash, "\n");
+ }
+ return 1;
+}
+
+/* Dumps the HTML table header for proxy <px> to the trash, using the state from
+ * stream interface <si> and per-uri parameters <uri>. The caller is responsible
+ * for clearing the trash if needed.
+ */
+static void stats_dump_html_px_hdr(struct stream_interface *si, struct proxy *px, struct uri_auth *uri)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+ char scope_txt[STAT_SCOPE_TXT_MAXLEN + sizeof STAT_SCOPE_PATTERN];
+
+ if (px->cap & PR_CAP_BE && px->srv && (appctx->ctx.stats.flags & STAT_ADMIN)) {
+ /* A form to enable/disable this proxy's servers */
+
+ /* scope_txt = search pattern + search query, appctx->ctx.stats.scope_len is always <= STAT_SCOPE_TXT_MAXLEN */
+ scope_txt[0] = 0;
+ if (appctx->ctx.stats.scope_len) {
+ strcpy(scope_txt, STAT_SCOPE_PATTERN);
+ memcpy(scope_txt + strlen(STAT_SCOPE_PATTERN), bo_ptr(si_ob(si)) + appctx->ctx.stats.scope_str, appctx->ctx.stats.scope_len);
+ scope_txt[strlen(STAT_SCOPE_PATTERN) + appctx->ctx.stats.scope_len] = 0;
+ }
+
+ chunk_appendf(&trash,
+ "<form method=\"post\">");
+ }
+
+ /* print a new table */
+ chunk_appendf(&trash,
+ "<table class=\"tbl\" width=\"100%%\">\n"
+ "<tr class=\"titre\">"
+ "<th class=\"pxname\" width=\"10%%\">");
+
+ chunk_appendf(&trash,
+ "<a name=\"%s\"></a>%s"
+ "<a class=px href=\"#%s\">%s</a>",
+ px->id,
+ (uri->flags & ST_SHLGNDS) ? "<u>":"",
+ px->id, px->id);
+
+ if (uri->flags & ST_SHLGNDS) {
+ /* cap, mode, id */
+ chunk_appendf(&trash, "<div class=tips>cap: %s, mode: %s, id: %d",
+ proxy_cap_str(px->cap), proxy_mode_str(px->mode),
+ px->uuid);
+ chunk_appendf(&trash, "</div>");
+ }
+
+ chunk_appendf(&trash,
+ "%s</th>"
+ "<th class=\"%s\" width=\"90%%\">%s</th>"
+ "</tr>\n"
+ "</table>\n"
+ "<table class=\"tbl\" width=\"100%%\">\n"
+ "<tr class=\"titre\">",
+ (uri->flags & ST_SHLGNDS) ? "</u>":"",
+ px->desc ? "desc" : "empty", px->desc ? px->desc : "");
+
+ if ((px->cap & PR_CAP_BE) && px->srv && (appctx->ctx.stats.flags & STAT_ADMIN)) {
+ /* Column heading for Enable or Disable server */
+ chunk_appendf(&trash, "<th rowspan=2 width=1></th>");
+ }
+
+ chunk_appendf(&trash,
+ "<th rowspan=2></th>"
+ "<th colspan=3>Queue</th>"
+ "<th colspan=3>Session rate</th><th colspan=6>Sessions</th>"
+ "<th colspan=2>Bytes</th><th colspan=2>Denied</th>"
+ "<th colspan=3>Errors</th><th colspan=2>Warnings</th>"
+ "<th colspan=9>Server</th>"
+ "</tr>\n"
+ "<tr class=\"titre\">"
+ "<th>Cur</th><th>Max</th><th>Limit</th>"
+ "<th>Cur</th><th>Max</th><th>Limit</th><th>Cur</th><th>Max</th>"
+ "<th>Limit</th><th>Total</th><th>LbTot</th><th>Last</th><th>In</th><th>Out</th>"
+ "<th>Req</th><th>Resp</th><th>Req</th><th>Conn</th>"
+ "<th>Resp</th><th>Retr</th><th>Redis</th>"
+ "<th>Status</th><th>LastChk</th><th>Wght</th><th>Act</th>"
+ "<th>Bck</th><th>Chk</th><th>Dwn</th><th>Dwntme</th>"
+ "<th>Thrtle</th>\n"
+ "</tr>");
+}
+
+/* Dumps the HTML table trailer for proxy <px> to the trash, using the state from
+ * stream interface <si>. The caller is responsible for clearing the trash if needed.
+ */
+static void stats_dump_html_px_end(struct stream_interface *si, struct proxy *px)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+ chunk_appendf(&trash, "</table>");
+
+ if ((px->cap & PR_CAP_BE) && px->srv && (appctx->ctx.stats.flags & STAT_ADMIN)) {
+ /* close the form used to enable/disable this proxy's servers */
+ chunk_appendf(&trash,
+ "Choose the action to perform on the checked servers: "
+ "<select name=action>"
+ "<option value=\"\"></option>"
+ "<option value=\"ready\">Set state to READY</option>"
+ "<option value=\"drain\">Set state to DRAIN</option>"
+ "<option value=\"maint\">Set state to MAINT</option>"
+ "<option value=\"dhlth\">Health: disable checks</option>"
+ "<option value=\"ehlth\">Health: enable checks</option>"
+ "<option value=\"hrunn\">Health: force UP</option>"
+ "<option value=\"hnolb\">Health: force NOLB</option>"
+ "<option value=\"hdown\">Health: force DOWN</option>"
+ "<option value=\"dagent\">Agent: disable checks</option>"
+ "<option value=\"eagent\">Agent: enable checks</option>"
+ "<option value=\"arunn\">Agent: force UP</option>"
+ "<option value=\"adown\">Agent: force DOWN</option>"
+ "<option value=\"shutdown\">Kill Sessions</option>"
+ "</select>"
+ "<input type=\"hidden\" name=\"b\" value=\"#%d\">"
+ " <input type=\"submit\" value=\"Apply\">"
+ "</form>",
+ px->uuid);
+ }
+
+ chunk_appendf(&trash, "<p>\n");
+}
+
+/*
+ * Dumps statistics for a proxy. The output is sent to the stream interface's
+ * input buffer. Returns 0 if it had to stop dumping data because of lack of
+ * buffer space, or non-zero if everything completed. This function is used
+ * both by the CLI and the HTTP entry points, and is able to dump the output
+ * in HTML or CSV formats. If the latter, <uri> must be NULL.
+ */
+static int stats_dump_proxy_to_buffer(struct stream_interface *si, struct proxy *px, struct uri_auth *uri)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+ struct stream *s = si_strm(si);
+ struct channel *rep = si_ic(si);
+ struct server *sv, *svs; /* server and server-state, server-state=server or server->track */
+ struct listener *l;
+
+ chunk_reset(&trash);
+
+ switch (appctx->ctx.stats.px_st) {
+ case STAT_PX_ST_INIT:
+ /* we are on a new proxy */
+ if (uri && uri->scope) {
+ /* we have a limited scope, we have to check the proxy name */
+ struct stat_scope *scope;
+ int len;
+
+ len = strlen(px->id);
+ scope = uri->scope;
+
+ while (scope) {
+ /* match exact proxy name */
+ if (scope->px_len == len && !memcmp(px->id, scope->px_id, len))
+ break;
+
+ /* match '.' which means 'self' proxy */
+ if (!strcmp(scope->px_id, ".") && px == s->be)
+ break;
+ scope = scope->next;
+ }
+
+ /* proxy name not found : don't dump anything */
+ if (scope == NULL)
+ return 1;
+ }
+
+ /* if the user has requested a limited output and the proxy
+ * name does not match, skip it.
+ */
+ if (appctx->ctx.stats.scope_len &&
+ strnistr(px->id, strlen(px->id), bo_ptr(si_ob(si)) + appctx->ctx.stats.scope_str, appctx->ctx.stats.scope_len) == NULL)
+ return 1;
+
+ if ((appctx->ctx.stats.flags & STAT_BOUND) &&
+ (appctx->ctx.stats.iid != -1) &&
+ (px->uuid != appctx->ctx.stats.iid))
+ return 1;
+
+ appctx->ctx.stats.px_st = STAT_PX_ST_TH;
+ /* fall through */
+
+ case STAT_PX_ST_TH:
+ if (appctx->ctx.stats.flags & STAT_FMT_HTML) {
+ stats_dump_html_px_hdr(si, px, uri);
+ if (bi_putchk(rep, &trash) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+ }
+
+ appctx->ctx.stats.px_st = STAT_PX_ST_FE;
+ /* fall through */
+
+ case STAT_PX_ST_FE:
+ /* print the frontend */
+ if (stats_dump_fe_stats(si, px)) {
+ if (bi_putchk(rep, &trash) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+ }
+
+ appctx->ctx.stats.l = px->conf.listeners.n;
+ appctx->ctx.stats.px_st = STAT_PX_ST_LI;
+ /* fall through */
+
+ case STAT_PX_ST_LI:
+ /* stats.l has been initialized above */
+ for (; appctx->ctx.stats.l != &px->conf.listeners; appctx->ctx.stats.l = l->by_fe.n) {
+ if (buffer_almost_full(rep->buf)) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+
+ l = LIST_ELEM(appctx->ctx.stats.l, struct listener *, by_fe);
+ if (!l->counters)
+ continue;
+
+ if (appctx->ctx.stats.flags & STAT_BOUND) {
+ if (!(appctx->ctx.stats.type & (1 << STATS_TYPE_SO)))
+ break;
+
+ if (appctx->ctx.stats.sid != -1 && l->luid != appctx->ctx.stats.sid)
+ continue;
+ }
+
+ /* print the listener */
+ if (stats_dump_li_stats(si, px, l, uri ? uri->flags : 0)) {
+ if (bi_putchk(rep, &trash) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+ }
+ }
+
+ appctx->ctx.stats.sv = px->srv; /* may be NULL */
+ appctx->ctx.stats.px_st = STAT_PX_ST_SV;
+ /* fall through */
+
+ case STAT_PX_ST_SV:
+ /* stats.sv has been initialized above */
+ for (; appctx->ctx.stats.sv != NULL; appctx->ctx.stats.sv = sv->next) {
+ enum srv_stats_state sv_state;
+ enum srv_stats_colour sv_colour;
+
+ if (buffer_almost_full(rep->buf)) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+
+ sv = appctx->ctx.stats.sv;
+
+ if (appctx->ctx.stats.flags & STAT_BOUND) {
+ if (!(appctx->ctx.stats.type & (1 << STATS_TYPE_SV)))
+ break;
+
+ if (appctx->ctx.stats.sid != -1 && sv->puid != appctx->ctx.stats.sid)
+ continue;
+ }
+
+ svs = sv;
+ while (svs->track)
+ svs = svs->track;
+
+ if (sv->state == SRV_ST_RUNNING || sv->state == SRV_ST_STARTING) {
+ if ((svs->check.state & CHK_ST_ENABLED) &&
+ (svs->check.health < svs->check.rise + svs->check.fall - 1)) {
+ sv_state = SRV_STATS_STATE_UP_GOING_DOWN;
+ sv_colour = SRV_STATS_COLOUR_GOING_DOWN;
+ } else {
+ sv_state = SRV_STATS_STATE_UP;
+ sv_colour = SRV_STATS_COLOUR_UP;
+ }
+
+ if (sv_state == SRV_STATS_STATE_UP && !svs->uweight)
+ sv_colour = SRV_STATS_COLOUR_DRAINING;
+
+ if (sv->admin & SRV_ADMF_DRAIN) {
+ if (svs->agent.state & CHK_ST_ENABLED)
+ sv_state = SRV_STATS_STATE_DRAIN_AGENT;
+ else if (sv_state == SRV_STATS_STATE_UP_GOING_DOWN)
+ sv_state = SRV_STATS_STATE_DRAIN_GOING_DOWN;
+ else
+ sv_state = SRV_STATS_STATE_DRAIN;
+ }
+
+ if (sv_state == SRV_STATS_STATE_UP && !(svs->check.state & CHK_ST_ENABLED)) {
+ sv_state = SRV_STATS_STATE_NO_CHECK;
+ sv_colour = SRV_STATS_COLOUR_NO_CHECK;
+ }
+ }
+ else if (sv->state == SRV_ST_STOPPING) {
+ if ((!(sv->check.state & CHK_ST_ENABLED) && !sv->track) ||
+ (svs->check.health == svs->check.rise + svs->check.fall - 1)) {
+ sv_state = SRV_STATS_STATE_NOLB;
+ sv_colour = SRV_STATS_COLOUR_NOLB;
+ } else {
+ sv_state = SRV_STATS_STATE_NOLB_GOING_DOWN;
+ sv_colour = SRV_STATS_COLOUR_GOING_DOWN;
+ }
+ }
+ else { /* stopped */
+ if ((svs->agent.state & CHK_ST_ENABLED) && !svs->agent.health) {
+ sv_state = SRV_STATS_STATE_DOWN_AGENT;
+ sv_colour = SRV_STATS_COLOUR_DOWN;
+ } else if ((svs->check.state & CHK_ST_ENABLED) && !svs->check.health) {
+ sv_state = SRV_STATS_STATE_DOWN; /* DOWN */
+ sv_colour = SRV_STATS_COLOUR_DOWN;
+ } else if ((svs->agent.state & CHK_ST_ENABLED) || (svs->check.state & CHK_ST_ENABLED)) {
+ sv_state = SRV_STATS_STATE_GOING_UP;
+ sv_colour = SRV_STATS_COLOUR_GOING_UP;
+ } else {
+ sv_state = SRV_STATS_STATE_DOWN; /* DOWN, unchecked */
+ sv_colour = SRV_STATS_COLOUR_DOWN;
+ }
+ }
+
+ if (((sv_state <= 1) || (sv->admin & SRV_ADMF_MAINT)) && (appctx->ctx.stats.flags & STAT_HIDE_DOWN)) {
+ /* do not report servers which are DOWN */
+ appctx->ctx.stats.sv = sv->next;
+ continue;
+ }
+
+ if (stats_dump_sv_stats(si, px, uri ? uri->flags : 0, sv, sv_state, sv_colour)) {
+ if (bi_putchk(rep, &trash) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+ }
+ } /* for sv */
+
+ appctx->ctx.stats.px_st = STAT_PX_ST_BE;
+ /* fall through */
+
+ case STAT_PX_ST_BE:
+ /* print the backend */
+ if (stats_dump_be_stats(si, px, uri ? uri->flags : 0)) {
+ if (bi_putchk(rep, &trash) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+ }
+
+ appctx->ctx.stats.px_st = STAT_PX_ST_END;
+ /* fall through */
+
+ case STAT_PX_ST_END:
+ if (appctx->ctx.stats.flags & STAT_FMT_HTML) {
+ stats_dump_html_px_end(si, px);
+ if (bi_putchk(rep, &trash) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+ }
+
+ appctx->ctx.stats.px_st = STAT_PX_ST_FIN;
+ /* fall through */
+
+ case STAT_PX_ST_FIN:
+ return 1;
+
+ default:
+ /* unknown state, we should put an abort() here ! */
+ return 1;
+ }
+}
+
+/* Dumps the HTTP stats head block to the trash, using the per-uri
+ * parameters <uri>. The caller is responsible for clearing the trash if needed.
+ */
+static void stats_dump_html_head(struct uri_auth *uri)
+{
+ /* WARNING! This must fit in the first buffer !!! */
+ chunk_appendf(&trash,
+ "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\"\n"
+ "\"http://www.w3.org/TR/html4/loose.dtd\">\n"
+ "<html><head><title>Statistics Report for " PRODUCT_NAME "%s%s</title>\n"
+ "<meta http-equiv=\"content-type\" content=\"text/html; charset=iso-8859-1\">\n"
+ "<style type=\"text/css\"><!--\n"
+ "body {"
+ " font-family: arial, helvetica, sans-serif;"
+ " font-size: 12px;"
+ " font-weight: normal;"
+ " color: black;"
+ " background: white;"
+ "}\n"
+ "th,td {"
+ " font-size: 10px;"
+ "}\n"
+ "h1 {"
+ " font-size: x-large;"
+ " margin-bottom: 0.5em;"
+ "}\n"
+ "h2 {"
+ " font-family: helvetica, arial;"
+ " font-size: x-large;"
+ " font-weight: bold;"
+ " font-style: italic;"
+ " color: #6020a0;"
+ " margin-top: 0em;"
+ " margin-bottom: 0em;"
+ "}\n"
+ "h3 {"
+ " font-family: helvetica, arial;"
+ " font-size: 16px;"
+ " font-weight: bold;"
+ " color: #b00040;"
+ " background: #e8e8d0;"
+ " margin-top: 0em;"
+ " margin-bottom: 0em;"
+ "}\n"
+ "li {"
+ " margin-top: 0.25em;"
+ " margin-right: 2em;"
+ "}\n"
+ ".hr {margin-top: 0.25em;"
+ " border-color: black;"
+ " border-bottom-style: solid;"
+ "}\n"
+ ".titre {background: #20D0D0;color: #000000; font-weight: bold; text-align: center;}\n"
+ ".total {background: #20D0D0;color: #ffff80;}\n"
+ ".frontend {background: #e8e8d0;}\n"
+ ".socket {background: #d0d0d0;}\n"
+ ".backend {background: #e8e8d0;}\n"
+ ".active_down {background: #ff9090;}\n"
+ ".active_going_up {background: #ffd020;}\n"
+ ".active_going_down {background: #ffffa0;}\n"
+ ".active_up {background: #c0ffc0;}\n"
+ ".active_nolb {background: #20a0ff;}\n"
+ ".active_draining {background: #20a0FF;}\n"
+ ".active_no_check {background: #e0e0e0;}\n"
+ ".backup_down {background: #ff9090;}\n"
+ ".backup_going_up {background: #ff80ff;}\n"
+ ".backup_going_down {background: #c060ff;}\n"
+ ".backup_up {background: #b0d0ff;}\n"
+ ".backup_nolb {background: #90b0e0;}\n"
+ ".backup_draining {background: #cc9900;}\n"
+ ".backup_no_check {background: #e0e0e0;}\n"
+ ".maintain {background: #c07820;}\n"
+ ".rls {letter-spacing: 0.2em; margin-right: 1px;}\n" /* right letter spacing (used for grouping digits) */
+ "\n"
+ "a.px:link {color: #ffff40; text-decoration: none;}"
+ "a.px:visited {color: #ffff40; text-decoration: none;}"
+ "a.px:hover {color: #ffffff; text-decoration: none;}"
+ "a.lfsb:link {color: #000000; text-decoration: none;}"
+ "a.lfsb:visited {color: #000000; text-decoration: none;}"
+ "a.lfsb:hover {color: #505050; text-decoration: none;}"
+ "\n"
+ "table.tbl { border-collapse: collapse; border-style: none;}\n"
+ "table.tbl td { text-align: right; border-width: 1px 1px 1px 1px; border-style: solid solid solid solid; padding: 2px 3px; border-color: gray; white-space: nowrap;}\n"
+ "table.tbl td.ac { text-align: center;}\n"
+ "table.tbl th { border-width: 1px; border-style: solid solid solid solid; border-color: gray;}\n"
+ "table.tbl th.pxname { background: #b00040; color: #ffff40; font-weight: bold; border-style: solid solid none solid; padding: 2px 3px; white-space: nowrap;}\n"
+ "table.tbl th.empty { border-style: none; empty-cells: hide; background: white;}\n"
+ "table.tbl th.desc { background: white; border-style: solid solid none solid; text-align: left; padding: 2px 3px;}\n"
+ "\n"
+ "table.lgd { border-collapse: collapse; border-width: 1px; border-style: none none none solid; border-color: black;}\n"
+ "table.lgd td { border-width: 1px; border-style: solid solid solid solid; border-color: gray; padding: 2px;}\n"
+ "table.lgd td.noborder { border-style: none; padding: 2px; white-space: nowrap;}\n"
+ "table.det { border-collapse: collapse; border-style: none; }\n"
+ "table.det th { text-align: left; border-width: 0px; padding: 0px 1px 0px 0px; font-style:normal;font-size:11px;font-weight:bold;font-family: sans-serif;}\n"
+ "table.det td { text-align: right; border-width: 0px; padding: 0px 0px 0px 4px; white-space: nowrap; font-style:normal;font-size:11px;font-weight:normal;}\n"
+ "u {text-decoration:none; border-bottom: 1px dotted black;}\n"
+ "div.tips {\n"
+ " display:block;\n"
+ " visibility:hidden;\n"
+ " z-index:2147483647;\n"
+ " position:absolute;\n"
+ " padding:2px 4px 3px;\n"
+ " background:#f0f060; color:#000000;\n"
+ " border:1px solid #7040c0;\n"
+ " white-space:nowrap;\n"
+ " font-style:normal;font-size:11px;font-weight:normal;\n"
+ " -moz-border-radius:3px;-webkit-border-radius:3px;border-radius:3px;\n"
+ " -moz-box-shadow:gray 2px 2px 3px;-webkit-box-shadow:gray 2px 2px 3px;box-shadow:gray 2px 2px 3px;\n"
+ "}\n"
+ "u:hover div.tips {visibility:visible;}\n"
+ "-->\n"
+ "</style></head>\n",
+ (uri->flags & ST_SHNODE) ? " on " : "",
+ (uri->flags & ST_SHNODE) ? (uri->node ? uri->node : global.node) : ""
+ );
+}
+
+/* Dumps the HTML stats information block to the trash, using the state from
+ * stream interface <si> and per-uri parameters <uri>. The caller is responsible
+ * for clearing the trash if needed.
+ */
+static void stats_dump_html_info(struct stream_interface *si, struct uri_auth *uri)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+ unsigned int up = (now.tv_sec - start_date.tv_sec);
+ char scope_txt[STAT_SCOPE_TXT_MAXLEN + sizeof STAT_SCOPE_PATTERN];
+
+ /* WARNING! this has to fit the first packet too.
+ * We are around 3.5 kB, and adding entries will
+ * become tricky if we want to support 4kB buffers!
+ */
+ chunk_appendf(&trash,
+ "<body><h1><a href=\"" PRODUCT_URL "\" style=\"text-decoration: none;\">"
+ PRODUCT_NAME "%s</a></h1>\n"
+ "<h2>Statistics Report for pid %d%s%s%s%s</h2>\n"
+ "<hr width=\"100%%\" class=\"hr\">\n"
+ "<h3>&gt; General process information</h3>\n"
+ "<table border=0><tr><td align=\"left\" nowrap width=\"1%%\">\n"
+ "<p><b>pid = </b> %d (process #%d, nbproc = %d)<br>\n"
+ "<b>uptime = </b> %dd %dh%02dm%02ds<br>\n"
+ "<b>system limits:</b> memmax = %s%s; ulimit-n = %d<br>\n"
+ "<b>maxsock = </b> %d; <b>maxconn = </b> %d; <b>maxpipes = </b> %d<br>\n"
+ "current conns = %d; current pipes = %d/%d; conn rate = %d/sec<br>\n"
+ "Running tasks: %d/%d; idle = %d %%<br>\n"
+ "</td><td align=\"center\" nowrap>\n"
+ "<table class=\"lgd\"><tr>\n"
+ "<td class=\"active_up\"> </td><td class=\"noborder\">active UP </td>"
+ "<td class=\"backup_up\"> </td><td class=\"noborder\">backup UP </td>"
+ "</tr><tr>\n"
+ "<td class=\"active_going_down\"></td><td class=\"noborder\">active UP, going down </td>"
+ "<td class=\"backup_going_down\"></td><td class=\"noborder\">backup UP, going down </td>"
+ "</tr><tr>\n"
+ "<td class=\"active_going_up\"></td><td class=\"noborder\">active DOWN, going up </td>"
+ "<td class=\"backup_going_up\"></td><td class=\"noborder\">backup DOWN, going up </td>"
+ "</tr><tr>\n"
+ "<td class=\"active_down\"></td><td class=\"noborder\">active or backup DOWN </td>"
+ "<td class=\"active_no_check\"></td><td class=\"noborder\">not checked </td>"
+ "</tr><tr>\n"
+ "<td class=\"maintain\"></td><td class=\"noborder\" colspan=\"3\">active or backup DOWN for maintenance (MAINT) </td>"
+ "</tr><tr>\n"
+ "<td class=\"active_draining\"></td><td class=\"noborder\" colspan=\"3\">active or backup SOFT STOPPED for maintenance </td>"
+ "</tr></table>\n"
+ "Note: \"NOLB\"/\"DRAIN\" = UP with load-balancing disabled."
+ "</td>"
+ "<td align=\"left\" valign=\"top\" nowrap width=\"1%%\">"
+ "<b>Display option:</b><ul style=\"margin-top: 0.25em;\">"
+ "",
+ (uri->flags & ST_HIDEVER) ? "" : (STATS_VERSION_STRING),
+ pid, (uri->flags & ST_SHNODE) ? " on " : "",
+ (uri->flags & ST_SHNODE) ? (uri->node ? uri->node : global.node) : "",
+ (uri->flags & ST_SHDESC) ? ": " : "",
+ (uri->flags & ST_SHDESC) ? (uri->desc ? uri->desc : global.desc) : "",
+ pid, relative_pid, global.nbproc,
+ up / 86400, (up % 86400) / 3600,
+ (up % 3600) / 60, (up % 60),
+ global.rlimit_memmax ? ultoa(global.rlimit_memmax) : "unlimited",
+ global.rlimit_memmax ? " MB" : "",
+ global.rlimit_nofile,
+ global.maxsock, global.maxconn, global.maxpipes,
+ actconn, pipes_used, pipes_used+pipes_free, read_freq_ctr(&global.conn_per_sec),
+ run_queue_cur, nb_tasks_cur, idle_pct
+ );
+
+ /* scope_txt = search query, appctx->ctx.stats.scope_len is always <= STAT_SCOPE_TXT_MAXLEN */
+ memcpy(scope_txt, bo_ptr(si_ob(si)) + appctx->ctx.stats.scope_str, appctx->ctx.stats.scope_len);
+ scope_txt[appctx->ctx.stats.scope_len] = '\0';
+
+ chunk_appendf(&trash,
+ "<li><form method=\"GET\">Scope : <input value=\"%s\" name=\"" STAT_SCOPE_INPUT_NAME "\" size=\"8\" maxlength=\"%d\" tabindex=\"1\"/></form>\n",
+ (appctx->ctx.stats.scope_len > 0) ? scope_txt : "",
+ STAT_SCOPE_TXT_MAXLEN);
+
+ /* scope_txt = search pattern + search query, appctx->ctx.stats.scope_len is always <= STAT_SCOPE_TXT_MAXLEN */
+ scope_txt[0] = 0;
+ if (appctx->ctx.stats.scope_len) {
+ strcpy(scope_txt, STAT_SCOPE_PATTERN);
+ memcpy(scope_txt + strlen(STAT_SCOPE_PATTERN), bo_ptr(si_ob(si)) + appctx->ctx.stats.scope_str, appctx->ctx.stats.scope_len);
+ scope_txt[strlen(STAT_SCOPE_PATTERN) + appctx->ctx.stats.scope_len] = 0;
+ }
+
+ if (appctx->ctx.stats.flags & STAT_HIDE_DOWN)
+ chunk_appendf(&trash,
+ "<li><a href=\"%s%s%s%s\">Show all servers</a><br>\n",
+ uri->uri_prefix,
+ "",
+ (appctx->ctx.stats.flags & STAT_NO_REFRESH) ? ";norefresh" : "",
+ scope_txt);
+ else
+ chunk_appendf(&trash,
+ "<li><a href=\"%s%s%s%s\">Hide 'DOWN' servers</a><br>\n",
+ uri->uri_prefix,
+ ";up",
+ (appctx->ctx.stats.flags & STAT_NO_REFRESH) ? ";norefresh" : "",
+ scope_txt);
+
+ if (uri->refresh > 0) {
+ if (appctx->ctx.stats.flags & STAT_NO_REFRESH)
+ chunk_appendf(&trash,
+ "<li><a href=\"%s%s%s%s\">Enable refresh</a><br>\n",
+ uri->uri_prefix,
+ (appctx->ctx.stats.flags & STAT_HIDE_DOWN) ? ";up" : "",
+ "",
+ scope_txt);
+ else
+ chunk_appendf(&trash,
+ "<li><a href=\"%s%s%s%s\">Disable refresh</a><br>\n",
+ uri->uri_prefix,
+ (appctx->ctx.stats.flags & STAT_HIDE_DOWN) ? ";up" : "",
+ ";norefresh",
+ scope_txt);
+ }
+
+ chunk_appendf(&trash,
+ "<li><a href=\"%s%s%s%s\">Refresh now</a><br>\n",
+ uri->uri_prefix,
+ (appctx->ctx.stats.flags & STAT_HIDE_DOWN) ? ";up" : "",
+ (appctx->ctx.stats.flags & STAT_NO_REFRESH) ? ";norefresh" : "",
+ scope_txt);
+
+ chunk_appendf(&trash,
+ "<li><a href=\"%s;csv%s%s\">CSV export</a><br>\n",
+ uri->uri_prefix,
+ (uri->refresh > 0) ? ";norefresh" : "",
+ scope_txt);
+
+ chunk_appendf(&trash,
+ "</ul></td>"
+ "<td align=\"left\" valign=\"top\" nowrap width=\"1%%\">"
+ "<b>External resources:</b><ul style=\"margin-top: 0.25em;\">\n"
+ "<li><a href=\"" PRODUCT_URL "\">Primary site</a><br>\n"
+ "<li><a href=\"" PRODUCT_URL_UPD "\">Updates (v" PRODUCT_BRANCH ")</a><br>\n"
+ "<li><a href=\"" PRODUCT_URL_DOC "\">Online manual</a><br>\n"
+ "</ul>"
+ "</td>"
+ "</tr></table>\n"
+ ""
+ );
+
+ if (appctx->ctx.stats.st_code) {
+ switch (appctx->ctx.stats.st_code) {
+ case STAT_STATUS_DONE:
+ chunk_appendf(&trash,
+ "<p><div class=active_up>"
+ "<a class=lfsb href=\"%s%s%s%s\" title=\"Remove this message\">[X]</a> "
+ "Action processed successfully."
+ "</div>\n", uri->uri_prefix,
+ (appctx->ctx.stats.flags & STAT_HIDE_DOWN) ? ";up" : "",
+ (appctx->ctx.stats.flags & STAT_NO_REFRESH) ? ";norefresh" : "",
+ scope_txt);
+ break;
+ case STAT_STATUS_NONE:
+ chunk_appendf(&trash,
+ "<p><div class=active_going_down>"
+ "<a class=lfsb href=\"%s%s%s%s\" title=\"Remove this message\">[X]</a> "
+ "Nothing has changed."
+ "</div>\n", uri->uri_prefix,
+ (appctx->ctx.stats.flags & STAT_HIDE_DOWN) ? ";up" : "",
+ (appctx->ctx.stats.flags & STAT_NO_REFRESH) ? ";norefresh" : "",
+ scope_txt);
+ break;
+ case STAT_STATUS_PART:
+ chunk_appendf(&trash,
+ "<p><div class=active_going_down>"
+ "<a class=lfsb href=\"%s%s%s%s\" title=\"Remove this message\">[X]</a> "
+ "Action partially processed.<br>"
+ "Some server names are probably unknown or ambiguous (duplicated names in the backend)."
+ "</div>\n", uri->uri_prefix,
+ (appctx->ctx.stats.flags & STAT_HIDE_DOWN) ? ";up" : "",
+ (appctx->ctx.stats.flags & STAT_NO_REFRESH) ? ";norefresh" : "",
+ scope_txt);
+ break;
+ case STAT_STATUS_ERRP:
+ chunk_appendf(&trash,
+ "<p><div class=active_down>"
+ "<a class=lfsb href=\"%s%s%s%s\" title=\"Remove this message\">[X]</a> "
+ "Action not processed because of invalid parameters."
+ "<ul>"
+ "<li>The action is maybe unknown.</li>"
+ "<li>The backend name is probably unknown or ambiguous (duplicated names).</li>"
+ "<li>Some server names are probably unknown or ambiguous (duplicated names in the backend).</li>"
+ "</ul>"
+ "</div>\n", uri->uri_prefix,
+ (appctx->ctx.stats.flags & STAT_HIDE_DOWN) ? ";up" : "",
+ (appctx->ctx.stats.flags & STAT_NO_REFRESH) ? ";norefresh" : "",
+ scope_txt);
+ break;
+ case STAT_STATUS_EXCD:
+ chunk_appendf(&trash,
+ "<p><div class=active_down>"
+ "<a class=lfsb href=\"%s%s%s%s\" title=\"Remove this message\">[X]</a> "
+ "<b>Action not processed: the buffer couldn't store all the data.<br>"
+ "You should retry with fewer servers at a time.</b>"
+ "</div>\n", uri->uri_prefix,
+ (appctx->ctx.stats.flags & STAT_HIDE_DOWN) ? ";up" : "",
+ (appctx->ctx.stats.flags & STAT_NO_REFRESH) ? ";norefresh" : "",
+ scope_txt);
+ break;
+ case STAT_STATUS_DENY:
+ chunk_appendf(&trash,
+ "<p><div class=active_down>"
+ "<a class=lfsb href=\"%s%s%s%s\" title=\"Remove this message\">[X]</a> "
+ "<b>Action denied.</b>"
+ "</div>\n", uri->uri_prefix,
+ (appctx->ctx.stats.flags & STAT_HIDE_DOWN) ? ";up" : "",
+ (appctx->ctx.stats.flags & STAT_NO_REFRESH) ? ";norefresh" : "",
+ scope_txt);
+ break;
+ default:
+ chunk_appendf(&trash,
+ "<p><div class=active_no_check>"
+ "<a class=lfsb href=\"%s%s%s%s\" title=\"Remove this message\">[X]</a> "
+ "Unexpected result."
+ "</div>\n", uri->uri_prefix,
+ (appctx->ctx.stats.flags & STAT_HIDE_DOWN) ? ";up" : "",
+ (appctx->ctx.stats.flags & STAT_NO_REFRESH) ? ";norefresh" : "",
+ scope_txt);
+ }
+ chunk_appendf(&trash, "<p>\n");
+ }
+}
+
+/* Dumps the HTML stats trailer block to the trash. The caller is responsible
+ * for clearing the trash if needed.
+ */
+static void stats_dump_html_end()
+{
+ chunk_appendf(&trash, "</body></html>\n");
+}
+
+/* This function dumps statistics onto the stream interface's read buffer in
+ * either CSV or HTML format. <uri> contains some HTML-specific parameters that
+ * are ignored for CSV format (hence <uri> may be NULL there). It returns 0 if
+ * it had to stop writing data and an I/O is needed, 1 if the dump is finished
+ * and the stream must be closed, or -1 in case of any error. This function is
+ * used by both the CLI and the HTTP handlers.
+ */
+static int stats_dump_stat_to_buffer(struct stream_interface *si, struct uri_auth *uri)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+ struct channel *rep = si_ic(si);
+ struct proxy *px;
+
+ chunk_reset(&trash);
+
+ switch (appctx->st2) {
+ case STAT_ST_INIT:
+ appctx->st2 = STAT_ST_HEAD; /* let's start producing data */
+ /* fall through */
+
+ case STAT_ST_HEAD:
+ if (appctx->ctx.stats.flags & STAT_FMT_HTML)
+ stats_dump_html_head(uri);
+ else
+ stats_dump_csv_header();
+
+ if (bi_putchk(rep, &trash) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+
+ appctx->st2 = STAT_ST_INFO;
+ /* fall through */
+
+ case STAT_ST_INFO:
+ if (appctx->ctx.stats.flags & STAT_FMT_HTML) {
+ stats_dump_html_info(si, uri);
+ if (bi_putchk(rep, &trash) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+ }
+
+ appctx->ctx.stats.px = proxy;
+ appctx->ctx.stats.px_st = STAT_PX_ST_INIT;
+ appctx->st2 = STAT_ST_LIST;
+ /* fall through */
+
+ case STAT_ST_LIST:
+ /* dump proxies */
+ while (appctx->ctx.stats.px) {
+ if (buffer_almost_full(rep->buf)) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+
+ px = appctx->ctx.stats.px;
+ /* skip the disabled proxies, global frontend and non-networked ones */
+ if (px->state != PR_STSTOPPED && px->uuid > 0 && (px->cap & (PR_CAP_FE | PR_CAP_BE)))
+ if (stats_dump_proxy_to_buffer(si, px, uri) == 0)
+ return 0;
+
+ appctx->ctx.stats.px = px->next;
+ appctx->ctx.stats.px_st = STAT_PX_ST_INIT;
+ }
+ /* here, we just have reached the last proxy */
+
+ appctx->st2 = STAT_ST_END;
+ /* fall through */
+
+ case STAT_ST_END:
+ if (appctx->ctx.stats.flags & STAT_FMT_HTML) {
+ stats_dump_html_end();
+ if (bi_putchk(rep, &trash) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+ }
+
+ appctx->st2 = STAT_ST_FIN;
+ /* fall through */
+
+ case STAT_ST_FIN:
+ return 1;
+
+ default:
+ /* unknown state! */
+ appctx->st2 = STAT_ST_FIN;
+ return -1;
+ }
+}
+
+/* We reached the stats page through a POST request. The appctx is
+ * expected to have already been allocated by the caller.
+ * Parse the posted data and enable/disable servers if necessary.
+ * Returns 1 if the request was parsed, or zero if more data is needed.
+ */
+static int stats_process_http_post(struct stream_interface *si)
+{
+ struct stream *s = si_strm(si);
+ struct appctx *appctx = objt_appctx(si->end);
+
+ struct proxy *px = NULL;
+ struct server *sv = NULL;
+
+ char key[LINESIZE];
+ int action = ST_ADM_ACTION_NONE;
+ int reprocess = 0;
+
+ int total_servers = 0;
+ int altered_servers = 0;
+
+ char *first_param, *cur_param, *next_param, *end_params;
+ char *st_cur_param = NULL;
+ char *st_next_param = NULL;
+
+ struct chunk *temp;
+ int reql;
+
+ temp = get_trash_chunk();
+ if (temp->size < s->txn->req.body_len) {
+ /* too large request */
+ appctx->ctx.stats.st_code = STAT_STATUS_EXCD;
+ goto out;
+ }
+
+ reql = bo_getblk(si_oc(si), temp->str, s->txn->req.body_len, s->txn->req.eoh + 2);
+ if (reql <= 0) {
+ /* we need more data */
+ appctx->ctx.stats.st_code = STAT_STATUS_NONE;
+ return 0;
+ }
+
+ first_param = temp->str;
+ end_params = temp->str + reql;
+ cur_param = next_param = end_params;
+ *end_params = '\0';
+
+ appctx->ctx.stats.st_code = STAT_STATUS_NONE;
+
+ /*
+ * Parse the parameters in reverse order so that only the last value is kept.
+ * In the HTML form, the backend and the action come last.
+ */
+ while (cur_param > first_param) {
+ char *value;
+ int poffset, plen;
+
+ cur_param--;
+
+ if ((*cur_param == '&') || (cur_param == first_param)) {
+ reprocess_servers:
+ /* Parse the key */
+ poffset = (cur_param != first_param ? 1 : 0);
+ plen = next_param - cur_param + (cur_param == first_param ? 1 : 0);
+ if ((plen > 0) && (plen <= sizeof(key))) {
+ strncpy(key, cur_param + poffset, plen);
+ key[plen - 1] = '\0';
+ } else {
+ appctx->ctx.stats.st_code = STAT_STATUS_EXCD;
+ goto out;
+ }
+
+ /* Parse the value */
+ value = key;
+ while (*value != '\0' && *value != '=') {
+ value++;
+ }
+ if (*value == '=') {
+ /* Ok, a value is found, we can mark the end of the key */
+ *value++ = '\0';
+ }
+ if (url_decode(key) < 0 || url_decode(value) < 0)
+ break;
+
+ /* Now we can check the key to see what to do */
+ if (!px && (strcmp(key, "b") == 0)) {
+ if ((px = proxy_be_by_name(value)) == NULL) {
+ /* the backend name is unknown or ambiguous (duplicate names) */
+ appctx->ctx.stats.st_code = STAT_STATUS_ERRP;
+ goto out;
+ }
+ }
+ else if (!action && (strcmp(key, "action") == 0)) {
+ if (strcmp(value, "ready") == 0) {
+ action = ST_ADM_ACTION_READY;
+ }
+ else if (strcmp(value, "drain") == 0) {
+ action = ST_ADM_ACTION_DRAIN;
+ }
+ else if (strcmp(value, "maint") == 0) {
+ action = ST_ADM_ACTION_MAINT;
+ }
+ else if (strcmp(value, "shutdown") == 0) {
+ action = ST_ADM_ACTION_SHUTDOWN;
+ }
+ else if (strcmp(value, "dhlth") == 0) {
+ action = ST_ADM_ACTION_DHLTH;
+ }
+ else if (strcmp(value, "ehlth") == 0) {
+ action = ST_ADM_ACTION_EHLTH;
+ }
+ else if (strcmp(value, "hrunn") == 0) {
+ action = ST_ADM_ACTION_HRUNN;
+ }
+ else if (strcmp(value, "hnolb") == 0) {
+ action = ST_ADM_ACTION_HNOLB;
+ }
+ else if (strcmp(value, "hdown") == 0) {
+ action = ST_ADM_ACTION_HDOWN;
+ }
+ else if (strcmp(value, "dagent") == 0) {
+ action = ST_ADM_ACTION_DAGENT;
+ }
+ else if (strcmp(value, "eagent") == 0) {
+ action = ST_ADM_ACTION_EAGENT;
+ }
+ else if (strcmp(value, "arunn") == 0) {
+ action = ST_ADM_ACTION_ARUNN;
+ }
+ else if (strcmp(value, "adown") == 0) {
+ action = ST_ADM_ACTION_ADOWN;
+ }
+ /* else these are the old supported methods */
+ else if (strcmp(value, "disable") == 0) {
+ action = ST_ADM_ACTION_DISABLE;
+ }
+ else if (strcmp(value, "enable") == 0) {
+ action = ST_ADM_ACTION_ENABLE;
+ }
+ else if (strcmp(value, "stop") == 0) {
+ action = ST_ADM_ACTION_STOP;
+ }
+ else if (strcmp(value, "start") == 0) {
+ action = ST_ADM_ACTION_START;
+ }
+ else {
+ appctx->ctx.stats.st_code = STAT_STATUS_ERRP;
+ goto out;
+ }
+ }
+ else if (strcmp(key, "s") == 0) {
+ if (!(px && action)) {
+ /*
+ * Indicates that we'll need to reprocess the parameters
+ * as soon as backend and action are known
+ */
+ if (!reprocess) {
+ st_cur_param = cur_param;
+ st_next_param = next_param;
+ }
+ reprocess = 1;
+ }
+ else if ((sv = findserver(px, value)) != NULL) {
+ switch (action) {
+ case ST_ADM_ACTION_DISABLE:
+ if (!(sv->admin & SRV_ADMF_FMAINT)) {
+ altered_servers++;
+ total_servers++;
+ srv_set_admin_flag(sv, SRV_ADMF_FMAINT);
+ }
+ break;
+ case ST_ADM_ACTION_ENABLE:
+ if (sv->admin & SRV_ADMF_FMAINT) {
+ altered_servers++;
+ total_servers++;
+ srv_clr_admin_flag(sv, SRV_ADMF_FMAINT);
+ }
+ break;
+ case ST_ADM_ACTION_STOP:
+ if (!(sv->admin & SRV_ADMF_FDRAIN)) {
+ srv_set_admin_flag(sv, SRV_ADMF_FDRAIN);
+ altered_servers++;
+ total_servers++;
+ }
+ break;
+ case ST_ADM_ACTION_START:
+ if (sv->admin & SRV_ADMF_FDRAIN) {
+ srv_clr_admin_flag(sv, SRV_ADMF_FDRAIN);
+ altered_servers++;
+ total_servers++;
+ }
+ break;
+ case ST_ADM_ACTION_DHLTH:
+ if (sv->check.state & CHK_ST_CONFIGURED) {
+ sv->check.state &= ~CHK_ST_ENABLED;
+ altered_servers++;
+ total_servers++;
+ }
+ break;
+ case ST_ADM_ACTION_EHLTH:
+ if (sv->check.state & CHK_ST_CONFIGURED) {
+ sv->check.state |= CHK_ST_ENABLED;
+ altered_servers++;
+ total_servers++;
+ }
+ break;
+ case ST_ADM_ACTION_HRUNN:
+ if (!(sv->track)) {
+ sv->check.health = sv->check.rise + sv->check.fall - 1;
+ srv_set_running(sv, "changed from Web interface");
+ altered_servers++;
+ total_servers++;
+ }
+ break;
+ case ST_ADM_ACTION_HNOLB:
+ if (!(sv->track)) {
+ sv->check.health = sv->check.rise + sv->check.fall - 1;
+ srv_set_stopping(sv, "changed from Web interface");
+ altered_servers++;
+ total_servers++;
+ }
+ break;
+ case ST_ADM_ACTION_HDOWN:
+ if (!(sv->track)) {
+ sv->check.health = 0;
+ srv_set_stopped(sv, "changed from Web interface");
+ altered_servers++;
+ total_servers++;
+ }
+ break;
+ case ST_ADM_ACTION_DAGENT:
+ if (sv->agent.state & CHK_ST_CONFIGURED) {
+ sv->agent.state &= ~CHK_ST_ENABLED;
+ altered_servers++;
+ total_servers++;
+ }
+ break;
+ case ST_ADM_ACTION_EAGENT:
+ if (sv->agent.state & CHK_ST_CONFIGURED) {
+ sv->agent.state |= CHK_ST_ENABLED;
+ altered_servers++;
+ total_servers++;
+ }
+ break;
+ case ST_ADM_ACTION_ARUNN:
+ if (sv->agent.state & CHK_ST_ENABLED) {
+ sv->agent.health = sv->agent.rise + sv->agent.fall - 1;
+ srv_set_running(sv, "changed from Web interface");
+ altered_servers++;
+ total_servers++;
+ }
+ break;
+ case ST_ADM_ACTION_ADOWN:
+ if (sv->agent.state & CHK_ST_ENABLED) {
+ sv->agent.health = 0;
+ srv_set_stopped(sv, "changed from Web interface");
+ altered_servers++;
+ total_servers++;
+ }
+ break;
+ case ST_ADM_ACTION_READY:
+ srv_adm_set_ready(sv);
+ altered_servers++;
+ total_servers++;
+ break;
+ case ST_ADM_ACTION_DRAIN:
+ srv_adm_set_drain(sv);
+ altered_servers++;
+ total_servers++;
+ break;
+ case ST_ADM_ACTION_MAINT:
+ srv_adm_set_maint(sv);
+ altered_servers++;
+ total_servers++;
+ break;
+ case ST_ADM_ACTION_SHUTDOWN:
+ if (px->state != PR_STSTOPPED) {
+ struct stream *sess, *sess_bck;
+
+ list_for_each_entry_safe(sess, sess_bck, &sv->actconns, by_srv)
+ if (sess->srv_conn == sv)
+ stream_shutdown(sess, SF_ERR_KILLED);
+
+ altered_servers++;
+ total_servers++;
+ }
+ break;
+ }
+ } else {
+ /* the server name is unknown or ambiguous (duplicate names) */
+ total_servers++;
+ }
+ }
+ if (reprocess && px && action) {
+ /* Now, we know the backend and the action chosen by the user.
+ * We can safely restart from the first server parameter
+ * to reprocess them
+ */
+ cur_param = st_cur_param;
+ next_param = st_next_param;
+ reprocess = 0;
+ goto reprocess_servers;
+ }
+
+ next_param = cur_param;
+ }
+ }
+
+ if (total_servers == 0) {
+ appctx->ctx.stats.st_code = STAT_STATUS_NONE;
+ }
+ else if (altered_servers == 0) {
+ appctx->ctx.stats.st_code = STAT_STATUS_ERRP;
+ }
+ else if (altered_servers == total_servers) {
+ appctx->ctx.stats.st_code = STAT_STATUS_DONE;
+ }
+ else {
+ appctx->ctx.stats.st_code = STAT_STATUS_PART;
+ }
+ out:
+ return 1;
+}
+
+
+static int stats_send_http_headers(struct stream_interface *si)
+{
+ struct stream *s = si_strm(si);
+ struct uri_auth *uri = s->be->uri_auth;
+ struct appctx *appctx = objt_appctx(si->end);
+
+ chunk_printf(&trash,
+ "HTTP/1.1 200 OK\r\n"
+ "Cache-Control: no-cache\r\n"
+ "Connection: close\r\n"
+ "Content-Type: %s\r\n",
+ (appctx->ctx.stats.flags & STAT_FMT_HTML) ? "text/html" : "text/plain");
+
+ if (uri->refresh > 0 && !(appctx->ctx.stats.flags & STAT_NO_REFRESH))
+ chunk_appendf(&trash, "Refresh: %d\r\n",
+ uri->refresh);
+
+ /* we don't send the CRLF in chunked mode, it will be sent with the first chunk's size */
+
+ if (appctx->ctx.stats.flags & STAT_CHUNKED)
+ chunk_appendf(&trash, "Transfer-Encoding: chunked\r\n");
+ else
+ chunk_appendf(&trash, "\r\n");
+
+ s->txn->status = 200;
+ s->logs.tv_request = now;
+
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+
+ return 1;
+}
+
+static int stats_send_http_redirect(struct stream_interface *si)
+{
+ char scope_txt[STAT_SCOPE_TXT_MAXLEN + sizeof STAT_SCOPE_PATTERN];
+ struct stream *s = si_strm(si);
+ struct uri_auth *uri = s->be->uri_auth;
+ struct appctx *appctx = objt_appctx(si->end);
+
+ /* scope_txt = search pattern + search query, appctx->ctx.stats.scope_len is always <= STAT_SCOPE_TXT_MAXLEN */
+ scope_txt[0] = 0;
+ if (appctx->ctx.stats.scope_len) {
+ strcpy(scope_txt, STAT_SCOPE_PATTERN);
+ memcpy(scope_txt + strlen(STAT_SCOPE_PATTERN), bo_ptr(si_ob(si)) + appctx->ctx.stats.scope_str, appctx->ctx.stats.scope_len);
+ scope_txt[strlen(STAT_SCOPE_PATTERN) + appctx->ctx.stats.scope_len] = 0;
+ }
+
+ /* We don't want to land on the posted stats page because a refresh will
+ * repost the data. We don't want this to happen by accident, so we redirect
+ * the browser to the stats page with a GET.
+ */
+ chunk_printf(&trash,
+ "HTTP/1.1 303 See Other\r\n"
+ "Cache-Control: no-cache\r\n"
+ "Content-Type: text/plain\r\n"
+ "Connection: close\r\n"
+ "Location: %s;st=%s%s%s%s\r\n"
+ "\r\n",
+ uri->uri_prefix,
+ ((appctx->ctx.stats.st_code > STAT_STATUS_INIT) &&
+ (appctx->ctx.stats.st_code < STAT_STATUS_SIZE) &&
+ stat_status_codes[appctx->ctx.stats.st_code]) ?
+ stat_status_codes[appctx->ctx.stats.st_code] :
+ stat_status_codes[STAT_STATUS_UNKN],
+ (appctx->ctx.stats.flags & STAT_HIDE_DOWN) ? ";up" : "",
+ (appctx->ctx.stats.flags & STAT_NO_REFRESH) ? ";norefresh" : "",
+ scope_txt);
+
+ s->txn->status = 303;
+ s->logs.tv_request = now;
+
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+
+ return 1;
+}
+
+/* This I/O handler runs as an applet embedded in a stream interface. It is
+ * used to send HTTP stats over a TCP socket. The mechanism is very simple.
+ * appctx->st0 contains the operation in progress (dump, done). The handler
+ * automatically unregisters itself once transfer is complete.
+ */
+static void http_stats_io_handler(struct appctx *appctx)
+{
+ struct stream_interface *si = appctx->owner;
+ struct stream *s = si_strm(si);
+ struct channel *req = si_oc(si);
+ struct channel *res = si_ic(si);
+
+ if (unlikely(si->state == SI_ST_DIS || si->state == SI_ST_CLO))
+ goto out;
+
+ /* check that the output is not closed */
+ if (res->flags & (CF_SHUTW|CF_SHUTW_NOW))
+ appctx->st0 = STAT_HTTP_DONE;
+
+ /* all states are processed in sequence */
+ if (appctx->st0 == STAT_HTTP_HEAD) {
+ if (stats_send_http_headers(si)) {
+ if (s->txn->meth == HTTP_METH_HEAD)
+ appctx->st0 = STAT_HTTP_DONE;
+ else
+ appctx->st0 = STAT_HTTP_DUMP;
+ }
+ }
+
+ if (appctx->st0 == STAT_HTTP_DUMP) {
+ unsigned int prev_len = si_ib(si)->i;
+ unsigned int data_len;
+ unsigned int last_len;
+ unsigned int last_fwd = 0;
+
+ if (appctx->ctx.stats.flags & STAT_CHUNKED) {
+ /* One difficulty we're facing is that we must prevent
+ * the input data from being automatically forwarded to
+ * the output area. For this, we temporarily disable
+ * forwarding on the channel.
+ */
+ last_fwd = si_ic(si)->to_forward;
+ si_ic(si)->to_forward = 0;
+ chunk_printf(&trash, "\r\n000000\r\n");
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ si_applet_cant_put(si);
+ si_ic(si)->to_forward = last_fwd;
+ goto out;
+ }
+ }
+
+ data_len = si_ib(si)->i;
+ if (stats_dump_stat_to_buffer(si, s->be->uri_auth))
+ appctx->st0 = STAT_HTTP_DONE;
+
+ last_len = si_ib(si)->i;
+
+ /* Now we must either adjust or remove the chunk size. This is
+ * not easy because the chunk size might wrap at the end of the
+ * buffer, so we pretend we have nothing in the buffer, we write
+ * the size, then restore the buffer's contents. Note that we can
+ * only do that because no forwarding is scheduled on the stats
+ * applet.
+ */
+ if (appctx->ctx.stats.flags & STAT_CHUNKED) {
+ si_ic(si)->total -= (last_len - prev_len);
+ si_ib(si)->i -= (last_len - prev_len);
+
+ if (last_len != data_len) {
+ chunk_printf(&trash, "\r\n%06x\r\n", (last_len - data_len));
+ if (bi_putchk(si_ic(si), &trash) == -1)
+ si_applet_cant_put(si);
+
+ si_ic(si)->total += (last_len - data_len);
+ si_ib(si)->i += (last_len - data_len);
+ }
+ /* now re-enable forwarding */
+ channel_forward(si_ic(si), last_fwd);
+ }
+ }
+
+ if (appctx->st0 == STAT_HTTP_POST) {
+ if (stats_process_http_post(si))
+ appctx->st0 = STAT_HTTP_LAST;
+ else if (si_oc(si)->flags & CF_SHUTR)
+ appctx->st0 = STAT_HTTP_DONE;
+ }
+
+ if (appctx->st0 == STAT_HTTP_LAST) {
+ if (stats_send_http_redirect(si))
+ appctx->st0 = STAT_HTTP_DONE;
+ }
+
+ if (appctx->st0 == STAT_HTTP_DONE) {
+ if (appctx->ctx.stats.flags & STAT_CHUNKED) {
+ chunk_printf(&trash, "\r\n0\r\n\r\n");
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ si_applet_cant_put(si);
+ goto out;
+ }
+ }
+ /* eat the whole request */
+ bo_skip(si_oc(si), si_ob(si)->o);
+ res->flags |= CF_READ_NULL;
+ si_shutr(si);
+ }
+
+ if ((res->flags & CF_SHUTR) && (si->state == SI_ST_EST))
+ si_shutw(si);
+
+ if (appctx->st0 == STAT_HTTP_DONE) {
+ if ((req->flags & CF_SHUTW) && (si->state == SI_ST_EST)) {
+ si_shutr(si);
+ res->flags |= CF_READ_NULL;
+ }
+ }
+ out:
+ /* just to make gcc happy */ ;
+}
+
+
+static inline const char *get_conn_ctrl_name(const struct connection *conn)
+{
+ if (!conn_ctrl_ready(conn))
+ return "NONE";
+ return conn->ctrl->name;
+}
+
+static inline const char *get_conn_xprt_name(const struct connection *conn)
+{
+ static char ptr[17];
+
+ if (!conn_xprt_ready(conn))
+ return "NONE";
+
+ if (conn->xprt == &raw_sock)
+ return "RAW";
+
+#ifdef USE_OPENSSL
+ if (conn->xprt == &ssl_sock)
+ return "SSL";
+#endif
+ snprintf(ptr, sizeof(ptr), "%p", conn->xprt);
+ return ptr;
+}
+
+static inline const char *get_conn_data_name(const struct connection *conn)
+{
+ static char ptr[17];
+
+ if (!conn->data)
+ return "NONE";
+
+ if (conn->data == &sess_conn_cb)
+ return "SESS";
+
+ if (conn->data == &si_conn_cb)
+ return "STRM";
+
+ if (conn->data == &check_conn_cb)
+ return "CHCK";
+
+ snprintf(ptr, sizeof(ptr), "%p", conn->data);
+ return ptr;
+}
+
+/* This function dumps a complete stream state onto the stream interface's
+ * read buffer. The stream has to be set in sess->target. It returns
+ * 0 if the output buffer is full and it needs to be called again, otherwise
+ * non-zero. It is designed to be called from stats_dump_sess_to_buffer() below.
+ */
+static int stats_dump_full_sess_to_buffer(struct stream_interface *si, struct stream *sess)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+ struct tm tm;
+ extern const char *monthname[12];
+ char pn[INET6_ADDRSTRLEN];
+ struct connection *conn;
+ struct appctx *tmpctx;
+
+ chunk_reset(&trash);
+
+ if (appctx->ctx.sess.section > 0 && appctx->ctx.sess.uid != sess->uniq_id) {
+ /* stream changed, no need to go any further */
+ chunk_appendf(&trash, " *** session terminated while we were watching it ***\n");
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+ appctx->ctx.sess.uid = 0;
+ appctx->ctx.sess.section = 0;
+ return 1;
+ }
+
+ switch (appctx->ctx.sess.section) {
+ case 0: /* main status of the stream */
+ appctx->ctx.sess.uid = sess->uniq_id;
+ appctx->ctx.sess.section = 1;
+ /* fall through */
+
+ case 1:
+ get_localtime(sess->logs.accept_date.tv_sec, &tm);
+ chunk_appendf(&trash,
+ "%p: [%02d/%s/%04d:%02d:%02d:%02d.%06d] id=%u proto=%s",
+ sess,
+ tm.tm_mday, monthname[tm.tm_mon], tm.tm_year+1900,
+ tm.tm_hour, tm.tm_min, tm.tm_sec, (int)(sess->logs.accept_date.tv_usec),
+ sess->uniq_id,
+ strm_li(sess) ? strm_li(sess)->proto->name : "?");
+
+ conn = objt_conn(strm_orig(sess));
+ switch (conn ? addr_to_str(&conn->addr.from, pn, sizeof(pn)) : AF_UNSPEC) {
+ case AF_INET:
+ case AF_INET6:
+ chunk_appendf(&trash, " source=%s:%d\n",
+ pn, get_host_port(&conn->addr.from));
+ break;
+ case AF_UNIX:
+ chunk_appendf(&trash, " source=unix:%d\n", strm_li(sess)->luid);
+ break;
+ default:
+ /* no more information to print right now */
+ chunk_appendf(&trash, "\n");
+ break;
+ }
+
+ chunk_appendf(&trash,
+ " flags=0x%x, conn_retries=%d, srv_conn=%p, pend_pos=%p\n",
+ sess->flags, sess->si[1].conn_retries, sess->srv_conn, sess->pend_pos);
+
+ chunk_appendf(&trash,
+ " frontend=%s (id=%u mode=%s), listener=%s (id=%u)",
+ strm_fe(sess)->id, strm_fe(sess)->uuid, strm_fe(sess)->mode ? "http" : "tcp",
+ strm_li(sess) ? strm_li(sess)->name ? strm_li(sess)->name : "?" : "?",
+ strm_li(sess) ? strm_li(sess)->luid : 0);
+
+ if (conn)
+ conn_get_to_addr(conn);
+
+ switch (conn ? addr_to_str(&conn->addr.to, pn, sizeof(pn)) : AF_UNSPEC) {
+ case AF_INET:
+ case AF_INET6:
+ chunk_appendf(&trash, " addr=%s:%d\n",
+ pn, get_host_port(&conn->addr.to));
+ break;
+ case AF_UNIX:
+ chunk_appendf(&trash, " addr=unix:%d\n", strm_li(sess)->luid);
+ break;
+ default:
+ /* no more information to print right now */
+ chunk_appendf(&trash, "\n");
+ break;
+ }
+
+ if (sess->be->cap & PR_CAP_BE)
+ chunk_appendf(&trash,
+ " backend=%s (id=%u mode=%s)",
+ sess->be->id,
+ sess->be->uuid, sess->be->mode ? "http" : "tcp");
+ else
+ chunk_appendf(&trash, " backend=<NONE> (id=-1 mode=-)");
+
+ conn = objt_conn(sess->si[1].end);
+ if (conn)
+ conn_get_from_addr(conn);
+
+ switch (conn ? addr_to_str(&conn->addr.from, pn, sizeof(pn)) : AF_UNSPEC) {
+ case AF_INET:
+ case AF_INET6:
+ chunk_appendf(&trash, " addr=%s:%d\n",
+ pn, get_host_port(&conn->addr.from));
+ break;
+ case AF_UNIX:
+ chunk_appendf(&trash, " addr=unix\n");
+ break;
+ default:
+ /* no more information to print right now */
+ chunk_appendf(&trash, "\n");
+ break;
+ }
+
+ if (sess->be->cap & PR_CAP_BE)
+ chunk_appendf(&trash,
+ " server=%s (id=%u)",
+ objt_server(sess->target) ? objt_server(sess->target)->id : "<none>",
+ objt_server(sess->target) ? objt_server(sess->target)->puid : 0);
+ else
+ chunk_appendf(&trash, " server=<NONE> (id=-1)");
+
+ if (conn)
+ conn_get_to_addr(conn);
+
+ switch (conn ? addr_to_str(&conn->addr.to, pn, sizeof(pn)) : AF_UNSPEC) {
+ case AF_INET:
+ case AF_INET6:
+ chunk_appendf(&trash, " addr=%s:%d\n",
+ pn, get_host_port(&conn->addr.to));
+ break;
+ case AF_UNIX:
+ chunk_appendf(&trash, " addr=unix\n");
+ break;
+ default:
+ /* no more information to print right now */
+ chunk_appendf(&trash, "\n");
+ break;
+ }
+
+ chunk_appendf(&trash,
+ " task=%p (state=0x%02x nice=%d calls=%d exp=%s%s",
+ sess->task,
+ sess->task->state,
+ sess->task->nice, sess->task->calls,
+ sess->task->expire ?
+ tick_is_expired(sess->task->expire, now_ms) ? "<PAST>" :
+ human_time(TICKS_TO_MS(sess->task->expire - now_ms),
+ TICKS_TO_MS(1000)) : "<NEVER>",
+ task_in_rq(sess->task) ? ", running" : "");
+
+ chunk_appendf(&trash,
+ " age=%s)\n",
+ human_time(now.tv_sec - sess->logs.accept_date.tv_sec, 1));
+
+ if (sess->txn)
+ chunk_appendf(&trash,
+ " txn=%p flags=0x%x meth=%d status=%d req.st=%s rsp.st=%s waiting=%d\n",
+ sess->txn, sess->txn->flags, sess->txn->meth, sess->txn->status,
+ http_msg_state_str(sess->txn->req.msg_state), http_msg_state_str(sess->txn->rsp.msg_state), !LIST_ISEMPTY(&sess->buffer_wait));
+
+ chunk_appendf(&trash,
+ " si[0]=%p (state=%s flags=0x%02x endp0=%s:%p exp=%s, et=0x%03x)\n",
+ &sess->si[0],
+ si_state_str(sess->si[0].state),
+ sess->si[0].flags,
+ obj_type_name(sess->si[0].end),
+ obj_base_ptr(sess->si[0].end),
+ sess->si[0].exp ?
+ tick_is_expired(sess->si[0].exp, now_ms) ? "<PAST>" :
+ human_time(TICKS_TO_MS(sess->si[0].exp - now_ms),
+ TICKS_TO_MS(1000)) : "<NEVER>",
+ sess->si[0].err_type);
+
+ chunk_appendf(&trash,
+ " si[1]=%p (state=%s flags=0x%02x endp1=%s:%p exp=%s, et=0x%03x)\n",
+ &sess->si[1],
+ si_state_str(sess->si[1].state),
+ sess->si[1].flags,
+ obj_type_name(sess->si[1].end),
+ obj_base_ptr(sess->si[1].end),
+ sess->si[1].exp ?
+ tick_is_expired(sess->si[1].exp, now_ms) ? "<PAST>" :
+ human_time(TICKS_TO_MS(sess->si[1].exp - now_ms),
+ TICKS_TO_MS(1000)) : "<NEVER>",
+ sess->si[1].err_type);
+
+ if ((conn = objt_conn(sess->si[0].end)) != NULL) {
+ chunk_appendf(&trash,
+ " co0=%p ctrl=%s xprt=%s data=%s target=%s:%p\n",
+ conn,
+ get_conn_ctrl_name(conn),
+ get_conn_xprt_name(conn),
+ get_conn_data_name(conn),
+ obj_type_name(conn->target),
+ obj_base_ptr(conn->target));
+
+ chunk_appendf(&trash,
+ " flags=0x%08x fd=%d fd.state=%02x fd.cache=%d updt=%d\n",
+ conn->flags,
+ conn->t.sock.fd,
+ conn->t.sock.fd >= 0 ? fdtab[conn->t.sock.fd].state : 0,
+ conn->t.sock.fd >= 0 ? fdtab[conn->t.sock.fd].cache : 0,
+ conn->t.sock.fd >= 0 ? fdtab[conn->t.sock.fd].updated : 0);
+ }
+ else if ((tmpctx = objt_appctx(sess->si[0].end)) != NULL) {
+ chunk_appendf(&trash,
+ " app0=%p st0=%d st1=%d st2=%d applet=%s\n",
+ tmpctx,
+ tmpctx->st0,
+ tmpctx->st1,
+ tmpctx->st2,
+ tmpctx->applet->name);
+ }
+
+ if ((conn = objt_conn(sess->si[1].end)) != NULL) {
+ chunk_appendf(&trash,
+ " co1=%p ctrl=%s xprt=%s data=%s target=%s:%p\n",
+ conn,
+ get_conn_ctrl_name(conn),
+ get_conn_xprt_name(conn),
+ get_conn_data_name(conn),
+ obj_type_name(conn->target),
+ obj_base_ptr(conn->target));
+
+ chunk_appendf(&trash,
+ " flags=0x%08x fd=%d fd.state=%02x fd.cache=%d updt=%d\n",
+ conn->flags,
+ conn->t.sock.fd,
+ conn->t.sock.fd >= 0 ? fdtab[conn->t.sock.fd].state : 0,
+ conn->t.sock.fd >= 0 ? fdtab[conn->t.sock.fd].cache : 0,
+ conn->t.sock.fd >= 0 ? fdtab[conn->t.sock.fd].updated : 0);
+ }
+ else if ((tmpctx = objt_appctx(sess->si[1].end)) != NULL) {
+ chunk_appendf(&trash,
+ " app1=%p st0=%d st1=%d st2=%d applet=%s\n",
+ tmpctx,
+ tmpctx->st0,
+ tmpctx->st1,
+ tmpctx->st2,
+ tmpctx->applet->name);
+ }
+
+ chunk_appendf(&trash,
+ " req=%p (f=0x%06x an=0x%x pipe=%d tofwd=%d total=%lld)\n"
+ " an_exp=%s",
+ &sess->req,
+ sess->req.flags, sess->req.analysers,
+ sess->req.pipe ? sess->req.pipe->data : 0,
+ sess->req.to_forward, sess->req.total,
+ sess->req.analyse_exp ?
+ human_time(TICKS_TO_MS(sess->req.analyse_exp - now_ms),
+ TICKS_TO_MS(1000)) : "<NEVER>");
+
+ chunk_appendf(&trash,
+ " rex=%s",
+ sess->req.rex ?
+ human_time(TICKS_TO_MS(sess->req.rex - now_ms),
+ TICKS_TO_MS(1000)) : "<NEVER>");
+
+ chunk_appendf(&trash,
+ " wex=%s\n"
+ " buf=%p data=%p o=%d p=%d req.next=%d i=%d size=%d\n",
+ sess->req.wex ?
+ human_time(TICKS_TO_MS(sess->req.wex - now_ms),
+ TICKS_TO_MS(1000)) : "<NEVER>",
+ sess->req.buf,
+ sess->req.buf->data, sess->req.buf->o,
+ (int)(sess->req.buf->p - sess->req.buf->data),
+ sess->txn ? sess->txn->req.next : 0, sess->req.buf->i,
+ sess->req.buf->size);
+
+ chunk_appendf(&trash,
+ " res=%p (f=0x%06x an=0x%x pipe=%d tofwd=%d total=%lld)\n"
+ " an_exp=%s",
+ &sess->res,
+ sess->res.flags, sess->res.analysers,
+ sess->res.pipe ? sess->res.pipe->data : 0,
+ sess->res.to_forward, sess->res.total,
+ sess->res.analyse_exp ?
+ human_time(TICKS_TO_MS(sess->res.analyse_exp - now_ms),
+ TICKS_TO_MS(1000)) : "<NEVER>");
+
+ chunk_appendf(&trash,
+ " rex=%s",
+ sess->res.rex ?
+ human_time(TICKS_TO_MS(sess->res.rex - now_ms),
+ TICKS_TO_MS(1000)) : "<NEVER>");
+
+ chunk_appendf(&trash,
+ " wex=%s\n"
+ " buf=%p data=%p o=%d p=%d rsp.next=%d i=%d size=%d\n",
+ sess->res.wex ?
+ human_time(TICKS_TO_MS(sess->res.wex - now_ms),
+ TICKS_TO_MS(1000)) : "<NEVER>",
+ sess->res.buf,
+ sess->res.buf->data, sess->res.buf->o,
+ (int)(sess->res.buf->p - sess->res.buf->data),
+ sess->txn ? sess->txn->rsp.next : 0, sess->res.buf->i,
+ sess->res.buf->size);
+
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+
+ /* use other states to dump the contents */
+ }
+ /* end of dump */
+ appctx->ctx.sess.uid = 0;
+ appctx->ctx.sess.section = 0;
+ return 1;
+}
+
+#if (defined SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB && TLS_TICKETS_NO > 0)
+static int stats_tlskeys_list(struct stream_interface *si) {
+ struct appctx *appctx = __objt_appctx(si->end);
+
+ switch (appctx->st2) {
+ case STAT_ST_INIT:
+ /* Display the column headers. If the message cannot be sent,
+ * quit the function by returning 0. The function will be called
+ * again later and will restart in the STAT_ST_INIT state.
+ */
+ chunk_reset(&trash);
+ chunk_appendf(&trash, "# id (file)\n");
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+
+ /* Now we start browsing the reference lists.
+ * Note that the following call to LIST_ELEM returns a bad pointer. The only
+ * valid field of this pointer is <list>. It is used with the function
+ * tlskeys_list_get_next() to retrieve the first available entry.
+ */
+ appctx->ctx.tlskeys.ref = LIST_ELEM(&tlskeys_reference, struct tls_keys_ref *, list);
+ appctx->ctx.tlskeys.ref = tlskeys_list_get_next(appctx->ctx.tlskeys.ref, &tlskeys_reference);
+
+ appctx->st2 = STAT_ST_LIST;
+ /* fall through */
+
+ case STAT_ST_LIST:
+ while (appctx->ctx.tlskeys.ref) {
+ chunk_reset(&trash);
+
+ chunk_appendf(&trash, "%d (%s)\n", appctx->ctx.tlskeys.ref->unique_id,
+ appctx->ctx.tlskeys.ref->filename);
+
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ /* let's try again later from this stream. We add ourselves into
+ * this stream's users so that it can remove us upon termination.
+ */
+ si_applet_cant_put(si);
+ return 0;
+ }
+
+ /* get next list entry and check the end of the list */
+ appctx->ctx.tlskeys.ref = tlskeys_list_get_next(appctx->ctx.tlskeys.ref, &tlskeys_reference);
+ }
+
+ appctx->st2 = STAT_ST_FIN;
+ /* fall through */
+
+ default:
+ appctx->st2 = STAT_ST_FIN;
+ return 1;
+ }
+ return 0;
+}
+#endif
+
+static int stats_pats_list(struct stream_interface *si)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+
+ switch (appctx->st2) {
+ case STAT_ST_INIT:
+ /* Display the column headers. If the message cannot be sent,
+ * quit the function by returning 0. The function will be called
+ * again later and will restart in the STAT_ST_INIT state.
+ */
+ chunk_reset(&trash);
+ chunk_appendf(&trash, "# id (file) description\n");
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+
+ /* Now we start browsing the reference lists.
+ * Note that the following call to LIST_ELEM returns a bad pointer. The only
+ * valid field of this pointer is <list>. It is used with the function
+ * pat_list_get_next() to retrieve the first available entry.
+ */
+ appctx->ctx.map.ref = LIST_ELEM(&pattern_reference, struct pat_ref *, list);
+ appctx->ctx.map.ref = pat_list_get_next(appctx->ctx.map.ref, &pattern_reference,
+ appctx->ctx.map.display_flags);
+ appctx->st2 = STAT_ST_LIST;
+ /* fall through */
+
+ case STAT_ST_LIST:
+ while (appctx->ctx.map.ref) {
+ chunk_reset(&trash);
+
+ /* Build the message. If the reference is used by a category other than
+ * the one being listed, display that information in the message.
+ */
+ chunk_appendf(&trash, "%d (%s) %s\n", appctx->ctx.map.ref->unique_id,
+ appctx->ctx.map.ref->reference ? appctx->ctx.map.ref->reference : "",
+ appctx->ctx.map.ref->display);
+
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ /* let's try again later from this stream. We add ourselves into
+ * this stream's users so that it can remove us upon termination.
+ */
+ si_applet_cant_put(si);
+ return 0;
+ }
+
+ /* get next list entry and check the end of the list */
+ appctx->ctx.map.ref = pat_list_get_next(appctx->ctx.map.ref, &pattern_reference,
+ appctx->ctx.map.display_flags);
+ }
+
+ appctx->st2 = STAT_ST_FIN;
+ /* fall through */
+
+ default:
+ appctx->st2 = STAT_ST_FIN;
+ return 1;
+ }
+ return 0;
+}
+
+static int stats_map_lookup(struct stream_interface *si)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+ struct sample sample;
+ struct pattern *pat;
+ int match_method;
+
+ switch (appctx->st2) {
+ case STAT_ST_INIT:
+ /* Init to the first entry. The list cannot be changed. */
+ appctx->ctx.map.expr = LIST_ELEM(&appctx->ctx.map.ref->pat, struct pattern_expr *, list);
+ appctx->ctx.map.expr = pat_expr_get_next(appctx->ctx.map.expr, &appctx->ctx.map.ref->pat);
+ appctx->st2 = STAT_ST_LIST;
+ /* fall through */
+
+ case STAT_ST_LIST:
+ /* for each lookup type */
+ while (appctx->ctx.map.expr) {
+ /* initialise chunk to build new message */
+ chunk_reset(&trash);
+
+ /* execute pattern matching */
+ sample.data.type = SMP_T_STR;
+ sample.flags |= SMP_F_CONST;
+ sample.data.u.str.len = appctx->ctx.map.chunk.len;
+ sample.data.u.str.str = appctx->ctx.map.chunk.str;
+ if (appctx->ctx.map.expr->pat_head->match &&
+ sample_convert(&sample, appctx->ctx.map.expr->pat_head->expect_type))
+ pat = appctx->ctx.map.expr->pat_head->match(&sample, appctx->ctx.map.expr, 1);
+ else
+ pat = NULL;
+
+ /* build return message: set type of match */
+ for (match_method=0; match_method<PAT_MATCH_NUM; match_method++)
+ if (appctx->ctx.map.expr->pat_head->match == pat_match_fcts[match_method])
+ break;
+ if (match_method >= PAT_MATCH_NUM)
+ chunk_appendf(&trash, "type=unknown(%p)", appctx->ctx.map.expr->pat_head->match);
+ else
+ chunk_appendf(&trash, "type=%s", pat_match_names[match_method]);
+
+ /* case sensitive */
+ if (appctx->ctx.map.expr->mflags & PAT_MF_IGNORE_CASE)
+ chunk_appendf(&trash, ", case=insensitive");
+ else
+ chunk_appendf(&trash, ", case=sensitive");
+
+ /* Display no match, and set default value */
+ if (!pat) {
+ if (appctx->ctx.map.display_flags == PAT_REF_MAP)
+ chunk_appendf(&trash, ", found=no");
+ else
+ chunk_appendf(&trash, ", match=no");
+ }
+
+ /* Display match and match info */
+ else {
+ /* display match */
+ if (appctx->ctx.map.display_flags == PAT_REF_MAP)
+ chunk_appendf(&trash, ", found=yes");
+ else
+ chunk_appendf(&trash, ", match=yes");
+
+ /* display index mode */
+ if (pat->sflags & PAT_SF_TREE)
+ chunk_appendf(&trash, ", idx=tree");
+ else
+ chunk_appendf(&trash, ", idx=list");
+
+ /* display pattern */
+ if (appctx->ctx.map.display_flags == PAT_REF_MAP) {
+ if (pat->ref && pat->ref->pattern)
+ chunk_appendf(&trash, ", key=\"%s\"", pat->ref->pattern);
+ else
+ chunk_appendf(&trash, ", key=unknown");
+ }
+ else {
+ if (pat->ref && pat->ref->pattern)
+ chunk_appendf(&trash, ", pattern=\"%s\"", pat->ref->pattern);
+ else
+ chunk_appendf(&trash, ", pattern=unknown");
+ }
+
+ /* display return value */
+ if (appctx->ctx.map.display_flags == PAT_REF_MAP) {
+ if (pat->data && pat->ref && pat->ref->sample)
+ chunk_appendf(&trash, ", value=\"%s\", type=\"%s\"", pat->ref->sample,
+ smp_to_type[pat->data->type]);
+ else
+ chunk_appendf(&trash, ", value=none");
+ }
+ }
+
+ chunk_appendf(&trash, "\n");
+
+ /* display response */
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ /* output buffer full; let's try again later from the same point */
+ si_applet_cant_put(si);
+ return 0;
+ }
+
+ /* get next entry */
+ appctx->ctx.map.expr = pat_expr_get_next(appctx->ctx.map.expr,
+ &appctx->ctx.map.ref->pat);
+ }
+
+ appctx->st2 = STAT_ST_FIN;
+ /* fall through */
+
+ default:
+ appctx->st2 = STAT_ST_FIN;
+ free(appctx->ctx.map.chunk.str);
+ return 1;
+ }
+}
+
+static int stats_pat_list(struct stream_interface *si)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+
+ switch (appctx->st2) {
+
+ case STAT_ST_INIT:
+ /* Init to the first entry. The list cannot be changed */
+ appctx->ctx.map.elt = LIST_NEXT(&appctx->ctx.map.ref->head,
+ struct pat_ref_elt *, list);
+ if (&appctx->ctx.map.elt->list == &appctx->ctx.map.ref->head)
+ appctx->ctx.map.elt = NULL;
+ appctx->st2 = STAT_ST_LIST;
+ /* fall through */
+
+ case STAT_ST_LIST:
+ while (appctx->ctx.map.elt) {
+ chunk_reset(&trash);
+
+ /* build messages */
+ if (appctx->ctx.map.elt->sample)
+ chunk_appendf(&trash, "%p %s %s\n",
+ appctx->ctx.map.elt, appctx->ctx.map.elt->pattern,
+ appctx->ctx.map.elt->sample);
+ else
+ chunk_appendf(&trash, "%p %s\n",
+ appctx->ctx.map.elt, appctx->ctx.map.elt->pattern);
+
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ /* output buffer full; let's try again later from the same point */
+ si_applet_cant_put(si);
+ return 0;
+ }
+
+ /* get next list entry and check the end of the list */
+ appctx->ctx.map.elt = LIST_NEXT(&appctx->ctx.map.elt->list,
+ struct pat_ref_elt *, list);
+ if (&appctx->ctx.map.elt->list == &appctx->ctx.map.ref->head)
+ break;
+ }
+
+ appctx->st2 = STAT_ST_FIN;
+ /* fall through */
+
+ default:
+ appctx->st2 = STAT_ST_FIN;
+ return 1;
+ }
+}
+
+/* This function dumps all streams' states onto the stream interface's
+ * read buffer. It returns 0 if the output buffer is full and it needs
+ * to be called again, otherwise non-zero. It is designed to be called
+ * from the CLI's I/O handler.
+ */
+static int stats_dump_sess_to_buffer(struct stream_interface *si)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+ struct connection *conn;
+
+ if (unlikely(si_ic(si)->flags & (CF_WRITE_ERROR|CF_SHUTW))) {
+ /* If we're forced to shut down, we might have to remove our
+ * reference to the last stream being dumped.
+ */
+ if (appctx->st2 == STAT_ST_LIST) {
+ if (!LIST_ISEMPTY(&appctx->ctx.sess.bref.users)) {
+ LIST_DEL(&appctx->ctx.sess.bref.users);
+ LIST_INIT(&appctx->ctx.sess.bref.users);
+ }
+ }
+ return 1;
+ }
+
+ chunk_reset(&trash);
+
+ switch (appctx->st2) {
+ case STAT_ST_INIT:
+ /* the function had not been called yet, let's prepare the
+ * buffer for a response. We initialize the current stream
+ * pointer to the first in the global list. When a target
+ * stream is being destroyed, it is responsible for updating
+ * this pointer. We know we have reached the end when this
+ * pointer points back to the head of the streams list.
+ */
+ LIST_INIT(&appctx->ctx.sess.bref.users);
+ appctx->ctx.sess.bref.ref = streams.n;
+ appctx->st2 = STAT_ST_LIST;
+ /* fall through */
+
+ case STAT_ST_LIST:
+ /* first, let's detach the back-ref from a possible previous stream */
+ if (!LIST_ISEMPTY(&appctx->ctx.sess.bref.users)) {
+ LIST_DEL(&appctx->ctx.sess.bref.users);
+ LIST_INIT(&appctx->ctx.sess.bref.users);
+ }
+
+ /* and start from where we stopped */
+ while (appctx->ctx.sess.bref.ref != &streams) {
+ char pn[INET6_ADDRSTRLEN];
+ struct stream *curr_sess;
+
+ curr_sess = LIST_ELEM(appctx->ctx.sess.bref.ref, struct stream *, list);
+
+ if (appctx->ctx.sess.target) {
+ if (appctx->ctx.sess.target != (void *)-1 && appctx->ctx.sess.target != curr_sess)
+ goto next_sess;
+
+ LIST_ADDQ(&curr_sess->back_refs, &appctx->ctx.sess.bref.users);
+ /* call the proper dump() function and return if we're missing space */
+ if (!stats_dump_full_sess_to_buffer(si, curr_sess))
+ return 0;
+
+ /* stream dump complete */
+ LIST_DEL(&appctx->ctx.sess.bref.users);
+ LIST_INIT(&appctx->ctx.sess.bref.users);
+ if (appctx->ctx.sess.target != (void *)-1) {
+ appctx->ctx.sess.target = NULL;
+ break;
+ }
+ else
+ goto next_sess;
+ }
+
+ chunk_appendf(&trash,
+ "%p: proto=%s",
+ curr_sess,
+ strm_li(curr_sess) ? strm_li(curr_sess)->proto->name : "?");
+
+ conn = objt_conn(strm_orig(curr_sess));
+ switch (conn ? addr_to_str(&conn->addr.from, pn, sizeof(pn)) : AF_UNSPEC) {
+ case AF_INET:
+ case AF_INET6:
+ chunk_appendf(&trash,
+ " src=%s:%d fe=%s be=%s srv=%s",
+ pn,
+ get_host_port(&conn->addr.from),
+ strm_fe(curr_sess)->id,
+ (curr_sess->be->cap & PR_CAP_BE) ? curr_sess->be->id : "<NONE>",
+ objt_server(curr_sess->target) ? objt_server(curr_sess->target)->id : "<none>"
+ );
+ break;
+ case AF_UNIX:
+ chunk_appendf(&trash,
+ " src=unix:%d fe=%s be=%s srv=%s",
+ strm_li(curr_sess)->luid,
+ strm_fe(curr_sess)->id,
+ (curr_sess->be->cap & PR_CAP_BE) ? curr_sess->be->id : "<NONE>",
+ objt_server(curr_sess->target) ? objt_server(curr_sess->target)->id : "<none>"
+ );
+ break;
+ }
+
+ chunk_appendf(&trash,
+ " ts=%02x age=%s calls=%d",
+ curr_sess->task->state,
+ human_time(now.tv_sec - curr_sess->logs.tv_accept.tv_sec, 1),
+ curr_sess->task->calls);
+
+ chunk_appendf(&trash,
+ " rq[f=%06xh,i=%d,an=%02xh,rx=%s",
+ curr_sess->req.flags,
+ curr_sess->req.buf->i,
+ curr_sess->req.analysers,
+ curr_sess->req.rex ?
+ human_time(TICKS_TO_MS(curr_sess->req.rex - now_ms),
+ TICKS_TO_MS(1000)) : "");
+
+ chunk_appendf(&trash,
+ ",wx=%s",
+ curr_sess->req.wex ?
+ human_time(TICKS_TO_MS(curr_sess->req.wex - now_ms),
+ TICKS_TO_MS(1000)) : "");
+
+ chunk_appendf(&trash,
+ ",ax=%s]",
+ curr_sess->req.analyse_exp ?
+ human_time(TICKS_TO_MS(curr_sess->req.analyse_exp - now_ms),
+ TICKS_TO_MS(1000)) : "");
+
+ chunk_appendf(&trash,
+ " rp[f=%06xh,i=%d,an=%02xh,rx=%s",
+ curr_sess->res.flags,
+ curr_sess->res.buf->i,
+ curr_sess->res.analysers,
+ curr_sess->res.rex ?
+ human_time(TICKS_TO_MS(curr_sess->res.rex - now_ms),
+ TICKS_TO_MS(1000)) : "");
+
+ chunk_appendf(&trash,
+ ",wx=%s",
+ curr_sess->res.wex ?
+ human_time(TICKS_TO_MS(curr_sess->res.wex - now_ms),
+ TICKS_TO_MS(1000)) : "");
+
+ chunk_appendf(&trash,
+ ",ax=%s]",
+ curr_sess->res.analyse_exp ?
+ human_time(TICKS_TO_MS(curr_sess->res.analyse_exp - now_ms),
+ TICKS_TO_MS(1000)) : "");
+
+ conn = objt_conn(curr_sess->si[0].end);
+ chunk_appendf(&trash,
+ " s0=[%d,%1xh,fd=%d,ex=%s]",
+ curr_sess->si[0].state,
+ curr_sess->si[0].flags,
+ conn ? conn->t.sock.fd : -1,
+ curr_sess->si[0].exp ?
+ human_time(TICKS_TO_MS(curr_sess->si[0].exp - now_ms),
+ TICKS_TO_MS(1000)) : "");
+
+ conn = objt_conn(curr_sess->si[1].end);
+ chunk_appendf(&trash,
+ " s1=[%d,%1xh,fd=%d,ex=%s]",
+ curr_sess->si[1].state,
+ curr_sess->si[1].flags,
+ conn ? conn->t.sock.fd : -1,
+ curr_sess->si[1].exp ?
+ human_time(TICKS_TO_MS(curr_sess->si[1].exp - now_ms),
+ TICKS_TO_MS(1000)) : "");
+
+ chunk_appendf(&trash,
+ " exp=%s",
+ curr_sess->task->expire ?
+ human_time(TICKS_TO_MS(curr_sess->task->expire - now_ms),
+ TICKS_TO_MS(1000)) : "");
+ if (task_in_rq(curr_sess->task))
+ chunk_appendf(&trash, " run(nice=%d)", curr_sess->task->nice);
+
+ chunk_appendf(&trash, "\n");
+
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ /* let's try again later from this stream. We add ourselves into
+ * this stream's users so that it can remove us upon termination.
+ */
+ si_applet_cant_put(si);
+ LIST_ADDQ(&curr_sess->back_refs, &appctx->ctx.sess.bref.users);
+ return 0;
+ }
+
+ next_sess:
+ appctx->ctx.sess.bref.ref = curr_sess->list.n;
+ }
+
+ if (appctx->ctx.sess.target && appctx->ctx.sess.target != (void *)-1) {
+ /* specified stream not found */
+ if (appctx->ctx.sess.section > 0)
+ chunk_appendf(&trash, " *** session terminated while we were watching it ***\n");
+ else
+ chunk_appendf(&trash, "Session not found.\n");
+
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+
+ appctx->ctx.sess.target = NULL;
+ appctx->ctx.sess.uid = 0;
+ return 1;
+ }
+
+ appctx->st2 = STAT_ST_FIN;
+ /* fall through */
+
+ default:
+ appctx->st2 = STAT_ST_FIN;
+ return 1;
+ }
+}
+
+/* This is called when the stream interface is closed. For instance, upon an
+ * external abort, we won't call the i/o handler anymore so we may need to
+ * remove back references to the stream currently being dumped.
+ */
+static void cli_release_handler(struct appctx *appctx)
+{
+ if (appctx->st0 == STAT_CLI_O_SESS && appctx->st2 == STAT_ST_LIST) {
+ if (!LIST_ISEMPTY(&appctx->ctx.sess.bref.users))
+ LIST_DEL(&appctx->ctx.sess.bref.users);
+ }
+ else if (appctx->st0 == STAT_CLI_PRINT_FREE) {
+ free(appctx->ctx.cli.err);
+ appctx->ctx.cli.err = NULL;
+ }
+ else if (appctx->st0 == STAT_CLI_O_MLOOK) {
+ free(appctx->ctx.map.chunk.str);
+ appctx->ctx.map.chunk.str = NULL;
+ }
+}
+
+/* This function is used to either dump tables states (when action is set
+ * to STAT_CLI_O_TAB) or clear tables (when action is STAT_CLI_O_CLR).
+ * It returns 0 if the output buffer is full and it needs to be called
+ * again, otherwise non-zero.
+ */
+static int stats_table_request(struct stream_interface *si, int action)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+ struct stream *s = si_strm(si);
+ struct ebmb_node *eb;
+ int dt;
+ int skip_entry;
+ int show = action == STAT_CLI_O_TAB;
+
+ /*
+ * We have 4 possible states in appctx->st2 :
+ * - STAT_ST_INIT : the first call
+ * - STAT_ST_INFO : the proxy pointer points to the next table to
+ * dump, the entry pointer is NULL ;
+ * - STAT_ST_LIST : the proxy pointer points to the current table
+ * and the entry pointer points to the next entry to be dumped,
+ * and the refcount on the next entry is held ;
+ * - STAT_ST_END : nothing left to dump, the buffer may contain some
+ * data though.
+ */
+
+ if (unlikely(si_ic(si)->flags & (CF_WRITE_ERROR|CF_SHUTW))) {
+ /* in case of abort, remove any refcount we might have set on an entry */
+ if (appctx->st2 == STAT_ST_LIST) {
+ appctx->ctx.table.entry->ref_cnt--;
+ stksess_kill_if_expired(&appctx->ctx.table.proxy->table, appctx->ctx.table.entry);
+ }
+ return 1;
+ }
+
+ chunk_reset(&trash);
+
+ while (appctx->st2 != STAT_ST_FIN) {
+ switch (appctx->st2) {
+ case STAT_ST_INIT:
+ appctx->ctx.table.proxy = appctx->ctx.table.target;
+ if (!appctx->ctx.table.proxy)
+ appctx->ctx.table.proxy = proxy;
+
+ appctx->ctx.table.entry = NULL;
+ appctx->st2 = STAT_ST_INFO;
+ break;
+
+ case STAT_ST_INFO:
+ if (!appctx->ctx.table.proxy ||
+ (appctx->ctx.table.target &&
+ appctx->ctx.table.proxy != appctx->ctx.table.target)) {
+ appctx->st2 = STAT_ST_END;
+ break;
+ }
+
+ if (appctx->ctx.table.proxy->table.size) {
+ if (show && !stats_dump_table_head_to_buffer(&trash, si, appctx->ctx.table.proxy,
+ appctx->ctx.table.target))
+ return 0;
+
+ if (appctx->ctx.table.target &&
+ strm_li(s)->bind_conf->level >= ACCESS_LVL_OPER) {
+ /* dump entries only if table explicitly requested */
+ eb = ebmb_first(&appctx->ctx.table.proxy->table.keys);
+ if (eb) {
+ appctx->ctx.table.entry = ebmb_entry(eb, struct stksess, key);
+ appctx->ctx.table.entry->ref_cnt++;
+ appctx->st2 = STAT_ST_LIST;
+ break;
+ }
+ }
+ }
+ appctx->ctx.table.proxy = appctx->ctx.table.proxy->next;
+ break;
+
+ case STAT_ST_LIST:
+ skip_entry = 0;
+
+ if (appctx->ctx.table.data_type >= 0) {
+ /* we're filtering on some data contents */
+ void *ptr;
+ long long data;
+
+ dt = appctx->ctx.table.data_type;
+ ptr = stktable_data_ptr(&appctx->ctx.table.proxy->table,
+ appctx->ctx.table.entry,
+ dt);
+
+ data = 0;
+ switch (stktable_data_types[dt].std_type) {
+ case STD_T_SINT:
+ data = stktable_data_cast(ptr, std_t_sint);
+ break;
+ case STD_T_UINT:
+ data = stktable_data_cast(ptr, std_t_uint);
+ break;
+ case STD_T_ULL:
+ data = stktable_data_cast(ptr, std_t_ull);
+ break;
+ case STD_T_FRQP:
+ data = read_freq_ctr_period(&stktable_data_cast(ptr, std_t_frqp),
+ appctx->ctx.table.proxy->table.data_arg[dt].u);
+ break;
+ }
+
+ /* skip the entry if the data does not match the test and the value */
+ if ((data < appctx->ctx.table.value &&
+ (appctx->ctx.table.data_op == STD_OP_EQ ||
+ appctx->ctx.table.data_op == STD_OP_GT ||
+ appctx->ctx.table.data_op == STD_OP_GE)) ||
+ (data == appctx->ctx.table.value &&
+ (appctx->ctx.table.data_op == STD_OP_NE ||
+ appctx->ctx.table.data_op == STD_OP_GT ||
+ appctx->ctx.table.data_op == STD_OP_LT)) ||
+ (data > appctx->ctx.table.value &&
+ (appctx->ctx.table.data_op == STD_OP_EQ ||
+ appctx->ctx.table.data_op == STD_OP_LT ||
+ appctx->ctx.table.data_op == STD_OP_LE)))
+ skip_entry = 1;
+ }
+
+ if (show && !skip_entry &&
+ !stats_dump_table_entry_to_buffer(&trash, si, appctx->ctx.table.proxy,
+ appctx->ctx.table.entry))
+ return 0;
+
+ appctx->ctx.table.entry->ref_cnt--;
+
+ eb = ebmb_next(&appctx->ctx.table.entry->key);
+ if (eb) {
+ struct stksess *old = appctx->ctx.table.entry;
+ appctx->ctx.table.entry = ebmb_entry(eb, struct stksess, key);
+ if (show)
+ stksess_kill_if_expired(&appctx->ctx.table.proxy->table, old);
+ else if (!skip_entry && !appctx->ctx.table.entry->ref_cnt)
+ stksess_kill(&appctx->ctx.table.proxy->table, old);
+ appctx->ctx.table.entry->ref_cnt++;
+ break;
+ }
+
+
+ if (show)
+ stksess_kill_if_expired(&appctx->ctx.table.proxy->table, appctx->ctx.table.entry);
+ else if (!skip_entry && !appctx->ctx.table.entry->ref_cnt)
+ stksess_kill(&appctx->ctx.table.proxy->table, appctx->ctx.table.entry);
+
+ appctx->ctx.table.proxy = appctx->ctx.table.proxy->next;
+ appctx->st2 = STAT_ST_INFO;
+ break;
+
+ case STAT_ST_END:
+ appctx->st2 = STAT_ST_FIN;
+ break;
+ }
+ }
+ return 1;
+}
+
+/* print a line of text buffer (limited to 70 bytes) to <out>. The format is :
+ * <2 spaces> <offset=5 digits> <space or plus> <space> <70 chars max> <\n>
+ * which is at most 80 chars per line. Non-printable chars \t, \n, \r and \e
+ * are encoded in C format. Other non-printable chars are encoded "\xHH".
+ * Original lines are respected within the limit of 70 output chars. Lines
+ * that are continuations of a previous truncated line begin with "+" instead
+ * of " " after the offset. The new pointer is returned.
+ */
+static int dump_text_line(struct chunk *out, const char *buf, int bsize, int len,
+ int *line, int ptr)
+{
+ int end;
+ unsigned char c;
+
+ end = out->len + 80;
+ if (end > out->size)
+ return ptr;
+
+ chunk_appendf(out, " %05d%c ", ptr, (ptr == *line) ? ' ' : '+');
+
+ while (ptr < len && ptr < bsize) {
+ c = buf[ptr];
+ if (isprint(c) && isascii(c) && c != '\\') {
+ if (out->len > end - 2)
+ break;
+ out->str[out->len++] = c;
+ } else if (c == '\t' || c == '\n' || c == '\r' || c == '\e' || c == '\\') {
+ if (out->len > end - 3)
+ break;
+ out->str[out->len++] = '\\';
+ switch (c) {
+ case '\t': c = 't'; break;
+ case '\n': c = 'n'; break;
+ case '\r': c = 'r'; break;
+ case '\e': c = 'e'; break;
+ case '\\': c = '\\'; break;
+ }
+ out->str[out->len++] = c;
+ } else {
+ if (out->len > end - 5)
+ break;
+ out->str[out->len++] = '\\';
+ out->str[out->len++] = 'x';
+ out->str[out->len++] = hextab[(c >> 4) & 0xF];
+ out->str[out->len++] = hextab[c & 0xF];
+ }
+ if (buf[ptr++] == '\n') {
+ /* we had a line break, let's return now */
+ out->str[out->len++] = '\n';
+ *line = ptr;
+ return ptr;
+ }
+ }
+ /* we have an incomplete line, we return it as-is */
+ out->str[out->len++] = '\n';
+ return ptr;
+}
+
+/* This function dumps counters from all resolvers sections and their
+ * associated name servers. It returns 0 if the output buffer is full
+ * and it needs to be called again, otherwise non-zero.
+ */
+static int stats_dump_resolvers_to_buffer(struct stream_interface *si)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+ struct dns_resolvers *presolvers;
+ struct dns_nameserver *pnameserver;
+
+ chunk_reset(&trash);
+
+ switch (appctx->st2) {
+ case STAT_ST_INIT:
+ appctx->st2 = STAT_ST_LIST; /* let's start producing data */
+ /* fall through */
+
+ case STAT_ST_LIST:
+ if (LIST_ISEMPTY(&dns_resolvers)) {
+ chunk_appendf(&trash, "No resolvers found\n");
+ }
+ else {
+ list_for_each_entry(presolvers, &dns_resolvers, list) {
+ if (appctx->ctx.resolvers.ptr != NULL && appctx->ctx.resolvers.ptr != presolvers)
+ continue;
+
+ chunk_appendf(&trash, "Resolvers section %s\n", presolvers->id);
+ list_for_each_entry(pnameserver, &presolvers->nameserver_list, list) {
+ chunk_appendf(&trash, " nameserver %s:\n", pnameserver->id);
+ chunk_appendf(&trash, " sent: %ld\n", pnameserver->counters.sent);
+ chunk_appendf(&trash, " valid: %ld\n", pnameserver->counters.valid);
+ chunk_appendf(&trash, " update: %ld\n", pnameserver->counters.update);
+ chunk_appendf(&trash, " cname: %ld\n", pnameserver->counters.cname);
+ chunk_appendf(&trash, " cname_error: %ld\n", pnameserver->counters.cname_error);
+ chunk_appendf(&trash, " any_err: %ld\n", pnameserver->counters.any_err);
+ chunk_appendf(&trash, " nx: %ld\n", pnameserver->counters.nx);
+ chunk_appendf(&trash, " timeout: %ld\n", pnameserver->counters.timeout);
+ chunk_appendf(&trash, " refused: %ld\n", pnameserver->counters.refused);
+ chunk_appendf(&trash, " other: %ld\n", pnameserver->counters.other);
+ chunk_appendf(&trash, " invalid: %ld\n", pnameserver->counters.invalid);
+ chunk_appendf(&trash, " too_big: %ld\n", pnameserver->counters.too_big);
+ chunk_appendf(&trash, " truncated: %ld\n", pnameserver->counters.truncated);
+ chunk_appendf(&trash, " outdated: %ld\n", pnameserver->counters.outdated);
+ }
+ }
+ }
+
+ /* display response */
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ /* output buffer full; let's try again later from the same point */
+ si->flags |= SI_FL_WAIT_ROOM;
+ return 0;
+ }
+
+ appctx->st2 = STAT_ST_FIN;
+ /* fall through */
+
+ default:
+ appctx->st2 = STAT_ST_FIN;
+ return 1;
+ }
+}
+
+/* This function dumps all captured errors onto the stream interface's
+ * read buffer. It returns 0 if the output buffer is full and it needs
+ * to be called again, otherwise non-zero.
+ */
+static int stats_dump_errors_to_buffer(struct stream_interface *si)
+{
+ struct appctx *appctx = __objt_appctx(si->end);
+ extern const char *monthname[12];
+
+ if (unlikely(si_ic(si)->flags & (CF_WRITE_ERROR|CF_SHUTW)))
+ return 1;
+
+ chunk_reset(&trash);
+
+ if (!appctx->ctx.errors.px) {
+ /* the function had not been called yet, let's prepare the
+ * buffer for a response.
+ */
+ struct tm tm;
+
+ get_localtime(date.tv_sec, &tm);
+ chunk_appendf(&trash, "Total events captured on [%02d/%s/%04d:%02d:%02d:%02d.%03d] : %u\n",
+ tm.tm_mday, monthname[tm.tm_mon], tm.tm_year+1900,
+ tm.tm_hour, tm.tm_min, tm.tm_sec, (int)(date.tv_usec/1000),
+ error_snapshot_id);
+
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ /* Socket buffer full. Let's try again later from the same point */
+ si_applet_cant_put(si);
+ return 0;
+ }
+
+ appctx->ctx.errors.px = proxy;
+ appctx->ctx.errors.buf = 0;
+ appctx->ctx.errors.bol = 0;
+ appctx->ctx.errors.ptr = -1;
+ }
+
+ /* we have two inner loops here, one for the proxy, the other one for
+ * the buffer.
+ */
+ while (appctx->ctx.errors.px) {
+ struct error_snapshot *es;
+
+ if (appctx->ctx.errors.buf == 0)
+ es = &appctx->ctx.errors.px->invalid_req;
+ else
+ es = &appctx->ctx.errors.px->invalid_rep;
+
+ if (!es->when.tv_sec)
+ goto next;
+
+ if (appctx->ctx.errors.iid >= 0 &&
+ appctx->ctx.errors.px->uuid != appctx->ctx.errors.iid &&
+ es->oe->uuid != appctx->ctx.errors.iid)
+ goto next;
+
+ if (appctx->ctx.errors.ptr < 0) {
+ /* just print headers now */
+
+ char pn[INET6_ADDRSTRLEN];
+ struct tm tm;
+ int port;
+
+ get_localtime(es->when.tv_sec, &tm);
+ chunk_appendf(&trash, " \n[%02d/%s/%04d:%02d:%02d:%02d.%03d]",
+ tm.tm_mday, monthname[tm.tm_mon], tm.tm_year+1900,
+ tm.tm_hour, tm.tm_min, tm.tm_sec, (int)(es->when.tv_usec/1000));
+
+ switch (addr_to_str(&es->src, pn, sizeof(pn))) {
+ case AF_INET:
+ case AF_INET6:
+ port = get_host_port(&es->src);
+ break;
+ default:
+ port = 0;
+ }
+
+ switch (appctx->ctx.errors.buf) {
+ case 0:
+ chunk_appendf(&trash,
+ " frontend %s (#%d): invalid request\n"
+ " backend %s (#%d)",
+ appctx->ctx.errors.px->id, appctx->ctx.errors.px->uuid,
+ (es->oe->cap & PR_CAP_BE) ? es->oe->id : "<NONE>",
+ (es->oe->cap & PR_CAP_BE) ? es->oe->uuid : -1);
+ break;
+ case 1:
+ chunk_appendf(&trash,
+ " backend %s (#%d): invalid response\n"
+ " frontend %s (#%d)",
+ appctx->ctx.errors.px->id, appctx->ctx.errors.px->uuid,
+ es->oe->id, es->oe->uuid);
+ break;
+ }
+
+ chunk_appendf(&trash,
+ ", server %s (#%d), event #%u\n"
+ " src %s:%d, session #%d, session flags 0x%08x\n"
+ " HTTP msg state %d, msg flags 0x%08x, tx flags 0x%08x\n"
+ " HTTP chunk len %lld bytes, HTTP body len %lld bytes\n"
+ " buffer flags 0x%08x, out %d bytes, total %lld bytes\n"
+ " pending %d bytes, wrapping at %d, error at position %d:\n \n",
+ es->srv ? es->srv->id : "<NONE>", es->srv ? es->srv->puid : -1,
+ es->ev_id,
+ pn, port, es->sid, es->s_flags,
+ es->state, es->m_flags, es->t_flags,
+ es->m_clen, es->m_blen,
+ es->b_flags, es->b_out, es->b_tot,
+ es->len, es->b_wrap, es->pos);
+
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ /* Socket buffer full. Let's try again later from the same point */
+ si_applet_cant_put(si);
+ return 0;
+ }
+ appctx->ctx.errors.ptr = 0;
+ appctx->ctx.errors.sid = es->sid;
+ }
+
+ if (appctx->ctx.errors.sid != es->sid) {
+ /* the snapshot changed while we were dumping it */
+ chunk_appendf(&trash,
+ " WARNING! update detected on this snapshot, dump interrupted. Please re-check!\n");
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ si_applet_cant_put(si);
+ return 0;
+ }
+ goto next;
+ }
+
+ /* OK, ptr >= 0, so we have to dump the current line */
+ while (appctx->ctx.errors.ptr < es->len && appctx->ctx.errors.ptr < sizeof(es->buf)) {
+ int newptr;
+ int newline;
+
+ newline = appctx->ctx.errors.bol;
+ newptr = dump_text_line(&trash, es->buf, sizeof(es->buf), es->len, &newline, appctx->ctx.errors.ptr);
+ if (newptr == appctx->ctx.errors.ptr)
+ return 0;
+
+ if (bi_putchk(si_ic(si), &trash) == -1) {
+ /* Socket buffer full. Let's try again later from the same point */
+ si_applet_cant_put(si);
+ return 0;
+ }
+ appctx->ctx.errors.ptr = newptr;
+ appctx->ctx.errors.bol = newline;
+ }
+ next:
+ appctx->ctx.errors.bol = 0;
+ appctx->ctx.errors.ptr = -1;
+ appctx->ctx.errors.buf++;
+ if (appctx->ctx.errors.buf > 1) {
+ appctx->ctx.errors.buf = 0;
+ appctx->ctx.errors.px = appctx->ctx.errors.px->next;
+ }
+ }
+
+ /* dump complete */
+ return 1;
+}
+
+/* parse the "level" argument on the bind lines */
+static int bind_parse_level(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ if (!*args[cur_arg + 1]) {
+ memprintf(err, "'%s' : missing level", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ if (!strcmp(args[cur_arg+1], "user"))
+ conf->level = ACCESS_LVL_USER;
+ else if (!strcmp(args[cur_arg+1], "operator"))
+ conf->level = ACCESS_LVL_OPER;
+ else if (!strcmp(args[cur_arg+1], "admin"))
+ conf->level = ACCESS_LVL_ADMIN;
+ else {
+ memprintf(err, "'%s' only supports 'user', 'operator', and 'admin' (got '%s')",
+ args[cur_arg], args[cur_arg+1]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ return 0;
+}
+
+struct applet http_stats_applet = {
+ .obj_type = OBJ_TYPE_APPLET,
+ .name = "<STATS>", /* used for logging */
+ .fct = http_stats_io_handler,
+ .release = NULL,
+};
+
+static struct applet cli_applet = {
+ .obj_type = OBJ_TYPE_APPLET,
+ .name = "<CLI>", /* used for logging */
+ .fct = cli_io_handler,
+ .release = cli_release_handler,
+};
+
+static struct cfg_kw_list cfg_kws = {ILH, {
+ { CFG_GLOBAL, "stats", stats_parse_global },
+ { 0, NULL, NULL },
+}};
+
+static struct bind_kw_list bind_kws = { "STAT", { }, {
+ { "level", bind_parse_level, 1 }, /* set the unix socket admin level */
+ { NULL, NULL, 0 },
+}};
+
+__attribute__((constructor))
+static void __dumpstats_module_init(void)
+{
+ cfg_register_keywords(&cfg_kws);
+ bind_register_keywords(&bind_kws);
+}
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * FD polling functions for Linux epoll
+ *
+ * Copyright 2000-2014 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <unistd.h>
+#include <sys/time.h>
+#include <sys/types.h>
+
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/debug.h>
+#include <common/epoll.h>
+#include <common/standard.h>
+#include <common/ticks.h>
+#include <common/time.h>
+#include <common/tools.h>
+
+#include <types/global.h>
+
+#include <proto/fd.h>
+
+
+/* private data */
+static struct epoll_event *epoll_events;
+static int epoll_fd;
+
+/* This structure may be used for any purpose. Warning: do not use it in
+ * recursive functions!
+ */
+static struct epoll_event ev;
+
+#ifndef EPOLLRDHUP
+/* EPOLLRDHUP was defined late in libc, and it appeared in kernel 2.6.17 */
+#define EPOLLRDHUP 0x2000
+#endif
+
+/*
+ * Immediately remove file descriptor from epoll set upon close.
+ * Since we forked, some fds share inodes with the other process, and epoll may
+ * send us events even though this process closed the fd (see man 7 epoll,
+ * "Questions and answers", Q 6).
+ */
+REGPRM1 static void __fd_clo(int fd)
+{
+ if (unlikely(fdtab[fd].cloned)) {
+ epoll_ctl(epoll_fd, EPOLL_CTL_DEL, fd, &ev);
+ }
+}
+
+/*
+ * Linux epoll() poller
+ */
+REGPRM2 static void _do_poll(struct poller *p, int exp)
+{
+ int status, eo, en;
+ int fd, opcode;
+ int count;
+ int updt_idx;
+ int wait_time;
+
+ /* first, scan the update list to find polling changes */
+ for (updt_idx = 0; updt_idx < fd_nbupdt; updt_idx++) {
+ fd = fd_updt[updt_idx];
+ fdtab[fd].updated = 0;
+ fdtab[fd].new = 0;
+
+ if (!fdtab[fd].owner)
+ continue;
+
+ eo = fdtab[fd].state;
+ en = fd_compute_new_polled_status(eo);
+
+ if ((eo ^ en) & FD_EV_POLLED_RW) {
+ /* poll status changed */
+ fdtab[fd].state = en;
+
+ if ((en & FD_EV_POLLED_RW) == 0) {
+ /* fd removed from poll list */
+ opcode = EPOLL_CTL_DEL;
+ }
+ else if ((eo & FD_EV_POLLED_RW) == 0) {
+ /* new fd in the poll list */
+ opcode = EPOLL_CTL_ADD;
+ }
+ else {
+ /* fd status changed */
+ opcode = EPOLL_CTL_MOD;
+ }
+
+ /* construct the epoll events based on new state */
+ ev.events = 0;
+ if (en & FD_EV_POLLED_R)
+ ev.events |= EPOLLIN | EPOLLRDHUP;
+
+ if (en & FD_EV_POLLED_W)
+ ev.events |= EPOLLOUT;
+
+ ev.data.fd = fd;
+ epoll_ctl(epoll_fd, opcode, fd, &ev);
+ }
+ }
+ fd_nbupdt = 0;
+
+ /* compute the epoll_wait() timeout */
+ if (!exp)
+ wait_time = MAX_DELAY_MS;
+ else if (tick_is_expired(exp, now_ms))
+ wait_time = 0;
+ else {
+ wait_time = TICKS_TO_MS(tick_remain(now_ms, exp)) + 1;
+ if (wait_time > MAX_DELAY_MS)
+ wait_time = MAX_DELAY_MS;
+ }
+
+ /* now let's wait for polled events */
+
+ gettimeofday(&before_poll, NULL);
+ status = epoll_wait(epoll_fd, epoll_events, global.tune.maxpollevents, wait_time);
+ tv_update_date(wait_time, status);
+ measure_idle();
+
+ /* process polled events */
+
+ for (count = 0; count < status; count++) {
+ unsigned int n;
+ unsigned int e = epoll_events[count].events;
+ fd = epoll_events[count].data.fd;
+
+ if (!fdtab[fd].owner)
+ continue;
+
+ /* it looks complicated but gcc can optimize it away when constants
+ * have same values... In fact it depends on gcc :-(
+ */
+ fdtab[fd].ev &= FD_POLL_STICKY;
+ if (EPOLLIN == FD_POLL_IN && EPOLLOUT == FD_POLL_OUT &&
+ EPOLLPRI == FD_POLL_PRI && EPOLLERR == FD_POLL_ERR &&
+ EPOLLHUP == FD_POLL_HUP) {
+ n = e & (EPOLLIN|EPOLLOUT|EPOLLPRI|EPOLLERR|EPOLLHUP);
+ }
+ else {
+ n = ((e & EPOLLIN ) ? FD_POLL_IN : 0) |
+ ((e & EPOLLPRI) ? FD_POLL_PRI : 0) |
+ ((e & EPOLLOUT) ? FD_POLL_OUT : 0) |
+ ((e & EPOLLERR) ? FD_POLL_ERR : 0) |
+ ((e & EPOLLHUP) ? FD_POLL_HUP : 0);
+ }
+
+ /* always remap RDHUP to HUP as they're used similarly */
+ if (e & EPOLLRDHUP)
+ n |= FD_POLL_HUP;
+
+ fdtab[fd].ev |= n;
+ if (n & (FD_POLL_IN | FD_POLL_HUP | FD_POLL_ERR))
+ fd_may_recv(fd);
+
+ if (n & (FD_POLL_OUT | FD_POLL_ERR))
+ fd_may_send(fd);
+ }
+ /* the caller will take care of cached events */
+}
+
+/*
+ * Initialization of the epoll() poller.
+ * Returns 0 in case of failure, non-zero in case of success. If it fails, it
+ * disables the poller by setting its pref to 0.
+ */
+REGPRM1 static int _do_init(struct poller *p)
+{
+ p->private = NULL;
+
+ epoll_fd = epoll_create(global.maxsock + 1);
+ if (epoll_fd < 0)
+ goto fail_fd;
+
+ epoll_events = (struct epoll_event*)
+ calloc(1, sizeof(struct epoll_event) * global.tune.maxpollevents);
+
+ if (epoll_events == NULL)
+ goto fail_ee;
+
+ return 1;
+
+ fail_ee:
+ close(epoll_fd);
+ epoll_fd = -1;
+ fail_fd:
+ p->pref = 0;
+ return 0;
+}
+
+/*
+ * Termination of the epoll() poller.
+ * Memory is released and the poller is marked as unselectable.
+ */
+REGPRM1 static void _do_term(struct poller *p)
+{
+ free(epoll_events);
+
+ if (epoll_fd >= 0) {
+ close(epoll_fd);
+ epoll_fd = -1;
+ }
+
+ epoll_events = NULL;
+ p->private = NULL;
+ p->pref = 0;
+}
+
+/*
+ * Check that the poller works.
+ * Returns 1 if OK, otherwise 0.
+ */
+REGPRM1 static int _do_test(struct poller *p)
+{
+ int fd;
+
+ fd = epoll_create(global.maxsock + 1);
+ if (fd < 0)
+ return 0;
+ close(fd);
+ return 1;
+}
+
+/*
+ * Recreate the epoll file descriptor after a fork(). Returns 1 if OK,
+ * otherwise 0. It will ensure that all processes will not share their
+ * epoll_fd. Some side effects were encountered because of this, such
+ * as epoll_wait() returning an FD which was previously deleted.
+ */
+REGPRM1 static int _do_fork(struct poller *p)
+{
+ if (epoll_fd >= 0)
+ close(epoll_fd);
+ epoll_fd = epoll_create(global.maxsock + 1);
+ if (epoll_fd < 0)
+ return 0;
+ return 1;
+}
+
+/*
+ * It is a constructor, which means that it will automatically be called before
+ * main(). This is GCC-specific but it works at least since 2.95.
+ * Special care must be taken so that it does not need any uninitialized data.
+ */
+__attribute__((constructor))
+static void _do_register(void)
+{
+ struct poller *p;
+
+ if (nbpollers >= MAX_POLLERS)
+ return;
+
+ epoll_fd = -1;
+ p = &pollers[nbpollers++];
+
+ p->name = "epoll";
+ p->pref = 300;
+ p->private = NULL;
+
+ p->clo = __fd_clo;
+ p->test = _do_test;
+ p->init = _do_init;
+ p->term = _do_term;
+ p->poll = _do_poll;
+ p->fork = _do_fork;
+}
+
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * FD polling functions for FreeBSD kqueue()
+ *
+ * Copyright 2000-2014 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <unistd.h>
+#include <sys/time.h>
+#include <sys/types.h>
+
+#include <sys/event.h>
+#include <sys/time.h>
+
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/ticks.h>
+#include <common/time.h>
+#include <common/tools.h>
+
+#include <types/global.h>
+
+#include <proto/fd.h>
+
+
+/* private data */
+static int kqueue_fd;
+static struct kevent *kev = NULL;
+
+/*
+ * kqueue() poller
+ */
+REGPRM2 static void _do_poll(struct poller *p, int exp)
+{
+ int status;
+ int count, fd, delta_ms;
+ struct timespec timeout;
+ int updt_idx, en, eo;
+ int changes = 0;
+
+ /* first, scan the update list to find changes */
+ for (updt_idx = 0; updt_idx < fd_nbupdt; updt_idx++) {
+ fd = fd_updt[updt_idx];
+ fdtab[fd].updated = 0;
+ fdtab[fd].new = 0;
+
+ if (!fdtab[fd].owner)
+ continue;
+
+ eo = fdtab[fd].state;
+ en = fd_compute_new_polled_status(eo);
+
+ if ((eo ^ en) & FD_EV_POLLED_RW) {
+ /* poll status changed */
+ fdtab[fd].state = en;
+
+ if ((eo ^ en) & FD_EV_POLLED_R) {
+ /* read poll status changed */
+ if (en & FD_EV_POLLED_R) {
+ EV_SET(&kev[changes], fd, EVFILT_READ, EV_ADD, 0, 0, NULL);
+ changes++;
+ }
+ else {
+ EV_SET(&kev[changes], fd, EVFILT_READ, EV_DELETE, 0, 0, NULL);
+ changes++;
+ }
+ }
+
+ if ((eo ^ en) & FD_EV_POLLED_W) {
+ /* write poll status changed */
+ if (en & FD_EV_POLLED_W) {
+ EV_SET(&kev[changes], fd, EVFILT_WRITE, EV_ADD, 0, 0, NULL);
+ changes++;
+ }
+ else {
+ EV_SET(&kev[changes], fd, EVFILT_WRITE, EV_DELETE, 0, 0, NULL);
+ changes++;
+ }
+ }
+ }
+ }
+ if (changes)
+ kevent(kqueue_fd, kev, changes, NULL, 0, NULL);
+ fd_nbupdt = 0;
+
+ delta_ms = 0;
+ timeout.tv_sec = 0;
+ timeout.tv_nsec = 0;
+
+ if (!exp) {
+ delta_ms = MAX_DELAY_MS;
+ timeout.tv_sec = (MAX_DELAY_MS / 1000);
+ timeout.tv_nsec = (MAX_DELAY_MS % 1000) * 1000000;
+ }
+ else if (!tick_is_expired(exp, now_ms)) {
+ delta_ms = TICKS_TO_MS(tick_remain(now_ms, exp)) + 1;
+ if (delta_ms > MAX_DELAY_MS)
+ delta_ms = MAX_DELAY_MS;
+ timeout.tv_sec = (delta_ms / 1000);
+ timeout.tv_nsec = (delta_ms % 1000) * 1000000;
+ }
+
+ fd = MIN(maxfd, global.tune.maxpollevents);
+ gettimeofday(&before_poll, NULL);
+ status = kevent(kqueue_fd, // int kq
+ NULL, // const struct kevent *changelist
+ 0, // int nchanges
+ kev, // struct kevent *eventlist
+ fd, // int nevents
+ &timeout); // const struct timespec *timeout
+ tv_update_date(delta_ms, status);
+ measure_idle();
+
+ for (count = 0; count < status; count++) {
+ fd = kev[count].ident;
+
+ if (!fdtab[fd].owner)
+ continue;
+
+ fdtab[fd].ev &= FD_POLL_STICKY;
+
+ if (kev[count].filter == EVFILT_READ) {
+ if ((fdtab[fd].state & FD_EV_STATUS_R))
+ fdtab[fd].ev |= FD_POLL_IN;
+ }
+ else if (kev[count].filter == EVFILT_WRITE) {
+ if ((fdtab[fd].state & FD_EV_STATUS_W))
+ fdtab[fd].ev |= FD_POLL_OUT;
+ }
+
+ if (fdtab[fd].ev & (FD_POLL_IN | FD_POLL_HUP | FD_POLL_ERR))
+ fd_may_recv(fd);
+
+ if (fdtab[fd].ev & (FD_POLL_OUT | FD_POLL_ERR))
+ fd_may_send(fd);
+ }
+}
+
+/*
+ * Initialization of the kqueue() poller.
+ * Returns 0 in case of failure, non-zero in case of success. If it fails, it
+ * disables the poller by setting its pref to 0.
+ */
+REGPRM1 static int _do_init(struct poller *p)
+{
+ p->private = NULL;
+
+ kqueue_fd = kqueue();
+ if (kqueue_fd < 0)
+ goto fail_fd;
+
+ /* we can have up to two events per fd */
+ kev = (struct kevent*)calloc(1, sizeof(struct kevent) * 2 * global.maxsock);
+ if (kev == NULL)
+ goto fail_kev;
+
+ return 1;
+
+ fail_kev:
+ close(kqueue_fd);
+ kqueue_fd = -1;
+ fail_fd:
+ p->pref = 0;
+ return 0;
+}
+
+/*
+ * Termination of the kqueue() poller.
+ * Memory is released and the poller is marked as unselectable.
+ */
+REGPRM1 static void _do_term(struct poller *p)
+{
+ free(kev);
+
+ if (kqueue_fd >= 0) {
+ close(kqueue_fd);
+ kqueue_fd = -1;
+ }
+
+ p->private = NULL;
+ p->pref = 0;
+}
+
+/*
+ * Check that the poller works.
+ * Returns 1 if OK, otherwise 0.
+ */
+REGPRM1 static int _do_test(struct poller *p)
+{
+ int fd;
+
+ fd = kqueue();
+ if (fd < 0)
+ return 0;
+ close(fd);
+ return 1;
+}
+
+/*
+ * Recreate the kqueue file descriptor after a fork(). Returns 1 if OK,
+ * otherwise 0. Note that some pollers need to be reopened after a fork()
+ * (such as kqueue), and some others may fail to do so in a chroot.
+ */
+REGPRM1 static int _do_fork(struct poller *p)
+{
+ if (kqueue_fd >= 0)
+ close(kqueue_fd);
+ kqueue_fd = kqueue();
+ if (kqueue_fd < 0)
+ return 0;
+ return 1;
+}
+
+/*
+ * It is a constructor, which means that it will automatically be called before
+ * main(). This is GCC-specific but it works at least since 2.95.
+ * Special care must be taken so that it does not need any uninitialized data.
+ */
+__attribute__((constructor))
+static void _do_register(void)
+{
+ struct poller *p;
+
+ if (nbpollers >= MAX_POLLERS)
+ return;
+
+ kqueue_fd = -1;
+ p = &pollers[nbpollers++];
+
+ p->name = "kqueue";
+ p->pref = 300;
+ p->private = NULL;
+
+ p->clo = NULL;
+ p->test = _do_test;
+ p->init = _do_init;
+ p->term = _do_term;
+ p->poll = _do_poll;
+ p->fork = _do_fork;
+}
+
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * FD polling functions for generic poll()
+ *
+ * Copyright 2000-2014 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <unistd.h>
+#include <sys/poll.h>
+#include <sys/time.h>
+#include <sys/types.h>
+
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/ticks.h>
+#include <common/time.h>
+
+#include <types/global.h>
+
+#include <proto/fd.h>
+
+
+static unsigned int *fd_evts[2];
+
+/* private data */
+static struct pollfd *poll_events = NULL;
+
+
+static inline unsigned int hap_fd_isset(int fd, unsigned int *evts)
+{
+ return evts[fd / (8*sizeof(*evts))] & (1U << (fd & (8*sizeof(*evts) - 1)));
+}
+
+static inline void hap_fd_set(int fd, unsigned int *evts)
+{
+ evts[fd / (8*sizeof(*evts))] |= 1U << (fd & (8*sizeof(*evts) - 1));
+}
+
+static inline void hap_fd_clr(int fd, unsigned int *evts)
+{
+ evts[fd / (8*sizeof(*evts))] &= ~(1U << (fd & (8*sizeof(*evts) - 1)));
+}
+
+REGPRM1 static void __fd_clo(int fd)
+{
+ hap_fd_clr(fd, fd_evts[DIR_RD]);
+ hap_fd_clr(fd, fd_evts[DIR_WR]);
+}
+
+/*
+ * Poll() poller
+ */
+REGPRM2 static void _do_poll(struct poller *p, int exp)
+{
+ int status;
+ int fd, nbfd;
+ int wait_time;
+ int updt_idx, en, eo;
+ int fds, count;
+ int sr, sw;
+ unsigned rn, wn; /* read new, write new */
+
+ /* first, scan the update list to find changes */
+ for (updt_idx = 0; updt_idx < fd_nbupdt; updt_idx++) {
+ fd = fd_updt[updt_idx];
+ fdtab[fd].updated = 0;
+ fdtab[fd].new = 0;
+
+ if (!fdtab[fd].owner)
+ continue;
+
+ eo = fdtab[fd].state;
+ en = fd_compute_new_polled_status(eo);
+
+ if ((eo ^ en) & FD_EV_POLLED_RW) {
+ /* poll status changed, update the lists */
+ fdtab[fd].state = en;
+
+ if ((eo & ~en) & FD_EV_POLLED_R)
+ hap_fd_clr(fd, fd_evts[DIR_RD]);
+ else if ((en & ~eo) & FD_EV_POLLED_R)
+ hap_fd_set(fd, fd_evts[DIR_RD]);
+
+ if ((eo & ~en) & FD_EV_POLLED_W)
+ hap_fd_clr(fd, fd_evts[DIR_WR]);
+ else if ((en & ~eo) & FD_EV_POLLED_W)
+ hap_fd_set(fd, fd_evts[DIR_WR]);
+ }
+ }
+ fd_nbupdt = 0;
+
+ nbfd = 0;
+ for (fds = 0; (fds * 8*sizeof(**fd_evts)) < maxfd; fds++) {
+ rn = fd_evts[DIR_RD][fds];
+ wn = fd_evts[DIR_WR][fds];
+
+ if (!(rn|wn))
+ continue;
+
+ for (count = 0, fd = fds * 8*sizeof(**fd_evts); count < 8*sizeof(**fd_evts) && fd < maxfd; count++, fd++) {
+ sr = (rn >> count) & 1;
+ sw = (wn >> count) & 1;
+ if ((sr|sw)) {
+ poll_events[nbfd].fd = fd;
+ poll_events[nbfd].events = (sr ? POLLIN : 0) | (sw ? POLLOUT : 0);
+ nbfd++;
+ }
+ }
+ }
+
+ /* now let's wait for events */
+ if (!exp)
+ wait_time = MAX_DELAY_MS;
+ else if (tick_is_expired(exp, now_ms))
+ wait_time = 0;
+ else {
+ wait_time = TICKS_TO_MS(tick_remain(now_ms, exp)) + 1;
+ if (wait_time > MAX_DELAY_MS)
+ wait_time = MAX_DELAY_MS;
+ }
+
+ gettimeofday(&before_poll, NULL);
+ status = poll(poll_events, nbfd, wait_time);
+ tv_update_date(wait_time, status);
+ measure_idle();
+
+ for (count = 0; status > 0 && count < nbfd; count++) {
+ int e = poll_events[count].revents;
+ fd = poll_events[count].fd;
+
+ if (!(e & ( POLLOUT | POLLIN | POLLERR | POLLHUP )))
+ continue;
+
+ /* ok, we found one active fd */
+ status--;
+
+ if (!fdtab[fd].owner)
+ continue;
+
+ /* it looks complicated but gcc can optimize it away when the constants
+ * have the same values... In fact it depends on gcc :-(
+ */
+ fdtab[fd].ev &= FD_POLL_STICKY;
+ if (POLLIN == FD_POLL_IN && POLLOUT == FD_POLL_OUT &&
+ POLLERR == FD_POLL_ERR && POLLHUP == FD_POLL_HUP) {
+ fdtab[fd].ev |= e & (POLLIN|POLLOUT|POLLERR|POLLHUP);
+ }
+ else {
+ fdtab[fd].ev |=
+ ((e & POLLIN ) ? FD_POLL_IN : 0) |
+ ((e & POLLOUT) ? FD_POLL_OUT : 0) |
+ ((e & POLLERR) ? FD_POLL_ERR : 0) |
+ ((e & POLLHUP) ? FD_POLL_HUP : 0);
+ }
+
+ if (fdtab[fd].ev & (FD_POLL_IN | FD_POLL_HUP | FD_POLL_ERR))
+ fd_may_recv(fd);
+
+ if (fdtab[fd].ev & (FD_POLL_OUT | FD_POLL_ERR))
+ fd_may_send(fd);
+ }
+
+}
+
+/*
+ * Initialization of the poll() poller.
+ * Returns 0 in case of failure, non-zero in case of success. If it fails, it
+ * disables the poller by setting its pref to 0.
+ */
+REGPRM1 static int _do_init(struct poller *p)
+{
+ __label__ fail_swevt, fail_srevt, fail_pe;
+ int fd_evts_bytes;
+
+ p->private = NULL;
+ fd_evts_bytes = (global.maxsock + sizeof(**fd_evts) - 1) / sizeof(**fd_evts) * sizeof(**fd_evts);
+
+ poll_events = calloc(1, sizeof(struct pollfd) * global.maxsock);
+
+ if (poll_events == NULL)
+ goto fail_pe;
+
+ if ((fd_evts[DIR_RD] = calloc(1, fd_evts_bytes)) == NULL)
+ goto fail_srevt;
+
+ if ((fd_evts[DIR_WR] = calloc(1, fd_evts_bytes)) == NULL)
+ goto fail_swevt;
+
+ return 1;
+
+ fail_swevt:
+ free(fd_evts[DIR_RD]);
+ fail_srevt:
+ free(poll_events);
+ fail_pe:
+ p->pref = 0;
+ return 0;
+}
+
+/*
+ * Termination of the poll() poller.
+ * Memory is released and the poller is marked as unselectable.
+ */
+REGPRM1 static void _do_term(struct poller *p)
+{
+ free(fd_evts[DIR_WR]);
+ free(fd_evts[DIR_RD]);
+ free(poll_events);
+ p->private = NULL;
+ p->pref = 0;
+}
+
+/*
+ * Check that the poller works.
+ * Returns 1 if OK, otherwise 0.
+ */
+REGPRM1 static int _do_test(struct poller *p)
+{
+ return 1;
+}
+
+/*
+ * It is a constructor, which means that it will automatically be called before
+ * main(). This is GCC-specific but it works at least since 2.95.
+ * Special care must be taken so that it does not need any uninitialized data.
+ */
+__attribute__((constructor))
+static void _do_register(void)
+{
+ struct poller *p;
+
+ if (nbpollers >= MAX_POLLERS)
+ return;
+ p = &pollers[nbpollers++];
+
+ p->name = "poll";
+ p->pref = 200;
+ p->private = NULL;
+
+ p->clo = __fd_clo;
+ p->test = _do_test;
+ p->init = _do_init;
+ p->term = _do_term;
+ p->poll = _do_poll;
+}
+
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * FD polling functions for generic select()
+ *
+ * Copyright 2000-2014 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <unistd.h>
+#include <sys/time.h>
+#include <sys/types.h>
+
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/ticks.h>
+#include <common/time.h>
+
+#include <types/global.h>
+
+#include <proto/fd.h>
+
+
+static fd_set *fd_evts[2];
+static fd_set *tmp_evts[2];
+
+/* Immediately remove the entry upon close() */
+REGPRM1 static void __fd_clo(int fd)
+{
+ FD_CLR(fd, fd_evts[DIR_RD]);
+ FD_CLR(fd, fd_evts[DIR_WR]);
+}
+
+/*
+ * Select() poller
+ */
+REGPRM2 static void _do_poll(struct poller *p, int exp)
+{
+ int status;
+ int fd, i;
+ struct timeval delta;
+ int delta_ms;
+ int readnotnull, writenotnull;
+ int fds;
+ int updt_idx, en, eo;
+ char count;
+
+ /* first, scan the update list to find changes */
+ for (updt_idx = 0; updt_idx < fd_nbupdt; updt_idx++) {
+ fd = fd_updt[updt_idx];
+ fdtab[fd].updated = 0;
+ fdtab[fd].new = 0;
+
+ if (!fdtab[fd].owner)
+ continue;
+
+ eo = fdtab[fd].state;
+ en = fd_compute_new_polled_status(eo);
+
+ if ((eo ^ en) & FD_EV_POLLED_RW) {
+ /* poll status changed, update the lists */
+ fdtab[fd].state = en;
+
+ if ((eo & ~en) & FD_EV_POLLED_R)
+ FD_CLR(fd, fd_evts[DIR_RD]);
+ else if ((en & ~eo) & FD_EV_POLLED_R)
+ FD_SET(fd, fd_evts[DIR_RD]);
+
+ if ((eo & ~en) & FD_EV_POLLED_W)
+ FD_CLR(fd, fd_evts[DIR_WR]);
+ else if ((en & ~eo) & FD_EV_POLLED_W)
+ FD_SET(fd, fd_evts[DIR_WR]);
+ }
+ }
+ fd_nbupdt = 0;
+
+ delta_ms = 0;
+ delta.tv_sec = 0;
+ delta.tv_usec = 0;
+
+ if (!exp) {
+ delta_ms = MAX_DELAY_MS;
+ delta.tv_sec = (MAX_DELAY_MS / 1000);
+ delta.tv_usec = (MAX_DELAY_MS % 1000) * 1000;
+ }
+ else if (!tick_is_expired(exp, now_ms)) {
+ delta_ms = TICKS_TO_MS(tick_remain(now_ms, exp)) + SCHEDULER_RESOLUTION;
+ if (delta_ms > MAX_DELAY_MS)
+ delta_ms = MAX_DELAY_MS;
+ delta.tv_sec = (delta_ms / 1000);
+ delta.tv_usec = (delta_ms % 1000) * 1000;
+ }
+
+ /* let's restore fdset state */
+
+ readnotnull = 0; writenotnull = 0;
+ for (i = 0; i < (maxfd + FD_SETSIZE - 1)/(8*sizeof(int)); i++) {
+ readnotnull |= (*(((int*)tmp_evts[DIR_RD])+i) = *(((int*)fd_evts[DIR_RD])+i)) != 0;
+ writenotnull |= (*(((int*)tmp_evts[DIR_WR])+i) = *(((int*)fd_evts[DIR_WR])+i)) != 0;
+ }
+
+ // /* just a verification code, needs to be removed for performance */
+ // for (i=0; i<maxfd; i++) {
+ // if (FD_ISSET(i, tmp_evts[DIR_RD]) != FD_ISSET(i, fd_evts[DIR_RD]))
+ // abort();
+ // if (FD_ISSET(i, tmp_evts[DIR_WR]) != FD_ISSET(i, fd_evts[DIR_WR]))
+ // abort();
+ //
+ // }
+
+ gettimeofday(&before_poll, NULL);
+ status = select(maxfd,
+ readnotnull ? tmp_evts[DIR_RD] : NULL,
+ writenotnull ? tmp_evts[DIR_WR] : NULL,
+ NULL,
+ &delta);
+
+ tv_update_date(delta_ms, status);
+ measure_idle();
+
+ if (status <= 0)
+ return;
+
+ for (fds = 0; (fds * BITS_PER_INT) < maxfd; fds++) {
+ if ((((int *)(tmp_evts[DIR_RD]))[fds] | ((int *)(tmp_evts[DIR_WR]))[fds]) == 0)
+ continue;
+
+ for (count = BITS_PER_INT, fd = fds * BITS_PER_INT; count && fd < maxfd; count--, fd++) {
+ /* if we specify read first, the accepts and zero reads will be
+ * seen first. Moreover, system buffers will be flushed faster.
+ */
+ if (!fdtab[fd].owner)
+ continue;
+
+ fdtab[fd].ev &= FD_POLL_STICKY;
+ if (FD_ISSET(fd, tmp_evts[DIR_RD]))
+ fdtab[fd].ev |= FD_POLL_IN;
+
+ if (FD_ISSET(fd, tmp_evts[DIR_WR]))
+ fdtab[fd].ev |= FD_POLL_OUT;
+
+ if (fdtab[fd].ev & (FD_POLL_IN | FD_POLL_HUP | FD_POLL_ERR))
+ fd_may_recv(fd);
+
+ if (fdtab[fd].ev & (FD_POLL_OUT | FD_POLL_ERR))
+ fd_may_send(fd);
+ }
+ }
+}
+
+/*
+ * Initialization of the select() poller.
+ * Returns 0 in case of failure, non-zero in case of success. If it fails, it
+ * disables the poller by setting its pref to 0.
+ */
+REGPRM1 static int _do_init(struct poller *p)
+{
+ __label__ fail_swevt, fail_srevt, fail_wevt, fail_revt;
+ int fd_set_bytes;
+
+ p->private = NULL;
+
+ if (global.maxsock > FD_SETSIZE)
+ goto fail_revt;
+
+ fd_set_bytes = sizeof(fd_set) * (global.maxsock + FD_SETSIZE - 1) / FD_SETSIZE;
+
+ if ((tmp_evts[DIR_RD] = (fd_set *)calloc(1, fd_set_bytes)) == NULL)
+ goto fail_revt;
+
+ if ((tmp_evts[DIR_WR] = (fd_set *)calloc(1, fd_set_bytes)) == NULL)
+ goto fail_wevt;
+
+ if ((fd_evts[DIR_RD] = (fd_set *)calloc(1, fd_set_bytes)) == NULL)
+ goto fail_srevt;
+
+ if ((fd_evts[DIR_WR] = (fd_set *)calloc(1, fd_set_bytes)) == NULL)
+ goto fail_swevt;
+
+ return 1;
+
+ fail_swevt:
+ free(fd_evts[DIR_RD]);
+ fail_srevt:
+ free(tmp_evts[DIR_WR]);
+ fail_wevt:
+ free(tmp_evts[DIR_RD]);
+ fail_revt:
+ p->pref = 0;
+ return 0;
+}
+
+/*
+ * Termination of the select() poller.
+ * Memory is released and the poller is marked as unselectable.
+ */
+REGPRM1 static void _do_term(struct poller *p)
+{
+ free(fd_evts[DIR_WR]);
+ free(fd_evts[DIR_RD]);
+ free(tmp_evts[DIR_WR]);
+ free(tmp_evts[DIR_RD]);
+ p->private = NULL;
+ p->pref = 0;
+}
+
+/*
+ * Check that the poller works.
+ * Returns 1 if OK, otherwise 0.
+ */
+REGPRM1 static int _do_test(struct poller *p)
+{
+ if (global.maxsock > FD_SETSIZE)
+ return 0;
+
+ return 1;
+}
+
+/*
+ * It is a constructor, which means that it will automatically be called before
+ * main(). This is GCC-specific but it works at least since 2.95.
+ * Special care must be taken so that it does not need any uninitialized data.
+ */
+__attribute__((constructor))
+static void _do_register(void)
+{
+ struct poller *p;
+
+ if (nbpollers >= MAX_POLLERS)
+ return;
+ p = &pollers[nbpollers++];
+
+ p->name = "select";
+ p->pref = 150;
+ p->private = NULL;
+
+ p->clo = __fd_clo;
+ p->test = _do_test;
+ p->init = _do_init;
+ p->term = _do_term;
+ p->poll = _do_poll;
+}
+
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * File descriptors management functions.
+ *
+ * Copyright 2000-2014 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * This code implements an events cache for file descriptors. It remembers the
+ * readiness of a file descriptor after a return from poll() and the fact that
+ * an I/O attempt failed on EAGAIN. Events in the cache which are still marked
+ * ready and active are processed just as if they were reported by poll().
+ *
+ * This serves multiple purposes. First, it significantly improves performance
+ * by avoiding subscribing to polling unless absolutely necessary, so most
+ * events are processed without polling at all, especially send() which
+ * benefits from the socket buffers. Second, it is the only way to support
+ * edge-triggered pollers (eg: EPOLL_ET). And third, it enables I/O operations
+ * that are backed by invisible buffers. For example, SSL is able to read a
+ * whole socket buffer and not deliver it to the application buffer because
+ * it's full. Unfortunately, it won't be reported by a poller anymore until
+ * some new activity happens. The only way to call it again thus is to keep
+ * this readiness information in the cache and to access it without polling
+ * once the FD is enabled again.
+ *
+ * One interesting feature of the cache is that it maintains the principle
+ * of speculative I/O introduced in haproxy 1.3 : the first time an event is
+ * enabled, the FD is considered as ready so that the I/O attempt is performed
+ * via the cache without polling. And the polling happens only when EAGAIN is
+ * first met. This avoids polling for HTTP requests, especially when the
+ * defer-accept mode is used. It also avoids polling for sending short data
+ * such as requests to servers or short responses to clients.
+ *
+ * The cache consists of a list of active events and a list of updates.
+ * Active events are events that are expected to come and that we must report
+ * to the application until it asks to stop or asks to poll. Updates are new
+ * requests for changing an FD state. Updates are the only way to create new
+ * events. This is important because it means that the number of cached events
+ * cannot increase between updates and will only grow one at a time while
+ * processing updates. All updates must always be processed, though events
+ * might be processed by small batches if required.
+ *
+ * There is no direct link between the FD and the updates list. There is only a
+ * bit in the fdtab[] to indicate that a file descriptor is already present in
+ * the updates list. Once an fd is present in the updates list, it will have to
+ * be considered even if its changes are reverted in the middle or if the fd is
+ * replaced.
+ *
+ * It is important to understand that as long as all expected events are
+ * processed, they might starve the polled events, especially because polled
+ * I/O starvation quickly induces more cached I/O. One solution to this
+ * consists in only processing a part of the events at once, but one drawback
+ * is that unhandled events will still wake the poller up. Using an edge-
+ * triggered poller such as EPOLL_ET will solve this issue though.
+ *
+ * Since we do not want to scan all the FD list to find cached I/O events,
+ * we store them in a list consisting of a linear array holding only the FD
+ * indexes right now. Note that a closed FD cannot exist in the cache, because
+ * it is closed by fd_delete() which in turn calls fd_release_cache_entry()
+ * which always removes it from the list.
+ *
+ * The FD array has to hold a back reference to the cache. This reference is
+ * always valid unless the FD is not in the cache and is not updated, in which
+ * case the reference points to index 0.
+ *
+ * The event state for an FD, as found in fdtab[].state, is maintained for each
+ * direction. The state field is built this way, with R bits in the low nibble
+ * and W bits in the high nibble for ease of access and debugging :
+ *
+ * 7 6 5 4 3 2 1 0
+ * [ 0 | PW | RW | AW | 0 | PR | RR | AR ]
+ *
+ * A* = active *R = read
+ * P* = polled *W = write
+ * R* = ready
+ *
+ * An FD is marked "active" when there is a desire to use it.
+ * An FD is marked "polled" when it is registered in the polling.
+ * An FD is marked "ready" when it has not faced a new EAGAIN since last wake-up
+ * (it is a cache of the last EAGAIN regardless of polling changes).
+ *
+ * We have 8 possible states for each direction based on these 3 flags :
+ *
+ * +---+---+---+----------+---------------------------------------------+
+ * | P | R | A | State | Description |
+ * +---+---+---+----------+---------------------------------------------+
+ * | 0 | 0 | 0 | DISABLED | No activity desired, not ready. |
+ * | 0 | 0 | 1 | MUSTPOLL | Activity desired via polling. |
+ * | 0 | 1 | 0 | STOPPED | End of activity without polling. |
+ * | 0 | 1 | 1 | ACTIVE | Activity desired without polling. |
+ * | 1 | 0 | 0 | ABORT | Aborted poll(). Not frequently seen. |
+ * | 1 | 0 | 1 | POLLED | FD is being polled. |
+ * | 1 | 1 | 0 | PAUSED | FD was paused while ready (eg: buffer full) |
+ * | 1 | 1 | 1 | READY | FD was marked ready by poll() |
+ * +---+---+---+----------+---------------------------------------------+
+ *
+ * The transitions are pretty simple :
+ * - fd_want_*() : set flag A
+ * - fd_stop_*() : clear flag A
+ * - fd_cant_*() : clear flag R (when facing EAGAIN)
+ * - fd_may_*() : set flag R (upon return from poll())
+ * - sync() : if (A) { if (!R) P := 1 } else { P := 0 }
+ *
+ * The PAUSED, ABORT and MUSTPOLL states are transient for level-triggered
+ * pollers and are fixed by the sync() which happens at the beginning of the
+ * poller. For event-triggered pollers, only the MUSTPOLL state will be
+ * transient and ABORT will lead to PAUSED. The ACTIVE state is the only stable
+ * one which has P != A.
+ *
+ * The READY state is a bit special as activity on the FD might be notified
+ * both by the poller and by the cache. But it is needed for some multi-layer
+ * protocols (eg: SSL) where connection activity is not 100% linked to FD
+ * activity. Also some pollers might prefer to implement it as ACTIVE if
+ * enabling/disabling the FD is cheap. The READY and ACTIVE states are the
+ * two states for which a cache entry is allocated.
+ *
+ * The state transitions look like the diagram below. Only the 4 right states
+ * have polling enabled :
+ *
+ * (POLLED=0) (POLLED=1)
+ *
+ * +----------+ sync +-------+
+ * | DISABLED | <----- | ABORT | (READY=0, ACTIVE=0)
+ * +----------+ +-------+
+ * clr | ^ set | ^
+ * | | | |
+ * v | set v | clr
+ * +----------+ sync +--------+
+ * | MUSTPOLL | -----> | POLLED | (READY=0, ACTIVE=1)
+ * +----------+ +--------+
+ * ^ poll | ^
+ * | | |
+ * | EAGAIN v | EAGAIN
+ * +--------+ +-------+
+ * | ACTIVE | | READY | (READY=1, ACTIVE=1)
+ * +--------+ +-------+
+ * clr | ^ set | ^
+ * | | | |
+ * v | set v | clr
+ * +---------+ sync +--------+
+ * | STOPPED | <------ | PAUSED | (READY=1, ACTIVE=0)
+ * +---------+ +--------+
+ */
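The state nibble and the sync() transition rule described above can be sketched in a few lines. The EV_* constants below are illustrative stand-ins for HAProxy's FD_EV_* flags, laid out exactly as in the diagram (R bits in the low nibble, W bits in the high one):

```c
#include <assert.h>

#define EV_ACTIVE_R  0x01
#define EV_READY_R   0x02
#define EV_POLLED_R  0x04
#define EV_ACTIVE_W  0x10
#define EV_READY_W   0x20
#define EV_POLLED_W  0x40

/* sync() rule per direction: if (A) { if (!R) P := 1 } else { P := 0 } */
static int sync_state(int st)
{
        int dir;

        for (dir = 0; dir < 2; dir++) {
                int a = EV_ACTIVE_R << (4 * dir);
                int r = EV_READY_R  << (4 * dir);
                int p = EV_POLLED_R << (4 * dir);

                if (st & a) {
                        if (!(st & r))
                                st |= p;        /* active, not ready: poll */
                } else {
                        st &= ~p;               /* not active: stop polling */
                }
        }
        return st;
}
```

Applying sync_state() to the read nibble reproduces the transitions in the diagram: MUSTPOLL (0x01) becomes POLLED (0x05), ABORT (0x04) becomes DISABLED (0x00), PAUSED (0x06) becomes STOPPED (0x02), and ACTIVE (0x03) is left unchanged, the one stable state with P != A.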
+
+#include <stdio.h>
+#include <string.h>
+#include <unistd.h>
+#include <sys/types.h>
+
+#include <common/compat.h>
+#include <common/config.h>
+
+#include <types/global.h>
+
+#include <proto/fd.h>
+#include <proto/port_range.h>
+
+struct fdtab *fdtab = NULL; /* array of all the file descriptors */
+struct fdinfo *fdinfo = NULL; /* less-often used infos for file descriptors */
+int maxfd; /* # of the highest fd + 1 */
+int totalconn; /* total # of terminated sessions */
+int actconn; /* # of active sessions */
+
+struct poller pollers[MAX_POLLERS];
+struct poller cur_poller;
+int nbpollers = 0;
+
+unsigned int *fd_cache = NULL; // FD events cache
+unsigned int *fd_updt = NULL; // FD updates list
+int fd_cache_num = 0; // number of events in the cache
+int fd_nbupdt = 0; // number of updates in the list
+
+/* Deletes an FD from the fdsets, and recomputes the maxfd limit.
+ * The file descriptor is also closed.
+ */
+void fd_delete(int fd)
+{
+ if (fdtab[fd].linger_risk) {
+ /* this is generally set when connecting to servers */
+ setsockopt(fd, SOL_SOCKET, SO_LINGER,
+ (struct linger *) &nolinger, sizeof(struct linger));
+ }
+ if (cur_poller.clo)
+ cur_poller.clo(fd);
+
+ fd_release_cache_entry(fd);
+ fdtab[fd].state = 0;
+
+ port_range_release_port(fdinfo[fd].port_range, fdinfo[fd].local_port);
+ fdinfo[fd].port_range = NULL;
+ close(fd);
+ fdtab[fd].owner = NULL;
+ fdtab[fd].new = 0;
+
+ while ((maxfd-1 >= 0) && !fdtab[maxfd-1].owner)
+ maxfd--;
+}
+
+/* Scan and process the cached events. This should be called right after
+ * the poller. The loop may cause new entries to be created, for example
+ * if a listener causes an accept() to initiate a new incoming connection
+ * wanting to attempt a recv().
+ */
+void fd_process_cached_events()
+{
+ int fd, entry, e;
+
+ for (entry = 0; entry < fd_cache_num; ) {
+ fd = fd_cache[entry];
+ e = fdtab[fd].state;
+
+ fdtab[fd].ev &= FD_POLL_STICKY;
+
+ if ((e & (FD_EV_READY_R | FD_EV_ACTIVE_R)) == (FD_EV_READY_R | FD_EV_ACTIVE_R))
+ fdtab[fd].ev |= FD_POLL_IN;
+
+ if ((e & (FD_EV_READY_W | FD_EV_ACTIVE_W)) == (FD_EV_READY_W | FD_EV_ACTIVE_W))
+ fdtab[fd].ev |= FD_POLL_OUT;
+
+ if (fdtab[fd].iocb && fdtab[fd].owner && fdtab[fd].ev)
+ fdtab[fd].iocb(fd);
+ else
+ fd_release_cache_entry(fd);
+
+ /* If the fd was removed from the cache, it has been
+ * replaced by the next one that we don't want to skip !
+ */
+ if (entry < fd_cache_num && fd_cache[entry] != fd)
+ continue;
+ entry++;
+ }
+}
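The "removed entry replaced by the next one" comment above describes a classic swap-remove iteration: deleting the current entry moves the last entry into its slot, so the loop must re-examine the same index instead of advancing. A minimal stand-alone sketch (the cache array and helpers here are hypothetical, not the fd_cache API):

```c
#include <assert.h>

static int cache[8];
static int cache_num;

/* remove entry <idx> by swapping the last entry into its slot */
static void cache_remove(int idx)
{
        cache[idx] = cache[--cache_num];
}

/* drop every even value without skipping the swapped-in entries */
static void drop_even(void)
{
        int entry = 0;

        while (entry < cache_num) {
                if ((cache[entry] & 1) == 0) {
                        cache_remove(entry);
                        continue;       /* same slot now holds a new entry */
                }
                entry++;
        }
}
```

If the loop advanced unconditionally after a removal, the entry swapped in from the tail would never be examined, which is exactly the bug the comment in fd_process_cached_events() guards against.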
+
+/* disable the specified poller */
+void disable_poller(const char *poller_name)
+{
+ int p;
+
+ for (p = 0; p < nbpollers; p++)
+ if (strcmp(pollers[p].name, poller_name) == 0)
+ pollers[p].pref = 0;
+}
+
+/*
+ * Initialize the pollers till the best one is found.
+ * If none works, returns 0, otherwise 1.
+ */
+int init_pollers()
+{
+ int p;
+ struct poller *bp;
+
+ if ((fd_cache = (uint32_t *)calloc(1, sizeof(uint32_t) * global.maxsock)) == NULL)
+ goto fail_cache;
+
+ if ((fd_updt = (uint32_t *)calloc(1, sizeof(uint32_t) * global.maxsock)) == NULL)
+ goto fail_updt;
+
+ do {
+ bp = NULL;
+ for (p = 0; p < nbpollers; p++)
+ if (!bp || (pollers[p].pref > bp->pref))
+ bp = &pollers[p];
+
+ if (!bp || bp->pref == 0)
+ break;
+
+ if (bp->init(bp)) {
+ memcpy(&cur_poller, bp, sizeof(*bp));
+ return 1;
+ }
+ } while (!bp || bp->pref == 0);
+ return 0;
+
+ fail_updt:
+ free(fd_cache);
+ fail_cache:
+ return 0;
+}
+
+/*
+ * Deinitialize the pollers.
+ */
+void deinit_pollers() {
+
+ struct poller *bp;
+ int p;
+
+ for (p = 0; p < nbpollers; p++) {
+ bp = &pollers[p];
+
+ if (bp && bp->pref)
+ bp->term(bp);
+ }
+
+ free(fd_updt);
+ free(fd_cache);
+ fd_updt = NULL;
+ fd_cache = NULL;
+}
+
+/*
+ * Lists the known pollers on <out>.
+ * Should be performed only before initialization.
+ */
+int list_pollers(FILE *out)
+{
+ int p;
+ int last, next;
+ int usable;
+ struct poller *bp;
+
+ fprintf(out, "Available polling systems :\n");
+
+ usable = 0;
+ bp = NULL;
+ last = next = -1;
+ while (1) {
+ for (p = 0; p < nbpollers; p++) {
+ if ((next < 0 || pollers[p].pref > next)
+ && (last < 0 || pollers[p].pref < last)) {
+ next = pollers[p].pref;
+ if (!bp || (pollers[p].pref > bp->pref))
+ bp = &pollers[p];
+ }
+ }
+
+ if (next == -1)
+ break;
+
+ for (p = 0; p < nbpollers; p++) {
+ if (pollers[p].pref == next) {
+ fprintf(out, " %10s : ", pollers[p].name);
+ if (pollers[p].pref == 0)
+ fprintf(out, "disabled, ");
+ else
+ fprintf(out, "pref=%3d, ", pollers[p].pref);
+ if (pollers[p].test(&pollers[p])) {
+ fprintf(out, " test result OK");
+ if (next > 0)
+ usable++;
+ } else {
+ fprintf(out, " test result FAILED");
+ if (bp == &pollers[p])
+ bp = NULL;
+ }
+ fprintf(out, "\n");
+ }
+ }
+ last = next;
+ next = -1;
+ };
+ fprintf(out, "Total: %d (%d usable), will use %s.\n", nbpollers, usable, bp ? bp->name : "none");
+ return 0;
+}
+
+/*
+ * Some pollers may lose their connection after a fork(). It may be necessary
+ * to re-initialize some of them after the fork. Returns 0 in case of failure,
+ * otherwise 1. The fork() function may be NULL if unused. In case of error,
+ * the current poller is destroyed and the caller is responsible for trying
+ * another one by calling init_pollers() again.
+ */
+int fork_poller()
+{
+ int fd;
+ for (fd = 0; fd <= maxfd; fd++) {
+ if (fdtab[fd].owner) {
+ fdtab[fd].cloned = 1;
+ }
+ }
+
+ if (cur_poller.fork) {
+ if (cur_poller.fork(&cur_poller))
+ return 1;
+ cur_poller.term(&cur_poller);
+ return 0;
+ }
+ return 1;
+}
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Event rate calculation functions.
+ *
+ * Copyright 2000-2010 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <common/config.h>
+#include <common/standard.h>
+#include <common/time.h>
+#include <common/tools.h>
+#include <proto/freq_ctr.h>
+
+/* Read a frequency counter taking history into account for missing time in
+ * current period. The current second is sub-divided into 1000 chunks of one ms,
+ * and the missing ones are read proportionally from the previous value. The
+ * return value has the same precision as one input data sample, so low rates
+ * will be inaccurate but still appropriate for max checking. One trick we use
+ * low values is to specially handle the case where the rate is between 0 and 1
+ * in order to avoid flapping while waiting for the next event.
+ *
+ * For immediate limit checking, it's recommended to use freq_ctr_remain() and
+ * next_event_delay() instead which do not have the flapping correction, so
+ * that even frequencies as low as one event/period are properly handled.
+ */
+unsigned int read_freq_ctr(struct freq_ctr *ctr)
+{
+ unsigned int curr, past;
+ unsigned int age;
+
+ age = now.tv_sec - ctr->curr_sec;
+ if (unlikely(age > 1))
+ return 0;
+
+ curr = 0;
+ past = ctr->curr_ctr;
+ if (likely(!age)) {
+ curr = past;
+ past = ctr->prev_ctr;
+ }
+
+ if (past <= 1 && !curr)
+ return past; /* very low rate, avoid flapping */
+
+ return curr + mul32hi(past, ms_left_scaled);
+}
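The interpolation performed by read_freq_ctr() can be illustrated with a simplified, unscaled version. read_rate() below is a hypothetical helper: the real code uses mul32hi() with the pre-scaled ms_left_scaled instead of an explicit division, but the arithmetic is the same, with the remaining fraction of the previous period added to the current count.

```c
#include <assert.h>

/* report an interpolated events/second rate, given the count in the
 * current second (curr), the count of the previous second (past), and
 * how many ms of the current second have elapsed (ms_elapsed).
 */
static unsigned int read_rate(unsigned int curr, unsigned int past,
                              unsigned int ms_elapsed)
{
        unsigned int ms_left = 1000 - ms_elapsed;

        if (past <= 1 && !curr)
                return past;            /* very low rate, avoid flapping */
        return curr + past * ms_left / 1000;
}
```

For example, 250 ms into the current second with curr=10 and past=100, the remaining 750/1000 of the previous period still counts, giving 10 + 100*750/1000 = 85 events/s; a counter sitting at past=1 with no current events reports 1 rather than flapping between 0 and 1.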
+
+/* returns the number of remaining events that can occur on this freq counter
+ * while respecting <freq> and taking into account that <pend> events are
+ * already known to be pending. Returns 0 if limit was reached.
+ */
+unsigned int freq_ctr_remain(struct freq_ctr *ctr, unsigned int freq, unsigned int pend)
+{
+ unsigned int curr, past;
+ unsigned int age;
+
+ curr = 0;
+ age = now.tv_sec - ctr->curr_sec;
+
+ if (likely(age <= 1)) {
+ past = ctr->curr_ctr;
+ if (likely(!age)) {
+ curr = past;
+ past = ctr->prev_ctr;
+ }
+ curr += mul32hi(past, ms_left_scaled);
+ }
+ curr += pend;
+
+ if (curr >= freq)
+ return 0;
+ return freq - curr;
+}
+
+/* return the expected wait time in ms before the next event may occur,
+ * respecting frequency <freq>, and assuming there may already be some pending
+ * events. It returns zero if we can proceed immediately, otherwise the wait
+ * time, which will be rounded down to the nearest ms for better accuracy,
+ * with a minimum of one ms.
+ */
+unsigned int next_event_delay(struct freq_ctr *ctr, unsigned int freq, unsigned int pend)
+{
+ unsigned int curr, past;
+ unsigned int wait, age;
+
+ past = 0;
+ curr = 0;
+ age = now.tv_sec - ctr->curr_sec;
+
+ if (likely(age <= 1)) {
+ past = ctr->curr_ctr;
+ if (likely(!age)) {
+ curr = past;
+ past = ctr->prev_ctr;
+ }
+ curr += mul32hi(past, ms_left_scaled);
+ }
+ curr += pend;
+
+ if (curr < freq)
+ return 0;
+
+ wait = 999 / curr;
+ return MAX(wait, 1);
+}
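The tail of next_event_delay() is the whole pacing rule. A standalone sketch (pace_delay_ms() is an illustrative name), under the assumption that <curr> already includes the interpolated history and the pending events:

```c
#include <assert.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))

/* Hypothetical sketch of next_event_delay()'s final computation: below the
 * limit we proceed immediately; at or above it, the wait spreads the load
 * over the remainder of the second, with a floor of one millisecond.
 */
static unsigned int pace_delay_ms(unsigned int curr, unsigned int freq)
{
    unsigned int wait;

    if (curr < freq)
        return 0;          /* under the limit: no wait */
    wait = 999 / curr;     /* rounded down, as in the original */
    return MAX(wait, 1);
}
```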
+
+/* Reads a frequency counter taking history into account for missing time in
+ * current period. The period has to be passed in number of ticks and must
+ * match the one used to feed the counter. The counter value is reported for
+ * current date (now_ms). The return value has the same precision as one input
+ * data sample, so low rates over the period will be inaccurate but still
+ * appropriate for max checking. One trick we use for low values is to specially
+ * handle the case where the rate is between 0 and 1 in order to avoid flapping
+ * while waiting for the next event.
+ *
+ * For immediate limit checking, it's recommended to use freq_ctr_period_remain()
+ * instead which does not have the flapping correction, so that even frequencies
+ * as low as one event/period are properly handled.
+ *
+ * For measures over a 1-second period, it's better to use the implicit functions
+ * above.
+ */
+unsigned int read_freq_ctr_period(struct freq_ctr_period *ctr, unsigned int period)
+{
+ unsigned int curr, past;
+ unsigned int remain;
+
+ curr = ctr->curr_ctr;
+ past = ctr->prev_ctr;
+
+ remain = ctr->curr_tick + period - now_ms;
+ if (unlikely((int)remain < 0)) {
+ /* We're past the first period, check if we can still report a
+ * part of last period or if we're too far away.
+ */
+ remain += period;
+ if ((int)remain < 0)
+ return 0;
+ past = curr;
+ curr = 0;
+ }
+ if (past <= 1 && !curr)
+ return past; /* very low rate, avoid flapping */
+
+ curr += div64_32((unsigned long long)past * remain, period);
+ return curr;
+}
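The same windowing reduces to pure arithmetic. In this hypothetical model (freq_over_period() is an illustrative name) all times are in milliseconds and div64_32() is replaced by plain 64-bit division:

```c
#include <assert.h>

/* Hypothetical model of read_freq_ctr_period(): report the events of the
 * current period plus the pro-rated share of the previous one still inside
 * the sliding window, or 0 when the data is more than one period old.
 */
static unsigned int freq_over_period(unsigned int curr, unsigned int past,
                                     unsigned int curr_tick, unsigned int period,
                                     unsigned int now_ms)
{
    int remain = (int)(curr_tick + period - now_ms);

    if (remain < 0) {            /* past the current period */
        remain += (int)period;   /* try to salvage part of the previous one */
        if (remain < 0)
            return 0;            /* too far away */
        past = curr;
        curr = 0;
    }
    if (past <= 1 && !curr)
        return past;             /* very low rate, avoid flapping */

    return curr + (unsigned int)((unsigned long long)past * (unsigned int)remain / period);
}
```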
+
+/* Returns the number of remaining events that can occur on this freq counter
+ * while respecting <freq> events per period, and taking into account that
+ * <pend> events are already known to be pending. Returns 0 if limit was reached.
+ */
+unsigned int freq_ctr_remain_period(struct freq_ctr_period *ctr, unsigned int period,
+ unsigned int freq, unsigned int pend)
+{
+ unsigned int curr, past;
+ unsigned int remain;
+
+ curr = ctr->curr_ctr;
+ past = ctr->prev_ctr;
+
+ remain = ctr->curr_tick + period - now_ms;
+ if (likely((int)remain < 0)) {
+ /* We're past the first period, check if we can still report a
+ * part of last period or if we're too far away.
+ */
+ past = curr;
+ curr = 0;
+ remain += period;
+ if ((int)remain < 0)
+ past = 0;
+ }
+ if (likely(past))
+ curr += div64_32((unsigned long long)past * remain, period);
+
+ curr += pend;
+ freq -= curr;
+ if ((int)freq < 0)
+ freq = 0;
+ return freq;
+}
+
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Frontend variables and functions.
+ *
+ * Copyright 2000-2013 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <errno.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <sys/socket.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+
+#include <netinet/tcp.h>
+
+#include <common/chunk.h>
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/debug.h>
+#include <common/standard.h>
+#include <common/time.h>
+
+#include <types/global.h>
+
+#include <proto/acl.h>
+#include <proto/arg.h>
+#include <proto/channel.h>
+#include <proto/fd.h>
+#include <proto/frontend.h>
+#include <proto/log.h>
+#include <proto/hdr_idx.h>
+#include <proto/proto_tcp.h>
+#include <proto/proto_http.h>
+#include <proto/proxy.h>
+#include <proto/sample.h>
+#include <proto/stream.h>
+#include <proto/stream_interface.h>
+#include <proto/task.h>
+
+/* Finish a stream accept() for a proxy (TCP or HTTP). It returns a negative
+ * value in case of a critical failure which must cause the listener to be
+ * disabled, and a positive or null value in case of success.
+ */
+int frontend_accept(struct stream *s)
+{
+ struct session *sess = s->sess;
+ struct connection *conn = objt_conn(sess->origin);
+ struct listener *l = sess->listener;
+ struct proxy *fe = sess->fe;
+
+ if (unlikely(fe->nb_req_cap > 0)) {
+ if ((s->req_cap = pool_alloc2(fe->req_cap_pool)) == NULL)
+ goto out_return; /* no memory */
+ memset(s->req_cap, 0, fe->nb_req_cap * sizeof(void *));
+ }
+
+ if (unlikely(fe->nb_rsp_cap > 0)) {
+ if ((s->res_cap = pool_alloc2(fe->rsp_cap_pool)) == NULL)
+ goto out_free_reqcap; /* no memory */
+ memset(s->res_cap, 0, fe->nb_rsp_cap * sizeof(void *));
+ }
+
+ if (fe->http_needed) {
+ /* we have to allocate header indexes only if we know
+ * that we may make use of them. This of course includes
+ * (mode == PR_MODE_HTTP).
+ */
+ if (unlikely(!http_alloc_txn(s)))
+ goto out_free_rspcap; /* no memory */
+
+ /* and now initialize the HTTP transaction state */
+ http_init_txn(s);
+ }
+
+ if ((fe->mode == PR_MODE_TCP || fe->mode == PR_MODE_HTTP)
+ && (!LIST_ISEMPTY(&fe->logsrvs))) {
+ if (likely(!LIST_ISEMPTY(&fe->logformat))) {
+ /* we have the client ip */
+ if (s->logs.logwait & LW_CLIP)
+ if (!(s->logs.logwait &= ~(LW_CLIP|LW_INIT)))
+ s->do_log(s);
+ }
+ else if (conn) {
+ char pn[INET6_ADDRSTRLEN], sn[INET6_ADDRSTRLEN];
+
+ conn_get_from_addr(conn);
+ conn_get_to_addr(conn);
+
+ switch (addr_to_str(&conn->addr.from, pn, sizeof(pn))) {
+ case AF_INET:
+ case AF_INET6:
+ addr_to_str(&conn->addr.to, sn, sizeof(sn));
+ send_log(fe, LOG_INFO, "Connect from %s:%d to %s:%d (%s/%s)\n",
+ pn, get_host_port(&conn->addr.from),
+ sn, get_host_port(&conn->addr.to),
+ fe->id, (fe->mode == PR_MODE_HTTP) ? "HTTP" : "TCP");
+ break;
+ case AF_UNIX:
+ /* UNIX socket, only the destination is known */
+ send_log(fe, LOG_INFO, "Connect to unix:%d (%s/%s)\n",
+ l->luid,
+ fe->id, (fe->mode == PR_MODE_HTTP) ? "HTTP" : "TCP");
+ break;
+ }
+ }
+ }
+
+ if (unlikely((global.mode & MODE_DEBUG) && conn &&
+ (!(global.mode & MODE_QUIET) || (global.mode & MODE_VERBOSE)))) {
+ char pn[INET6_ADDRSTRLEN];
+
+ conn_get_from_addr(conn);
+
+ switch (addr_to_str(&conn->addr.from, pn, sizeof(pn))) {
+ case AF_INET:
+ case AF_INET6:
+ chunk_printf(&trash, "%08x:%s.accept(%04x)=%04x from [%s:%d]\n",
+ s->uniq_id, fe->id, (unsigned short)l->fd, (unsigned short)conn->t.sock.fd,
+ pn, get_host_port(&conn->addr.from));
+ break;
+ case AF_UNIX:
+ /* UNIX socket, only the destination is known */
+ chunk_printf(&trash, "%08x:%s.accept(%04x)=%04x from [unix:%d]\n",
+ s->uniq_id, fe->id, (unsigned short)l->fd, (unsigned short)conn->t.sock.fd,
+ l->luid);
+ break;
+ }
+
+ shut_your_big_mouth_gcc(write(1, trash.str, trash.len));
+ }
+
+ if (fe->mode == PR_MODE_HTTP)
+ s->req.flags |= CF_READ_DONTWAIT; /* one read is usually enough */
+
+ /* everything's OK, let's go on */
+ return 1;
+
+ /* Error unrolling */
+ out_free_rspcap:
+ pool_free2(fe->rsp_cap_pool, s->res_cap);
+ out_free_reqcap:
+ pool_free2(fe->req_cap_pool, s->req_cap);
+ out_return:
+ return -1;
+}
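frontend_accept() releases partially acquired resources through cascading goto labels. A self-contained sketch of that error-unrolling idiom (struct caps and alloc_caps() are illustrative names, not HAProxy types):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical illustration of the "error unrolling" pattern: each
 * allocation gets a label, and a failure jumps to the label that frees
 * exactly what was acquired before it, in reverse order.
 */
struct caps {
    void **req_cap;
    void **res_cap;
};

static int alloc_caps(struct caps *c, size_t nreq, size_t nrsp)
{
    c->req_cap = calloc(nreq, sizeof(void *));
    if (!c->req_cap)
        goto out_return;

    c->res_cap = calloc(nrsp, sizeof(void *));
    if (!c->res_cap)
        goto out_free_reqcap;

    return 1;                /* everything's OK */

 out_free_reqcap:
    free(c->req_cap);
    c->req_cap = NULL;
 out_return:
    return -1;
}
```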
+
+/************************************************************************/
+/* All supported sample and ACL keywords must be declared here. */
+/************************************************************************/
+
+/* set temp integer to the id of the frontend */
+static int
+smp_fetch_fe_id(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->flags = SMP_F_VOL_SESS;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = smp->sess->fe->uuid;
+ return 1;
+}
+
+/* set temp integer to the number of connections per second reaching the frontend.
+ * Accepts exactly 1 argument, which must be a frontend; other types will
+ * cause undefined behaviour.
+ */
+static int
+smp_fetch_fe_sess_rate(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = read_freq_ctr(&args->data.prx->fe_sess_per_sec);
+ return 1;
+}
+
+/* set temp integer to the number of concurrent connections on the frontend.
+ * Accepts exactly 1 argument, which must be a frontend; other types will
+ * cause undefined behaviour.
+ */
+static int
+smp_fetch_fe_conn(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = args->data.prx->feconn;
+ return 1;
+}
+
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Please take care of keeping this list alphabetically sorted.
+ */
+static struct sample_fetch_kw_list smp_kws = {ILH, {
+ { "fe_conn", smp_fetch_fe_conn, ARG1(1,FE), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "fe_id", smp_fetch_fe_id, 0, NULL, SMP_T_SINT, SMP_USE_FTEND, },
+ { "fe_sess_rate", smp_fetch_fe_sess_rate, ARG1(1,FE), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { /* END */ },
+}};
+
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Please take care of keeping this list alphabetically sorted.
+ */
+static struct acl_kw_list acl_kws = {ILH, {
+ { /* END */ },
+}};
+
+
+__attribute__((constructor))
+static void __frontend_init(void)
+{
+ sample_register_fetches(&smp_kws);
+ acl_register_keywords(&acl_kws);
+}
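The registration above relies on GCC/clang constructor functions running before main(). A tiny sketch of the same pattern (demo names, not HAProxy's registration API): each compilation unit appends its entries to a global registry, so the core never needs a hard-coded list of modules.

```c
#include <assert.h>

/* Hypothetical sketch of constructor-based keyword registration. */
static int registered_count;

static void register_keywords(int n)
{
    registered_count += n;
}

/* runs before main(), like __frontend_init() above */
__attribute__((constructor))
static void demo_frontend_init(void)
{
    register_keywords(3);   /* e.g. fe_conn, fe_id, fe_sess_rate */
}
```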
+
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Wrapper to make haproxy systemd-compliant.
+ *
+ * Copyright 2013 Marc-Antoine Perennou <Marc-Antoine@Perennou.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <errno.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <sys/wait.h>
+
+#define REEXEC_FLAG "HAPROXY_SYSTEMD_REEXEC"
+#define SD_DEBUG "<7>"
+#define SD_NOTICE "<5>"
+
+static volatile sig_atomic_t caught_signal;
+
+static char *pid_file = "/run/haproxy.pid";
+static int wrapper_argc;
+static char **wrapper_argv;
+
+/* Writes the path to the haproxy binary into <buffer>, whose size, given in
+ * <buffer_size>, must be at least 1 byte.
+ */
+static void locate_haproxy(char *buffer, size_t buffer_size)
+{
+ char *end = NULL;
+ int len;
+
+ len = readlink("/proc/self/exe", buffer, buffer_size - 1);
+ if (len == -1)
+ goto fail;
+
+ buffer[len] = 0;
+ end = strrchr(buffer, '/');
+ if (end == NULL)
+ goto fail;
+
+ if (strcmp(end + strlen(end) - 16, "-systemd-wrapper") == 0) {
+ end[strlen(end) - 16] = '\0';
+ return;
+ }
+
+ end[1] = '\0';
+ strncpy(end + 1, "haproxy", buffer + buffer_size - (end + 1));
+ buffer[buffer_size - 1] = '\0';
+ return;
+ fail:
+ strncpy(buffer, "/usr/sbin/haproxy", buffer_size);
+ buffer[buffer_size - 1] = '\0';
+ return;
+}
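The name derivation itself is plain string surgery; here it is isolated as a sketch (derive_haproxy_path() is an illustrative name), assuming <path> already holds the wrapper's resolved path:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch of locate_haproxy()'s logic: if our own name ends in
 * "-systemd-wrapper", stripping the suffix yields the haproxy binary;
 * otherwise "haproxy" replaces the last path component.
 */
static void derive_haproxy_path(char *path, size_t size)
{
    static const char suffix[] = "-systemd-wrapper";
    size_t len = strlen(path);
    size_t slen = sizeof(suffix) - 1;
    char *end;

    if (len >= slen && strcmp(path + len - slen, suffix) == 0) {
        path[len - slen] = '\0';   /* .../haproxy-systemd-wrapper -> .../haproxy */
        return;
    }
    end = strrchr(path, '/');
    if (end && (size_t)(end + 1 - path) + sizeof("haproxy") <= size)
        strcpy(end + 1, "haproxy");  /* keep the directory, swap the name */
}
```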
+
+static void spawn_haproxy(char **pid_strv, int nb_pid)
+{
+ char haproxy_bin[512];
+ pid_t pid;
+ int main_argc;
+ char **main_argv;
+
+ main_argc = wrapper_argc - 1;
+ main_argv = wrapper_argv + 1;
+
+ pid = fork();
+ if (!pid) {
+		/* 3 slots for "haproxy", "-Ds" and "-sf", plus one spare */
+ char **argv = calloc(4 + main_argc + nb_pid + 1, sizeof(char *));
+ int i;
+ int argno = 0;
+ locate_haproxy(haproxy_bin, 512);
+ argv[argno++] = haproxy_bin;
+ for (i = 0; i < main_argc; ++i)
+ argv[argno++] = main_argv[i];
+ argv[argno++] = "-Ds";
+ if (nb_pid > 0) {
+ argv[argno++] = "-sf";
+ for (i = 0; i < nb_pid; ++i)
+ argv[argno++] = pid_strv[i];
+ }
+ argv[argno] = NULL;
+
+ fprintf(stderr, SD_DEBUG "haproxy-systemd-wrapper: executing ");
+ for (i = 0; argv[i]; ++i)
+ fprintf(stderr, "%s ", argv[i]);
+ fprintf(stderr, "\n");
+
+ execv(argv[0], argv);
+		exit(1); /* only reached if execv() failed */
+ }
+}
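The argv assembly deserves a closer look, since execv() demands a NULL-terminated vector. A standalone sketch (build_argv() is an illustrative name) of the layout spawn_haproxy() builds: binary name, the wrapper's pass-through arguments, "-Ds" for daemon/systemd mode, then "-sf" with the old pids so the new process reloads gracefully.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of spawn_haproxy()'s argv construction. */
static char **build_argv(const char *bin, int main_argc, char **main_argv,
                         char **pid_strv, int nb_pid)
{
    char **argv = calloc(4 + main_argc + nb_pid + 1, sizeof(char *));
    int i, argno = 0;

    if (!argv)
        return NULL;
    argv[argno++] = (char *)bin;
    for (i = 0; i < main_argc; ++i)
        argv[argno++] = main_argv[i];        /* pass-through arguments */
    argv[argno++] = "-Ds";
    if (nb_pid > 0) {
        argv[argno++] = "-sf";               /* graceful reload of old pids */
        for (i = 0; i < nb_pid; ++i)
            argv[argno++] = pid_strv[i];
    }
    argv[argno] = NULL;                      /* execv() requires NULL termination */
    return argv;
}
```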
+
+static int read_pids(char ***pid_strv)
+{
+ FILE *f = fopen(pid_file, "r");
+ int read = 0, allocated = 8;
+ char pid_str[10];
+
+ if (!f)
+ return 0;
+
+ *pid_strv = malloc(allocated * sizeof(char *));
+	while (1 == fscanf(f, "%9s\n", pid_str)) {
+ if (read == allocated) {
+ allocated *= 2;
+ *pid_strv = realloc(*pid_strv, allocated * sizeof(char *));
+ }
+ (*pid_strv)[read++] = strdup(pid_str);
+ }
+
+ fclose(f);
+
+ return read;
+}
+
+static void signal_handler(int signum)
+{
+ caught_signal = signum;
+}
+
+static void do_restart(void)
+{
+ setenv(REEXEC_FLAG, "1", 1);
+ fprintf(stderr, SD_NOTICE "haproxy-systemd-wrapper: re-executing\n");
+
+ execv(wrapper_argv[0], wrapper_argv);
+}
+
+static void do_shutdown(void)
+{
+ int i, pid;
+ char **pid_strv = NULL;
+ int nb_pid = read_pids(&pid_strv);
+ for (i = 0; i < nb_pid; ++i) {
+ pid = atoi(pid_strv[i]);
+ if (pid > 0) {
+ fprintf(stderr, SD_DEBUG "haproxy-systemd-wrapper: SIGINT -> %d\n", pid);
+ kill(pid, SIGINT);
+ free(pid_strv[i]);
+ }
+ }
+ free(pid_strv);
+}
+
+static void init(int argc, char **argv)
+{
+ while (argc > 1) {
+ if ((*argv)[0] == '-' && (*argv)[1] == 'p') {
+ pid_file = *(argv + 1);
+ }
+ --argc; ++argv;
+ }
+}
+
+int main(int argc, char **argv)
+{
+ int status;
+ struct sigaction sa;
+
+ wrapper_argc = argc;
+ wrapper_argv = argv;
+
+ --argc; ++argv;
+ init(argc, argv);
+
+ memset(&sa, 0, sizeof(struct sigaction));
+ sa.sa_handler = &signal_handler;
+ sigaction(SIGUSR2, &sa, NULL);
+ sigaction(SIGHUP, &sa, NULL);
+ sigaction(SIGINT, &sa, NULL);
+ sigaction(SIGTERM, &sa, NULL);
+
+ if (getenv(REEXEC_FLAG) != NULL) {
+ /* We are being re-executed: restart HAProxy gracefully */
+ int i;
+ char **pid_strv = NULL;
+ int nb_pid = read_pids(&pid_strv);
+
+ unsetenv(REEXEC_FLAG);
+ spawn_haproxy(pid_strv, nb_pid);
+
+ for (i = 0; i < nb_pid; ++i)
+ free(pid_strv[i]);
+ free(pid_strv);
+ }
+ else {
+ /* Start a fresh copy of HAProxy */
+ spawn_haproxy(NULL, 0);
+ }
+
+ status = -1;
+ while (-1 != wait(&status) || errno == EINTR) {
+ if (caught_signal == SIGUSR2 || caught_signal == SIGHUP) {
+ caught_signal = 0;
+ do_restart();
+ }
+ else if (caught_signal == SIGINT || caught_signal == SIGTERM) {
+ caught_signal = 0;
+ do_shutdown();
+ }
+ }
+
+ fprintf(stderr, SD_NOTICE "haproxy-systemd-wrapper: exit, haproxy RC=%d\n",
+ status);
+ return status;
+}
--- /dev/null
+/*
+ * HA-Proxy : High Availability-enabled HTTP/TCP proxy
+ * Copyright 2000-2015 Willy Tarreau <w@1wt.eu>.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * Please refer to RFC2068 or RFC2616 for information about the HTTP protocol, and
+ * RFC2965 for information about cookie usage. More generally, the IETF HTTP
+ * Working Group's web site should be consulted for protocol related changes :
+ *
+ * http://ftp.ics.uci.edu/pub/ietf/http/
+ *
+ * Pending bugs (may never be fixed because never reproduced) :
+ * - solaris only : sometimes, an HTTP proxy with only a dispatch address causes
+ * the proxy to terminate (no core) if the client breaks the connection during
+ * the response. Seen on 1.1.8pre4, but never reproduced. May not be related to
+ * the snprintf() bug since requests were simple (GET / HTTP/1.0), but may be
+ * related to missing setsid() (fixed in 1.1.15)
+ * - a proxy with an invalid config will prevent the startup even if disabled.
+ *
+ * ChangeLog has moved to the CHANGELOG file.
+ *
+ */
+
+#define _GNU_SOURCE
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <ctype.h>
+#include <sys/time.h>
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <netinet/tcp.h>
+#include <netinet/in.h>
+#include <arpa/inet.h>
+#include <netdb.h>
+#include <fcntl.h>
+#include <errno.h>
+#include <signal.h>
+#include <stdarg.h>
+#include <sys/resource.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <syslog.h>
+#include <grp.h>
+#ifdef USE_CPU_AFFINITY
+#include <sched.h>
+#ifdef __FreeBSD__
+#include <sys/param.h>
+#include <sys/cpuset.h>
+#endif
+#endif
+
+#ifdef DEBUG_FULL
+#include <assert.h>
+#endif
+
+#include <common/base64.h>
+#include <common/cfgparse.h>
+#include <common/chunk.h>
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/defaults.h>
+#include <common/errors.h>
+#include <common/memory.h>
+#include <common/mini-clist.h>
+#include <common/namespace.h>
+#include <common/regex.h>
+#include <common/standard.h>
+#include <common/time.h>
+#include <common/uri_auth.h>
+#include <common/version.h>
+
+#include <types/capture.h>
+#include <types/global.h>
+#include <types/acl.h>
+#include <types/peers.h>
+
+#include <proto/acl.h>
+#include <proto/applet.h>
+#include <proto/arg.h>
+#include <proto/auth.h>
+#include <proto/backend.h>
+#include <proto/channel.h>
+#include <proto/checks.h>
+#include <proto/connection.h>
+#include <proto/fd.h>
+#include <proto/hdr_idx.h>
+#include <proto/hlua.h>
+#include <proto/listener.h>
+#include <proto/log.h>
+#include <proto/pattern.h>
+#include <proto/protocol.h>
+#include <proto/proto_http.h>
+#include <proto/proxy.h>
+#include <proto/queue.h>
+#include <proto/server.h>
+#include <proto/session.h>
+#include <proto/stream.h>
+#include <proto/signal.h>
+#include <proto/task.h>
+#include <proto/dns.h>
+
+#ifdef USE_OPENSSL
+#include <proto/ssl_sock.h>
+#endif
+
+#ifdef USE_DEVICEATLAS
+#include <import/da.h>
+#endif
+
+#ifdef USE_51DEGREES
+#include <import/51d.h>
+#endif
+
+/*********************************************************************/
+
+extern const struct comp_algo comp_algos[];
+
+/*********************************************************************/
+
+/* list of config files */
+static struct list cfg_cfgfiles = LIST_HEAD_INIT(cfg_cfgfiles);
+int pid; /* current process id */
+int relative_pid = 1; /* process id starting at 1 */
+
+/* global options */
+struct global global = {
+ .nbproc = 1,
+ .req_count = 0,
+ .logsrvs = LIST_HEAD_INIT(global.logsrvs),
+#if defined(USE_ZLIB) && defined(DEFAULT_MAXZLIBMEM)
+ .maxzlibmem = DEFAULT_MAXZLIBMEM * 1024U * 1024U,
+#else
+ .maxzlibmem = 0,
+#endif
+ .comp_rate_lim = 0,
+ .ssl_server_verify = SSL_SERVER_VERIFY_REQUIRED,
+ .unix_bind = {
+ .ux = {
+ .uid = -1,
+ .gid = -1,
+ .mode = 0,
+ }
+ },
+ .tune = {
+ .bufsize = BUFSIZE,
+ .maxrewrite = -1,
+ .chksize = BUFSIZE,
+ .reserved_bufs = RESERVED_BUFS,
+ .pattern_cache = DEFAULT_PAT_LRU_SIZE,
+#ifdef USE_OPENSSL
+ .sslcachesize = SSLCACHESIZE,
+ .ssl_default_dh_param = SSL_DEFAULT_DH_PARAM,
+#ifdef DEFAULT_SSL_MAX_RECORD
+ .ssl_max_record = DEFAULT_SSL_MAX_RECORD,
+#endif
+ .ssl_ctx_cache = DEFAULT_SSL_CTX_CACHE,
+#endif
+#ifdef USE_ZLIB
+ .zlibmemlevel = 8,
+ .zlibwindowsize = MAX_WBITS,
+#endif
+ .comp_maxlevel = 1,
+#ifdef DEFAULT_IDLE_TIMER
+ .idle_timer = DEFAULT_IDLE_TIMER,
+#else
+ .idle_timer = 1000, /* 1 second */
+#endif
+ },
+#ifdef USE_OPENSSL
+#ifdef DEFAULT_MAXSSLCONN
+ .maxsslconn = DEFAULT_MAXSSLCONN,
+#endif
+#endif
+#ifdef USE_DEVICEATLAS
+ .deviceatlas = {
+ .loglevel = 0,
+ .jsonpath = 0,
+ .cookiename = 0,
+ .cookienamelen = 0,
+ .useragentid = 0,
+ .daset = 0,
+ .separator = '|',
+ },
+#endif
+#ifdef USE_51DEGREES
+ ._51degrees = {
+ .property_separator = ',',
+ .property_names = LIST_HEAD_INIT(global._51degrees.property_names),
+ .data_file_path = NULL,
+#ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED
+ .data_set = { },
+#endif
+ .cache_size = 0,
+ },
+#endif
+ /* others NULL OK */
+};
+
+/*********************************************************************/
+
+int stopping;	/* non-zero means stopping in progress */
+int jobs = 0; /* number of active jobs (conns, listeners, active tasks, ...) */
+
+/* Here we store information about the pids of the processes we may pause
+ * or kill. We will send them a signal every 10 ms until we can bind to all
+ * our ports. With 200 retries, that's about 2 seconds.
+ */
+#define MAX_START_RETRIES 200
+static int *oldpids = NULL;
+static int oldpids_sig; /* use USR1 or TERM */
+
+/* this is used to drain data, and as a temporary buffer for sprintf()... */
+struct chunk trash = { };
+
+/* this buffer is always the same size as standard buffers and is used for
+ * swapping data inside a buffer.
+ */
+char *swap_buffer = NULL;
+
+int nb_oldpids = 0;
+const int zero = 0;
+const int one = 1;
+const struct linger nolinger = { .l_onoff = 1, .l_linger = 0 };
+
+char hostname[MAX_HOSTNAME_LEN];
+char localpeer[MAX_HOSTNAME_LEN];
+
+/* used from everywhere just to drain results we don't want to read and which
+ * recent versions of gcc increasingly and annoyingly complain about.
+ */
+int shut_your_big_mouth_gcc_int = 0;
+
+/* list of the temporarily limited listeners because of lack of resource */
+struct list global_listener_queue = LIST_HEAD_INIT(global_listener_queue);
+struct task *global_listener_queue_task;
+static struct task *manage_global_listener_queue(struct task *t);
+
+/* bitfield of a few warnings to emit just once (WARN_*) */
+unsigned int warned = 0;
+
+/*********************************************************************/
+/* general purpose functions ***************************************/
+/*********************************************************************/
+
+void display_version()
+{
+ printf("HA-Proxy version " HAPROXY_VERSION " " HAPROXY_DATE"\n");
+ printf("Copyright 2000-2015 Willy Tarreau <willy@haproxy.org>\n\n");
+}
+
+void display_build_opts()
+{
+ printf("Build options :"
+#ifdef BUILD_TARGET
+ "\n TARGET = " BUILD_TARGET
+#endif
+#ifdef BUILD_CPU
+ "\n CPU = " BUILD_CPU
+#endif
+#ifdef BUILD_CC
+ "\n CC = " BUILD_CC
+#endif
+#ifdef BUILD_CFLAGS
+ "\n CFLAGS = " BUILD_CFLAGS
+#endif
+#ifdef BUILD_OPTIONS
+ "\n OPTIONS = " BUILD_OPTIONS
+#endif
+ "\n\nDefault settings :"
+ "\n maxconn = %d, bufsize = %d, maxrewrite = %d, maxpollevents = %d"
+ "\n\n",
+ DEFAULT_MAXCONN, BUFSIZE, MAXREWRITE, MAX_POLL_EVENTS);
+
+ printf("Encrypted password support via crypt(3): "
+#ifdef CONFIG_HAP_CRYPT
+ "yes"
+#else
+ "no"
+#endif
+ "\n");
+
+#ifdef USE_ZLIB
+ printf("Built with zlib version : " ZLIB_VERSION "\n");
+#elif defined(USE_SLZ)
+ printf("Built with libslz for stateless compression.\n");
+#else /* USE_ZLIB */
+ printf("Built without compression support (neither USE_ZLIB nor USE_SLZ are set)\n");
+#endif
+ printf("Compression algorithms supported :");
+ {
+ int i;
+
+ for (i = 0; comp_algos[i].cfg_name; i++) {
+ printf("%s %s(\"%s\")", (i == 0 ? "" : ","), comp_algos[i].cfg_name, comp_algos[i].ua_name);
+ }
+ if (i == 0) {
+ printf("none");
+ }
+ }
+ printf("\n");
+
+#ifdef USE_OPENSSL
+ printf("Built with OpenSSL version : "
+#ifdef OPENSSL_IS_BORINGSSL
+ "BoringSSL\n");
+#else /* OPENSSL_IS_BORINGSSL */
+ OPENSSL_VERSION_TEXT "\n");
+ printf("Running on OpenSSL version : %s%s\n",
+ SSLeay_version(SSLEAY_VERSION),
+ ((OPENSSL_VERSION_NUMBER ^ SSLeay()) >> 8) ? " (VERSIONS DIFFER!)" : "");
+#endif
+ printf("OpenSSL library supports TLS extensions : "
+#if OPENSSL_VERSION_NUMBER < 0x00907000L
+ "no (library version too old)"
+#elif defined(OPENSSL_NO_TLSEXT)
+ "no (disabled via OPENSSL_NO_TLSEXT)"
+#else
+ "yes"
+#endif
+ "\n");
+ printf("OpenSSL library supports SNI : "
+#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
+ "yes"
+#else
+#ifdef OPENSSL_NO_TLSEXT
+ "no (because of OPENSSL_NO_TLSEXT)"
+#else
+ "no (version might be too old, 0.9.8f min needed)"
+#endif
+#endif
+ "\n");
+ printf("OpenSSL library supports prefer-server-ciphers : "
+#ifdef SSL_OP_CIPHER_SERVER_PREFERENCE
+ "yes"
+#else
+ "no (0.9.7 or later needed)"
+#endif
+ "\n");
+#else /* USE_OPENSSL */
+ printf("Built without OpenSSL support (USE_OPENSSL not set)\n");
+#endif
+
+#ifdef USE_PCRE
+ printf("Built with PCRE version : %s", pcre_version());
+ printf("\nPCRE library supports JIT : ");
+#ifdef USE_PCRE_JIT
+ {
+ int r;
+ pcre_config(PCRE_CONFIG_JIT, &r);
+ if (r)
+ printf("yes");
+ else
+ printf("no (libpcre build without JIT?)");
+ }
+#else
+ printf("no (USE_PCRE_JIT not set)");
+#endif
+ printf("\n");
+#else
+ printf("Built without PCRE support (using libc's regex instead)\n");
+#endif
+
+#ifdef USE_LUA
+ printf("Built with Lua version : %s\n", LUA_RELEASE);
+#else
+ printf("Built without Lua support\n");
+#endif
+
+#if defined(CONFIG_HAP_TRANSPARENT)
+ printf("Built with transparent proxy support using:"
+#if defined(IP_TRANSPARENT)
+ " IP_TRANSPARENT"
+#endif
+#if defined(IPV6_TRANSPARENT)
+ " IPV6_TRANSPARENT"
+#endif
+#if defined(IP_FREEBIND)
+ " IP_FREEBIND"
+#endif
+#if defined(IP_BINDANY)
+ " IP_BINDANY"
+#endif
+#if defined(IPV6_BINDANY)
+ " IPV6_BINDANY"
+#endif
+#if defined(SO_BINDANY)
+ " SO_BINDANY"
+#endif
+ "\n");
+#endif
+
+#if defined(CONFIG_HAP_NS)
+ printf("Built with network namespace support\n");
+#endif
+
+#ifdef USE_DEVICEATLAS
+ printf("Built with DeviceAtlas support\n");
+#endif
+#ifdef USE_51DEGREES
+ printf("Built with 51Degrees support\n");
+#endif
+ putchar('\n');
+
+ list_pollers(stdout);
+ putchar('\n');
+}
+
+/*
+ * This function prints the command line usage and exits
+ */
+void usage(char *name)
+{
+ display_version();
+ fprintf(stderr,
+ "Usage : %s [-f <cfgfile>]* [ -vdV"
+ "D ] [ -n <maxconn> ] [ -N <maxpconn> ]\n"
+ " [ -p <pidfile> ] [ -m <max megs> ] [ -C <dir> ] [-- <cfgfile>*]\n"
+ " -v displays version ; -vv shows known build options.\n"
+ " -d enters debug mode ; -db only disables background mode.\n"
+ " -dM[<byte>] poisons memory with <byte> (defaults to 0x50)\n"
+ " -V enters verbose mode (disables quiet mode)\n"
+ " -D goes daemon ; -C changes to <dir> before loading files.\n"
+ " -q quiet mode : don't display messages\n"
+ " -c check mode : only check config files and exit\n"
+ " -n sets the maximum total # of connections (%d)\n"
+ " -m limits the usable amount of memory (in MB)\n"
+ " -N sets the default, per-proxy maximum # of connections (%d)\n"
+ " -L set local peer name (default to hostname)\n"
+ " -p writes pids of all children to this file\n"
+#if defined(ENABLE_EPOLL)
+ " -de disables epoll() usage even when available\n"
+#endif
+#if defined(ENABLE_KQUEUE)
+ " -dk disables kqueue() usage even when available\n"
+#endif
+#if defined(ENABLE_POLL)
+ " -dp disables poll() usage even when available\n"
+#endif
+#if defined(CONFIG_HAP_LINUX_SPLICE)
+ " -dS disables splice usage (broken on old kernels)\n"
+#endif
+#if defined(USE_GETADDRINFO)
+ " -dG disables getaddrinfo() usage\n"
+#endif
+ " -dV disables SSL verify on servers side\n"
+ " -sf/-st [pid ]* finishes/terminates old pids.\n"
+ "\n",
+ name, DEFAULT_MAXCONN, cfg_maxpconn);
+ exit(1);
+}
+
+
+
+/*********************************************************************/
+/* more specific functions ***************************************/
+/*********************************************************************/
+
+/*
+ * upon SIGUSR1, let's have a soft stop. Note that soft_stop() broadcasts
+ * a signal zero to all subscribers. This means that it's as easy as
+ * subscribing to signal 0 to get informed about an imminent shutdown.
+ */
+void sig_soft_stop(struct sig_handler *sh)
+{
+ soft_stop();
+ signal_unregister_handler(sh);
+ pool_gc2();
+}
+
+/*
+ * upon SIGTTOU, we pause everything
+ */
+void sig_pause(struct sig_handler *sh)
+{
+ pause_proxies();
+ pool_gc2();
+}
+
+/*
+ * upon SIGTTIN, let's resume the proxies (the opposite of SIGTTOU).
+ */
+void sig_listen(struct sig_handler *sh)
+{
+ resume_proxies();
+}
+
+/*
+ * this function dumps every server's state when the process receives SIGHUP.
+ */
+void sig_dump_state(struct sig_handler *sh)
+{
+ struct proxy *p = proxy;
+
+ Warning("SIGHUP received, dumping servers states.\n");
+ while (p) {
+ struct server *s = p->srv;
+
+ send_log(p, LOG_NOTICE, "SIGHUP received, dumping servers states for proxy %s.\n", p->id);
+ while (s) {
+ chunk_printf(&trash,
+ "SIGHUP: Server %s/%s is %s. Conn: %d act, %d pend, %lld tot.",
+ p->id, s->id,
+ (s->state != SRV_ST_STOPPED) ? "UP" : "DOWN",
+ s->cur_sess, s->nbpend, s->counters.cum_sess);
+ Warning("%s\n", trash.str);
+ send_log(p, LOG_NOTICE, "%s\n", trash.str);
+ s = s->next;
+ }
+
+		/* FIXME: this information is a bit outdated. We should be able to distinguish between FE and BE. */
+ if (!p->srv) {
+ chunk_printf(&trash,
+ "SIGHUP: Proxy %s has no servers. Conn: act(FE+BE): %d+%d, %d pend (%d unass), tot(FE+BE): %lld+%lld.",
+ p->id,
+ p->feconn, p->beconn, p->totpend, p->nbpend, p->fe_counters.cum_conn, p->be_counters.cum_conn);
+ } else if (p->srv_act == 0) {
+ chunk_printf(&trash,
+ "SIGHUP: Proxy %s %s ! Conn: act(FE+BE): %d+%d, %d pend (%d unass), tot(FE+BE): %lld+%lld.",
+ p->id,
+ (p->srv_bck) ? "is running on backup servers" : "has no server available",
+ p->feconn, p->beconn, p->totpend, p->nbpend, p->fe_counters.cum_conn, p->be_counters.cum_conn);
+ } else {
+ chunk_printf(&trash,
+ "SIGHUP: Proxy %s has %d active servers and %d backup servers available."
+ " Conn: act(FE+BE): %d+%d, %d pend (%d unass), tot(FE+BE): %lld+%lld.",
+ p->id, p->srv_act, p->srv_bck,
+ p->feconn, p->beconn, p->totpend, p->nbpend, p->fe_counters.cum_conn, p->be_counters.cum_conn);
+ }
+ Warning("%s\n", trash.str);
+ send_log(p, LOG_NOTICE, "%s\n", trash.str);
+
+ p = p->next;
+ }
+}
+
+void dump(struct sig_handler *sh)
+{
+ /* dump memory usage then free everything possible */
+ dump_pools();
+ pool_gc2();
+}
+
+/*
+ * This function initializes all the necessary variables. It only returns
+ * if everything is OK. If something fails, it exits.
+ */
+void init(int argc, char **argv)
+{
+ int arg_mode = 0; /* MODE_DEBUG, ... */
+ char *tmp;
+ char *cfg_pidfile = NULL;
+ int err_code = 0;
+ struct wordlist *wl;
+ char *progname;
+ char *change_dir = NULL;
+ struct tm curtime;
+
+ chunk_init(&trash, malloc(global.tune.bufsize), global.tune.bufsize);
+ alloc_trash_buffers(global.tune.bufsize);
+
+ /* NB: POSIX does not make it mandatory for gethostname() to NULL-terminate
+ * the string in case of truncation, and at least FreeBSD appears not to do
+ * it.
+ */
+ memset(hostname, 0, sizeof(hostname));
+ gethostname(hostname, sizeof(hostname) - 1);
+ memset(localpeer, 0, sizeof(localpeer));
+ memcpy(localpeer, hostname, (sizeof(hostname) > sizeof(localpeer) ? sizeof(localpeer) : sizeof(hostname)) - 1);
+
+ /*
+ * Initialize the previously static variables.
+ */
+
+ totalconn = actconn = maxfd = listeners = stopping = 0;
+
+
+#ifdef HAPROXY_MEMMAX
+ global.rlimit_memmax_all = HAPROXY_MEMMAX;
+#endif
+
+ tv_update_date(-1,-1);
+ start_date = now;
+
+ srandom(now_ms - getpid());
+
+ /* Get the numeric timezone. */
+ get_localtime(start_date.tv_sec, &curtime);
+ strftime(localtimezone, 6, "%z", &curtime);
+
+ signal_init();
+ if (init_acl() != 0)
+ exit(1);
+ init_task();
+ init_stream();
+ init_session();
+ init_connection();
+ /* warning, we init buffers later */
+ init_pendconn();
+ init_proto_http();
+
+ /* Initialise lua. */
+ hlua_init();
+
+ global.tune.options |= GTUNE_USE_SELECT; /* select() is always available */
+#if defined(ENABLE_POLL)
+ global.tune.options |= GTUNE_USE_POLL;
+#endif
+#if defined(ENABLE_EPOLL)
+ global.tune.options |= GTUNE_USE_EPOLL;
+#endif
+#if defined(ENABLE_KQUEUE)
+ global.tune.options |= GTUNE_USE_KQUEUE;
+#endif
+#if defined(CONFIG_HAP_LINUX_SPLICE)
+ global.tune.options |= GTUNE_USE_SPLICE;
+#endif
+#if defined(USE_GETADDRINFO)
+ global.tune.options |= GTUNE_USE_GAI;
+#endif
+
+ pid = getpid();
+ progname = *argv;
+ while ((tmp = strchr(progname, '/')) != NULL)
+ progname = tmp + 1;
+
+ /* the process name is used for the logs only */
+ chunk_initstr(&global.log_tag, strdup(progname));
+
+ argc--; argv++;
+ while (argc > 0) {
+ char *flag;
+
+ if (**argv == '-') {
+ flag = *argv+1;
+
+ /* 1 arg */
+ if (*flag == 'v') {
+ display_version();
+ if (flag[1] == 'v') /* -vv */
+ display_build_opts();
+ exit(0);
+ }
+#if defined(ENABLE_EPOLL)
+ else if (*flag == 'd' && flag[1] == 'e')
+ global.tune.options &= ~GTUNE_USE_EPOLL;
+#endif
+#if defined(ENABLE_POLL)
+ else if (*flag == 'd' && flag[1] == 'p')
+ global.tune.options &= ~GTUNE_USE_POLL;
+#endif
+#if defined(ENABLE_KQUEUE)
+ else if (*flag == 'd' && flag[1] == 'k')
+ global.tune.options &= ~GTUNE_USE_KQUEUE;
+#endif
+#if defined(CONFIG_HAP_LINUX_SPLICE)
+ else if (*flag == 'd' && flag[1] == 'S')
+ global.tune.options &= ~GTUNE_USE_SPLICE;
+#endif
+#if defined(USE_GETADDRINFO)
+ else if (*flag == 'd' && flag[1] == 'G')
+ global.tune.options &= ~GTUNE_USE_GAI;
+#endif
+ else if (*flag == 'd' && flag[1] == 'V')
+ global.ssl_server_verify = SSL_SERVER_VERIFY_NONE;
+ else if (*flag == 'V')
+ arg_mode |= MODE_VERBOSE;
+ else if (*flag == 'd' && flag[1] == 'b')
+ arg_mode |= MODE_FOREGROUND;
+ else if (*flag == 'd' && flag[1] == 'M')
+ mem_poison_byte = flag[2] ? strtol(flag + 2, NULL, 0) : 'P';
+ else if (*flag == 'd')
+ arg_mode |= MODE_DEBUG;
+ else if (*flag == 'c')
+ arg_mode |= MODE_CHECK;
+ else if (*flag == 'D') {
+ arg_mode |= MODE_DAEMON;
+ if (flag[1] == 's') /* -Ds */
+ arg_mode |= MODE_SYSTEMD;
+ }
+ else if (*flag == 'q')
+ arg_mode |= MODE_QUIET;
+ else if (*flag == 's' && (flag[1] == 'f' || flag[1] == 't')) {
+ /* list of pids to finish ('f') or terminate ('t') */
+
+ if (flag[1] == 'f')
+ oldpids_sig = SIGUSR1; /* finish then exit */
+ else
+ oldpids_sig = SIGTERM; /* terminate immediately */
+
+ while (argc > 1 && argv[1][0] != '-') {
+ oldpids = realloc(oldpids, (nb_oldpids + 1) * sizeof(int));
+ if (!oldpids) {
+ Alert("Cannot allocate old pid : out of memory.\n");
+ exit(1);
+ }
+ argc--; argv++;
+ oldpids[nb_oldpids] = atol(*argv);
+ if (oldpids[nb_oldpids] <= 0)
+ usage(progname);
+ nb_oldpids++;
+ }
+ }
+ else if (flag[0] == '-' && flag[1] == 0) { /* "--" */
+ /* now that's a cfgfile list */
+ argv++; argc--;
+ while (argc > 0) {
+ wl = (struct wordlist *)calloc(1, sizeof(*wl));
+ if (!wl) {
+ Alert("Cannot load configuration file %s : out of memory.\n", *argv);
+ exit(1);
+ }
+ wl->s = *argv;
+ LIST_ADDQ(&cfg_cfgfiles, &wl->list);
+ argv++; argc--;
+ }
+ break;
+ }
+ else { /* >=2 args */
+ argv++; argc--;
+ if (argc == 0)
+ usage(progname);
+
+ switch (*flag) {
+ case 'C' : change_dir = *argv; break;
+ case 'n' : cfg_maxconn = atol(*argv); break;
+ case 'm' : global.rlimit_memmax_all = atol(*argv); break;
+ case 'N' : cfg_maxpconn = atol(*argv); break;
+ case 'L' : strncpy(localpeer, *argv, sizeof(localpeer) - 1); break;
+ case 'f' :
+ wl = (struct wordlist *)calloc(1, sizeof(*wl));
+ if (!wl) {
+ Alert("Cannot load configuration file %s : out of memory.\n", *argv);
+ exit(1);
+ }
+ wl->s = *argv;
+ LIST_ADDQ(&cfg_cfgfiles, &wl->list);
+ break;
+ case 'p' : cfg_pidfile = *argv; break;
+ default: usage(progname);
+ }
+ }
+ }
+ else
+ usage(progname);
+ argv++; argc--;
+ }
+
+ global.mode = MODE_STARTING | /* during startup, we want most of the alerts */
+ (arg_mode & (MODE_DAEMON | MODE_SYSTEMD | MODE_FOREGROUND | MODE_VERBOSE
+ | MODE_QUIET | MODE_CHECK | MODE_DEBUG));
+
+ if (LIST_ISEMPTY(&cfg_cfgfiles))
+ usage(progname);
+
+ if (change_dir && chdir(change_dir) < 0) {
+ Alert("Could not change to directory %s : %s\n", change_dir, strerror(errno));
+ exit(1);
+ }
+
+ global.maxsock = 10; /* reserve 10 fds ; will be incremented by socket eaters */
+
+ init_default_instance();
+
+ list_for_each_entry(wl, &cfg_cfgfiles, list) {
+ int ret;
+
+ ret = readcfgfile(wl->s);
+ if (ret == -1) {
+ Alert("Could not open configuration file %s : %s\n",
+ wl->s, strerror(errno));
+ exit(1);
+ }
+ if (ret & (ERR_ABORT|ERR_FATAL))
+ Alert("Error(s) found in configuration file : %s\n", wl->s);
+ err_code |= ret;
+ if (err_code & ERR_ABORT)
+ exit(1);
+ }
+
+ pattern_finalize_config();
+#if (defined SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB && TLS_TICKETS_NO > 0)
+ tlskeys_finalize_config();
+#endif
+
+ err_code |= check_config_validity();
+ if (err_code & (ERR_ABORT|ERR_FATAL)) {
+ Alert("Fatal errors found in configuration.\n");
+ exit(1);
+ }
+
+ /* recompute the amount of per-process memory depending on nbproc and
+ * the shared SSL cache size (allowed to exist in all processes).
+ */
+ if (global.rlimit_memmax_all) {
+#if defined (USE_OPENSSL) && !defined(USE_PRIVATE_CACHE)
+ int64_t ssl_cache_bytes = global.tune.sslcachesize * 200LL;
+
+ global.rlimit_memmax =
+ ((((int64_t)global.rlimit_memmax_all * 1048576LL) -
+ ssl_cache_bytes) / global.nbproc +
+ ssl_cache_bytes + 1048575LL) / 1048576LL;
+#else
+ global.rlimit_memmax = global.rlimit_memmax_all / global.nbproc;
+#endif
+ }
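+	/* Illustrative (hypothetical) worked example of the formula above,
+	 * not part of the original source: with "-m 256" (memmax_all =
+	 * 256 MB), nbproc = 4 and a shared SSL cache of about 4 MB
+	 * (tune.ssl.cachesize = 20000 entries * ~200 bytes each), each
+	 * process gets (256 - 4) / 4 + 4 = 67 MB, since the shared cache
+	 * is counted once in every process, with the result rounded up to
+	 * the next megabyte.
+	 */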
+
+#ifdef CONFIG_HAP_NS
+ err_code |= netns_init();
+ if (err_code & (ERR_ABORT|ERR_FATAL)) {
+ Alert("Failed to initialize namespace support.\n");
+ exit(1);
+ }
+#endif
+
+ if (global.mode & MODE_CHECK) {
+ struct peers *pr;
+ struct proxy *px;
+
+ for (pr = peers; pr; pr = pr->next)
+ if (pr->peers_fe)
+ break;
+
+ for (px = proxy; px; px = px->next)
+ if (px->state == PR_STNEW && !LIST_ISEMPTY(&px->conf.listeners))
+ break;
+
+ if (pr || px) {
+ /* At least one peer or one listener has been found */
+ qfprintf(stdout, "Configuration file is valid\n");
+ exit(0);
+ }
+ qfprintf(stdout, "Configuration file has no error but will not start (no listener) => exit(2).\n");
+ exit(2);
+ }
+
+ /* Apply server states */
+ apply_server_state();
+
+ global_listener_queue_task = task_new();
+ if (!global_listener_queue_task) {
+ Alert("Out of memory when initializing global task\n");
+ exit(1);
+ }
+ /* very simple initialization, users will queue the task if needed */
+ global_listener_queue_task->context = NULL; /* not even a context! */
+ global_listener_queue_task->process = manage_global_listener_queue;
+ global_listener_queue_task->expire = TICK_ETERNITY;
+
+ /* now we know the buffer size, we can initialize the channels and buffers */
+ init_buffer();
+#if defined(USE_DEVICEATLAS)
+ init_deviceatlas();
+#endif
+#ifdef USE_51DEGREES
+ init_51degrees();
+#endif
+
+ if (start_checks() < 0)
+ exit(1);
+
+ if (cfg_maxconn > 0)
+ global.maxconn = cfg_maxconn;
+
+ if (cfg_pidfile) {
+ free(global.pidfile);
+ global.pidfile = strdup(cfg_pidfile);
+ }
+
+ /* Now we want to compute the maxconn and possibly maxsslconn values.
+ * It's a bit tricky. If memmax is not set, maxconn defaults to
+ * DEFAULT_MAXCONN and maxsslconn defaults to DEFAULT_MAXSSLCONN.
+ *
+ * If memmax is set, then it depends on which values are set. If
+ * maxsslconn is set, we use memmax to determine how many cleartext
+ * connections may be added, and set maxconn to the sum of the two.
+ * If maxconn is set and not maxsslconn, maxsslconn is computed from
+ * the remaining amount of memory between memmax and the cleartext
+ * connections. If neither is set, then it is considered that all
+ * connections are SSL-capable, and maxconn is computed based on this,
+ * then maxsslconn accordingly. We need to know if SSL is used on the
+ * frontends, backends, or both, because when it's used on both sides,
+ * we need twice the value for maxsslconn, but we only count the
+ * handshake once since it is not performed on the two sides at the
+ * same time (frontend-side is terminated before backend-side begins).
+ * The SSL stack is supposed to have filled ssl_session_cost and
+ * ssl_handshake_cost during its initialization. In any case, if
+ * SYSTEM_MAXCONN is set, we still enforce it as an upper limit for
+ * maxconn in order to protect the system.
+ */
+ if (!global.rlimit_memmax) {
+ if (global.maxconn == 0) {
+ global.maxconn = DEFAULT_MAXCONN;
+ if (global.mode & (MODE_VERBOSE|MODE_DEBUG))
+ fprintf(stderr, "Note: setting global.maxconn to %d.\n", global.maxconn);
+ }
+ }
+#ifdef USE_OPENSSL
+ else if (!global.maxconn && !global.maxsslconn &&
+ (global.ssl_used_frontend || global.ssl_used_backend)) {
+ /* memmax is set, compute everything automatically. Here we want
+ * to ensure that all SSL connections will be served. We take
+ * care of the number of sides where SSL is used, and consider
+ * the worst case : SSL used on both sides and doing a handshake
+ * simultaneously. Note that we can't have more than maxconn
+ * handshakes at a time by definition, so for the worst case of
+ * two SSL conns per connection, we count a single handshake.
+ */
+ int sides = !!global.ssl_used_frontend + !!global.ssl_used_backend;
+ int64_t mem = global.rlimit_memmax * 1048576ULL;
+
+ mem -= global.tune.sslcachesize * 200; // about 200 bytes per SSL cache entry
+ mem -= global.maxzlibmem;
+ mem = mem * MEM_USABLE_RATIO;
+
+ global.maxconn = mem /
+ ((STREAM_MAX_COST + 2 * global.tune.bufsize) + // stream + 2 buffers per stream
+ sides * global.ssl_session_max_cost + // SSL buffers, one per side
+ global.ssl_handshake_max_cost); // 1 handshake per connection max
+
+ global.maxconn = round_2dig(global.maxconn);
+#ifdef SYSTEM_MAXCONN
+ if (global.maxconn > DEFAULT_MAXCONN)
+ global.maxconn = DEFAULT_MAXCONN;
+#endif /* SYSTEM_MAXCONN */
+ global.maxsslconn = sides * global.maxconn;
+ if (global.mode & (MODE_VERBOSE|MODE_DEBUG))
+ fprintf(stderr, "Note: setting global.maxconn to %d and global.maxsslconn to %d.\n",
+ global.maxconn, global.maxsslconn);
+ }
+ else if (!global.maxsslconn &&
+ (global.ssl_used_frontend || global.ssl_used_backend)) {
+ /* memmax and maxconn are known, compute maxsslconn automatically.
+ * maxconn being forced, we don't know how many of the SSL
+ * connections will land on each side if both sides are being used.
+ * The worst case is when all connections use only one SSL instance
+ * because handshakes may be on two sides at the same time.
+ */
+ int sides = !!global.ssl_used_frontend + !!global.ssl_used_backend;
+ int64_t mem = global.rlimit_memmax * 1048576ULL;
+ int64_t sslmem;
+
+ mem -= global.tune.sslcachesize * 200; // about 200 bytes per SSL cache entry
+ mem -= global.maxzlibmem;
+ mem = mem * MEM_USABLE_RATIO;
+
+ sslmem = mem - global.maxconn * (int64_t)(STREAM_MAX_COST + 2 * global.tune.bufsize);
+ global.maxsslconn = sslmem / (global.ssl_session_max_cost + global.ssl_handshake_max_cost);
+ global.maxsslconn = round_2dig(global.maxsslconn);
+
+ if (sslmem <= 0 || global.maxsslconn < sides) {
+ Alert("Cannot compute the automatic maxsslconn because global.maxconn is already too "
+ "high for the global.memmax value (%d MB). The absolute maximum possible value "
+ "without SSL is %d, but %d was found and SSL is in use.\n",
+ global.rlimit_memmax,
+ (int)(mem / (STREAM_MAX_COST + 2 * global.tune.bufsize)),
+ global.maxconn);
+ exit(1);
+ }
+
+ if (global.maxsslconn > sides * global.maxconn)
+ global.maxsslconn = sides * global.maxconn;
+
+ if (global.mode & (MODE_VERBOSE|MODE_DEBUG))
+ fprintf(stderr, "Note: setting global.maxsslconn to %d\n", global.maxsslconn);
+ }
+#endif
+ else if (!global.maxconn) {
+ /* memmax and maxsslconn are known/unused, compute maxconn automatically */
+ int sides = !!global.ssl_used_frontend + !!global.ssl_used_backend;
+ int64_t mem = global.rlimit_memmax * 1048576ULL;
+ int64_t clearmem;
+
+ if (global.ssl_used_frontend || global.ssl_used_backend)
+ mem -= global.tune.sslcachesize * 200; // about 200 bytes per SSL cache entry
+
+ mem -= global.maxzlibmem;
+ mem = mem * MEM_USABLE_RATIO;
+
+ clearmem = mem;
+ if (sides)
+ clearmem -= (global.ssl_session_max_cost + global.ssl_handshake_max_cost) * (int64_t)global.maxsslconn;
+
+ global.maxconn = clearmem / (STREAM_MAX_COST + 2 * global.tune.bufsize);
+ global.maxconn = round_2dig(global.maxconn);
+#ifdef SYSTEM_MAXCONN
+ if (global.maxconn > DEFAULT_MAXCONN)
+ global.maxconn = DEFAULT_MAXCONN;
+#endif /* SYSTEM_MAXCONN */
+
+ if (clearmem <= 0 || !global.maxconn) {
+ Alert("Cannot compute the automatic maxconn because global.maxsslconn is already too "
+ "high for the global.memmax value (%d MB). The absolute maximum possible value "
+ "is %d, but %d was found.\n",
+ global.rlimit_memmax,
+ (int)(mem / (global.ssl_session_max_cost + global.ssl_handshake_max_cost)),
+ global.maxsslconn);
+ exit(1);
+ }
+
+ if (global.mode & (MODE_VERBOSE|MODE_DEBUG)) {
+ if (sides && global.maxsslconn > sides * global.maxconn) {
+ fprintf(stderr, "Note: global.maxsslconn is forced to %d which causes global.maxconn "
+ "to be limited to %d. Better reduce global.maxsslconn to get more "
+ "room for extra connections.\n", global.maxsslconn, global.maxconn);
+ }
+ fprintf(stderr, "Note: setting global.maxconn to %d\n", global.maxconn);
+ }
+ }
+
+ if (!global.maxpipes) {
+ /* maxpipes not specified. Count how many frontends and backends
+ * may be using splicing, and bound that to maxconn.
+ */
+ struct proxy *cur;
+ int nbfe = 0, nbbe = 0;
+
+ for (cur = proxy; cur; cur = cur->next) {
+ if (cur->options2 & (PR_O2_SPLIC_ANY)) {
+ if (cur->cap & PR_CAP_FE)
+ nbfe += cur->maxconn;
+ if (cur->cap & PR_CAP_BE)
+ nbbe += cur->fullconn ? cur->fullconn : global.maxconn;
+ }
+ }
+ global.maxpipes = MAX(nbfe, nbbe);
+ if (global.maxpipes > global.maxconn)
+ global.maxpipes = global.maxconn;
+ global.maxpipes /= 4;
+ }
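+	/* Illustrative (hypothetical) worked example of the heuristic
+	 * above, not part of the original source: with splicing frontends
+	 * totalling maxconn 8000, splicing backends totalling fullconn
+	 * 12000 and global.maxconn = 10000, we take MAX(8000, 12000) =
+	 * 12000, clip it to global.maxconn (10000), then keep a quarter
+	 * of it, giving global.maxpipes = 2500.
+	 */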
+
+
+ global.hardmaxconn = global.maxconn; /* keep this max value */
+ global.maxsock += global.maxconn * 2; /* each connection needs two sockets */
+ global.maxsock += global.maxpipes * 2; /* each pipe needs two FDs */
+
+ if (global.stats_fe)
+ global.maxsock += global.stats_fe->maxconn;
+
+ if (peers) {
+ /* peers also need to bypass global maxconn */
+ struct peers *p = peers;
+
+ for (p = peers; p; p = p->next)
+ if (p->peers_fe)
+ global.maxsock += p->peers_fe->maxconn;
+ }
+
+ if (global.tune.maxpollevents <= 0)
+ global.tune.maxpollevents = MAX_POLL_EVENTS;
+
+ if (global.tune.recv_enough == 0)
+ global.tune.recv_enough = MIN_RECV_AT_ONCE_ENOUGH;
+
+ if (global.tune.maxrewrite < 0)
+ global.tune.maxrewrite = MAXREWRITE;
+
+ if (global.tune.maxrewrite >= global.tune.bufsize / 2)
+ global.tune.maxrewrite = global.tune.bufsize / 2;
+
+ if (arg_mode & (MODE_DEBUG | MODE_FOREGROUND)) {
+ /* command-line debug mode overrides the modes set in the configuration */
+ global.mode &= ~(MODE_DAEMON | MODE_SYSTEMD | MODE_QUIET);
+ global.mode |= (arg_mode & (MODE_DEBUG | MODE_FOREGROUND));
+ }
+
+ if (arg_mode & (MODE_DAEMON | MODE_SYSTEMD)) {
+ /* command-line daemon mode inhibits the foreground and debug modes */
+ global.mode &= ~(MODE_DEBUG | MODE_FOREGROUND);
+ global.mode |= (arg_mode & (MODE_DAEMON | MODE_SYSTEMD));
+ }
+
+ global.mode |= (arg_mode & (MODE_QUIET | MODE_VERBOSE));
+
+ if ((global.mode & MODE_DEBUG) && (global.mode & (MODE_DAEMON | MODE_SYSTEMD | MODE_QUIET))) {
+ Warning("<debug> mode incompatible with <quiet>, <daemon> and <systemd>. Keeping <debug> only.\n");
+ global.mode &= ~(MODE_DAEMON | MODE_SYSTEMD | MODE_QUIET);
+ }
+
+ if ((global.nbproc > 1) && !(global.mode & (MODE_DAEMON | MODE_SYSTEMD))) {
+ if (!(global.mode & (MODE_FOREGROUND | MODE_DEBUG)))
+ Warning("<nbproc> is only meaningful in daemon mode. Setting limit to 1 process.\n");
+ global.nbproc = 1;
+ }
+
+ if (global.nbproc < 1)
+ global.nbproc = 1;
+
+ swap_buffer = (char *)calloc(1, global.tune.bufsize);
+ get_http_auth_buff = (char *)calloc(1, global.tune.bufsize);
+ static_table_key = calloc(1, sizeof(*static_table_key));
+
+ fdinfo = (struct fdinfo *)calloc(1,
+ sizeof(struct fdinfo) * (global.maxsock));
+ fdtab = (struct fdtab *)calloc(1,
+ sizeof(struct fdtab) * (global.maxsock));
+ /*
+ * Note: we could register external pollers here.
+ * Built-in pollers have been registered before main().
+ */
+
+ if (!(global.tune.options & GTUNE_USE_KQUEUE))
+ disable_poller("kqueue");
+
+ if (!(global.tune.options & GTUNE_USE_EPOLL))
+ disable_poller("epoll");
+
+ if (!(global.tune.options & GTUNE_USE_POLL))
+ disable_poller("poll");
+
+ if (!(global.tune.options & GTUNE_USE_SELECT))
+ disable_poller("select");
+
+ /* Note: we could disable any poller by name here */
+
+ if (global.mode & (MODE_VERBOSE|MODE_DEBUG))
+ list_pollers(stderr);
+
+ if (!init_pollers()) {
+ Alert("No polling mechanism available.\n"
+ " It is likely that haproxy was built with TARGET=generic and that FD_SETSIZE\n"
+ " is too low on this platform to support maxconn and the number of listeners\n"
+ " and servers. You should rebuild haproxy specifying your system using TARGET=\n"
+ " in order to support other polling systems (poll, epoll, kqueue) or reduce the\n"
+ " global maxconn setting to accommodate the system's limitation. For reference,\n"
+ " FD_SETSIZE=%d on this system, global.maxconn=%d resulting in a maximum of\n"
+ " %d file descriptors. You should thus reduce global.maxconn by %d. Also,\n"
+ " check build settings using 'haproxy -vv'.\n\n",
+ FD_SETSIZE, global.maxconn, global.maxsock, (global.maxsock + 1 - FD_SETSIZE) / 2);
+ exit(1);
+ }
+ if (global.mode & (MODE_VERBOSE|MODE_DEBUG)) {
+ printf("Using %s() as the polling mechanism.\n", cur_poller.name);
+ }
+
+ if (!global.node)
+ global.node = strdup(hostname);
+
+ if (!hlua_post_init())
+ exit(1);
+
+ /* initialize structures for name resolution */
+ if (!dns_init_resolvers())
+ exit(1);
+}
+
+static void deinit_acl_cond(struct acl_cond *cond)
+{
+ struct acl_term_suite *suite, *suiteb;
+ struct acl_term *term, *termb;
+
+ if (!cond)
+ return;
+
+ list_for_each_entry_safe(suite, suiteb, &cond->suites, list) {
+ list_for_each_entry_safe(term, termb, &suite->terms, list) {
+ LIST_DEL(&term->list);
+ free(term);
+ }
+ LIST_DEL(&suite->list);
+ free(suite);
+ }
+
+ free(cond);
+}
+
+static void deinit_tcp_rules(struct list *rules)
+{
+ struct act_rule *trule, *truleb;
+
+ list_for_each_entry_safe(trule, truleb, rules, list) {
+ LIST_DEL(&trule->list);
+ deinit_acl_cond(trule->cond);
+ free(trule);
+ }
+}
+
+static void deinit_sample_arg(struct arg *p)
+{
+ struct arg *p_back = p;
+
+ if (!p)
+ return;
+
+ while (p->type != ARGT_STOP) {
+ if (p->type == ARGT_STR || p->unresolved) {
+ free(p->data.str.str);
+ p->data.str.str = NULL;
+ p->unresolved = 0;
+ }
+ else if (p->type == ARGT_REG) {
+ if (p->data.reg) {
+ regex_free(p->data.reg);
+ free(p->data.reg);
+ p->data.reg = NULL;
+ }
+ }
+ p++;
+ }
+
+ if (p_back != empty_arg_list)
+ free(p_back);
+}
+
+static void deinit_stick_rules(struct list *rules)
+{
+ struct sticking_rule *rule, *ruleb;
+
+ list_for_each_entry_safe(rule, ruleb, rules, list) {
+ LIST_DEL(&rule->list);
+ deinit_acl_cond(rule->cond);
+ if (rule->expr) {
+ struct sample_conv_expr *conv_expr, *conv_exprb;
+ list_for_each_entry_safe(conv_expr, conv_exprb, &rule->expr->conv_exprs, list)
+ deinit_sample_arg(conv_expr->arg_p);
+ deinit_sample_arg(rule->expr->arg_p);
+ free(rule->expr);
+ }
+ free(rule);
+ }
+}
+
+void deinit(void)
+{
+ struct proxy *p = proxy, *p0;
+ struct cap_hdr *h,*h_next;
+ struct server *s,*s_next;
+ struct listener *l,*l_next;
+ struct acl_cond *cond, *condb;
+ struct hdr_exp *exp, *expb;
+ struct acl *acl, *aclb;
+ struct switching_rule *rule, *ruleb;
+ struct server_rule *srule, *sruleb;
+ struct redirect_rule *rdr, *rdrb;
+ struct wordlist *wl, *wlb;
+ struct cond_wordlist *cwl, *cwlb;
+ struct uri_auth *uap, *ua = NULL;
+ struct logsrv *log, *logb;
+ struct logformat_node *lf, *lfb;
+ struct bind_conf *bind_conf, *bind_back;
+ int i;
+
+ deinit_signals();
+ while (p) {
+ free(p->conf.file);
+ free(p->id);
+ free(p->check_req);
+ free(p->cookie_name);
+ free(p->cookie_domain);
+ free(p->url_param_name);
+ free(p->capture_name);
+ free(p->monitor_uri);
+ free(p->rdp_cookie_name);
+ if (p->conf.logformat_string != default_http_log_format &&
+ p->conf.logformat_string != default_tcp_log_format &&
+ p->conf.logformat_string != clf_http_log_format)
+ free(p->conf.logformat_string);
+
+ free(p->conf.lfs_file);
+ free(p->conf.uniqueid_format_string);
+ free(p->conf.uif_file);
+ free(p->lbprm.map.srv);
+
+ if (p->conf.logformat_sd_string != default_rfc5424_sd_log_format)
+ free(p->conf.logformat_sd_string);
+ free(p->conf.lfsd_file);
+
+ for (i = 0; i < HTTP_ERR_SIZE; i++)
+ chunk_destroy(&p->errmsg[i]);
+
+ list_for_each_entry_safe(cwl, cwlb, &p->req_add, list) {
+ LIST_DEL(&cwl->list);
+ free(cwl->s);
+ free(cwl);
+ }
+
+ list_for_each_entry_safe(cwl, cwlb, &p->rsp_add, list) {
+ LIST_DEL(&cwl->list);
+ free(cwl->s);
+ free(cwl);
+ }
+
+ list_for_each_entry_safe(cond, condb, &p->mon_fail_cond, list) {
+ LIST_DEL(&cond->list);
+ prune_acl_cond(cond);
+ free(cond);
+ }
+
+ for (exp = p->req_exp; exp != NULL; ) {
+ if (exp->preg) {
+ regex_free(exp->preg);
+ free(exp->preg);
+ }
+
+ free((char *)exp->replace);
+ expb = exp;
+ exp = exp->next;
+ free(expb);
+ }
+
+ for (exp = p->rsp_exp; exp != NULL; ) {
+ if (exp->preg) {
+ regex_free(exp->preg);
+ free(exp->preg);
+ }
+
+ free((char *)exp->replace);
+ expb = exp;
+ exp = exp->next;
+ free(expb);
+ }
+
+ /* build a list of unique uri_auths */
+ if (!ua)
+ ua = p->uri_auth;
+ else {
+ /* check if p->uri_auth is unique */
+ for (uap = ua; uap; uap=uap->next)
+ if (uap == p->uri_auth)
+ break;
+
+ if (!uap && p->uri_auth) {
+ /* it is unique, add it to the list */
+ p->uri_auth->next = ua;
+ ua = p->uri_auth;
+ }
+ }
+
+ list_for_each_entry_safe(acl, aclb, &p->acl, list) {
+ LIST_DEL(&acl->list);
+ prune_acl(acl);
+ free(acl);
+ }
+
+ list_for_each_entry_safe(srule, sruleb, &p->server_rules, list) {
+ LIST_DEL(&srule->list);
+ prune_acl_cond(srule->cond);
+ free(srule->cond);
+ free(srule);
+ }
+
+ list_for_each_entry_safe(rule, ruleb, &p->switching_rules, list) {
+ LIST_DEL(&rule->list);
+ if (rule->cond) {
+ prune_acl_cond(rule->cond);
+ free(rule->cond);
+ }
+ free(rule);
+ }
+
+ list_for_each_entry_safe(rdr, rdrb, &p->redirect_rules, list) {
+ LIST_DEL(&rdr->list);
+ if (rdr->cond) {
+ prune_acl_cond(rdr->cond);
+ free(rdr->cond);
+ }
+ free(rdr->rdr_str);
+ list_for_each_entry_safe(lf, lfb, &rdr->rdr_fmt, list) {
+ LIST_DEL(&lf->list);
+ free(lf);
+ }
+ free(rdr);
+ }
+
+ list_for_each_entry_safe(log, logb, &p->logsrvs, list) {
+ LIST_DEL(&log->list);
+ free(log);
+ }
+
+ list_for_each_entry_safe(lf, lfb, &p->logformat, list) {
+ LIST_DEL(&lf->list);
+ free(lf);
+ }
+
+ list_for_each_entry_safe(lf, lfb, &p->logformat_sd, list) {
+ LIST_DEL(&lf->list);
+ free(lf);
+ }
+
+ deinit_tcp_rules(&p->tcp_req.inspect_rules);
+ deinit_tcp_rules(&p->tcp_req.l4_rules);
+
+ deinit_stick_rules(&p->storersp_rules);
+ deinit_stick_rules(&p->sticking_rules);
+
+ h = p->req_cap;
+ while (h) {
+ h_next = h->next;
+ free(h->name);
+ pool_destroy2(h->pool);
+ free(h);
+ h = h_next;
+ }/* end while(h) */
+
+ h = p->rsp_cap;
+ while (h) {
+ h_next = h->next;
+ free(h->name);
+ pool_destroy2(h->pool);
+ free(h);
+ h = h_next;
+ }/* end while(h) */
+
+ s = p->srv;
+ while (s) {
+ s_next = s->next;
+
+ if (s->check.task) {
+ task_delete(s->check.task);
+ task_free(s->check.task);
+ }
+ if (s->agent.task) {
+ task_delete(s->agent.task);
+ task_free(s->agent.task);
+ }
+
+ if (s->warmup) {
+ task_delete(s->warmup);
+ task_free(s->warmup);
+ }
+
+ free(s->id);
+ free(s->cookie);
+ free(s->check.bi);
+ free(s->check.bo);
+ free(s->agent.bi);
+ free(s->agent.bo);
+ free((char*)s->conf.file);
+#ifdef USE_OPENSSL
+ if (s->use_ssl || s->check.use_ssl)
+ ssl_sock_free_srv_ctx(s);
+#endif
+ free(s);
+ s = s_next;
+ }/* end while(s) */
+
+ list_for_each_entry_safe(l, l_next, &p->conf.listeners, by_fe) {
+ unbind_listener(l);
+ delete_listener(l);
+ LIST_DEL(&l->by_fe);
+ LIST_DEL(&l->by_bind);
+ free(l->name);
+ free(l->counters);
+ free(l);
+ }
+
+ /* Release unused SSL configs. */
+ list_for_each_entry_safe(bind_conf, bind_back, &p->conf.bind, by_fe) {
+#ifdef USE_OPENSSL
+ ssl_sock_free_ca(bind_conf);
+ ssl_sock_free_all_ctx(bind_conf);
+ free(bind_conf->ca_file);
+ free(bind_conf->ca_sign_file);
+ free(bind_conf->ca_sign_pass);
+ free(bind_conf->ciphers);
+ free(bind_conf->ecdhe);
+ free(bind_conf->crl_file);
+#endif /* USE_OPENSSL */
+ free(bind_conf->file);
+ free(bind_conf->arg);
+ LIST_DEL(&bind_conf->by_fe);
+ free(bind_conf);
+ }
+
+ free(p->desc);
+ free(p->fwdfor_hdr_name);
+
+ free_http_req_rules(&p->http_req_rules);
+ free_http_res_rules(&p->http_res_rules);
+ free(p->task);
+
+ pool_destroy2(p->req_cap_pool);
+ pool_destroy2(p->rsp_cap_pool);
+ pool_destroy2(p->table.pool);
+
+ p0 = p;
+ p = p->next;
+ free(p0);
+ }/* end while(p) */
+
+ while (ua) {
+ uap = ua;
+ ua = ua->next;
+
+ free(uap->uri_prefix);
+ free(uap->auth_realm);
+ free(uap->node);
+ free(uap->desc);
+
+ userlist_free(uap->userlist);
+ free_http_req_rules(&uap->http_req_rules);
+
+ free(uap);
+ }
+
+ userlist_free(userlist);
+
+ cfg_unregister_sections();
+
+ free_trash_buffers();
+ chunk_destroy(&trash);
+
+ protocol_unbind_all();
+
+#if defined(USE_DEVICEATLAS)
+ deinit_deviceatlas();
+#endif
+
+#ifdef USE_51DEGREES
+ deinit_51degrees();
+#endif
+
+ free(global.log_send_hostname); global.log_send_hostname = NULL;
+ chunk_destroy(&global.log_tag);
+ free(global.chroot); global.chroot = NULL;
+ free(global.pidfile); global.pidfile = NULL;
+ free(global.node); global.node = NULL;
+ free(global.desc); global.desc = NULL;
+ free(fdinfo); fdinfo = NULL;
+ free(fdtab); fdtab = NULL;
+ free(oldpids); oldpids = NULL;
+ free(static_table_key); static_table_key = NULL;
+ free(get_http_auth_buff); get_http_auth_buff = NULL;
+ free(swap_buffer); swap_buffer = NULL;
+ free(global_listener_queue_task); global_listener_queue_task = NULL;
+
+ list_for_each_entry_safe(log, logb, &global.logsrvs, list) {
+ LIST_DEL(&log->list);
+ free(log);
+ }
+ list_for_each_entry_safe(wl, wlb, &cfg_cfgfiles, list) {
+ LIST_DEL(&wl->list);
+ free(wl);
+ }
+
+ pool_destroy2(pool2_stream);
+ pool_destroy2(pool2_session);
+ pool_destroy2(pool2_connection);
+ pool_destroy2(pool2_buffer);
+ pool_destroy2(pool2_requri);
+ pool_destroy2(pool2_task);
+ pool_destroy2(pool2_capture);
+ pool_destroy2(pool2_pendconn);
+ pool_destroy2(pool2_sig_handlers);
+ pool_destroy2(pool2_hdr_idx);
+ pool_destroy2(pool2_http_txn);
+
+ deinit_pollers();
+} /* end deinit() */
+
+/* sends the signal <sig> to all pids found in <oldpids>. Returns the number of
+ * pids the signal was correctly delivered to.
+ */
+static int tell_old_pids(int sig)
+{
+ int p;
+ int ret = 0;
+ for (p = 0; p < nb_oldpids; p++)
+ if (kill(oldpids[p], sig) == 0)
+ ret++;
+ return ret;
+}
+
+/* Runs the polling loop */
+void run_poll_loop()
+{
+ int next;
+
+ tv_update_date(0,1);
+ while (1) {
+ /* Process a few tasks */
+ process_runnable_tasks();
+
+ /* check if we caught some signals and process them */
+ signal_process_queue();
+
+ /* Check if we can expire some tasks */
+ next = wake_expired_tasks();
+
+ /* stop when there's nothing left to do */
+ if (jobs == 0)
+ break;
+
+ /* expire immediately if events are pending */
+ if (fd_cache_num || run_queue || signal_queue_len || !LIST_ISEMPTY(&applet_active_queue))
+ next = now_ms;
+
+ /* The poller will ensure it returns around <next> */
+ cur_poller.poll(&cur_poller, next);
+ fd_process_cached_events();
+ applet_run_active();
+ }
+}
+
+/* This is the global management task for listeners. It enables listeners waiting
+ * for global resources when there are enough free resources, or at least once in
+ * a while. It is designed to be called as a task.
+ */
+static struct task *manage_global_listener_queue(struct task *t)
+{
+ int next = TICK_ETERNITY;
+ /* queue is empty, nothing to do */
+ if (LIST_ISEMPTY(&global_listener_queue))
+ goto out;
+
+ /* If there are still too many concurrent connections, let's wait for
+ * some of them to go away. We don't need to re-arm the timer because
+ * each of them will scan the queue anyway.
+ */
+ if (unlikely(actconn >= global.maxconn))
+ goto out;
+
+ /* We should periodically try to enable listeners waiting for a global
+ * resource here, because it is possible, though very unlikely, that
+ * they have been blocked by a temporary lack of global resource such
+ * as a file descriptor or memory and that the temporary condition has
+ * disappeared.
+ */
+ dequeue_all_listeners(&global_listener_queue);
+
+ out:
+ t->expire = next;
+ task_queue(t);
+ return t;
+}
+
+int main(int argc, char **argv)
+{
+ int err, retry;
+ struct rlimit limit;
+ char errmsg[100];
+ int pidfd = -1;
+
+ init(argc, argv);
+ signal_register_fct(SIGQUIT, dump, SIGQUIT);
+ signal_register_fct(SIGUSR1, sig_soft_stop, SIGUSR1);
+ signal_register_fct(SIGHUP, sig_dump_state, SIGHUP);
+
+ /* Always catch SIGPIPE even on platforms which define MSG_NOSIGNAL.
+ * Some recent FreeBSD setups report broken pipes, and MSG_NOSIGNAL
+ * was defined there, so let's stay on the safe side.
+ */
+ signal_register_fct(SIGPIPE, NULL, 0);
+
+ /* ulimits */
+ if (!global.rlimit_nofile)
+ global.rlimit_nofile = global.maxsock;
+
+ if (global.rlimit_nofile) {
+ limit.rlim_cur = limit.rlim_max = global.rlimit_nofile;
+ if (setrlimit(RLIMIT_NOFILE, &limit) == -1) {
+ Warning("[%s.main()] Cannot raise FD limit to %d.\n", argv[0], global.rlimit_nofile);
+ }
+ }
+
+ if (global.rlimit_memmax) {
+ limit.rlim_cur = limit.rlim_max =
+ global.rlimit_memmax * 1048576ULL;
+#ifdef RLIMIT_AS
+ if (setrlimit(RLIMIT_AS, &limit) == -1) {
+ Warning("[%s.main()] Cannot fix MEM limit to %d megs.\n",
+ argv[0], global.rlimit_memmax);
+ }
+#else
+ if (setrlimit(RLIMIT_DATA, &limit) == -1) {
+ Warning("[%s.main()] Cannot fix MEM limit to %d megs.\n",
+ argv[0], global.rlimit_memmax);
+ }
+#endif
+ }
+
+ /* We will loop at most 100 times with 10 ms delay each time.
+ * That's at most 1 second. We only send a signal to old pids
+ * if we cannot grab at least one port.
+ */
+ retry = MAX_START_RETRIES;
+ err = ERR_NONE;
+ while (retry >= 0) {
+ struct timeval w;
+ err = start_proxies(retry == 0 || nb_oldpids == 0);
+ /* exit the loop on no error or fatal error */
+ if ((err & (ERR_RETRYABLE|ERR_FATAL)) != ERR_RETRYABLE)
+ break;
+ if (nb_oldpids == 0 || retry == 0)
+ break;
+
+ /* FIXME-20060514: Solaris and OpenBSD do not support shutdown() on
+ * listening sockets. So on those platforms, it would be wiser to
+ * simply send SIGUSR1, which cannot be undone.
+ */
+ if (tell_old_pids(SIGTTOU) == 0) {
+ /* no need to wait if we can't contact old pids */
+ retry = 0;
+ continue;
+ }
+ /* give some time to old processes to stop listening */
+ w.tv_sec = 0;
+ w.tv_usec = 10*1000;
+ select(0, NULL, NULL, NULL, &w);
+ retry--;
+ }
+
+ /* Note: start_proxies() sends an alert when it fails. */
+ if ((err & ~ERR_WARN) != ERR_NONE) {
+ if (retry != MAX_START_RETRIES && nb_oldpids) {
+ protocol_unbind_all(); /* cleanup everything we can */
+ tell_old_pids(SIGTTIN);
+ }
+ exit(1);
+ }
+
+ if (listeners == 0) {
+ Alert("[%s.main()] No enabled listener found (check for 'bind' directives) ! Exiting.\n", argv[0]);
+ /* Note: we don't have to send anything to the old pids because we
+ * never stopped them. */
+ exit(1);
+ }
+
+ err = protocol_bind_all(errmsg, sizeof(errmsg));
+ if ((err & ~ERR_WARN) != ERR_NONE) {
+ if ((err & ERR_ALERT) || (err & ERR_WARN))
+ Alert("[%s.main()] %s.\n", argv[0], errmsg);
+
+ Alert("[%s.main()] Some protocols failed to start their listeners! Exiting.\n", argv[0]);
+ protocol_unbind_all(); /* cleanup everything we can */
+ if (nb_oldpids)
+ tell_old_pids(SIGTTIN);
+ exit(1);
+ } else if (err & ERR_WARN) {
+ Alert("[%s.main()] %s.\n", argv[0], errmsg);
+ }
+
+ /* prepare pause/play signals */
+ signal_register_fct(SIGTTOU, sig_pause, SIGTTOU);
+ signal_register_fct(SIGTTIN, sig_listen, SIGTTIN);
+
+ /* MODE_QUIET can inhibit alerts and warnings below this line */
+
+ global.mode &= ~MODE_STARTING;
+ if ((global.mode & MODE_QUIET) && !(global.mode & MODE_VERBOSE)) {
+ /* detach from the tty */
+ fclose(stdin); fclose(stdout); fclose(stderr);
+ }
+
+ /* open log & pid files before the chroot */
+ if (global.mode & (MODE_DAEMON | MODE_SYSTEMD) && global.pidfile != NULL) {
+ unlink(global.pidfile);
+ pidfd = open(global.pidfile, O_CREAT | O_WRONLY | O_TRUNC, 0644);
+ if (pidfd < 0) {
+ Alert("[%s.main()] Cannot create pidfile %s\n", argv[0], global.pidfile);
+ if (nb_oldpids)
+ tell_old_pids(SIGTTIN);
+ protocol_unbind_all();
+ exit(1);
+ }
+ }
+
+ if ((global.last_checks & LSTCHK_NETADM) && global.uid) {
+ Alert("[%s.main()] Some configuration options require full privileges, so global.uid cannot be changed.\n"
+ "", argv[0]);
+ protocol_unbind_all();
+ exit(1);
+ }
+
+ /* If the user is not root, we'll still let him try the configuration
+ * but we inform him that unexpected behaviour may occur.
+ */
+ if ((global.last_checks & LSTCHK_NETADM) && getuid())
+ Warning("[%s.main()] Some options which require full privileges"
+ " might not work well.\n"
+ "", argv[0]);
+
+ /* chroot if needed */
+ if (global.chroot != NULL) {
+ if (chroot(global.chroot) == -1 || chdir("/") == -1) {
+ Alert("[%s.main()] Cannot chroot(%s).\n", argv[0], global.chroot);
+ if (nb_oldpids)
+ tell_old_pids(SIGTTIN);
+ protocol_unbind_all();
+ exit(1);
+ }
+ }
+
+ if (nb_oldpids)
+ nb_oldpids = tell_old_pids(oldpids_sig);
+
+ /* Note that any error at this stage will be fatal because we will not
+ * be able to restart the old pids.
+ */
+
+ /* setgid / setuid */
+ if (global.gid) {
+ if (getgroups(0, NULL) > 0 && setgroups(0, NULL) == -1)
+ Warning("[%s.main()] Failed to drop supplementary groups. Using 'gid'/'group'"
+ " without 'uid'/'user' is generally useless.\n", argv[0]);
+
+ if (setgid(global.gid) == -1) {
+ Alert("[%s.main()] Cannot set gid %d.\n", argv[0], global.gid);
+ protocol_unbind_all();
+ exit(1);
+ }
+ }
+
+ if (global.uid && setuid(global.uid) == -1) {
+ Alert("[%s.main()] Cannot set uid %d.\n", argv[0], global.uid);
+ protocol_unbind_all();
+ exit(1);
+ }
+
+ /* check ulimits */
+ limit.rlim_cur = limit.rlim_max = 0;
+ getrlimit(RLIMIT_NOFILE, &limit);
+ if (limit.rlim_cur < global.maxsock) {
+ Warning("[%s.main()] FD limit (%d) too low for maxconn=%d/maxsock=%d. Please raise 'ulimit-n' to %d or more to avoid any trouble.\n",
+ argv[0], (int)limit.rlim_cur, global.maxconn, global.maxsock, global.maxsock);
+ }
+
+ if (global.mode & (MODE_DAEMON | MODE_SYSTEMD)) {
+ struct proxy *px;
+ struct peers *curpeers;
+ int ret = 0;
+ int *children = calloc(global.nbproc, sizeof(int));
+ int proc;
+
+ /* the father launches the required number of processes */
+ for (proc = 0; proc < global.nbproc; proc++) {
+ ret = fork();
+ if (ret < 0) {
+ Alert("[%s.main()] Cannot fork.\n", argv[0]);
+ protocol_unbind_all();
+ exit(1); /* there has been an error */
+ }
+ else if (ret == 0) /* child breaks here */
+ break;
+ children[proc] = ret;
+ if (pidfd >= 0) {
+ char pidstr[100];
+ snprintf(pidstr, sizeof(pidstr), "%d\n", ret);
+ shut_your_big_mouth_gcc(write(pidfd, pidstr, strlen(pidstr)));
+ }
+ relative_pid++; /* each child will get a different one */
+ }
+
+#ifdef USE_CPU_AFFINITY
+ if (proc < global.nbproc && /* child */
+ proc < LONGBITS && /* only the first 32/64 processes may be pinned */
+ global.cpu_map[proc]) /* only do this if the process has a CPU map */
+#ifdef __FreeBSD__
+ cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_PID, -1, sizeof(unsigned long), (void *)&global.cpu_map[proc]);
+#else
+ sched_setaffinity(0, sizeof(unsigned long), (void *)&global.cpu_map[proc]);
+#endif
+#endif
+ /* close the pidfile both in children and father */
+ if (pidfd >= 0) {
+ //lseek(pidfd, 0, SEEK_SET); /* debug: emulate eglibc bug */
+ close(pidfd);
+ }
+
+ /* We won't ever use this anymore */
+ free(oldpids); oldpids = NULL;
+ free(global.chroot); global.chroot = NULL;
+ free(global.pidfile); global.pidfile = NULL;
+
+ if (proc == global.nbproc) {
+ if (global.mode & MODE_SYSTEMD) {
+ protocol_unbind_all();
+ for (proc = 0; proc < global.nbproc; proc++)
+ while (waitpid(children[proc], NULL, 0) == -1 && errno == EINTR);
+ }
+ exit(0); /* parent must leave */
+ }
+
+ /* we might have to unbind some proxies from some processes */
+ px = proxy;
+ while (px != NULL) {
+ if (px->bind_proc && px->state != PR_STSTOPPED) {
+ if (!(px->bind_proc & (1UL << proc)))
+ stop_proxy(px);
+ }
+ px = px->next;
+ }
+
+ /* we might have to unbind some peers sections from some processes */
+ for (curpeers = peers; curpeers; curpeers = curpeers->next) {
+ if (!curpeers->peers_fe)
+ continue;
+
+ if (curpeers->peers_fe->bind_proc & (1UL << proc))
+ continue;
+
+ stop_proxy(curpeers->peers_fe);
+ /* disable this peer section so that it kills itself */
+ signal_unregister_handler(curpeers->sighandler);
+ task_delete(curpeers->sync_task);
+ task_free(curpeers->sync_task);
+ curpeers->sync_task = NULL;
+ task_free(curpeers->peers_fe->task);
+ curpeers->peers_fe->task = NULL;
+ curpeers->peers_fe = NULL;
+ }
+
+ free(children);
+ children = NULL;
+ /* if we're NOT in QUIET mode, we should now close the first 3 FDs to ensure
+ * that we can detach from the TTY. We MUST NOT do it in other cases since
+ * it would already have been done, and FDs 0-2 would have been assigned to
+ * listening sockets
+ */
+ if (!(global.mode & MODE_QUIET) || (global.mode & MODE_VERBOSE)) {
+ /* detach from the tty */
+ fclose(stdin); fclose(stdout); fclose(stderr);
+ global.mode &= ~MODE_VERBOSE;
+ global.mode |= MODE_QUIET; /* ensure that we won't say anything from now */
+ }
+ pid = getpid(); /* update child's pid */
+ setsid();
+ fork_poller();
+ }
+
+ protocol_enable_all();
+ /*
+ * That's it: the central polling loop. Run until we stop.
+ */
+ run_poll_loop();
+
+ /* Do some cleanup */
+ deinit();
+
+ exit(0);
+}
+
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Hash function implementation
+ *
+ * See mailing list thread on "Consistent hashing alternative to sdbm"
+ * http://marc.info/?l=haproxy&m=138213693909219
+ *
+ * Copyright 2000-2010 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+
+#include <common/hash.h>
+
+
+unsigned int hash_wt6(const char *key, int len)
+{
+ unsigned h0 = 0xa53c965aUL;
+ unsigned h1 = 0x5ca6953aUL;
+ unsigned step0 = 6;
+ unsigned step1 = 18;
+
+ for (; len > 0; len--) {
+ unsigned int t;
+
+ t = ((unsigned int)*key);
+ key++;
+
+ h0 = ~(h0 ^ t);
+ h1 = ~(h1 + t);
+
+ t = (h1 << step0) | (h1 >> (32-step0));
+ h1 = (h0 << step1) | (h0 >> (32-step1));
+ h0 = t;
+
+ t = ((h0 >> 16) ^ h1) & 0xffff;
+ step0 = t & 0x1F;
+ step1 = t >> 11;
+ }
+ return h0 ^ h1;
+}
+
+unsigned int hash_djb2(const char *key, int len)
+{
+ unsigned int hash = 5381;
+
+ /* the hash unrolled eight times */
+ for (; len >= 8; len -= 8) {
+ hash = ((hash << 5) + hash) + *key++;
+ hash = ((hash << 5) + hash) + *key++;
+ hash = ((hash << 5) + hash) + *key++;
+ hash = ((hash << 5) + hash) + *key++;
+ hash = ((hash << 5) + hash) + *key++;
+ hash = ((hash << 5) + hash) + *key++;
+ hash = ((hash << 5) + hash) + *key++;
+ hash = ((hash << 5) + hash) + *key++;
+ }
+ switch (len) {
+ case 7: hash = ((hash << 5) + hash) + *key++; /* fallthrough... */
+ case 6: hash = ((hash << 5) + hash) + *key++; /* fallthrough... */
+ case 5: hash = ((hash << 5) + hash) + *key++; /* fallthrough... */
+ case 4: hash = ((hash << 5) + hash) + *key++; /* fallthrough... */
+ case 3: hash = ((hash << 5) + hash) + *key++; /* fallthrough... */
+ case 2: hash = ((hash << 5) + hash) + *key++; /* fallthrough... */
+ case 1: hash = ((hash << 5) + hash) + *key++; break;
+ default: /* case 0: */ break;
+ }
+ return hash;
+}
+
+unsigned int hash_sdbm(const char *key, int len)
+{
+ unsigned int hash = 0;
+ int c;
+
+ while (len--) {
+ c = *key++;
+ hash = c + (hash << 6) + (hash << 16) - hash;
+ }
+
+ return hash;
+}
+
+/* Small yet efficient CRC32 calculation loosely inspired from crc32b found
+ * here : http://www.hackersdelight.org/hdcodetxt/crc.c.txt
+ * The magic value represents the polynomial with one bit per exponent. Much
+ * faster table-based versions exist but are pointless for our usage here;
+ * this hash already sustains gigabit speed, which is far faster than what
+ * we'd ever need. Better to preserve the CPU's cache instead.
+ */
+unsigned int hash_crc32(const char *key, int len)
+{
+ unsigned int hash;
+ int bit;
+
+ hash = ~0;
+ while (len--) {
+ hash ^= *key++;
+ for (bit = 0; bit < 8; bit++)
+ hash = (hash >> 1) ^ ((hash & 1) ? 0xedb88320 : 0);
+ }
+ return ~hash;
+}
--- /dev/null
+/*
+ * Header indexing functions.
+ *
+ * Copyright 2000-2011 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <common/config.h>
+#include <common/memory.h>
+#include <proto/hdr_idx.h>
+
+struct pool_head *pool2_hdr_idx = NULL;
+
+/*
+ * Add a header entry to <list> after element <after>. <after> is ignored when
+ * the list is empty or full. Common usage is to set <after> to list->tail.
+ *
+ * Returns the position of the new entry in the list (from 1 to size-1), or
+ * -1 if the array is already full. An effort is made to fill the array
+ * linearly, but once the last entry has been used, we have to search for
+ * unused blocks, which takes much more time. For this reason, it's important
+ * to size it appropriately.
+ */
+int hdr_idx_add(int len, int cr, struct hdr_idx *list, int after)
+{
+ register struct hdr_idx_elem e = { .len=0, .cr=0, .next=0};
+ int new;
+
+ e.len = len;
+ e.cr = cr;
+
+ if (list->used == list->size) {
+ /* list is full */
+ return -1;
+ }
+
+
+ if (list->last < list->size) {
+ /* list is not completely used, we can fill linearly */
+ new = list->last++;
+ } else {
+ /* That's the worst situation:
+ * we have to scan the list for holes. We know that we
+ * will find a place because the list is not full.
+ */
+ new = 1;
+ while (list->v[new].len)
+ new++;
+ }
+
+ /* insert the new element between <after> and the next one (or end) */
+ e.next = list->v[after].next;
+ list->v[after].next = new;
+
+ list->used++;
+ list->v[new] = e;
+ list->tail = new;
+ return new;
+}
+
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+#include <sys/socket.h>
+
+#include <ctype.h>
+#include <setjmp.h>
+
+#include <lauxlib.h>
+#include <lua.h>
+#include <lualib.h>
+
+#if !defined(LUA_VERSION_NUM) || LUA_VERSION_NUM < 503
+#error "Requires Lua 5.3 or later."
+#endif
+
+#include <ebpttree.h>
+
+#include <common/cfgparse.h>
+
+#include <types/connection.h>
+#include <types/hlua.h>
+#include <types/proxy.h>
+
+#include <proto/arg.h>
+#include <proto/applet.h>
+#include <proto/channel.h>
+#include <proto/hdr_idx.h>
+#include <proto/hlua.h>
+#include <proto/map.h>
+#include <proto/obj_type.h>
+#include <proto/pattern.h>
+#include <proto/payload.h>
+#include <proto/proto_http.h>
+#include <proto/proto_tcp.h>
+#include <proto/raw_sock.h>
+#include <proto/sample.h>
+#include <proto/server.h>
+#include <proto/session.h>
+#include <proto/stream.h>
+#include <proto/ssl_sock.h>
+#include <proto/stream_interface.h>
+#include <proto/task.h>
+#include <proto/vars.h>
+
+/* Lua uses longjmp to perform yield or to throw errors. These
+ * macros are used only to identify the functions that cannot
+ * return because a longjmp is executed.
+ * __LJMP marks a prototype of an hlua function that can use longjmp.
+ * WILL_LJMP() marks a lua function that will use longjmp.
+ * MAY_LJMP() marks a lua function that may use longjmp.
+ */
+#define __LJMP
+#define WILL_LJMP(func) func
+#define MAY_LJMP(func) func
+
+/* This pair of functions securely executes some Lua calls outside of
+ * the lua runtime environment. Each Lua call can return a longjmp
+ * if it encounters a memory error.
+ *
+ * Lua documentation extract:
+ *
+ * If an error happens outside any protected environment, Lua calls
+ * a panic function (see lua_atpanic) and then calls abort, thus
+ * exiting the host application. Your panic function can avoid this
+ * exit by never returning (e.g., doing a long jump to your own
+ * recovery point outside Lua).
+ *
+ * The panic function runs as if it were a message handler (see
+ * §2.3); in particular, the error message is at the top of the
+ * stack. However, there is no guarantee about stack space. To push
+ * anything on the stack, the panic function must first check the
+ * available space (see §4.2).
+ *
+ * We must check all the Lua entry points. This includes:
+ * - The include/proto/hlua.h exported functions
+ * - the task wrapper function
+ * - The action wrapper function
+ * - The converters wrapper function
+ * - The sample-fetch wrapper functions
+ *
+ * It is tolerated that the initialisation function returns an abort.
+ * Before each Lua abort, an error message is written to stderr.
+ *
+ * The macro SET_SAFE_LJMP initialises the longjmp. The macro
+ * RESET_SAFE_LJMP resets the longjmp. These functions must be macros
+ * because the jump buffer must exist in the program stack when the
+ * longjmp is called.
+ */
+ */
+jmp_buf safe_ljmp_env;
+static int hlua_panic_safe(lua_State *L) { return 0; }
+static int hlua_panic_ljmp(lua_State *L) { longjmp(safe_ljmp_env, 1); }
+
+#define SET_SAFE_LJMP(__L) \
+ ({ \
+ int ret; \
+ if (setjmp(safe_ljmp_env) != 0) { \
+ lua_atpanic(__L, hlua_panic_safe); \
+ ret = 0; \
+ } else { \
+ lua_atpanic(__L, hlua_panic_ljmp); \
+ ret = 1; \
+ } \
+ ret; \
+ })
+
+/* If we are the last function catching Lua errors, we
+ * must reset the panic function.
+ */
+#define RESET_SAFE_LJMP(__L) \
+ do { \
+ lua_atpanic(__L, hlua_panic_safe); \
+ } while(0)
+
+/* Applet status flags */
+#define APPLET_DONE 0x01 /* applet processing is done. */
+#define APPLET_100C 0x02 /* 100 continue expected. */
+#define APPLET_HDR_SENT 0x04 /* Response header sent. */
+#define APPLET_CHUNKED 0x08 /* Use transfer encoding chunked. */
+#define APPLET_LAST_CHK 0x10 /* Last chunk sent. */
+#define APPLET_HTTP11 0x20 /* The response uses the HTTP/1.1 version. */
+
+#define HTTP_100C "HTTP/1.1 100 Continue\r\n\r\n"
+
+/* The main Lua execution context. */
+struct hlua gL;
+
+/* This is the memory pool containing all the signal structs. These
+ * structs are used to store each required signal between two tasks.
+ */
+struct pool_head *pool2_hlua_com;
+
+/* Used for Socket connection. */
+static struct proxy socket_proxy;
+static struct server socket_tcp;
+#ifdef USE_OPENSSL
+static struct server socket_ssl;
+#endif
+
+/* List head of the function called at the initialisation time. */
+struct list hlua_init_functions = LIST_HEAD_INIT(hlua_init_functions);
+
+/* The following variables contain the references of the different
+ * Lua classes. These references are useful for identifying the metadata
+ * associated with an object.
+ */
+static int class_txn_ref;
+static int class_socket_ref;
+static int class_channel_ref;
+static int class_fetches_ref;
+static int class_converters_ref;
+static int class_http_ref;
+static int class_map_ref;
+static int class_applet_tcp_ref;
+static int class_applet_http_ref;
+
+/* Global Lua execution timeout. By default, Lua execution linked
+ * with a stream (actions, sample-fetches and converters) has a
+ * short timeout. Lua linked with tasks doesn't have a timeout
+ * because a task may remain alive during the whole haproxy execution.
+ */
+static unsigned int hlua_timeout_session = 4000; /* session timeout. */
+static unsigned int hlua_timeout_task = TICK_ETERNITY; /* task timeout. */
+static unsigned int hlua_timeout_applet = 4000; /* applet timeout. */
+
+/* Interrupts the Lua processing every "hlua_nb_instruction" instructions.
+ * It is used to prevent infinite loops.
+ *
+ * The threshold was tested with an infinite loop containing one increment
+ * and one test, run for 10 seconds each time. Throughput reaches a ceiling
+ * of 710M loops at one interrupt every 9000 instructions, so the value is
+ * set to one interrupt every 10 000 instructions.
+ *
+ * configured | Number of
+ * instructions | loops executed
+ * between two | in millions
+ * forced yields |
+ * ---------------+---------------
+ * 10 | 160
+ * 500 | 670
+ * 1000 | 680
+ * 5000 | 700
+ * 7000 | 700
+ * 8000 | 700
+ * 9000 | 710 <- ceil
+ * 10000 | 710
+ * 100000 | 710
+ * 1000000 | 710
+ *
+ */
+static unsigned int hlua_nb_instruction = 10000;
+
+/* Descriptor for the memory allocation state. If limit is not null, it will
+ * be enforced on any memory allocation.
+ */
+struct hlua_mem_allocator {
+ size_t allocated;
+ size_t limit;
+};
+
+static struct hlua_mem_allocator hlua_global_allocator;
+
+static const char error_500[] =
+ "HTTP/1.0 500 Server Error\r\n"
+ "Cache-Control: no-cache\r\n"
+ "Connection: close\r\n"
+ "Content-Type: text/html\r\n"
+ "\r\n"
+ "<html><body><h1>500 Server Error</h1>\nAn internal server error occurred.\n</body></html>\n";
+
+/* These functions convert types between HAProxy internal args or
+ * samples and Lua types. Another function checks whether the
+ * Lua stack contains arguments matching a required ARG_T
+ * format.
+ */
+static int hlua_arg2lua(lua_State *L, const struct arg *arg);
+static int hlua_lua2arg(lua_State *L, int ud, struct arg *arg);
+__LJMP static int hlua_lua2arg_check(lua_State *L, int first, struct arg *argp,
+ unsigned int mask, struct proxy *p);
+static int hlua_smp2lua(lua_State *L, struct sample *smp);
+static int hlua_smp2lua_str(lua_State *L, struct sample *smp);
+static int hlua_lua2smp(lua_State *L, int ud, struct sample *smp);
+
+__LJMP static int hlua_http_get_headers(lua_State *L, struct hlua_txn *htxn, struct http_msg *msg);
+
+#define SEND_ERR(__be, __fmt, __args...) \
+ do { \
+ send_log(__be, LOG_ERR, __fmt, ## __args); \
+ if (!(global.mode & MODE_QUIET) || (global.mode & MODE_VERBOSE)) \
+ Alert(__fmt, ## __args); \
+ } while (0)
+
+/* Used to check a Lua function type in the stack. It creates and
+ * returns a reference to the function. This function throws an
+ * error if the argument is not a "function".
+ */
+__LJMP unsigned int hlua_checkfunction(lua_State *L, int argno)
+{
+ if (!lua_isfunction(L, argno)) {
+ const char *msg = lua_pushfstring(L, "function expected, got %s", luaL_typename(L, -1));
+ WILL_LJMP(luaL_argerror(L, argno, msg));
+ }
+ lua_pushvalue(L, argno);
+ return luaL_ref(L, LUA_REGISTRYINDEX);
+}
+
+/* Return the string that is at the top of the stack. */
+const char *hlua_get_top_error_string(lua_State *L)
+{
+ if (lua_gettop(L) < 1)
+ return "unknown error";
+ if (lua_type(L, -1) != LUA_TSTRING)
+ return "unknown error";
+ return lua_tostring(L, -1);
+}
+
+/* The three following functions are useful for adding entries
+ * in a table. These functions take a string and respectively an
+ * integer, a string or a function, and add it to the table at the
+ * top of the stack.
+ *
+ * These functions throw an error if no more stack space is
+ * available.
+ */
+__LJMP static inline void hlua_class_const_int(lua_State *L, const char *name,
+ int value)
+{
+ if (!lua_checkstack(L, 2))
+ WILL_LJMP(luaL_error(L, "full stack"));
+ lua_pushstring(L, name);
+ lua_pushinteger(L, value);
+ lua_rawset(L, -3);
+}
+__LJMP static inline void hlua_class_const_str(lua_State *L, const char *name,
+ const char *value)
+{
+ if (!lua_checkstack(L, 2))
+ WILL_LJMP(luaL_error(L, "full stack"));
+ lua_pushstring(L, name);
+ lua_pushstring(L, value);
+ lua_rawset(L, -3);
+}
+__LJMP static inline void hlua_class_function(lua_State *L, const char *name,
+ int (*function)(lua_State *L))
+{
+ if (!lua_checkstack(L, 2))
+ WILL_LJMP(luaL_error(L, "full stack"));
+ lua_pushstring(L, name);
+ lua_pushcclosure(L, function, 0);
+ lua_rawset(L, -3);
+}
+
+__LJMP static int hlua_dump_object(struct lua_State *L)
+{
+ const char *name = (const char *)lua_tostring(L, lua_upvalueindex(1));
+ lua_pushfstring(L, "HAProxy class %s", name);
+ return 1;
+}
+
+/* This function checks the number of arguments available in the
+ * stack. If the number of available arguments is not equal to
+ * <nb>, an error is thrown.
+ */
+__LJMP static inline void check_args(lua_State *L, int nb, char *fcn)
+{
+ if (lua_gettop(L) == nb)
+ return;
+ WILL_LJMP(luaL_error(L, "'%s' needs %d arguments", fcn, nb));
+}
+
+/* Return true if the data in stack[<ud>] is an object of
+ * type <class_ref>.
+ */
+static int hlua_metaistype(lua_State *L, int ud, int class_ref)
+{
+ if (!lua_getmetatable(L, ud))
+ return 0;
+
+ lua_rawgeti(L, LUA_REGISTRYINDEX, class_ref);
+ if (!lua_rawequal(L, -1, -2)) {
+ lua_pop(L, 2);
+ return 0;
+ }
+
+ lua_pop(L, 2);
+ return 1;
+}
+
+/* Return an object of the expected type, or throws an error. */
+__LJMP static void *hlua_checkudata(lua_State *L, int ud, int class_ref)
+{
+ void *p;
+
+ /* Check if the stack entry is an array. */
+ if (!lua_istable(L, ud))
+ WILL_LJMP(luaL_argerror(L, ud, NULL));
+ /* Check if the metadata has the expected type. */
+ if (!hlua_metaistype(L, ud, class_ref))
+ WILL_LJMP(luaL_argerror(L, ud, NULL));
+ /* Push entry [0] of the table onto the stack. */
+ lua_rawgeti(L, ud, 0);
+ /* Check if this entry is userdata. */
+ p = lua_touserdata(L, -1);
+ if (!p)
+ WILL_LJMP(luaL_argerror(L, ud, NULL));
+ /* Remove the entry returned by lua_rawgeti(). */
+ lua_pop(L, 1);
+ /* Return the associated struct. */
+ return p;
+}
+
+/* This function pushes an error string prefixed by the file name
+ * and the line number where the error was encountered.
+ */
+static int hlua_pusherror(lua_State *L, const char *fmt, ...)
+{
+ va_list argp;
+ va_start(argp, fmt);
+ luaL_where(L, 1);
+ lua_pushvfstring(L, fmt, argp);
+ va_end(argp);
+ lua_concat(L, 2);
+ return 1;
+}
+
+/* This function registers a new signal. "lua" is the current lua
+ * execution context. It contains a pointer to the associated task.
+ * "link" is a list head attached to another task that must wake
+ * the lua task if an event occurs. This is useful with external
+ * events like TCP I/O or sleep functions. This function allocates
+ * memory for the signal.
+ */
+static int hlua_com_new(struct hlua *lua, struct list *link)
+{
+ struct hlua_com *com = pool_alloc2(pool2_hlua_com);
+ if (!com)
+ return 0;
+ LIST_ADDQ(&lua->com, &com->purge_me);
+ LIST_ADDQ(link, &com->wake_me);
+ com->task = lua->task;
+ return 1;
+}
+
+/* This function purges all the pending signals when the Lua execution
+ * is finished. This prevents a coprocess from trying to wake a deleted
+ * task. This function removes the memory associated with the signals.
+ */
+static void hlua_com_purge(struct hlua *lua)
+{
+ struct hlua_com *com, *back;
+
+ /* Delete all pending communication signals. */
+ list_for_each_entry_safe(com, back, &lua->com, purge_me) {
+ LIST_DEL(&com->purge_me);
+ LIST_DEL(&com->wake_me);
+ pool_free2(pool2_hlua_com, com);
+ }
+}
+
+/* This function sends signals. It wakes all the tasks attached
+ * to a list head, removes the signals, and frees the used
+ * memory.
+ */
+static void hlua_com_wake(struct list *wake)
+{
+ struct hlua_com *com, *back;
+
+ /* Wake task and delete all pending communication signals. */
+ list_for_each_entry_safe(com, back, wake, wake_me) {
+ LIST_DEL(&com->purge_me);
+ LIST_DEL(&com->wake_me);
+ task_wakeup(com->task, TASK_WOKEN_MSG);
+ pool_free2(pool2_hlua_com, com);
+ }
+}
+
+/* This function is used with sample fetches and converters. It
+ * converts the HAProxy configuration arguments into lua stack
+ * values.
+ *
+ * It takes an array of "arg", and each entry of the array is
+ * converted and pushed onto the Lua stack.
+ */
+static int hlua_arg2lua(lua_State *L, const struct arg *arg)
+{
+ switch (arg->type) {
+ case ARGT_SINT:
+ case ARGT_TIME:
+ case ARGT_SIZE:
+ lua_pushinteger(L, arg->data.sint);
+ break;
+
+ case ARGT_STR:
+ lua_pushlstring(L, arg->data.str.str, arg->data.str.len);
+ break;
+
+ case ARGT_IPV4:
+ case ARGT_IPV6:
+ case ARGT_MSK4:
+ case ARGT_MSK6:
+ case ARGT_FE:
+ case ARGT_BE:
+ case ARGT_TAB:
+ case ARGT_SRV:
+ case ARGT_USR:
+ case ARGT_MAP:
+ default:
+ lua_pushnil(L);
+ break;
+ }
+ return 1;
+}
+
+/* This function takes one entry from a Lua stack at the index "ud",
+ * and tries to convert it into an HAProxy argument entry. This is
+ * useful with sample fetch wrappers. The input arguments are given to
+ * the lua wrapper and converted into an arg list by this function.
+ */
+static int hlua_lua2arg(lua_State *L, int ud, struct arg *arg)
+{
+ switch (lua_type(L, ud)) {
+
+ case LUA_TNUMBER:
+ case LUA_TBOOLEAN:
+ arg->type = ARGT_SINT;
+ arg->data.sint = lua_tointeger(L, ud);
+ break;
+
+ case LUA_TSTRING:
+ arg->type = ARGT_STR;
+ arg->data.str.str = (char *)lua_tolstring(L, ud, (size_t *)&arg->data.str.len);
+ break;
+
+ case LUA_TUSERDATA:
+ case LUA_TNIL:
+ case LUA_TTABLE:
+ case LUA_TFUNCTION:
+ case LUA_TTHREAD:
+ case LUA_TLIGHTUSERDATA:
+ arg->type = ARGT_SINT;
+ arg->data.sint = 0;
+ break;
+ }
+ return 1;
+}
+
+/* the following functions are used to convert a struct sample
+ * into a Lua type. This is useful to convert the return values of
+ * fetches or converters.
+ */
+static int hlua_smp2lua(lua_State *L, struct sample *smp)
+{
+ switch (smp->data.type) {
+ case SMP_T_SINT:
+ case SMP_T_BOOL:
+ lua_pushinteger(L, smp->data.u.sint);
+ break;
+
+ case SMP_T_BIN:
+ case SMP_T_STR:
+ lua_pushlstring(L, smp->data.u.str.str, smp->data.u.str.len);
+ break;
+
+ case SMP_T_METH:
+ switch (smp->data.u.meth.meth) {
+ case HTTP_METH_OPTIONS: lua_pushstring(L, "OPTIONS"); break;
+ case HTTP_METH_GET: lua_pushstring(L, "GET"); break;
+ case HTTP_METH_HEAD: lua_pushstring(L, "HEAD"); break;
+ case HTTP_METH_POST: lua_pushstring(L, "POST"); break;
+ case HTTP_METH_PUT: lua_pushstring(L, "PUT"); break;
+ case HTTP_METH_DELETE: lua_pushstring(L, "DELETE"); break;
+ case HTTP_METH_TRACE: lua_pushstring(L, "TRACE"); break;
+ case HTTP_METH_CONNECT: lua_pushstring(L, "CONNECT"); break;
+ case HTTP_METH_OTHER:
+ lua_pushlstring(L, smp->data.u.meth.str.str, smp->data.u.meth.str.len);
+ break;
+ default:
+ lua_pushnil(L);
+ break;
+ }
+ break;
+
+ case SMP_T_IPV4:
+ case SMP_T_IPV6:
+ case SMP_T_ADDR: /* This type is never used to qualify a sample. */
+ if (sample_casts[smp->data.type][SMP_T_STR] &&
+ sample_casts[smp->data.type][SMP_T_STR](smp))
+ lua_pushlstring(L, smp->data.u.str.str, smp->data.u.str.len);
+ else
+ lua_pushnil(L);
+ break;
+ default:
+ lua_pushnil(L);
+ break;
+ }
+ return 1;
+}
+
+/* the following functions are used to convert a struct sample
+ * into Lua strings. This is useful to convert the return values of
+ * fetches or converters.
+ */
+static int hlua_smp2lua_str(lua_State *L, struct sample *smp)
+{
+ switch (smp->data.type) {
+
+ case SMP_T_BIN:
+ case SMP_T_STR:
+ lua_pushlstring(L, smp->data.u.str.str, smp->data.u.str.len);
+ break;
+
+ case SMP_T_METH:
+ switch (smp->data.u.meth.meth) {
+ case HTTP_METH_OPTIONS: lua_pushstring(L, "OPTIONS"); break;
+ case HTTP_METH_GET: lua_pushstring(L, "GET"); break;
+ case HTTP_METH_HEAD: lua_pushstring(L, "HEAD"); break;
+ case HTTP_METH_POST: lua_pushstring(L, "POST"); break;
+ case HTTP_METH_PUT: lua_pushstring(L, "PUT"); break;
+ case HTTP_METH_DELETE: lua_pushstring(L, "DELETE"); break;
+ case HTTP_METH_TRACE: lua_pushstring(L, "TRACE"); break;
+ case HTTP_METH_CONNECT: lua_pushstring(L, "CONNECT"); break;
+ case HTTP_METH_OTHER:
+ lua_pushlstring(L, smp->data.u.meth.str.str, smp->data.u.meth.str.len);
+ break;
+ default:
+ lua_pushstring(L, "");
+ break;
+ }
+ break;
+
+ case SMP_T_SINT:
+ case SMP_T_BOOL:
+ case SMP_T_IPV4:
+ case SMP_T_IPV6:
+ case SMP_T_ADDR: /* This type is never used to qualify a sample. */
+ if (sample_casts[smp->data.type][SMP_T_STR] &&
+ sample_casts[smp->data.type][SMP_T_STR](smp))
+ lua_pushlstring(L, smp->data.u.str.str, smp->data.u.str.len);
+ else
+ lua_pushstring(L, "");
+ break;
+ default:
+ lua_pushstring(L, "");
+ break;
+ }
+ return 1;
+}
+
+/* the following functions are used to convert a Lua type into a
+ * struct sample. This is useful to return data from the Lua code
+ * back to HAProxy.
+ */
+static int hlua_lua2smp(lua_State *L, int ud, struct sample *smp)
+{
+ switch (lua_type(L, ud)) {
+
+ case LUA_TNUMBER:
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = lua_tointeger(L, ud);
+ break;
+
+
+ case LUA_TBOOLEAN:
+ smp->data.type = SMP_T_BOOL;
+ smp->data.u.sint = lua_toboolean(L, ud);
+ break;
+
+ case LUA_TSTRING:
+ smp->data.type = SMP_T_STR;
+ smp->flags |= SMP_F_CONST;
+ smp->data.u.str.str = (char *)lua_tolstring(L, ud, (size_t *)&smp->data.u.str.len);
+ break;
+
+ case LUA_TUSERDATA:
+ case LUA_TNIL:
+ case LUA_TTABLE:
+ case LUA_TFUNCTION:
+ case LUA_TTHREAD:
+ case LUA_TLIGHTUSERDATA:
+ case LUA_TNONE:
+ default:
+ smp->data.type = SMP_T_BOOL;
+ smp->data.u.sint = 0;
+ break;
+ }
+ return 1;
+}
+
+/* This function checks that the "argp" built by another conversion
+ * function is in accordance with the expected argp defined by the
+ * "mask". The function returns true or false. It can adjust the types
+ * if they are compatible.
+ *
+ * This function assumes that the argp argument contains ARGM_NBARGS + 1
+ * entries.
+ */
+ */
+__LJMP int hlua_lua2arg_check(lua_State *L, int first, struct arg *argp,
+ unsigned int mask, struct proxy *p)
+{
+ int min_arg;
+ int idx;
+ struct proxy *px;
+ char *sname, *pname;
+
+ idx = 0;
+ min_arg = ARGM(mask);
+ mask >>= ARGM_BITS;
+
+ while (1) {
+
+ /* Check oversize. */
+ if (idx >= ARGM_NBARGS && argp[idx].type != ARGT_STOP) {
+ WILL_LJMP(luaL_argerror(L, first + idx, "Malformed argument mask"));
+ }
+
+ /* Check for mandatory arguments. */
+ if (argp[idx].type == ARGT_STOP) {
+ if (idx < min_arg) {
+
+ /* If an argument other than the first one is missing, we return an error. */
+ if (idx > 0)
+ WILL_LJMP(luaL_argerror(L, first + idx, "Mandatory argument expected"));
+
+ /* If the first argument has a certain type, some default values
+ * may be used. See the function smp_resolve_args().
+ */
+ switch (mask & ARGT_MASK) {
+
+ case ARGT_FE:
+ if (!(p->cap & PR_CAP_FE))
+ WILL_LJMP(luaL_argerror(L, first + idx, "Mandatory argument expected"));
+ argp[idx].data.prx = p;
+ argp[idx].type = ARGT_FE;
+ argp[idx+1].type = ARGT_STOP;
+ break;
+
+ case ARGT_BE:
+ if (!(p->cap & PR_CAP_BE))
+ WILL_LJMP(luaL_argerror(L, first + idx, "Mandatory argument expected"));
+ argp[idx].data.prx = p;
+ argp[idx].type = ARGT_BE;
+ argp[idx+1].type = ARGT_STOP;
+ break;
+
+ case ARGT_TAB:
+ argp[idx].data.prx = p;
+ argp[idx].type = ARGT_TAB;
+ argp[idx+1].type = ARGT_STOP;
+ break;
+
+ default:
+ WILL_LJMP(luaL_argerror(L, first + idx, "Mandatory argument expected"));
+ break;
+ }
+ }
+ return 0;
+ }
+
+ /* Check whether the number of required arguments is exceeded. */
+ if ((mask & ARGT_MASK) == ARGT_STOP &&
+ argp[idx].type != ARGT_STOP) {
+ WILL_LJMP(luaL_argerror(L, first + idx, "Last argument expected"));
+ }
+
+ if ((mask & ARGT_MASK) == ARGT_STOP &&
+ argp[idx].type == ARGT_STOP) {
+ return 0;
+ }
+
+ /* Convert some argument types. */
+ switch (mask & ARGT_MASK) {
+ case ARGT_SINT:
+ if (argp[idx].type != ARGT_SINT)
+ WILL_LJMP(luaL_argerror(L, first + idx, "integer expected"));
+ argp[idx].type = ARGT_SINT;
+ break;
+
+ case ARGT_TIME:
+ if (argp[idx].type != ARGT_SINT)
+ WILL_LJMP(luaL_argerror(L, first + idx, "integer expected"));
+ argp[idx].type = ARGT_TIME;
+ break;
+
+ case ARGT_SIZE:
+ if (argp[idx].type != ARGT_SINT)
+ WILL_LJMP(luaL_argerror(L, first + idx, "integer expected"));
+ argp[idx].type = ARGT_SIZE;
+ break;
+
+ case ARGT_FE:
+ if (argp[idx].type != ARGT_STR)
+ WILL_LJMP(luaL_argerror(L, first + idx, "string expected"));
+ memcpy(trash.str, argp[idx].data.str.str, argp[idx].data.str.len);
+ trash.str[argp[idx].data.str.len] = 0;
+ argp[idx].data.prx = proxy_fe_by_name(trash.str);
+ if (!argp[idx].data.prx)
+ WILL_LJMP(luaL_argerror(L, first + idx, "frontend doesn't exist"));
+ argp[idx].type = ARGT_FE;
+ break;
+
+ case ARGT_BE:
+ if (argp[idx].type != ARGT_STR)
+ WILL_LJMP(luaL_argerror(L, first + idx, "string expected"));
+ memcpy(trash.str, argp[idx].data.str.str, argp[idx].data.str.len);
+ trash.str[argp[idx].data.str.len] = 0;
+ argp[idx].data.prx = proxy_be_by_name(trash.str);
+ if (!argp[idx].data.prx)
+ WILL_LJMP(luaL_argerror(L, first + idx, "backend doesn't exist"));
+ argp[idx].type = ARGT_BE;
+ break;
+
+ case ARGT_TAB:
+ if (argp[idx].type != ARGT_STR)
+ WILL_LJMP(luaL_argerror(L, first + idx, "string expected"));
+ memcpy(trash.str, argp[idx].data.str.str, argp[idx].data.str.len);
+ trash.str[argp[idx].data.str.len] = 0;
+ argp[idx].data.prx = proxy_tbl_by_name(trash.str);
+ if (!argp[idx].data.prx)
+ WILL_LJMP(luaL_argerror(L, first + idx, "table doesn't exist"));
+ argp[idx].type = ARGT_TAB;
+ break;
+
+ case ARGT_SRV:
+ if (argp[idx].type != ARGT_STR)
+ WILL_LJMP(luaL_argerror(L, first + idx, "string expected"));
+ memcpy(trash.str, argp[idx].data.str.str, argp[idx].data.str.len);
+ trash.str[argp[idx].data.str.len] = 0;
+ sname = strrchr(trash.str, '/');
+ if (sname) {
+ *sname++ = '\0';
+ pname = trash.str;
+ px = proxy_be_by_name(pname);
+ if (!px)
+ WILL_LJMP(luaL_argerror(L, first + idx, "backend doesn't exist"));
+ }
+ else {
+ sname = trash.str;
+ px = p;
+ }
+ argp[idx].data.srv = findserver(px, sname);
+ if (!argp[idx].data.srv)
+ WILL_LJMP(luaL_argerror(L, first + idx, "server doesn't exist"));
+ argp[idx].type = ARGT_SRV;
+ break;
+
+ case ARGT_IPV4:
+ memcpy(trash.str, argp[idx].data.str.str, argp[idx].data.str.len);
+ trash.str[argp[idx].data.str.len] = 0;
+ if (!inet_pton(AF_INET, trash.str, &argp[idx].data.ipv4))
+ WILL_LJMP(luaL_argerror(L, first + idx, "invalid IPv4 address"));
+ argp[idx].type = ARGT_IPV4;
+ break;
+
+ case ARGT_MSK4:
+ memcpy(trash.str, argp[idx].data.str.str, argp[idx].data.str.len);
+ trash.str[argp[idx].data.str.len] = 0;
+ if (!str2mask(trash.str, &argp[idx].data.ipv4))
+ WILL_LJMP(luaL_argerror(L, first + idx, "invalid IPv4 mask"));
+ argp[idx].type = ARGT_MSK4;
+ break;
+
+ case ARGT_IPV6:
+ memcpy(trash.str, argp[idx].data.str.str, argp[idx].data.str.len);
+ trash.str[argp[idx].data.str.len] = 0;
+ if (!inet_pton(AF_INET6, trash.str, &argp[idx].data.ipv6))
+ WILL_LJMP(luaL_argerror(L, first + idx, "invalid IPv6 address"));
+ argp[idx].type = ARGT_IPV6;
+ break;
+
+ case ARGT_MSK6:
+ case ARGT_MAP:
+ case ARGT_REG:
+ case ARGT_USR:
+ WILL_LJMP(luaL_argerror(L, first + idx, "type not yet supported"));
+ break;
+ }
+
+ /* Check for type of argument. */
+ if ((mask & ARGT_MASK) != argp[idx].type) {
+ const char *msg = lua_pushfstring(L, "'%s' expected, got '%s'",
+ arg_type_names[(mask & ARGT_MASK)],
+ arg_type_names[argp[idx].type & ARGT_MASK]);
+ WILL_LJMP(luaL_argerror(L, first + idx, msg));
+ }
+
+ /* Next argument. */
+ mask >>= ARGT_BITS;
+ idx++;
+ }
+}
+
+/*
+ * The following functions are used to make the correspondence between the
+ * executing Lua pointer and the "struct hlua *" that contains the context.
+ *
+ * - hlua_gethlua : returns the hlua context associated with a lua_State.
+ * - hlua_sethlua : creates the association between an hlua context and its lua_State.
+ */
+static inline struct hlua *hlua_gethlua(lua_State *L)
+{
+ struct hlua **hlua = lua_getextraspace(L);
+ return *hlua;
+}
+static inline void hlua_sethlua(struct hlua *hlua)
+{
+ struct hlua **hlua_store = lua_getextraspace(hlua->T);
+ *hlua_store = hlua;
+}
+
+/* This function is used to send logs. It tries to send them both to the
+ * screen (stderr) and to the default syslog server.
+ */
+static inline void hlua_sendlog(struct proxy *px, int level, const char *msg)
+{
+ struct tm tm;
+ char *p;
+
+ /* Cleanup the log message. */
+ p = trash.str;
+ for (; *msg != '\0'; msg++, p++) {
+ if (p >= trash.str + trash.size - 1) {
+ /* Truncate the message if it exceeds the buffer size. */
+ *(p-4) = ' ';
+ *(p-3) = '.';
+ *(p-2) = '.';
+ *(p-1) = '.';
+ break;
+ }
+ if (isprint(*msg))
+ *p = *msg;
+ else
+ *p = '.';
+ }
+ *p = '\0';
+
+ send_log(px, level, "%s\n", trash.str);
+ if (!(global.mode & MODE_QUIET) || (global.mode & (MODE_VERBOSE | MODE_STARTING))) {
+ get_localtime(date.tv_sec, &tm);
+ fprintf(stderr, "[%s] %03d/%02d%02d%02d (%d) : %s\n",
+ log_levels[level], tm.tm_yday, tm.tm_hour, tm.tm_min, tm.tm_sec,
+ (int)getpid(), trash.str);
+ fflush(stderr);
+ }
+}
+
+/* This function just ensures that the yield is always returned with a
+ * timeout, and permits setting some flags.
+ */
+__LJMP void hlua_yieldk(lua_State *L, int nresults, int ctx,
+ lua_KFunction k, int timeout, unsigned int flags)
+{
+ struct hlua *hlua = hlua_gethlua(L);
+
+ /* Set the wake timeout. If timeout is required, we set
+ * the expiration time.
+ */
+ hlua->wake_time = timeout;
+
+ hlua->flags |= flags;
+
+ /* Process the yield. */
+ WILL_LJMP(lua_yieldk(L, nresults, ctx, k));
+}
+
+/* This function initialises the Lua environment stored in the stream.
+ * It must be called at the start of the stream. This function creates
+ * a Lua coroutine. It cannot be used to create the main Lua context.
+ *
+ * This function is particular: it initialises a new Lua thread. If the
+ * initialisation fails (for example with an out of memory error), the Lua
+ * function throws an error (longjmp).
+ *
+ * This function manipulates two Lua stacks: the main one and the thread's.
+ * Only the main stack can fail. The thread is not manipulated. This function
+ * MUST NOT manipulate the created thread's stack state, because it is not
+ * protected against errors thrown by the thread stack.
+ */
+int hlua_ctx_init(struct hlua *lua, struct task *task)
+{
+ if (!SET_SAFE_LJMP(gL.T)) {
+ lua->Tref = LUA_REFNIL;
+ return 0;
+ }
+ lua->Mref = LUA_REFNIL;
+ lua->flags = 0;
+ LIST_INIT(&lua->com);
+ lua->T = lua_newthread(gL.T);
+ if (!lua->T) {
+ lua->Tref = LUA_REFNIL;
+ return 0;
+ }
+ hlua_sethlua(lua);
+ lua->Tref = luaL_ref(gL.T, LUA_REGISTRYINDEX);
+ lua->task = task;
+ RESET_SAFE_LJMP(gL.T);
+ return 1;
+}
+
+/* Used to destroy the Lua coroutine when the attached stream or task
+ * is destroyed. It also destroys the memory context. The struct "lua"
+ * itself is not freed.
+ */
+void hlua_ctx_destroy(struct hlua *lua)
+{
+ if (!lua->T)
+ return;
+
+ /* Purge all the pending signals. */
+ hlua_com_purge(lua);
+
+ luaL_unref(lua->T, LUA_REGISTRYINDEX, lua->Mref);
+ luaL_unref(gL.T, LUA_REGISTRYINDEX, lua->Tref);
+
+ /* Force a garbage collection pass. If the Lua program finished
+ * without error, we run the GC on the thread pointer, which frees
+ * all the unused memory.
+ * If the thread finished with an error or is currently yielded,
+ * the GC applied on the thread doesn't seem to clean anything,
+ * so we run the GC on the main thread instead.
+ * NOTE: this action may lock all the Lua threads until the end of
+ * the garbage collection.
+ */
+ if (lua->flags & HLUA_MUST_GC) {
+ lua_gc(lua->T, LUA_GCCOLLECT, 0);
+ if (lua_status(lua->T) != LUA_OK)
+ lua_gc(gL.T, LUA_GCCOLLECT, 0);
+ }
+
+ lua->T = NULL;
+}
+
+/* This function is used to restore the Lua context when a coroutine
+ * fails. It copies the common memory between the old coroutine and
+ * the new one. The old coroutine is destroyed and replaced by the
+ * new coroutine.
+ * If the flag "keep_msg" is set, the last entry of the old stack is
+ * assumed to be a string error message and is copied to the new stack.
+ */
+static int hlua_ctx_renew(struct hlua *lua, int keep_msg)
+{
+ lua_State *T;
+ int new_ref;
+
+ /* Renewing the main Lua stack makes no sense. */
+ if (lua == &gL)
+ return 0;
+
+ /* New Lua coroutine. */
+ T = lua_newthread(gL.T);
+ if (!T)
+ return 0;
+
+ /* Copy last error message. */
+ if (keep_msg)
+ lua_xmove(lua->T, T, 1);
+
+ /* Copy data between the coroutines. */
+ lua_rawgeti(lua->T, LUA_REGISTRYINDEX, lua->Mref);
+ lua_xmove(lua->T, T, 1);
+ new_ref = luaL_ref(T, LUA_REGISTRYINDEX); /* Value popped. */
+
+ /* Destroy old data. */
+ luaL_unref(lua->T, LUA_REGISTRYINDEX, lua->Mref);
+
+ /* The thread is garbage collected by Lua. */
+ luaL_unref(gL.T, LUA_REGISTRYINDEX, lua->Tref);
+
+ /* Fill the struct with the new coroutine values. */
+ lua->Mref = new_ref;
+ lua->T = T;
+ lua->Tref = luaL_ref(gL.T, LUA_REGISTRYINDEX);
+
+ /* Set context. */
+ hlua_sethlua(lua);
+
+ return 1;
+}
+
+void hlua_hook(lua_State *L, lua_Debug *ar)
+{
+ struct hlua *hlua = hlua_gethlua(L);
+
+ /* Lua cannot yield when it is returning from a function,
+ * so we can set the interrupt hook to 1 instruction,
+ * expecting that the function is about to finish.
+ */
+ if (lua_gethookmask(L) & LUA_MASKRET) {
+ lua_sethook(hlua->T, hlua_hook, LUA_MASKCOUNT, 1);
+ return;
+ }
+
+ /* restore the interrupt condition. */
+ lua_sethook(hlua->T, hlua_hook, LUA_MASKCOUNT, hlua_nb_instruction);
+
+ /* If we interrupt the Lua processing in a yieldable state, we yield.
+ * If the state is not yieldable, trying to yield causes an error.
+ */
+ if (lua_isyieldable(L))
+ WILL_LJMP(hlua_yieldk(L, 0, 0, NULL, TICK_ETERNITY, HLUA_CTRLYIELD));
+
+ /* If we cannot yield, update the clock and check the timeout. */
+ tv_update_date(0, 1);
+ hlua->run_time += now_ms - hlua->start_time;
+ if (hlua->max_time && hlua->run_time >= hlua->max_time) {
+ lua_pushfstring(L, "execution timeout");
+ WILL_LJMP(lua_error(L));
+ }
+
+ /* Update the start time. */
+ hlua->start_time = now_ms;
+
+ /* Try to interrupt the process at the end of the current
+ * unyieldable function.
+ */
+ lua_sethook(hlua->T, hlua_hook, LUA_MASKRET|LUA_MASKCOUNT, hlua_nb_instruction);
+}
+
+/* This function starts or resumes the Lua stack execution. If the flag
+ * "yield_allowed" is not set and the Lua stack execution returns a yield,
+ * the function returns an error.
+ *
+ * The function can return 4 values:
+ * - HLUA_E_OK : The execution terminated without any error.
+ * - HLUA_E_AGAIN : The execution must continue at the next associated
+ * task wakeup.
+ * - HLUA_E_ERRMSG : An error has occurred; an error message is set on
+ * the top of the stack.
+ * - HLUA_E_ERR : An error has occurred without an error message.
+ *
+ * If an error occurred, the stack is renewed and is ready to run new
+ * Lua code.
+ */
+ */
+static enum hlua_exec hlua_ctx_resume(struct hlua *lua, int yield_allowed)
+{
+ int ret;
+ const char *msg;
+
+ /* Initialise run time counter. */
+ if (!HLUA_IS_RUNNING(lua))
+ lua->run_time = 0;
+
+resume_execution:
+
+ /* This hook interrupts the Lua processing every 'hlua_nb_instruction'
+ * instructions. It is used to prevent infinite loops.
+ */
+ lua_sethook(lua->T, hlua_hook, LUA_MASKCOUNT, hlua_nb_instruction);
+
+ /* Remove all flags except the running flags. */
+ HLUA_SET_RUN(lua);
+ HLUA_CLR_CTRLYIELD(lua);
+ HLUA_CLR_WAKERESWR(lua);
+ HLUA_CLR_WAKEREQWR(lua);
+
+ /* Update the start time. */
+ lua->start_time = now_ms;
+
+ /* Call the function. */
+ ret = lua_resume(lua->T, gL.T, lua->nargs);
+ switch (ret) {
+
+ case LUA_OK:
+ ret = HLUA_E_OK;
+ break;
+
+ case LUA_YIELD:
+ /* Check if the execution timeout has expired. If it has, we
+ * break the Lua execution.
+ */
+ tv_update_date(0, 1);
+ lua->run_time += now_ms - lua->start_time;
+ if (lua->max_time && lua->run_time > lua->max_time) {
+ lua_settop(lua->T, 0); /* Empty the stack. */
+ if (!lua_checkstack(lua->T, 1)) {
+ ret = HLUA_E_ERR;
+ break;
+ }
+ lua_pushfstring(lua->T, "execution timeout");
+ ret = HLUA_E_ERRMSG;
+ break;
+ }
+ /* Process the forced yield. If the general yield is not allowed or
+ * if no task is associated with the current Lua execution
+ * coroutine, we resume the execution. Otherwise we want to return to
+ * the scheduler and to be woken up again, to continue the
+ * current Lua execution, so we schedule our own task.
+ */
+ if (HLUA_IS_CTRLYIELDING(lua)) {
+ if (!yield_allowed || !lua->task)
+ goto resume_execution;
+ task_wakeup(lua->task, TASK_WOKEN_MSG);
+ }
+ if (!yield_allowed) {
+ lua_settop(lua->T, 0); /* Empty the stack. */
+ if (!lua_checkstack(lua->T, 1)) {
+ ret = HLUA_E_ERR;
+ break;
+ }
+ lua_pushfstring(lua->T, "yield not allowed");
+ ret = HLUA_E_ERRMSG;
+ break;
+ }
+ ret = HLUA_E_AGAIN;
+ break;
+
+ case LUA_ERRRUN:
+
+ /* Special exit case. The traditional exit is returned as an error
+ * because errors are the only means to return immediately
+ * from a Lua execution.
+ */
+ if (lua->flags & HLUA_EXIT) {
+ ret = HLUA_E_OK;
+ hlua_ctx_renew(lua, 0);
+ break;
+ }
+
+ lua->wake_time = TICK_ETERNITY;
+ if (!lua_checkstack(lua->T, 1)) {
+ ret = HLUA_E_ERR;
+ break;
+ }
+ msg = lua_tostring(lua->T, -1);
+ lua_settop(lua->T, 0); /* Empty the stack. */
+ if (msg)
+ lua_pushfstring(lua->T, "runtime error: %s", msg);
+ else
+ lua_pushfstring(lua->T, "unknown runtime error");
+ ret = HLUA_E_ERRMSG;
+ break;
+
+ case LUA_ERRMEM:
+ lua->wake_time = TICK_ETERNITY;
+ lua_settop(lua->T, 0); /* Empty the stack. */
+ if (!lua_checkstack(lua->T, 1)) {
+ ret = HLUA_E_ERR;
+ break;
+ }
+ lua_pushfstring(lua->T, "out of memory error");
+ ret = HLUA_E_ERRMSG;
+ break;
+
+ case LUA_ERRERR:
+ lua->wake_time = TICK_ETERNITY;
+ if (!lua_checkstack(lua->T, 1)) {
+ ret = HLUA_E_ERR;
+ break;
+ }
+ msg = lua_tostring(lua->T, -1);
+ lua_settop(lua->T, 0); /* Empty the stack. */
+ if (msg)
+ lua_pushfstring(lua->T, "message handler error: %s", msg);
+ else
+ lua_pushfstring(lua->T, "message handler error");
+ ret = HLUA_E_ERRMSG;
+ break;
+
+ default:
+ lua->wake_time = TICK_ETERNITY;
+ lua_settop(lua->T, 0); /* Empty the stack. */
+ if (!lua_checkstack(lua->T, 1)) {
+ ret = HLUA_E_ERR;
+ break;
+ }
+ lua_pushfstring(lua->T, "unknown error");
+ ret = HLUA_E_ERRMSG;
+ break;
+ }
+
+ /* This GC pass permits destroying some objects when a Lua timeout strikes. */
+ if (lua->flags & HLUA_MUST_GC &&
+ ret != HLUA_E_AGAIN)
+ lua_gc(lua->T, LUA_GCCOLLECT, 0);
+
+ switch (ret) {
+ case HLUA_E_AGAIN:
+ break;
+
+ case HLUA_E_ERRMSG:
+ hlua_com_purge(lua);
+ hlua_ctx_renew(lua, 1);
+ HLUA_CLR_RUN(lua);
+ break;
+
+ case HLUA_E_ERR:
+ HLUA_CLR_RUN(lua);
+ hlua_com_purge(lua);
+ hlua_ctx_renew(lua, 0);
+ break;
+
+ case HLUA_E_OK:
+ HLUA_CLR_RUN(lua);
+ hlua_com_purge(lua);
+ break;
+ }
+
+ return ret;
+}
+
+/* This function exits the current Lua code. */
+__LJMP static int hlua_done(lua_State *L)
+{
+ struct hlua *hlua = hlua_gethlua(L);
+
+ hlua->flags |= HLUA_EXIT;
+ WILL_LJMP(lua_error(L));
+
+ return 0;
+}
+
+/* This function is a Lua binding. It provides a function
+ * for deleting an ACL entry from a referenced ACL file.
+ */
+__LJMP static int hlua_del_acl(lua_State *L)
+{
+ const char *name;
+ const char *key;
+ struct pat_ref *ref;
+
+ MAY_LJMP(check_args(L, 2, "del_acl"));
+
+ name = MAY_LJMP(luaL_checkstring(L, 1));
+ key = MAY_LJMP(luaL_checkstring(L, 2));
+
+ ref = pat_ref_lookup(name);
+ if (!ref)
+ WILL_LJMP(luaL_error(L, "'del_acl': unknown acl file '%s'", name));
+
+ pat_ref_delete(ref, key);
+ return 0;
+}
+
+/* This function is a Lua binding. It provides a function
+ * for deleting a map entry from a referenced map file.
+ */
+static int hlua_del_map(lua_State *L)
+{
+ const char *name;
+ const char *key;
+ struct pat_ref *ref;
+
+ MAY_LJMP(check_args(L, 2, "del_map"));
+
+ name = MAY_LJMP(luaL_checkstring(L, 1));
+ key = MAY_LJMP(luaL_checkstring(L, 2));
+
+ ref = pat_ref_lookup(name);
+ if (!ref)
+ WILL_LJMP(luaL_error(L, "'del_map': unknown map file '%s'", name));
+
+ pat_ref_delete(ref, key);
+ return 0;
+}
+
+/* This function is a Lua binding. It provides a function
+ * for adding an ACL pattern to a referenced ACL file.
+ */
+static int hlua_add_acl(lua_State *L)
+{
+ const char *name;
+ const char *key;
+ struct pat_ref *ref;
+
+ MAY_LJMP(check_args(L, 2, "add_acl"));
+
+ name = MAY_LJMP(luaL_checkstring(L, 1));
+ key = MAY_LJMP(luaL_checkstring(L, 2));
+
+ ref = pat_ref_lookup(name);
+ if (!ref)
+ WILL_LJMP(luaL_error(L, "'add_acl': unknown acl file '%s'", name));
+
+ if (pat_ref_find_elt(ref, key) == NULL)
+ pat_ref_add(ref, key, NULL, NULL);
+ return 0;
+}
+
+/* This function is a Lua binding. It provides a function
+ * for setting a map pattern and sample in a referenced map
+ * file.
+ */
+static int hlua_set_map(lua_State *L)
+{
+ const char *name;
+ const char *key;
+ const char *value;
+ struct pat_ref *ref;
+
+ MAY_LJMP(check_args(L, 3, "set_map"));
+
+ name = MAY_LJMP(luaL_checkstring(L, 1));
+ key = MAY_LJMP(luaL_checkstring(L, 2));
+ value = MAY_LJMP(luaL_checkstring(L, 3));
+
+ ref = pat_ref_lookup(name);
+ if (!ref)
+ WILL_LJMP(luaL_error(L, "'set_map': unknown map file '%s'", name));
+
+ if (pat_ref_find_elt(ref, key) != NULL)
+ pat_ref_set(ref, key, value, NULL);
+ else
+ pat_ref_add(ref, key, value, NULL);
+ return 0;
+}
+
+/* A class is a chunk of memory that contains data. This data can be a table,
+ * an integer or user data. This data is associated with a metatable. This
+ * metatable has an original version registered in the global context with
+ * the name of the object (_G[<name>] = <metatable> ).
+ *
+ * A metatable is a table that modifies the standard behavior of a standard
+ * access to the associated data. The entries of this new metatable are
+ * defined as follows:
+ *
+ * http://lua-users.org/wiki/MetatableEvents
+ *
+ * __index
+ *
+ * When we access an absent field in a table, the result is nil. This is
+ * true, but it is not the whole truth. Actually, such access triggers
+ * the interpreter to look for an __index metamethod: If there is no
+ * such method, as usually happens, then the access results in nil;
+ * otherwise, the metamethod will provide the result.
+ *
+ * Control 'prototype' inheritance. When accessing "myTable[key]" and
+ * the key does not appear in the table, but the metatable has an __index
+ * property:
+ *
+ * - if the value is a function, the function is called, passing in the
+ * table and the key; the return value of that function is returned as
+ * the result.
+ *
+ * - if the value is another table, the value of the key in that table is
+ * asked for and returned (and if it doesn't exist in that table, but that
+ * table's metatable has an __index property, then it continues on up)
+ *
+ * - Use "rawget(myTable,key)" to skip this metamethod.
+ *
+ * http://www.lua.org/pil/13.4.1.html
+ *
+ * __newindex
+ *
+ * Like __index, but control property assignment.
+ *
+ * __mode - Control weak references. A string value with one or both
+ * of the characters 'k' and 'v' which specifies that the
+ * keys and/or values in the table are weak references.
+ *
+ * __call - Treat a table like a function. When a table is followed by
+ * parenthesis such as "myTable( 'foo' )" and the metatable has
+ * a __call key pointing to a function, that function is invoked
+ * (passing any specified arguments) and the return value is
+ * returned.
+ *
+ * __metatable - Hide the metatable. When "getmetatable( myTable )" is
+ * called, if the metatable for myTable has a __metatable
+ * key, the value of that key is returned instead of the
+ * actual metatable.
+ *
+ * __tostring - Control string representation. When the builtin
+ * "tostring( myTable )" function is called, if the metatable
+ * for myTable has a __tostring property set to a function,
+ * that function is invoked (passing myTable to it) and the
+ * return value is used as the string representation.
+ *
+ * __len - Control table length. When the table length is requested using
+ * the length operator ( '#' ), if the metatable for myTable has
+ * a __len key pointing to a function, that function is invoked
+ * (passing myTable to it) and the return value used as the value
+ * of "#myTable".
+ *
+ * __gc - Userdata finalizer code. When userdata is set to be garbage
+ * collected, if the metatable has a __gc field pointing to a
+ * function, that function is first invoked, passing the userdata
+ * to it. The __gc metamethod is not called for tables.
+ * (See http://lua-users.org/lists/lua-l/2006-11/msg00508.html)
+ *
+ * Special metamethods for redefining standard operators:
+ * http://www.lua.org/pil/13.1.html
+ *
+ * __add "+"
+ * __sub "-"
+ * __mul "*"
+ * __div "/"
+ * __unm "-" (unary minus)
+ * __pow "^"
+ * __concat ".."
+ *
+ * Special methods for redefining standard relations
+ * http://www.lua.org/pil/13.2.html
+ *
+ * __eq "=="
+ * __lt "<"
+ * __le "<="
+ */
+
+/*
+ *
+ *
+ * Class Map
+ *
+ *
+ */
+
+/* Returns a struct map_descriptor if the stack entry "ud" is
+ * a class Map, otherwise it throws an error.
+ */
+__LJMP static struct map_descriptor *hlua_checkmap(lua_State *L, int ud)
+{
+ return (struct map_descriptor *)MAY_LJMP(hlua_checkudata(L, ud, class_map_ref));
+}
+
+/* This function is the map constructor. It doesn't need
+ * the class Map object. It creates and returns a new Map
+ * object. It must be called only from a "body" or "init"
+ * context because it performs some filesystem accesses.
+ */
+__LJMP static int hlua_map_new(struct lua_State *L)
+{
+ const char *fn;
+ int match = PAT_MATCH_STR;
+ struct sample_conv conv;
+ const char *file = "";
+ int line = 0;
+ lua_Debug ar;
+ char *err = NULL;
+ struct arg args[2];
+
+ if (lua_gettop(L) < 1 || lua_gettop(L) > 2)
+ WILL_LJMP(luaL_error(L, "'new' requires 1 or 2 arguments."));
+
+ fn = MAY_LJMP(luaL_checkstring(L, 1));
+
+ if (lua_gettop(L) >= 2) {
+ match = MAY_LJMP(luaL_checkinteger(L, 2));
+ if (match < 0 || match >= PAT_MATCH_NUM)
+ WILL_LJMP(luaL_error(L, "'new' needs a valid match method."));
+ }
+
+ /* Get Lua filename and line number. */
+ if (lua_getstack(L, 1, &ar)) { /* check function at level */
+ lua_getinfo(L, "Sl", &ar); /* get info about it */
+ if (ar.currentline > 0) { /* is there info? */
+ file = ar.short_src;
+ line = ar.currentline;
+ }
+ }
+
+ /* fill fake sample_conv struct. */
+ conv.kw = ""; /* unused. */
+ conv.process = NULL; /* unused. */
+ conv.arg_mask = 0; /* unused. */
+ conv.val_args = NULL; /* unused. */
+ conv.out_type = SMP_T_STR;
+ conv.private = (void *)(long)match;
+ switch (match) {
+ case PAT_MATCH_STR: conv.in_type = SMP_T_STR; break;
+ case PAT_MATCH_BEG: conv.in_type = SMP_T_STR; break;
+ case PAT_MATCH_SUB: conv.in_type = SMP_T_STR; break;
+ case PAT_MATCH_DIR: conv.in_type = SMP_T_STR; break;
+ case PAT_MATCH_DOM: conv.in_type = SMP_T_STR; break;
+ case PAT_MATCH_END: conv.in_type = SMP_T_STR; break;
+ case PAT_MATCH_REG: conv.in_type = SMP_T_STR; break;
+ case PAT_MATCH_INT: conv.in_type = SMP_T_SINT; break;
+ case PAT_MATCH_IP: conv.in_type = SMP_T_ADDR; break;
+ default:
+ WILL_LJMP(luaL_error(L, "'new' doesn't support this match mode."));
+ }
+
+ /* fill fake args. */
+ args[0].type = ARGT_STR;
+ args[0].data.str.str = (char *)fn;
+ args[1].type = ARGT_STOP;
+
+ /* load the map. */
+ if (!sample_load_map(args, &conv, file, line, &err)) {
+ /* error case: we can't use luaL_error because we must
+ * free the err variable.
+ */
+ luaL_where(L, 1);
+ lua_pushfstring(L, "'new': %s.", err);
+ lua_concat(L, 2);
+ free(err);
+ WILL_LJMP(lua_error(L));
+ }
+
+ /* create the lua object. */
+ lua_newtable(L);
+ lua_pushlightuserdata(L, args[0].data.map);
+ lua_rawseti(L, -2, 0);
+
+ /* Push the class Map metatable and assign it to the new object. */
+ lua_rawgeti(L, LUA_REGISTRYINDEX, class_map_ref);
+ lua_setmetatable(L, -2);
+
+
+ return 1;
+}
+
+__LJMP static inline int _hlua_map_lookup(struct lua_State *L, int str)
+{
+ struct map_descriptor *desc;
+ struct pattern *pat;
+ struct sample smp;
+
+ MAY_LJMP(check_args(L, 2, "lookup"));
+ desc = MAY_LJMP(hlua_checkmap(L, 1));
+ if (desc->pat.expect_type == SMP_T_SINT) {
+ smp.data.type = SMP_T_SINT;
+ smp.data.u.sint = MAY_LJMP(luaL_checkinteger(L, 2));
+ }
+ else {
+ smp.data.type = SMP_T_STR;
+ smp.flags = SMP_F_CONST;
+ smp.data.u.str.str = (char *)MAY_LJMP(luaL_checklstring(L, 2, (size_t *)&smp.data.u.str.len));
+ }
+
+ pat = pattern_exec_match(&desc->pat, &smp, 1);
+ if (!pat || !pat->data) {
+ if (str)
+ lua_pushstring(L, "");
+ else
+ lua_pushnil(L);
+ return 1;
+ }
+
+ /* The Lua pattern must return a string, so we can't check the returned type */
+ lua_pushlstring(L, pat->data->u.str.str, pat->data->u.str.len);
+ return 1;
+}
+
+__LJMP static int hlua_map_lookup(struct lua_State *L)
+{
+ return _hlua_map_lookup(L, 0);
+}
+
+__LJMP static int hlua_map_slookup(struct lua_State *L)
+{
+ return _hlua_map_lookup(L, 1);
+}
+
+/*
+ *
+ *
+ * Class Socket
+ *
+ *
+ */
+
+__LJMP static struct hlua_socket *hlua_checksocket(lua_State *L, int ud)
+{
+ return (struct hlua_socket *)MAY_LJMP(hlua_checkudata(L, ud, class_socket_ref));
+}
+
+/* This function is the handler called for each I/O on the established
+ * connection. It is used to notify about space available for sending
+ * or about data received.
+ */
+static void hlua_socket_handler(struct appctx *appctx)
+{
+ struct stream_interface *si = appctx->owner;
+ struct connection *c = objt_conn(si_opposite(si)->end);
+
+ /* If the connection object is not available, close all the
+ * streams and wake up everything waiting on them.
+ */
+ if (!c) {
+ si_shutw(si);
+ si_shutr(si);
+ si_ic(si)->flags |= CF_READ_NULL;
+ hlua_com_wake(&appctx->ctx.hlua.wake_on_read);
+ hlua_com_wake(&appctx->ctx.hlua.wake_on_write);
+ return;
+ }
+
+ /* If we can't write, wake up the pending write signals. */
+ if (channel_output_closed(si_ic(si)))
+ hlua_com_wake(&appctx->ctx.hlua.wake_on_write);
+
+ /* If we can't read, wake up the pending read signals. */
+ if (channel_input_closed(si_oc(si)))
+ hlua_com_wake(&appctx->ctx.hlua.wake_on_read);
+
+ /* If the connection is not established, inform the stream that we want
+ * to be notified whenever the connection completes.
+ */
+ if (!(c->flags & CO_FL_CONNECTED)) {
+ si_applet_cant_get(si);
+ si_applet_cant_put(si);
+ return;
+ }
+
+ /* This function is called after the connect. */
+ appctx->ctx.hlua.connected = 1;
+
+ /* Wake the tasks which want to write if the buffer has available space. */
+ if (channel_may_recv(si_ic(si)))
+ hlua_com_wake(&appctx->ctx.hlua.wake_on_write);
+
+ /* Wake the tasks which want to read if the buffer contains data. */
+ if (!channel_is_empty(si_oc(si)))
+ hlua_com_wake(&appctx->ctx.hlua.wake_on_read);
+}
+
+/* This function is called when the "struct stream" is destroyed.
+ * Remove the link from the object to this stream.
+ * Wake all the pending signals.
+ */
+static void hlua_socket_release(struct appctx *appctx)
+{
+ /* Remove my link in the original object. */
+ if (appctx->ctx.hlua.socket)
+ appctx->ctx.hlua.socket->s = NULL;
+
+ /* Wake all the tasks waiting for me. */
+ hlua_com_wake(&appctx->ctx.hlua.wake_on_read);
+ hlua_com_wake(&appctx->ctx.hlua.wake_on_write);
+}
+
+/* If the garbage collection of the object is launched, nobody
+ * uses this object anymore. If the stream does not exist, just quit.
+ * Otherwise, send the shutdown signal to the stream. In some cases,
+ * pending signals can remain in the read and write lists; destroy
+ * them.
+ */
+__LJMP static int hlua_socket_gc(lua_State *L)
+{
+ struct hlua_socket *socket;
+ struct appctx *appctx;
+
+ MAY_LJMP(check_args(L, 1, "__gc"));
+
+ socket = MAY_LJMP(hlua_checksocket(L, 1));
+ if (!socket->s)
+ return 0;
+
+ /* Remove all reference between the Lua stack and the coroutine stream. */
+ appctx = objt_appctx(socket->s->si[0].end);
+ stream_shutdown(socket->s, SF_ERR_KILLED);
+ socket->s = NULL;
+ appctx->ctx.hlua.socket = NULL;
+
+ return 0;
+}
+
+/* The close function sends a shutdown signal and breaks the
+ * links between the stream and the object.
+ */
+__LJMP static int hlua_socket_close(lua_State *L)
+{
+ struct hlua_socket *socket;
+ struct appctx *appctx;
+
+ MAY_LJMP(check_args(L, 1, "close"));
+
+ socket = MAY_LJMP(hlua_checksocket(L, 1));
+ if (!socket->s)
+ return 0;
+
+ /* Close the stream and remove the associated stop task. */
+ stream_shutdown(socket->s, SF_ERR_KILLED);
+ appctx = objt_appctx(socket->s->si[0].end);
+ appctx->ctx.hlua.socket = NULL;
+ socket->s = NULL;
+
+ return 0;
+}
+
+/* This Lua function assumes that the stack contains two parameters:
+ * 1 - USERDATA containing a struct socket
+ * 2 - INTEGER with the values of the macros defined below
+ * If the integer is -1, we must read at most one line.
+ * If the integer is -2, we must read all the data until the
+ * end of the stream.
+ * If the integer is a positive value, we must read the number of
+ * bytes corresponding to this value.
+ */
+#define HLSR_READ_LINE (-1)
+#define HLSR_READ_ALL (-2)
+__LJMP static int hlua_socket_receive_yield(struct lua_State *L, int status, lua_KContext ctx)
+{
+ struct hlua_socket *socket = MAY_LJMP(hlua_checksocket(L, 1));
+ int wanted = lua_tointeger(L, 2);
+ struct hlua *hlua = hlua_gethlua(L);
+ struct appctx *appctx;
+ int len;
+ int nblk;
+ char *blk1;
+ int len1;
+ char *blk2;
+ int len2;
+ int skip_at_end = 0;
+ struct channel *oc;
+
+ /* Check if this lua stack is schedulable. */
+ if (!hlua || !hlua->task)
+ WILL_LJMP(luaL_error(L, "The 'receive' function is only allowed in "
+ "'frontend', 'backend' or 'task'"));
+
+ /* Check for a closed connection. If some data were read, return them. */
+ if (!socket->s)
+ goto connection_closed;
+
+ oc = &socket->s->res;
+ if (wanted == HLSR_READ_LINE) {
+ /* Read line. */
+ nblk = bo_getline_nc(oc, &blk1, &len1, &blk2, &len2);
+ if (nblk < 0) /* Connection closed. */
+ goto connection_closed;
+ if (nblk == 0) /* No data available. */
+ goto connection_empty;
+
+ /* remove final \r\n. */
+ if (nblk == 1) {
+ if (blk1[len1-1] == '\n') {
+ len1--;
+ skip_at_end++;
+ if (blk1[len1-1] == '\r') {
+ len1--;
+ skip_at_end++;
+ }
+ }
+ }
+ else {
+ if (blk2[len2-1] == '\n') {
+ len2--;
+ skip_at_end++;
+ if (blk2[len2-1] == '\r') {
+ len2--;
+ skip_at_end++;
+ }
+ }
+ }
+ }
+
+ else if (wanted == HLSR_READ_ALL) {
+ /* Read all the available data. */
+ nblk = bo_getblk_nc(oc, &blk1, &len1, &blk2, &len2);
+ if (nblk < 0) /* Connection closed. */
+ goto connection_closed;
+ if (nblk == 0) /* No data available. */
+ goto connection_empty;
+ }
+
+ else {
+ /* Read a block of data. */
+ nblk = bo_getblk_nc(oc, &blk1, &len1, &blk2, &len2);
+ if (nblk < 0) /* Connection closed. */
+ goto connection_closed;
+ if (nblk == 0) /* No data available. */
+ goto connection_empty;
+
+ if (len1 > wanted) {
+ nblk = 1;
+ len1 = wanted;
+ }
+ if (nblk == 2 && len1 + len2 > wanted)
+ len2 = wanted - len1;
+
+ len = len1;
+
+ luaL_addlstring(&socket->b, blk1, len1);
+ if (nblk == 2) {
+ len += len2;
+ luaL_addlstring(&socket->b, blk2, len2);
+ }
+
+ /* Consume data. */
+ bo_skip(oc, len + skip_at_end);
+
+ /* Don't wait for anything. */
+ stream_int_notify(&socket->s->si[0]);
+ stream_int_update_applet(&socket->s->si[0]);
+
+ /* If the pattern requires reading all the data
+ * from the connection, go wait for more.
+ */
+ if (wanted == HLSR_READ_ALL)
+ goto connection_empty;
+ else if (wanted >= 0 && len < wanted)
+ goto connection_empty;
+
+ /* Return result. */
+ luaL_pushresult(&socket->b);
+ return 1;
+
+connection_closed:
+
+ /* If the buffer contains data. */
+ if (socket->b.n > 0) {
+ luaL_pushresult(&socket->b);
+ return 1;
+ }
+ lua_pushnil(L);
+ lua_pushstring(L, "connection closed.");
+ return 2;
+
+connection_empty:
+
+ appctx = objt_appctx(socket->s->si[0].end);
+ if (!hlua_com_new(hlua, &appctx->ctx.hlua.wake_on_read))
+ WILL_LJMP(luaL_error(L, "out of memory"));
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_socket_receive_yield, TICK_ETERNITY, 0));
+ return 0;
+}
+
+/* This Lua function gets two parameters. The first one can be a string
+ * or a number. If the string is "*l", the user requires one line. If
+ * the string is "*a", the user requires all the content of the stream.
+ * If the value is a number, the user requires a number of bytes equal
+ * to the value. The default value is "*l" (a line).
+ *
+ * This parameter with a variable type is converted to an integer. This
+ * integer takes these values:
+ * -1 : read a line
+ * -2 : read all the stream
+ * >0 : amount of bytes.
+ *
+ * The second parameter is optional. It contains a string that must be
+ * concatenated with the read data.
+ */
+__LJMP static int hlua_socket_receive(struct lua_State *L)
+{
+ int wanted = HLSR_READ_LINE;
+ const char *pattern;
+ int type;
+ char *error;
+ size_t len;
+ struct hlua_socket *socket;
+
+ if (lua_gettop(L) < 1 || lua_gettop(L) > 3)
+ WILL_LJMP(luaL_error(L, "The 'receive' function requires between 1 and 3 arguments."));
+
+ socket = MAY_LJMP(hlua_checksocket(L, 1));
+
+ /* check for pattern. */
+ if (lua_gettop(L) >= 2) {
+ type = lua_type(L, 2);
+ if (type == LUA_TSTRING) {
+ pattern = lua_tostring(L, 2);
+ if (strcmp(pattern, "*a") == 0)
+ wanted = HLSR_READ_ALL;
+ else if (strcmp(pattern, "*l") == 0)
+ wanted = HLSR_READ_LINE;
+ else {
+ wanted = strtoll(pattern, &error, 10);
+ if (*error != '\0')
+ WILL_LJMP(luaL_error(L, "Unsupported pattern."));
+ }
+ }
+ else if (type == LUA_TNUMBER) {
+ wanted = lua_tointeger(L, 2);
+ if (wanted < 0)
+ WILL_LJMP(luaL_error(L, "Unsupported size."));
+ }
+ }
+
+ /* Set pattern. */
+ lua_pushinteger(L, wanted);
+ lua_replace(L, 2);
+
+ /* Init the buffer, and fill it with the prefix. */
+ luaL_buffinit(L, &socket->b);
+
+ /* Check prefix. */
+ if (lua_gettop(L) >= 3) {
+ if (lua_type(L, 3) != LUA_TSTRING)
+ WILL_LJMP(luaL_error(L, "Expect a 'string' for the prefix"));
+ pattern = lua_tolstring(L, 3, &len);
+ luaL_addlstring(&socket->b, pattern, len);
+ }
+
+ return __LJMP(hlua_socket_receive_yield(L, 0, 0));
+}
+
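+/* Example Lua usage of the receive API documented above (a sketch;
+ * sockets are created from Lua with core.tcp()):
+ *
+ *   local sock = core.tcp()
+ *   sock:connect("127.0.0.1", 80)
+ *   local line = sock:receive("*l")          -- read one line
+ *   local all  = sock:receive("*a")          -- read until close
+ *   local blk  = sock:receive(16, "prefix:") -- 16 bytes, prefixed
+ */
+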
+/* Write the Lua input string in the output buffer.
+ * This function yields if no space is available.
+ */
+static int hlua_socket_write_yield(struct lua_State *L, int status, lua_KContext ctx)
+{
+ struct hlua_socket *socket;
+ struct hlua *hlua = hlua_gethlua(L);
+ struct appctx *appctx;
+ size_t buf_len;
+ const char *buf;
+ int len;
+ int send_len;
+ int sent;
+
+ /* Check if this lua stack is schedulable. */
+ if (!hlua || !hlua->task)
+ WILL_LJMP(luaL_error(L, "The 'write' function is only allowed in "
+ "'frontend', 'backend' or 'task'"));
+
+ /* Get object */
+ socket = MAY_LJMP(hlua_checksocket(L, 1));
+ buf = MAY_LJMP(luaL_checklstring(L, 2, &buf_len));
+ sent = MAY_LJMP(luaL_checkinteger(L, 3));
+
+ /* Check for connection close. */
+ if (!socket->s || channel_output_closed(&socket->s->req)) {
+ lua_pushinteger(L, -1);
+ return 1;
+ }
+
+ /* Update the input buffer data. */
+ buf += sent;
+ send_len = buf_len - sent;
+
+ /* All the data has been sent. */
+ if (sent >= buf_len)
+ return 1; /* Implicitly return the length sent. */
+
+ /* Check if the buffer is available because HAProxy doesn't allocate
+ * the request buffer if it's not required.
+ */
+ if (socket->s->req.buf->size == 0) {
+ if (!stream_alloc_recv_buffer(&socket->s->req)) {
+ socket->s->si[0].flags |= SI_FL_WAIT_ROOM;
+ goto hlua_socket_write_yield_return;
+ }
+ }
+
+ /* Check for available space. */
+ len = buffer_total_space(socket->s->req.buf);
+ if (len <= 0)
+ goto hlua_socket_write_yield_return;
+
+ /* send data */
+ if (len < send_len)
+ send_len = len;
+ len = bi_putblk(&socket->s->req, buf+sent, send_len);
+
+ /* "Not enough space" (-1) and "buffer too small to contain
+ * the data" (-2) are not expected because the available length
+ * was tested above.
+ * Other unknown errors are also not expected.
+ */
+ if (len <= 0) {
+ if (len == -1)
+ socket->s->req.flags |= CF_WAKE_WRITE;
+
+ MAY_LJMP(hlua_socket_close(L));
+ lua_pop(L, 1);
+ lua_pushinteger(L, -1);
+ return 1;
+ }
+
+ /* update buffers. */
+ stream_int_notify(&socket->s->si[0]);
+ stream_int_update_applet(&socket->s->si[0]);
+
+ socket->s->req.rex = TICK_ETERNITY;
+ socket->s->res.wex = TICK_ETERNITY;
+
+ /* Update length sent. */
+ lua_pop(L, 1);
+ lua_pushinteger(L, sent + len);
+
+ /* Has all the buffered data been sent? */
+ if (sent + len >= buf_len)
+ return 1;
+
+hlua_socket_write_yield_return:
+ appctx = objt_appctx(socket->s->si[0].end);
+ if (!hlua_com_new(hlua, &appctx->ctx.hlua.wake_on_write))
+ WILL_LJMP(luaL_error(L, "out of memory"));
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_socket_write_yield, TICK_ETERNITY, 0));
+ return 0;
+}
+
+/* This function initiates the sending of data. It just checks the input
+ * parameters and pushes on the Lua stack an integer that contains the
+ * amount of data written to the buffer. This is used by the function
+ * "hlua_socket_write_yield" that can yield.
+ *
+ * The Lua function gets between 2 and 4 parameters. The first one is
+ * the associated object. The second is a string buffer. The third is
+ * an optional integer that gives the position in the buffer of the
+ * first byte of the data to send. The first byte is position "1",
+ * which is also the default value. The fourth argument is an optional
+ * integer that gives the position in the buffer of the last byte of
+ * the data to send. The default is the last byte.
+ */
+static int hlua_socket_send(struct lua_State *L)
+{
+ int i;
+ int j;
+ const char *buf;
+ size_t buf_len;
+
+ /* Check number of arguments. */
+ if (lua_gettop(L) < 2 || lua_gettop(L) > 4)
+ WILL_LJMP(luaL_error(L, "'send' needs between 2 and 4 arguments"));
+
+ /* Get the string. */
+ buf = MAY_LJMP(luaL_checklstring(L, 2, &buf_len));
+
+ /* Get and check j. */
+ if (lua_gettop(L) == 4) {
+ j = MAY_LJMP(luaL_checkinteger(L, 4));
+ if (j < 0)
+ j = buf_len + j + 1;
+ if (j > buf_len)
+ j = buf_len + 1;
+ lua_pop(L, 1);
+ }
+ else
+ j = buf_len;
+
+ /* Get and check i. */
+ if (lua_gettop(L) == 3) {
+ i = MAY_LJMP(luaL_checkinteger(L, 3));
+ if (i < 0)
+ i = buf_len + i + 1;
+ if (i > buf_len)
+ i = buf_len + 1;
+ lua_pop(L, 1);
+ } else
+ i = 1;
+
+ /* Check both i and j. */
+ if (i > j) {
+ lua_pushinteger(L, 0);
+ return 1;
+ }
+ if (i == 0 && j == 0) {
+ lua_pushinteger(L, 0);
+ return 1;
+ }
+ if (i == 0)
+ i = 1;
+ if (j == 0)
+ j = 1;
+
+ /* Pop the string. */
+ lua_pop(L, 1);
+
+ /* Update the buffer length. */
+ buf += i - 1;
+ buf_len = j - i + 1;
+ lua_pushlstring(L, buf, buf_len);
+
+ /* This integer is used to remember the amount of data already sent. */
+ lua_pushinteger(L, 0);
+
+ return MAY_LJMP(hlua_socket_write_yield(L, 0, 0));
+}
+
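+/* Example Lua usage of the send API documented above (a sketch,
+ * following the LuaSocket convention of 1-based inclusive indices):
+ * with s = "hello", sock:send(s, 2, 4) sends "ell" and returns the
+ * number of bytes written.
+ */
+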
+#define SOCKET_INFO_MAX_LEN sizeof("[0000:0000:0000:0000:0000:0000:0000:0000]:12345")
+__LJMP static inline int hlua_socket_info(struct lua_State *L, struct sockaddr_storage *addr)
+{
+ static char buffer[SOCKET_INFO_MAX_LEN];
+ int ret;
+ int len;
+ char *p;
+
+ ret = addr_to_str(addr, buffer+1, SOCKET_INFO_MAX_LEN-1);
+ if (ret <= 0) {
+ lua_pushnil(L);
+ return 1;
+ }
+
+ if (ret == AF_UNIX) {
+ lua_pushstring(L, buffer+1);
+ return 1;
+ }
+ else if (ret == AF_INET6) {
+ buffer[0] = '[';
+ len = strlen(buffer);
+ buffer[len] = ']';
+ len++;
+ buffer[len] = ':';
+ len++;
+ p = buffer;
+ }
+ else if (ret == AF_INET) {
+ p = buffer + 1;
+ len = strlen(p);
+ p[len] = ':';
+ len++;
+ }
+ else {
+ lua_pushnil(L);
+ return 1;
+ }
+
+ if (port_to_str(addr, p + len, SOCKET_INFO_MAX_LEN-1 - len) <= 0) {
+ lua_pushnil(L);
+ return 1;
+ }
+
+ lua_pushstring(L, p);
+ return 1;
+}
+
+/* Returns information about the peer of the connection. */
+__LJMP static int hlua_socket_getpeername(struct lua_State *L)
+{
+ struct hlua_socket *socket;
+ struct connection *conn;
+
+ MAY_LJMP(check_args(L, 1, "getpeername"));
+
+ socket = MAY_LJMP(hlua_checksocket(L, 1));
+
+ /* Check if the tcp object is available. */
+ if (!socket->s) {
+ lua_pushnil(L);
+ return 1;
+ }
+
+ conn = objt_conn(socket->s->si[1].end);
+ if (!conn) {
+ lua_pushnil(L);
+ return 1;
+ }
+
+ if (!(conn->flags & CO_FL_ADDR_TO_SET)) {
+ unsigned int salen = sizeof(conn->addr.to);
+ if (getpeername(conn->t.sock.fd, (struct sockaddr *)&conn->addr.to, &salen) == -1) {
+ lua_pushnil(L);
+ return 1;
+ }
+ conn->flags |= CO_FL_ADDR_TO_SET;
+ }
+
+ return MAY_LJMP(hlua_socket_info(L, &conn->addr.to));
+}
+
+/* Returns information about my connection side. */
+static int hlua_socket_getsockname(struct lua_State *L)
+{
+ struct hlua_socket *socket;
+ struct connection *conn;
+
+ MAY_LJMP(check_args(L, 1, "getsockname"));
+
+ socket = MAY_LJMP(hlua_checksocket(L, 1));
+
+ /* Check if the tcp object is available. */
+ if (!socket->s) {
+ lua_pushnil(L);
+ return 1;
+ }
+
+ conn = objt_conn(socket->s->si[1].end);
+ if (!conn) {
+ lua_pushnil(L);
+ return 1;
+ }
+
+ if (!(conn->flags & CO_FL_ADDR_FROM_SET)) {
+ unsigned int salen = sizeof(conn->addr.from);
+ if (getsockname(conn->t.sock.fd, (struct sockaddr *)&conn->addr.from, &salen) == -1) {
+ lua_pushnil(L);
+ return 1;
+ }
+ conn->flags |= CO_FL_ADDR_FROM_SET;
+ }
+
+ return hlua_socket_info(L, &conn->addr.from);
+}
+
+/* This struct defines the applet. */
+static struct applet update_applet = {
+ .obj_type = OBJ_TYPE_APPLET,
+ .name = "<LUA_TCP>",
+ .fct = hlua_socket_handler,
+ .release = hlua_socket_release,
+};
+
+__LJMP static int hlua_socket_connect_yield(struct lua_State *L, int status, lua_KContext ctx)
+{
+ struct hlua_socket *socket = MAY_LJMP(hlua_checksocket(L, 1));
+ struct hlua *hlua = hlua_gethlua(L);
+ struct appctx *appctx;
+
+ /* Check for connection close. */
+ if (!hlua || !socket->s || channel_output_closed(&socket->s->req)) {
+ lua_pushnil(L);
+ lua_pushstring(L, "Can't connect");
+ return 2;
+ }
+
+ appctx = objt_appctx(socket->s->si[0].end);
+
+ /* Check for connection established. */
+ if (appctx->ctx.hlua.connected) {
+ lua_pushinteger(L, 1);
+ return 1;
+ }
+
+ if (!hlua_com_new(hlua, &appctx->ctx.hlua.wake_on_write))
+ WILL_LJMP(luaL_error(L, "out of memory error"));
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_socket_connect_yield, TICK_ETERNITY, 0));
+ return 0;
+}
+
+/* This function fails or initiates the connection. */
+__LJMP static int hlua_socket_connect(struct lua_State *L)
+{
+ struct hlua_socket *socket;
+ int port = -1;
+ const char *ip;
+ struct connection *conn;
+ struct hlua *hlua;
+ struct appctx *appctx;
+ int low, high;
+ struct sockaddr_storage *addr;
+
+ if (lua_gettop(L) < 2)
+ WILL_LJMP(luaL_error(L, "connect: need at least 2 arguments"));
+
+ /* Get args. */
+ socket = MAY_LJMP(hlua_checksocket(L, 1));
+ ip = MAY_LJMP(luaL_checkstring(L, 2));
+ if (lua_gettop(L) >= 3)
+ port = MAY_LJMP(luaL_checkinteger(L, 3));
+
+ conn = si_alloc_conn(&socket->s->si[1]);
+ if (!conn)
+ WILL_LJMP(luaL_error(L, "connect: internal error"));
+
+ /* needed for the connection not to be closed */
+ conn->target = socket->s->target;
+
+ /* Parse ip address. */
+ addr = str2sa_range(ip, &low, &high, NULL, NULL, NULL, 0);
+ if (!addr)
+ WILL_LJMP(luaL_error(L, "connect: cannot parse destination address '%s'", ip));
+ if (low != high)
+ WILL_LJMP(luaL_error(L, "connect: port ranges not supported : address '%s'", ip));
+ memcpy(&conn->addr.to, addr, sizeof(struct sockaddr_storage));
+
+ /* Set port. */
+ if (low == 0) {
+ if (conn->addr.to.ss_family == AF_INET) {
+ if (port == -1)
+ WILL_LJMP(luaL_error(L, "connect: port missing"));
+ ((struct sockaddr_in *)&conn->addr.to)->sin_port = htons(port);
+ } else if (conn->addr.to.ss_family == AF_INET6) {
+ if (port == -1)
+ WILL_LJMP(luaL_error(L, "connect: port missing"));
+ ((struct sockaddr_in6 *)&conn->addr.to)->sin6_port = htons(port);
+ }
+ }
+
+ hlua = hlua_gethlua(L);
+ appctx = objt_appctx(socket->s->si[0].end);
+
+ /* inform the stream that we want to be notified whenever the
+ * connection completes.
+ */
+ si_applet_cant_get(&socket->s->si[0]);
+ si_applet_cant_put(&socket->s->si[0]);
+ appctx_wakeup(appctx);
+
+ hlua->flags |= HLUA_MUST_GC;
+
+ if (!hlua_com_new(hlua, &appctx->ctx.hlua.wake_on_write))
+ WILL_LJMP(luaL_error(L, "out of memory"));
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_socket_connect_yield, TICK_ETERNITY, 0));
+
+ return 0;
+}
+
+#ifdef USE_OPENSSL
+__LJMP static int hlua_socket_connect_ssl(struct lua_State *L)
+{
+ struct hlua_socket *socket;
+
+ MAY_LJMP(check_args(L, 3, "connect_ssl"));
+ socket = MAY_LJMP(hlua_checksocket(L, 1));
+ socket->s->target = &socket_ssl.obj_type;
+ return MAY_LJMP(hlua_socket_connect(L));
+}
+#endif
+
+__LJMP static int hlua_socket_setoption(struct lua_State *L)
+{
+ return 0;
+}
+
+__LJMP static int hlua_socket_settimeout(struct lua_State *L)
+{
+ struct hlua_socket *socket;
+ int tmout;
+
+ MAY_LJMP(check_args(L, 2, "settimeout"));
+
+ socket = MAY_LJMP(hlua_checksocket(L, 1));
+ tmout = MAY_LJMP(luaL_checkinteger(L, 2)) * 1000;
+
+ socket->s->req.rto = tmout;
+ socket->s->req.wto = tmout;
+ socket->s->res.rto = tmout;
+ socket->s->res.wto = tmout;
+
+ return 0;
+}
+
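+/* Example Lua usage of the settimeout API above (a sketch): the
+ * argument is expressed in seconds and applied to both directions
+ * of the connection.
+ *
+ *   sock:settimeout(5) -- 5 second read/write timeout
+ */
+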
+__LJMP static int hlua_socket_new(lua_State *L)
+{
+ struct hlua_socket *socket;
+ struct appctx *appctx;
+ struct session *sess;
+ struct stream *strm;
+ struct task *task;
+
+ /* Check stack size. */
+ if (!lua_checkstack(L, 3)) {
+ hlua_pusherror(L, "socket: full stack");
+ goto out_fail_conf;
+ }
+
+ /* Create the object: obj[0] = userdata. */
+ lua_newtable(L);
+ socket = MAY_LJMP(lua_newuserdata(L, sizeof(*socket)));
+ lua_rawseti(L, -2, 0);
+ memset(socket, 0, sizeof(*socket));
+
+ /* Check if the various memory pools are initialized. */
+ if (!pool2_stream || !pool2_buffer) {
+ hlua_pusherror(L, "socket: uninitialized pools.");
+ goto out_fail_conf;
+ }
+
+ /* Retrieve the class socket metatable and set it on the new object. */
+ lua_rawgeti(L, LUA_REGISTRYINDEX, class_socket_ref);
+ lua_setmetatable(L, -2);
+
+ /* Create the applet context */
+ appctx = appctx_new(&update_applet);
+ if (!appctx) {
+ hlua_pusherror(L, "socket: out of memory");
+ goto out_fail_conf;
+ }
+
+ appctx->ctx.hlua.socket = socket;
+ appctx->ctx.hlua.connected = 0;
+ LIST_INIT(&appctx->ctx.hlua.wake_on_write);
+ LIST_INIT(&appctx->ctx.hlua.wake_on_read);
+
+ /* Now create a session, task and stream for this applet */
+ sess = session_new(&socket_proxy, NULL, &appctx->obj_type);
+ if (!sess) {
+ hlua_pusherror(L, "socket: out of memory");
+ goto out_fail_sess;
+ }
+
+ task = task_new();
+ if (!task) {
+ hlua_pusherror(L, "socket: out of memory");
+ goto out_fail_task;
+ }
+ task->nice = 0;
+
+ strm = stream_new(sess, task, &appctx->obj_type);
+ if (!strm) {
+ hlua_pusherror(L, "socket: out of memory");
+ goto out_fail_stream;
+ }
+
+ /* Configure an empty Lua for the stream. */
+ socket->s = strm;
+ strm->hlua.T = NULL;
+ strm->hlua.Tref = LUA_REFNIL;
+ strm->hlua.Mref = LUA_REFNIL;
+ strm->hlua.nargs = 0;
+ strm->hlua.flags = 0;
+ LIST_INIT(&strm->hlua.com);
+
+ /* Configure the "right" stream interface. This "si" is used to connect
+ * and retrieve data from the server. The connection is initialized
+ * with the "struct server".
+ */
+ si_set_state(&strm->si[1], SI_ST_ASS);
+
+ /* Force destination server. */
+ strm->flags |= SF_DIRECT | SF_ASSIGNED | SF_ADDR_SET | SF_BE_ASSIGNED;
+ strm->target = &socket_tcp.obj_type;
+
+ /* Update statistics counters. */
+ socket_proxy.feconn++; /* beconn will be increased later */
+ jobs++;
+ totalconn++;
+
+ /* Return the new socket object. */
+ return 1;
+
+ out_fail_stream:
+ task_free(task);
+ out_fail_task:
+ session_free(sess);
+ out_fail_sess:
+ appctx_free(appctx);
+ out_fail_conf:
+ WILL_LJMP(lua_error(L));
+ return 0;
+}
+
+/*
+ *
+ *
+ * Class Channel
+ *
+ *
+ */
+
+/* The channel data and the HTTP parser state can become inconsistent,
+ * so reset the parser and call it again. Warning: this action does not
+ * revalidate the request and does not send a 400 if the modified
+ * request is not valid.
+ *
+ * This function never fails. The direction is set using dir, which equals
+ * either SMP_OPT_DIR_REQ or SMP_OPT_DIR_RES.
+ */
+static void hlua_resynchonize_proto(struct stream *stream, int dir)
+{
+ /* Protocol HTTP. */
+ if (stream->be->mode == PR_MODE_HTTP) {
+
+ if (dir == SMP_OPT_DIR_REQ)
+ http_txn_reset_req(stream->txn);
+ else if (dir == SMP_OPT_DIR_RES)
+ http_txn_reset_res(stream->txn);
+
+ if (stream->txn->hdr_idx.v)
+ hdr_idx_init(&stream->txn->hdr_idx);
+
+ if (dir == SMP_OPT_DIR_REQ)
+ http_msg_analyzer(&stream->txn->req, &stream->txn->hdr_idx);
+ else if (dir == SMP_OPT_DIR_RES)
+ http_msg_analyzer(&stream->txn->rsp, &stream->txn->hdr_idx);
+ }
+}
+
+/* Check the protocol integrity after the Lua manipulations. Close the
+ * stream and return 0 on failure, otherwise return 1. The direction is
+ * set using dir, which equals either SMP_OPT_DIR_REQ or SMP_OPT_DIR_RES.
+ */
+static int hlua_check_proto(struct stream *stream, int dir)
+{
+ const struct chunk msg = { .len = 0 };
+
+ /* Protocol HTTP. The message parsing state must match the request or
+ * response state. The problem that may happen is that Lua modifies
+ * the request or response message *after* it was parsed, and corrupted
+ * it so that it could not be processed anymore. We just need to verify
+ * if the parser is still expected to run or not.
+ */
+ if (stream->be->mode == PR_MODE_HTTP) {
+ if (dir == SMP_OPT_DIR_REQ &&
+ !(stream->req.analysers & AN_REQ_WAIT_HTTP) &&
+ stream->txn->req.msg_state < HTTP_MSG_BODY) {
+ stream_int_retnclose(&stream->si[0], &msg);
+ return 0;
+ }
+ else if (dir == SMP_OPT_DIR_RES &&
+ !(stream->res.analysers & AN_RES_WAIT_HTTP) &&
+ stream->txn->rsp.msg_state < HTTP_MSG_BODY) {
+ stream_int_retnclose(&stream->si[0], &msg);
+ return 0;
+ }
+ }
+ return 1;
+}
+
+/* Returns the struct channel attached to the class channel in the
+ * stack entry "ud" or throws an argument error.
+ */
+__LJMP static struct channel *hlua_checkchannel(lua_State *L, int ud)
+{
+ return (struct channel *)MAY_LJMP(hlua_checkudata(L, ud, class_channel_ref));
+}
+
+/* Pushes the channel onto the top of the stack. If the stack does not
+ * have a free slot, the function fails and returns 0.
+ */
+static int hlua_channel_new(lua_State *L, struct channel *channel)
+{
+ /* Check stack size. */
+ if (!lua_checkstack(L, 3))
+ return 0;
+
+ lua_newtable(L);
+ lua_pushlightuserdata(L, channel);
+ lua_rawseti(L, -2, 0);
+
+ /* Retrieve the class channel metatable and set it on the new object. */
+ lua_rawgeti(L, LUA_REGISTRYINDEX, class_channel_ref);
+ lua_setmetatable(L, -2);
+ return 1;
+}
+
+/* Duplicate all the data present in the input channel and put it
+ * in a Lua string variable. Returns -1 and pushes a nil value on
+ * the stack if the channel is closed and all the data has been
+ * consumed, returns 0 if no data is available, otherwise it returns
+ * the length of the built string.
+ */
+static inline int _hlua_channel_dup(struct channel *chn, lua_State *L)
+{
+ char *blk1;
+ char *blk2;
+ int len1;
+ int len2;
+ int ret;
+ luaL_Buffer b;
+
+ ret = bi_getblk_nc(chn, &blk1, &len1, &blk2, &len2);
+ if (unlikely(ret == 0))
+ return 0;
+
+ if (unlikely(ret < 0)) {
+ lua_pushnil(L);
+ return -1;
+ }
+
+ luaL_buffinit(L, &b);
+ luaL_addlstring(&b, blk1, len1);
+ if (unlikely(ret == 2))
+ luaL_addlstring(&b, blk2, len2);
+ luaL_pushresult(&b);
+
+ if (unlikely(ret == 2))
+ return len1 + len2;
+ return len1;
+}
+
+/* "_hlua_channel_dup" wrapper. If no data is available, it
+ * yields. This function keeps the data in the buffer.
+ */
+__LJMP static int hlua_channel_dup_yield(lua_State *L, int status, lua_KContext ctx)
+{
+ struct channel *chn;
+
+ chn = MAY_LJMP(hlua_checkchannel(L, 1));
+
+ if (_hlua_channel_dup(chn, L) == 0)
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_channel_dup_yield, TICK_ETERNITY, 0));
+ return 1;
+}
+
+/* Check arguments for the function "hlua_channel_dup_yield". */
+__LJMP static int hlua_channel_dup(lua_State *L)
+{
+ MAY_LJMP(check_args(L, 1, "dup"));
+ MAY_LJMP(hlua_checkchannel(L, 1));
+ return MAY_LJMP(hlua_channel_dup_yield(L, 0, 0));
+}
+
+/* "_hlua_channel_dup" wrapper. If no data is available, it
+ * yields. This function consumes the data in the buffer. It returns
+ * a string containing the data, or nil if no data is available
+ * and the channel is closed.
+ */
+__LJMP static int hlua_channel_get_yield(lua_State *L, int status, lua_KContext ctx)
+{
+ struct channel *chn;
+ int ret;
+
+ chn = MAY_LJMP(hlua_checkchannel(L, 1));
+
+ ret = _hlua_channel_dup(chn, L);
+ if (unlikely(ret == 0))
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_channel_get_yield, TICK_ETERNITY, 0));
+
+ if (unlikely(ret == -1))
+ return 1;
+
+ chn->buf->i -= ret;
+ hlua_resynchonize_proto(chn_strm(chn), !!(chn->flags & CF_ISRESP));
+ return 1;
+}
+
+/* Check arguments for the function "hlua_channel_get_yield". */
+__LJMP static int hlua_channel_get(lua_State *L)
+{
+ MAY_LJMP(check_args(L, 1, "get"));
+ MAY_LJMP(hlua_checkchannel(L, 1));
+ return MAY_LJMP(hlua_channel_get_yield(L, 0, 0));
+}
+
+/* This function consumes and returns one line. If the channel is closed
+ * and the last data does not contain a final '\n', the data is returned
+ * without the final '\n'. When no more data is available, it returns a
+ * nil value.
+ */
+__LJMP static int hlua_channel_getline_yield(lua_State *L, int status, lua_KContext ctx)
+{
+ char *blk1;
+ char *blk2;
+ int len1;
+ int len2;
+ int len;
+ struct channel *chn;
+ int ret;
+ luaL_Buffer b;
+
+ chn = MAY_LJMP(hlua_checkchannel(L, 1));
+
+ ret = bi_getline_nc(chn, &blk1, &len1, &blk2, &len2);
+ if (ret == 0)
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_channel_getline_yield, TICK_ETERNITY, 0));
+
+ if (ret == -1) {
+ lua_pushnil(L);
+ return 1;
+ }
+
+ luaL_buffinit(L, &b);
+ luaL_addlstring(&b, blk1, len1);
+ len = len1;
+ if (unlikely(ret == 2)) {
+ luaL_addlstring(&b, blk2, len2);
+ len += len2;
+ }
+ luaL_pushresult(&b);
+ buffer_replace2(chn->buf, chn->buf->p, chn->buf->p + len, NULL, 0);
+ hlua_resynchonize_proto(chn_strm(chn), !!(chn->flags & CF_ISRESP));
+ return 1;
+}
+
+/* Check arguments for the function "hlua_channel_getline_yield". */
+__LJMP static int hlua_channel_getline(lua_State *L)
+{
+ MAY_LJMP(check_args(L, 1, "getline"));
+ MAY_LJMP(hlua_checkchannel(L, 1));
+ return MAY_LJMP(hlua_channel_getline_yield(L, 0, 0));
+}
+
+/* This function takes a string as input, and appends it to the
+ * input side of the channel. If the data is too big, but space
+ * will probably become available after some data is sent, the
+ * function yields. If the data is bigger than the buffer, or if the
+ * channel is closed, it returns -1. Otherwise, it returns the
+ * amount of data written.
+ */
+__LJMP static int hlua_channel_append_yield(lua_State *L, int status, lua_KContext ctx)
+{
+ struct channel *chn = MAY_LJMP(hlua_checkchannel(L, 1));
+ size_t len;
+ const char *str = MAY_LJMP(luaL_checklstring(L, 2, &len));
+ int l = MAY_LJMP(luaL_checkinteger(L, 3));
+ int ret;
+ int max;
+
+ max = channel_recv_limit(chn) - buffer_len(chn->buf);
+ if (max > len - l)
+ max = len - l;
+
+ ret = bi_putblk(chn, str + l, max);
+ if (ret == -2 || ret == -3) {
+ lua_pushinteger(L, -1);
+ return 1;
+ }
+ if (ret == -1) {
+ chn->flags |= CF_WAKE_WRITE;
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_channel_append_yield, TICK_ETERNITY, 0));
+ }
+ l += ret;
+ lua_pop(L, 1);
+ lua_pushinteger(L, l);
+ hlua_resynchonize_proto(chn_strm(chn), !!(chn->flags & CF_ISRESP));
+
+ max = channel_recv_limit(chn) - buffer_len(chn->buf);
+ if (max == 0 && chn->buf->o == 0) {
+ /* There is no space available and the output buffer is empty:
+ * in this case, we cannot add more data, so we cannot yield;
+ * we return the amount of copied data.
+ */
+ return 1;
+ }
+ if (l < len)
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_channel_append_yield, TICK_ETERNITY, 0));
+ return 1;
+}
+
+/* Just a wrapper of "hlua_channel_append_yield". It returns the length
+ * of the written string, or -1 if the channel is closed or if the
+ * buffer is too small for the data.
+ */
+__LJMP static int hlua_channel_append(lua_State *L)
+{
+ size_t len;
+
+ MAY_LJMP(check_args(L, 2, "append"));
+ MAY_LJMP(hlua_checkchannel(L, 1));
+ MAY_LJMP(luaL_checklstring(L, 2, &len));
+ MAY_LJMP(luaL_checkinteger(L, 3));
+ lua_pushinteger(L, 0);
+
+ return MAY_LJMP(hlua_channel_append_yield(L, 0, 0));
+}
+
+/* Just a wrapper of "hlua_channel_append_yield". This wrapper starts
+ * by clearing the buffer, so the result is a replacement
+ * of the current data. It returns the length of the written string,
+ * or -1 if the channel is closed or if the buffer is too
+ * small for the data.
+ */
+__LJMP static int hlua_channel_set(lua_State *L)
+{
+ struct channel *chn;
+
+ MAY_LJMP(check_args(L, 2, "set"));
+ chn = MAY_LJMP(hlua_checkchannel(L, 1));
+ lua_pushinteger(L, 0);
+
+ chn->buf->i = 0;
+
+ return MAY_LJMP(hlua_channel_append_yield(L, 0, 0));
+}
+
+/* Append data to the output side of the buffer. This data is immediately
+ * sent. The function returns the amount of data written. If the buffer
+ * cannot contain the data, the function yields. The function returns -1
+ * if the channel is closed.
+ */
+__LJMP static int hlua_channel_send_yield(lua_State *L, int status, lua_KContext ctx)
+{
+ struct channel *chn = MAY_LJMP(hlua_checkchannel(L, 1));
+ size_t len;
+ const char *str = MAY_LJMP(luaL_checklstring(L, 2, &len));
+ int l = MAY_LJMP(luaL_checkinteger(L, 3));
+ int max;
+ struct hlua *hlua = hlua_gethlua(L);
+
+ if (unlikely(channel_output_closed(chn))) {
+ lua_pushinteger(L, -1);
+ return 1;
+ }
+
+ /* Check if the buffer is available because HAProxy doesn't allocate
+ * the request buffer if it's not required.
+ */
+ if (chn->buf->size == 0) {
+ if (!stream_alloc_recv_buffer(chn)) {
+ chn_prod(chn)->flags |= SI_FL_WAIT_ROOM;
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_channel_send_yield, TICK_ETERNITY, 0));
+ }
+ }
+
+ /* The written data will be immediately sent, so we can check
+ * the available space without taking the reserve into account.
+ * The reserve is guaranteed for the processing of incoming
+ * data, because the buffer will be flushed.
+ */
+ max = chn->buf->size - buffer_len(chn->buf);
+
+ /* If there is no space available and the output buffer is empty,
+ * we cannot add more data, so we cannot yield;
+ * we return the amount of copied data.
+ */
+ if (max == 0 && chn->buf->o == 0)
+ return 1;
+
+ /* Adjust the real required length. */
+ if (max > len - l)
+ max = len - l;
+
+ /* The available buffer space may not be contiguous. This test
+ * detects a non-contiguous buffer and realigns it.
+ */
+ if (bi_space_for_replace(chn->buf) < max)
+ buffer_slow_realign(chn->buf);
+
+ /* Copy input data in the buffer. */
+ max = buffer_replace2(chn->buf, chn->buf->p, chn->buf->p, str + l, max);
+
+ /* buffer_replace2 considers that the input part is filled,
+ * so we must forward this new data to the output part.
+ */
+ b_adv(chn->buf, max);
+
+ l += max;
+ lua_pop(L, 1);
+ lua_pushinteger(L, l);
+
+ /* If there is no space available and the output buffer is empty,
+ * we cannot add more data, so we cannot yield;
+ * we return the amount of copied data.
+ */
+ max = chn->buf->size - buffer_len(chn->buf);
+ if (max == 0 && chn->buf->o == 0)
+ return 1;
+
+ if (l < len) {
+ /* If we are waiting for space in the response buffer, we
+ * must set the WAKERESWR flag. This flag requires the task to be
+ * woken up if any activity is detected on the response buffer.
+ */
+ if (chn->flags & CF_ISRESP)
+ HLUA_SET_WAKERESWR(hlua);
+ else
+ HLUA_SET_WAKEREQWR(hlua);
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_channel_send_yield, TICK_ETERNITY, 0));
+ }
+
+ return 1;
+}
+
+/* Just a wrapper of "hlua_channel_send_yield". This wrapper permits
+ * yielding the Lua process, and resuming it without re-checking the
+ * input arguments.
+ */
+__LJMP static int hlua_channel_send(lua_State *L)
+{
+ MAY_LJMP(check_args(L, 2, "send"));
+ lua_pushinteger(L, 0);
+
+ return MAY_LJMP(hlua_channel_send_yield(L, 0, 0));
+}
+
+/* This function forwards an amount of bytes. The data passes from
+ * the input side of the buffer to the output side, and can then be
+ * sent. This function never fails.
+ *
+ * The Lua function takes the amount of bytes to be forwarded as
+ * input. It returns the number of bytes forwarded.
+ */
+__LJMP static int hlua_channel_forward_yield(lua_State *L, int status, lua_KContext ctx)
+{
+ struct channel *chn;
+ int len;
+ int l;
+ int max;
+ struct hlua *hlua = hlua_gethlua(L);
+
+ chn = MAY_LJMP(hlua_checkchannel(L, 1));
+ len = MAY_LJMP(luaL_checkinteger(L, 2));
+ l = MAY_LJMP(luaL_checkinteger(L, -1));
+
+ max = len - l;
+ if (max > chn->buf->i)
+ max = chn->buf->i;
+ channel_forward(chn, max);
+ l += max;
+
+ lua_pop(L, 1);
+ lua_pushinteger(L, l);
+
+ /* Check if some bytes remain to be forwarded. */
+ if (l < len) {
+ /* If the input channel or the output channel is closed, we
+ * must return the amount of data forwarded.
+ */
+ if (channel_input_closed(chn) || channel_output_closed(chn))
+ return 1;
+
+ /* If we are waiting for data in the response buffer, we
+ * must set the WAKERESWR flag. This flag requires the task to be
+ * woken up if any activity is detected on the response buffer.
+ */
+ if (chn->flags & CF_ISRESP)
+ HLUA_SET_WAKERESWR(hlua);
+ else
+ HLUA_SET_WAKEREQWR(hlua);
+
+ /* Otherwise, we can yield waiting for new data on the input side. */
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_channel_forward_yield, TICK_ETERNITY, 0));
+ }
+
+ return 1;
+}
+
+/* Just check the input and prepare the stack for the previous
+ * function "hlua_channel_forward_yield"
+ */
+__LJMP static int hlua_channel_forward(lua_State *L)
+{
+ MAY_LJMP(check_args(L, 2, "forward"));
+ MAY_LJMP(hlua_checkchannel(L, 1));
+ MAY_LJMP(luaL_checkinteger(L, 2));
+
+ lua_pushinteger(L, 0);
+ return MAY_LJMP(hlua_channel_forward_yield(L, 0, 0));
+}
+
+/* Just returns the number of bytes available in the input
+ * side of the buffer. This function never fails.
+ */
+__LJMP static int hlua_channel_get_in_len(lua_State *L)
+{
+ struct channel *chn;
+
+ MAY_LJMP(check_args(L, 1, "get_in_len"));
+ chn = MAY_LJMP(hlua_checkchannel(L, 1));
+ lua_pushinteger(L, chn->buf->i);
+ return 1;
+}
+
+/* Just returns the number of bytes available in the output
+ * side of the buffer. This function never fails.
+ */
+__LJMP static int hlua_channel_get_out_len(lua_State *L)
+{
+ struct channel *chn;
+
+ MAY_LJMP(check_args(L, 1, "get_out_len"));
+ chn = MAY_LJMP(hlua_checkchannel(L, 1));
+ lua_pushinteger(L, chn->buf->o);
+ return 1;
+}
+
+/*
+ *
+ *
+ * Class Fetches
+ *
+ *
+ */
+
+/* Returns a struct hlua_smp if the stack entry "ud" is
+ * a class fetches object, otherwise it throws an error.
+ */
+__LJMP static struct hlua_smp *hlua_checkfetches(lua_State *L, int ud)
+{
+ return (struct hlua_smp *)MAY_LJMP(hlua_checkudata(L, ud, class_fetches_ref));
+}
+
+/* This function creates and pushes on the stack a Fetches object
+ * tied to the current TXN.
+ */
+static int hlua_fetches_new(lua_State *L, struct hlua_txn *txn, unsigned int flags)
+{
+ struct hlua_smp *hsmp;
+
+ /* Check stack size. */
+ if (!lua_checkstack(L, 3))
+ return 0;
+
+ /* Create the object: obj[0] = userdata.
+ * Note that the base of the Fetches object is the
+ * transaction object.
+ */
+ lua_newtable(L);
+ hsmp = lua_newuserdata(L, sizeof(*hsmp));
+ lua_rawseti(L, -2, 0);
+
+ hsmp->s = txn->s;
+ hsmp->p = txn->p;
+ hsmp->dir = txn->dir;
+ hsmp->flags = flags;
+
+ /* Retrieve the class fetches metatable and set it on the new object. */
+ lua_rawgeti(L, LUA_REGISTRYINDEX, class_fetches_ref);
+ lua_setmetatable(L, -2);
+
+ return 1;
+}
+
+/* This function is a Lua binding. It is called for each sample-fetch.
+ * It uses a closure argument to store the associated sample-fetch. It
+ * returns only one argument or throws an error. An error is thrown
+ * only if an error is encountered during the argument parsing. If
+ * the "sample-fetch" function fails, nil is returned.
+ */
+__LJMP static int hlua_run_sample_fetch(lua_State *L)
+{
+ struct hlua_smp *hsmp;
+ struct sample_fetch *f;
+ struct arg args[ARGM_NBARGS + 1];
+ int i;
+ struct sample smp;
+
+ /* Get closure arguments. */
+ f = (struct sample_fetch *)lua_touserdata(L, lua_upvalueindex(1));
+
+ /* Get traditional arguments. */
+ hsmp = MAY_LJMP(hlua_checkfetches(L, 1));
+
+ /* Check execution authorization. */
+ if (f->use & SMP_USE_HTTP_ANY &&
+ !(hsmp->flags & HLUA_F_MAY_USE_HTTP)) {
+ lua_pushfstring(L, "the sample-fetch '%s' needs an HTTP parser which "
+ "is not available in Lua services", f->kw);
+ WILL_LJMP(lua_error(L));
+ }
+
+ /* Get extra arguments. */
+ for (i = 0; i < lua_gettop(L) - 1; i++) {
+ if (i >= ARGM_NBARGS)
+ break;
+ hlua_lua2arg(L, i + 2, &args[i]);
+ }
+ args[i].type = ARGT_STOP;
+
+ /* Check arguments. */
+ MAY_LJMP(hlua_lua2arg_check(L, 2, args, f->arg_mask, hsmp->p));
+
+ /* Run the special args checker. */
+ if (f->val_args && !f->val_args(args, NULL)) {
+ lua_pushfstring(L, "error in arguments");
+ WILL_LJMP(lua_error(L));
+ }
+
+ /* Initialise the sample. */
+ memset(&smp, 0, sizeof(smp));
+
+ /* Run the sample fetch process. */
+ smp.px = hsmp->p;
+ smp.sess = hsmp->s->sess;
+ smp.strm = hsmp->s;
+ smp.opt = hsmp->dir & SMP_OPT_DIR;
+ if (!f->process(args, &smp, f->kw, f->private)) {
+ if (hsmp->flags & HLUA_F_AS_STRING)
+ lua_pushstring(L, "");
+ else
+ lua_pushnil(L);
+ return 1;
+ }
+
+ /* Convert the returned sample into a Lua value. */
+ if (hsmp->flags & HLUA_F_AS_STRING)
+ hlua_smp2lua_str(L, &smp);
+ else
+ hlua_smp2lua(L, &smp);
+ return 1;
+}
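+/* Usage note (illustrative sketch, not part of this patch): from Lua,
+ * sample-fetches are reachable through the "f" (raw) and "sf" (string-safe)
+ * fields of TXN and applet objects, with dots and dashes in fetch names
+ * mapped to underscores, e.g.:
+ *
+ *   local ip   = txn.f:src()
+ *   local host = txn.sf:req_fhdr("host")
+ */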
+
+/*
+ *
+ *
+ * Class Converters
+ *
+ *
+ */
+
+/* Returns a struct hlua_smp if the stack entry "ud" is
+ * a class Converters object, otherwise it throws an error.
+ */
+__LJMP static struct hlua_smp *hlua_checkconverters(lua_State *L, int ud)
+{
+ return (struct hlua_smp *)MAY_LJMP(hlua_checkudata(L, ud, class_converters_ref));
+}
+
+/* This function creates and pushes on the stack a Converters object
+ * based on the current TXN.
+ */
+static int hlua_converters_new(lua_State *L, struct hlua_txn *txn, unsigned int flags)
+{
+ struct hlua_smp *hsmp;
+
+ /* Check stack size. */
+ if (!lua_checkstack(L, 3))
+ return 0;
+
+ /* Create the object: obj[0] = userdata.
+ * Note that the base of the Converters object is the
+ * same as the TXN object.
+ */
+ lua_newtable(L);
+ hsmp = lua_newuserdata(L, sizeof(*hsmp));
+ lua_rawseti(L, -2, 0);
+
+ hsmp->s = txn->s;
+ hsmp->p = txn->p;
+ hsmp->dir = txn->dir;
+ hsmp->flags = flags;
+
+ /* Retrieve the class Converters metatable and set it on the table. */
+ lua_rawgeti(L, LUA_REGISTRYINDEX, class_converters_ref);
+ lua_setmetatable(L, -2);
+
+ return 1;
+}
+
+/* This function is a Lua binding. It is called for each converter.
+ * It uses a closure argument to store the associated converter. It
+ * returns only one argument or throws an error. An error is thrown
+ * only if an error is encountered during argument parsing. If
+ * the converter function fails, nil is returned.
+ */
+__LJMP static int hlua_run_sample_conv(lua_State *L)
+{
+ struct hlua_smp *hsmp;
+ struct sample_conv *conv;
+ struct arg args[ARGM_NBARGS + 1];
+ int i;
+ struct sample smp;
+
+ /* Get closure arguments. */
+ conv = (struct sample_conv *)lua_touserdata(L, lua_upvalueindex(1));
+
+ /* Get traditional arguments. */
+ hsmp = MAY_LJMP(hlua_checkconverters(L, 1));
+
+ /* Get extra arguments. */
+ for (i = 0; i < lua_gettop(L) - 2; i++) {
+ if (i >= ARGM_NBARGS)
+ break;
+ hlua_lua2arg(L, i + 3, &args[i]);
+ }
+ args[i].type = ARGT_STOP;
+
+ /* Check arguments. */
+ MAY_LJMP(hlua_lua2arg_check(L, 3, args, conv->arg_mask, hsmp->p));
+
+ /* Run the special args checker. */
+ if (conv->val_args && !conv->val_args(args, conv, "", 0, NULL)) {
+ hlua_pusherror(L, "error in arguments");
+ WILL_LJMP(lua_error(L));
+ }
+
+ /* Initialise the sample. */
+ if (!hlua_lua2smp(L, 2, &smp)) {
+ hlua_pusherror(L, "error in the input argument");
+ WILL_LJMP(lua_error(L));
+ }
+
+ /* Apply expected cast. */
+ if (!sample_casts[smp.data.type][conv->in_type]) {
+ hlua_pusherror(L, "invalid input argument: cannot cast '%s' to '%s'",
+ smp_to_type[smp.data.type], smp_to_type[conv->in_type]);
+ WILL_LJMP(lua_error(L));
+ }
+ if (sample_casts[smp.data.type][conv->in_type] != c_none &&
+ !sample_casts[smp.data.type][conv->in_type](&smp)) {
+ hlua_pusherror(L, "error during the input argument casting");
+ WILL_LJMP(lua_error(L));
+ }
+
+ /* Run the sample conversion process. */
+ smp.px = hsmp->p;
+ smp.sess = hsmp->s->sess;
+ smp.strm = hsmp->s;
+ smp.opt = hsmp->dir & SMP_OPT_DIR;
+ if (!conv->process(args, &smp, conv->private)) {
+ if (hsmp->flags & HLUA_F_AS_STRING)
+ lua_pushstring(L, "");
+ else
+ lua_pushnil(L);
+ return 1;
+ }
+
+ /* Convert the returned sample into a Lua value. */
+ if (hsmp->flags & HLUA_F_AS_STRING)
+ hlua_smp2lua_str(L, &smp);
+ else
+ hlua_smp2lua(L, &smp);
+ return 1;
+}
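+/* Usage note (illustrative sketch, not part of this patch): converters are
+ * exposed the same way through the "c" (raw) and "sc" (string-safe) fields,
+ * e.g.:
+ *
+ *   local h = txn.sc:lower(txn.sf:req_fhdr("host"))
+ */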
+
+/*
+ *
+ *
+ * Class AppletTCP
+ *
+ *
+ */
+
+/* Returns a struct hlua_appctx if the stack entry "ud" is
+ * a class AppletTCP object, otherwise it throws an error.
+ */
+__LJMP static struct hlua_appctx *hlua_checkapplet_tcp(lua_State *L, int ud)
+{
+ return (struct hlua_appctx *)MAY_LJMP(hlua_checkudata(L, ud, class_applet_tcp_ref));
+}
+
+/* This function creates and pushes on the stack an AppletTCP object
+ * based on the current appctx.
+ */
+static int hlua_applet_tcp_new(lua_State *L, struct appctx *ctx)
+{
+ struct hlua_appctx *appctx;
+ struct stream_interface *si = ctx->owner;
+ struct stream *s = si_strm(si);
+ struct proxy *p = s->be;
+
+ /* Check stack size. */
+ if (!lua_checkstack(L, 3))
+ return 0;
+
+ /* Create the object: obj[0] = userdata.
+ * Note that the base of the AppletTCP object is the
+ * same as the TXN object.
+ */
+ lua_newtable(L);
+ appctx = lua_newuserdata(L, sizeof(*appctx));
+ lua_rawseti(L, -2, 0);
+ appctx->appctx = ctx;
+ appctx->htxn.s = s;
+ appctx->htxn.p = p;
+
+ /* Create the "f" field that contains a list of fetches. */
+ lua_pushstring(L, "f");
+ if (!hlua_fetches_new(L, &appctx->htxn, 0))
+ return 0;
+ lua_settable(L, -3);
+
+ /* Create the "sf" field that contains a list of stringsafe fetches. */
+ lua_pushstring(L, "sf");
+ if (!hlua_fetches_new(L, &appctx->htxn, HLUA_F_AS_STRING))
+ return 0;
+ lua_settable(L, -3);
+
+ /* Create the "c" field that contains a list of converters. */
+ lua_pushstring(L, "c");
+ if (!hlua_converters_new(L, &appctx->htxn, 0))
+ return 0;
+ lua_settable(L, -3);
+
+ /* Create the "sc" field that contains a list of stringsafe converters. */
+ lua_pushstring(L, "sc");
+ if (!hlua_converters_new(L, &appctx->htxn, HLUA_F_AS_STRING))
+ return 0;
+ lua_settable(L, -3);
+
+ /* Retrieve the class AppletTCP metatable and set it on the table. */
+ lua_rawgeti(L, LUA_REGISTRYINDEX, class_applet_tcp_ref);
+ lua_setmetatable(L, -2);
+
+ return 1;
+}
+
+/* If the expected data is not yet available, this function yields.
+ * It consumes the data in the buffer and returns a string containing
+ * the data. This string can be empty.
+ */
+__LJMP static int hlua_applet_tcp_getline_yield(lua_State *L, int status, lua_KContext ctx)
+{
+ struct hlua_appctx *appctx = MAY_LJMP(hlua_checkapplet_tcp(L, 1));
+ struct stream_interface *si = appctx->appctx->owner;
+ int ret;
+ char *blk1;
+ int len1;
+ char *blk2;
+ int len2;
+
+ /* Read the maximum amount of data available. */
+ ret = bo_getline_nc(si_oc(si), &blk1, &len1, &blk2, &len2);
+
+ /* Data not yet available: return a yield. */
+ if (ret == 0) {
+ si_applet_cant_get(si);
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_applet_tcp_getline_yield, TICK_ETERNITY, 0));
+ }
+
+ /* End of data: commit the total strings and return. */
+ if (ret < 0) {
+ luaL_pushresult(&appctx->b);
+ return 1;
+ }
+
+ /* Ensure that the block 2 length is usable. */
+ if (ret == 1)
+ len2 = 0;
+
+ /* Append both blocks; the line read is not limited by a maximum length. */
+ luaL_addlstring(&appctx->b, blk1, len1);
+ luaL_addlstring(&appctx->b, blk2, len2);
+
+ /* Consume input channel output buffer data. */
+ bo_skip(si_oc(si), len1 + len2);
+ luaL_pushresult(&appctx->b);
+ return 1;
+}
+
+/* Check arguments for the function "hlua_applet_tcp_getline_yield". */
+__LJMP static int hlua_applet_tcp_getline(lua_State *L)
+{
+ struct hlua_appctx *appctx = MAY_LJMP(hlua_checkapplet_tcp(L, 1));
+
+ /* Initialise the string concatenation buffer. */
+ luaL_buffinit(L, &appctx->b);
+
+ return MAY_LJMP(hlua_applet_tcp_getline_yield(L, 0, 0));
+}
+
+/* If the expected data is not yet available, this function yields.
+ * It consumes the data in the buffer and returns a string containing
+ * the data. This string can be empty.
+ */
+__LJMP static int hlua_applet_tcp_recv_yield(lua_State *L, int status, lua_KContext ctx)
+{
+ struct hlua_appctx *appctx = MAY_LJMP(hlua_checkapplet_tcp(L, 1));
+ struct stream_interface *si = appctx->appctx->owner;
+ int len = MAY_LJMP(luaL_checkinteger(L, 2));
+ int ret;
+ char *blk1;
+ int len1;
+ char *blk2;
+ int len2;
+
+ /* Read the maximum amount of data available. */
+ ret = bo_getblk_nc(si_oc(si), &blk1, &len1, &blk2, &len2);
+
+ /* Data not yet available: return a yield. */
+ if (ret == 0) {
+ si_applet_cant_get(si);
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_applet_tcp_recv_yield, TICK_ETERNITY, 0));
+ }
+
+ /* End of data: commit the total strings and return. */
+ if (ret < 0) {
+ luaL_pushresult(&appctx->b);
+ return 1;
+ }
+
+ /* Ensure that the block 2 length is usable. */
+ if (ret == 1)
+ len2 = 0;
+
+ if (len == -1) {
+
+ /* If len == -1, concatenate all the available data and
+ * yield because we want to get all the data until
+ * the end of the data stream.
+ */
+ luaL_addlstring(&appctx->b, blk1, len1);
+ luaL_addlstring(&appctx->b, blk2, len2);
+ bo_skip(si_oc(si), len1 + len2);
+ si_applet_cant_get(si);
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_applet_tcp_recv_yield, TICK_ETERNITY, 0));
+
+ } else {
+
+ /* Copy the first block, capping it to the required length. */
+ if (len1 > len)
+ len1 = len;
+ luaL_addlstring(&appctx->b, blk1, len1);
+ len -= len1;
+
+ /* Copy the second block. */
+ if (len2 > len)
+ len2 = len;
+ luaL_addlstring(&appctx->b, blk2, len2);
+ len -= len2;
+
+ /* Consume input channel output buffer data. */
+ bo_skip(si_oc(si), len1 + len2);
+
+ /* If the required length is not reached yet, yield waiting for new data. */
+ if (len > 0) {
+ lua_pushinteger(L, len);
+ lua_replace(L, 2);
+ si_applet_cant_get(si);
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_applet_tcp_recv_yield, TICK_ETERNITY, 0));
+ }
+
+ /* return the result. */
+ luaL_pushresult(&appctx->b);
+ return 1;
+ }
+
+ /* never reached */
+ hlua_pusherror(L, "Lua: internal error");
+ WILL_LJMP(lua_error(L));
+ return 0;
+}
+
+/* Check arguments for the function "hlua_applet_tcp_recv_yield". */
+__LJMP static int hlua_applet_tcp_recv(lua_State *L)
+{
+ struct hlua_appctx *appctx = MAY_LJMP(hlua_checkapplet_tcp(L, 1));
+ int len = -1;
+
+ if (lua_gettop(L) > 2)
+ WILL_LJMP(luaL_error(L, "The 'recv' function requires between 1 and 2 arguments."));
+ if (lua_gettop(L) >= 2) {
+ len = MAY_LJMP(luaL_checkinteger(L, 2));
+ lua_pop(L, 1);
+ }
+
+ /* Confirm or set the required length */
+ lua_pushinteger(L, len);
+
+ /* Initialise the string catenation. */
+ luaL_buffinit(L, &appctx->b);
+
+ return MAY_LJMP(hlua_applet_tcp_recv_yield(L, 0, 0));
+}
+
+/* Append data to the output side of the buffer. This data is immediately
+ * sent. The function returns the amount of data written. If the buffer
+ * cannot contain all the data, the function yields. The function returns -1
+ * if the channel is closed.
+ */
+__LJMP static int hlua_applet_tcp_send_yield(lua_State *L, int status, lua_KContext ctx)
+{
+ size_t len;
+ struct hlua_appctx *appctx = MAY_LJMP(hlua_checkapplet_tcp(L, 1));
+ const char *str = MAY_LJMP(luaL_checklstring(L, 2, &len));
+ int l = MAY_LJMP(luaL_checkinteger(L, 3));
+ struct stream_interface *si = appctx->appctx->owner;
+ struct channel *chn = si_ic(si);
+ int max;
+
+ /* Get the max amount of data which can be written into the channel. */
+ max = channel_recv_max(chn);
+ if (max > (len - l))
+ max = len - l;
+
+ /* Copy data. */
+ bi_putblk(chn, str + l, max);
+
+ /* update counters. */
+ l += max;
+ lua_pop(L, 1);
+ lua_pushinteger(L, l);
+
+ /* If some data was not sent, declare the situation to the
+ * applet and return a yield.
+ */
+ if (l < len) {
+ si_applet_cant_put(si);
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_applet_tcp_send_yield, TICK_ETERNITY, 0));
+ }
+
+ return 1;
+}
+
+/* Just a wrapper around "hlua_applet_tcp_send_yield". This wrapper permits
+ * yielding the Lua process and resuming it without checking the
+ * input arguments.
+ */
+__LJMP static int hlua_applet_tcp_send(lua_State *L)
+{
+ MAY_LJMP(check_args(L, 2, "send"));
+ lua_pushinteger(L, 0);
+
+ return MAY_LJMP(hlua_applet_tcp_send_yield(L, 0, 0));
+}
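+/* Usage note (illustrative sketch, not part of this patch): the AppletTCP
+ * methods above are used from a TCP service registered with
+ * core.register_service(), e.g.:
+ *
+ *   core.register_service("echo", "tcp", function(applet)
+ *       local data = applet:receive(16) -- yields until 16 bytes are read
+ *       applet:send(data)               -- yields until everything is sent
+ *   end)
+ */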
+
+/*
+ *
+ *
+ * Class AppletHTTP
+ *
+ *
+ */
+
+/* Returns a struct hlua_appctx if the stack entry "ud" is
+ * a class AppletHTTP object, otherwise it throws an error.
+ */
+__LJMP static struct hlua_appctx *hlua_checkapplet_http(lua_State *L, int ud)
+{
+ return (struct hlua_appctx *)MAY_LJMP(hlua_checkudata(L, ud, class_applet_http_ref));
+}
+
+/* This function creates and pushes on the stack an AppletHTTP object
+ * based on the current appctx.
+ */
+static int hlua_applet_http_new(lua_State *L, struct appctx *ctx)
+{
+ struct hlua_appctx *appctx;
+ struct hlua_txn htxn;
+ struct stream_interface *si = ctx->owner;
+ struct stream *s = si_strm(si);
+ struct proxy *px = s->be;
+ struct http_txn *txn = s->txn;
+ const char *path;
+ const char *end;
+ const char *p;
+
+ /* Check stack size. */
+ if (!lua_checkstack(L, 3))
+ return 0;
+
+ /* Create the object: obj[0] = userdata.
+ * Note that the base of the AppletHTTP object is the
+ * same as the TXN object.
+ */
+ lua_newtable(L);
+ appctx = lua_newuserdata(L, sizeof(*appctx));
+ lua_rawseti(L, -2, 0);
+ appctx->appctx = ctx;
+ appctx->appctx->ctx.hlua_apphttp.status = 200; /* Default status code returned. */
+ appctx->htxn.s = s;
+ appctx->htxn.p = px;
+
+ /* Create the "f" field that contains a list of fetches. */
+ lua_pushstring(L, "f");
+ if (!hlua_fetches_new(L, &appctx->htxn, 0))
+ return 0;
+ lua_settable(L, -3);
+
+ /* Create the "sf" field that contains a list of stringsafe fetches. */
+ lua_pushstring(L, "sf");
+ if (!hlua_fetches_new(L, &appctx->htxn, HLUA_F_AS_STRING))
+ return 0;
+ lua_settable(L, -3);
+
+ /* Create the "c" field that contains a list of converters. */
+ lua_pushstring(L, "c");
+ if (!hlua_converters_new(L, &appctx->htxn, 0))
+ return 0;
+ lua_settable(L, -3);
+
+ /* Create the "sc" field that contains a list of stringsafe converters. */
+ lua_pushstring(L, "sc");
+ if (!hlua_converters_new(L, &appctx->htxn, HLUA_F_AS_STRING))
+ return 0;
+ lua_settable(L, -3);
+
+ /* Stores the request method. */
+ lua_pushstring(L, "method");
+ lua_pushlstring(L, txn->req.chn->buf->p, txn->req.sl.rq.m_l);
+ lua_settable(L, -3);
+
+ /* Stores the http version. */
+ lua_pushstring(L, "version");
+ lua_pushlstring(L, txn->req.chn->buf->p + txn->req.sl.rq.v, txn->req.sl.rq.v_l);
+ lua_settable(L, -3);
+
+ /* Creates an array of headers. hlua_http_get_headers() creates and pushes
+ * the array on the top of the stack.
+ */
+ lua_pushstring(L, "headers");
+ htxn.s = s;
+ htxn.p = px;
+ htxn.dir = SMP_OPT_DIR_REQ;
+ if (!hlua_http_get_headers(L, &htxn, &htxn.s->txn->req))
+ return 0;
+ lua_settable(L, -3);
+
+ /* Get path and qs */
+ path = http_get_path(txn);
+ end = txn->req.chn->buf->p + txn->req.sl.rq.u + txn->req.sl.rq.u_l;
+ p = path;
+ while (p < end && *p != '?')
+ p++;
+
+ /* Stores the request path. */
+ lua_pushstring(L, "path");
+ lua_pushlstring(L, path, p - path);
+ lua_settable(L, -3);
+
+ /* Stores the query string. */
+ lua_pushstring(L, "qs");
+ if (*p == '?')
+ p++;
+ lua_pushlstring(L, p, end - p);
+ lua_settable(L, -3);
+
+ /* Stores the request body length. */
+ lua_pushstring(L, "length");
+ lua_pushinteger(L, txn->req.body_len);
+ lua_settable(L, -3);
+
+ /* Create an array of HTTP request headers. */
+ lua_pushstring(L, "headers");
+ MAY_LJMP(hlua_http_get_headers(L, &appctx->htxn, &appctx->htxn.s->txn->req));
+ lua_settable(L, -3);
+
+ /* Create an empty array for the HTTP response headers. */
+ lua_pushstring(L, "response");
+ lua_newtable(L);
+ lua_settable(L, -3);
+
+ /* Retrieve the class AppletHTTP metatable and set it on the table. */
+ lua_rawgeti(L, LUA_REGISTRYINDEX, class_applet_http_ref);
+ lua_setmetatable(L, -2);
+
+ return 1;
+}
+
+/* If the expected data is not yet available, this function yields.
+ * It consumes the data in the buffer and returns a string containing
+ * the data. This string can be empty.
+ */
+__LJMP static int hlua_applet_http_getline_yield(lua_State *L, int status, lua_KContext ctx)
+{
+ struct hlua_appctx *appctx = MAY_LJMP(hlua_checkapplet_http(L, 1));
+ struct stream_interface *si = appctx->appctx->owner;
+ struct channel *chn = si_ic(si);
+ int ret;
+ char *blk1;
+ int len1;
+ char *blk2;
+ int len2;
+
+ /* Do we need to send a 100-continue first? */
+ if (appctx->appctx->ctx.hlua_apphttp.flags & APPLET_100C) {
+ ret = bi_putblk(chn, HTTP_100C, strlen(HTTP_100C));
+ /* If ret == -2 or -3, the channel is closed or the message is too
+ * big for the buffers. We cannot send anything, so we ignore
+ * the error, consider the 100-continue as sent, and try
+ * to receive.
+ * If ret is -1, we don't have room in the buffer, so we yield.
+ */
+ if (ret == -1) {
+ si_applet_cant_put(si);
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_applet_http_getline_yield, TICK_ETERNITY, 0));
+ }
+ appctx->appctx->ctx.hlua_apphttp.flags &= ~APPLET_100C;
+ }
+
+ /* Check for the end of the data. */
+ if (appctx->appctx->ctx.hlua_apphttp.left_bytes <= 0) {
+ luaL_pushresult(&appctx->b);
+ return 1;
+ }
+
+ /* Read the maximum amount of data available. */
+ ret = bo_getline_nc(si_oc(si), &blk1, &len1, &blk2, &len2);
+
+ /* Data not yet available: return a yield. */
+ if (ret == 0) {
+ si_applet_cant_get(si);
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_applet_http_getline_yield, TICK_ETERNITY, 0));
+ }
+
+ /* End of data: commit the total strings and return. */
+ if (ret < 0) {
+ luaL_pushresult(&appctx->b);
+ return 1;
+ }
+
+ /* Ensure that the block 2 length is usable. */
+ if (ret == 1)
+ len2 = 0;
+
+ /* Copy the first block, capping it to the required length. */
+ if (len1 > appctx->appctx->ctx.hlua_apphttp.left_bytes)
+ len1 = appctx->appctx->ctx.hlua_apphttp.left_bytes;
+ luaL_addlstring(&appctx->b, blk1, len1);
+ appctx->appctx->ctx.hlua_apphttp.left_bytes -= len1;
+
+ /* Copy the second block. */
+ if (len2 > appctx->appctx->ctx.hlua_apphttp.left_bytes)
+ len2 = appctx->appctx->ctx.hlua_apphttp.left_bytes;
+ luaL_addlstring(&appctx->b, blk2, len2);
+ appctx->appctx->ctx.hlua_apphttp.left_bytes -= len2;
+
+ /* Consume input channel output buffer data. */
+ bo_skip(si_oc(si), len1 + len2);
+ luaL_pushresult(&appctx->b);
+ return 1;
+}
+
+/* Check arguments for the function "hlua_applet_http_getline_yield". */
+__LJMP static int hlua_applet_http_getline(lua_State *L)
+{
+ struct hlua_appctx *appctx = MAY_LJMP(hlua_checkapplet_http(L, 1));
+
+ /* Initialise the string concatenation buffer. */
+ luaL_buffinit(L, &appctx->b);
+
+ return MAY_LJMP(hlua_applet_http_getline_yield(L, 0, 0));
+}
+
+/* If the expected data is not yet available, this function yields.
+ * It consumes the data in the buffer and returns a string containing
+ * the data. This string can be empty.
+ */
+__LJMP static int hlua_applet_http_recv_yield(lua_State *L, int status, lua_KContext ctx)
+{
+ struct hlua_appctx *appctx = MAY_LJMP(hlua_checkapplet_http(L, 1));
+ struct stream_interface *si = appctx->appctx->owner;
+ int len = MAY_LJMP(luaL_checkinteger(L, 2));
+ struct channel *chn = si_ic(si);
+ int ret;
+ char *blk1;
+ int len1;
+ char *blk2;
+ int len2;
+
+ /* Do we need to send a 100-continue first? */
+ if (appctx->appctx->ctx.hlua_apphttp.flags & APPLET_100C) {
+ ret = bi_putblk(chn, HTTP_100C, strlen(HTTP_100C));
+ /* If ret == -2 or -3, the channel is closed or the message is too
+ * big for the buffers. We cannot send anything, so we ignore
+ * the error, consider the 100-continue as sent, and try
+ * to receive.
+ * If ret is -1, we don't have room in the buffer, so we yield.
+ */
+ if (ret == -1) {
+ si_applet_cant_put(si);
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_applet_http_recv_yield, TICK_ETERNITY, 0));
+ }
+ appctx->appctx->ctx.hlua_apphttp.flags &= ~APPLET_100C;
+ }
+
+ /* Read the maximum amount of data available. */
+ ret = bo_getblk_nc(si_oc(si), &blk1, &len1, &blk2, &len2);
+
+ /* Data not yet available: return a yield. */
+ if (ret == 0) {
+ si_applet_cant_get(si);
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_applet_http_recv_yield, TICK_ETERNITY, 0));
+ }
+
+ /* End of data: commit the total strings and return. */
+ if (ret < 0) {
+ luaL_pushresult(&appctx->b);
+ return 1;
+ }
+
+ /* Ensure that the block 2 length is usable. */
+ if (ret == 1)
+ len2 = 0;
+
+ /* Copy the first block, capping it to the required length. */
+ if (len1 > len)
+ len1 = len;
+ luaL_addlstring(&appctx->b, blk1, len1);
+ len -= len1;
+
+ /* Copy the second block. */
+ if (len2 > len)
+ len2 = len;
+ luaL_addlstring(&appctx->b, blk2, len2);
+ len -= len2;
+
+ /* Consume input channel output buffer data. */
+ bo_skip(si_oc(si), len1 + len2);
+ if (appctx->appctx->ctx.hlua_apphttp.left_bytes != -1)
+ appctx->appctx->ctx.hlua_apphttp.left_bytes -= len;
+
+ /* If the required length is not reached yet, yield waiting for new data. */
+ if (len > 0) {
+ lua_pushinteger(L, len);
+ lua_replace(L, 2);
+ si_applet_cant_get(si);
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_applet_http_recv_yield, TICK_ETERNITY, 0));
+ }
+
+ /* return the result. */
+ luaL_pushresult(&appctx->b);
+ return 1;
+}
+
+/* Check arguments for the function "hlua_applet_http_recv_yield". */
+__LJMP static int hlua_applet_http_recv(lua_State *L)
+{
+ struct hlua_appctx *appctx = MAY_LJMP(hlua_checkapplet_http(L, 1));
+ int len = -1;
+
+ /* Check arguments. */
+ if (lua_gettop(L) > 2)
+ WILL_LJMP(luaL_error(L, "The 'recv' function requires between 1 and 2 arguments."));
+ if (lua_gettop(L) >= 2) {
+ len = MAY_LJMP(luaL_checkinteger(L, 2));
+ lua_pop(L, 1);
+ }
+
+ /* Check the required length */
+ if (len == -1 || len > appctx->appctx->ctx.hlua_apphttp.left_bytes)
+ len = appctx->appctx->ctx.hlua_apphttp.left_bytes;
+ lua_pushinteger(L, len);
+
+ /* Initialise the string concatenation buffer. */
+ luaL_buffinit(L, &appctx->b);
+
+ return MAY_LJMP(hlua_applet_http_recv_yield(L, 0, 0));
+}
+
+/* Append data to the output side of the buffer. This data is immediately
+ * sent. The function returns the amount of data written. If the buffer
+ * cannot contain all the data, the function yields. The function returns -1
+ * if the channel is closed.
+ */
+__LJMP static int hlua_applet_http_send_yield(lua_State *L, int status, lua_KContext ctx)
+{
+ size_t len;
+ struct hlua_appctx *appctx = MAY_LJMP(hlua_checkapplet_http(L, 1));
+ const char *str = MAY_LJMP(luaL_checklstring(L, 2, &len));
+ int l = MAY_LJMP(luaL_checkinteger(L, 3));
+ struct stream_interface *si = appctx->appctx->owner;
+ struct channel *chn = si_ic(si);
+ int max;
+
+ /* Get the max amount of data which can be written into the channel. */
+ max = channel_recv_max(chn);
+ if (max > (len - l))
+ max = len - l;
+
+ /* Copy data. */
+ bi_putblk(chn, str + l, max);
+
+ /* update counters. */
+ l += max;
+ lua_pop(L, 1);
+ lua_pushinteger(L, l);
+
+ /* If some data was not sent, declare the situation to the
+ * applet and return a yield.
+ */
+ if (l < len) {
+ si_applet_cant_put(si);
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_applet_http_send_yield, TICK_ETERNITY, 0));
+ }
+
+ return 1;
+}
+
+/* Just a wrapper around "hlua_applet_http_send_yield". This wrapper permits
+ * yielding the Lua process and resuming it without checking the
+ * input arguments.
+ */
+__LJMP static int hlua_applet_http_send(lua_State *L)
+{
+ struct hlua_appctx *appctx = MAY_LJMP(hlua_checkapplet_http(L, 1));
+ size_t len;
+ char hex[10];
+
+ MAY_LJMP(luaL_checklstring(L, 2, &len));
+
+ /* If chunked transfer encoding is selected, we surround the data
+ * with the chunk envelope.
+ */
+ if (appctx->appctx->ctx.hlua_apphttp.flags & APPLET_CHUNKED) {
+ snprintf(hex, 9, "%x", (unsigned int)len);
+ lua_pushfstring(L, "%s\r\n", hex);
+ lua_insert(L, 2); /* swap the last 2 entries. */
+ lua_pushstring(L, "\r\n");
+ lua_concat(L, 3);
+ }
+
+ /* This integer is used for tracking the amount of data sent. */
+ lua_pushinteger(L, 0);
+
+ /* We want to send some data. The headers must have been sent first. */
+ if (!(appctx->appctx->ctx.hlua_apphttp.flags & APPLET_HDR_SENT)) {
+ hlua_pusherror(L, "Lua: 'send': you must call start_response() before sending data.");
+ WILL_LJMP(lua_error(L));
+ }
+
+ return MAY_LJMP(hlua_applet_http_send_yield(L, 0, 0));
+}
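+/* Note (illustrative, not part of this patch): with APPLET_CHUNKED set,
+ * a call such as applet:send("hello") produces the chunk envelope built
+ * above, i.e. on the wire:
+ *
+ *   5\r\n
+ *   hello\r\n
+ */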
+
+__LJMP static int hlua_applet_http_addheader(lua_State *L)
+{
+ const char *name;
+ int ret;
+
+ MAY_LJMP(hlua_checkapplet_http(L, 1));
+ name = MAY_LJMP(luaL_checkstring(L, 2));
+ MAY_LJMP(luaL_checkstring(L, 3));
+
+ /* Push in the stack the "response" entry. */
+ ret = lua_getfield(L, 1, "response");
+ if (ret != LUA_TTABLE) {
+ hlua_pusherror(L, "Lua: 'add_header' internal error: AppletHTTP['response'] "
+ "is expected as an array. %s found", lua_typename(L, ret));
+ WILL_LJMP(lua_error(L));
+ }
+
+ /* Check whether the header is already registered. If it is not,
+ * register it.
+ */
+ ret = lua_getfield(L, -1, name);
+ if (ret == LUA_TNIL) {
+
+ /* Entry not found. */
+ lua_pop(L, 1); /* remove the nil. The "response" table is the top of the stack. */
+
+ /* Register a new value array in the "response" table under the
+ * header name. The new array is left on the top of the stack.
+ */
+ lua_newtable(L);
+ lua_pushvalue(L, 2);
+ lua_pushvalue(L, -2);
+ lua_settable(L, -4);
+
+ } else if (ret != LUA_TTABLE) {
+
+ /* corruption error. */
+ hlua_pusherror(L, "Lua: 'add_header' internal error: AppletHTTP['response']['%s'] "
+ "is expected as an array. %s found", name, lua_typename(L, ret));
+ WILL_LJMP(lua_error(L));
+ }
+
+ /* Now the top of the stack is an array of values. We push
+ * the header value as a new entry.
+ */
+ lua_pushvalue(L, 3);
+ ret = lua_rawlen(L, -2);
+ lua_rawseti(L, -2, ret + 1);
+ lua_pushboolean(L, 1);
+ return 1;
+}
+
+__LJMP static int hlua_applet_http_status(lua_State *L)
+{
+ struct hlua_appctx *appctx = MAY_LJMP(hlua_checkapplet_http(L, 1));
+ int status = MAY_LJMP(luaL_checkinteger(L, 2));
+
+ if (status < 100 || status > 599) {
+ lua_pushboolean(L, 0);
+ return 1;
+ }
+
+ appctx->appctx->ctx.hlua_apphttp.status = status;
+ lua_pushboolean(L, 1);
+ return 1;
+}
+
+/* Build the status line and the headers of the HTTP response and
+ * try to send them at once. If this is not possible, we give back
+ * the hand, waiting for more room.
+ */
+__LJMP static int hlua_applet_http_start_response_yield(lua_State *L, int status, lua_KContext ctx)
+{
+ struct hlua_appctx *appctx = MAY_LJMP(hlua_checkapplet_http(L, 1));
+ struct stream_interface *si = appctx->appctx->owner;
+ struct channel *chn = si_ic(si);
+ int ret;
+ size_t len;
+ const char *msg;
+
+ /* Get the message as the first argument on the stack. */
+ msg = MAY_LJMP(luaL_checklstring(L, 2, &len));
+
+ /* Send the message at once. */
+ ret = bi_putblk(chn, msg, len);
+
+ /* If ret == -2 or -3, the channel is closed or the message is too
+ * big for the buffers.
+ */
+ if (ret == -2 || ret == -3) {
+ hlua_pusherror(L, "Lua: 'start_response': response header block too big");
+ WILL_LJMP(lua_error(L));
+ }
+
+ /* If ret is -1, we don't have room in the buffer, so we yield. */
+ if (ret == -1) {
+ si_applet_cant_put(si);
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_applet_http_start_response_yield, TICK_ETERNITY, 0));
+ }
+
+ /* Headers sent, set the flag. */
+ appctx->appctx->ctx.hlua_apphttp.flags |= APPLET_HDR_SENT;
+ return 0;
+}
+
+__LJMP static int hlua_applet_http_start_response(lua_State *L)
+{
+ struct chunk *tmp = get_trash_chunk();
+ struct hlua_appctx *appctx = MAY_LJMP(hlua_checkapplet_http(L, 1));
+ const char *name;
+ const char *value;
+ int id;
+ int hdr_connection = 0;
+ int hdr_contentlength = -1;
+ int hdr_chunked = 0;
+
+ /* Use the same HTTP version as the request. */
+ chunk_appendf(tmp, "HTTP/1.%c %d %s\r\n",
+ appctx->appctx->ctx.hlua_apphttp.flags & APPLET_HTTP11 ? '1' : '0',
+ appctx->appctx->ctx.hlua_apphttp.status,
+ get_reason(appctx->appctx->ctx.hlua_apphttp.status));
+
+ /* Get the array associated to the field "response" in the object AppletHTTP. */
+ lua_pushvalue(L, 0);
+ if (lua_getfield(L, 1, "response") != LUA_TTABLE) {
+ hlua_pusherror(L, "Lua applet http '%s': AppletHTTP['response'] missing.\n",
+ appctx->appctx->rule->arg.hlua_rule->fcn.name);
+ WILL_LJMP(lua_error(L));
+ }
+
+ /* Browse the list of headers. */
+ lua_pushnil(L);
+ while(lua_next(L, -2) != 0) {
+
+ /* We expect a string as -2. */
+ if (lua_type(L, -2) != LUA_TSTRING) {
+ hlua_pusherror(L, "Lua applet http '%s': AppletHTTP['response'][] element must be a string. got %s.\n",
+ appctx->appctx->rule->arg.hlua_rule->fcn.name,
+ lua_typename(L, lua_type(L, -2)));
+ WILL_LJMP(lua_error(L));
+ }
+ name = lua_tostring(L, -2);
+
+ /* We expect an array as -1. */
+ if (lua_type(L, -1) != LUA_TTABLE) {
+ hlua_pusherror(L, "Lua applet http '%s': AppletHTTP['response']['%s'] element must be a table. got %s.\n",
+ appctx->appctx->rule->arg.hlua_rule->fcn.name,
+ name,
+ lua_typename(L, lua_type(L, -1)));
+ WILL_LJMP(lua_error(L));
+ }
+
+ /* Browse the table which is on the top of the stack. */
+ lua_pushnil(L);
+ while(lua_next(L, -2) != 0) {
+
+ /* We expect a number as -2. */
+ if (lua_type(L, -2) != LUA_TNUMBER) {
+ hlua_pusherror(L, "Lua applet http '%s': AppletHTTP['response']['%s'][] element must be a number. got %s.\n",
+ appctx->appctx->rule->arg.hlua_rule->fcn.name,
+ name,
+ lua_typename(L, lua_type(L, -2)));
+ WILL_LJMP(lua_error(L));
+ }
+ id = lua_tointeger(L, -2);
+
+ /* We expect a string as -1. */
+ if (lua_type(L, -1) != LUA_TSTRING) {
+ hlua_pusherror(L, "Lua applet http '%s': AppletHTTP['response']['%s'][%d] element must be a string. got %s.\n",
+ appctx->appctx->rule->arg.hlua_rule->fcn.name,
+ name, id,
+ lua_typename(L, lua_type(L, -1)));
+ WILL_LJMP(lua_error(L));
+ }
+ value = lua_tostring(L, -1);
+
+ /* Append a new header. */
+ chunk_appendf(tmp, "%s: %s\r\n", name, value);
+
+ /* Protocol checks. */
+
+ /* Check if the "connection" header is present. */
+ if (strcasecmp("connection", name) == 0)
+ hdr_connection = 1;
+
+ /* Copy the content-length header value. The length conversion
+ * is done without validation. If it contains a bad value, this
+ * is not our problem.
+ */
+ if (strcasecmp("content-length", name) == 0)
+ hdr_contentlength = atoi(value);
+
+ /* Check if the applet announces a chunked transfer-encoding itself. */
+ if (strcasecmp("transfer-encoding", name) == 0 &&
+ strcasecmp("chunked", value) == 0)
+ hdr_chunked = 1;
+
+ /* Remove the array from the stack, and get next element with a remaining string. */
+ lua_pop(L, 1);
+ }
+
+ /* Remove the array from the stack, and get next element with a remaining string. */
+ lua_pop(L, 1);
+ }
+
+ /* If the HTTP protocol version is 1.1, we expect a "connection" header
+ * set to "close" to be HAProxy/keep-alive compliant. Otherwise, we expect nothing.
+ * If the "connection" header is present, don't change it; if it is not present,
+ * we must set it.
+ *
+ * We set a "Connection: close" header to ensure that keep-alive is
+ * handled by HAProxy. HAProxy considers that the applet closes the connection
+ * while it keeps the connection from the client open.
+ */
+ if (appctx->appctx->ctx.hlua_apphttp.flags & APPLET_HTTP11 && !hdr_connection)
+ chunk_appendf(tmp, "Connection: close\r\n");
+
+ /* If we don't have a content-length set, we must announce a chunked
+ * transfer encoding. This is required by HAProxy for keep-alive compliance.
+ * If the applet announces a chunked transfer-encoding itself, don't
+ * do anything.
+ */
+ if (hdr_contentlength == -1 && hdr_chunked == 0) {
+ chunk_appendf(tmp, "Transfer-encoding: chunked\r\n");
+ appctx->appctx->ctx.hlua_apphttp.flags |= APPLET_CHUNKED;
+ }
+
+ /* Finalize headers. */
+ chunk_appendf(tmp, "\r\n");
+
+ /* Remove the last entry and the array of headers */
+ lua_pop(L, 2);
+
+ /* Push the headers block. */
+ lua_pushlstring(L, tmp->str, tmp->len);
+
+ return MAY_LJMP(hlua_applet_http_start_response_yield(L, 0, 0));
+}
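+/* Usage note (illustrative sketch, not part of this patch): a complete HTTP
+ * applet using the functions above, registered in "http" mode:
+ *
+ *   core.register_service("hello", "http", function(applet)
+ *       applet:set_status(200)
+ *       applet:add_header("content-type", "text/plain")
+ *       applet:start_response() -- builds and sends status line and headers
+ *       applet:send("Hello from Lua!\n")
+ *   end)
+ */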
+
+/*
+ *
+ *
+ * Class HTTP
+ *
+ *
+ */
+
+/* Returns a struct hlua_txn if the stack entry "ud" is
+ * a class HTTP object, otherwise it throws an error.
+ */
+__LJMP static struct hlua_txn *hlua_checkhttp(lua_State *L, int ud)
+{
+ return (struct hlua_txn *)MAY_LJMP(hlua_checkudata(L, ud, class_http_ref));
+}
+
+/* This function creates and pushes on the stack an HTTP
+ * object bound to the current TXN.
+ */
+static int hlua_http_new(lua_State *L, struct hlua_txn *txn)
+{
+ struct hlua_txn *htxn;
+
+ /* Check stack size. */
+ if (!lua_checkstack(L, 3))
+ return 0;
+
+ /* Create the object: obj[0] = userdata.
+ * Note that the base of the HTTP object is the
+ * same as that of the TXN object.
+ */
+ lua_newtable(L);
+ htxn = lua_newuserdata(L, sizeof(*htxn));
+ lua_rawseti(L, -2, 0);
+
+ htxn->s = txn->s;
+ htxn->p = txn->p;
+
+ /* Retrieve the class HTTP metatable and set it on the table. */
+ lua_rawgeti(L, LUA_REGISTRYINDEX, class_http_ref);
+ lua_setmetatable(L, -2);
+
+ return 1;
+}
+
+/* This function creates and returns an array of HTTP headers.
+ * This function does not fail. It is used as a wrapper by the
+ * 2 following functions.
+ */
+__LJMP static int hlua_http_get_headers(lua_State *L, struct hlua_txn *htxn, struct http_msg *msg)
+{
+ const char *cur_ptr, *cur_next, *p;
+ int old_idx, cur_idx;
+ struct hdr_idx_elem *cur_hdr;
+ const char *hn, *hv;
+ int hnl, hvl;
+ int type;
+ const char *in;
+ char *out;
+ int len;
+
+ /* Create the table. */
+ lua_newtable(L);
+
+ if (!htxn->s->txn)
+ return 1;
+
+ /* Build array of headers. */
+ old_idx = 0;
+ cur_next = msg->chn->buf->p + hdr_idx_first_pos(&htxn->s->txn->hdr_idx);
+
+ while (1) {
+ cur_idx = htxn->s->txn->hdr_idx.v[old_idx].next;
+ if (!cur_idx)
+ break;
+ old_idx = cur_idx;
+
+ cur_hdr = &htxn->s->txn->hdr_idx.v[cur_idx];
+ cur_ptr = cur_next;
+ cur_next = cur_ptr + cur_hdr->len + cur_hdr->cr + 1;
+
+ /* Now we have one full header at cur_ptr of len cur_hdr->len,
+ * and the next header starts at cur_next. We'll check
+ * this header in the list as well as against the default
+ * rule.
+ */
+
+ /* look for ': *'. */
+ hn = cur_ptr;
+ for (p = cur_ptr; p < cur_ptr + cur_hdr->len && *p != ':'; p++);
+ if (p >= cur_ptr+cur_hdr->len)
+ continue;
+ hnl = p - hn;
+ p++;
+ while (p < cur_ptr+cur_hdr->len && ( *p == ' ' || *p == '\t' ))
+ p++;
+ if (p >= cur_ptr+cur_hdr->len)
+ continue;
+ hv = p;
+ hvl = cur_ptr+cur_hdr->len-p;
+
+ /* Lowercase the key. Don't check the size of trash: it has
+ * the size of one buffer, and the input data is contained in
+ * one buffer.
+ */
+ out = trash.str;
+ for (in=hn; in<hn+hnl; in++, out++)
+ *out = tolower(*in);
+ *out = '\0';
+
+ /* Check for an existing entry:
+ * assume that the table is on the top of the stack, and
+ * push the key on the stack; the function lua_gettable()
+ * performs the lookup.
+ */
+ lua_pushlstring(L, trash.str, hnl);
+ lua_gettable(L, -2);
+ type = lua_type(L, -1);
+
+ switch (type) {
+ case LUA_TNIL:
+ /* Table not found, create it. */
+ lua_pop(L, 1); /* remove the nil value. */
+ lua_pushlstring(L, trash.str, hnl); /* push the header name as key. */
+ lua_newtable(L); /* create and push empty table. */
+ lua_pushlstring(L, hv, hvl); /* push header value. */
+ lua_rawseti(L, -2, 0); /* index header value (pop it). */
+ lua_rawset(L, -3); /* index new table with header name (pop the values). */
+ break;
+
+ case LUA_TTABLE:
+ /* Entry found: push the value in the table. */
+ len = lua_rawlen(L, -1);
+ lua_pushlstring(L, hv, hvl); /* push header value. */
+ lua_rawseti(L, -2, len+1); /* index header value (pop it). */
+ lua_pop(L, 1); /* remove the table (it is stored in the main table). */
+ break;
+
+ default:
+ /* Other cases are errors. */
+ hlua_pusherror(L, "internal error during the parsing of headers.");
+ WILL_LJMP(lua_error(L));
+ }
+ }
+
+ return 1;
+}
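The "look for ': *'" scan above can be isolated as a small pure-C parser: find the colon, then skip spaces and tabs before the value. This is a hypothetical standalone sketch of the same loop (the real code above works on hdr_idx entries):

```c
/* Hypothetical sketch of the header split above: given one raw header
 * line of length <len>, locate the name and the value, skipping the
 * colon and any following spaces or tabs. Returns 0 when the line
 * carries no usable value. */
static int split_header(const char *line, int len,
                        int *name_len, const char **value, int *value_len)
{
	const char *p = line;

	/* look for ':' terminating the header name. */
	while (p < line + len && *p != ':')
		p++;
	if (p >= line + len)
		return 0;
	*name_len = p - line;
	p++;
	/* skip optional whitespace before the value. */
	while (p < line + len && (*p == ' ' || *p == '\t'))
		p++;
	if (p >= line + len)
		return 0;
	*value = p;
	*value_len = line + len - p;
	return 1;
}
```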
+
+__LJMP static int hlua_http_req_get_headers(lua_State *L)
+{
+ struct hlua_txn *htxn;
+
+ MAY_LJMP(check_args(L, 1, "req_get_headers"));
+ htxn = MAY_LJMP(hlua_checkhttp(L, 1));
+
+ return hlua_http_get_headers(L, htxn, &htxn->s->txn->req);
+}
+
+__LJMP static int hlua_http_res_get_headers(lua_State *L)
+{
+ struct hlua_txn *htxn;
+
+ MAY_LJMP(check_args(L, 1, "res_get_headers"));
+ htxn = MAY_LJMP(hlua_checkhttp(L, 1));
+
+ return hlua_http_get_headers(L, htxn, &htxn->s->txn->rsp);
+}
+
+/* This function replaces a full header, or just its value, in
+ * the request or in the response. It is a wrapper for the
+ * 4 following functions.
+ */
+__LJMP static inline int hlua_http_rep_hdr(lua_State *L, struct hlua_txn *htxn,
+ struct http_msg *msg, int action)
+{
+ size_t name_len;
+ const char *name = MAY_LJMP(luaL_checklstring(L, 2, &name_len));
+ const char *reg = MAY_LJMP(luaL_checkstring(L, 3));
+ const char *value = MAY_LJMP(luaL_checkstring(L, 4));
+ struct my_regex re;
+
+ if (!regex_comp(reg, &re, 1, 1, NULL))
+ WILL_LJMP(luaL_argerror(L, 3, "invalid regex"));
+
+ http_transform_header_str(htxn->s, msg, name, name_len, value, &re, action);
+ regex_free(&re);
+ return 0;
+}
+
+__LJMP static int hlua_http_req_rep_hdr(lua_State *L)
+{
+ struct hlua_txn *htxn;
+
+ MAY_LJMP(check_args(L, 4, "req_rep_hdr"));
+ htxn = MAY_LJMP(hlua_checkhttp(L, 1));
+
+ return MAY_LJMP(hlua_http_rep_hdr(L, htxn, &htxn->s->txn->req, ACT_HTTP_REPLACE_HDR));
+}
+
+__LJMP static int hlua_http_res_rep_hdr(lua_State *L)
+{
+ struct hlua_txn *htxn;
+
+ MAY_LJMP(check_args(L, 4, "res_rep_hdr"));
+ htxn = MAY_LJMP(hlua_checkhttp(L, 1));
+
+ return MAY_LJMP(hlua_http_rep_hdr(L, htxn, &htxn->s->txn->rsp, ACT_HTTP_REPLACE_HDR));
+}
+
+__LJMP static int hlua_http_req_rep_val(lua_State *L)
+{
+ struct hlua_txn *htxn;
+
+ MAY_LJMP(check_args(L, 4, "req_rep_val"));
+ htxn = MAY_LJMP(hlua_checkhttp(L, 1));
+
+ return MAY_LJMP(hlua_http_rep_hdr(L, htxn, &htxn->s->txn->req, ACT_HTTP_REPLACE_VAL));
+}
+
+__LJMP static int hlua_http_res_rep_val(lua_State *L)
+{
+ struct hlua_txn *htxn;
+
+ MAY_LJMP(check_args(L, 4, "res_rep_val"));
+ htxn = MAY_LJMP(hlua_checkhttp(L, 1));
+
+ return MAY_LJMP(hlua_http_rep_hdr(L, htxn, &htxn->s->txn->rsp, ACT_HTTP_REPLACE_VAL));
+}
+
+/* This function deletes all the occurrences of a header.
+ * It is a wrapper for the 2 following functions.
+ */
+__LJMP static inline int hlua_http_del_hdr(lua_State *L, struct hlua_txn *htxn, struct http_msg *msg)
+{
+ size_t len;
+ const char *name = MAY_LJMP(luaL_checklstring(L, 2, &len));
+ struct hdr_ctx ctx;
+ struct http_txn *txn = htxn->s->txn;
+
+ ctx.idx = 0;
+ while (http_find_header2(name, len, msg->chn->buf->p, &txn->hdr_idx, &ctx))
+ http_remove_header2(msg, &txn->hdr_idx, &ctx);
+ return 0;
+}
+
+__LJMP static int hlua_http_req_del_hdr(lua_State *L)
+{
+ struct hlua_txn *htxn;
+
+ MAY_LJMP(check_args(L, 2, "req_del_hdr"));
+ htxn = MAY_LJMP(hlua_checkhttp(L, 1));
+
+ return hlua_http_del_hdr(L, htxn, &htxn->s->txn->req);
+}
+
+__LJMP static int hlua_http_res_del_hdr(lua_State *L)
+{
+ struct hlua_txn *htxn;
+
+ MAY_LJMP(check_args(L, 2, "res_del_hdr"));
+ htxn = MAY_LJMP(hlua_checkhttp(L, 1));
+
+ return hlua_http_del_hdr(L, htxn, &htxn->s->txn->rsp);
+}
+
+/* This function adds a header. It is a wrapper used by
+ * the 2 following functions.
+ */
+__LJMP static inline int hlua_http_add_hdr(lua_State *L, struct hlua_txn *htxn, struct http_msg *msg)
+{
+ size_t name_len;
+ const char *name = MAY_LJMP(luaL_checklstring(L, 2, &name_len));
+ size_t value_len;
+ const char *value = MAY_LJMP(luaL_checklstring(L, 3, &value_len));
+ char *p;
+
+ /* Check length. */
+ trash.len = value_len + name_len + 2;
+ if (trash.len > trash.size)
+ return 0;
+
+ /* Creates the header string. */
+ p = trash.str;
+ memcpy(p, name, name_len);
+ p += name_len;
+ *p = ':';
+ p++;
+ *p = ' ';
+ p++;
+ memcpy(p, value, value_len);
+
+ lua_pushboolean(L, http_header_add_tail2(msg, &htxn->s->txn->hdr_idx,
+ trash.str, trash.len) != 0);
+
+ return 0;
+}
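The trash-buffer construction above writes "&lt;name&gt;: &lt;value&gt;" byte by byte after checking that it fits. A self-contained sketch of the same length check and copy (hypothetical helper, not the HAProxy API):

```c
#include <string.h>

/* Hypothetical sketch of the header-line construction above: write
 * "<name>: <value>" into a caller-provided buffer. Returns the number
 * of bytes written, or 0 if the buffer is too small. */
static int build_header_line(char *dst, size_t size,
                             const char *name, size_t name_len,
                             const char *value, size_t value_len)
{
	size_t len = name_len + 2 + value_len; /* name + ": " + value */

	if (len > size)
		return 0;
	memcpy(dst, name, name_len);
	dst[name_len] = ':';
	dst[name_len + 1] = ' ';
	memcpy(dst + name_len + 2, value, value_len);
	return (int)len;
}
```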
+
+__LJMP static int hlua_http_req_add_hdr(lua_State *L)
+{
+ struct hlua_txn *htxn;
+
+ MAY_LJMP(check_args(L, 3, "req_add_hdr"));
+ htxn = MAY_LJMP(hlua_checkhttp(L, 1));
+
+ return hlua_http_add_hdr(L, htxn, &htxn->s->txn->req);
+}
+
+__LJMP static int hlua_http_res_add_hdr(lua_State *L)
+{
+ struct hlua_txn *htxn;
+
+ MAY_LJMP(check_args(L, 3, "res_add_hdr"));
+ htxn = MAY_LJMP(hlua_checkhttp(L, 1));
+
+ return hlua_http_add_hdr(L, htxn, &htxn->s->txn->rsp);
+}
+
+static int hlua_http_req_set_hdr(lua_State *L)
+{
+ struct hlua_txn *htxn;
+
+ MAY_LJMP(check_args(L, 3, "req_set_hdr"));
+ htxn = MAY_LJMP(hlua_checkhttp(L, 1));
+
+ hlua_http_del_hdr(L, htxn, &htxn->s->txn->req);
+ return hlua_http_add_hdr(L, htxn, &htxn->s->txn->req);
+}
+
+static int hlua_http_res_set_hdr(lua_State *L)
+{
+ struct hlua_txn *htxn;
+
+ MAY_LJMP(check_args(L, 3, "res_set_hdr"));
+ htxn = MAY_LJMP(hlua_checkhttp(L, 1));
+
+ hlua_http_del_hdr(L, htxn, &htxn->s->txn->rsp);
+ return hlua_http_add_hdr(L, htxn, &htxn->s->txn->rsp);
+}
+
+/* This function sets the method. */
+static int hlua_http_req_set_meth(lua_State *L)
+{
+ struct hlua_txn *htxn = MAY_LJMP(hlua_checkhttp(L, 1));
+ size_t name_len;
+ const char *name = MAY_LJMP(luaL_checklstring(L, 2, &name_len));
+
+ lua_pushboolean(L, http_replace_req_line(0, name, name_len, htxn->p, htxn->s) != -1);
+ return 1;
+}
+
+/* This function sets the path. */
+static int hlua_http_req_set_path(lua_State *L)
+{
+ struct hlua_txn *htxn = MAY_LJMP(hlua_checkhttp(L, 1));
+ size_t name_len;
+ const char *name = MAY_LJMP(luaL_checklstring(L, 2, &name_len));
+ lua_pushboolean(L, http_replace_req_line(1, name, name_len, htxn->p, htxn->s) != -1);
+ return 1;
+}
+
+/* This function sets the query-string. */
+static int hlua_http_req_set_query(lua_State *L)
+{
+ struct hlua_txn *htxn = MAY_LJMP(hlua_checkhttp(L, 1));
+ size_t name_len;
+ const char *name = MAY_LJMP(luaL_checklstring(L, 2, &name_len));
+
+ /* Check length. */
+ if (name_len > trash.size - 1) {
+ lua_pushboolean(L, 0);
+ return 1;
+ }
+
+ /* Add the question mark as prefix. */
+ chunk_reset(&trash);
+ trash.str[trash.len++] = '?';
+ memcpy(trash.str + trash.len, name, name_len);
+ trash.len += name_len;
+
+ lua_pushboolean(L, http_replace_req_line(2, trash.str, trash.len, htxn->p, htxn->s) != -1);
+ return 1;
+}
+
+/* This function sets the uri. */
+static int hlua_http_req_set_uri(lua_State *L)
+{
+ struct hlua_txn *htxn = MAY_LJMP(hlua_checkhttp(L, 1));
+ size_t name_len;
+ const char *name = MAY_LJMP(luaL_checklstring(L, 2, &name_len));
+
+ lua_pushboolean(L, http_replace_req_line(3, name, name_len, htxn->p, htxn->s) != -1);
+ return 1;
+}
+
+/* This function sets the response status code. */
+static int hlua_http_res_set_status(lua_State *L)
+{
+ struct hlua_txn *htxn = MAY_LJMP(hlua_checkhttp(L, 1));
+ unsigned int code = MAY_LJMP(luaL_checkinteger(L, 2));
+
+ http_set_status(code, htxn->s);
+ return 0;
+}
+
+/*
+ *
+ *
+ * Class TXN
+ *
+ *
+ */
+
+/* Returns a struct hlua_txn if the stack entry "ud" is
+ * a class TXN object, otherwise it throws an error.
+ */
+__LJMP static struct hlua_txn *hlua_checktxn(lua_State *L, int ud)
+{
+ return (struct hlua_txn *)MAY_LJMP(hlua_checkudata(L, ud, class_txn_ref));
+}
+
+__LJMP static int hlua_set_var(lua_State *L)
+{
+ struct hlua_txn *htxn;
+ const char *name;
+ size_t len;
+ struct sample smp;
+
+ MAY_LJMP(check_args(L, 3, "set_var"));
+
+ /* It is useless to retrieve the stream, but this function
+ * runs only in a stream context.
+ */
+ htxn = MAY_LJMP(hlua_checktxn(L, 1));
+ name = MAY_LJMP(luaL_checklstring(L, 2, &len));
+
+ /* Convert the third argument into a sample. */
+ hlua_lua2smp(L, 3, &smp);
+
+ /* Store the sample in a variable. */
+ vars_set_by_name(name, len, htxn->s, &smp);
+ return 0;
+}
+
+__LJMP static int hlua_get_var(lua_State *L)
+{
+ struct hlua_txn *htxn;
+ const char *name;
+ size_t len;
+ struct sample smp;
+
+ MAY_LJMP(check_args(L, 2, "get_var"));
+
+ /* It is useless to retrieve the stream, but this function
+ * runs only in a stream context.
+ */
+ htxn = MAY_LJMP(hlua_checktxn(L, 1));
+ name = MAY_LJMP(luaL_checklstring(L, 2, &len));
+
+ if (!vars_get_by_name(name, len, htxn->s, &smp)) {
+ lua_pushnil(L);
+ return 1;
+ }
+
+ return hlua_smp2lua(L, &smp);
+}
+
+__LJMP static int hlua_set_priv(lua_State *L)
+{
+ struct hlua *hlua;
+
+ MAY_LJMP(check_args(L, 2, "set_priv"));
+
+ /* It is useless to retrieve the stream, but this function
+ * runs only in a stream context.
+ */
+ MAY_LJMP(hlua_checktxn(L, 1));
+ hlua = hlua_gethlua(L);
+
+ /* Remove previous value. */
+ if (hlua->Mref != -1)
+ luaL_unref(L, LUA_REGISTRYINDEX, hlua->Mref);
+
+ /* Get and store new value. */
+ lua_pushvalue(L, 2); /* Copy the element 2 at the top of the stack. */
+ hlua->Mref = luaL_ref(L, LUA_REGISTRYINDEX); /* pop the previously pushed value. */
+
+ return 0;
+}
+
+__LJMP static int hlua_get_priv(lua_State *L)
+{
+ struct hlua *hlua;
+
+ MAY_LJMP(check_args(L, 1, "get_priv"));
+
+ /* It is useless to retrieve the stream, but this function
+ * runs only in a stream context.
+ */
+ MAY_LJMP(hlua_checktxn(L, 1));
+ hlua = hlua_gethlua(L);
+
+ /* Push configuration index in the stack. */
+ lua_rawgeti(L, LUA_REGISTRYINDEX, hlua->Mref);
+
+ return 1;
+}
+
+/* Create a stack entry containing a class TXN. This function
+ * returns 0 if the stack does not contain free slots,
+ * otherwise it returns 1.
+ */
+static int hlua_txn_new(lua_State *L, struct stream *s, struct proxy *p, int dir)
+{
+ struct hlua_txn *htxn;
+
+ /* Check stack size. */
+ if (!lua_checkstack(L, 3))
+ return 0;
+
+ /* NOTE: The allocation never fails. A failure
+ * throws an error, and the function never returns.
+ * If throwing is not available, the process is aborted.
+ */
+ /* Create the object: obj[0] = userdata. */
+ lua_newtable(L);
+ htxn = lua_newuserdata(L, sizeof(*htxn));
+ lua_rawseti(L, -2, 0);
+
+ htxn->s = s;
+ htxn->p = p;
+ htxn->dir = dir;
+
+ /* Create the "f" field that contains a list of fetches. */
+ lua_pushstring(L, "f");
+ if (!hlua_fetches_new(L, htxn, HLUA_F_MAY_USE_HTTP))
+ return 0;
+ lua_rawset(L, -3);
+
+ /* Create the "sf" field that contains a list of stringsafe fetches. */
+ lua_pushstring(L, "sf");
+ if (!hlua_fetches_new(L, htxn, HLUA_F_MAY_USE_HTTP | HLUA_F_AS_STRING))
+ return 0;
+ lua_rawset(L, -3);
+
+ /* Create the "c" field that contains a list of converters. */
+ lua_pushstring(L, "c");
+ if (!hlua_converters_new(L, htxn, 0))
+ return 0;
+ lua_rawset(L, -3);
+
+ /* Create the "sc" field that contains a list of stringsafe converters. */
+ lua_pushstring(L, "sc");
+ if (!hlua_converters_new(L, htxn, HLUA_F_AS_STRING))
+ return 0;
+ lua_rawset(L, -3);
+
+ /* Create the "req" field that contains the request channel object. */
+ lua_pushstring(L, "req");
+ if (!hlua_channel_new(L, &s->req))
+ return 0;
+ lua_rawset(L, -3);
+
+ /* Create the "res" field that contains the response channel object. */
+ lua_pushstring(L, "res");
+ if (!hlua_channel_new(L, &s->res))
+ return 0;
+ lua_rawset(L, -3);
+
+ /* Create the HTTP object if the current proxy allows http. */
+ lua_pushstring(L, "http");
+ if (p->mode == PR_MODE_HTTP) {
+ if (!hlua_http_new(L, htxn))
+ return 0;
+ }
+ else
+ lua_pushnil(L);
+ lua_rawset(L, -3);
+
+ /* Retrieve the class TXN metatable and set it on the table. */
+ lua_rawgeti(L, LUA_REGISTRYINDEX, class_txn_ref);
+ lua_setmetatable(L, -2);
+
+ return 1;
+}
+
+__LJMP static int hlua_txn_deflog(lua_State *L)
+{
+ const char *msg;
+ struct hlua_txn *htxn;
+
+ MAY_LJMP(check_args(L, 2, "deflog"));
+ htxn = MAY_LJMP(hlua_checktxn(L, 1));
+ msg = MAY_LJMP(luaL_checkstring(L, 2));
+
+ hlua_sendlog(htxn->s->be, htxn->s->logs.level, msg);
+ return 0;
+}
+
+__LJMP static int hlua_txn_log(lua_State *L)
+{
+ int level;
+ const char *msg;
+ struct hlua_txn *htxn;
+
+ MAY_LJMP(check_args(L, 3, "log"));
+ htxn = MAY_LJMP(hlua_checktxn(L, 1));
+ level = MAY_LJMP(luaL_checkinteger(L, 2));
+ msg = MAY_LJMP(luaL_checkstring(L, 3));
+
+ if (level < 0 || level >= NB_LOG_LEVELS)
+ WILL_LJMP(luaL_argerror(L, 2, "Invalid loglevel."));
+
+ hlua_sendlog(htxn->s->be, level, msg);
+ return 0;
+}
+
+__LJMP static int hlua_txn_log_debug(lua_State *L)
+{
+ const char *msg;
+ struct hlua_txn *htxn;
+
+ MAY_LJMP(check_args(L, 2, "Debug"));
+ htxn = MAY_LJMP(hlua_checktxn(L, 1));
+ msg = MAY_LJMP(luaL_checkstring(L, 2));
+ hlua_sendlog(htxn->s->be, LOG_DEBUG, msg);
+ return 0;
+}
+
+__LJMP static int hlua_txn_log_info(lua_State *L)
+{
+ const char *msg;
+ struct hlua_txn *htxn;
+
+ MAY_LJMP(check_args(L, 2, "Info"));
+ htxn = MAY_LJMP(hlua_checktxn(L, 1));
+ msg = MAY_LJMP(luaL_checkstring(L, 2));
+ hlua_sendlog(htxn->s->be, LOG_INFO, msg);
+ return 0;
+}
+
+__LJMP static int hlua_txn_log_warning(lua_State *L)
+{
+ const char *msg;
+ struct hlua_txn *htxn;
+
+ MAY_LJMP(check_args(L, 2, "Warning"));
+ htxn = MAY_LJMP(hlua_checktxn(L, 1));
+ msg = MAY_LJMP(luaL_checkstring(L, 2));
+ hlua_sendlog(htxn->s->be, LOG_WARNING, msg);
+ return 0;
+}
+
+__LJMP static int hlua_txn_log_alert(lua_State *L)
+{
+ const char *msg;
+ struct hlua_txn *htxn;
+
+ MAY_LJMP(check_args(L, 2, "Alert"));
+ htxn = MAY_LJMP(hlua_checktxn(L, 1));
+ msg = MAY_LJMP(luaL_checkstring(L, 2));
+ hlua_sendlog(htxn->s->be, LOG_ALERT, msg);
+ return 0;
+}
+
+__LJMP static int hlua_txn_set_loglevel(lua_State *L)
+{
+ struct hlua_txn *htxn;
+ int ll;
+
+ MAY_LJMP(check_args(L, 2, "set_loglevel"));
+ htxn = MAY_LJMP(hlua_checktxn(L, 1));
+ ll = MAY_LJMP(luaL_checkinteger(L, 2));
+
+ if (ll < 0 || ll > 7)
+ WILL_LJMP(luaL_argerror(L, 2, "Bad log level. It must be between 0 and 7"));
+
+ htxn->s->logs.level = ll;
+ return 0;
+}
+
+__LJMP static int hlua_txn_set_tos(lua_State *L)
+{
+ struct hlua_txn *htxn;
+ struct connection *cli_conn;
+ int tos;
+
+ MAY_LJMP(check_args(L, 2, "set_tos"));
+ htxn = MAY_LJMP(hlua_checktxn(L, 1));
+ tos = MAY_LJMP(luaL_checkinteger(L, 2));
+
+ if ((cli_conn = objt_conn(htxn->s->sess->origin)) && conn_ctrl_ready(cli_conn))
+ inet_set_tos(cli_conn->t.sock.fd, cli_conn->addr.from, tos);
+
+ return 0;
+}
+
+__LJMP static int hlua_txn_set_mark(lua_State *L)
+{
+#ifdef SO_MARK
+ struct hlua_txn *htxn;
+ struct connection *cli_conn;
+ int mark;
+
+ MAY_LJMP(check_args(L, 2, "set_mark"));
+ htxn = MAY_LJMP(hlua_checktxn(L, 1));
+ mark = MAY_LJMP(luaL_checkinteger(L, 2));
+
+ if ((cli_conn = objt_conn(htxn->s->sess->origin)) && conn_ctrl_ready(cli_conn))
+ setsockopt(cli_conn->t.sock.fd, SOL_SOCKET, SO_MARK, &mark, sizeof(mark));
+#endif
+ return 0;
+}
+
+/* This function is a Lua binding that sends pending data
+ * to the client and closes the stream interface.
+ */
+__LJMP static int hlua_txn_done(lua_State *L)
+{
+ struct hlua_txn *htxn;
+ struct channel *ic, *oc;
+
+ MAY_LJMP(check_args(L, 1, "close"));
+ htxn = MAY_LJMP(hlua_checktxn(L, 1));
+
+ ic = &htxn->s->req;
+ oc = &htxn->s->res;
+
+ if (htxn->s->txn) {
+ /* HTTP mode, let's stay in sync with the stream */
+ bi_fast_delete(ic->buf, htxn->s->txn->req.sov);
+ htxn->s->txn->req.next -= htxn->s->txn->req.sov;
+ htxn->s->txn->req.sov = 0;
+ ic->analysers &= AN_REQ_HTTP_XFER_BODY;
+ oc->analysers = AN_RES_HTTP_XFER_BODY;
+ htxn->s->txn->req.msg_state = HTTP_MSG_CLOSED;
+ htxn->s->txn->rsp.msg_state = HTTP_MSG_DONE;
+
+ /* Note that if we want to support keep-alive, we need
+ * to bypass the close/shutr_now calls below, but that
+ * may only be done if the HTTP request was already
+ * processed and the connection header is known (ie
+ * not during TCP rules).
+ */
+ }
+
+ channel_auto_read(ic);
+ channel_abort(ic);
+ channel_auto_close(ic);
+ channel_erase(ic);
+
+ oc->wex = tick_add_ifset(now_ms, oc->wto);
+ channel_auto_read(oc);
+ channel_auto_close(oc);
+ channel_shutr_now(oc);
+
+ ic->analysers = 0;
+
+ WILL_LJMP(hlua_done(L));
+ return 0;
+}
+
+__LJMP static int hlua_log(lua_State *L)
+{
+ int level;
+ const char *msg;
+
+ MAY_LJMP(check_args(L, 2, "log"));
+ level = MAY_LJMP(luaL_checkinteger(L, 1));
+ msg = MAY_LJMP(luaL_checkstring(L, 2));
+
+ if (level < 0 || level >= NB_LOG_LEVELS)
+ WILL_LJMP(luaL_argerror(L, 1, "Invalid loglevel."));
+
+ hlua_sendlog(NULL, level, msg);
+ return 0;
+}
+
+__LJMP static int hlua_log_debug(lua_State *L)
+{
+ const char *msg;
+
+ MAY_LJMP(check_args(L, 1, "debug"));
+ msg = MAY_LJMP(luaL_checkstring(L, 1));
+ hlua_sendlog(NULL, LOG_DEBUG, msg);
+ return 0;
+}
+
+__LJMP static int hlua_log_info(lua_State *L)
+{
+ const char *msg;
+
+ MAY_LJMP(check_args(L, 1, "info"));
+ msg = MAY_LJMP(luaL_checkstring(L, 1));
+ hlua_sendlog(NULL, LOG_INFO, msg);
+ return 0;
+}
+
+__LJMP static int hlua_log_warning(lua_State *L)
+{
+ const char *msg;
+
+ MAY_LJMP(check_args(L, 1, "warning"));
+ msg = MAY_LJMP(luaL_checkstring(L, 1));
+ hlua_sendlog(NULL, LOG_WARNING, msg);
+ return 0;
+}
+
+__LJMP static int hlua_log_alert(lua_State *L)
+{
+ const char *msg;
+
+ MAY_LJMP(check_args(L, 1, "alert"));
+ msg = MAY_LJMP(luaL_checkstring(L, 1));
+ hlua_sendlog(NULL, LOG_ALERT, msg);
+ return 0;
+}
+
+__LJMP static int hlua_sleep_yield(lua_State *L, int status, lua_KContext ctx)
+{
+ int wakeup_ms = lua_tointeger(L, -1);
+ if (now_ms < wakeup_ms)
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_sleep_yield, wakeup_ms, 0));
+ return 0;
+}
+
+__LJMP static int hlua_sleep(lua_State *L)
+{
+ unsigned int delay;
+ unsigned int wakeup_ms;
+
+ MAY_LJMP(check_args(L, 1, "sleep"));
+
+ delay = MAY_LJMP(luaL_checkinteger(L, 1)) * 1000;
+ wakeup_ms = tick_add(now_ms, delay);
+ lua_pushinteger(L, wakeup_ms);
+
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_sleep_yield, wakeup_ms, 0));
+ return 0;
+}
+
+__LJMP static int hlua_msleep(lua_State *L)
+{
+ unsigned int delay;
+ unsigned int wakeup_ms;
+
+ MAY_LJMP(check_args(L, 1, "msleep"));
+
+ delay = MAY_LJMP(luaL_checkinteger(L, 1));
+ wakeup_ms = tick_add(now_ms, delay);
+ lua_pushinteger(L, wakeup_ms);
+
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_sleep_yield, wakeup_ms, 0));
+ return 0;
+}
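Both sleep bindings store an absolute wakeup tick computed with tick_add(). Millisecond ticks wrap around, so the direct `now_ms < wakeup_ms` comparison in hlua_sleep_yield() is only valid far from the wrap point; a wrap-safe test goes through a signed difference. A minimal sketch under that assumption (hypothetical helper, not HAProxy's tick API):

```c
/* Hypothetical wrap-safe tick comparison: ticks are 32-bit millisecond
 * counters that wrap, so compare via the signed difference. */
static int tick_reached(unsigned int now, unsigned int deadline)
{
	return (int)(now - deadline) >= 0;
}
```

The third test case below crosses the 32-bit wrap point, where a plain `<` comparison would give the wrong answer.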
+
+/* This function is a Lua binding. It permits to give back
+ * the hand to the HAProxy scheduler. It is used when the
+ * Lua processing consumes a lot of time.
+ */
+__LJMP static int hlua_yield_yield(lua_State *L, int status, lua_KContext ctx)
+{
+ return 0;
+}
+
+__LJMP static int hlua_yield(lua_State *L)
+{
+ WILL_LJMP(hlua_yieldk(L, 0, 0, hlua_yield_yield, TICK_ETERNITY, HLUA_CTRLYIELD));
+ return 0;
+}
+
+/* This function changes the nice value of the currently
+ * executed task. It is used to set a low or high priority
+ * for the current task.
+ */
+__LJMP static int hlua_set_nice(lua_State *L)
+{
+ struct hlua *hlua;
+ int nice;
+
+ MAY_LJMP(check_args(L, 1, "set_nice"));
+ hlua = hlua_gethlua(L);
+ nice = MAY_LJMP(luaL_checkinteger(L, 1));
+
+ /* If the task is not set, we are in start mode. */
+ if (!hlua || !hlua->task)
+ return 0;
+
+ if (nice < -1024)
+ nice = -1024;
+ else if (nice > 1024)
+ nice = 1024;
+
+ hlua->task->nice = nice;
+ return 0;
+}
+
+/* This function is used as a callback of a task. It is called by the
+ * HAProxy task subsystem when the task is awakened. The Lua runtime can
+ * return an E_AGAIN signal; the emitter of this signal must set a
+ * signal to wake the task.
+ *
+ * The task wrapper is longjmp-safe because the only Lua code
+ * executed is the safe hlua_ctx_resume().
+ */
+static struct task *hlua_process_task(struct task *task)
+{
+ struct hlua *hlua = task->context;
+ enum hlua_exec status;
+
+ /* We need to remove the task from the wait queue before executing
+ * the Lua code because we don't know if it needs to wait for
+ * another timer or not in the case of E_AGAIN.
+ */
+ task_delete(task);
+
+ /* If it is the first call to the task, we must initialize the
+ * execution timeouts.
+ */
+ if (!HLUA_IS_RUNNING(hlua))
+ hlua->max_time = hlua_timeout_task;
+
+ /* Execute the Lua code. */
+ status = hlua_ctx_resume(hlua, 1);
+
+ switch (status) {
+ /* finished or yield */
+ case HLUA_E_OK:
+ hlua_ctx_destroy(hlua);
+ task_delete(task);
+ task_free(task);
+ break;
+
+ case HLUA_E_AGAIN: /* co process or timeout wake me later. */
+ if (hlua->wake_time != TICK_ETERNITY)
+ task_schedule(task, hlua->wake_time);
+ break;
+
+ /* finished with error. */
+ case HLUA_E_ERRMSG:
+ SEND_ERR(NULL, "Lua task: %s.\n", lua_tostring(hlua->T, -1));
+ hlua_ctx_destroy(hlua);
+ task_delete(task);
+ task_free(task);
+ break;
+
+ case HLUA_E_ERR:
+ default:
+ SEND_ERR(NULL, "Lua task: unknown error.\n");
+ hlua_ctx_destroy(hlua);
+ task_delete(task);
+ task_free(task);
+ break;
+ }
+ return NULL;
+}
+
+/* This function is a Lua binding that registers a Lua function to be
+ * executed after the HAProxy configuration parsing and before the
+ * HAProxy scheduler starts. This function expects only one Lua
+ * argument, which is a function. This function returns nothing, but
+ * throws if an error is encountered.
+ */
+__LJMP static int hlua_register_init(lua_State *L)
+{
+ struct hlua_init_function *init;
+ int ref;
+
+ MAY_LJMP(check_args(L, 1, "register_init"));
+
+ ref = MAY_LJMP(hlua_checkfunction(L, 1));
+
+ init = calloc(1, sizeof(*init));
+ if (!init)
+ WILL_LJMP(luaL_error(L, "lua out of memory error."));
+
+ init->function_ref = ref;
+ LIST_ADDQ(&hlua_init_functions, &init->l);
+ return 0;
+}
+
+/* This function is a Lua binding. It permits to register a task
+ * executed in parallel with the main HAProxy activity. The task is
+ * created and set in the HAProxy scheduler. It can be called
+ * from the "init" section, "post init" or during the runtime.
+ *
+ * Lua prototype:
+ *
+ * <none> core.register_task(<function>)
+ */
+static int hlua_register_task(lua_State *L)
+{
+ struct hlua *hlua;
+ struct task *task;
+ int ref;
+
+ MAY_LJMP(check_args(L, 1, "register_task"));
+
+ ref = MAY_LJMP(hlua_checkfunction(L, 1));
+
+ hlua = calloc(1, sizeof(*hlua));
+ if (!hlua)
+ WILL_LJMP(luaL_error(L, "lua out of memory error."));
+
+ task = task_new();
+ if (!task)
+ WILL_LJMP(luaL_error(L, "lua out of memory error."));
+ task->context = hlua;
+ task->process = hlua_process_task;
+
+ if (!hlua_ctx_init(hlua, task))
+ WILL_LJMP(luaL_error(L, "lua out of memory error."));
+
+ /* Restore the function in the stack. */
+ lua_rawgeti(hlua->T, LUA_REGISTRYINDEX, ref);
+ hlua->nargs = 0;
+
+ /* Schedule task. */
+ task_schedule(task, now_ms);
+
+ return 0;
+}
+
+/* Wrapper called by HAProxy to execute a Lua converter. This wrapper
+ * doesn't allow "yield" functions because the HAProxy engine cannot
+ * resume converters.
+ */
+static int hlua_sample_conv_wrapper(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct hlua_function *fcn = (struct hlua_function *)private;
+ struct stream *stream = smp->strm;
+
+ /* In the execution wrappers linked with a stream, the
+ * Lua context may not be initialized yet. This behavior
+ * saves performance, because a systematic Lua
+ * initialization causes a 5% performance loss.
+ */
+ if (!stream->hlua.T && !hlua_ctx_init(&stream->hlua, stream->task)) {
+ SEND_ERR(stream->be, "Lua converter '%s': can't initialize Lua context.\n", fcn->name);
+ return 0;
+ }
+
+ /* If it is the first run, initialize the data for the call. */
+ if (!HLUA_IS_RUNNING(&stream->hlua)) {
+
+ /* The following Lua calls can fail. */
+ if (!SET_SAFE_LJMP(stream->hlua.T)) {
+ SEND_ERR(stream->be, "Lua converter '%s': critical error.\n", fcn->name);
+ return 0;
+ }
+
+ /* Check stack available size. */
+ if (!lua_checkstack(stream->hlua.T, 1)) {
+ SEND_ERR(stream->be, "Lua converter '%s': full stack.\n", fcn->name);
+ RESET_SAFE_LJMP(stream->hlua.T);
+ return 0;
+ }
+
+ /* Restore the function in the stack. */
+ lua_rawgeti(stream->hlua.T, LUA_REGISTRYINDEX, fcn->function_ref);
+
+ /* Convert the input sample and push it on the stack. */
+ if (!lua_checkstack(stream->hlua.T, 1)) {
+ SEND_ERR(stream->be, "Lua converter '%s': full stack.\n", fcn->name);
+ RESET_SAFE_LJMP(stream->hlua.T);
+ return 0;
+ }
+ hlua_smp2lua(stream->hlua.T, smp);
+ stream->hlua.nargs = 2;
+
+ /* push keywords in the stack. */
+ if (arg_p) {
+ for (; arg_p->type != ARGT_STOP; arg_p++) {
+ if (!lua_checkstack(stream->hlua.T, 1)) {
+ SEND_ERR(stream->be, "Lua converter '%s': full stack.\n", fcn->name);
+ RESET_SAFE_LJMP(stream->hlua.T);
+ return 0;
+ }
+ hlua_arg2lua(stream->hlua.T, arg_p);
+ stream->hlua.nargs++;
+ }
+ }
+
+ /* We must initialize the execution timeouts. */
+ stream->hlua.max_time = hlua_timeout_session;
+
+ /* At this point the execution is safe. */
+ RESET_SAFE_LJMP(stream->hlua.T);
+ }
+
+ /* Execute the function. */
+ switch (hlua_ctx_resume(&stream->hlua, 0)) {
+ /* finished. */
+ case HLUA_E_OK:
+ /* Convert the returned value in sample. */
+ hlua_lua2smp(stream->hlua.T, -1, smp);
+ lua_pop(stream->hlua.T, 1);
+ return 1;
+
+ /* yield. */
+ case HLUA_E_AGAIN:
+ SEND_ERR(stream->be, "Lua converter '%s': cannot use yielded functions.\n", fcn->name);
+ return 0;
+
+ /* finished with error. */
+ case HLUA_E_ERRMSG:
+ /* Display log. */
+ SEND_ERR(stream->be, "Lua converter '%s': %s.\n",
+ fcn->name, lua_tostring(stream->hlua.T, -1));
+ lua_pop(stream->hlua.T, 1);
+ return 0;
+
+ case HLUA_E_ERR:
+ /* Display log. */
+ SEND_ERR(stream->be, "Lua converter '%s' returns an unknown error.\n", fcn->name);
+
+ default:
+ return 0;
+ }
+}
+
+/* Wrapper called by HAProxy to execute a sample-fetch. This wrapper
+ * doesn't allow "yield" functions because the HAProxy engine cannot
+ * resume sample-fetches.
+ */
+static int hlua_sample_fetch_wrapper(const struct arg *arg_p, struct sample *smp,
+ const char *kw, void *private)
+{
+ struct hlua_function *fcn = (struct hlua_function *)private;
+ struct stream *stream = smp->strm;
+
+ /* In the execution wrappers linked with a stream, the
+ * Lua context may not be initialized yet. This behavior
+ * saves performance, because a systematic Lua
+ * initialization causes a 5% performance loss.
+ */
+ if (!stream->hlua.T && !hlua_ctx_init(&stream->hlua, stream->task)) {
+ SEND_ERR(stream->be, "Lua sample-fetch '%s': can't initialize Lua context.\n", fcn->name);
+ return 0;
+ }
+
+ /* If it is the first run, initialize the data for the call. */
+ if (!HLUA_IS_RUNNING(&stream->hlua)) {
+
+ /* The following Lua calls can fail. */
+ if (!SET_SAFE_LJMP(stream->hlua.T)) {
+ SEND_ERR(smp->px, "Lua sample-fetch '%s': critical error.\n", fcn->name);
+ return 0;
+ }
+
+ /* Check stack available size. */
+ if (!lua_checkstack(stream->hlua.T, 2)) {
+ SEND_ERR(smp->px, "Lua sample-fetch '%s': full stack.\n", fcn->name);
+ RESET_SAFE_LJMP(stream->hlua.T);
+ return 0;
+ }
+
+ /* Restore the function in the stack. */
+ lua_rawgeti(stream->hlua.T, LUA_REGISTRYINDEX, fcn->function_ref);
+
+ /* push arguments in the stack. */
+ if (!hlua_txn_new(stream->hlua.T, stream, smp->px, smp->opt & SMP_OPT_DIR)) {
+ SEND_ERR(smp->px, "Lua sample-fetch '%s': full stack.\n", fcn->name);
+ RESET_SAFE_LJMP(stream->hlua.T);
+ return 0;
+ }
+ stream->hlua.nargs = 1;
+
+ /* push keywords in the stack. */
+ for (; arg_p && arg_p->type != ARGT_STOP; arg_p++) {
+ /* Check stack available size. */
+ if (!lua_checkstack(stream->hlua.T, 1)) {
+ SEND_ERR(smp->px, "Lua sample-fetch '%s': full stack.\n", fcn->name);
+ RESET_SAFE_LJMP(stream->hlua.T);
+ return 0;
+ }
+ hlua_arg2lua(stream->hlua.T, arg_p);
+ stream->hlua.nargs++;
+ }
+
+ /* We must initialize the execution timeouts. */
+ stream->hlua.max_time = hlua_timeout_session;
+
+ /* At this point the execution is safe. */
+ RESET_SAFE_LJMP(stream->hlua.T);
+ }
+
+ /* Execute the function. */
+ switch (hlua_ctx_resume(&stream->hlua, 0)) {
+ /* finished. */
+ case HLUA_E_OK:
+ if (!hlua_check_proto(stream, (smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_RES))
+ return 0;
+ /* Convert the returned value in sample. */
+ hlua_lua2smp(stream->hlua.T, -1, smp);
+ lua_pop(stream->hlua.T, 1);
+
+ /* Set the end of execution flag. */
+ smp->flags &= ~SMP_F_MAY_CHANGE;
+ return 1;
+
+ /* yield. */
+ case HLUA_E_AGAIN:
+ hlua_check_proto(stream, (smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_RES);
+ SEND_ERR(smp->px, "Lua sample-fetch '%s': cannot use yielded functions.\n", fcn->name);
+ return 0;
+
+ /* finished with error. */
+ case HLUA_E_ERRMSG:
+ hlua_check_proto(stream, (smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_RES);
+ /* Display log. */
+ SEND_ERR(smp->px, "Lua sample-fetch '%s': %s.\n",
+ fcn->name, lua_tostring(stream->hlua.T, -1));
+ lua_pop(stream->hlua.T, 1);
+ return 0;
+
+ case HLUA_E_ERR:
+ hlua_check_proto(stream, (smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_RES);
+ /* Display log. */
+ SEND_ERR(smp->px, "Lua sample-fetch '%s' returns an unknown error.\n", fcn->name);
+
+ default:
+ return 0;
+ }
+}
+
+/* This function is a Lua binding used for registering
+ * "sample-conv" functions. It expects a converter name used
+ * in the haproxy configuration file, and a Lua function.
+ */
+__LJMP static int hlua_register_converters(lua_State *L)
+{
+ struct sample_conv_kw_list *sck;
+ const char *name;
+ int ref;
+ int len;
+ struct hlua_function *fcn;
+
+ MAY_LJMP(check_args(L, 2, "register_converters"));
+
+ /* First argument : converter name. */
+ name = MAY_LJMP(luaL_checkstring(L, 1));
+
+ /* Second argument : lua function. */
+ ref = MAY_LJMP(hlua_checkfunction(L, 2));
+
+ /* Allocate and fill the sample converter keyword struct. */
+ sck = calloc(1, sizeof(*sck) + sizeof(struct sample_conv) * 2);
+ if (!sck)
+ WILL_LJMP(luaL_error(L, "lua out of memory error."));
+ fcn = calloc(1, sizeof(*fcn));
+ if (!fcn)
+ WILL_LJMP(luaL_error(L, "lua out of memory error."));
+
+ /* Fill fcn. */
+ fcn->name = strdup(name);
+ if (!fcn->name)
+ WILL_LJMP(luaL_error(L, "lua out of memory error."));
+ fcn->function_ref = ref;
+
+ /* List head */
+ sck->list.n = sck->list.p = NULL;
+
+ /* converter keyword. */
+ len = strlen("lua.") + strlen(name) + 1;
+ sck->kw[0].kw = calloc(1, len);
+ if (!sck->kw[0].kw)
+ WILL_LJMP(luaL_error(L, "lua out of memory error."));
+
+ snprintf((char *)sck->kw[0].kw, len, "lua.%s", name);
+ sck->kw[0].process = hlua_sample_conv_wrapper;
+ sck->kw[0].arg_mask = ARG5(0,STR,STR,STR,STR,STR);
+ sck->kw[0].val_args = NULL;
+ sck->kw[0].in_type = SMP_T_STR;
+ sck->kw[0].out_type = SMP_T_STR;
+ sck->kw[0].private = fcn;
+
+ /* Register this new converter */
+ sample_register_convs(sck);
+
+ return 0;
+}
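+/* For illustration only (not part of the original patch): a script loaded
+ * with "lua-load" could register a converter as follows; it then becomes
+ * usable as "lua.upper" in the configuration (the name "upper" and its body
+ * are hypothetical examples):
+ *
+ *   core.register_converters("upper", function(value)
+ *       return string.upper(value)
+ *   end)
+ */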
+
+/* This function is a Lua binding used for registering
+ * "sample-fetch" functions. It expects a sample-fetch name used
+ * in the haproxy configuration file, and a Lua function.
+ */
+__LJMP static int hlua_register_fetches(lua_State *L)
+{
+ const char *name;
+ int ref;
+ int len;
+ struct sample_fetch_kw_list *sfk;
+ struct hlua_function *fcn;
+
+ MAY_LJMP(check_args(L, 2, "register_fetches"));
+
+ /* First argument : sample-fetch name. */
+ name = MAY_LJMP(luaL_checkstring(L, 1));
+
+ /* Second argument : lua function. */
+ ref = MAY_LJMP(hlua_checkfunction(L, 2));
+
+ /* Allocate and fill the sample fetch keyword struct. */
+ sfk = calloc(1, sizeof(*sfk) + sizeof(struct sample_fetch) * 2);
+ if (!sfk)
+ WILL_LJMP(luaL_error(L, "lua out of memory error."));
+ fcn = calloc(1, sizeof(*fcn));
+ if (!fcn)
+ WILL_LJMP(luaL_error(L, "lua out of memory error."));
+
+ /* Fill fcn. */
+ fcn->name = strdup(name);
+ if (!fcn->name)
+ WILL_LJMP(luaL_error(L, "lua out of memory error."));
+ fcn->function_ref = ref;
+
+ /* List head */
+ sfk->list.n = sfk->list.p = NULL;
+
+ /* sample-fetch keyword. */
+ len = strlen("lua.") + strlen(name) + 1;
+ sfk->kw[0].kw = calloc(1, len);
+ if (!sfk->kw[0].kw)
+ WILL_LJMP(luaL_error(L, "lua out of memory error."));
+
+ snprintf((char *)sfk->kw[0].kw, len, "lua.%s", name);
+ sfk->kw[0].process = hlua_sample_fetch_wrapper;
+ sfk->kw[0].arg_mask = ARG5(0,STR,STR,STR,STR,STR);
+ sfk->kw[0].val_args = NULL;
+ sfk->kw[0].out_type = SMP_T_STR;
+ sfk->kw[0].use = SMP_USE_HTTP_ANY;
+ sfk->kw[0].val = 0;
+ sfk->kw[0].private = fcn;
+
+ /* Register this new fetch. */
+ sample_register_fetches(sfk);
+
+ return 0;
+}
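+/* For illustration only (not part of the original patch): the Lua side
+ * registers a sample-fetch like this; it is then usable as "lua.hello" in
+ * the configuration (the name "hello" and its body are hypothetical
+ * examples):
+ *
+ *   core.register_fetches("hello", function(txn)
+ *       return "hello"
+ *   end)
+ */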
+
+/* This function is a wrapper used to execute each Lua function declared
+ * as an action during the initialisation period. It returns
+ * ACT_RET_CONT if the processing is finished (with or without
+ * error) and ACT_RET_YIELD if the function must be called again
+ * because the Lua code returned a yield.
+ */
+static enum act_return hlua_action(struct act_rule *rule, struct proxy *px,
+ struct session *sess, struct stream *s, int flags)
+{
+ char **arg;
+ unsigned int analyzer;
+ int dir;
+
+ switch (rule->from) {
+ case ACT_F_TCP_REQ_CNT: analyzer = AN_REQ_INSPECT_FE ; dir = SMP_OPT_DIR_REQ; break;
+ case ACT_F_TCP_RES_CNT: analyzer = AN_RES_INSPECT ; dir = SMP_OPT_DIR_RES; break;
+ case ACT_F_HTTP_REQ: analyzer = AN_REQ_HTTP_PROCESS_FE; dir = SMP_OPT_DIR_REQ; break;
+ case ACT_F_HTTP_RES: analyzer = AN_RES_HTTP_PROCESS_BE; dir = SMP_OPT_DIR_RES; break;
+ default:
+ SEND_ERR(px, "Lua: internal error while executing an action.\n");
+ return ACT_RET_CONT;
+ }
+
+ /* In the execution wrappers linked with a stream, the
+ * Lua context may not be initialized yet. Deferring this
+ * initialization saves performance, since a systematic
+ * Lua initialization causes a 5% performance loss.
+ */
+ if (!s->hlua.T && !hlua_ctx_init(&s->hlua, s->task)) {
+ SEND_ERR(px, "Lua action '%s': can't initialize Lua context.\n",
+ rule->arg.hlua_rule->fcn.name);
+ return ACT_RET_CONT;
+ }
+
+ /* If it is the first run, initialize the data for the call. */
+ if (!HLUA_IS_RUNNING(&s->hlua)) {
+
+ /* The following Lua calls can fail. */
+ if (!SET_SAFE_LJMP(s->hlua.T)) {
+ SEND_ERR(px, "Lua function '%s': critical error.\n",
+ rule->arg.hlua_rule->fcn.name);
+ return ACT_RET_CONT;
+ }
+
+ /* Check stack available size. */
+ if (!lua_checkstack(s->hlua.T, 1)) {
+ SEND_ERR(px, "Lua function '%s': full stack.\n",
+ rule->arg.hlua_rule->fcn.name);
+ RESET_SAFE_LJMP(s->hlua.T);
+ return ACT_RET_CONT;
+ }
+
+ /* Restore the function in the stack. */
+ lua_rawgeti(s->hlua.T, LUA_REGISTRYINDEX, rule->arg.hlua_rule->fcn.function_ref);
+
+ /* Create and push the object stream on the stack. */
+ if (!hlua_txn_new(s->hlua.T, s, px, dir)) {
+ SEND_ERR(px, "Lua function '%s': full stack.\n",
+ rule->arg.hlua_rule->fcn.name);
+ RESET_SAFE_LJMP(s->hlua.T);
+ return ACT_RET_CONT;
+ }
+ s->hlua.nargs = 1;
+
+ /* push keywords in the stack. */
+ for (arg = rule->arg.hlua_rule->args; arg && *arg; arg++) {
+ if (!lua_checkstack(s->hlua.T, 1)) {
+ SEND_ERR(px, "Lua function '%s': full stack.\n",
+ rule->arg.hlua_rule->fcn.name);
+ RESET_SAFE_LJMP(s->hlua.T);
+ return ACT_RET_CONT;
+ }
+ lua_pushstring(s->hlua.T, *arg);
+ s->hlua.nargs++;
+ }
+
+ /* Now the execution is safe. */
+ RESET_SAFE_LJMP(s->hlua.T);
+
+ /* We must initialize the execution timeouts. */
+ s->hlua.max_time = hlua_timeout_session;
+ }
+
+ /* Execute the function. */
+ switch (hlua_ctx_resume(&s->hlua, !(flags & ACT_FLAG_FINAL))) {
+ /* finished. */
+ case HLUA_E_OK:
+ if (!hlua_check_proto(s, dir))
+ return ACT_RET_ERR;
+ return ACT_RET_CONT;
+
+ /* yield. */
+ case HLUA_E_AGAIN:
+ /* Set timeout in the required channel. */
+ if (s->hlua.wake_time != TICK_ETERNITY) {
+ if (analyzer & (AN_REQ_INSPECT_FE|AN_REQ_HTTP_PROCESS_FE))
+ s->req.analyse_exp = s->hlua.wake_time;
+ else if (analyzer & (AN_RES_INSPECT|AN_RES_HTTP_PROCESS_BE))
+ s->res.analyse_exp = s->hlua.wake_time;
+ }
+ /* Some actions can be woken up when a "write" event
+ * is detected on the response channel. This is useful
+ * only for actions targeting the request channel.
+ */
+ if (HLUA_IS_WAKERESWR(&s->hlua)) {
+ s->res.flags |= CF_WAKE_WRITE;
+ if ((analyzer & (AN_REQ_INSPECT_FE|AN_REQ_HTTP_PROCESS_FE)))
+ s->res.analysers |= analyzer;
+ }
+ if (HLUA_IS_WAKEREQWR(&s->hlua))
+ s->req.flags |= CF_WAKE_WRITE;
+ return ACT_RET_YIELD;
+
+ /* finished with error. */
+ case HLUA_E_ERRMSG:
+ if (!hlua_check_proto(s, dir))
+ return ACT_RET_ERR;
+ /* Display log. */
+ SEND_ERR(px, "Lua function '%s': %s.\n",
+ rule->arg.hlua_rule->fcn.name, lua_tostring(s->hlua.T, -1));
+ lua_pop(s->hlua.T, 1);
+ return ACT_RET_CONT;
+
+ case HLUA_E_ERR:
+ if (!hlua_check_proto(s, dir))
+ return ACT_RET_ERR;
+ /* Display log. */
+ SEND_ERR(px, "Lua function '%s' returns an unknown error.\n",
+ rule->arg.hlua_rule->fcn.name);
+
+ default:
+ return ACT_RET_CONT;
+ }
+}
+
+struct task *hlua_applet_wakeup(struct task *t)
+{
+ struct appctx *ctx = t->context;
+ struct stream_interface *si = ctx->owner;
+
+ /* If the applet is woken up without any expected work, the
+ * scheduler removes it from the run queue. This flag indicates
+ * that the applet is waiting for a write. If the buffer is full,
+ * the main processing will send some data and then call the
+ * applet, otherwise it calls the applet ASAP.
+ */
+ si_applet_cant_put(si);
+ appctx_wakeup(ctx);
+ return NULL;
+}
+
+static int hlua_applet_tcp_init(struct appctx *ctx, struct proxy *px, struct stream *strm)
+{
+ struct stream_interface *si = ctx->owner;
+ struct hlua *hlua = &ctx->ctx.hlua_apptcp.hlua;
+ struct task *task;
+ char **arg;
+
+ HLUA_INIT(hlua);
+ ctx->ctx.hlua_apptcp.flags = 0;
+
+ /* Create task used by signal to wakeup applets. */
+ task = task_new();
+ if (!task) {
+ SEND_ERR(px, "Lua applet tcp '%s': out of memory.\n",
+ ctx->rule->arg.hlua_rule->fcn.name);
+ return 0;
+ }
+ task->nice = 0;
+ task->context = ctx;
+ task->process = hlua_applet_wakeup;
+ ctx->ctx.hlua_apptcp.task = task;
+
+ /* In the execution wrappers linked with a stream, the
+ * Lua context may not be initialized yet. Deferring this
+ * initialization saves performance, since a systematic
+ * Lua initialization causes a 5% performance loss.
+ */
+ if (!hlua_ctx_init(hlua, task)) {
+ SEND_ERR(px, "Lua applet tcp '%s': can't initialize Lua context.\n",
+ ctx->rule->arg.hlua_rule->fcn.name);
+ return 0;
+ }
+
+ /* Set timeout according with the applet configuration. */
+ hlua->max_time = ctx->applet->timeout;
+
+ /* The following Lua calls can fail. */
+ if (!SET_SAFE_LJMP(hlua->T)) {
+ SEND_ERR(px, "Lua applet tcp '%s': critical error.\n",
+ ctx->rule->arg.hlua_rule->fcn.name);
+ RESET_SAFE_LJMP(hlua->T);
+ return 0;
+ }
+
+ /* Check stack available size. */
+ if (!lua_checkstack(hlua->T, 1)) {
+ SEND_ERR(px, "Lua applet tcp '%s': full stack.\n",
+ ctx->rule->arg.hlua_rule->fcn.name);
+ RESET_SAFE_LJMP(hlua->T);
+ return 0;
+ }
+
+ /* Restore the function in the stack. */
+ lua_rawgeti(hlua->T, LUA_REGISTRYINDEX, ctx->rule->arg.hlua_rule->fcn.function_ref);
+
+ /* Create and push the object stream on the stack. */
+ if (!hlua_applet_tcp_new(hlua->T, ctx)) {
+ SEND_ERR(px, "Lua applet tcp '%s': full stack.\n",
+ ctx->rule->arg.hlua_rule->fcn.name);
+ RESET_SAFE_LJMP(hlua->T);
+ return 0;
+ }
+ hlua->nargs = 1;
+
+ /* push keywords in the stack. */
+ for (arg = ctx->rule->arg.hlua_rule->args; arg && *arg; arg++) {
+ if (!lua_checkstack(hlua->T, 1)) {
+ SEND_ERR(px, "Lua applet tcp '%s': full stack.\n",
+ ctx->rule->arg.hlua_rule->fcn.name);
+ RESET_SAFE_LJMP(hlua->T);
+ return 0;
+ }
+ lua_pushstring(hlua->T, *arg);
+ hlua->nargs++;
+ }
+
+ RESET_SAFE_LJMP(hlua->T);
+
+ /* Wakeup the applet ASAP. */
+ si_applet_cant_get(si);
+ si_applet_cant_put(si);
+
+ return 1;
+}
+
+static void hlua_applet_tcp_fct(struct appctx *ctx)
+{
+ struct stream_interface *si = ctx->owner;
+ struct stream *strm = si_strm(si);
+ struct channel *res = si_ic(si);
+ struct act_rule *rule = ctx->rule;
+ struct proxy *px = strm->be;
+ struct hlua *hlua = &ctx->ctx.hlua_apptcp.hlua;
+
+ /* The applet execution is already done. */
+ if (ctx->ctx.hlua_apptcp.flags & APPLET_DONE)
+ return;
+
+ /* If the stream is disconnected or closed, do nothing. */
+ if (unlikely(si->state == SI_ST_DIS || si->state == SI_ST_CLO))
+ return;
+
+ /* Execute the function. */
+ switch (hlua_ctx_resume(hlua, 1)) {
+ /* finished. */
+ case HLUA_E_OK:
+ ctx->ctx.hlua_apptcp.flags |= APPLET_DONE;
+
+ /* log time */
+ strm->logs.tv_request = now;
+
+ /* eat the whole request */
+ bo_skip(si_oc(si), si_ob(si)->o);
+ res->flags |= CF_READ_NULL;
+ si_shutr(si);
+ return;
+
+ /* yield. */
+ case HLUA_E_AGAIN:
+ return;
+
+ /* finished with error. */
+ case HLUA_E_ERRMSG:
+ /* Display log. */
+ SEND_ERR(px, "Lua applet tcp '%s': %s.\n",
+ rule->arg.hlua_rule->fcn.name, lua_tostring(hlua->T, -1));
+ lua_pop(hlua->T, 1);
+ goto error;
+
+ case HLUA_E_ERR:
+ /* Display log. */
+ SEND_ERR(px, "Lua applet tcp '%s' returns an unknown error.\n",
+ rule->arg.hlua_rule->fcn.name);
+ goto error;
+
+ default:
+ goto error;
+ }
+
+error:
+
+ /* For all other cases, just close the stream. */
+ si_shutw(si);
+ si_shutr(si);
+ ctx->ctx.hlua_apptcp.flags |= APPLET_DONE;
+}
+
+static void hlua_applet_tcp_release(struct appctx *ctx)
+{
+ task_free(ctx->ctx.hlua_apptcp.task);
+ ctx->ctx.hlua_apptcp.task = NULL;
+ hlua_ctx_destroy(&ctx->ctx.hlua_apptcp.hlua);
+}
+
+/* The function returns 1 if the initialisation is complete, 0 if
+ * an error occurs, and -1 if more data is required for initializing
+ * the applet.
+ */
+static int hlua_applet_http_init(struct appctx *ctx, struct proxy *px, struct stream *strm)
+{
+ struct stream_interface *si = ctx->owner;
+ struct channel *req = si_oc(si);
+ struct http_msg *msg;
+ struct http_txn *txn;
+ struct hlua *hlua = &ctx->ctx.hlua_apphttp.hlua;
+ char **arg;
+ struct hdr_ctx hdr;
+ struct task *task;
+ struct sample smp; /* just used for a valid call to smp_prefetch_http. */
+
+ /* Wait for a full HTTP request. */
+ if (!smp_prefetch_http(px, strm, 0, NULL, &smp, 0)) {
+ if (smp.flags & SMP_F_MAY_CHANGE)
+ return -1;
+ return 0;
+ }
+ txn = strm->txn;
+ msg = &txn->req;
+
+ /* We want two things in HTTP mode :
+ * - enforce server-close mode if we were in keep-alive, so that the
+ * applet is released after each response ;
+ * - enable request body transfer to the applet in order to resync
+ * with the response body.
+ */
+ if ((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_KAL)
+ txn->flags = (txn->flags & ~TX_CON_WANT_MSK) | TX_CON_WANT_SCL;
+
+ HLUA_INIT(hlua);
+ ctx->ctx.hlua_apphttp.left_bytes = -1;
+ ctx->ctx.hlua_apphttp.flags = 0;
+
+ if (txn->req.flags & HTTP_MSGF_VER_11)
+ ctx->ctx.hlua_apphttp.flags |= APPLET_HTTP11;
+
+ /* Create task used by signal to wakeup applets. */
+ task = task_new();
+ if (!task) {
+ SEND_ERR(px, "Lua applet http '%s': out of memory.\n",
+ ctx->rule->arg.hlua_rule->fcn.name);
+ return 0;
+ }
+ task->nice = 0;
+ task->context = ctx;
+ task->process = hlua_applet_wakeup;
+ ctx->ctx.hlua_apphttp.task = task;
+
+ /* In the execution wrappers linked with a stream, the
+ * Lua context may not be initialized yet. Deferring this
+ * initialization saves performance, since a systematic
+ * Lua initialization causes a 5% performance loss.
+ */
+ if (!hlua_ctx_init(hlua, task)) {
+ SEND_ERR(px, "Lua applet http '%s': can't initialize Lua context.\n",
+ ctx->rule->arg.hlua_rule->fcn.name);
+ return 0;
+ }
+
+ /* Set timeout according with the applet configuration. */
+ hlua->max_time = ctx->applet->timeout;
+
+ /* The following Lua calls can fail. */
+ if (!SET_SAFE_LJMP(hlua->T)) {
+ SEND_ERR(px, "Lua applet http '%s': critical error.\n",
+ ctx->rule->arg.hlua_rule->fcn.name);
+ return 0;
+ }
+
+ /* Check stack available size. */
+ if (!lua_checkstack(hlua->T, 1)) {
+ SEND_ERR(px, "Lua applet http '%s': full stack.\n",
+ ctx->rule->arg.hlua_rule->fcn.name);
+ RESET_SAFE_LJMP(hlua->T);
+ return 0;
+ }
+
+ /* Restore the function in the stack. */
+ lua_rawgeti(hlua->T, LUA_REGISTRYINDEX, ctx->rule->arg.hlua_rule->fcn.function_ref);
+
+ /* Create and push the object stream on the stack. */
+ if (!hlua_applet_http_new(hlua->T, ctx)) {
+ SEND_ERR(px, "Lua applet http '%s': full stack.\n",
+ ctx->rule->arg.hlua_rule->fcn.name);
+ RESET_SAFE_LJMP(hlua->T);
+ return 0;
+ }
+ hlua->nargs = 1;
+
+ /* Look for an "Expect: 100-continue" header. */
+ if (msg->flags & HTTP_MSGF_VER_11) {
+ hdr.idx = 0;
+ if (http_find_header2("Expect", 6, req->buf->p, &txn->hdr_idx, &hdr) &&
+ unlikely(hdr.vlen == 12 && strncasecmp(hdr.line+hdr.val, "100-continue", 12) == 0))
+ ctx->ctx.hlua_apphttp.flags |= APPLET_100C;
+ }
+
+ /* push keywords in the stack. */
+ for (arg = ctx->rule->arg.hlua_rule->args; arg && *arg; arg++) {
+ if (!lua_checkstack(hlua->T, 1)) {
+ SEND_ERR(px, "Lua applet http '%s': full stack.\n",
+ ctx->rule->arg.hlua_rule->fcn.name);
+ RESET_SAFE_LJMP(hlua->T);
+ return 0;
+ }
+ lua_pushstring(hlua->T, *arg);
+ hlua->nargs++;
+ }
+
+ RESET_SAFE_LJMP(hlua->T);
+
+ /* Wakeup the applet when data is ready for read. */
+ si_applet_cant_get(si);
+
+ return 1;
+}
+
+static void hlua_applet_http_fct(struct appctx *ctx)
+{
+ struct stream_interface *si = ctx->owner;
+ struct stream *strm = si_strm(si);
+ struct channel *res = si_ic(si);
+ struct act_rule *rule = ctx->rule;
+ struct proxy *px = strm->be;
+ struct hlua *hlua = &ctx->ctx.hlua_apphttp.hlua;
+ char *blk1;
+ int len1;
+ char *blk2;
+ int len2;
+ int ret;
+
+ /* If the stream is disconnected or closed, do nothing. */
+ if (unlikely(si->state == SI_ST_DIS || si->state == SI_ST_CLO))
+ return;
+
+ /* Set the currently running flag. */
+ if (!HLUA_IS_RUNNING(hlua) &&
+ !(ctx->ctx.hlua_apphttp.flags & APPLET_DONE)) {
+
+ /* Wait for full HTTP analysis. */
+ if (unlikely(strm->txn->req.msg_state < HTTP_MSG_BODY)) {
+ si_applet_cant_get(si);
+ return;
+ }
+
+ /* Store the max amount of bytes that we can read. */
+ ctx->ctx.hlua_apphttp.left_bytes = strm->txn->req.body_len;
+
+ /* We need to flush the request header. This leaves the body
+ * for the Lua code.
+ */
+
+ /* Read the maximum amount of data available. */
+ ret = bo_getblk_nc(si_oc(si), &blk1, &len1, &blk2, &len2);
+ if (ret == -1)
+ return;
+
+ /* ret == 1: only one block is available; ret == 0: no data at all. */
+ if (ret == 1)
+ len2 = 0;
+ if (ret == 0)
+ len1 = 0;
+ if (len1 + len2 < strm->txn->req.eoh + 2) {
+ si_applet_cant_get(si);
+ return;
+ }
+
+ /* Skip the request header bytes. */
+ bo_skip(si_oc(si), strm->txn->req.eoh + 2);
+ }
+
+ /* Execute the applet if it is not done. */
+ if (!(ctx->ctx.hlua_apphttp.flags & APPLET_DONE)) {
+
+ /* Execute the function. */
+ switch (hlua_ctx_resume(hlua, 1)) {
+ /* finished. */
+ case HLUA_E_OK:
+ ctx->ctx.hlua_apphttp.flags |= APPLET_DONE;
+ break;
+
+ /* yield. */
+ case HLUA_E_AGAIN:
+ return;
+
+ /* finished with error. */
+ case HLUA_E_ERRMSG:
+ /* Display log. */
+ SEND_ERR(px, "Lua applet http '%s': %s.\n",
+ rule->arg.hlua_rule->fcn.name, lua_tostring(hlua->T, -1));
+ lua_pop(hlua->T, 1);
+ goto error;
+
+ case HLUA_E_ERR:
+ /* Display log. */
+ SEND_ERR(px, "Lua applet http '%s' returns an unknown error.\n",
+ rule->arg.hlua_rule->fcn.name);
+ goto error;
+
+ default:
+ goto error;
+ }
+ }
+
+ if (ctx->ctx.hlua_apphttp.flags & APPLET_DONE) {
+
+ /* We must send the final chunk. */
+ if (ctx->ctx.hlua_apphttp.flags & APPLET_CHUNKED &&
+ !(ctx->ctx.hlua_apphttp.flags & APPLET_LAST_CHK)) {
+
+ /* Send the last chunk at once. */
+ ret = bi_putblk(res, "0\r\n\r\n", 5);
+
+ /* critical error. */
+ if (ret == -2 || ret == -3) {
+ SEND_ERR(px, "Lua applet http '%s': cannot send last chunk.\n",
+ rule->arg.hlua_rule->fcn.name);
+ goto error;
+ }
+
+ /* not enough space error. */
+ if (ret == -1) {
+ si_applet_cant_put(si);
+ return;
+ }
+
+ /* set the last chunk sent. */
+ ctx->ctx.hlua_apphttp.flags |= APPLET_LAST_CHK;
+ }
+
+ /* close the connection. */
+
+ /* status / log */
+ strm->txn->status = ctx->ctx.hlua_apphttp.status;
+ strm->logs.tv_request = now;
+
+ /* eat the whole request */
+ bo_skip(si_oc(si), si_ob(si)->o);
+ res->flags |= CF_READ_NULL;
+ si_shutr(si);
+
+ return;
+ }
+
+error:
+
+ /* If we are in HTTP mode and have not sent any data yet,
+ * return a 500 server error in best effort: if there is no
+ * room available in the buffer, just close the connection.
+ */
+ bi_putblk(res, error_500, strlen(error_500));
+ if (!(strm->flags & SF_ERR_MASK))
+ strm->flags |= SF_ERR_RESOURCE;
+ si_shutw(si);
+ si_shutr(si);
+ ctx->ctx.hlua_apphttp.flags |= APPLET_DONE;
+}
+
+static void hlua_applet_http_release(struct appctx *ctx)
+{
+ task_free(ctx->ctx.hlua_apphttp.task);
+ ctx->ctx.hlua_apphttp.task = NULL;
+ hlua_ctx_destroy(&ctx->ctx.hlua_apphttp.hlua);
+}
+
+/* global {tcp|http}-request parser. Returns ACT_RET_PRS_OK on
+ * success, otherwise ACT_RET_PRS_ERR.
+ *
+ * This function can fail with an abort() due to a Lua critical error.
+ * We are in the configuration parsing process of HAProxy, so this
+ * abort() is tolerated.
+ */
+static enum act_parse_ret action_register_lua(const char **args, int *cur_arg, struct proxy *px,
+ struct act_rule *rule, char **err)
+{
+ struct hlua_function *fcn = (struct hlua_function *)rule->kw->private;
+
+ /* Memory for the rule. */
+ rule->arg.hlua_rule = calloc(1, sizeof(*rule->arg.hlua_rule));
+ if (!rule->arg.hlua_rule) {
+ memprintf(err, "out of memory error");
+ return ACT_RET_PRS_ERR;
+ }
+
+ /* Reference the Lua function and store the reference. */
+ rule->arg.hlua_rule->fcn = *fcn;
+
+ /* TODO: later accept arguments. */
+ rule->arg.hlua_rule->args = NULL;
+
+ rule->action = ACT_CUSTOM;
+ rule->action_ptr = hlua_action;
+ return ACT_RET_PRS_OK;
+}
+
+static enum act_parse_ret action_register_service_http(const char **args, int *cur_arg, struct proxy *px,
+ struct act_rule *rule, char **err)
+{
+ struct hlua_function *fcn = (struct hlua_function *)rule->kw->private;
+
+ /* HTTP applets are forbidden in tcp-request rules.
+ * An HTTP applet request requires everything initialized by
+ * "http_process_request" (analyzer flag AN_REQ_HTTP_INNER).
+ * The applet would be initialized immediately, but that happens
+ * before this analyzer is called.
+ */
+ if (rule->from != ACT_F_HTTP_REQ) {
+ memprintf(err, "HTTP applets are forbidden from 'tcp-request' rulesets");
+ return ACT_RET_PRS_ERR;
+ }
+
+ /* Memory for the rule. */
+ rule->arg.hlua_rule = calloc(1, sizeof(*rule->arg.hlua_rule));
+ if (!rule->arg.hlua_rule) {
+ memprintf(err, "out of memory error");
+ return ACT_RET_PRS_ERR;
+ }
+
+ /* Reference the Lua function and store the reference. */
+ rule->arg.hlua_rule->fcn = *fcn;
+
+ /* TODO: later accept arguments. */
+ rule->arg.hlua_rule->args = NULL;
+
+ /* Add applet pointer in the rule. */
+ rule->applet.obj_type = OBJ_TYPE_APPLET;
+ rule->applet.name = fcn->name;
+ rule->applet.init = hlua_applet_http_init;
+ rule->applet.fct = hlua_applet_http_fct;
+ rule->applet.release = hlua_applet_http_release;
+ rule->applet.timeout = hlua_timeout_applet;
+
+ return ACT_RET_PRS_OK;
+}
+
+/* This function is a Lua binding used for registering actions. It
+ * expects an action name used in the haproxy configuration file, a
+ * table of environments in which the action can be used, and a Lua
+ * function to execute.
+ */
+__LJMP static int hlua_register_action(lua_State *L)
+{
+ struct action_kw_list *akl;
+ const char *name;
+ int ref;
+ int len;
+ struct hlua_function *fcn;
+
+ MAY_LJMP(check_args(L, 3, "register_action"));
+
+ /* First argument : action name. */
+ name = MAY_LJMP(luaL_checkstring(L, 1));
+
+ /* Second argument : environment. */
+ if (lua_type(L, 2) != LUA_TTABLE)
+ WILL_LJMP(luaL_error(L, "register_action: second argument must be a table of strings"));
+
+ /* Third argument : lua function. */
+ ref = MAY_LJMP(hlua_checkfunction(L, 3));
+
+ /* Browse the second argument as an array. */
+ lua_pushnil(L);
+ while (lua_next(L, 2) != 0) {
+ if (lua_type(L, -1) != LUA_TSTRING)
+ WILL_LJMP(luaL_error(L, "register_action: second argument must be a table of strings"));
+
+ /* Allocate and fill the action keyword struct. */
+ akl = calloc(1, sizeof(*akl) + sizeof(struct action_kw) * 2);
+ if (!akl)
+ WILL_LJMP(luaL_error(L, "lua out of memory error."));
+ fcn = calloc(1, sizeof(*fcn));
+ if (!fcn)
+ WILL_LJMP(luaL_error(L, "lua out of memory error."));
+
+ /* Fill fcn. */
+ fcn->name = strdup(name);
+ if (!fcn->name)
+ WILL_LJMP(luaL_error(L, "lua out of memory error."));
+ fcn->function_ref = ref;
+
+ /* List head */
+ akl->list.n = akl->list.p = NULL;
+
+ /* action keyword. */
+ len = strlen("lua.") + strlen(name) + 1;
+ akl->kw[0].kw = calloc(1, len);
+ if (!akl->kw[0].kw)
+ WILL_LJMP(luaL_error(L, "lua out of memory error."));
+
+ snprintf((char *)akl->kw[0].kw, len, "lua.%s", name);
+
+ akl->kw[0].match_pfx = 0;
+ akl->kw[0].private = fcn;
+ akl->kw[0].parse = action_register_lua;
+
+ /* select the action registering point. */
+ if (strcmp(lua_tostring(L, -1), "tcp-req") == 0)
+ tcp_req_cont_keywords_register(akl);
+ else if (strcmp(lua_tostring(L, -1), "tcp-res") == 0)
+ tcp_res_cont_keywords_register(akl);
+ else if (strcmp(lua_tostring(L, -1), "http-req") == 0)
+ http_req_keywords_register(akl);
+ else if (strcmp(lua_tostring(L, -1), "http-res") == 0)
+ http_res_keywords_register(akl);
+ else
+ WILL_LJMP(luaL_error(L, "lua action environment '%s' is unknown. "
+ "'tcp-req', 'tcp-res', 'http-req' or 'http-res' "
+ "are expected.", lua_tostring(L, -1)));
+
+ /* pop the environment string. */
+ lua_pop(L, 1);
+ }
+ /* This is a lua_CFunction: return the number of Lua results. */
+ return 0;
+}
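+/* For illustration only (not part of the original patch): the Lua side
+ * registers an action with a name, a table of rulesets, and a function;
+ * the action is then usable as "lua.my-action" (the name "my-action" and
+ * its body are hypothetical examples):
+ *
+ *   core.register_action("my-action", { "http-req", "tcp-req" }, function(txn)
+ *       -- act on the transaction here
+ *   end)
+ */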
+
+static enum act_parse_ret action_register_service_tcp(const char **args, int *cur_arg, struct proxy *px,
+ struct act_rule *rule, char **err)
+{
+ struct hlua_function *fcn = (struct hlua_function *)rule->kw->private;
+
+ /* Memory for the rule. */
+ rule->arg.hlua_rule = calloc(1, sizeof(*rule->arg.hlua_rule));
+ if (!rule->arg.hlua_rule) {
+ memprintf(err, "out of memory error");
+ return ACT_RET_PRS_ERR;
+ }
+
+ /* Reference the Lua function and store the reference. */
+ rule->arg.hlua_rule->fcn = *fcn;
+
+ /* TODO: later accept arguments. */
+ rule->arg.hlua_rule->args = NULL;
+
+ /* Add applet pointer in the rule. */
+ rule->applet.obj_type = OBJ_TYPE_APPLET;
+ rule->applet.name = fcn->name;
+ rule->applet.init = hlua_applet_tcp_init;
+ rule->applet.fct = hlua_applet_tcp_fct;
+ rule->applet.release = hlua_applet_tcp_release;
+ rule->applet.timeout = hlua_timeout_applet;
+
+ return ACT_RET_PRS_OK;
+}
+
+/* This function is a Lua binding used for registering services. It
+ * expects a service name used in the haproxy configuration file, an
+ * environment ("tcp" or "http"), and a Lua function to execute.
+ */
+__LJMP static int hlua_register_service(lua_State *L)
+{
+ struct action_kw_list *akl;
+ const char *name;
+ const char *env;
+ int ref;
+ int len;
+ struct hlua_function *fcn;
+
+ MAY_LJMP(check_args(L, 3, "register_service"));
+
+ /* First argument : service name. */
+ name = MAY_LJMP(luaL_checkstring(L, 1));
+
+ /* Second argument : environment. */
+ env = MAY_LJMP(luaL_checkstring(L, 2));
+
+ /* Third argument : lua function. */
+ ref = MAY_LJMP(hlua_checkfunction(L, 3));
+
+ /* Allocate and fill the action keyword struct. The environment
+ * is checked below; only "tcp" and "http" are accepted.
+ */
+ akl = calloc(1, sizeof(*akl) + sizeof(struct action_kw) * 2);
+ if (!akl)
+ WILL_LJMP(luaL_error(L, "lua out of memory error."));
+ fcn = calloc(1, sizeof(*fcn));
+ if (!fcn)
+ WILL_LJMP(luaL_error(L, "lua out of memory error."));
+
+ /* Fill fcn. */
+ len = strlen("<lua.>") + strlen(name) + 1;
+ fcn->name = calloc(1, len);
+ if (!fcn->name)
+ WILL_LJMP(luaL_error(L, "lua out of memory error."));
+ snprintf((char *)fcn->name, len, "<lua.%s>", name);
+ fcn->function_ref = ref;
+
+ /* List head */
+ akl->list.n = akl->list.p = NULL;
+
+ /* service keyword. */
+ len = strlen("lua.") + strlen(name) + 1;
+ akl->kw[0].kw = calloc(1, len);
+ if (!akl->kw[0].kw)
+ WILL_LJMP(luaL_error(L, "lua out of memory error."));
+
+ snprintf((char *)akl->kw[0].kw, len, "lua.%s", name);
+
+ if (strcmp(env, "tcp") == 0)
+ akl->kw[0].parse = action_register_service_tcp;
+ else if (strcmp(env, "http") == 0)
+ akl->kw[0].parse = action_register_service_http;
+ else
+ WILL_LJMP(luaL_error(L, "lua service environment '%s' is unknown. "
+ "'tcp' or 'http' are expected.", env));
+
+ akl->kw[0].match_pfx = 0;
+ akl->kw[0].private = fcn;
+
+ /* End of array. */
+ memset(&akl->kw[1], 0, sizeof(*akl->kw));
+
+ /* Register this new service */
+ service_keywords_register(akl);
+
+ return 0;
+}
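+/* For illustration only (not part of the original patch): the Lua side
+ * registers a service with an environment of "tcp" or "http"; e.g. (the
+ * name "hello-world" and its body are hypothetical examples):
+ *
+ *   core.register_service("hello-world", "http", function(applet)
+ *       -- build and send the response here
+ *   end)
+ */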
+
+static int hlua_read_timeout(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err, unsigned int *timeout)
+{
+ const char *error;
+
+ error = parse_time_err(args[1], timeout, TIME_UNIT_MS);
+ if (error && *error != '\0') {
+ memprintf(err, "%s: invalid timeout", args[0]);
+ return -1;
+ }
+ return 0;
+}
+
+static int hlua_session_timeout(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ return hlua_read_timeout(args, section_type, curpx, defpx,
+ file, line, err, &hlua_timeout_session);
+}
+
+static int hlua_task_timeout(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ return hlua_read_timeout(args, section_type, curpx, defpx,
+ file, line, err, &hlua_timeout_task);
+}
+
+static int hlua_applet_timeout(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ return hlua_read_timeout(args, section_type, curpx, defpx,
+ file, line, err, &hlua_timeout_applet);
+}
+
+static int hlua_forced_yield(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ char *error;
+
+ hlua_nb_instruction = strtoll(args[1], &error, 10);
+ if (*error != '\0') {
+ memprintf(err, "%s: invalid number", args[0]);
+ return -1;
+ }
+ return 0;
+}
+
+static int hlua_parse_maxmem(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ char *error;
+
+ if (*(args[1]) == 0) {
+ memprintf(err, "'%s' expects an integer argument (Lua memory size in MB).\n", args[0]);
+ return -1;
+ }
+ hlua_global_allocator.limit = strtoll(args[1], &error, 10) * 1024L * 1024L;
+ if (*error != '\0') {
+ memprintf(err, "%s: invalid number %s (error at '%c')", args[0], args[1], *error);
+ return -1;
+ }
+ return 0;
+}
+
+
+/* This function is called by the main configuration key "lua-load". It loads
+ * and executes a Lua file during the parsing of the HAProxy configuration
+ * file. It is the main Lua entry point.
+ *
+ * This function runs with the HAProxy keywords API. It returns -1 if an
+ * error occurred, otherwise it returns 0.
+ *
+ * In some error cases, Lua leaves an error message on top of the stack. This
+ * function reports that error message in the HAProxy logs and pops it from
+ * the stack.
+ *
+ * This function can fail with an abort() due to a Lua critical error.
+ * We are in the configuration parsing process of HAProxy, so this abort() is
+ * tolerated.
+ */
+static int hlua_load(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ int error;
+
+ /* Just load and compile the file. */
+ error = luaL_loadfile(gL.T, args[1]);
+ if (error) {
+ memprintf(err, "error in lua file '%s': %s", args[1], lua_tostring(gL.T, -1));
+ lua_pop(gL.T, 1);
+ return -1;
+ }
+
+ /* If no syntax errors were detected, execute the code. */
+ error = lua_pcall(gL.T, 0, LUA_MULTRET, 0);
+ switch (error) {
+ case LUA_OK:
+ break;
+ case LUA_ERRRUN:
+ memprintf(err, "lua runtime error: %s\n", lua_tostring(gL.T, -1));
+ lua_pop(gL.T, 1);
+ return -1;
+ case LUA_ERRMEM:
+ memprintf(err, "lua out of memory error\n");
+ return -1;
+ case LUA_ERRERR:
+ memprintf(err, "lua message handler error: %s\n", lua_tostring(gL.T, -1));
+ lua_pop(gL.T, 1);
+ return -1;
+ case LUA_ERRGCMM:
+ memprintf(err, "lua garbage collector error: %s\n", lua_tostring(gL.T, -1));
+ lua_pop(gL.T, 1);
+ return -1;
+ default:
+ memprintf(err, "lua unknown error: %s\n", lua_tostring(gL.T, -1));
+ lua_pop(gL.T, 1);
+ return -1;
+ }
+
+ return 0;
+}
+
+/* configuration keywords declaration */
+static struct cfg_kw_list cfg_kws = {{ },{
+ { CFG_GLOBAL, "lua-load", hlua_load },
+ { CFG_GLOBAL, "tune.lua.session-timeout", hlua_session_timeout },
+ { CFG_GLOBAL, "tune.lua.task-timeout", hlua_task_timeout },
+ { CFG_GLOBAL, "tune.lua.service-timeout", hlua_applet_timeout },
+ { CFG_GLOBAL, "tune.lua.forced-yield", hlua_forced_yield },
+ { CFG_GLOBAL, "tune.lua.maxmem", hlua_parse_maxmem },
+ { 0, NULL, NULL },
+}};
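+/* For illustration only (not part of the original patch): these keywords
+ * are used in the "global" section of the configuration; the values below
+ * are hypothetical examples (timeouts in milliseconds, maxmem in MB):
+ *
+ *   global
+ *       lua-load /etc/haproxy/script.lua
+ *       tune.lua.session-timeout 4000
+ *       tune.lua.maxmem 1
+ */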
+
+/* This function can fail with an abort() due to a Lua critical error.
+ * We are in the initialisation process of HAProxy, so this abort() is
+ * tolerated.
+ */
+int hlua_post_init()
+{
+ struct hlua_init_function *init;
+ const char *msg;
+ enum hlua_exec ret;
+
+ list_for_each_entry(init, &hlua_init_functions, l) {
+ lua_rawgeti(gL.T, LUA_REGISTRYINDEX, init->function_ref);
+ ret = hlua_ctx_resume(&gL, 0);
+ switch (ret) {
+ case HLUA_E_OK:
+ lua_pop(gL.T, -1);
+ break; /* continue with the remaining init functions */
+ case HLUA_E_AGAIN:
+ Alert("lua init: yield not allowed.\n");
+ return 0;
+ case HLUA_E_ERRMSG:
+ msg = lua_tostring(gL.T, -1);
+ Alert("lua init: %s.\n", msg);
+ return 0;
+ case HLUA_E_ERR:
+ default:
+ Alert("lua init: unknown runtime error.\n");
+ return 0;
+ }
+ }
+ return 1;
+}
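The registration pattern above (queue callbacks at configuration time, run them once at post-init, give up on the first failure) can be sketched standalone. This is a hypothetical miniature using a plain array in place of HAProxy's hlua_init_functions list:

```c
#include <assert.h>

/* Miniature of the init-function machinery: callbacks are queued during
 * configuration and run once at post-init. A failing callback aborts
 * the whole sequence, as hlua_post_init() does on error. */
#define MAX_INITS 8

static int (*init_fns[MAX_INITS])(void);
static int nb_inits;

static void register_init(int (*fn)(void))
{
	if (nb_inits < MAX_INITS)
		init_fns[nb_inits++] = fn;
}

static int run_post_init(void)
{
	int i;

	for (i = 0; i < nb_inits; i++)
		if (!init_fns[i]())      /* stop at the first failing callback */
			return 0;
	return 1;
}

/* sample callbacks for illustration */
static int ok_cb(void)   { return 1; }
static int fail_cb(void) { return 0; }
```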
+
+/* The memory allocator used by the Lua stack. <ud> is a pointer to the
+ * allocator's context. <ptr> is the pointer to allocate, free or
+ * reallocate. <osize> is the previously allocated size, or the kind of
+ * object being created in the case of a new allocation. <nsize> is the
+ * requested new size.
+ */
+static void *hlua_alloc(void *ud, void *ptr, size_t osize, size_t nsize)
+{
+ struct hlua_mem_allocator *zone = ud;
+
+ if (nsize == 0) {
+ /* it's a free */
+ if (ptr)
+ zone->allocated -= osize;
+ free(ptr);
+ return NULL;
+ }
+
+ if (!ptr) {
+ /* it's a new allocation */
+ if (zone->limit && zone->allocated + nsize > zone->limit)
+ return NULL;
+
+ ptr = malloc(nsize);
+ if (ptr)
+ zone->allocated += nsize;
+ return ptr;
+ }
+
+ /* it's a realloc */
+ if (zone->limit && zone->allocated + nsize - osize > zone->limit)
+ return NULL;
+
+ ptr = realloc(ptr, nsize);
+ if (ptr)
+ zone->allocated += nsize - osize;
+ return ptr;
+}
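hlua_alloc() above implements the standard lua_Alloc contract: nsize == 0 means free, ptr == NULL means a fresh allocation, anything else is a reallocation, with a running byte count checked against an optional limit. A minimal standalone sketch of the same accounting; `mem_zone` and `zone_alloc` are hypothetical stand-ins for struct hlua_mem_allocator and hlua_alloc:

```c
#include <assert.h>
#include <stdlib.h>

struct mem_zone {
	size_t allocated; /* bytes currently allocated */
	size_t limit;     /* 0 = unlimited */
};

/* Lua-style allocator with usage accounting, mirroring hlua_alloc. */
static void *zone_alloc(void *ud, void *ptr, size_t osize, size_t nsize)
{
	struct mem_zone *zone = ud;

	if (nsize == 0) {                /* free request */
		if (ptr)
			zone->allocated -= osize;
		free(ptr);
		return NULL;
	}
	if (!ptr) {                      /* new allocation: osize is a type tag, not a size */
		if (zone->limit && zone->allocated + nsize > zone->limit)
			return NULL;
		ptr = malloc(nsize);
		if (ptr)
			zone->allocated += nsize;
		return ptr;
	}
	/* reallocation */
	if (zone->limit && zone->allocated + nsize - osize > zone->limit)
		return NULL;
	ptr = realloc(ptr, nsize);
	if (ptr)
		zone->allocated += nsize - osize;
	return ptr;
}
```

Returning NULL when over the limit is enough to enforce tune.lua.maxmem, because Lua treats a NULL result from its allocator as an out-of-memory condition.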
+
+/* This function can fail with an abort() due to a Lua critical error.
+ * Since we are still in HAProxy's initialisation process, such an
+ * abort() is tolerated.
+ */
+void hlua_init(void)
+{
+ int i;
+ int idx;
+ struct sample_fetch *sf;
+ struct sample_conv *sc;
+ char *p;
+#ifdef USE_OPENSSL
+ struct srv_kw *kw;
+ int tmp_error;
+ char *error;
+ char *args[] = { /* SSL client configuration. */
+ "ssl",
+ "verify",
+ "none",
+ NULL
+ };
+#endif
+
+ /* Initialise com signals pool */
+ pool2_hlua_com = create_pool("hlua_com", sizeof(struct hlua_com), MEM_F_SHARED);
+
+ /* Register configuration keywords. */
+ cfg_register_keywords(&cfg_kws);
+
+ /* Init main lua stack. */
+ gL.Mref = LUA_REFNIL;
+ gL.flags = 0;
+ LIST_INIT(&gL.com);
+ gL.T = luaL_newstate();
+ hlua_sethlua(&gL);
+ gL.Tref = LUA_REFNIL;
+ gL.task = NULL;
+
+ /* From this point until the end of the initialisation function, any
+ * Lua call can fail with an abort(). We are in HAProxy's
+ * initialisation process, so this abort() is tolerated.
+ */
+
+ /* change the memory allocators to track memory usage */
+ lua_setallocf(gL.T, hlua_alloc, &hlua_global_allocator);
+
+ /* Initialise lua. */
+ luaL_openlibs(gL.T);
+
+ /*
+ *
+ * Create "core" object.
+ *
+ */
+
+ /* This table entry is the object "core" base. */
+ lua_newtable(gL.T);
+
+ /* Push the loglevel constants. */
+ for (i = 0; i < NB_LOG_LEVELS; i++)
+ hlua_class_const_int(gL.T, log_levels[i], i);
+
+ /* Register special functions. */
+ hlua_class_function(gL.T, "register_init", hlua_register_init);
+ hlua_class_function(gL.T, "register_task", hlua_register_task);
+ hlua_class_function(gL.T, "register_fetches", hlua_register_fetches);
+ hlua_class_function(gL.T, "register_converters", hlua_register_converters);
+ hlua_class_function(gL.T, "register_action", hlua_register_action);
+ hlua_class_function(gL.T, "register_service", hlua_register_service);
+ hlua_class_function(gL.T, "yield", hlua_yield);
+ hlua_class_function(gL.T, "set_nice", hlua_set_nice);
+ hlua_class_function(gL.T, "sleep", hlua_sleep);
+ hlua_class_function(gL.T, "msleep", hlua_msleep);
+ hlua_class_function(gL.T, "add_acl", hlua_add_acl);
+ hlua_class_function(gL.T, "del_acl", hlua_del_acl);
+ hlua_class_function(gL.T, "set_map", hlua_set_map);
+ hlua_class_function(gL.T, "del_map", hlua_del_map);
+ hlua_class_function(gL.T, "tcp", hlua_socket_new);
+ hlua_class_function(gL.T, "log", hlua_log);
+ hlua_class_function(gL.T, "Debug", hlua_log_debug);
+ hlua_class_function(gL.T, "Info", hlua_log_info);
+ hlua_class_function(gL.T, "Warning", hlua_log_warning);
+ hlua_class_function(gL.T, "Alert", hlua_log_alert);
+ hlua_class_function(gL.T, "done", hlua_done);
+
+ lua_setglobal(gL.T, "core");
+
+ /*
+ *
+ * Register class Map
+ *
+ */
+
+ /* This table entry is the object "Map" base. */
+ lua_newtable(gL.T);
+
+ /* register pattern types. */
+ for (i=0; i<PAT_MATCH_NUM; i++)
+ hlua_class_const_int(gL.T, pat_match_names[i], i);
+
+ /* register constructor. */
+ hlua_class_function(gL.T, "new", hlua_map_new);
+
+ /* Create and fill the metatable. */
+ lua_newtable(gL.T);
+
+ /* Create the __tostring identifier */
+ lua_pushstring(gL.T, "__tostring");
+ lua_pushstring(gL.T, CLASS_MAP);
+ lua_pushcclosure(gL.T, hlua_dump_object, 1);
+ lua_rawset(gL.T, -3);
+
+ /* Create and fill the __index entry. */
+ lua_pushstring(gL.T, "__index");
+ lua_newtable(gL.T);
+
+ /* Register Lua functions. */
+ hlua_class_function(gL.T, "lookup", hlua_map_lookup);
+ hlua_class_function(gL.T, "slookup", hlua_map_slookup);
+
+ lua_rawset(gL.T, -3);
+
+ /* Register previous table in the registry with reference and named entry. */
+ lua_pushvalue(gL.T, -1); /* Copy the -1 entry and push it on the stack. */
+ lua_pushvalue(gL.T, -1); /* Copy the -1 entry and push it on the stack. */
+ lua_setfield(gL.T, LUA_REGISTRYINDEX, CLASS_MAP); /* register class Map. */
+ class_map_ref = luaL_ref(gL.T, LUA_REGISTRYINDEX); /* reference class Map. */
+
+ /* Assign the metatable to the main Map object. */
+ lua_setmetatable(gL.T, -2);
+
+ /* Set a name to the table. */
+ lua_setglobal(gL.T, "Map");
+
+ /*
+ *
+ * Register class Channel
+ *
+ */
+
+ /* Create and fill the metatable. */
+ lua_newtable(gL.T);
+
+ /* Create the __tostring identifier */
+ lua_pushstring(gL.T, "__tostring");
+ lua_pushstring(gL.T, CLASS_CHANNEL);
+ lua_pushcclosure(gL.T, hlua_dump_object, 1);
+ lua_rawset(gL.T, -3);
+
+ /* Create and fill the __index entry. */
+ lua_pushstring(gL.T, "__index");
+ lua_newtable(gL.T);
+
+ /* Register Lua functions. */
+ hlua_class_function(gL.T, "get", hlua_channel_get);
+ hlua_class_function(gL.T, "dup", hlua_channel_dup);
+ hlua_class_function(gL.T, "getline", hlua_channel_getline);
+ hlua_class_function(gL.T, "set", hlua_channel_set);
+ hlua_class_function(gL.T, "append", hlua_channel_append);
+ hlua_class_function(gL.T, "send", hlua_channel_send);
+ hlua_class_function(gL.T, "forward", hlua_channel_forward);
+ hlua_class_function(gL.T, "get_in_len", hlua_channel_get_in_len);
+ hlua_class_function(gL.T, "get_out_len", hlua_channel_get_out_len);
+
+ lua_rawset(gL.T, -3);
+
+ /* Register previous table in the registry with reference and named entry. */
+ lua_pushvalue(gL.T, -1); /* Copy the -1 entry and push it on the stack. */
+ lua_setfield(gL.T, LUA_REGISTRYINDEX, CLASS_CHANNEL); /* register class Channel. */
+ class_channel_ref = luaL_ref(gL.T, LUA_REGISTRYINDEX); /* reference class Channel. */
+
+ /*
+ *
+ * Register class Fetches
+ *
+ */
+
+ /* Create and fill the metatable. */
+ lua_newtable(gL.T);
+
+ /* Create the __tostring identifier */
+ lua_pushstring(gL.T, "__tostring");
+ lua_pushstring(gL.T, CLASS_FETCHES);
+ lua_pushcclosure(gL.T, hlua_dump_object, 1);
+ lua_rawset(gL.T, -3);
+
+ /* Create and fill the __index entry. */
+ lua_pushstring(gL.T, "__index");
+ lua_newtable(gL.T);
+
+ /* Browse existing fetches and create the associated
+ * object method.
+ */
+ sf = NULL;
+ while ((sf = sample_fetch_getnext(sf, &idx)) != NULL) {
+
+ /* Don't register the keyword if its argument validation function
+ * is not safe to run at runtime.
+ */
+ if ((sf->val_args != NULL) &&
+ (sf->val_args != val_payload_lv) &&
+ (sf->val_args != val_hdr))
+ continue;
+
+ /* Lua doesn't support '.' and '-' in function names, so replace
+ * them with an underscore.
+ */
+ strncpy(trash.str, sf->kw, trash.size);
+ trash.str[trash.size - 1] = '\0';
+ for (p = trash.str; *p; p++)
+ if (*p == '.' || *p == '-' || *p == '+')
+ *p = '_';
+
+ /* Register the function. */
+ lua_pushstring(gL.T, trash.str);
+ lua_pushlightuserdata(gL.T, sf);
+ lua_pushcclosure(gL.T, hlua_run_sample_fetch, 1);
+ lua_rawset(gL.T, -3);
+ }
+
+ lua_rawset(gL.T, -3);
+
+ /* Register previous table in the registry with reference and named entry. */
+ lua_pushvalue(gL.T, -1); /* Copy the -1 entry and push it on the stack. */
+ lua_setfield(gL.T, LUA_REGISTRYINDEX, CLASS_FETCHES); /* register class Fetches. */
+ class_fetches_ref = luaL_ref(gL.T, LUA_REGISTRYINDEX); /* reference class Fetches. */
+
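The character-mangling loop used above for fetches (and repeated below for converters) can be isolated into a small helper. This is a sketch only; `sanitize_lua_name` is a hypothetical name, not a HAProxy function:

```c
#include <assert.h>
#include <string.h>

/* Map the characters Lua forbids in identifiers ('.', '-' and '+') to
 * underscores, the same transformation hlua_init() applies to sample
 * fetch and converter keywords. Modifies <name> in place. */
static void sanitize_lua_name(char *name)
{
	char *p;

	for (p = name; *p; p++)
		if (*p == '.' || *p == '-' || *p == '+')
			*p = '_';
}
```

With this transformation, a fetch keyword such as `req.fhdr` becomes callable from Lua as `req_fhdr`.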
+ /*
+ *
+ * Register class Converters
+ *
+ */
+
+ /* Create and fill the metatable. */
+ lua_newtable(gL.T);
+
+ /* Create the __tostring identifier */
+ lua_pushstring(gL.T, "__tostring");
+ lua_pushstring(gL.T, CLASS_CONVERTERS);
+ lua_pushcclosure(gL.T, hlua_dump_object, 1);
+ lua_rawset(gL.T, -3);
+
+ /* Create and fill the __index entry. */
+ lua_pushstring(gL.T, "__index");
+ lua_newtable(gL.T);
+
+ /* Browse existing converters and create the associated
+ * object method.
+ */
+ sc = NULL;
+ while ((sc = sample_conv_getnext(sc, &idx)) != NULL) {
+ /* Don't register the keyword if its argument validation function
+ * is not safe to run at runtime.
+ */
+ if (sc->val_args != NULL)
+ continue;
+
+ /* Lua doesn't support '.' and '-' in function names, so replace
+ * them with an underscore.
+ */
+ strncpy(trash.str, sc->kw, trash.size);
+ trash.str[trash.size - 1] = '\0';
+ for (p = trash.str; *p; p++)
+ if (*p == '.' || *p == '-' || *p == '+')
+ *p = '_';
+
+ /* Register the function. */
+ lua_pushstring(gL.T, trash.str);
+ lua_pushlightuserdata(gL.T, sc);
+ lua_pushcclosure(gL.T, hlua_run_sample_conv, 1);
+ lua_rawset(gL.T, -3);
+ }
+
+ lua_rawset(gL.T, -3);
+
+ /* Register previous table in the registry with reference and named entry. */
+ lua_pushvalue(gL.T, -1); /* Copy the -1 entry and push it on the stack. */
+ lua_setfield(gL.T, LUA_REGISTRYINDEX, CLASS_CONVERTERS); /* register class Converters. */
+ class_converters_ref = luaL_ref(gL.T, LUA_REGISTRYINDEX); /* reference class Converters. */
+
+ /*
+ *
+ * Register class HTTP
+ *
+ */
+
+ /* Create and fill the metatable. */
+ lua_newtable(gL.T);
+
+ /* Create the __tostring identifier */
+ lua_pushstring(gL.T, "__tostring");
+ lua_pushstring(gL.T, CLASS_HTTP);
+ lua_pushcclosure(gL.T, hlua_dump_object, 1);
+ lua_rawset(gL.T, -3);
+
+ /* Create and fill the __index entry. */
+ lua_pushstring(gL.T, "__index");
+ lua_newtable(gL.T);
+
+ /* Register Lua functions. */
+ hlua_class_function(gL.T, "req_get_headers",hlua_http_req_get_headers);
+ hlua_class_function(gL.T, "req_del_header", hlua_http_req_del_hdr);
+ hlua_class_function(gL.T, "req_rep_header", hlua_http_req_rep_hdr);
+ hlua_class_function(gL.T, "req_rep_value", hlua_http_req_rep_val);
+ hlua_class_function(gL.T, "req_add_header", hlua_http_req_add_hdr);
+ hlua_class_function(gL.T, "req_set_header", hlua_http_req_set_hdr);
+ hlua_class_function(gL.T, "req_set_method", hlua_http_req_set_meth);
+ hlua_class_function(gL.T, "req_set_path", hlua_http_req_set_path);
+ hlua_class_function(gL.T, "req_set_query", hlua_http_req_set_query);
+ hlua_class_function(gL.T, "req_set_uri", hlua_http_req_set_uri);
+
+ hlua_class_function(gL.T, "res_get_headers",hlua_http_res_get_headers);
+ hlua_class_function(gL.T, "res_del_header", hlua_http_res_del_hdr);
+ hlua_class_function(gL.T, "res_rep_header", hlua_http_res_rep_hdr);
+ hlua_class_function(gL.T, "res_rep_value", hlua_http_res_rep_val);
+ hlua_class_function(gL.T, "res_add_header", hlua_http_res_add_hdr);
+ hlua_class_function(gL.T, "res_set_header", hlua_http_res_set_hdr);
+ hlua_class_function(gL.T, "res_set_status", hlua_http_res_set_status);
+
+ lua_rawset(gL.T, -3);
+
+ /* Register previous table in the registry with reference and named entry. */
+ lua_pushvalue(gL.T, -1); /* Copy the -1 entry and push it on the stack. */
+ lua_setfield(gL.T, LUA_REGISTRYINDEX, CLASS_HTTP); /* register class HTTP. */
+ class_http_ref = luaL_ref(gL.T, LUA_REGISTRYINDEX); /* reference class HTTP. */
+
+ /*
+ *
+ * Register class AppletTCP
+ *
+ */
+
+ /* Create and fill the metatable. */
+ lua_newtable(gL.T);
+
+ /* Create the __tostring identifier */
+ lua_pushstring(gL.T, "__tostring");
+ lua_pushstring(gL.T, CLASS_APPLET_TCP);
+ lua_pushcclosure(gL.T, hlua_dump_object, 1);
+ lua_rawset(gL.T, -3);
+
+ /* Create and fill the __index entry. */
+ lua_pushstring(gL.T, "__index");
+ lua_newtable(gL.T);
+
+ /* Register Lua functions. */
+ hlua_class_function(gL.T, "getline", hlua_applet_tcp_getline);
+ hlua_class_function(gL.T, "receive", hlua_applet_tcp_recv);
+ hlua_class_function(gL.T, "send", hlua_applet_tcp_send);
+
+ lua_settable(gL.T, -3);
+
+ /* Register previous table in the registry with reference and named entry. */
+ lua_pushvalue(gL.T, -1); /* Copy the -1 entry and push it on the stack. */
+ lua_setfield(gL.T, LUA_REGISTRYINDEX, CLASS_APPLET_TCP); /* register class AppletTCP. */
+ class_applet_tcp_ref = luaL_ref(gL.T, LUA_REGISTRYINDEX); /* reference class AppletTCP. */
+
+ /*
+ *
+ * Register class AppletHTTP
+ *
+ */
+
+ /* Create and fill the metatable. */
+ lua_newtable(gL.T);
+
+ /* Create the __tostring identifier */
+ lua_pushstring(gL.T, "__tostring");
+ lua_pushstring(gL.T, CLASS_APPLET_HTTP);
+ lua_pushcclosure(gL.T, hlua_dump_object, 1);
+ lua_rawset(gL.T, -3);
+
+ /* Create and fill the __index entry. */
+ lua_pushstring(gL.T, "__index");
+ lua_newtable(gL.T);
+
+ /* Register Lua functions. */
+ hlua_class_function(gL.T, "getline", hlua_applet_http_getline);
+ hlua_class_function(gL.T, "receive", hlua_applet_http_recv);
+ hlua_class_function(gL.T, "send", hlua_applet_http_send);
+ hlua_class_function(gL.T, "add_header", hlua_applet_http_addheader);
+ hlua_class_function(gL.T, "set_status", hlua_applet_http_status);
+ hlua_class_function(gL.T, "start_response", hlua_applet_http_start_response);
+
+ lua_settable(gL.T, -3);
+
+ /* Register previous table in the registry with reference and named entry. */
+ lua_pushvalue(gL.T, -1); /* Copy the -1 entry and push it on the stack. */
+ lua_setfield(gL.T, LUA_REGISTRYINDEX, CLASS_APPLET_HTTP); /* register class AppletHTTP. */
+ class_applet_http_ref = luaL_ref(gL.T, LUA_REGISTRYINDEX); /* reference class AppletHTTP. */
+
+ /*
+ *
+ * Register class TXN
+ *
+ */
+
+ /* Create and fill the metatable. */
+ lua_newtable(gL.T);
+
+ /* Create the __tostring identifier */
+ lua_pushstring(gL.T, "__tostring");
+ lua_pushstring(gL.T, CLASS_TXN);
+ lua_pushcclosure(gL.T, hlua_dump_object, 1);
+ lua_rawset(gL.T, -3);
+
+ /* Create and fill the __index entry. */
+ lua_pushstring(gL.T, "__index");
+ lua_newtable(gL.T);
+
+ /* Register Lua functions. */
+ hlua_class_function(gL.T, "set_priv", hlua_set_priv);
+ hlua_class_function(gL.T, "get_priv", hlua_get_priv);
+ hlua_class_function(gL.T, "set_var", hlua_set_var);
+ hlua_class_function(gL.T, "get_var", hlua_get_var);
+ hlua_class_function(gL.T, "done", hlua_txn_done);
+ hlua_class_function(gL.T, "set_loglevel",hlua_txn_set_loglevel);
+ hlua_class_function(gL.T, "set_tos", hlua_txn_set_tos);
+ hlua_class_function(gL.T, "set_mark", hlua_txn_set_mark);
+ hlua_class_function(gL.T, "deflog", hlua_txn_deflog);
+ hlua_class_function(gL.T, "log", hlua_txn_log);
+ hlua_class_function(gL.T, "Debug", hlua_txn_log_debug);
+ hlua_class_function(gL.T, "Info", hlua_txn_log_info);
+ hlua_class_function(gL.T, "Warning", hlua_txn_log_warning);
+ hlua_class_function(gL.T, "Alert", hlua_txn_log_alert);
+
+ lua_rawset(gL.T, -3);
+
+ /* Register previous table in the registry with reference and named entry. */
+ lua_pushvalue(gL.T, -1); /* Copy the -1 entry and push it on the stack. */
+ lua_setfield(gL.T, LUA_REGISTRYINDEX, CLASS_TXN); /* register class TXN. */
+ class_txn_ref = luaL_ref(gL.T, LUA_REGISTRYINDEX); /* reference class TXN. */
+
+ /*
+ *
+ * Register class Socket
+ *
+ */
+
+ /* Create and fill the metatable. */
+ lua_newtable(gL.T);
+
+ /* Create the __tostring identifier */
+ lua_pushstring(gL.T, "__tostring");
+ lua_pushstring(gL.T, CLASS_SOCKET);
+ lua_pushcclosure(gL.T, hlua_dump_object, 1);
+ lua_rawset(gL.T, -3);
+
+ /* Create and fill the __index entry. */
+ lua_pushstring(gL.T, "__index");
+ lua_newtable(gL.T);
+
+#ifdef USE_OPENSSL
+ hlua_class_function(gL.T, "connect_ssl", hlua_socket_connect_ssl);
+#endif
+ hlua_class_function(gL.T, "connect", hlua_socket_connect);
+ hlua_class_function(gL.T, "send", hlua_socket_send);
+ hlua_class_function(gL.T, "receive", hlua_socket_receive);
+ hlua_class_function(gL.T, "close", hlua_socket_close);
+ hlua_class_function(gL.T, "getpeername", hlua_socket_getpeername);
+ hlua_class_function(gL.T, "getsockname", hlua_socket_getsockname);
+ hlua_class_function(gL.T, "setoption", hlua_socket_setoption);
+ hlua_class_function(gL.T, "settimeout", hlua_socket_settimeout);
+
+ lua_rawset(gL.T, -3); /* Store the last 2 entries into the table at index -3 */
+
+ /* Register the garbage collector entry. */
+ lua_pushstring(gL.T, "__gc");
+ lua_pushcclosure(gL.T, hlua_socket_gc, 0);
+ lua_rawset(gL.T, -3); /* Store the last 2 entries into the table at index -3 */
+
+ /* Register previous table in the registry with reference and named entry. */
+ lua_pushvalue(gL.T, -1); /* Copy the -1 entry and push it on the stack. */
+ lua_pushvalue(gL.T, -1); /* Copy the -1 entry and push it on the stack. */
+ lua_setfield(gL.T, LUA_REGISTRYINDEX, CLASS_SOCKET); /* register class socket. */
+ class_socket_ref = luaL_ref(gL.T, LUA_REGISTRYINDEX); /* reference class socket. */
+
+ /* Proxy and server configuration initialisation. */
+ memset(&socket_proxy, 0, sizeof(socket_proxy));
+ init_new_proxy(&socket_proxy);
+ socket_proxy.parent = NULL;
+ socket_proxy.last_change = now.tv_sec;
+ socket_proxy.id = "LUA-SOCKET";
+ socket_proxy.cap = PR_CAP_FE | PR_CAP_BE;
+ socket_proxy.maxconn = 0;
+ socket_proxy.accept = NULL;
+ socket_proxy.options2 |= PR_O2_INDEPSTR;
+ socket_proxy.srv = NULL;
+ socket_proxy.conn_retries = 0;
+ socket_proxy.timeout.connect = 5000; /* By default the timeout connection is 5s. */
+
+ /* Init TCP server: unchanged parameters */
+ memset(&socket_tcp, 0, sizeof(socket_tcp));
+ socket_tcp.next = NULL;
+ socket_tcp.proxy = &socket_proxy;
+ socket_tcp.obj_type = OBJ_TYPE_SERVER;
+ LIST_INIT(&socket_tcp.actconns);
+ LIST_INIT(&socket_tcp.pendconns);
+ LIST_INIT(&socket_tcp.priv_conns);
+ LIST_INIT(&socket_tcp.idle_conns);
+ LIST_INIT(&socket_tcp.safe_conns);
+ socket_tcp.state = SRV_ST_RUNNING; /* early server setup */
+ socket_tcp.last_change = 0;
+ socket_tcp.id = "LUA-TCP-CONN";
+ socket_tcp.check.state &= ~CHK_ST_ENABLED; /* Disable health checks. */
+ socket_tcp.agent.state &= ~CHK_ST_ENABLED; /* Disable health checks. */
+ socket_tcp.pp_opts = 0; /* Remove proxy protocol. */
+
+ /* XXX: Copy default parameters from the default server, even though
+ * the default server is not initialized yet.
+ */
+ socket_tcp.maxqueue = socket_proxy.defsrv.maxqueue;
+ socket_tcp.minconn = socket_proxy.defsrv.minconn;
+ socket_tcp.maxconn = socket_proxy.defsrv.maxconn;
+ socket_tcp.slowstart = socket_proxy.defsrv.slowstart;
+ socket_tcp.onerror = socket_proxy.defsrv.onerror;
+ socket_tcp.onmarkeddown = socket_proxy.defsrv.onmarkeddown;
+ socket_tcp.onmarkedup = socket_proxy.defsrv.onmarkedup;
+ socket_tcp.consecutive_errors_limit = socket_proxy.defsrv.consecutive_errors_limit;
+ socket_tcp.uweight = socket_proxy.defsrv.iweight;
+ socket_tcp.iweight = socket_proxy.defsrv.iweight;
+
+ socket_tcp.check.status = HCHK_STATUS_INI;
+ socket_tcp.check.rise = socket_proxy.defsrv.check.rise;
+ socket_tcp.check.fall = socket_proxy.defsrv.check.fall;
+ socket_tcp.check.health = socket_tcp.check.rise; /* socket, but will fall down at first failure */
+ socket_tcp.check.server = &socket_tcp;
+
+ socket_tcp.agent.status = HCHK_STATUS_INI;
+ socket_tcp.agent.rise = socket_proxy.defsrv.agent.rise;
+ socket_tcp.agent.fall = socket_proxy.defsrv.agent.fall;
+ socket_tcp.agent.health = socket_tcp.agent.rise; /* socket, but will fall down at first failure */
+ socket_tcp.agent.server = &socket_tcp;
+
+ socket_tcp.xprt = &raw_sock;
+
+#ifdef USE_OPENSSL
+ /* Init TCP server: unchanged parameters */
+ memset(&socket_ssl, 0, sizeof(socket_ssl));
+ socket_ssl.next = NULL;
+ socket_ssl.proxy = &socket_proxy;
+ socket_ssl.obj_type = OBJ_TYPE_SERVER;
+ LIST_INIT(&socket_ssl.actconns);
+ LIST_INIT(&socket_ssl.pendconns);
+ LIST_INIT(&socket_ssl.priv_conns);
+ LIST_INIT(&socket_ssl.idle_conns);
+ LIST_INIT(&socket_ssl.safe_conns);
+ socket_ssl.state = SRV_ST_RUNNING; /* early server setup */
+ socket_ssl.last_change = 0;
+ socket_ssl.id = "LUA-SSL-CONN";
+ socket_ssl.check.state &= ~CHK_ST_ENABLED; /* Disable health checks. */
+ socket_ssl.agent.state &= ~CHK_ST_ENABLED; /* Disable health checks. */
+ socket_ssl.pp_opts = 0; /* Remove proxy protocol. */
+
+ /* XXX: Copy default parameters from the default server, even though
+ * the default server is not initialized yet.
+ */
+ socket_ssl.maxqueue = socket_proxy.defsrv.maxqueue;
+ socket_ssl.minconn = socket_proxy.defsrv.minconn;
+ socket_ssl.maxconn = socket_proxy.defsrv.maxconn;
+ socket_ssl.slowstart = socket_proxy.defsrv.slowstart;
+ socket_ssl.onerror = socket_proxy.defsrv.onerror;
+ socket_ssl.onmarkeddown = socket_proxy.defsrv.onmarkeddown;
+ socket_ssl.onmarkedup = socket_proxy.defsrv.onmarkedup;
+ socket_ssl.consecutive_errors_limit = socket_proxy.defsrv.consecutive_errors_limit;
+ socket_ssl.uweight = socket_proxy.defsrv.iweight;
+ socket_ssl.iweight = socket_proxy.defsrv.iweight;
+
+ socket_ssl.check.status = HCHK_STATUS_INI;
+ socket_ssl.check.rise = socket_proxy.defsrv.check.rise;
+ socket_ssl.check.fall = socket_proxy.defsrv.check.fall;
+ socket_ssl.check.health = socket_ssl.check.rise; /* socket, but will fall down at first failure */
+ socket_ssl.check.server = &socket_ssl;
+
+ socket_ssl.agent.status = HCHK_STATUS_INI;
+ socket_ssl.agent.rise = socket_proxy.defsrv.agent.rise;
+ socket_ssl.agent.fall = socket_proxy.defsrv.agent.fall;
+ socket_ssl.agent.health = socket_ssl.agent.rise; /* socket, but will fall down at first failure */
+ socket_ssl.agent.server = &socket_ssl;
+
+ socket_ssl.use_ssl = 1;
+ socket_ssl.xprt = &ssl_sock;
+
+ for (idx = 0; args[idx] != NULL; idx++) {
+ if ((kw = srv_find_kw(args[idx])) != NULL) { /* maybe it's a registered server keyword */
+ /*
+ *
+ * If the keyword is not known here, we can search for it among the
+ * registered server keywords. This is useful to configure special
+ * SSL features like client certificates and ssl_verify.
+ *
+ */
+ tmp_error = kw->parse(args, &idx, &socket_proxy, &socket_ssl, &error);
+ if (tmp_error != 0) {
+ fprintf(stderr, "INTERNAL ERROR: %s\n", error);
+ abort(); /* This must never happen, because the command
+ line is not editable by the user. */
+ }
+ idx += kw->skip;
+ }
+ }
+
+ /* Initialize SSL server. */
+ ssl_sock_prepare_srv_ctx(&socket_ssl, &socket_proxy);
+#endif
+}
--- /dev/null
+/*
+ * Fast system call support for x86 on Linux
+ *
+ * Copyright 2010 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * Recent kernels support a faster syscall ABI on x86 using the VDSO page,
+ * but some libc builds targeting CPUs earlier than i686 do not implement
+ * it. This code bypasses the libc when the VDSO is detected. It should
+ * only be used when it is certain that the libc really does not support
+ * the VDSO; fixing the libc is preferred. Using the VDSO can improve
+ * overall performance by about 10%.
+ */
+
+#if defined(__linux__) && defined(__i386__)
+/* Silently ignore other platforms to be friendly with distro packagers */
+
+#include <dlfcn.h>
+#include <sys/mman.h>
+
+void int80(void); /* declared in the assembler code */
+static void *vsyscall = &int80; /* initialize vsyscall to use int80 by default */
+static __attribute__((used)) unsigned int back_ebx;
+
+/* Now we redefine some frequently used syscalls. epoll_create is also
+ * defined, in order to replace old disabled implementations.
+ */
+asm
+(
+ "epoll_create: .GLOBL epoll_create\n"
+ " mov $0xfe, %eax\n"
+ " mov %ebx, back_ebx\n"
+ " mov 4(%esp), %ebx\n"
+ " jmp do_syscall\n"
+
+ "epoll_ctl: .GLOBL epoll_ctl\n"
+ " push %esi\n"
+ " mov $0xff, %eax\n"
+ " mov %ebx, back_ebx\n"
+ " mov 20(%esp), %esi\n"
+ " mov 16(%esp), %edx\n"
+ " mov 12(%esp), %ecx\n"
+ " mov 8(%esp), %ebx\n"
+ " call do_syscall\n"
+ " pop %esi\n"
+ " ret\n"
+
+ "epoll_wait: .GLOBL epoll_wait\n"
+ " push %esi\n"
+ " mov $0x100, %eax\n"
+ " mov %ebx, back_ebx\n"
+ " mov 20(%esp), %esi\n"
+ " mov 16(%esp), %edx\n"
+ " mov 12(%esp), %ecx\n"
+ " mov 8(%esp), %ebx\n"
+ " call do_syscall\n"
+ " pop %esi\n"
+ " ret\n"
+
+ "splice: .GLOBL splice\n"
+ " push %ebp\n"
+ " push %edi\n"
+ " push %esi\n"
+ " mov $0x139, %eax\n"
+ " mov %ebx, back_ebx\n"
+ " mov 36(%esp), %ebp\n"
+ " mov 32(%esp), %edi\n"
+ " mov 28(%esp), %esi\n"
+ " mov 24(%esp), %edx\n"
+ " mov 20(%esp), %ecx\n"
+ " mov 16(%esp), %ebx\n"
+ " call do_syscall\n"
+ " pop %esi\n"
+ " pop %edi\n"
+ " pop %ebp\n"
+ " ret\n"
+
+ "close: .GLOBL close\n"
+ " mov $0x06, %eax\n"
+ " mov %ebx, back_ebx\n"
+ " mov 4(%esp), %ebx\n"
+ " jmp do_syscall\n"
+
+ "gettimeofday: .GLOBL gettimeofday\n"
+ " mov $0x4e, %eax\n"
+ " mov %ebx, back_ebx\n"
+ " mov 8(%esp), %ecx\n"
+ " mov 4(%esp), %ebx\n"
+ " jmp do_syscall\n"
+
+ "fcntl: .GLOBL fcntl\n"
+ " mov $0xdd, %eax\n"
+ " mov %ebx, back_ebx\n"
+ " mov 12(%esp), %edx\n"
+ " mov 8(%esp), %ecx\n"
+ " mov 4(%esp), %ebx\n"
+ " jmp do_syscall\n"
+
+ "socket: .GLOBL socket\n"
+ " mov $0x01, %eax\n"
+ " jmp socketcall\n"
+
+ "bind: .GLOBL bind\n"
+ " mov $0x02, %eax\n"
+ " jmp socketcall\n"
+
+ "connect: .GLOBL connect\n"
+ " mov $0x03, %eax\n"
+ " jmp socketcall\n"
+
+ "listen: .GLOBL listen\n"
+ " mov $0x04, %eax\n"
+ " jmp socketcall\n"
+
+ "accept: .GLOBL accept\n"
+ " mov $0x05, %eax\n"
+ " jmp socketcall\n"
+
+ "accept4: .GLOBL accept4\n"
+ " mov $0x12, %eax\n"
+ " jmp socketcall\n"
+
+ "getsockname: .GLOBL getsockname\n"
+ " mov $0x06, %eax\n"
+ " jmp socketcall\n"
+
+ "send: .GLOBL send\n"
+ " mov $0x09, %eax\n"
+ " jmp socketcall\n"
+
+ "recv: .GLOBL recv\n"
+ " mov $0x0a, %eax\n"
+ " jmp socketcall\n"
+
+ "shutdown: .GLOBL shutdown\n"
+ " mov $0x0d, %eax\n"
+ " jmp socketcall\n"
+
+ "setsockopt: .GLOBL setsockopt\n"
+ " mov $0x0e, %eax\n"
+ " jmp socketcall\n"
+
+ "getsockopt: .GLOBL getsockopt\n"
+ " mov $0x0f, %eax\n"
+ " jmp socketcall\n"
+
+ "socketcall:\n"
+ " mov %ebx, back_ebx\n"
+ " mov %eax, %ebx\n"
+ " mov $0x66, %eax\n"
+ " lea 4(%esp), %ecx\n"
+ /* fall through */
+
+ "do_syscall:\n"
+ " call *vsyscall\n" // always valid, may be int80 or vsyscall
+ " mov back_ebx, %ebx\n"
+ " cmpl $0xfffff000, %eax\n" // consider -4096..-1 for errno
+ " jae 0f\n"
+ " ret\n"
+ "0:\n" // error handling
+ " neg %eax\n" // get errno value
+ " push %eax\n" // save it
+ " call __errno_location\n"
+ " popl (%eax)\n" // store the pushed errno into the proper location
+ " mov $-1, %eax\n" // and return -1
+ " ret\n"
+
+ "int80:\n" // default compatible calling convention
+ " int $0x80\n"
+ " ret\n"
+);
+
+__attribute__((constructor))
+static void __i386_linux_vsyscall_init(void)
+{
+ /* We can get the pointer by resolving the __kernel_vsyscall symbol
+ * from the "linux-gate.so.1" virtual shared object, but this requires
+ * libdl. Alternatively, we know that the vsyscall pointer is always
+ * located at 0xFFFFE018 when /proc/sys/abi/vsyscall32 contains the
+ * default value 2, so we can use that once we have checked that we
+ * can access it without faulting. The dlsym method will also work
+ * when vsyscall32 = 1, which randomizes the VDSO address.
+ */
+#ifdef USE_VSYSCALL_DLSYM
+ void *handle = dlopen("linux-gate.so.1", RTLD_NOW);
+ if (handle) {
+ void *ptr;
+
+ ptr = dlsym(handle, "__kernel_vsyscall_kml");
+ if (!ptr)
+ ptr = dlsym(handle, "__kernel_vsyscall");
+ if (ptr)
+ vsyscall = ptr;
+ dlclose(handle);
+ }
+#else
+ /* Heuristic: trying to mprotect() the VDSO area will only succeed if
+ * it is mapped.
+ */
+ if (mprotect((void *)0xffffe000, 4096, PROT_READ|PROT_EXEC) == 0) {
+ unsigned long ptr = *(unsigned long *)0xFFFFE018; /* VDSO is mapped */
+ if ((ptr & 0xFFFFE000) == 0xFFFFE000)
+ vsyscall = (void *)ptr;
+ }
+#endif
+}
+
+#endif /* defined(__linux__) && defined(__i386__) */
--- /dev/null
+/*
+ * Consistent Hash implementation
+ * Please consult this well-detailed article for more information:
+ * http://www.spiteful.com/2008/03/17/programmers-toolbox-part-3-consistent-hashing/
+ *
+ * Our implementation has to support both weighted hashing and weighted round
+ * robin because we'll use it to replace the previous map-based implementation
+ * which offered both algorithms.
+ *
+ * Copyright 2000-2010 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
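As a companion to the article referenced above, the core idea can be sketched with a sorted array of hash points rather than HAProxy's eb32 tree. `ring_point` and `ring_lookup` are illustrative names only, not part of this file:

```c
#include <assert.h>

/* Illustrative consistent-hash ring: points are hash positions sorted in
 * ascending order, each owned by a server. A key is served by the first
 * point whose hash is >= the key, wrapping around to the start of the
 * ring when the key is past the last point. */
struct ring_point {
	unsigned hash;  /* position on the ring */
	int server;     /* owning server id */
};

static int ring_lookup(const struct ring_point *pts, int n, unsigned key)
{
	int i;

	for (i = 0; i < n; i++)
		if (pts[i].hash >= key)
			return pts[i].server;
	return pts[0].server;  /* wrapped past the last point */
}
```

Weighting falls out naturally: a server with weight W simply contributes W points to the ring, which is exactly what the lb_nodes bookkeeping below maintains.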
+
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/debug.h>
+#include <common/standard.h>
+#include <eb32tree.h>
+
+#include <types/global.h>
+#include <types/server.h>
+
+#include <proto/backend.h>
+#include <proto/queue.h>
+
+/* Return next tree node after <node> which must still be in the tree, or be
+ * NULL. Lookup wraps around the end to the beginning. If the next node is the
+ * same node, return NULL. This is designed to find a valid next node before
+ * deleting one from the tree.
+ */
+static inline struct eb32_node *chash_skip_node(struct eb_root *root, struct eb32_node *node)
+{
+ struct eb32_node *stop = node;
+
+ if (!node)
+ return NULL;
+ node = eb32_next(node);
+ if (!node)
+ node = eb32_first(root);
+ if (node == stop)
+ return NULL;
+ return node;
+}
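chash_skip_node()'s contract (next element with wrap-around, NULL when the element is alone or absent) is easiest to see on a circular singly-linked list, used here as a simplified stand-in for the eb32 tree; `cnode` and `skip_node` are hypothetical names:

```c
#include <assert.h>
#include <stddef.h>

struct cnode { struct cnode *next; };  /* circular singly-linked list */

/* Return the element after <node>, wrapping around, or NULL if <node>
 * is NULL or is the only element -- the same contract as
 * chash_skip_node() above. */
static struct cnode *skip_node(struct cnode *node)
{
	if (!node)
		return NULL;
	if (node->next == node)  /* alone on the ring */
		return NULL;
	return node->next;
}
```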
+
+/* Remove all of a server's entries from its tree. This may be used when
+ * setting a server down.
+ */
+static inline void chash_dequeue_srv(struct server *s)
+{
+ while (s->lb_nodes_now > 0) {
+ if (s->lb_nodes_now >= s->lb_nodes_tot) // should always be false anyway
+ s->lb_nodes_now = s->lb_nodes_tot;
+ s->lb_nodes_now--;
+ if (s->proxy->lbprm.chash.last == &s->lb_nodes[s->lb_nodes_now].node)
+ s->proxy->lbprm.chash.last = chash_skip_node(s->lb_tree, s->proxy->lbprm.chash.last);
+ eb32_delete(&s->lb_nodes[s->lb_nodes_now].node);
+ }
+}
+
+/* Adjust the number of entries of a server in its tree. The server must appear
+ * as many times as its weight indicates. If it appears too often, we remove
+ * the last occurrences. If it does not appear often enough, we add more
+ * occurrences. To remove a server from the tree, normally call this with
+ * eweight=0.
+ */
+static inline void chash_queue_dequeue_srv(struct server *s)
+{
+ while (s->lb_nodes_now > s->eweight) {
+ if (s->lb_nodes_now >= s->lb_nodes_tot) // should always be false anyway
+ s->lb_nodes_now = s->lb_nodes_tot;
+ s->lb_nodes_now--;
+ if (s->proxy->lbprm.chash.last == &s->lb_nodes[s->lb_nodes_now].node)
+ s->proxy->lbprm.chash.last = chash_skip_node(s->lb_tree, s->proxy->lbprm.chash.last);
+ eb32_delete(&s->lb_nodes[s->lb_nodes_now].node);
+ }
+
+ while (s->lb_nodes_now < s->eweight) {
+ if (s->lb_nodes_now >= s->lb_nodes_tot) // should always be false anyway
+ break;
+ if (s->proxy->lbprm.chash.last == &s->lb_nodes[s->lb_nodes_now].node)
+ s->proxy->lbprm.chash.last = chash_skip_node(s->lb_tree, s->proxy->lbprm.chash.last);
+ eb32_insert(s->lb_tree, &s->lb_nodes[s->lb_nodes_now].node);
+ s->lb_nodes_now++;
+ }
+}
+
+/* This function updates the server trees according to server <srv>'s new
+ * state. It should be called when server <srv>'s status changes to down.
+ * It is not important whether the server was already down or not. It is not
+ * important either that the new state is completely down (the caller may not
+ * know all the variables of a server's state).
+ */
+static void chash_set_server_status_down(struct server *srv)
+{
+ struct proxy *p = srv->proxy;
+
+ if (!srv_lb_status_changed(srv))
+ return;
+
+ if (srv_is_usable(srv))
+ goto out_update_state;
+
+ if (!srv_was_usable(srv))
+ /* server was already down */
+ goto out_update_backend;
+
+ if (srv->flags & SRV_F_BACKUP) {
+ p->lbprm.tot_wbck -= srv->prev_eweight;
+ p->srv_bck--;
+
+ if (srv == p->lbprm.fbck) {
+ /* we lost the first backup server in a single-backup
+ * configuration, we must search another one.
+ */
+ struct server *srv2 = p->lbprm.fbck;
+ do {
+ srv2 = srv2->next;
+ } while (srv2 &&
+ !((srv2->flags & SRV_F_BACKUP) &&
+ srv_is_usable(srv2)));
+ p->lbprm.fbck = srv2;
+ }
+ } else {
+ p->lbprm.tot_wact -= srv->prev_eweight;
+ p->srv_act--;
+ }
+
+ chash_dequeue_srv(srv);
+
+ out_update_backend:
+ /* check/update tot_used, tot_weight */
+ update_backend_weight(p);
+ out_update_state:
+ srv_lb_commit_status(srv);
+}
+
+/* This function updates the server trees according to server <srv>'s new
+ * state. It should be called when server <srv>'s status changes to up.
+ * It is not important whether the server was already down or not. It is not
+ * important either that the new state is completely UP (the caller may not
+ * know all the variables of a server's state). This function will not change
+ * the weight of a server which was already up.
+ */
+static void chash_set_server_status_up(struct server *srv)
+{
+ struct proxy *p = srv->proxy;
+
+ if (!srv_lb_status_changed(srv))
+ return;
+
+ if (!srv_is_usable(srv))
+ goto out_update_state;
+
+ if (srv_was_usable(srv))
+ /* server was already up */
+ goto out_update_backend;
+
+ if (srv->flags & SRV_F_BACKUP) {
+ p->lbprm.tot_wbck += srv->eweight;
+ p->srv_bck++;
+
+ if (!(p->options & PR_O_USE_ALL_BK)) {
+ if (!p->lbprm.fbck) {
+ /* there was no backup server anymore */
+ p->lbprm.fbck = srv;
+ } else {
+ /* we may have restored a backup server prior to fbck,
+ * in which case it should replace it.
+ */
+ struct server *srv2 = srv;
+ do {
+ srv2 = srv2->next;
+ } while (srv2 && (srv2 != p->lbprm.fbck));
+ if (srv2)
+ p->lbprm.fbck = srv;
+ }
+ }
+ } else {
+ p->lbprm.tot_wact += srv->eweight;
+ p->srv_act++;
+ }
+
+ /* note that eweight cannot be 0 here */
+ chash_queue_dequeue_srv(srv);
+
+ out_update_backend:
+ /* check/update tot_used, tot_weight */
+ update_backend_weight(p);
+ out_update_state:
+ srv_lb_commit_status(srv);
+}
+
+/* This function must be called after an update to server <srv>'s effective
+ * weight. It may be called after a state change too.
+ */
+static void chash_update_server_weight(struct server *srv)
+{
+ int old_state, new_state;
+ struct proxy *p = srv->proxy;
+
+ if (!srv_lb_status_changed(srv))
+ return;
+
+ /* If changing the server's weight changes its state, we simply apply
+ * the procedures we already have for status change. If the state
+ * remains down, the server is not in any tree, so it's as easy as
+ * updating its values. If the state remains up with different weights,
+ * there are some computations to perform to find a new place and
+ * possibly a new tree for this server.
+ */
+
+ old_state = srv_was_usable(srv);
+ new_state = srv_is_usable(srv);
+
+ if (!old_state && !new_state) {
+ srv_lb_commit_status(srv);
+ return;
+ }
+ else if (!old_state && new_state) {
+ chash_set_server_status_up(srv);
+ return;
+ }
+ else if (old_state && !new_state) {
+ chash_set_server_status_down(srv);
+ return;
+ }
+
+ /* only adjust the server's presence in the tree */
+ chash_queue_dequeue_srv(srv);
+
+ if (srv->flags & SRV_F_BACKUP)
+ p->lbprm.tot_wbck += srv->eweight - srv->prev_eweight;
+ else
+ p->lbprm.tot_wact += srv->eweight - srv->prev_eweight;
+
+ update_backend_weight(p);
+ srv_lb_commit_status(srv);
+}
+
+/*
+ * This function returns the running server from the CHASH tree, which is at
+ * the closest distance from the value of <hash>. Doing so ensures that even
+ * with a badly imbalanced hash, if some servers are close to each other, they
+ * will still both receive traffic. If any server is found, it will be returned.
+ * If no valid server is found, NULL is returned.
+ */
+struct server *chash_get_server_hash(struct proxy *p, unsigned int hash)
+{
+ struct eb32_node *next, *prev;
+ struct server *nsrv, *psrv;
+ struct eb_root *root;
+ unsigned int dn, dp;
+
+ if (p->srv_act)
+ root = &p->lbprm.chash.act;
+ else if (p->lbprm.fbck)
+ return p->lbprm.fbck;
+ else if (p->srv_bck)
+ root = &p->lbprm.chash.bck;
+ else
+ return NULL;
+
+ /* find the node after and the node before */
+ next = eb32_lookup_ge(root, hash);
+ if (!next)
+ next = eb32_first(root);
+ if (!next)
+ return NULL; /* tree is empty */
+
+ prev = eb32_prev(next);
+ if (!prev)
+ prev = eb32_last(root);
+
+ nsrv = eb32_entry(next, struct tree_occ, node)->server;
+ psrv = eb32_entry(prev, struct tree_occ, node)->server;
+ if (nsrv == psrv)
+ return nsrv;
+
+ /* OK we're located between two distinct servers, let's
+ * compare distances between hash and the two servers
+ * and select the closest server.
+ */
+ dp = hash - prev->key;
+ dn = next->key - hash;
+
+ return (dp <= dn) ? psrv : nsrv;
+}
+
+/* Return next server from the CHASH tree in backend <p>. If the tree is empty,
+ * return NULL. Saturated servers are skipped.
+ */
+struct server *chash_get_next_server(struct proxy *p, struct server *srvtoavoid)
+{
+ struct server *srv, *avoided;
+ struct eb32_node *node, *stop, *avoided_node;
+ struct eb_root *root;
+
+ srv = avoided = NULL;
+ avoided_node = NULL;
+
+ if (p->srv_act)
+ root = &p->lbprm.chash.act;
+ else if (p->lbprm.fbck)
+ return p->lbprm.fbck;
+ else if (p->srv_bck)
+ root = &p->lbprm.chash.bck;
+ else
+ return NULL;
+
+ stop = node = p->lbprm.chash.last;
+ do {
+ struct server *s;
+
+ if (node)
+ node = eb32_next(node);
+ if (!node)
+ node = eb32_first(root);
+
+ p->lbprm.chash.last = node;
+ if (!node)
+ /* no node is available */
+ return NULL;
+
+ /* Note: if we came here after a down/up cycle with no last
+ * pointer, and after a redispatch (srvtoavoid is set), we
+ * must set stop to non-null otherwise we can loop forever.
+ */
+ if (!stop)
+ stop = node;
+
+ /* OK, we have a server. However, it may be saturated, in which
+ * case we don't want to reconsider it for now, so we'll simply
+ * skip it. Same if it's the server we try to avoid, in which
+ * case we simply remember it for later use if needed.
+ */
+ s = eb32_entry(node, struct tree_occ, node)->server;
+ if (!s->maxconn || (!s->nbpend && s->served < srv_dynamic_maxconn(s))) {
+ if (s != srvtoavoid) {
+ srv = s;
+ break;
+ }
+ avoided = s;
+ avoided_node = node;
+ }
+ } while (node != stop);
+
+ if (!srv) {
+ srv = avoided;
+ p->lbprm.chash.last = avoided_node;
+ }
+
+ return srv;
+}
+
+/* This function is responsible for building the active and backup trees for
+ * consistent hashing. The servers receive an array of initialized nodes
+ * with their assigned keys. It also sets p->lbprm.wdiv to the eweight to
+ * uweight ratio.
+ */
+void chash_init_server_tree(struct proxy *p)
+{
+ struct server *srv;
+ struct eb_root init_head = EB_ROOT;
+ int node;
+
+ p->lbprm.set_server_status_up = chash_set_server_status_up;
+ p->lbprm.set_server_status_down = chash_set_server_status_down;
+ p->lbprm.update_server_eweight = chash_update_server_weight;
+ p->lbprm.server_take_conn = NULL;
+ p->lbprm.server_drop_conn = NULL;
+
+ p->lbprm.wdiv = BE_WEIGHT_SCALE;
+ for (srv = p->srv; srv; srv = srv->next) {
+ srv->eweight = (srv->uweight * p->lbprm.wdiv + p->lbprm.wmult - 1) / p->lbprm.wmult;
+ srv_lb_commit_status(srv);
+ }
+
+ recount_servers(p);
+ update_backend_weight(p);
+
+ p->lbprm.chash.act = init_head;
+ p->lbprm.chash.bck = init_head;
+ p->lbprm.chash.last = NULL;
+
+ /* queue active and backup servers in two distinct groups */
+ for (srv = p->srv; srv; srv = srv->next) {
+ srv->lb_tree = (srv->flags & SRV_F_BACKUP) ? &p->lbprm.chash.bck : &p->lbprm.chash.act;
+ srv->lb_nodes_tot = srv->uweight * BE_WEIGHT_SCALE;
+ srv->lb_nodes_now = 0;
+ srv->lb_nodes = (struct tree_occ *)calloc(srv->lb_nodes_tot, sizeof(struct tree_occ));
+
+ for (node = 0; node < srv->lb_nodes_tot; node++) {
+ srv->lb_nodes[node].server = srv;
+ srv->lb_nodes[node].node.key = full_hash(srv->puid * SRV_EWGHT_RANGE + node);
+ }
+
+ if (srv_is_usable(srv))
+ chash_queue_dequeue_srv(srv);
+ }
+}
--- /dev/null
+/*
+ * First Available Server load balancing algorithm.
+ *
+ * This file implements an algorithm which emerged during a discussion with
+ * Steen Larsen, initially inspired by Anshul Gandhi et al.'s work, now
+ * described as "packing" in section 3.5:
+ *
+ * http://reports-archive.adm.cs.cmu.edu/anon/2012/CMU-CS-12-109.pdf
+ *
+ * Copyright 2000-2012 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/debug.h>
+#include <eb32tree.h>
+
+#include <types/global.h>
+#include <types/server.h>
+
+#include <proto/backend.h>
+#include <proto/queue.h>
+
+
+/* Remove a server from a tree. It must have previously been dequeued. This
+ * function is meant to be called when a server is going down or has its
+ * weight disabled.
+ */
+static inline void fas_remove_from_tree(struct server *s)
+{
+ s->lb_tree = NULL;
+}
+
+/* simply removes a server from a tree */
+static inline void fas_dequeue_srv(struct server *s)
+{
+ eb32_delete(&s->lb_node);
+}
+
+/* Queue a server in its associated tree, assuming the weight is >0.
+ * Servers are sorted by unique ID so that we send all connections to the first
+ * available server in declaration order (or ID order) until its maxconn is
+ * reached. It is important to understand that the server weight is not used
+ * here.
+ */
+static inline void fas_queue_srv(struct server *s)
+{
+ s->lb_node.key = s->puid;
+ eb32_insert(s->lb_tree, &s->lb_node);
+}
+
+/* Re-position the server in the FS tree after it has been assigned one
+ * connection or after it has released one. Note that it is possible that
+ * the server has been moved out of the tree due to failed health-checks.
+ */
+static void fas_srv_reposition(struct server *s)
+{
+ if (!s->lb_tree)
+ return;
+ fas_dequeue_srv(s);
+ fas_queue_srv(s);
+}
+
+/* This function updates the server trees according to server <srv>'s new
+ * state. It should be called when server <srv>'s status changes to down.
+ * It is not important whether the server was already down or not. It is not
+ * important either that the new state is completely down (the caller may not
+ * know all the variables of a server's state).
+ */
+static void fas_set_server_status_down(struct server *srv)
+{
+ struct proxy *p = srv->proxy;
+
+ if (!srv_lb_status_changed(srv))
+ return;
+
+ if (srv_is_usable(srv))
+ goto out_update_state;
+
+ if (!srv_was_usable(srv))
+ /* server was already down */
+ goto out_update_backend;
+
+ if (srv->flags & SRV_F_BACKUP) {
+ p->lbprm.tot_wbck -= srv->prev_eweight;
+ p->srv_bck--;
+
+ if (srv == p->lbprm.fbck) {
+ /* we lost the first backup server in a single-backup
+ * configuration, we must search another one.
+ */
+ struct server *srv2 = p->lbprm.fbck;
+ do {
+ srv2 = srv2->next;
+ } while (srv2 &&
+ !((srv2->flags & SRV_F_BACKUP) &&
+ srv_is_usable(srv2)));
+ p->lbprm.fbck = srv2;
+ }
+ } else {
+ p->lbprm.tot_wact -= srv->prev_eweight;
+ p->srv_act--;
+ }
+
+ fas_dequeue_srv(srv);
+ fas_remove_from_tree(srv);
+
+ out_update_backend:
+ /* check/update tot_used, tot_weight */
+ update_backend_weight(p);
+ out_update_state:
+ srv_lb_commit_status(srv);
+}
+
+/* This function updates the server trees according to server <srv>'s new
+ * state. It should be called when server <srv>'s status changes to up.
+ * It is not important whether the server was already down or not. It is not
+ * important either that the new state is completely UP (the caller may not
+ * know all the variables of a server's state). This function will not change
+ * the weight of a server which was already up.
+ */
+static void fas_set_server_status_up(struct server *srv)
+{
+ struct proxy *p = srv->proxy;
+
+ if (!srv_lb_status_changed(srv))
+ return;
+
+ if (!srv_is_usable(srv))
+ goto out_update_state;
+
+ if (srv_was_usable(srv))
+ /* server was already up */
+ goto out_update_backend;
+
+ if (srv->flags & SRV_F_BACKUP) {
+ srv->lb_tree = &p->lbprm.fas.bck;
+ p->lbprm.tot_wbck += srv->eweight;
+ p->srv_bck++;
+
+ if (!(p->options & PR_O_USE_ALL_BK)) {
+ if (!p->lbprm.fbck) {
+ /* there was no backup server anymore */
+ p->lbprm.fbck = srv;
+ } else {
+ /* we may have restored a backup server prior to fbck,
+ * in which case it should replace it.
+ */
+ struct server *srv2 = srv;
+ do {
+ srv2 = srv2->next;
+ } while (srv2 && (srv2 != p->lbprm.fbck));
+ if (srv2)
+ p->lbprm.fbck = srv;
+ }
+ }
+ } else {
+ srv->lb_tree = &p->lbprm.fas.act;
+ p->lbprm.tot_wact += srv->eweight;
+ p->srv_act++;
+ }
+
+ /* note that eweight cannot be 0 here */
+ fas_queue_srv(srv);
+
+ out_update_backend:
+ /* check/update tot_used, tot_weight */
+ update_backend_weight(p);
+ out_update_state:
+ srv_lb_commit_status(srv);
+}
+
+/* This function must be called after an update to server <srv>'s effective
+ * weight. It may be called after a state change too.
+ */
+static void fas_update_server_weight(struct server *srv)
+{
+ int old_state, new_state;
+ struct proxy *p = srv->proxy;
+
+ if (!srv_lb_status_changed(srv))
+ return;
+
+ /* If changing the server's weight changes its state, we simply apply
+ * the procedures we already have for status change. If the state
+ * remains down, the server is not in any tree, so it's as easy as
+ * updating its values. If the state remains up with different weights,
+ * there are some computations to perform to find a new place and
+ * possibly a new tree for this server.
+ */
+
+ old_state = srv_was_usable(srv);
+ new_state = srv_is_usable(srv);
+
+ if (!old_state && !new_state) {
+ srv_lb_commit_status(srv);
+ return;
+ }
+ else if (!old_state && new_state) {
+ fas_set_server_status_up(srv);
+ return;
+ }
+ else if (old_state && !new_state) {
+ fas_set_server_status_down(srv);
+ return;
+ }
+
+ if (srv->lb_tree)
+ fas_dequeue_srv(srv);
+
+ if (srv->flags & SRV_F_BACKUP) {
+ p->lbprm.tot_wbck += srv->eweight - srv->prev_eweight;
+ srv->lb_tree = &p->lbprm.fas.bck;
+ } else {
+ p->lbprm.tot_wact += srv->eweight - srv->prev_eweight;
+ srv->lb_tree = &p->lbprm.fas.act;
+ }
+
+ fas_queue_srv(srv);
+
+ update_backend_weight(p);
+ srv_lb_commit_status(srv);
+}
+
+/* This function is responsible for building the trees for the First Available
+ * Server algorithm. It also sets p->lbprm.wdiv to the eweight to
+ * uweight ratio. Both active and backup groups are initialized.
+ */
+void fas_init_server_tree(struct proxy *p)
+{
+ struct server *srv;
+ struct eb_root init_head = EB_ROOT;
+
+ p->lbprm.set_server_status_up = fas_set_server_status_up;
+ p->lbprm.set_server_status_down = fas_set_server_status_down;
+ p->lbprm.update_server_eweight = fas_update_server_weight;
+ p->lbprm.server_take_conn = fas_srv_reposition;
+ p->lbprm.server_drop_conn = fas_srv_reposition;
+
+ p->lbprm.wdiv = BE_WEIGHT_SCALE;
+ for (srv = p->srv; srv; srv = srv->next) {
+ srv->eweight = (srv->uweight * p->lbprm.wdiv + p->lbprm.wmult - 1) / p->lbprm.wmult;
+ srv_lb_commit_status(srv);
+ }
+
+ recount_servers(p);
+ update_backend_weight(p);
+
+ p->lbprm.fas.act = init_head;
+ p->lbprm.fas.bck = init_head;
+
+ /* queue active and backup servers in two distinct groups */
+ for (srv = p->srv; srv; srv = srv->next) {
+ if (!srv_is_usable(srv))
+ continue;
+ srv->lb_tree = (srv->flags & SRV_F_BACKUP) ? &p->lbprm.fas.bck : &p->lbprm.fas.act;
+ fas_queue_srv(srv);
+ }
+}
+
+/* Return next server from the FS tree in backend <p>. If the tree is empty,
+ * return NULL. Saturated servers are skipped.
+ */
+struct server *fas_get_next_server(struct proxy *p, struct server *srvtoavoid)
+{
+ struct server *srv, *avoided;
+ struct eb32_node *node;
+
+ srv = avoided = NULL;
+
+ if (p->srv_act)
+ node = eb32_first(&p->lbprm.fas.act);
+ else if (p->lbprm.fbck)
+ return p->lbprm.fbck;
+ else if (p->srv_bck)
+ node = eb32_first(&p->lbprm.fas.bck);
+ else
+ return NULL;
+
+ while (node) {
+ /* OK, we have a server. However, it may be saturated, in which
+ * case we don't want to reconsider it for now, so we'll simply
+ * skip it. Same if it's the server we try to avoid, in which
+ * case we simply remember it for later use if needed.
+ */
+ struct server *s;
+
+ s = eb32_entry(node, struct server, lb_node);
+ if (!s->maxconn || (!s->nbpend && s->served < srv_dynamic_maxconn(s))) {
+ if (s != srvtoavoid) {
+ srv = s;
+ break;
+ }
+ avoided = s;
+ }
+ node = eb32_next(node);
+ }
+
+ if (!srv)
+ srv = avoided;
+
+ return srv;
+}
+
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Fast Weighted Least Connection load balancing algorithm.
+ *
+ * Copyright 2000-2009 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/debug.h>
+#include <eb32tree.h>
+
+#include <types/global.h>
+#include <types/server.h>
+
+#include <proto/backend.h>
+#include <proto/queue.h>
+
+
+/* Remove a server from a tree. It must have previously been dequeued. This
+ * function is meant to be called when a server is going down or has its
+ * weight disabled.
+ */
+static inline void fwlc_remove_from_tree(struct server *s)
+{
+ s->lb_tree = NULL;
+}
+
+/* simply removes a server from a tree */
+static inline void fwlc_dequeue_srv(struct server *s)
+{
+ eb32_delete(&s->lb_node);
+}
+
+/* Queue a server in its associated tree, assuming the weight is >0.
+ * Servers are sorted by #conns/weight. To ensure maximum accuracy,
+ * we use #conns*SRV_EWGHT_MAX/eweight as the sorting key.
+ */
+static inline void fwlc_queue_srv(struct server *s)
+{
+ s->lb_node.key = s->served * SRV_EWGHT_MAX / s->eweight;
+ eb32_insert(s->lb_tree, &s->lb_node);
+}
+
+/* Re-position the server in the FWLC tree after it has been assigned one
+ * connection or after it has released one. Note that it is possible that
+ * the server has been moved out of the tree due to failed health-checks.
+ */
+static void fwlc_srv_reposition(struct server *s)
+{
+ if (!s->lb_tree)
+ return;
+ fwlc_dequeue_srv(s);
+ fwlc_queue_srv(s);
+}
+
+/* This function updates the server trees according to server <srv>'s new
+ * state. It should be called when server <srv>'s status changes to down.
+ * It is not important whether the server was already down or not. It is not
+ * important either that the new state is completely down (the caller may not
+ * know all the variables of a server's state).
+ */
+static void fwlc_set_server_status_down(struct server *srv)
+{
+ struct proxy *p = srv->proxy;
+
+ if (!srv_lb_status_changed(srv))
+ return;
+
+ if (srv_is_usable(srv))
+ goto out_update_state;
+
+ if (!srv_was_usable(srv))
+ /* server was already down */
+ goto out_update_backend;
+
+ if (srv->flags & SRV_F_BACKUP) {
+ p->lbprm.tot_wbck -= srv->prev_eweight;
+ p->srv_bck--;
+
+ if (srv == p->lbprm.fbck) {
+ /* we lost the first backup server in a single-backup
+ * configuration, we must search another one.
+ */
+ struct server *srv2 = p->lbprm.fbck;
+ do {
+ srv2 = srv2->next;
+ } while (srv2 &&
+ !((srv2->flags & SRV_F_BACKUP) &&
+ srv_is_usable(srv2)));
+ p->lbprm.fbck = srv2;
+ }
+ } else {
+ p->lbprm.tot_wact -= srv->prev_eweight;
+ p->srv_act--;
+ }
+
+ fwlc_dequeue_srv(srv);
+ fwlc_remove_from_tree(srv);
+
+ out_update_backend:
+ /* check/update tot_used, tot_weight */
+ update_backend_weight(p);
+ out_update_state:
+ srv_lb_commit_status(srv);
+}
+
+/* This function updates the server trees according to server <srv>'s new
+ * state. It should be called when server <srv>'s status changes to up.
+ * It is not important whether the server was already down or not. It is not
+ * important either that the new state is completely UP (the caller may not
+ * know all the variables of a server's state). This function will not change
+ * the weight of a server which was already up.
+ */
+static void fwlc_set_server_status_up(struct server *srv)
+{
+ struct proxy *p = srv->proxy;
+
+ if (!srv_lb_status_changed(srv))
+ return;
+
+ if (!srv_is_usable(srv))
+ goto out_update_state;
+
+ if (srv_was_usable(srv))
+ /* server was already up */
+ goto out_update_backend;
+
+ if (srv->flags & SRV_F_BACKUP) {
+ srv->lb_tree = &p->lbprm.fwlc.bck;
+ p->lbprm.tot_wbck += srv->eweight;
+ p->srv_bck++;
+
+ if (!(p->options & PR_O_USE_ALL_BK)) {
+ if (!p->lbprm.fbck) {
+ /* there was no backup server anymore */
+ p->lbprm.fbck = srv;
+ } else {
+ /* we may have restored a backup server prior to fbck,
+ * in which case it should replace it.
+ */
+ struct server *srv2 = srv;
+ do {
+ srv2 = srv2->next;
+ } while (srv2 && (srv2 != p->lbprm.fbck));
+ if (srv2)
+ p->lbprm.fbck = srv;
+ }
+ }
+ } else {
+ srv->lb_tree = &p->lbprm.fwlc.act;
+ p->lbprm.tot_wact += srv->eweight;
+ p->srv_act++;
+ }
+
+ /* note that eweight cannot be 0 here */
+ fwlc_queue_srv(srv);
+
+ out_update_backend:
+ /* check/update tot_used, tot_weight */
+ update_backend_weight(p);
+ out_update_state:
+ srv_lb_commit_status(srv);
+}
+
+/* This function must be called after an update to server <srv>'s effective
+ * weight. It may be called after a state change too.
+ */
+static void fwlc_update_server_weight(struct server *srv)
+{
+ int old_state, new_state;
+ struct proxy *p = srv->proxy;
+
+ if (!srv_lb_status_changed(srv))
+ return;
+
+ /* If changing the server's weight changes its state, we simply apply
+ * the procedures we already have for status change. If the state
+ * remains down, the server is not in any tree, so it's as easy as
+ * updating its values. If the state remains up with different weights,
+ * there are some computations to perform to find a new place and
+ * possibly a new tree for this server.
+ */
+
+ old_state = srv_was_usable(srv);
+ new_state = srv_is_usable(srv);
+
+ if (!old_state && !new_state) {
+ srv_lb_commit_status(srv);
+ return;
+ }
+ else if (!old_state && new_state) {
+ fwlc_set_server_status_up(srv);
+ return;
+ }
+ else if (old_state && !new_state) {
+ fwlc_set_server_status_down(srv);
+ return;
+ }
+
+ if (srv->lb_tree)
+ fwlc_dequeue_srv(srv);
+
+ if (srv->flags & SRV_F_BACKUP) {
+ p->lbprm.tot_wbck += srv->eweight - srv->prev_eweight;
+ srv->lb_tree = &p->lbprm.fwlc.bck;
+ } else {
+ p->lbprm.tot_wact += srv->eweight - srv->prev_eweight;
+ srv->lb_tree = &p->lbprm.fwlc.act;
+ }
+
+ fwlc_queue_srv(srv);
+
+ update_backend_weight(p);
+ srv_lb_commit_status(srv);
+}
+
+/* This function is responsible for building the trees in case of fast
+ * weighted least-conns. It also sets p->lbprm.wdiv to the eweight to
+ * uweight ratio. Both active and backup groups are initialized.
+ */
+void fwlc_init_server_tree(struct proxy *p)
+{
+ struct server *srv;
+ struct eb_root init_head = EB_ROOT;
+
+ p->lbprm.set_server_status_up = fwlc_set_server_status_up;
+ p->lbprm.set_server_status_down = fwlc_set_server_status_down;
+ p->lbprm.update_server_eweight = fwlc_update_server_weight;
+ p->lbprm.server_take_conn = fwlc_srv_reposition;
+ p->lbprm.server_drop_conn = fwlc_srv_reposition;
+
+ p->lbprm.wdiv = BE_WEIGHT_SCALE;
+ for (srv = p->srv; srv; srv = srv->next) {
+ srv->eweight = (srv->uweight * p->lbprm.wdiv + p->lbprm.wmult - 1) / p->lbprm.wmult;
+ srv_lb_commit_status(srv);
+ }
+
+ recount_servers(p);
+ update_backend_weight(p);
+
+ p->lbprm.fwlc.act = init_head;
+ p->lbprm.fwlc.bck = init_head;
+
+ /* queue active and backup servers in two distinct groups */
+ for (srv = p->srv; srv; srv = srv->next) {
+ if (!srv_is_usable(srv))
+ continue;
+ srv->lb_tree = (srv->flags & SRV_F_BACKUP) ? &p->lbprm.fwlc.bck : &p->lbprm.fwlc.act;
+ fwlc_queue_srv(srv);
+ }
+}
+
+/* Return next server from the FWLC tree in backend <p>. If the tree is empty,
+ * return NULL. Saturated servers are skipped.
+ */
+struct server *fwlc_get_next_server(struct proxy *p, struct server *srvtoavoid)
+{
+ struct server *srv, *avoided;
+ struct eb32_node *node;
+
+ srv = avoided = NULL;
+
+ if (p->srv_act)
+ node = eb32_first(&p->lbprm.fwlc.act);
+ else if (p->lbprm.fbck)
+ return p->lbprm.fbck;
+ else if (p->srv_bck)
+ node = eb32_first(&p->lbprm.fwlc.bck);
+ else
+ return NULL;
+
+ while (node) {
+ /* OK, we have a server. However, it may be saturated, in which
+ * case we don't want to reconsider it for now, so we'll simply
+ * skip it. Same if it's the server we try to avoid, in which
+ * case we simply remember it for later use if needed.
+ */
+ struct server *s;
+
+ s = eb32_entry(node, struct server, lb_node);
+ if (!s->maxconn || (!s->nbpend && s->served < srv_dynamic_maxconn(s))) {
+ if (s != srvtoavoid) {
+ srv = s;
+ break;
+ }
+ avoided = s;
+ }
+ node = eb32_next(node);
+ }
+
+ if (!srv)
+ srv = avoided;
+
+ return srv;
+}
+
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Fast Weighted Round Robin load balancing algorithm.
+ *
+ * Copyright 2000-2009 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/debug.h>
+#include <eb32tree.h>
+
+#include <types/global.h>
+#include <types/server.h>
+
+#include <proto/backend.h>
+#include <proto/queue.h>
+
+static inline void fwrr_remove_from_tree(struct server *s);
+static inline void fwrr_queue_by_weight(struct eb_root *root, struct server *s);
+static inline void fwrr_dequeue_srv(struct server *s);
+static void fwrr_get_srv(struct server *s);
+static void fwrr_queue_srv(struct server *s);
+
+
+/* This function updates the server trees according to server <srv>'s new
+ * state. It should be called when server <srv>'s status changes to down.
+ * It is not important whether the server was already down or not. It is not
+ * important either that the new state is completely down (the caller may not
+ * know all the variables of a server's state).
+ */
+static void fwrr_set_server_status_down(struct server *srv)
+{
+ struct proxy *p = srv->proxy;
+ struct fwrr_group *grp;
+
+ if (!srv_lb_status_changed(srv))
+ return;
+
+ if (srv_is_usable(srv))
+ goto out_update_state;
+
+ if (!srv_was_usable(srv))
+ /* server was already down */
+ goto out_update_backend;
+
+ grp = (srv->flags & SRV_F_BACKUP) ? &p->lbprm.fwrr.bck : &p->lbprm.fwrr.act;
+ grp->next_weight -= srv->prev_eweight;
+
+ if (srv->flags & SRV_F_BACKUP) {
+ p->lbprm.tot_wbck = p->lbprm.fwrr.bck.next_weight;
+ p->srv_bck--;
+
+ if (srv == p->lbprm.fbck) {
+ /* we lost the first backup server in a single-backup
+ * configuration, we must search another one.
+ */
+ struct server *srv2 = p->lbprm.fbck;
+ do {
+ srv2 = srv2->next;
+ } while (srv2 &&
+ !((srv2->flags & SRV_F_BACKUP) &&
+ srv_is_usable(srv2)));
+ p->lbprm.fbck = srv2;
+ }
+ } else {
+ p->lbprm.tot_wact = p->lbprm.fwrr.act.next_weight;
+ p->srv_act--;
+ }
+
+ fwrr_dequeue_srv(srv);
+ fwrr_remove_from_tree(srv);
+
+ out_update_backend:
+ /* check/update tot_used, tot_weight */
+ update_backend_weight(p);
+ out_update_state:
+ srv_lb_commit_status(srv);
+}
+
+/* This function updates the server trees according to server <srv>'s new
+ * state. It should be called when server <srv>'s status changes to up.
+ * It is not important whether the server was already down or not. It is not
+ * important either that the new state is completely UP (the caller may not
+ * know all the variables of a server's state). This function will not change
+ * the weight of a server which was already up.
+ */
+static void fwrr_set_server_status_up(struct server *srv)
+{
+ struct proxy *p = srv->proxy;
+ struct fwrr_group *grp;
+
+ if (!srv_lb_status_changed(srv))
+ return;
+
+ if (!srv_is_usable(srv))
+ goto out_update_state;
+
+ if (srv_was_usable(srv))
+ /* server was already up */
+ goto out_update_backend;
+
+ grp = (srv->flags & SRV_F_BACKUP) ? &p->lbprm.fwrr.bck : &p->lbprm.fwrr.act;
+ grp->next_weight += srv->eweight;
+
+ if (srv->flags & SRV_F_BACKUP) {
+ p->lbprm.tot_wbck = p->lbprm.fwrr.bck.next_weight;
+ p->srv_bck++;
+
+ if (!(p->options & PR_O_USE_ALL_BK)) {
+ if (!p->lbprm.fbck) {
+ /* there was no backup server anymore */
+ p->lbprm.fbck = srv;
+ } else {
+ /* we may have restored a backup server prior to fbck,
+ * in which case it should replace it.
+ */
+ struct server *srv2 = srv;
+ do {
+ srv2 = srv2->next;
+ } while (srv2 && (srv2 != p->lbprm.fbck));
+ if (srv2)
+ p->lbprm.fbck = srv;
+ }
+ }
+ } else {
+ p->lbprm.tot_wact = p->lbprm.fwrr.act.next_weight;
+ p->srv_act++;
+ }
+
+ /* note that eweight cannot be 0 here */
+ fwrr_get_srv(srv);
+ srv->npos = grp->curr_pos + (grp->next_weight + grp->curr_weight - grp->curr_pos) / srv->eweight;
+ fwrr_queue_srv(srv);
+
+ out_update_backend:
+ /* check/update tot_used, tot_weight */
+ update_backend_weight(p);
+ out_update_state:
+ srv_lb_commit_status(srv);
+}
+
+/* This function must be called after an update to server <srv>'s effective
+ * weight. It may be called after a state change too.
+ */
+static void fwrr_update_server_weight(struct server *srv)
+{
+ int old_state, new_state;
+ struct proxy *p = srv->proxy;
+ struct fwrr_group *grp;
+
+ if (!srv_lb_status_changed(srv))
+ return;
+
+ /* If changing the server's weight changes its state, we simply apply
+ * the procedures we already have for status change. If the state
+ * remains down, the server is not in any tree, so it's as easy as
+ * updating its values. If the state remains up with different weights,
+ * there are some computations to perform to find a new place and
+ * possibly a new tree for this server.
+ */
+
+ old_state = srv_was_usable(srv);
+ new_state = srv_is_usable(srv);
+
+ if (!old_state && !new_state) {
+ srv_lb_commit_status(srv);
+ return;
+ }
+ else if (!old_state && new_state) {
+ fwrr_set_server_status_up(srv);
+ return;
+ }
+ else if (old_state && !new_state) {
+ fwrr_set_server_status_down(srv);
+ return;
+ }
+
+ grp = (srv->flags & SRV_F_BACKUP) ? &p->lbprm.fwrr.bck : &p->lbprm.fwrr.act;
+ grp->next_weight = grp->next_weight - srv->prev_eweight + srv->eweight;
+
+ p->lbprm.tot_wact = p->lbprm.fwrr.act.next_weight;
+ p->lbprm.tot_wbck = p->lbprm.fwrr.bck.next_weight;
+
+ if (srv->lb_tree == grp->init) {
+ fwrr_dequeue_srv(srv);
+ fwrr_queue_by_weight(grp->init, srv);
+ }
+ else if (!srv->lb_tree) {
+ /* FIXME: server was down. This is not possible right now but
+ * may be needed soon for slowstart or graceful shutdown.
+ */
+ fwrr_dequeue_srv(srv);
+ fwrr_get_srv(srv);
+ srv->npos = grp->curr_pos + (grp->next_weight + grp->curr_weight - grp->curr_pos) / srv->eweight;
+ fwrr_queue_srv(srv);
+ } else {
+ /* The server is either active or in the next queue. If it's
+ * still in the active queue and it has not consumed all of its
+ * places, let's adjust its next position.
+ */
+ fwrr_get_srv(srv);
+
+ if (srv->eweight > 0) {
+ int prev_next = srv->npos;
+ int step = grp->next_weight / srv->eweight;
+
+ srv->npos = srv->lpos + step;
+ srv->rweight = 0;
+
+ if (srv->npos > prev_next)
+ srv->npos = prev_next;
+ if (srv->npos < grp->curr_pos + 2)
+ srv->npos = grp->curr_pos + step;
+ } else {
+ /* push it into the next tree */
+ srv->npos = grp->curr_pos + grp->curr_weight;
+ }
+
+ fwrr_dequeue_srv(srv);
+ fwrr_queue_srv(srv);
+ }
+
+ update_backend_weight(p);
+ srv_lb_commit_status(srv);
+}
+
+/* Remove a server from a tree. It must have previously been dequeued. This
+ * function is meant to be called when a server is going down or has its
+ * weight disabled.
+ */
+static inline void fwrr_remove_from_tree(struct server *s)
+{
+ s->lb_tree = NULL;
+}
+
+/* Queue a server in the weight tree <root>, assuming the weight is >0.
+ * We want to sort them by inverted weights, because we need to place
+ * heavy servers first in order to get a smooth distribution.
+ */
+static inline void fwrr_queue_by_weight(struct eb_root *root, struct server *s)
+{
+ s->lb_node.key = SRV_EWGHT_MAX - s->eweight;
+ eb32_insert(root, &s->lb_node);
+ s->lb_tree = root;
+}
+
+/* This function is responsible for building the weight trees in case of fast
+ * weighted round-robin. It also sets p->lbprm.wdiv to the eweight to uweight
+ * ratio. Both active and backup groups are initialized.
+ */
+void fwrr_init_server_groups(struct proxy *p)
+{
+ struct server *srv;
+ struct eb_root init_head = EB_ROOT;
+
+ p->lbprm.set_server_status_up = fwrr_set_server_status_up;
+ p->lbprm.set_server_status_down = fwrr_set_server_status_down;
+ p->lbprm.update_server_eweight = fwrr_update_server_weight;
+
+ p->lbprm.wdiv = BE_WEIGHT_SCALE;
+ for (srv = p->srv; srv; srv = srv->next) {
+ srv->eweight = (srv->uweight * p->lbprm.wdiv + p->lbprm.wmult - 1) / p->lbprm.wmult;
+ srv_lb_commit_status(srv);
+ }
+
+ recount_servers(p);
+ update_backend_weight(p);
+
+ /* prepare the active servers group */
+ p->lbprm.fwrr.act.curr_pos = p->lbprm.fwrr.act.curr_weight =
+ p->lbprm.fwrr.act.next_weight = p->lbprm.tot_wact;
+ p->lbprm.fwrr.act.curr = p->lbprm.fwrr.act.t0 =
+ p->lbprm.fwrr.act.t1 = init_head;
+ p->lbprm.fwrr.act.init = &p->lbprm.fwrr.act.t0;
+ p->lbprm.fwrr.act.next = &p->lbprm.fwrr.act.t1;
+
+ /* prepare the backup servers group */
+ p->lbprm.fwrr.bck.curr_pos = p->lbprm.fwrr.bck.curr_weight =
+ p->lbprm.fwrr.bck.next_weight = p->lbprm.tot_wbck;
+ p->lbprm.fwrr.bck.curr = p->lbprm.fwrr.bck.t0 =
+ p->lbprm.fwrr.bck.t1 = init_head;
+ p->lbprm.fwrr.bck.init = &p->lbprm.fwrr.bck.t0;
+ p->lbprm.fwrr.bck.next = &p->lbprm.fwrr.bck.t1;
+
+ /* queue active and backup servers in two distinct groups */
+ for (srv = p->srv; srv; srv = srv->next) {
+ if (!srv_is_usable(srv))
+ continue;
+ fwrr_queue_by_weight((srv->flags & SRV_F_BACKUP) ?
+ p->lbprm.fwrr.bck.init :
+ p->lbprm.fwrr.act.init,
+ srv);
+ }
+}
+
+/* simply removes a server from a weight tree */
+static inline void fwrr_dequeue_srv(struct server *s)
+{
+ eb32_delete(&s->lb_node);
+}
+
+/* queues a server into the appropriate group and tree depending on its
+ * backup status, and ->npos. If the server is disabled, simply assign
+ * it to the NULL tree.
+ */
+static void fwrr_queue_srv(struct server *s)
+{
+ struct proxy *p = s->proxy;
+ struct fwrr_group *grp;
+
+ grp = (s->flags & SRV_F_BACKUP) ? &p->lbprm.fwrr.bck : &p->lbprm.fwrr.act;
+
+ /* Delay everything which does not fit into the window and everything
+ * which does not fit into the theoretical new window.
+ */
+ if (!srv_is_usable(s)) {
+ fwrr_remove_from_tree(s);
+ }
+ else if (s->eweight <= 0 ||
+ s->npos >= 2 * grp->curr_weight ||
+ s->npos >= grp->curr_weight + grp->next_weight) {
+ /* put into next tree, and readjust npos in case we could
+ * finally take this back to current. */
+ s->npos -= grp->curr_weight;
+ fwrr_queue_by_weight(grp->next, s);
+ }
+ else {
+ /* The sorting key is stored in units of s->npos * user_weight
+ * in order to avoid overflows. As stated in backend.h, the
+ * lower the scale, the rougher the weights modulation, and the
+ * higher the scale, the lower the number of servers without
+ * overflow. With this formula, the result is always positive,
+ * so we can use eb32_insert().
+ */
+ s->lb_node.key = SRV_UWGHT_RANGE * s->npos +
+ (unsigned)(SRV_EWGHT_MAX + s->rweight - s->eweight) / BE_WEIGHT_SCALE;
+
+ eb32_insert(&grp->curr, &s->lb_node);
+ s->lb_tree = &grp->curr;
+ }
+}
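+
+/* Numeric sketch of the key computed above (figures are illustrative only,
+ * assuming BE_WEIGHT_SCALE == 16): two servers both at npos == 3 with
+ * rweight == 0, one with eweight 32 and one with eweight 16, get low-order
+ * parts (SRV_EWGHT_MAX - 32) / 16 and (SRV_EWGHT_MAX - 16) / 16
+ * respectively, so the heavier server's key is smaller by one and it is
+ * dequeued first at equal positions.
+ */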
+
+/* prepares a server when extracting it from the "init" tree */
+static inline void fwrr_get_srv_init(struct server *s)
+{
+ s->npos = s->rweight = 0;
+}
+
+/* prepares a server when extracting it from the "next" tree */
+static inline void fwrr_get_srv_next(struct server *s)
+{
+ struct fwrr_group *grp = (s->flags & SRV_F_BACKUP) ?
+ &s->proxy->lbprm.fwrr.bck :
+ &s->proxy->lbprm.fwrr.act;
+
+ s->npos += grp->curr_weight;
+}
+
+/* prepares a server when it was marked down */
+static inline void fwrr_get_srv_down(struct server *s)
+{
+ struct fwrr_group *grp = (s->flags & SRV_F_BACKUP) ?
+ &s->proxy->lbprm.fwrr.bck :
+ &s->proxy->lbprm.fwrr.act;
+
+ s->npos = grp->curr_pos;
+}
+
+/* prepares a server when extracting it from its tree */
+static void fwrr_get_srv(struct server *s)
+{
+ struct proxy *p = s->proxy;
+ struct fwrr_group *grp = (s->flags & SRV_F_BACKUP) ?
+ &p->lbprm.fwrr.bck :
+ &p->lbprm.fwrr.act;
+
+ if (s->lb_tree == grp->init) {
+ fwrr_get_srv_init(s);
+ }
+ else if (s->lb_tree == grp->next) {
+ fwrr_get_srv_next(s);
+ }
+ else if (s->lb_tree == NULL) {
+ fwrr_get_srv_down(s);
+ }
+}
+
+/* switches trees "init" and "next" for FWRR group <grp>. "init" should be empty
+ * when this happens, and "next" filled with servers sorted by weights.
+ */
+static inline void fwrr_switch_trees(struct fwrr_group *grp)
+{
+ struct eb_root *swap;
+ swap = grp->init;
+ grp->init = grp->next;
+ grp->next = swap;
+ grp->curr_weight = grp->next_weight;
+ grp->curr_pos = grp->curr_weight;
+}
+
+/* return next server from the current tree in FWRR group <grp>, or a server
+ * from the "init" tree if appropriate. If both trees are empty, return NULL.
+ */
+static struct server *fwrr_get_server_from_group(struct fwrr_group *grp)
+{
+ struct eb32_node *node;
+ struct server *s;
+
+ node = eb32_first(&grp->curr);
+ s = eb32_entry(node, struct server, lb_node);
+
+ if (!node || s->npos > grp->curr_pos) {
+ /* either we have no server left, or we have a hole */
+ struct eb32_node *node2;
+ node2 = eb32_first(grp->init);
+ if (node2) {
+ node = node2;
+ s = eb32_entry(node, struct server, lb_node);
+ fwrr_get_srv_init(s);
+ if (s->eweight == 0) /* FIXME: is it possible at all? */
+ node = NULL;
+ }
+ }
+ if (node)
+ return s;
+ else
+ return NULL;
+}
+
+/* Computes next position of server <s> in the group. It is mandatory for <s>
+ * to have a non-zero, positive eweight.
+ */
+static inline void fwrr_update_position(struct fwrr_group *grp, struct server *s)
+{
+ if (!s->npos) {
+ /* first time ever for this server */
+ s->lpos = grp->curr_pos;
+ s->npos = grp->curr_pos + grp->next_weight / s->eweight;
+ s->rweight += grp->next_weight % s->eweight;
+
+ if (s->rweight >= s->eweight) {
+ s->rweight -= s->eweight;
+ s->npos++;
+ }
+ } else {
+ s->lpos = s->npos;
+ s->npos += grp->next_weight / s->eweight;
+ s->rweight += grp->next_weight % s->eweight;
+
+ if (s->rweight >= s->eweight) {
+ s->rweight -= s->eweight;
+ s->npos++;
+ }
+ }
+}
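+
+/* Worked example for the position update above (illustrative figures): with
+ * grp->next_weight == 5 and s->eweight == 2, each call advances npos by
+ * 5 / 2 == 2 and adds 5 % 2 == 1 to rweight; every second call the remainder
+ * reaches eweight, giving npos one extra step. Over eweight calls the server
+ * therefore advances exactly next_weight positions, so rounding never skews
+ * the long-term distribution.
+ */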
+
+/* Return next server from the current tree in backend <p>, or a server from
+ * the init tree if appropriate. If both trees are empty, return NULL.
+ * Saturated servers are skipped and requeued.
+ */
+struct server *fwrr_get_next_server(struct proxy *p, struct server *srvtoavoid)
+{
+ struct server *srv, *full, *avoided;
+ struct fwrr_group *grp;
+ int switched;
+
+ if (p->srv_act)
+ grp = &p->lbprm.fwrr.act;
+ else if (p->lbprm.fbck)
+ return p->lbprm.fbck;
+ else if (p->srv_bck)
+ grp = &p->lbprm.fwrr.bck;
+ else
+ return NULL;
+
+ switched = 0;
+ avoided = NULL;
+ full = NULL; /* NULL-terminated list of saturated servers */
+ while (1) {
+ /* if we see an empty group, let's first try to collect weights
+ * which might have recently changed.
+ */
+ if (!grp->curr_weight)
+ grp->curr_pos = grp->curr_weight = grp->next_weight;
+
+ /* get first server from the "current" tree. When the end of
+ * the tree is reached, we may have to switch, but only once.
+ */
+ while (1) {
+ srv = fwrr_get_server_from_group(grp);
+ if (srv)
+ break;
+ if (switched) {
+ if (avoided) {
+ srv = avoided;
+ break;
+ }
+ goto requeue_servers;
+ }
+ switched = 1;
+ fwrr_switch_trees(grp);
+ }
+
+ /* OK, we have a server. However, it may be saturated, in which
+ * case we don't want to reconsider it for now. We'll update
+ * its position and dequeue it anyway, so that we can move it
+ * to a better place afterwards.
+ */
+ fwrr_update_position(grp, srv);
+ fwrr_dequeue_srv(srv);
+ grp->curr_pos++;
+ if (!srv->maxconn || (!srv->nbpend && srv->served < srv_dynamic_maxconn(srv))) {
+ /* make sure it is not the server we are trying to exclude... */
+ if (srv != srvtoavoid || avoided)
+ break;
+
+ avoided = srv; /* ...but remember that it was selected yet avoided */
+ }
+
+ /* the server is saturated or avoided, let's chain it for later reinsertion */
+ srv->next_full = full;
+ full = srv;
+ }
+
+ /* OK, we got the best server, let's update it */
+ fwrr_queue_srv(srv);
+
+ requeue_servers:
+ /* Requeue all extracted servers. If full==srv then it was
+ * avoided (unsuccessfully) and chained, omit it now.
+ */
+ if (unlikely(full != NULL)) {
+ if (switched) {
+ /* the tree has switched, requeue all extracted servers
+ * into "init", because their place was lost, and only
+ * their weight matters.
+ */
+ do {
+ if (likely(full != srv))
+ fwrr_queue_by_weight(grp->init, full);
+ full = full->next_full;
+ } while (full);
+ } else {
+ /* requeue all extracted servers just as if they were consumed
+ * so that they regain their expected place.
+ */
+ do {
+ if (likely(full != srv))
+ fwrr_queue_srv(full);
+ full = full->next_full;
+ } while (full);
+ }
+ }
+ return srv;
+}
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Map-based load-balancing (RR and HASH)
+ *
+ * Copyright 2000-2009 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/debug.h>
+#include <eb32tree.h>
+
+#include <types/global.h>
+#include <types/server.h>
+
+#include <proto/backend.h>
+#include <proto/proto_http.h>
+#include <proto/proto_tcp.h>
+#include <proto/queue.h>
+
+/* this function updates the map according to server <srv>'s new state */
+static void map_set_server_status_down(struct server *srv)
+{
+ struct proxy *p = srv->proxy;
+
+ if (!srv_lb_status_changed(srv))
+ return;
+
+ if (srv_is_usable(srv))
+ goto out_update_state;
+
+ /* FIXME: could be optimized since we know what changed */
+ recount_servers(p);
+ update_backend_weight(p);
+ p->lbprm.map.state |= LB_MAP_RECALC;
+ out_update_state:
+ srv_lb_commit_status(srv);
+}
+
+/* This function updates the map according to server <srv>'s new state */
+static void map_set_server_status_up(struct server *srv)
+{
+ struct proxy *p = srv->proxy;
+
+ if (!srv_lb_status_changed(srv))
+ return;
+
+ if (!srv_is_usable(srv))
+ goto out_update_state;
+
+ /* FIXME: could be optimized since we know what changed */
+ recount_servers(p);
+ update_backend_weight(p);
+ p->lbprm.map.state |= LB_MAP_RECALC;
+ out_update_state:
+ srv_lb_commit_status(srv);
+}
+
+/* This function recomputes the server map for proxy px. It relies on
+ * px->lbprm.tot_wact, tot_wbck, tot_used, tot_weight, so it must be
+ * called after recount_servers(). It also expects px->lbprm.map.srv
+ * to be allocated with the largest size needed. It updates tot_weight.
+ */
+void recalc_server_map(struct proxy *px)
+{
+ int o, tot, flag;
+ struct server *cur, *best;
+
+ switch (px->lbprm.tot_used) {
+ case 0: /* no server */
+ px->lbprm.map.state &= ~LB_MAP_RECALC;
+ return;
+ default:
+ tot = px->lbprm.tot_weight;
+ break;
+ }
+
+ /* here we *know* that we have some servers */
+ if (px->srv_act)
+ flag = 0;
+ else
+ flag = SRV_F_BACKUP;
+
+ /* this algorithm gives priority to the first server, which means that
+ * it will respect the declaration order for equivalent weights, and
+ * that whatever the weights, the first server called will always be
+ * the first declared. This is an important assumption for the backup
+ * case, where we want the first server only.
+ */
+ for (cur = px->srv; cur; cur = cur->next)
+ cur->wscore = 0;
+
+ for (o = 0; o < tot; o++) {
+ int max = 0;
+ best = NULL;
+ for (cur = px->srv; cur; cur = cur->next) {
+ if ((cur->flags & SRV_F_BACKUP) == flag &&
+ srv_is_usable(cur)) {
+ int v;
+
+ /* If we are forced to return only one server, we don't want to
+ * go further, because we would return the wrong one due to
+ * divide overflow.
+ */
+ if (tot == 1) {
+ best = cur;
+ /* note that best->wscore will be wrong but we don't care */
+ break;
+ }
+
+ cur->wscore += cur->eweight;
+ v = (cur->wscore + tot) / tot; /* result between 0 and 3 */
+ if (best == NULL || v > max) {
+ max = v;
+ best = cur;
+ }
+ }
+ }
+ px->lbprm.map.srv[o] = best;
+ best->wscore -= tot;
+ }
+ px->lbprm.map.state &= ~LB_MAP_RECALC;
+}
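+
+/* Example (illustrative): with two active servers A (eweight 2) and B
+ * (eweight 1), tot == 3 and the smoothing above yields the map { A, A, B }:
+ * A wins the first two rounds before its accumulated wscore debt lets B
+ * take the third slot.
+ */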
+
+/* This function is responsible for building the server MAP for map-based LB
+ * algorithms, allocating the map, and setting p->lbprm.wmult to the GCD of the
+ * weights if applicable. It should be called only once per proxy, at config
+ * time.
+ */
+void init_server_map(struct proxy *p)
+{
+ struct server *srv;
+ int pgcd;
+ int act, bck;
+
+ p->lbprm.set_server_status_up = map_set_server_status_up;
+ p->lbprm.set_server_status_down = map_set_server_status_down;
+ p->lbprm.update_server_eweight = NULL;
+
+ if (!p->srv)
+ return;
+
+ /* We will factor the weights to reduce the table,
+ * using Euclid's greatest common divisor algorithm.
+ * Since we may have zero weights, we have to first
+ * find a non-zero weight server.
+ */
+ pgcd = 1;
+ srv = p->srv;
+ while (srv && !srv->uweight)
+ srv = srv->next;
+
+ if (srv) {
+ pgcd = srv->uweight; /* note: cannot be zero */
+ while (pgcd > 1 && (srv = srv->next)) {
+ int w = srv->uweight;
+ while (w) {
+ int t = pgcd % w;
+ pgcd = w;
+ w = t;
+ }
+ }
+ }
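+
+ /* Example (illustrative weights): for user weights 20, 40 and 60, the
+ * loop above yields pgcd == 20, i.e. the weights factor down to 1, 2
+ * and 3, keeping the resulting map proportionally small.
+ */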
+
+ /* It is sometimes useful to know what factor to apply
+ * to the backend's effective weight to know its real
+ * weight.
+ */
+ p->lbprm.wmult = pgcd;
+
+ act = bck = 0;
+ for (srv = p->srv; srv; srv = srv->next) {
+ srv->eweight = (srv->uweight * p->lbprm.wdiv + p->lbprm.wmult - 1) / p->lbprm.wmult;
+ srv_lb_commit_status(srv);
+
+ if (srv->flags & SRV_F_BACKUP)
+ bck += srv->eweight;
+ else
+ act += srv->eweight;
+ }
+
+ /* this is the largest map we will ever need for this server list */
+ if (act < bck)
+ act = bck;
+
+ if (!act)
+ act = 1;
+
+ p->lbprm.map.srv = (struct server **)calloc(act, sizeof(struct server *));
+ /* recounts servers and their weights */
+ p->lbprm.map.state = LB_MAP_RECALC;
+ recount_servers(p);
+ update_backend_weight(p);
+ recalc_server_map(p);
+}
+
+/*
+ * This function tries to find a running server with free connection slots for
+ * the proxy <px> following the round-robin method.
+ * If any server is found, it will be returned and px->lbprm.map.rr_idx will be updated
+ * to point to the next server. If no valid server is found, NULL is returned.
+ */
+struct server *map_get_server_rr(struct proxy *px, struct server *srvtoavoid)
+{
+ int newidx, avoididx;
+ struct server *srv, *avoided;
+
+ if (px->lbprm.tot_weight == 0)
+ return NULL;
+
+ if (px->lbprm.map.state & LB_MAP_RECALC)
+ recalc_server_map(px);
+
+ if (px->lbprm.map.rr_idx < 0 || px->lbprm.map.rr_idx >= px->lbprm.tot_weight)
+ px->lbprm.map.rr_idx = 0;
+ newidx = px->lbprm.map.rr_idx;
+
+ avoided = NULL;
+ avoididx = 0; /* shut a gcc warning */
+ do {
+ srv = px->lbprm.map.srv[newidx++];
+ if (!srv->maxconn || (!srv->nbpend && srv->served < srv_dynamic_maxconn(srv))) {
+ /* make sure it is not the server we are trying to exclude... */
+ if (srv != srvtoavoid) {
+ px->lbprm.map.rr_idx = newidx;
+ return srv;
+ }
+
+ avoided = srv; /* ...but remember that it was selected yet avoided */
+ avoididx = newidx;
+ }
+ if (newidx == px->lbprm.tot_weight)
+ newidx = 0;
+ } while (newidx != px->lbprm.map.rr_idx);
+
+ if (avoided)
+ px->lbprm.map.rr_idx = avoididx;
+
+ /* return NULL or srvtoavoid if found */
+ return avoided;
+}
+
+/*
+ * This function returns the running server from the map at the location
+ * pointed to by the result of a modulo operation on <hash>. The server map may
+ * be recomputed if required before being looked up. If any server is found, it
+ * will be returned. If no valid server is found, NULL is returned.
+ */
+struct server *map_get_server_hash(struct proxy *px, unsigned int hash)
+{
+ if (px->lbprm.tot_weight == 0)
+ return NULL;
+
+ if (px->lbprm.map.state & LB_MAP_RECALC)
+ recalc_server_map(px);
+
+ return px->lbprm.map.srv[hash % px->lbprm.tot_weight];
+}
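+
+/* Example (illustrative): with a 3-slot map built from servers A, A and B,
+ * hash == 7 selects slot 7 % 3 == 1; the mapping is stable for a given hash
+ * as long as the map and tot_weight do not change.
+ */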
+
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Listener management functions.
+ *
+ * Copyright 2000-2013 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#define _GNU_SOURCE
+#include <ctype.h>
+#include <errno.h>
+#include <stdio.h>
+#include <string.h>
+#include <unistd.h>
+#include <fcntl.h>
+
+#include <common/accept4.h>
+#include <common/config.h>
+#include <common/errors.h>
+#include <common/mini-clist.h>
+#include <common/standard.h>
+#include <common/time.h>
+
+#include <types/global.h>
+#include <types/protocol.h>
+
+#include <proto/acl.h>
+#include <proto/fd.h>
+#include <proto/freq_ctr.h>
+#include <proto/log.h>
+#include <proto/sample.h>
+#include <proto/stream.h>
+#include <proto/task.h>
+
+/* List head of all known bind keywords */
+static struct bind_kw_list bind_keywords = {
+ .list = LIST_HEAD_INIT(bind_keywords.list)
+};
+
+/* This function adds the specified listener's file descriptor to the polling
+ * lists if it is in the LI_LISTEN state. The listener enters LI_READY or
+ * LI_FULL state depending on its number of connections. In daemon mode, we
+ * also support binding only the relevant processes to their respective
+ * listeners. We don't do that in debug mode however.
+ */
+void enable_listener(struct listener *listener)
+{
+ if (listener->state == LI_LISTEN) {
+ if ((global.mode & (MODE_DAEMON | MODE_SYSTEMD)) &&
+ listener->bind_conf->bind_proc &&
+ !(listener->bind_conf->bind_proc & (1UL << (relative_pid - 1)))) {
+ /* we don't want to enable this listener and don't
+ * want any fd event to reach it.
+ */
+ fd_stop_recv(listener->fd);
+ listener->state = LI_PAUSED;
+ }
+ else if (listener->nbconn < listener->maxconn) {
+ fd_want_recv(listener->fd);
+ listener->state = LI_READY;
+ }
+ else {
+ listener->state = LI_FULL;
+ }
+ }
+}
+
+/* This function removes the specified listener's file descriptor from the
+ * polling lists if it is in the LI_READY or in the LI_FULL state. The listener
+ * enters LI_LISTEN.
+ */
+void disable_listener(struct listener *listener)
+{
+ if (listener->state < LI_READY)
+ return;
+ if (listener->state == LI_READY)
+ fd_stop_recv(listener->fd);
+ if (listener->state == LI_LIMITED)
+ LIST_DEL(&listener->wait_queue);
+ listener->state = LI_LISTEN;
+}
+
+/* This function tries to temporarily disable a listener, depending on the OS
+ * capabilities. Linux unbinds the listen socket after a SHUT_RD, and ignores
+ * SHUT_WR. Solaris refuses either shutdown(). OpenBSD ignores SHUT_RD but
+ * closes upon SHUT_WR and refuses to rebind. So a common validation path
+ * involves SHUT_WR && listen && SHUT_RD. In case of success, the FD's polling
+ * is disabled. It normally returns non-zero, unless an error is reported.
+ */
+int pause_listener(struct listener *l)
+{
+ if (l->state <= LI_PAUSED)
+ return 1;
+
+ if (l->proto->pause) {
+ /* Returns < 0 in case of failure, 0 if the listener
+ * was totally stopped, or > 0 if correctly paused.
+ */
+ int ret = l->proto->pause(l);
+
+ if (ret < 0)
+ return 0;
+ else if (ret == 0)
+ return 1;
+ }
+
+ if (l->state == LI_LIMITED)
+ LIST_DEL(&l->wait_queue);
+
+ fd_stop_recv(l->fd);
+ l->state = LI_PAUSED;
+ return 1;
+}
+
+/* This function tries to resume a temporarily disabled listener. Paused, full,
+ * limited and disabled listeners are handled, which means that this function
+ * may replace enable_listener(). The resulting state will either be LI_READY
+ * or LI_FULL. Listeners bound to a different process are not woken up unless
+ * we're in foreground mode, and are ignored. If the listener was only in the
+ * assigned state, it is completely rebound; this can happen if a pause() has
+ * completely stopped it. If the resume fails, 0 is returned and an error
+ * might be displayed.
+ */
+int resume_listener(struct listener *l)
+{
+ if (l->state == LI_ASSIGNED) {
+ char msg[100];
+ int err;
+
+ err = l->proto->bind(l, msg, sizeof(msg));
+ if (err & ERR_ALERT)
+ Alert("Resuming listener: %s\n", msg);
+ else if (err & ERR_WARN)
+ Warning("Resuming listener: %s\n", msg);
+
+ if (err & (ERR_FATAL | ERR_ABORT))
+ return 0;
+ }
+
+ if (l->state < LI_PAUSED)
+ return 0;
+
+ if ((global.mode & (MODE_DAEMON | MODE_SYSTEMD)) &&
+ l->bind_conf->bind_proc &&
+ !(l->bind_conf->bind_proc & (1UL << (relative_pid - 1))))
+ return 1;
+
+ if (l->proto->sock_prot == IPPROTO_TCP &&
+ l->state == LI_PAUSED &&
+ listen(l->fd, l->backlog ? l->backlog : l->maxconn) != 0)
+ return 0;
+
+ if (l->state == LI_READY)
+ return 1;
+
+ if (l->state == LI_LIMITED)
+ LIST_DEL(&l->wait_queue);
+
+ if (l->nbconn >= l->maxconn) {
+ l->state = LI_FULL;
+ return 1;
+ }
+
+ fd_want_recv(l->fd);
+ l->state = LI_READY;
+ return 1;
+}
+
+/* Marks a ready listener as full so that the stream code tries to re-enable
+ * it upon next close() using resume_listener().
+ */
+void listener_full(struct listener *l)
+{
+ if (l->state >= LI_READY) {
+ if (l->state == LI_LIMITED)
+ LIST_DEL(&l->wait_queue);
+
+ fd_stop_recv(l->fd);
+ l->state = LI_FULL;
+ }
+}
+
+/* Marks a ready listener as limited so that we only try to re-enable it when
+ * resources are free again. It will be queued into the specified queue.
+ */
+void limit_listener(struct listener *l, struct list *list)
+{
+ if (l->state == LI_READY) {
+ LIST_ADDQ(list, &l->wait_queue);
+ fd_stop_recv(l->fd);
+ l->state = LI_LIMITED;
+ }
+}
+
+/* This function adds all of the protocol's listener's file descriptors to the
+ * polling lists when they are in the LI_LISTEN state. It is intended to be
+ * used as a protocol's generic enable_all() primitive, for use after the
+ * fork(). It puts the listeners into LI_READY or LI_FULL states depending on
+ * their number of connections. It always returns ERR_NONE.
+ */
+int enable_all_listeners(struct protocol *proto)
+{
+ struct listener *listener;
+
+ list_for_each_entry(listener, &proto->listeners, proto_list)
+ enable_listener(listener);
+ return ERR_NONE;
+}
+
+/* This function removes all of the protocol's listener's file descriptors from
+ * the polling lists when they are in the LI_READY or LI_FULL states. It is
+ * intended to be used as a protocol's generic disable_all() primitive. It puts
+ * the listeners into LI_LISTEN, and always returns ERR_NONE.
+ */
+int disable_all_listeners(struct protocol *proto)
+{
+ struct listener *listener;
+
+ list_for_each_entry(listener, &proto->listeners, proto_list)
+ disable_listener(listener);
+ return ERR_NONE;
+}
+
+/* Dequeues all of the listeners waiting for a resource in wait queue <list>. */
+void dequeue_all_listeners(struct list *list)
+{
+ struct listener *listener, *l_back;
+
+ list_for_each_entry_safe(listener, l_back, list, wait_queue) {
+ /* This cannot fail because the listeners are by definition in
+ * the LI_LIMITED state. The function also removes the entry
+ * from the queue.
+ */
+ resume_listener(listener);
+ }
+}
+
+/* This function closes the listening socket for the specified listener,
+ * provided that it's already in a listening state. The listener enters the
+ * LI_ASSIGNED state. It always returns ERR_NONE. This function is intended
+ * to be used as a generic function for standard protocols.
+ */
+int unbind_listener(struct listener *listener)
+{
+ if (listener->state == LI_READY)
+ fd_stop_recv(listener->fd);
+
+ if (listener->state == LI_LIMITED)
+ LIST_DEL(&listener->wait_queue);
+
+ if (listener->state >= LI_PAUSED) {
+ fd_delete(listener->fd);
+ listener->fd = -1;
+ listener->state = LI_ASSIGNED;
+ }
+ return ERR_NONE;
+}
+
+/* This function closes all listening sockets bound to the protocol <proto>,
+ * and the listeners end in LI_ASSIGNED state if they were higher. It does not
+ * detach them from the protocol. It always returns ERR_NONE.
+ */
+int unbind_all_listeners(struct protocol *proto)
+{
+ struct listener *listener;
+
+ list_for_each_entry(listener, &proto->listeners, proto_list)
+ unbind_listener(listener);
+ return ERR_NONE;
+}
+
+/* Delete a listener from its protocol's list of listeners. The listener's
+ * state is automatically updated from LI_ASSIGNED to LI_INIT. The protocol's
+ * number of listeners is updated. Note that the listener must have previously
+ * been unbound. This is the generic function to use to remove a listener.
+ */
+void delete_listener(struct listener *listener)
+{
+ if (listener->state != LI_ASSIGNED)
+ return;
+ listener->state = LI_INIT;
+ LIST_DEL(&listener->proto_list);
+ listener->proto->nb_listeners--;
+}
+
+/* This function is called on a read event from a listening socket, corresponding
+ * to an accept. It tries to accept as many connections as possible, and for each
+ * calls the listener's accept handler (generally the frontend's accept handler).
+ */
+void listener_accept(int fd)
+{
+ struct listener *l = fdtab[fd].owner;
+ struct proxy *p = l->frontend;
+ int max_accept = l->maxaccept ? l->maxaccept : 1;
+ int expire;
+ int cfd;
+ int ret;
+#ifdef USE_ACCEPT4
+ static int accept4_broken;
+#endif
+
+ if (unlikely(l->nbconn >= l->maxconn)) {
+ listener_full(l);
+ return;
+ }
+
+ if (!(l->options & LI_O_UNLIMITED) && global.sps_lim) {
+ int max = freq_ctr_remain(&global.sess_per_sec, global.sps_lim, 0);
+
+ if (unlikely(!max)) {
+ /* frontend accept rate limit was reached */
+ expire = tick_add(now_ms, next_event_delay(&global.sess_per_sec, global.sps_lim, 0));
+ goto wait_expire;
+ }
+
+ if (max_accept > max)
+ max_accept = max;
+ }
+
+ if (!(l->options & LI_O_UNLIMITED) && global.cps_lim) {
+ int max = freq_ctr_remain(&global.conn_per_sec, global.cps_lim, 0);
+
+ if (unlikely(!max)) {
+ /* frontend accept rate limit was reached */
+ expire = tick_add(now_ms, next_event_delay(&global.conn_per_sec, global.cps_lim, 0));
+ goto wait_expire;
+ }
+
+ if (max_accept > max)
+ max_accept = max;
+ }
+#ifdef USE_OPENSSL
+ if (!(l->options & LI_O_UNLIMITED) && global.ssl_lim && l->bind_conf && l->bind_conf->is_ssl) {
+ int max = freq_ctr_remain(&global.ssl_per_sec, global.ssl_lim, 0);
+
+ if (unlikely(!max)) {
+ /* frontend accept rate limit was reached */
+ expire = tick_add(now_ms, next_event_delay(&global.ssl_per_sec, global.ssl_lim, 0));
+ goto wait_expire;
+ }
+
+ if (max_accept > max)
+ max_accept = max;
+ }
+#endif
+ if (p && p->fe_sps_lim) {
+ int max = freq_ctr_remain(&p->fe_sess_per_sec, p->fe_sps_lim, 0);
+
+ if (unlikely(!max)) {
+ /* frontend accept rate limit was reached */
+ limit_listener(l, &p->listener_queue);
+ task_schedule(p->task, tick_add(now_ms, next_event_delay(&p->fe_sess_per_sec, p->fe_sps_lim, 0)));
+ return;
+ }
+
+ if (max_accept > max)
+ max_accept = max;
+ }
+
+ /* Note: if we fail to allocate a connection because of configured
+ * limits, we'll schedule a new attempt at worst 1 second later. If
+ * we fail due to system limits or temporary resource shortage, we
+ * try again 100ms later in the worst case.
+ */
+ while (max_accept--) {
+ struct sockaddr_storage addr;
+ socklen_t laddr = sizeof(addr);
+
+ if (unlikely(actconn >= global.maxconn) && !(l->options & LI_O_UNLIMITED)) {
+ limit_listener(l, &global_listener_queue);
+ task_schedule(global_listener_queue_task, tick_add(now_ms, 1000)); /* try again in 1 second */
+ return;
+ }
+
+ if (unlikely(p && p->feconn >= p->maxconn)) {
+ limit_listener(l, &p->listener_queue);
+ return;
+ }
+
+#ifdef USE_ACCEPT4
+ /* only call accept4() if it's known to be safe, otherwise
+ * fallback to the legacy accept() + fcntl().
+ */
+ if (unlikely(accept4_broken ||
+ ((cfd = accept4(fd, (struct sockaddr *)&addr, &laddr, SOCK_NONBLOCK)) == -1 &&
+ (errno == ENOSYS || errno == EINVAL || errno == EBADF) &&
+ (accept4_broken = 1))))
+#endif
+ if ((cfd = accept(fd, (struct sockaddr *)&addr, &laddr)) != -1)
+ fcntl(cfd, F_SETFL, O_NONBLOCK);
+
+ if (unlikely(cfd == -1)) {
+ switch (errno) {
+ case EAGAIN:
+ if (fdtab[fd].ev & FD_POLL_HUP) {
+ /* the listening socket might have been disabled in a shared
+ * process and we're a collateral victim. We'll just pause for
+ * a while in case it comes back. In the meantime, we need to
+ * clear this sticky flag.
+ */
+ fdtab[fd].ev &= ~FD_POLL_HUP;
+ goto transient_error;
+ }
+ fd_cant_recv(fd);
+ return; /* nothing more to accept */
+ case EINVAL:
+ /* might be trying to accept on a shut fd (eg: soft stop) */
+ goto transient_error;
+ case EINTR:
+ case ECONNABORTED:
+ continue;
+ case ENFILE:
+ if (p)
+ send_log(p, LOG_EMERG,
+ "Proxy %s reached system FD limit at %d. Please check system tunables.\n",
+ p->id, maxfd);
+ goto transient_error;
+ case EMFILE:
+ if (p)
+ send_log(p, LOG_EMERG,
+ "Proxy %s reached process FD limit at %d. Please check 'ulimit-n' and restart.\n",
+ p->id, maxfd);
+ goto transient_error;
+ case ENOBUFS:
+ case ENOMEM:
+ if (p)
+ send_log(p, LOG_EMERG,
+ "Proxy %s reached system memory limit at %d sockets. Please check system tunables.\n",
+ p->id, maxfd);
+ goto transient_error;
+ default:
+ /* unexpected result, let's give up and let other tasks run */
+ goto stop;
+ }
+ }
+
+ if (unlikely(cfd >= global.maxsock)) {
+ if (p)
+ send_log(p, LOG_EMERG,
+ "Proxy %s reached the configured maximum connection limit. Please check the global 'maxconn' value.\n",
+ p->id);
+ close(cfd);
+ limit_listener(l, &global_listener_queue);
+ task_schedule(global_listener_queue_task, tick_add(now_ms, 1000)); /* try again in 1 second */
+ return;
+ }
+
+ /* increase the per-process number of cumulated connections */
+ if (!(l->options & LI_O_UNLIMITED)) {
+ update_freq_ctr(&global.conn_per_sec, 1);
+ if (global.conn_per_sec.curr_ctr > global.cps_max)
+ global.cps_max = global.conn_per_sec.curr_ctr;
+ actconn++;
+ }
+
+ jobs++;
+ totalconn++;
+ l->nbconn++;
+
+ if (l->counters) {
+ if (l->nbconn > l->counters->conn_max)
+ l->counters->conn_max = l->nbconn;
+ }
+
+ ret = l->accept(l, cfd, &addr);
+ if (unlikely(ret <= 0)) {
+ /* The connection was closed by stream_accept(). Either
+ * we just have to ignore it (ret == 0) or it's a critical
+ * error due to a resource shortage, and we must stop the
+ * listener (ret < 0).
+ */
+ if (!(l->options & LI_O_UNLIMITED))
+ actconn--;
+ jobs--;
+ l->nbconn--;
+ if (ret == 0) /* successful termination */
+ continue;
+
+ goto transient_error;
+ }
+
+ if (l->nbconn >= l->maxconn) {
+ listener_full(l);
+ return;
+ }
+
+ /* increase the per-process number of cumulated connections */
+ if (!(l->options & LI_O_UNLIMITED)) {
+ update_freq_ctr(&global.sess_per_sec, 1);
+ if (global.sess_per_sec.curr_ctr > global.sps_max)
+ global.sps_max = global.sess_per_sec.curr_ctr;
+ }
+#ifdef USE_OPENSSL
+ if (!(l->options & LI_O_UNLIMITED) && l->bind_conf && l->bind_conf->is_ssl) {
+
+ update_freq_ctr(&global.ssl_per_sec, 1);
+ if (global.ssl_per_sec.curr_ctr > global.ssl_max)
+ global.ssl_max = global.ssl_per_sec.curr_ctr;
+ }
+#endif
+
+ } /* end of while (max_accept--) */
+
+ /* we've exhausted max_accept, so there is no need to poll again */
+ stop:
+ fd_done_recv(fd);
+ return;
+
+ transient_error:
+ /* pause the listener and try again in 100 ms */
+ expire = tick_add(now_ms, 100);
+
+ wait_expire:
+ limit_listener(l, &global_listener_queue);
+ task_schedule(global_listener_queue_task, tick_first(expire, global_listener_queue_task->expire));
+ return;
+}
+
+/*
+ * Registers the bind keyword list <kwl> as a list of valid keywords for next
+ * parsing sessions.
+ */
+void bind_register_keywords(struct bind_kw_list *kwl)
+{
+ LIST_ADDQ(&bind_keywords.list, &kwl->list);
+}
+
+/* Return a pointer to the bind keyword <kw>, or NULL if not found. If the
+ * keyword is found with a NULL ->parse() function, then an attempt is made to
+ * find one with a valid ->parse() function. This way it is possible to declare
+ * platform-dependent, known keywords as NULL, then only declare them as valid
+ * if some options are met. Note that if the requested keyword contains an
+ * opening parenthesis, everything from this point is ignored.
+ */
+struct bind_kw *bind_find_kw(const char *kw)
+{
+ int index;
+ const char *kwend;
+ struct bind_kw_list *kwl;
+ struct bind_kw *ret = NULL;
+
+ kwend = strchr(kw, '(');
+ if (!kwend)
+ kwend = kw + strlen(kw);
+
+ list_for_each_entry(kwl, &bind_keywords.list, list) {
+ for (index = 0; kwl->kw[index].kw != NULL; index++) {
+ if ((strncmp(kwl->kw[index].kw, kw, kwend - kw) == 0) &&
+ kwl->kw[index].kw[kwend-kw] == 0) {
+ if (kwl->kw[index].parse)
+ return &kwl->kw[index]; /* found it !*/
+ else
+ ret = &kwl->kw[index]; /* may be OK */
+ }
+ }
+ }
+ return ret;
+}
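A note on the lookup rule above: bind_find_kw() compares the requested keyword only up to an optional opening parenthesis, so a request such as "ssl(arg)" still matches a registered keyword "ssl". A minimal standalone sketch of that matching rule (the helper name is illustrative, not part of HAProxy):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative sketch of bind_find_kw()'s matching rule: compare up to an
 * optional '(' and require the registered keyword to end exactly there. */
static int kw_matches(const char *registered, const char *requested)
{
	const char *end = strchr(requested, '(');
	size_t len = end ? (size_t)(end - requested) : strlen(requested);

	return strncmp(registered, requested, len) == 0 &&
	       registered[len] == '\0';
}
```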
+
+/* Dumps all registered "bind" keywords to the <out> string pointer. The
+ * unsupported keywords are only dumped if their supported form was not
+ * found.
+ */
+void bind_dump_kws(char **out)
+{
+ struct bind_kw_list *kwl;
+ int index;
+
+ *out = NULL;
+ list_for_each_entry(kwl, &bind_keywords.list, list) {
+ for (index = 0; kwl->kw[index].kw != NULL; index++) {
+ if (kwl->kw[index].parse ||
+ bind_find_kw(kwl->kw[index].kw) == &kwl->kw[index]) {
+ memprintf(out, "%s[%4s] %s%s%s\n", *out ? *out : "",
+ kwl->scope,
+ kwl->kw[index].kw,
+ kwl->kw[index].skip ? " <arg>" : "",
+ kwl->kw[index].parse ? "" : " (not supported)");
+ }
+ }
+ }
+}
+
+/************************************************************************/
+/* All supported sample and ACL keywords must be declared here. */
+/************************************************************************/
+
+/* set temp integer to the number of connections on the same listening socket */
+static int
+smp_fetch_dconn(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = smp->sess->listener->nbconn;
+ return 1;
+}
+
+/* set temp integer to the id of the socket (listener) */
+static int
+smp_fetch_so_id(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = smp->sess->listener->luid;
+ return 1;
+}
+
+/* parse the "accept-proxy" bind keyword */
+static int bind_parse_accept_proxy(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ struct listener *l;
+
+ list_for_each_entry(l, &conf->listeners, by_bind)
+ l->options |= LI_O_ACC_PROXY;
+
+ return 0;
+}
+
+/* parse the "backlog" bind keyword */
+static int bind_parse_backlog(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ struct listener *l;
+ int val;
+
+ if (!*args[cur_arg + 1]) {
+ memprintf(err, "'%s' : missing value", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ val = atol(args[cur_arg + 1]);
+ if (val <= 0) {
+ memprintf(err, "'%s' : invalid value %d, must be > 0", args[cur_arg], val);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ list_for_each_entry(l, &conf->listeners, by_bind)
+ l->backlog = val;
+
+ return 0;
+}
+
+/* parse the "id" bind keyword */
+static int bind_parse_id(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ struct eb32_node *node;
+ struct listener *l, *new;
+
+ if (conf->listeners.n != conf->listeners.p) {
+ memprintf(err, "'%s' can only be used with a single socket", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ if (!*args[cur_arg + 1]) {
+ memprintf(err, "'%s' : expects an integer argument", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ new = LIST_NEXT(&conf->listeners, struct listener *, by_bind);
+ new->luid = atol(args[cur_arg + 1]);
+ new->conf.id.key = new->luid;
+
+ if (new->luid <= 0) {
+ memprintf(err, "'%s' : custom id has to be > 0", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ node = eb32_lookup(&px->conf.used_listener_id, new->luid);
+ if (node) {
+ l = container_of(node, struct listener, conf.id);
+ memprintf(err, "'%s' : custom id %d already used at %s:%d ('bind %s')",
+ args[cur_arg], l->luid, l->bind_conf->file, l->bind_conf->line,
+ l->bind_conf->arg);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ eb32_insert(&px->conf.used_listener_id, &new->conf.id);
+ return 0;
+}
+
+/* parse the "maxconn" bind keyword */
+static int bind_parse_maxconn(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ struct listener *l;
+ int val;
+
+ if (!*args[cur_arg + 1]) {
+ memprintf(err, "'%s' : missing value", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ val = atol(args[cur_arg + 1]);
+ if (val <= 0) {
+ memprintf(err, "'%s' : invalid value %d, must be > 0", args[cur_arg], val);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ list_for_each_entry(l, &conf->listeners, by_bind)
+ l->maxconn = val;
+
+ return 0;
+}
+
+/* parse the "name" bind keyword */
+static int bind_parse_name(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ struct listener *l;
+
+ if (!*args[cur_arg + 1]) {
+ memprintf(err, "'%s' : missing name", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ list_for_each_entry(l, &conf->listeners, by_bind)
+ l->name = strdup(args[cur_arg + 1]);
+
+ return 0;
+}
+
+/* parse the "nice" bind keyword */
+static int bind_parse_nice(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ struct listener *l;
+ int val;
+
+ if (!*args[cur_arg + 1]) {
+ memprintf(err, "'%s' : missing value", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ val = atol(args[cur_arg + 1]);
+ if (val < -1024 || val > 1024) {
+ memprintf(err, "'%s' : invalid value %d, allowed range is -1024..1024", args[cur_arg], val);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ list_for_each_entry(l, &conf->listeners, by_bind)
+ l->nice = val;
+
+ return 0;
+}
+
+/* parse the "process" bind keyword */
+static int bind_parse_process(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ unsigned long set = 0;
+ unsigned int low, high;
+
+ if (strcmp(args[cur_arg + 1], "all") == 0) {
+ set = 0;
+ }
+ else if (strcmp(args[cur_arg + 1], "odd") == 0) {
+ set |= ~0UL/3UL; /* 0x555....555 */
+ }
+ else if (strcmp(args[cur_arg + 1], "even") == 0) {
+ set |= (~0UL/3UL) << 1; /* 0xAAA...AAA */
+ }
+ else if (isdigit((int)*args[cur_arg + 1])) {
+ char *dash = strchr(args[cur_arg + 1], '-');
+
+ low = high = str2uic(args[cur_arg + 1]);
+ if (dash)
+ high = str2uic(dash + 1);
+
+ if (high < low) {
+ unsigned int swap = low;
+ low = high;
+ high = swap;
+ }
+
+ if (low < 1 || high > LONGBITS) {
+ memprintf(err, "'%s' : invalid range %d-%d, allowed range is 1..%d", args[cur_arg], low, high, LONGBITS);
+ return ERR_ALERT | ERR_FATAL;
+ }
+ while (low <= high)
+ set |= 1UL << (low++ - 1);
+ }
+ else {
+ memprintf(err, "'%s' expects 'all', 'odd', 'even', or a process range with numbers from 1 to %d.", args[cur_arg], LONGBITS);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ conf->bind_proc = set;
+ return 0;
+}
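A sketch of the mask arithmetic used by the "process" parser above: `~0UL/3UL` produces the alternating bit pattern 0x5555...55 (the odd process numbers, counting from 1), shifting it left by one selects the even ones, and a numeric range sets one bit per process. These helpers are illustrative, not HAProxy functions:

```c
#include <assert.h>

/* Illustrative mask helpers mirroring bind_parse_process()'s arithmetic. */
static unsigned long odd_mask(void)  { return ~0UL / 3UL; }        /* 0x555...555 */
static unsigned long even_mask(void) { return (~0UL / 3UL) << 1; } /* 0xAAA...AAA */

static unsigned long range_mask(unsigned int low, unsigned int high)
{
	unsigned long set = 0;

	while (low <= high)
		set |= 1UL << (low++ - 1); /* process 1 maps to bit 0 */
	return set;
}
```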
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Please take care of keeping this list alphabetically sorted.
+ */
+static struct sample_fetch_kw_list smp_kws = {ILH, {
+ { "dst_conn", smp_fetch_dconn, 0, NULL, SMP_T_SINT, SMP_USE_FTEND, },
+ { "so_id", smp_fetch_so_id, 0, NULL, SMP_T_SINT, SMP_USE_FTEND, },
+ { /* END */ },
+}};
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Please take care of keeping this list alphabetically sorted.
+ */
+static struct acl_kw_list acl_kws = {ILH, {
+ { /* END */ },
+}};
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Please take care of keeping this list alphabetically sorted, doing so helps
+ * all code contributors.
+ * Optional keywords are also declared with a NULL ->parse() function so that
+ * the config parser can report an appropriate error when a known keyword was
+ * not enabled.
+ */
+static struct bind_kw_list bind_kws = { "ALL", { }, {
+ { "accept-proxy", bind_parse_accept_proxy, 0 }, /* enable PROXY protocol */
+ { "backlog", bind_parse_backlog, 1 }, /* set backlog of listening socket */
+ { "id", bind_parse_id, 1 }, /* set id of listening socket */
+ { "maxconn", bind_parse_maxconn, 1 }, /* set maxconn of listening socket */
+ { "name", bind_parse_name, 1 }, /* set name of listening socket */
+ { "nice", bind_parse_nice, 1 }, /* set nice of listening socket */
+ { "process", bind_parse_process, 1 }, /* set list of allowed process for this socket */
+ { /* END */ },
+}};
+
+__attribute__((constructor))
+static void __listener_init(void)
+{
+ sample_register_fetches(&smp_kws);
+ acl_register_keywords(&acl_kws);
+ bind_register_keywords(&bind_kws);
+}
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * General logging functions.
+ *
+ * Copyright 2000-2008 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <ctype.h>
+#include <fcntl.h>
+#include <stdarg.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <syslog.h>
+#include <time.h>
+#include <unistd.h>
+#include <errno.h>
+
+#include <sys/time.h>
+
+#include <common/config.h>
+#include <common/compat.h>
+#include <common/standard.h>
+#include <common/time.h>
+
+#include <types/global.h>
+#include <types/log.h>
+
+#include <proto/frontend.h>
+#include <proto/proto_http.h>
+#include <proto/log.h>
+#include <proto/sample.h>
+#include <proto/stream.h>
+#include <proto/stream_interface.h>
+#ifdef USE_OPENSSL
+#include <proto/ssl_sock.h>
+#endif
+
+struct log_fmt {
+ char *name;
+ struct {
+ struct chunk sep1; /* first pid separator */
+ struct chunk sep2; /* second pid separator */
+ } pid;
+};
+
+static const struct log_fmt log_formats[LOG_FORMATS] = {
+ [LOG_FORMAT_RFC3164] = {
+ .name = "rfc3164",
+ .pid = {
+ .sep1 = { .str = "[", .len = 1 },
+ .sep2 = { .str = "]: ", .len = 3 }
+ }
+ },
+ [LOG_FORMAT_RFC5424] = {
+ .name = "rfc5424",
+ .pid = {
+ .sep1 = { .str = " ", .len = 1 },
+ .sep2 = { .str = " - ", .len = 3 }
+ }
+ }
+};
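The two separator pairs above frame the tag and pid differently in the outgoing header: RFC3164 yields "tag[pid]: " while RFC5424 yields "tag pid - ". A hypothetical formatting helper (not HAProxy's real header builder) makes the framing concrete:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative sketch: frame <tag> and <pid> with the given pid separators,
 * as the sep1/sep2 chunks in log_formats[] do for the syslog header. */
static int fmt_tag_pid(char *dst, size_t size, const char *tag, int pid,
                       const char *sep1, const char *sep2)
{
	return snprintf(dst, size, "%s%s%d%s", tag, sep1, pid, sep2);
}
```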
+
+const char *log_facilities[NB_LOG_FACILITIES] = {
+ "kern", "user", "mail", "daemon",
+ "auth", "syslog", "lpr", "news",
+ "uucp", "cron", "auth2", "ftp",
+ "ntp", "audit", "alert", "cron2",
+ "local0", "local1", "local2", "local3",
+ "local4", "local5", "local6", "local7"
+};
+
+const char *log_levels[NB_LOG_LEVELS] = {
+ "emerg", "alert", "crit", "err",
+ "warning", "notice", "info", "debug"
+};
+
+const char sess_term_cond[16] = "-LcCsSPRIDKUIIII"; /* normal, Local, CliTo, CliErr, SrvTo, SrvErr, PxErr, Resource, Internal, Down, Killed, Up, -- */
+const char sess_fin_state[8] = "-RCHDLQT"; /* cliRequest, srvConnect, srvHeader, Data, Last, Queue, Tarpit */
+
+
+/* log_format */
+struct logformat_type {
+ char *name;
+ int type;
+ int mode;
+ int lw; /* logwait bitfield */
+ int (*config_callback)(struct logformat_node *node, struct proxy *curproxy);
+ const char *replace_by; /* new option to use instead of old one */
+};
+
+int prepare_addrsource(struct logformat_node *node, struct proxy *curproxy);
+
+/* log_format variable names */
+static const struct logformat_type logformat_keywords[] = {
+ { "o", LOG_FMT_GLOBAL, PR_MODE_TCP, 0, NULL }, /* global option */
+
+ /* please keep these lines sorted ! */
+ { "B", LOG_FMT_BYTES, PR_MODE_TCP, LW_BYTES, NULL }, /* bytes from server to client */
+ { "CC", LOG_FMT_CCLIENT, PR_MODE_HTTP, LW_REQHDR, NULL }, /* client cookie */
+ { "CS", LOG_FMT_CSERVER, PR_MODE_HTTP, LW_RSPHDR, NULL }, /* server cookie */
+ { "H", LOG_FMT_HOSTNAME, PR_MODE_TCP, LW_INIT, NULL }, /* Hostname */
+ { "ID", LOG_FMT_UNIQUEID, PR_MODE_HTTP, LW_BYTES, NULL }, /* Unique ID */
+ { "ST", LOG_FMT_STATUS, PR_MODE_TCP, LW_RESP, NULL }, /* status code */
+ { "T", LOG_FMT_DATEGMT, PR_MODE_TCP, LW_INIT, NULL }, /* date GMT */
+ { "Tc", LOG_FMT_TC, PR_MODE_TCP, LW_BYTES, NULL }, /* Tc */
+ { "Tl", LOG_FMT_DATELOCAL, PR_MODE_TCP, LW_INIT, NULL }, /* date local timezone */
+ { "Tq", LOG_FMT_TQ, PR_MODE_HTTP, LW_BYTES, NULL }, /* Tq */
+ { "Tr", LOG_FMT_TR, PR_MODE_HTTP, LW_BYTES, NULL }, /* Tr */
+ { "Ts", LOG_FMT_TS, PR_MODE_TCP, LW_INIT, NULL }, /* timestamp GMT */
+ { "Tt", LOG_FMT_TT, PR_MODE_TCP, LW_BYTES, NULL }, /* Tt */
+ { "Tw", LOG_FMT_TW, PR_MODE_TCP, LW_BYTES, NULL }, /* Tw */
+ { "U", LOG_FMT_BYTES_UP, PR_MODE_TCP, LW_BYTES, NULL }, /* bytes from client to server */
+ { "ac", LOG_FMT_ACTCONN, PR_MODE_TCP, LW_BYTES, NULL }, /* actconn */
+ { "b", LOG_FMT_BACKEND, PR_MODE_TCP, LW_INIT, NULL }, /* backend */
+ { "bc", LOG_FMT_BECONN, PR_MODE_TCP, LW_BYTES, NULL }, /* beconn */
+ { "bi", LOG_FMT_BACKENDIP, PR_MODE_TCP, LW_BCKIP, prepare_addrsource }, /* backend source ip */
+ { "bp", LOG_FMT_BACKENDPORT, PR_MODE_TCP, LW_BCKIP, prepare_addrsource }, /* backend source port */
+ { "bq", LOG_FMT_BCKQUEUE, PR_MODE_TCP, LW_BYTES, NULL }, /* backend_queue */
+ { "ci", LOG_FMT_CLIENTIP, PR_MODE_TCP, LW_CLIP, NULL }, /* client ip */
+ { "cp", LOG_FMT_CLIENTPORT, PR_MODE_TCP, LW_CLIP, NULL }, /* client port */
+ { "f", LOG_FMT_FRONTEND, PR_MODE_TCP, LW_INIT, NULL }, /* frontend */
+ { "fc", LOG_FMT_FECONN, PR_MODE_TCP, LW_BYTES, NULL }, /* feconn */
+ { "fi", LOG_FMT_FRONTENDIP, PR_MODE_TCP, LW_FRTIP, NULL }, /* frontend ip */
+ { "fp", LOG_FMT_FRONTENDPORT, PR_MODE_TCP, LW_FRTIP, NULL }, /* frontend port */
+ { "ft", LOG_FMT_FRONTEND_XPRT, PR_MODE_TCP, LW_INIT, NULL }, /* frontend with transport mode */
+ { "hr", LOG_FMT_HDRREQUEST, PR_MODE_TCP, LW_REQHDR, NULL }, /* header request */
+ { "hrl", LOG_FMT_HDRREQUESTLIST, PR_MODE_TCP, LW_REQHDR, NULL }, /* header request list */
+ { "hs", LOG_FMT_HDRRESPONS, PR_MODE_TCP, LW_RSPHDR, NULL }, /* header response */
+ { "hsl", LOG_FMT_HDRRESPONSLIST, PR_MODE_TCP, LW_RSPHDR, NULL }, /* header response list */
+ { "HM", LOG_FMT_HTTP_METHOD, PR_MODE_HTTP, LW_REQ, NULL }, /* HTTP method */
+ { "HP", LOG_FMT_HTTP_PATH, PR_MODE_HTTP, LW_REQ, NULL }, /* HTTP path */
+ { "HQ", LOG_FMT_HTTP_QUERY, PR_MODE_HTTP, LW_REQ, NULL }, /* HTTP query */
+ { "HU", LOG_FMT_HTTP_URI, PR_MODE_HTTP, LW_REQ, NULL }, /* HTTP full URI */
+ { "HV", LOG_FMT_HTTP_VERSION, PR_MODE_HTTP, LW_REQ, NULL }, /* HTTP version */
+ { "lc", LOG_FMT_LOGCNT, PR_MODE_TCP, LW_INIT, NULL }, /* log counter */
+ { "ms", LOG_FMT_MS, PR_MODE_TCP, LW_INIT, NULL }, /* accept date millisecond */
+ { "pid", LOG_FMT_PID, PR_MODE_TCP, LW_INIT, NULL }, /* log pid */
+ { "r", LOG_FMT_REQ, PR_MODE_HTTP, LW_REQ, NULL }, /* request */
+ { "rc", LOG_FMT_RETRIES, PR_MODE_TCP, LW_BYTES, NULL }, /* retries */
+ { "rt", LOG_FMT_COUNTER, PR_MODE_TCP, LW_REQ, NULL }, /* request counter (HTTP or TCP session) */
+ { "s", LOG_FMT_SERVER, PR_MODE_TCP, LW_SVID, NULL }, /* server */
+ { "sc", LOG_FMT_SRVCONN, PR_MODE_TCP, LW_BYTES, NULL }, /* srv_conn */
+ { "si", LOG_FMT_SERVERIP, PR_MODE_TCP, LW_SVIP, NULL }, /* server destination ip */
+ { "sp", LOG_FMT_SERVERPORT, PR_MODE_TCP, LW_SVIP, NULL }, /* server destination port */
+ { "sq", LOG_FMT_SRVQUEUE, PR_MODE_TCP, LW_BYTES, NULL }, /* srv_queue */
+ { "sslc", LOG_FMT_SSL_CIPHER, PR_MODE_TCP, LW_XPRT, NULL }, /* client-side SSL ciphers */
+ { "sslv", LOG_FMT_SSL_VERSION, PR_MODE_TCP, LW_XPRT, NULL }, /* client-side SSL protocol version */
+ { "t", LOG_FMT_DATE, PR_MODE_TCP, LW_INIT, NULL }, /* date */
+ { "ts", LOG_FMT_TERMSTATE, PR_MODE_TCP, LW_BYTES, NULL },/* termination state */
+ { "tsc", LOG_FMT_TERMSTATE_CK, PR_MODE_TCP, LW_INIT, NULL },/* termination state */
+
+ /* The following tags are deprecated and will be removed soon */
+ { "Bi", LOG_FMT_BACKENDIP, PR_MODE_TCP, LW_BCKIP, prepare_addrsource, "bi" }, /* backend source ip */
+ { "Bp", LOG_FMT_BACKENDPORT, PR_MODE_TCP, LW_BCKIP, prepare_addrsource, "bp" }, /* backend source port */
+ { "Ci", LOG_FMT_CLIENTIP, PR_MODE_TCP, LW_CLIP, NULL, "ci" }, /* client ip */
+ { "Cp", LOG_FMT_CLIENTPORT, PR_MODE_TCP, LW_CLIP, NULL, "cp" }, /* client port */
+ { "Fi", LOG_FMT_FRONTENDIP, PR_MODE_TCP, LW_FRTIP, NULL, "fi" }, /* frontend ip */
+ { "Fp", LOG_FMT_FRONTENDPORT, PR_MODE_TCP, LW_FRTIP, NULL, "fp" }, /* frontend port */
+ { "Si", LOG_FMT_SERVERIP, PR_MODE_TCP, LW_SVIP, NULL, "si" }, /* server destination ip */
+ { "Sp", LOG_FMT_SERVERPORT, PR_MODE_TCP, LW_SVIP, NULL, "sp" }, /* server destination port */
+ { "cc", LOG_FMT_CCLIENT, PR_MODE_HTTP, LW_REQHDR, NULL, "CC" }, /* client cookie */
+ { "cs", LOG_FMT_CSERVER, PR_MODE_HTTP, LW_RSPHDR, NULL, "CS" }, /* server cookie */
+ { "st", LOG_FMT_STATUS, PR_MODE_HTTP, LW_RESP, NULL, "ST" }, /* status code */
+ { 0, 0, 0, 0, NULL }
+};
+
+char default_http_log_format[] = "%ci:%cp [%t] %ft %b/%s %Tq/%Tw/%Tc/%Tr/%Tt %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"; // default format
+char clf_http_log_format[] = "%{+Q}o %{-Q}ci - - [%T] %r %ST %B \"\" \"\" %cp %ms %ft %b %s %Tq %Tw %Tc %Tr %Tt %tsc %ac %fc %bc %sc %rc %sq %bq %CC %CS %hrl %hsl";
+char default_tcp_log_format[] = "%ci:%cp [%t] %ft %b/%s %Tw/%Tc/%Tt %B %ts %ac/%fc/%bc/%sc/%rc %sq/%bq";
+char *log_format = NULL;
+
+/* Default string used for structured-data part in RFC5424 formatted
+ * syslog messages.
+ */
+char default_rfc5424_sd_log_format[] = "- ";
+
+/* This is a global syslog header, common to all outgoing messages in
+ * RFC3164 format. It begins with time-based part and is updated by
+ * update_log_hdr().
+ */
+char *logheader = NULL;
+
+/* This is a global syslog header for messages in RFC5424 format. It is
+ * updated by update_log_hdr_rfc5424().
+ */
+char *logheader_rfc5424 = NULL;
+
+/* This is a global syslog message buffer, common to all outgoing
+ * messages. It contains only the data part.
+ */
+char *logline = NULL;
+
+/* A global syslog message buffer, common to all RFC5424 syslog messages.
+ * Currently, it is used for generating the structured-data part.
+ */
+char *logline_rfc5424 = NULL;
+
+struct logformat_var_args {
+ char *name;
+ int mask;
+};
+
+struct logformat_var_args var_args_list[] = {
+// global
+ { "M", LOG_OPT_MANDATORY },
+ { "Q", LOG_OPT_QUOTE },
+ { "X", LOG_OPT_HEXA },
+ { 0, 0 }
+};
+
+/* return the name of the directive used in the current proxy for which we're
+ * currently parsing a header, when it is known.
+ */
+static inline const char *fmt_directive(const struct proxy *curproxy)
+{
+ switch (curproxy->conf.args.ctx) {
+ case ARGC_ACL:
+ return "acl";
+ case ARGC_STK:
+ return "stick";
+ case ARGC_TRK:
+ return "track-sc";
+ case ARGC_LOG:
+ return "log-format";
+ case ARGC_LOGSD:
+ return "log-format-sd";
+ case ARGC_HRQ:
+ return "http-request";
+ case ARGC_HRS:
+ return "http-response";
+ case ARGC_UIF:
+ return "unique-id-format";
+ case ARGC_RDR:
+ return "redirect";
+ case ARGC_CAP:
+ return "capture";
+ case ARGC_SRV:
+ return "server";
+ default:
+ return "undefined(please report this bug)"; /* must never happen */
+ }
+}
+
+/*
+ * callback used to configure addr source retrieval
+ */
+int prepare_addrsource(struct logformat_node *node, struct proxy *curproxy)
+{
+ curproxy->options2 |= PR_O2_SRC_ADDR;
+
+ return 0;
+}
+
+
+/*
+ * Parse args in a logformat_var
+ */
+int parse_logformat_var_args(char *args, struct logformat_node *node)
+{
+ int i = 0;
+ int end = 0;
+ int flags = 0; // 0 = none, 1 = '+' (set), 2 = '-' (clear)
+ char *sp = NULL; // start pointer
+
+ if (args == NULL)
+ return 1;
+
+ while (1) {
+ if (*args == '\0')
+ end = 1;
+
+ if (*args == '+') {
+ // add flag
+ sp = args + 1;
+ flags = 1;
+ }
+ if (*args == '-') {
+ // delete flag
+ sp = args + 1;
+ flags = 2;
+ }
+
+ if (*args == '\0' || *args == ',') {
+ *args = '\0';
+ for (i = 0; sp && var_args_list[i].name; i++) {
+ if (strcmp(sp, var_args_list[i].name) == 0) {
+ if (flags == 1) {
+ node->options |= var_args_list[i].mask;
+ break;
+ } else if (flags == 2) {
+ node->options &= ~var_args_list[i].mask;
+ break;
+ }
+ }
+ }
+ sp = NULL;
+ if (end)
+ break;
+ }
+ args++;
+ }
+ return 0;
+}
+
+/*
+ * Parse a variable '%varname' or '%{args}varname' in log-format. The caller
+ * must pass the args part in the <arg> pointer with its length in <arg_len>,
+ * and varname with its length in <var> and <var_len> respectively. <arg> is
+ * ignored when arg_len is 0. Neither <var> nor <var_len> may be null.
+ */
+int parse_logformat_var(char *arg, int arg_len, char *var, int var_len, struct proxy *curproxy, struct list *list_format, int *defoptions)
+{
+ int j;
+ struct logformat_node *node;
+
+ for (j = 0; logformat_keywords[j].name; j++) { // search a log type
+ if (strlen(logformat_keywords[j].name) == var_len &&
+ strncmp(var, logformat_keywords[j].name, var_len) == 0) {
+ if (logformat_keywords[j].mode != PR_MODE_HTTP || curproxy->mode == PR_MODE_HTTP) {
+ node = calloc(1, sizeof(struct logformat_node));
+ node->type = logformat_keywords[j].type;
+ node->options = *defoptions;
+ if (arg_len) {
+ node->arg = my_strndup(arg, arg_len);
+ parse_logformat_var_args(node->arg, node);
+ }
+ if (node->type == LOG_FMT_GLOBAL) {
+ *defoptions = node->options;
+ free(node->arg);
+ free(node);
+ } else {
+ if (logformat_keywords[j].config_callback &&
+ logformat_keywords[j].config_callback(node, curproxy) != 0) {
+ return -1;
+ }
+ curproxy->to_log |= logformat_keywords[j].lw;
+ LIST_ADDQ(list_format, &node->list);
+ }
+ if (logformat_keywords[j].replace_by)
+ Warning("parsing [%s:%d] : deprecated variable '%s' in '%s', please replace it with '%s'.\n",
+ curproxy->conf.args.file, curproxy->conf.args.line,
+ logformat_keywords[j].name, fmt_directive(curproxy), logformat_keywords[j].replace_by);
+ return 0;
+ } else {
+ Warning("parsing [%s:%d] : '%s' : format variable '%s' is reserved for HTTP mode.\n",
+ curproxy->conf.args.file, curproxy->conf.args.line, fmt_directive(curproxy),
+ logformat_keywords[j].name);
+ return -1;
+ }
+ }
+ }
+
+ j = var[var_len];
+ var[var_len] = 0;
+ Warning("parsing [%s:%d] : no such format variable '%s' in '%s'. If you wanted to emit the '%%' character verbatim, you need to use '%%%%' in log-format expressions.\n",
+ curproxy->conf.args.file, curproxy->conf.args.line, var, fmt_directive(curproxy));
+ var[var_len] = j;
+ return -1;
+}
+
+/*
+ * push to the logformat linked list
+ *
+ * start: start pointer
+ * end: end text pointer
+ * type: string type
+ * list_format: destination list
+ *
+ * LF_TEXT: copy chars from start to end, excluding end.
+ *
+*/
+void add_to_logformat_list(char *start, char *end, int type, struct list *list_format)
+{
+ char *str;
+
+ if (type == LF_TEXT) { /* type text */
+ struct logformat_node *node = calloc(1, sizeof(struct logformat_node));
+ str = calloc(end - start + 1, 1);
+ strncpy(str, start, end - start);
+ str[end - start] = '\0';
+ node->arg = str;
+ node->type = LOG_FMT_TEXT; // type string
+ LIST_ADDQ(list_format, &node->list);
+ } else if (type == LF_SEPARATOR) {
+ struct logformat_node *node = calloc(1, sizeof(struct logformat_node));
+ node->type = LOG_FMT_SEPARATOR;
+ LIST_ADDQ(list_format, &node->list);
+ }
+}
+
+/*
+ * Parse the sample fetch expression <text> and add a node to <list_format> upon
+ * success. At the moment, sample converters are not yet supported but fetch arguments
+ * should work. The curpx->conf.args.ctx must be set by the caller.
+ */
+void add_sample_to_logformat_list(char *text, char *arg, int arg_len, struct proxy *curpx, struct list *list_format, int options, int cap, const char *file, int line)
+{
+ char *cmd[2];
+ struct sample_expr *expr;
+ struct logformat_node *node;
+ int cmd_arg;
+ char *errmsg = NULL;
+
+ cmd[0] = text;
+ cmd[1] = "";
+ cmd_arg = 0;
+
+ expr = sample_parse_expr(cmd, &cmd_arg, file, line, &errmsg, &curpx->conf.args);
+ if (!expr) {
+ Warning("parsing [%s:%d] : '%s' : sample fetch <%s> failed with : %s\n",
+ curpx->conf.args.file, curpx->conf.args.line, fmt_directive(curpx),
+ text, errmsg);
+ return;
+ }
+
+ node = calloc(1, sizeof(struct logformat_node));
+ node->type = LOG_FMT_EXPR;
+ node->expr = expr;
+ node->options = options;
+
+ if (arg_len) {
+ node->arg = my_strndup(arg, arg_len);
+ parse_logformat_var_args(node->arg, node);
+ }
+ if (expr->fetch->val & cap & SMP_VAL_REQUEST)
+ node->options |= LOG_OPT_REQ_CAP; /* fetch method is request-compatible */
+
+ if (expr->fetch->val & cap & SMP_VAL_RESPONSE)
+ node->options |= LOG_OPT_RES_CAP; /* fetch method is response-compatible */
+
+ if (!(expr->fetch->val & cap))
+ Warning("parsing [%s:%d] : '%s' : sample fetch <%s> may not be reliably used here because it needs '%s' which is not available here.\n",
+ curpx->conf.args.file, curpx->conf.args.line, fmt_directive(curpx),
+ text, sample_src_names(expr->fetch->use));
+
+ /* check if we need to allocate an hdr_idx struct for HTTP parsing */
+ /* Note, we may also need to set curpx->to_log with certain fetches */
+ curpx->http_needed |= !!(expr->fetch->use & SMP_USE_HTTP_ANY);
+
+ /* FIXME: temporary workaround for missing LW_XPRT and LW_REQ flags
+ * needed with some sample fetches (eg: ssl*). We always set them for
+ * now, but this will be replaced with per-sample capabilities soon.
+ */
+ curpx->to_log |= LW_XPRT;
+ curpx->to_log |= LW_REQ;
+ LIST_ADDQ(list_format, &node->list);
+}
+
+/*
+ * Parse the log_format string and fill a linked list.
+ * Variable names are preceded by '%' and composed of characters [a-zA-Z0-9]*: %varname
+ * You can set arguments using { } : %{many arguments}varname.
+ * The curproxy->conf.args.ctx must be set by the caller.
+ *
+ * str: the string to parse
+ * curproxy: the proxy affected
+ * list_format: the destination list
+ * options: LOG_OPT_* to force on every node
+ * cap: all SMP_VAL_* flags supported by the consumer
+ */
+void parse_logformat_string(const char *fmt, struct proxy *curproxy, struct list *list_format, int options, int cap, const char *file, int line)
+{
+ char *sp, *str, *backfmt; /* start pointer for text parts */
+ char *arg = NULL; /* start pointer for args */
+ char *var = NULL; /* start pointer for vars */
+ int arg_len = 0;
+ int var_len = 0;
+ int cformat; /* current token format */
+ int pformat; /* previous token format */
+ struct logformat_node *tmplf, *back;
+
+ sp = str = backfmt = strdup(fmt);
+ curproxy->to_log |= LW_INIT;
+
+ /* flush the list first. */
+ list_for_each_entry_safe(tmplf, back, list_format, list) {
+ LIST_DEL(&tmplf->list);
+ free(tmplf);
+ }
+
+ for (cformat = LF_INIT; cformat != LF_END; str++) {
+ pformat = cformat;
+
+ if (!*str)
+ cformat = LF_END; // preset it to save all states from doing this
+
+ /* The principle of the two-step state machine below is to first detect a change, and
+ * then to process all common paths in one place. The common paths are the ones
+ * encountered in text areas (LF_INIT, LF_TEXT, LF_SEPARATOR) and at the end (LF_END).
+ * We use the common LF_INIT state to dispatch to the different final states.
+ */
+ switch (pformat) {
+ case LF_STARTVAR: // text immediately following a '%'
+ arg = NULL; var = NULL;
+ arg_len = var_len = 0;
+ if (*str == '{') { // optional argument
+ cformat = LF_STARG;
+ arg = str + 1;
+ }
+ else if (*str == '[') {
+ cformat = LF_STEXPR;
+ var = str + 1; // store expr in variable name
+ }
+ else if (isalpha((unsigned char)*str)) { // variable name
+ cformat = LF_VAR;
+ var = str;
+ }
+ else if (*str == '%')
+ cformat = LF_TEXT; // convert this character to a literal (useful for '%')
+ else if (isdigit((unsigned char)*str) || *str == ' ' || *str == '\t') {
+ /* single '%' followed by blank or digit, send them both */
+ cformat = LF_TEXT;
+ pformat = LF_TEXT; /* finally we include the previous char as well */
+ sp = str - 1; /* send both the '%' and the current char */
+ Warning("parsing [%s:%d] : Fixed missing '%%' before '%c' at position %d in %s line : '%s'. Please use '%%%%' when you need the '%%' character in a log-format expression.\n",
+ curproxy->conf.args.file, curproxy->conf.args.line, *str, (int)(str - backfmt), fmt_directive(curproxy), fmt);
+
+ }
+ else
+ cformat = LF_INIT; // handle other cases of literals
+ break;
+
+ case LF_STARG: // text immediately following '%{'
+ if (*str == '}') { // end of arg
+ cformat = LF_EDARG;
+ arg_len = str - arg;
+ *str = 0; // used for reporting errors
+ }
+ break;
+
+ case LF_EDARG: // text immediately following '%{arg}'
+ if (*str == '[') {
+ cformat = LF_STEXPR;
+ var = str + 1; // store expr in variable name
+ break;
+ }
+ else if (isalnum((unsigned char)*str)) { // variable name
+ cformat = LF_VAR;
+ var = str;
+ break;
+ }
+ Warning("parsing [%s:%d] : Skipping isolated argument in '%s' line : '%%{%s}'\n",
+ curproxy->conf.args.file, curproxy->conf.args.line, fmt_directive(curproxy), arg);
+ cformat = LF_INIT;
+ break;
+
+ case LF_STEXPR: // text immediately following '%['
+ if (*str == ']') { // end of arg
+ cformat = LF_EDEXPR;
+ var_len = str - var;
+ *str = 0; // needed for parsing the expression
+ }
+ break;
+
+ case LF_VAR: // text part of a variable name
+ var_len = str - var;
+ if (!isalnum((unsigned char)*str))
+ cformat = LF_INIT; // not variable name anymore
+ break;
+
+ default: // LF_INIT, LF_TEXT, LF_SEPARATOR, LF_END, LF_EDEXPR
+ cformat = LF_INIT;
+ }
+
+ if (cformat == LF_INIT) { /* resynchronize state to text/sep/startvar */
+ switch (*str) {
+ case '%': cformat = LF_STARTVAR; break;
+ case ' ': cformat = LF_SEPARATOR; break;
+ case 0 : cformat = LF_END; break;
+ default : cformat = LF_TEXT; break;
+ }
+ }
+
+ if (cformat != pformat || pformat == LF_SEPARATOR) {
+ switch (pformat) {
+ case LF_VAR:
+ parse_logformat_var(arg, arg_len, var, var_len, curproxy, list_format, &options);
+ break;
+ case LF_STEXPR:
+ add_sample_to_logformat_list(var, arg, arg_len, curproxy, list_format, options, cap, file, line);
+ break;
+ case LF_TEXT:
+ case LF_SEPARATOR:
+ add_to_logformat_list(sp, str, pformat, list_format);
+ break;
+ }
+ sp = str; /* new start of text at every state switch and at every separator */
+ }
+ }
+
+ if (pformat == LF_STARTVAR || pformat == LF_STARG || pformat == LF_STEXPR)
+ Warning("parsing [%s:%d] : Ignoring end of truncated '%s' line after '%s'\n",
+ curproxy->conf.args.file, curproxy->conf.args.line, fmt_directive(curproxy),
+ var ? var : arg ? arg : "%");
+
+ free(backfmt);
+}
+
+/*
+ * Displays the message on stderr with the date and pid. Overrides the quiet
+ * mode during startup.
+ */
+void Alert(const char *fmt, ...)
+{
+ va_list argp;
+ struct tm tm;
+
+ if (!(global.mode & MODE_QUIET) || (global.mode & (MODE_VERBOSE | MODE_STARTING))) {
+ va_start(argp, fmt);
+
+ get_localtime(date.tv_sec, &tm);
+ fprintf(stderr, "[ALERT] %03d/%02d%02d%02d (%d) : ",
+ tm.tm_yday, tm.tm_hour, tm.tm_min, tm.tm_sec, (int)getpid());
+ vfprintf(stderr, fmt, argp);
+ fflush(stderr);
+ va_end(argp);
+ }
+}
+
+
+/*
+ * Displays the message on stderr with the date and pid.
+ */
+void Warning(const char *fmt, ...)
+{
+ va_list argp;
+ struct tm tm;
+
+ if (!(global.mode & MODE_QUIET) || (global.mode & MODE_VERBOSE)) {
+ va_start(argp, fmt);
+
+ get_localtime(date.tv_sec, &tm);
+ fprintf(stderr, "[WARNING] %03d/%02d%02d%02d (%d) : ",
+ tm.tm_yday, tm.tm_hour, tm.tm_min, tm.tm_sec, (int)getpid());
+ vfprintf(stderr, fmt, argp);
+ fflush(stderr);
+ va_end(argp);
+ }
+}
+
+/*
+ * Displays the message on <out> only if quiet mode is not set.
+ */
+void qfprintf(FILE *out, const char *fmt, ...)
+{
+ va_list argp;
+
+ if (!(global.mode & MODE_QUIET) || (global.mode & MODE_VERBOSE)) {
+ va_start(argp, fmt);
+ vfprintf(out, fmt, argp);
+ fflush(out);
+ va_end(argp);
+ }
+}
+
+/*
+ * Returns the log format for <fmt> or -1 if not found.
+ */
+int get_log_format(const char *fmt)
+{
+ int format;
+
+ format = LOG_FORMATS - 1;
+ while (format >= 0 && strcmp(log_formats[format].name, fmt))
+ format--;
+
+ return format;
+}
+
+/*
+ * Returns the log level for <lev> or -1 if not found.
+ */
+int get_log_level(const char *lev)
+{
+ int level;
+
+ level = NB_LOG_LEVELS - 1;
+ while (level >= 0 && strcmp(log_levels[level], lev))
+ level--;
+
+ return level;
+}
+
+/*
+ * Returns the log facility for <fac> or -1 if not found.
+ */
+int get_log_facility(const char *fac)
+{
+ int facility;
+
+ facility = NB_LOG_FACILITIES - 1;
+ while (facility >= 0 && strcmp(log_facilities[facility], fac))
+ facility--;
+
+ return facility;
+}
+
+/*
+ * Write a string into the log buffer.
+ * Takes care of quote options.
+ *
+ * Returns the address of the \0 character, or NULL on error
+ */
+char *lf_text_len(char *dst, const char *src, size_t len, size_t size, struct logformat_node *node)
+{
+ if (size < 2)
+ return NULL;
+
+ if (node->options & LOG_OPT_QUOTE) {
+ *(dst++) = '"';
+ size--;
+ }
+
+ if (src && len) {
+ if (++len > size)
+ len = size;
+ len = strlcpy2(dst, src, len);
+
+ size -= len;
+ dst += len;
+ }
+ else if ((node->options & (LOG_OPT_QUOTE|LOG_OPT_MANDATORY)) == LOG_OPT_MANDATORY) {
+ if (size < 2)
+ return NULL;
+ *(dst++) = '-';
+ }
+
+ if (node->options & LOG_OPT_QUOTE) {
+ if (size < 2)
+ return NULL;
+ *(dst++) = '"';
+ }
+
+ *dst = '\0';
+ return dst;
+}
+
+static inline char *lf_text(char *dst, const char *src, size_t size, struct logformat_node *node)
+{
+ return lf_text_len(dst, src, size, size, node);
+}
+
+/*
+ * Write an IP address to the log string.
+ * The +X option writes it in hexadecimal notation, most significant byte on the left
+ */
+char *lf_ip(char *dst, struct sockaddr *sockaddr, size_t size, struct logformat_node *node)
+{
+ char *ret = dst;
+ int iret;
+ char pn[INET6_ADDRSTRLEN];
+
+ if (node->options & LOG_OPT_HEXA) {
+ const unsigned char *addr = (const unsigned char *)&((struct sockaddr_in *)sockaddr)->sin_addr.s_addr;
+ iret = snprintf(dst, size, "%02X%02X%02X%02X", addr[0], addr[1], addr[2], addr[3]);
+ if (iret < 0 || iret > size)
+ return NULL;
+ ret += iret;
+ } else {
+ addr_to_str((struct sockaddr_storage *)sockaddr, pn, sizeof(pn));
+ ret = lf_text(dst, pn, size, node);
+ if (ret == NULL)
+ return NULL;
+ }
+ return ret;
+}
+
+/*
+ * Write a port to the log.
+ * The +X option writes it in hexadecimal notation, most significant byte on the left
+ */
+char *lf_port(char *dst, struct sockaddr *sockaddr, size_t size, struct logformat_node *node)
+{
+ char *ret = dst;
+ int iret;
+
+ if (node->options & LOG_OPT_HEXA) {
+ const unsigned char *port = (const unsigned char *)&((struct sockaddr_in *)sockaddr)->sin_port;
+ iret = snprintf(dst, size, "%02X%02X", port[0], port[1]);
+ if (iret < 0 || iret > size)
+ return NULL;
+ ret += iret;
+ } else {
+ ret = ltoa_o(get_host_port((struct sockaddr_storage *)sockaddr), dst, size);
+ if (ret == NULL)
+ return NULL;
+ }
+ return ret;
+}
+
+/* Re-generate time-based part of the syslog header in RFC3164 format at
+ * the beginning of logheader once a second and return the pointer to the
+ * first character after it.
+ */
+static char *update_log_hdr(const time_t time)
+{
+ static long tvsec;
+ static char *dataptr = NULL; /* backup of last end of header, NULL first time */
+ static struct chunk host = { NULL, 0, 0 };
+ static int sep = 0;
+
+ if (unlikely(time != tvsec || dataptr == NULL)) {
+ /* this string is rebuilt only once a second */
+ struct tm tm;
+ int hdr_len;
+
+ tvsec = time;
+ get_localtime(tvsec, &tm);
+
+ if (unlikely(global.log_send_hostname != host.str)) {
+ host.str = global.log_send_hostname;
+ host.len = host.str ? strlen(host.str) : 0;
+ sep = host.len ? 1 : 0;
+ }
+
+ hdr_len = snprintf(logheader, global.max_syslog_len,
+ "<<<<>%s %2d %02d:%02d:%02d %.*s%*s",
+ monthname[tm.tm_mon],
+ tm.tm_mday, tm.tm_hour, tm.tm_min, tm.tm_sec,
+ host.len, host.str, sep, "");
+ /* WARNING: depending upon implementations, snprintf may return
+ * either -1 or the number of bytes that would be needed to store
+ * the total message. In both cases, we must adjust it.
+ */
+ if (hdr_len < 0 || hdr_len > global.max_syslog_len)
+ hdr_len = global.max_syslog_len;
+
+ dataptr = logheader + hdr_len;
+ }
+
+ dataptr[0] = 0; // ensure we get rid of any previous attempt
+
+ return dataptr;
+}
+
+/* Re-generate time-based part of the syslog header in RFC5424 format at
+ * the beginning of logheader_rfc5424 once a second and return the pointer
+ * to the first character after it.
+ */
+static char *update_log_hdr_rfc5424(const time_t time)
+{
+ static long tvsec;
+ static char *dataptr = NULL; /* backup of last end of header, NULL first time */
+
+ if (unlikely(time != tvsec || dataptr == NULL)) {
+ /* this string is rebuilt only once a second */
+ struct tm tm;
+ int hdr_len;
+
+ tvsec = time;
+ get_localtime(tvsec, &tm);
+
+ hdr_len = snprintf(logheader_rfc5424, global.max_syslog_len,
+ "<<<<>1 %4d-%02d-%02dT%02d:%02d:%02d%.3s:%.2s %s ",
+ tm.tm_year+1900, tm.tm_mon+1, tm.tm_mday,
+ tm.tm_hour, tm.tm_min, tm.tm_sec,
+ localtimezone, localtimezone+3,
+ global.log_send_hostname ? global.log_send_hostname : hostname);
+ /* WARNING: depending upon implementations, snprintf may return
+ * either -1 or the number of bytes that would be needed to store
+ * the total message. In both cases, we must adjust it.
+ */
+ if (hdr_len < 0 || hdr_len > global.max_syslog_len)
+ hdr_len = global.max_syslog_len;
+
+ dataptr = logheader_rfc5424 + hdr_len;
+ }
+
+ dataptr[0] = 0; // ensure we get rid of any previous attempt
+
+ return dataptr;
+}
+
+/*
+ * This function sends the syslog message using a printf format string. It
+ * expects an LF-terminated message.
+ */
+void send_log(struct proxy *p, int level, const char *format, ...)
+{
+ va_list argp;
+ int data_len;
+
+ if (level < 0 || format == NULL || logline == NULL)
+ return;
+
+ va_start(argp, format);
+ data_len = vsnprintf(logline, global.max_syslog_len, format, argp);
+ if (data_len < 0 || data_len > global.max_syslog_len)
+ data_len = global.max_syslog_len;
+ va_end(argp);
+
+ __send_log(p, level, logline, data_len, default_rfc5424_sd_log_format, 2);
+}
+
+/*
+ * This function sends a syslog message.
+ * It doesn't care about errors nor does it report them.
+ * It appends an LF character at the end of the message through a dedicated iovec element.
+ * The arguments <sd> and <sd_size> are used for the structured-data part
+ * in RFC5424 formatted syslog messages.
+ */
+void __send_log(struct proxy *p, int level, char *message, size_t size, char *sd, size_t sd_size)
+{
+ static struct iovec iovec[NB_MSG_IOVEC_ELEMENTS] = { };
+ static struct msghdr msghdr = {
+ .msg_iov = iovec,
+ .msg_iovlen = NB_MSG_IOVEC_ELEMENTS
+ };
+ static int logfdunix = -1; /* syslog to AF_UNIX socket */
+ static int logfdinet = -1; /* syslog to AF_INET socket */
+ static char *dataptr = NULL;
+ int fac_level;
+ struct list *logsrvs = NULL;
+ struct logsrv *tmp = NULL;
+ int nblogger;
+ char *hdr, *hdr_ptr;
+ size_t hdr_size;
+ time_t time = date.tv_sec;
+ struct chunk *tag = &global.log_tag;
+ static int curr_pid;
+ static char pidstr[100];
+ static struct chunk pid;
+
+ dataptr = message;
+
+ if (p == NULL) {
+ if (!LIST_ISEMPTY(&global.logsrvs)) {
+ logsrvs = &global.logsrvs;
+ }
+ } else {
+ if (!LIST_ISEMPTY(&p->logsrvs)) {
+ logsrvs = &p->logsrvs;
+ }
+ if (p->log_tag.str) {
+ tag = &p->log_tag;
+ }
+ }
+
+ if (!logsrvs)
+ return;
+
+ if (unlikely(curr_pid != getpid())) {
+ curr_pid = getpid();
+ ltoa_o(curr_pid, pidstr, sizeof(pidstr));
+ chunk_initstr(&pid, pidstr);
+ }
+
+ /* Send log messages to syslog server. */
+ nblogger = 0;
+ list_for_each_entry(tmp, logsrvs, list) {
+ const struct logsrv *logsrv = tmp;
+ int *plogfd = logsrv->addr.ss_family == AF_UNIX ?
+ &logfdunix : &logfdinet;
+ char *pid_sep1 = NULL, *pid_sep2 = NULL;
+ int sent;
+ int maxlen;
+ int hdr_max = 0;
+ int tag_max = 0;
+ int pid_sep1_max = 0;
+ int pid_max = 0;
+ int pid_sep2_max = 0;
+ int sd_max = 0;
+ int max = 0;
+
+ nblogger++;
+
+ /* we can filter the level of the messages that are sent to each logger */
+ if (level > logsrv->level)
+ continue;
+
+ if (unlikely(*plogfd < 0)) {
+ /* socket not successfully initialized yet */
+ int proto = logsrv->addr.ss_family == AF_UNIX ? 0 : IPPROTO_UDP;
+
+ if ((*plogfd = socket(logsrv->addr.ss_family, SOCK_DGRAM, proto)) < 0) {
+ Alert("socket for logger #%d failed: %s (errno=%d)\n",
+ nblogger, strerror(errno), errno);
+ continue;
+ }
+ /* we don't want to receive anything on this socket */
+ setsockopt(*plogfd, SOL_SOCKET, SO_RCVBUF, &zero, sizeof(zero));
+ /* does nothing under Linux, maybe needed for others */
+ shutdown(*plogfd, SHUT_RD);
+ }
+
+ switch (logsrv->format) {
+ case LOG_FORMAT_RFC3164:
+ hdr = logheader;
+ hdr_ptr = update_log_hdr(time);
+ break;
+
+ case LOG_FORMAT_RFC5424:
+ hdr = logheader_rfc5424;
+ hdr_ptr = update_log_hdr_rfc5424(time);
+ sd_max = sd_size; /* the SD part allowed only in RFC5424 */
+ break;
+
+ default:
+ continue; /* must never happen */
+ }
+
+ hdr_size = hdr_ptr - hdr;
+
+ /* For each target, we may have a different facility.
+ * We can also have a different log level for each message.
+ * This induces variations in the message header length.
+ * Since we don't want to recompute it each time, nor copy it every
+ * time, we only change the facility in the pre-computed header,
+ * and we change the pointer to the header accordingly.
+ */
+ fac_level = (logsrv->facility << 3) + MAX(level, logsrv->minlvl);
+ hdr_ptr = hdr + 3; /* last digit of the log level */
+ do {
+ *hdr_ptr = '0' + fac_level % 10;
+ fac_level /= 10;
+ hdr_ptr--;
+ } while (fac_level && hdr_ptr > hdr);
+ *hdr_ptr = '<';
+
+ hdr_max = hdr_size - (hdr_ptr - hdr);
+
+ /* time-based header */
+ if (unlikely(hdr_size >= logsrv->maxlen)) {
+ hdr_max = MIN(hdr_max, logsrv->maxlen) - 1;
+ sd_max = 0;
+ goto send;
+ }
+
+ maxlen = logsrv->maxlen - hdr_max;
+
+ /* tag */
+ tag_max = tag->len;
+ if (unlikely(tag_max >= maxlen)) {
+ tag_max = maxlen - 1;
+ sd_max = 0;
+ goto send;
+ }
+
+ maxlen -= tag_max;
+
+ /* first pid separator */
+ pid_sep1_max = log_formats[logsrv->format].pid.sep1.len;
+ if (unlikely(pid_sep1_max >= maxlen)) {
+ pid_sep1_max = maxlen - 1;
+ sd_max = 0;
+ goto send;
+ }
+
+ pid_sep1 = log_formats[logsrv->format].pid.sep1.str;
+ maxlen -= pid_sep1_max;
+
+ /* pid */
+ pid_max = pid.len;
+ if (unlikely(pid_max >= maxlen)) {
+ pid_max = maxlen - 1;
+ sd_max = 0;
+ goto send;
+ }
+
+ maxlen -= pid_max;
+
+ /* second pid separator */
+ pid_sep2_max = log_formats[logsrv->format].pid.sep2.len;
+ if (unlikely(pid_sep2_max >= maxlen)) {
+ pid_sep2_max = maxlen - 1;
+ sd_max = 0;
+ goto send;
+ }
+
+ pid_sep2 = log_formats[logsrv->format].pid.sep2.str;
+ maxlen -= pid_sep2_max;
+
+ /* structured-data */
+ if (sd_max >= maxlen) {
+ sd_max = maxlen - 1;
+ goto send;
+ }
+
+ max = MIN(size, maxlen - sd_max) - 1;
+send:
+ iovec[0].iov_base = hdr_ptr;
+ iovec[0].iov_len = hdr_max;
+ iovec[1].iov_base = tag->str;
+ iovec[1].iov_len = tag_max;
+ iovec[2].iov_base = pid_sep1;
+ iovec[2].iov_len = pid_sep1_max;
+ iovec[3].iov_base = pid.str;
+ iovec[3].iov_len = pid_max;
+ iovec[4].iov_base = pid_sep2;
+ iovec[4].iov_len = pid_sep2_max;
+ iovec[5].iov_base = sd;
+ iovec[5].iov_len = sd_max;
+ iovec[6].iov_base = dataptr;
+ iovec[6].iov_len = max;
+ iovec[7].iov_base = "\n"; /* insert a \n at the end of the message */
+ iovec[7].iov_len = 1;
+
+ msghdr.msg_name = (struct sockaddr *)&logsrv->addr;
+ msghdr.msg_namelen = get_addr_len(&logsrv->addr);
+
+ sent = sendmsg(*plogfd, &msghdr, MSG_DONTWAIT | MSG_NOSIGNAL);
+
+ if (sent < 0) {
+ Alert("sendmsg logger #%d failed: %s (errno=%d)\n",
+ nblogger, strerror(errno), errno);
+ }
+ }
+}
+
+extern fd_set hdr_encode_map[];
+extern fd_set url_encode_map[];
+extern fd_set http_encode_map[];
+
+
+const char sess_cookie[8] = "NIDVEOU7"; /* No cookie, Invalid cookie, cookie for a Down server, Valid cookie, Expired cookie, Old cookie, Unused, unknown */
+const char sess_set_cookie[8] = "NPDIRU67"; /* No set-cookie, Set-cookie found and left unchanged (passive),
+ Set-cookie Deleted, Set-Cookie Inserted, Set-cookie Rewritten,
+ Set-cookie Updated, unknown, unknown */
+
+/*
+ * try to write a character if there is enough space, or goto out
+ */
+#define LOGCHAR(x) do { \
+ if (tmplog < dst + maxsize - 1) { \
+ *(tmplog++) = (x); \
+ } else { \
+ goto out; \
+ } \
+ } while(0)
+
+
+/* Builds a log line in <dst> based on <list_format>, and stops before reaching
+ * <maxsize> characters. Returns the size of the output string in characters,
+ * not counting the trailing zero which is always added if the resulting size
+ * is not zero.
+ */
+int build_logline(struct stream *s, char *dst, size_t maxsize, struct list *list_format)
+{
+ struct session *sess = strm_sess(s);
+ struct proxy *fe = sess->fe;
+ struct proxy *be = s->be;
+ struct http_txn *txn = s->txn;
+ struct chunk chunk;
+ char *uri;
+ char *spc;
+ char *qmark;
+ char *end;
+ struct tm tm;
+ int t_request;
+ int hdr;
+ int last_isspace = 1;
+ int nspaces = 0;
+ char *tmplog;
+ char *ret;
+ int iret;
+ struct logformat_node *tmp;
+
+ /* FIXME: let's limit ourselves to frontend logging for now. */
+
+ t_request = -1;
+ if (tv_isge(&s->logs.tv_request, &s->logs.tv_accept))
+ t_request = tv_ms_elapsed(&s->logs.tv_accept, &s->logs.tv_request);
+
+ tmplog = dst;
+
+ /* fill logbuffer */
+ if (LIST_ISEMPTY(list_format))
+ return 0;
+
+ list_for_each_entry(tmp, list_format, list) {
+ struct connection *conn;
+ const char *src = NULL;
+ struct sample *key;
+ const struct chunk empty = { NULL, 0, 0 };
+
+ switch (tmp->type) {
+ case LOG_FMT_SEPARATOR:
+ if (!last_isspace) {
+ LOGCHAR(' ');
+ last_isspace = 1;
+ }
+ break;
+
+ case LOG_FMT_TEXT: // text
+ src = tmp->arg;
+ iret = strlcpy2(tmplog, src, dst + maxsize - tmplog);
+ if (iret == 0)
+ goto out;
+ tmplog += iret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_EXPR: // sample expression, may be request or response
+ key = NULL;
+ if (tmp->options & LOG_OPT_REQ_CAP)
+ key = sample_fetch_as_type(be, sess, s, SMP_OPT_DIR_REQ|SMP_OPT_FINAL, tmp->expr, SMP_T_STR);
+ if (!key && (tmp->options & LOG_OPT_RES_CAP))
+ key = sample_fetch_as_type(be, sess, s, SMP_OPT_DIR_RES|SMP_OPT_FINAL, tmp->expr, SMP_T_STR);
+ if (tmp->options & LOG_OPT_HTTP)
+ ret = encode_chunk(tmplog, dst + maxsize,
+ '%', http_encode_map, key ? &key->data.u.str : &empty);
+ else
+ ret = lf_text_len(tmplog, key ? key->data.u.str.str : NULL, key ? key->data.u.str.len : 0, dst + maxsize - tmplog, tmp);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_CLIENTIP: // %ci
+ conn = objt_conn(sess->origin);
+ if (conn)
+ ret = lf_ip(tmplog, (struct sockaddr *)&conn->addr.from, dst + maxsize - tmplog, tmp);
+ else
+ ret = lf_text_len(tmplog, NULL, 0, dst + maxsize - tmplog, tmp);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_CLIENTPORT: // %cp
+ conn = objt_conn(sess->origin);
+ if (conn) {
+ if (conn->addr.from.ss_family == AF_UNIX) {
+ ret = ltoa_o(sess->listener->luid, tmplog, dst + maxsize - tmplog);
+ } else {
+ ret = lf_port(tmplog, (struct sockaddr *)&conn->addr.from,
+ dst + maxsize - tmplog, tmp);
+ }
+ }
+ else
+ ret = lf_text_len(tmplog, NULL, 0, dst + maxsize - tmplog, tmp);
+
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_FRONTENDIP: // %fi
+ conn = objt_conn(sess->origin);
+ if (conn) {
+ conn_get_to_addr(conn);
+ ret = lf_ip(tmplog, (struct sockaddr *)&conn->addr.to, dst + maxsize - tmplog, tmp);
+ }
+ else
+ ret = lf_text_len(tmplog, NULL, 0, dst + maxsize - tmplog, tmp);
+
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_FRONTENDPORT: // %fp
+ conn = objt_conn(sess->origin);
+ if (conn) {
+ conn_get_to_addr(conn);
+ if (conn->addr.to.ss_family == AF_UNIX)
+ ret = ltoa_o(sess->listener->luid, tmplog, dst + maxsize - tmplog);
+ else
+ ret = lf_port(tmplog, (struct sockaddr *)&conn->addr.to, dst + maxsize - tmplog, tmp);
+ }
+ else
+ ret = lf_text_len(tmplog, NULL, 0, dst + maxsize - tmplog, tmp);
+
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_BACKENDIP: // %bi
+ conn = objt_conn(s->si[1].end);
+ if (conn)
+ ret = lf_ip(tmplog, (struct sockaddr *)&conn->addr.from, dst + maxsize - tmplog, tmp);
+ else
+ ret = lf_text_len(tmplog, NULL, 0, dst + maxsize - tmplog, tmp);
+
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_BACKENDPORT: // %bp
+ conn = objt_conn(s->si[1].end);
+ if (conn)
+ ret = lf_port(tmplog, (struct sockaddr *)&conn->addr.from, dst + maxsize - tmplog, tmp);
+ else
+ ret = lf_text_len(tmplog, NULL, 0, dst + maxsize - tmplog, tmp);
+
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_SERVERIP: // %si
+ conn = objt_conn(s->si[1].end);
+ if (conn)
+ ret = lf_ip(tmplog, (struct sockaddr *)&conn->addr.to, dst + maxsize - tmplog, tmp);
+ else
+ ret = lf_text_len(tmplog, NULL, 0, dst + maxsize - tmplog, tmp);
+
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_SERVERPORT: // %sp
+ conn = objt_conn(s->si[1].end);
+ if (conn)
+ ret = lf_port(tmplog, (struct sockaddr *)&conn->addr.to, dst + maxsize - tmplog, tmp);
+ else
+ ret = lf_text_len(tmplog, NULL, 0, dst + maxsize - tmplog, tmp);
+
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_DATE: // %t
+ get_localtime(s->logs.accept_date.tv_sec, &tm);
+ ret = date2str_log(tmplog, &tm, &(s->logs.accept_date),
+ dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_DATEGMT: // %T
+ get_gmtime(s->logs.accept_date.tv_sec, &tm);
+ ret = gmt2str_log(tmplog, &tm, dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_DATELOCAL: // %Tl
+ get_localtime(s->logs.accept_date.tv_sec, &tm);
+ ret = localdate2str_log(tmplog, &tm, dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_TS: // %Ts
+ get_gmtime(s->logs.accept_date.tv_sec, &tm);
+ if (tmp->options & LOG_OPT_HEXA) {
+ iret = snprintf(tmplog, dst + maxsize - tmplog, "%04X", (unsigned int)s->logs.accept_date.tv_sec);
+ if (iret < 0 || iret > dst + maxsize - tmplog)
+ goto out;
+ last_isspace = 0;
+ tmplog += iret;
+ } else {
+ ret = ltoa_o(s->logs.accept_date.tv_sec, tmplog, dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ }
+ break;
+
+ case LOG_FMT_MS: // %ms
+ if (tmp->options & LOG_OPT_HEXA) {
+ iret = snprintf(tmplog, dst + maxsize - tmplog, "%02X",(unsigned int)s->logs.accept_date.tv_usec/1000);
+ if (iret < 0 || iret > dst + maxsize - tmplog)
+ goto out;
+ last_isspace = 0;
+ tmplog += iret;
+ } else {
+ if ((dst + maxsize - tmplog) < 4)
+ goto out;
+ ret = utoa_pad((unsigned int)s->logs.accept_date.tv_usec/1000,
+ tmplog, 4);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ }
+ break;
+
+ case LOG_FMT_FRONTEND: // %f
+ src = fe->id;
+ ret = lf_text(tmplog, src, dst + maxsize - tmplog, tmp);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_FRONTEND_XPRT: // %ft
+ src = fe->id;
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+ iret = strlcpy2(tmplog, src, dst + maxsize - tmplog);
+ if (iret == 0)
+ goto out;
+ tmplog += iret;
+#ifdef USE_OPENSSL
+ if (sess->listener->xprt == &ssl_sock)
+ LOGCHAR('~');
+#endif
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+ last_isspace = 0;
+ break;
+#ifdef USE_OPENSSL
+ case LOG_FMT_SSL_CIPHER: // %sslc
+ src = NULL;
+ conn = objt_conn(sess->origin);
+ if (conn) {
+ if (sess->listener->xprt == &ssl_sock)
+ src = ssl_sock_get_cipher_name(conn);
+ }
+ ret = lf_text(tmplog, src, dst + maxsize - tmplog, tmp);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_SSL_VERSION: // %sslv
+ src = NULL;
+ conn = objt_conn(sess->origin);
+ if (conn) {
+ if (sess->listener->xprt == &ssl_sock)
+ src = ssl_sock_get_proto_version(conn);
+ }
+ ret = lf_text(tmplog, src, dst + maxsize - tmplog, tmp);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+#endif
+ case LOG_FMT_BACKEND: // %b
+ src = be->id;
+ ret = lf_text(tmplog, src, dst + maxsize - tmplog, tmp);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_SERVER: // %s
+ switch (obj_type(s->target)) {
+ case OBJ_TYPE_SERVER:
+ src = objt_server(s->target)->id;
+ break;
+ case OBJ_TYPE_APPLET:
+ src = objt_applet(s->target)->name;
+ break;
+ default:
+ src = "<NOSRV>";
+ break;
+ }
+ ret = lf_text(tmplog, src, dst + maxsize - tmplog, tmp);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_TQ: // %Tq
+ ret = ltoa_o(t_request, tmplog, dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_TW: // %Tw
+ ret = ltoa_o((s->logs.t_queue >= 0) ? s->logs.t_queue - t_request : -1,
+ tmplog, dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_TC: // %Tc
+ ret = ltoa_o((s->logs.t_connect >= 0) ? s->logs.t_connect - s->logs.t_queue : -1,
+ tmplog, dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_TR: // %Tr
+ ret = ltoa_o((s->logs.t_data >= 0) ? s->logs.t_data - s->logs.t_connect : -1,
+ tmplog, dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_TT: // %Tt
+ if (!(fe->to_log & LW_BYTES))
+ LOGCHAR('+');
+ ret = ltoa_o(s->logs.t_close, tmplog, dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_STATUS: // %ST
+ ret = ltoa_o(txn->status, tmplog, dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_BYTES: // %B
+ if (!(fe->to_log & LW_BYTES))
+ LOGCHAR('+');
+ ret = lltoa(s->logs.bytes_out, tmplog, dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_BYTES_UP: // %U
+ ret = lltoa(s->logs.bytes_in, tmplog, dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_CCLIENT: // %CC
+ src = txn->cli_cookie;
+ ret = lf_text(tmplog, src, dst + maxsize - tmplog, tmp);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_CSERVER: // %CS
+ src = txn->srv_cookie;
+ ret = lf_text(tmplog, src, dst + maxsize - tmplog, tmp);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_TERMSTATE: // %ts
+ LOGCHAR(sess_term_cond[(s->flags & SF_ERR_MASK) >> SF_ERR_SHIFT]);
+ LOGCHAR(sess_fin_state[(s->flags & SF_FINST_MASK) >> SF_FINST_SHIFT]);
+ *tmplog = '\0';
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_TERMSTATE_CK: // %tsc, same as TS with cookie state (for mode HTTP)
+ LOGCHAR(sess_term_cond[(s->flags & SF_ERR_MASK) >> SF_ERR_SHIFT]);
+ LOGCHAR(sess_fin_state[(s->flags & SF_FINST_MASK) >> SF_FINST_SHIFT]);
+ LOGCHAR((be->ck_opts & PR_CK_ANY) ? sess_cookie[(txn->flags & TX_CK_MASK) >> TX_CK_SHIFT] : '-');
+ LOGCHAR((be->ck_opts & PR_CK_ANY) ? sess_set_cookie[(txn->flags & TX_SCK_MASK) >> TX_SCK_SHIFT] : '-');
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_ACTCONN: // %ac
+ ret = ltoa_o(actconn, tmplog, dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_FECONN: // %fc
+ ret = ltoa_o(fe->feconn, tmplog, dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_BECONN: // %bc
+ ret = ltoa_o(be->beconn, tmplog, dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_SRVCONN: // %sc
+ ret = ultoa_o(objt_server(s->target) ?
+ objt_server(s->target)->cur_sess :
+ 0, tmplog, dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_RETRIES: // %rq
+ if (s->flags & SF_REDISP)
+ LOGCHAR('+');
+ ret = ltoa_o((s->si[1].conn_retries>0) ?
+ (be->conn_retries - s->si[1].conn_retries) :
+ be->conn_retries, tmplog, dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_SRVQUEUE: // %sq
+ ret = ltoa_o(s->logs.srv_queue_size, tmplog, dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_BCKQUEUE: // %bq
+ ret = ltoa_o(s->logs.prx_queue_size, tmplog, dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_HDRREQUEST: // %hr
+ /* request header */
+ if (fe->nb_req_cap && s->req_cap) {
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+ LOGCHAR('{');
+ for (hdr = 0; hdr < fe->nb_req_cap; hdr++) {
+ if (hdr)
+ LOGCHAR('|');
+ if (s->req_cap[hdr] != NULL) {
+ ret = encode_string(tmplog, dst + maxsize,
+ '#', hdr_encode_map, s->req_cap[hdr]);
+ if (ret == NULL || *ret != '\0')
+ goto out;
+ tmplog = ret;
+ }
+ }
+ LOGCHAR('}');
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+ last_isspace = 0;
+ }
+ break;
+
+ case LOG_FMT_HDRREQUESTLIST: // %hrl
+ /* request header list */
+ if (fe->nb_req_cap && s->req_cap) {
+ for (hdr = 0; hdr < fe->nb_req_cap; hdr++) {
+ if (hdr > 0)
+ LOGCHAR(' ');
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+ if (s->req_cap[hdr] != NULL) {
+ ret = encode_string(tmplog, dst + maxsize,
+ '#', hdr_encode_map, s->req_cap[hdr]);
+ if (ret == NULL || *ret != '\0')
+ goto out;
+ tmplog = ret;
+ } else if (!(tmp->options & LOG_OPT_QUOTE))
+ LOGCHAR('-');
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+ last_isspace = 0;
+ }
+ }
+ break;
+
+
+ case LOG_FMT_HDRRESPONS: // %hs
+ /* response header */
+ if (fe->nb_rsp_cap && s->res_cap) {
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+ LOGCHAR('{');
+ for (hdr = 0; hdr < fe->nb_rsp_cap; hdr++) {
+ if (hdr)
+ LOGCHAR('|');
+ if (s->res_cap[hdr] != NULL) {
+ ret = encode_string(tmplog, dst + maxsize,
+ '#', hdr_encode_map, s->res_cap[hdr]);
+ if (ret == NULL || *ret != '\0')
+ goto out;
+ tmplog = ret;
+ }
+ }
+ LOGCHAR('}');
+ last_isspace = 0;
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+ }
+ break;
+
+ case LOG_FMT_HDRRESPONSLIST: // %hsl
+ /* response header list */
+ if (fe->nb_rsp_cap && s->res_cap) {
+ for (hdr = 0; hdr < fe->nb_rsp_cap; hdr++) {
+ if (hdr > 0)
+ LOGCHAR(' ');
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+ if (s->res_cap[hdr] != NULL) {
+ ret = encode_string(tmplog, dst + maxsize,
+ '#', hdr_encode_map, s->res_cap[hdr]);
+ if (ret == NULL || *ret != '\0')
+ goto out;
+ tmplog = ret;
+ } else if (!(tmp->options & LOG_OPT_QUOTE))
+ LOGCHAR('-');
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+ last_isspace = 0;
+ }
+ }
+ break;
+
+ case LOG_FMT_REQ: // %r
+ /* Request */
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+ uri = txn->uri ? txn->uri : "<BADREQ>";
+ ret = encode_string(tmplog, dst + maxsize,
+ '#', url_encode_map, uri);
+ if (ret == NULL || *ret != '\0')
+ goto out;
+ tmplog = ret;
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_HTTP_PATH: // %HP
+ uri = txn->uri ? txn->uri : "<BADREQ>";
+
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+
+ end = uri + strlen(uri);
+ // look for the first whitespace character
+ while (uri < end && !HTTP_IS_SPHT(*uri))
+ uri++;
+
+ // keep advancing past multiple spaces
+ while (uri < end && HTTP_IS_SPHT(*uri)) {
+ uri++; nspaces++;
+ }
+
+ // look for first space or question mark after url
+ spc = uri;
+ while (spc < end && *spc != '?' && !HTTP_IS_SPHT(*spc))
+ spc++;
+
+ if (!txn->uri || nspaces == 0) {
+ chunk.str = "<BADREQ>";
+ chunk.len = strlen("<BADREQ>");
+ } else {
+ chunk.str = uri;
+ chunk.len = spc - uri;
+ }
+
+ ret = encode_chunk(tmplog, dst + maxsize, '#', url_encode_map, &chunk);
+ if (ret == NULL || *ret != '\0')
+ goto out;
+
+ tmplog = ret;
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_HTTP_QUERY: // %HQ
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+
+ if (!txn->uri) {
+ chunk.str = "<BADREQ>";
+ chunk.len = strlen("<BADREQ>");
+ } else {
+ uri = txn->uri;
+ end = uri + strlen(uri);
+ // look for the first question mark
+ while (uri < end && *uri != '?')
+ uri++;
+
+ qmark = uri;
+ // look for first space or question mark after url
+ while (uri < end && !HTTP_IS_SPHT(*uri))
+ uri++;
+
+ chunk.str = qmark;
+ chunk.len = uri - qmark;
+ }
+
+ ret = encode_chunk(tmplog, dst + maxsize, '#', url_encode_map, &chunk);
+ if (ret == NULL || *ret != '\0')
+ goto out;
+
+ tmplog = ret;
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_HTTP_URI: // %HU
+ uri = txn->uri ? txn->uri : "<BADREQ>";
+
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+
+ end = uri + strlen(uri);
+ // look for the first whitespace character
+ while (uri < end && !HTTP_IS_SPHT(*uri))
+ uri++;
+
+ // keep advancing past multiple spaces
+ while (uri < end && HTTP_IS_SPHT(*uri)) {
+ uri++; nspaces++;
+ }
+
+ // look for first space after url
+ spc = uri;
+ while (spc < end && !HTTP_IS_SPHT(*spc))
+ spc++;
+
+ if (!txn->uri || nspaces == 0) {
+ chunk.str = "<BADREQ>";
+ chunk.len = strlen("<BADREQ>");
+ } else {
+ chunk.str = uri;
+ chunk.len = spc - uri;
+ }
+
+ ret = encode_chunk(tmplog, dst + maxsize, '#', url_encode_map, &chunk);
+ if (ret == NULL || *ret != '\0')
+ goto out;
+
+ tmplog = ret;
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_HTTP_METHOD: // %HM
+ uri = txn->uri ? txn->uri : "<BADREQ>";
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+
+ end = uri + strlen(uri);
+ // look for the first whitespace character
+ spc = uri;
+ while (spc < end && !HTTP_IS_SPHT(*spc))
+ spc++;
+
+ if (spc == end) { // odd case, we have txn->uri, but we only got a verb
+ chunk.str = "<BADREQ>";
+ chunk.len = strlen("<BADREQ>");
+ } else {
+ chunk.str = uri;
+ chunk.len = spc - uri;
+ }
+
+ ret = encode_chunk(tmplog, dst + maxsize, '#', url_encode_map, &chunk);
+ if (ret == NULL || *ret != '\0')
+ goto out;
+
+ tmplog = ret;
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_HTTP_VERSION: // %HV
+ uri = txn->uri ? txn->uri : "<BADREQ>";
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+
+ end = uri + strlen(uri);
+ // look for the first whitespace character
+ while (uri < end && !HTTP_IS_SPHT(*uri))
+ uri++;
+
+ // keep advancing past multiple spaces
+ while (uri < end && HTTP_IS_SPHT(*uri)) {
+ uri++; nspaces++;
+ }
+
+ // look for the next whitespace character
+ while (uri < end && !HTTP_IS_SPHT(*uri))
+ uri++;
+
+ // keep advancing past multiple spaces
+ while (uri < end && HTTP_IS_SPHT(*uri))
+ uri++;
+
+ if (!txn->uri || nspaces == 0) {
+ chunk.str = "<BADREQ>";
+ chunk.len = strlen("<BADREQ>");
+ } else if (uri == end) {
+ chunk.str = "HTTP/0.9";
+ chunk.len = strlen("HTTP/0.9");
+ } else {
+ chunk.str = uri;
+ chunk.len = end - uri;
+ }
+
+ ret = encode_chunk(tmplog, dst + maxsize, '#', url_encode_map, &chunk);
+ if (ret == NULL || *ret != '\0')
+ goto out;
+
+ tmplog = ret;
+ if (tmp->options & LOG_OPT_QUOTE)
+ LOGCHAR('"');
+
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_COUNTER: // %rt
+ if (tmp->options & LOG_OPT_HEXA) {
+ iret = snprintf(tmplog, dst + maxsize - tmplog, "%04X", s->uniq_id);
+ if (iret < 0 || iret > dst + maxsize - tmplog)
+ goto out;
+ last_isspace = 0;
+ tmplog += iret;
+ } else {
+ ret = ltoa_o(s->uniq_id, tmplog, dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ }
+ break;
+
+ case LOG_FMT_LOGCNT: // %lc
+ if (tmp->options & LOG_OPT_HEXA) {
+ iret = snprintf(tmplog, dst + maxsize - tmplog, "%04X", fe->log_count);
+ if (iret < 0 || iret > dst + maxsize - tmplog)
+ goto out;
+ last_isspace = 0;
+ tmplog += iret;
+ } else {
+ ret = ultoa_o(fe->log_count, tmplog, dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ }
+ break;
+
+ case LOG_FMT_HOSTNAME: // %H
+ src = hostname;
+ ret = lf_text(tmplog, src, dst + maxsize - tmplog, tmp);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ case LOG_FMT_PID: // %pid
+ if (tmp->options & LOG_OPT_HEXA) {
+ iret = snprintf(tmplog, dst + maxsize - tmplog, "%04X", pid);
+ if (iret < 0 || iret > dst + maxsize - tmplog)
+ goto out;
+ last_isspace = 0;
+ tmplog += iret;
+ } else {
+ ret = ltoa_o(pid, tmplog, dst + maxsize - tmplog);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ }
+ break;
+
+ case LOG_FMT_UNIQUEID: // %ID
+ ret = NULL;
+ src = s->unique_id;
+ ret = lf_text(tmplog, src, maxsize - (tmplog - dst), tmp);
+ if (ret == NULL)
+ goto out;
+ tmplog = ret;
+ last_isspace = 0;
+ break;
+
+ }
+ }
+
+out:
+	/* *tmplog is an unused character */
+ *tmplog = '\0';
+ return tmplog - dst;
+
+}
+
+/*
+ * send a log for the stream when we have enough info about it.
+ * Will not log if the frontend has no log defined.
+ */
+void strm_log(struct stream *s)
+{
+ struct session *sess = s->sess;
+ int size, err, level;
+ int sd_size = 0;
+
+ /* if we don't want to log normal traffic, return now */
+ err = (s->flags & SF_REDISP) ||
+ ((s->flags & SF_ERR_MASK) > SF_ERR_LOCAL) ||
+ (((s->flags & SF_ERR_MASK) == SF_ERR_NONE) &&
+ (s->si[1].conn_retries != s->be->conn_retries)) ||
+ ((sess->fe->mode == PR_MODE_HTTP) && s->txn && s->txn->status >= 500);
+
+ if (!err && (sess->fe->options2 & PR_O2_NOLOGNORM))
+ return;
+
+ if (LIST_ISEMPTY(&sess->fe->logsrvs))
+ return;
+
+ if (s->logs.level) { /* loglevel was overridden */
+ if (s->logs.level == -1) {
+ s->logs.logwait = 0; /* logs disabled */
+ return;
+ }
+ level = s->logs.level - 1;
+ }
+ else {
+ level = LOG_INFO;
+ if (err && (sess->fe->options2 & PR_O2_LOGERRORS))
+ level = LOG_ERR;
+ }
+
+ /* if unique-id was not generated */
+ if (!s->unique_id && !LIST_ISEMPTY(&sess->fe->format_unique_id)) {
+ if ((s->unique_id = pool_alloc2(pool2_uniqueid)) != NULL)
+ build_logline(s, s->unique_id, UNIQUEID_LEN, &sess->fe->format_unique_id);
+ }
+
+ if (!LIST_ISEMPTY(&sess->fe->logformat_sd)) {
+ sd_size = build_logline(s, logline_rfc5424, global.max_syslog_len,
+ &sess->fe->logformat_sd);
+ }
+
+ size = build_logline(s, logline, global.max_syslog_len, &sess->fe->logformat);
+ if (size > 0) {
+ sess->fe->log_count++;
+ __send_log(sess->fe, level, logline, size + 1, logline_rfc5424, sd_size);
+ s->logs.logwait = 0;
+ }
+}
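strm_log() above folds the per-stream log level override into a single integer: 0 means no override, -1 means logging disabled, and any other value stores the desired syslog level plus one. A minimal standalone sketch of that decoding (the helper name `effective_level` is illustrative, not part of HAProxy):

```c
#include <assert.h>
#include <syslog.h>

/* Decode the override convention used by strm_log():
 *   0  -> no override: LOG_ERR for errors when error logging is enabled,
 *         LOG_INFO otherwise
 *  -1  -> logging disabled (returned here as -1)
 *  n>0 -> explicit level, stored shifted by one so that 0 stays "unset"
 */
static int effective_level(int override, int err, int log_errors_opt)
{
	if (override == -1)
		return -1;              /* logs disabled */
	if (override)
		return override - 1;    /* explicit override, stored +1 */
	return (err && log_errors_opt) ? LOG_ERR : LOG_INFO;
}
```

Shifting by one keeps zero usable as the "untouched" default while still letting level 0 (LOG_EMERG) be selected explicitly (stored as 1).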
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2015 Willy Tarreau <w@1wt.eu>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining
+ * a copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sublicense, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include <import/lru.h>
+
+/* Minimal list manipulation macros for lru64_list */
+#define LIST_ADD(lh, el) ({ (el)->n = (lh)->n; (el)->n->p = (lh)->n = (el); (el)->p = (lh); })
+#define LIST_DEL(el) ({ (el)->n->p = (el)->p; (el)->p->n = (el)->n; })
+
+
+/* Lookup key <key> in LRU cache <lru> for use with domain <domain> whose data's
+ * current version is <revision>. It differs from lru64_get as it does not
+ * create missing keys. The function returns NULL if an error or a cache miss
+ * occurs. */
+struct lru64 *lru64_lookup(unsigned long long key, struct lru64_head *lru,
+ void *domain, unsigned long long revision)
+{
+ struct eb64_node *node;
+ struct lru64 *elem;
+
+ node = __eb64_lookup(&lru->keys, key);
+ elem = container_of(node, typeof(*elem), node);
+ if (elem) {
+		/* Existing entry found, check validity then move it to the
+		 * head of the LRU list.
+		 */
+ if (elem->domain == domain && elem->revision == revision) {
+ LIST_DEL(&elem->lru);
+ LIST_ADD(&lru->list, &elem->lru);
+ return elem;
+ }
+ }
+ return NULL;
+}
+
+/* Get key <key> from LRU cache <lru> for use with domain <domain> whose data's
+ * current revision is <revision>. If the key doesn't exist it's first created
+ * with ->domain = NULL. The caller detects this situation by checking ->domain
+ * and must perform the operation to be cached then call lru64_commit() to
+ * complete the operation. A lock (mutex or spinlock) may be added around the
+ * function to permit use in a multi-threaded environment. The function may
+ * return NULL upon memory allocation failure.
+ */
+struct lru64 *lru64_get(unsigned long long key, struct lru64_head *lru,
+ void *domain, unsigned long long revision)
+{
+ struct eb64_node *node;
+ struct lru64 *elem;
+
+ if (!lru->spare) {
+ if (!lru->cache_size)
+ return NULL;
+ lru->spare = malloc(sizeof(*lru->spare));
+ if (!lru->spare)
+ return NULL;
+ lru->spare->domain = NULL;
+ }
+
+ /* Lookup or insert */
+ lru->spare->node.key = key;
+ node = __eb64_insert(&lru->keys, &lru->spare->node);
+ elem = container_of(node, typeof(*elem), node);
+
+ if (elem != lru->spare) {
+		/* Existing entry found, check validity then move it to the
+		 * head of the LRU list.
+		 */
+ if (elem->domain == domain && elem->revision == revision) {
+ LIST_DEL(&elem->lru);
+ LIST_ADD(&lru->list, &elem->lru);
+ return elem;
+ }
+
+ if (!elem->domain)
+ return NULL; // currently locked
+
+ /* recycle this entry */
+ LIST_DEL(&elem->lru);
+ }
+ else {
+ /* New entry inserted, initialize and move to the head of the
+ * LRU list, and lock it until commit.
+ */
+ lru->cache_usage++;
+ lru->spare = NULL; // used, need a new one next time
+ }
+
+ elem->domain = NULL;
+ LIST_ADD(&lru->list, &elem->lru);
+
+ if (lru->cache_usage > lru->cache_size) {
+ /* try to kill oldest entry */
+ struct lru64 *old;
+
+ old = container_of(lru->list.p, typeof(*old), lru);
+ if (old->domain) {
+ /* not locked */
+ LIST_DEL(&old->lru);
+ __eb64_delete(&old->node);
+ if (old->data && old->free)
+ old->free(old->data);
+ if (!lru->spare)
+ lru->spare = old;
+ else {
+ free(old);
+ }
+ lru->cache_usage--;
+ }
+ }
+ return elem;
+}
+
+/* Commit element <elem> with data <data>, domain <domain> and revision
+ * <revision>. <elem> is checked for NULL so that it's possible to call it
+ * with the result from a call to lru64_get(). The caller might lock it using a
+ * spinlock or mutex shared with the one around lru64_get().
+ */
+void lru64_commit(struct lru64 *elem, void *data, void *domain,
+ unsigned long long revision, void (*free)(void *))
+{
+ if (!elem)
+ return;
+
+ elem->data = data;
+ elem->revision = revision;
+ elem->domain = domain;
+ elem->free = free;
+}
+
+/* Create a new LRU cache of <size> entries. Returns the new cache or NULL in
+ * case of allocation failure.
+ */
+struct lru64_head *lru64_new(int size)
+{
+ struct lru64_head *lru;
+
+ lru = malloc(sizeof(*lru));
+ if (lru) {
+ lru->list.p = lru->list.n = &lru->list;
+ lru->keys = EB_ROOT_UNIQUE;
+ lru->spare = NULL;
+ lru->cache_size = size;
+ lru->cache_usage = 0;
+ }
+ return lru;
+}
+
+/* Tries to destroy the LRU cache <lru>. Returns the number of locked entries
+ * that prevent it from being destroyed, or zero meaning everything was done.
+ */
+int lru64_destroy(struct lru64_head *lru)
+{
+ struct lru64 *elem, *next;
+
+ if (!lru)
+ return 0;
+
+ elem = container_of(lru->list.p, typeof(*elem), lru);
+ while (&elem->lru != &lru->list) {
+ next = container_of(elem->lru.p, typeof(*next), lru);
+ if (elem->domain) {
+ /* not locked */
+ LIST_DEL(&elem->lru);
+ eb64_delete(&elem->node);
+ if (elem->data && elem->free)
+ elem->free(elem->data);
+ free(elem);
+ lru->cache_usage--;
+ lru->cache_size--;
+ }
+ elem = next;
+ }
+
+ if (lru->cache_usage)
+ return lru->cache_usage;
+
+ free(lru);
+ return 0;
+}
+
+/* The code below is just for validation and performance testing. It's an
+ * example of a function taking some time to return results that could be
+ * cached.
+ */
+#ifdef STANDALONE
+
+#include <stdio.h>
+
+static unsigned int misses;
+
+static unsigned long long sum(unsigned long long x)
+{
+#ifndef TEST_LRU_FAST_OPERATION
+ if (x < 1)
+ return 0;
+ return x + sum(x * 99 / 100 - 1);
+#else
+ return (x << 16) - (x << 8) - 1;
+#endif
+}
+
+static long get_value(struct lru64_head *lru, long a)
+{
+	struct lru64 *item = NULL; /* stays NULL when no LRU is in use */
+
+	if (lru) {
+		item = lru64_get(a, lru, lru, 0);
+		if (item && item->domain)
+			return (long)item->data;
+	}
+	misses++;
+	/* do the painful work here */
+	a = sum(a);
+	if (item)
+		lru64_commit(item, (void *)a, lru, 0, NULL);
+	return a;
+}
+
+/* pass #of loops in argv[1] and set argv[2] to something to use the LRU */
+int main(int argc, char **argv)
+{
+ struct lru64_head *lru = NULL;
+ long long ret;
+ int total, loops;
+
+ if (argc < 2) {
+ printf("Need a number of rounds and optionally an LRU cache size (0..65536)\n");
+ exit(1);
+ }
+
+ total = atoi(argv[1]);
+
+ if (argc > 2) /* cache size */
+ lru = lru64_new(atoi(argv[2]));
+
+ ret = 0;
+ for (loops = 0; loops < total; loops++) {
+ ret += get_value(lru, rand() & 65535);
+ }
+ /* just for accuracy control */
+ printf("ret=%llx, hits=%d, misses=%d (%d %% hits)\n", ret, total-misses, misses, (int)((float)(total-misses) * 100.0 / total));
+
+ while (lru64_destroy(lru));
+
+ return 0;
+}
+
+#endif
--- /dev/null
+/*
+ * Mailer management.
+ *
+ * Copyright 2015 Horms Solutions Ltd, Simon Horman <horms@verge.net.au>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <stdlib.h>
+
+#include <types/mailers.h>
+
+struct mailers *mailers = NULL;
--- /dev/null
+/*
+ * MAP management functions.
+ *
+ * Copyright 2000-2013 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <limits.h>
+#include <stdio.h>
+
+#include <common/standard.h>
+
+#include <types/global.h>
+#include <types/map.h>
+#include <types/pattern.h>
+
+#include <proto/arg.h>
+#include <proto/map.h>
+#include <proto/pattern.h>
+#include <proto/sample.h>
+
+/* Parse an IPv4 or IPv6 address and store it into the sample.
+ * The output type is IPv4 or IPv6.
+ */
+int map_parse_ip(const char *text, struct sample_data *data)
+{
+ int len = strlen(text);
+
+ if (buf2ip(text, len, &data->u.ipv4)) {
+ data->type = SMP_T_IPV4;
+ return 1;
+ }
+ if (buf2ip6(text, len, &data->u.ipv6)) {
+ data->type = SMP_T_IPV6;
+ return 1;
+ }
+ return 0;
+}
+
+/* Parse a string and store a pointer to it into the sample. The original
+ * string must be left in memory because we return a direct memory reference.
+ * The output type is SMP_T_STR. There is no risk that the data will be
+ * overwritten because sample_conv_map() makes a const sample with this
+ * output.
+ */
+int map_parse_str(const char *text, struct sample_data *data)
+{
+ data->u.str.str = (char *)text;
+ data->u.str.len = strlen(text);
+ data->u.str.size = data->u.str.len + 1;
+ data->type = SMP_T_STR;
+ return 1;
+}
+
+/* Parse an integer and convert it to a sample. The output type is always
+ * SMP_T_SINT. The function returns zero (error) if the number is invalid
+ * or followed by trailing characters.
+ */
+int map_parse_int(const char *text, struct sample_data *data)
+{
+ data->type = SMP_T_SINT;
+ data->u.sint = read_int64(&text, text + strlen(text));
+ if (*text != '\0')
+ return 0;
+ return 1;
+}
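map_parse_ip() above relies on HAProxy's buf2ip()/buf2ip6() helpers, trying IPv4 first and falling back to IPv6. The same fallback order can be sketched with the standard inet_pton() standing in for those helpers (the 4/6/0 return convention is arbitrary, chosen for this example only):

```c
#include <arpa/inet.h>
#include <assert.h>

/* Try IPv4 first, then IPv6, mirroring map_parse_ip()'s order.
 * Returns 4 for IPv4, 6 for IPv6, 0 if the text parses as neither. */
static int parse_ip_any(const char *text)
{
	struct in_addr v4;
	struct in6_addr v6;

	if (inet_pton(AF_INET, text, &v4) == 1)
		return 4;
	if (inet_pton(AF_INET6, text, &v6) == 1)
		return 6;
	return 0;
}
```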
+
+/* This function creates and initializes a map descriptor.
+ * Returns NULL in case of an out of memory error.
+ */
+static struct map_descriptor *map_create_descriptor(struct sample_conv *conv)
+{
+ struct map_descriptor *desc;
+
+ desc = calloc(1, sizeof(*desc));
+ if (!desc)
+ return NULL;
+
+ desc->conv = conv;
+
+ return desc;
+}
+
+/* This function loads the map file according to the data type declared in
+ * the "struct sample_conv".
+ *
+ * It chooses the indexing type (ebtree or list) according to the type of
+ * match needed.
+ */
+int sample_load_map(struct arg *arg, struct sample_conv *conv,
+ const char *file, int line, char **err)
+{
+ struct map_descriptor *desc;
+
+ /* create new map descriptor */
+ desc = map_create_descriptor(conv);
+ if (!desc) {
+ memprintf(err, "out of memory");
+ return 0;
+ }
+
+ /* Initialize pattern */
+ pattern_init_head(&desc->pat);
+
+	/* This is the original pattern, it must be freed */
+ desc->do_free = 1;
+
+ /* Set the match method. */
+ desc->pat.match = pat_match_fcts[(long)conv->private];
+ desc->pat.parse = pat_parse_fcts[(long)conv->private];
+ desc->pat.index = pat_index_fcts[(long)conv->private];
+ desc->pat.delete = pat_delete_fcts[(long)conv->private];
+ desc->pat.prune = pat_prune_fcts[(long)conv->private];
+ desc->pat.expect_type = pat_match_types[(long)conv->private];
+
+ /* Set the output parse method. */
+ switch (desc->conv->out_type) {
+ case SMP_T_STR: desc->pat.parse_smp = map_parse_str; break;
+ case SMP_T_SINT: desc->pat.parse_smp = map_parse_int; break;
+ case SMP_T_ADDR: desc->pat.parse_smp = map_parse_ip; break;
+ default:
+		memprintf(err, "map: internal haproxy error: no default parse case for the output type <%d>.",
+		          conv->out_type);
+ return 0;
+ }
+
+ /* Load map. */
+ if (!pattern_read_from_file(&desc->pat, PAT_REF_MAP, arg[0].data.str.str, PAT_MF_NO_DNS,
+ 1, err, file, line))
+ return 0;
+
+	/* Maps of type IP have a string as their default value. This string
+	 * can be an IPv4 or an IPv6 address, so we must convert it.
+	 */
+ if (desc->conv->out_type == SMP_T_ADDR) {
+ struct sample_data data;
+ if (!map_parse_ip(arg[1].data.str.str, &data)) {
+ memprintf(err, "map: cannot parse default ip <%s>.", arg[1].data.str.str);
+ return 0;
+ }
+ if (data.type == SMP_T_IPV4) {
+ arg[1].type = ARGT_IPV4;
+ arg[1].data.ipv4 = data.u.ipv4;
+ } else {
+ arg[1].type = ARGT_IPV6;
+ arg[1].data.ipv6 = data.u.ipv6;
+ }
+ }
+
+ /* replace the first argument by this definition */
+ arg[0].type = ARGT_MAP;
+ arg[0].data.map = desc;
+
+ return 1;
+}
+
+static int sample_conv_map(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct map_descriptor *desc;
+ struct pattern *pat;
+
+ /* get config */
+ desc = arg_p[0].data.map;
+
+ /* Execute the match function. */
+ pat = pattern_exec_match(&desc->pat, smp, 1);
+
+ /* Match case. */
+ if (pat) {
+ /* Copy sample. */
+ if (pat->data) {
+ smp->data = *pat->data;
+ smp->flags |= SMP_F_CONST;
+ return 1;
+ }
+
+ /* Return just int sample containing 1. */
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 1;
+ return 1;
+ }
+
+	/* If no default value is available, the converter fails. */
+ if (arg_p[1].type == ARGT_STOP)
+ return 0;
+
+ /* Return the default value. */
+ switch (desc->conv->out_type) {
+
+ case SMP_T_STR:
+ smp->data.type = SMP_T_STR;
+ smp->flags |= SMP_F_CONST;
+ smp->data.u.str = arg_p[1].data.str;
+ break;
+
+ case SMP_T_SINT:
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = arg_p[1].data.sint;
+ break;
+
+ case SMP_T_ADDR:
+ if (arg_p[1].type == ARGT_IPV4) {
+ smp->data.type = SMP_T_IPV4;
+ smp->data.u.ipv4 = arg_p[1].data.ipv4;
+ } else {
+ smp->data.type = SMP_T_IPV6;
+ smp->data.u.ipv6 = arg_p[1].data.ipv6;
+ }
+ break;
+ }
+
+ return 1;
+}
+
+/* Note: must not be declared <const> as its list will be overwritten
+ *
+ * For the map_*_int keywords, the output is declared as SMP_T_SINT; the parse
+ * function map_parse_int() always produces that type.
+ *
+ * For the map_*_ip keywords, the output is declared as SMP_T_ADDR, and the parse
+ * function map_parse_ip() can provide SMP_T_IPV4 or SMP_T_IPV6 depending on the
+ * patterns found in the file.
+ *
+ * The map_* keywords only emit strings.
+ *
+ * The output type is only used during the configuration parsing. It is used for detecting
+ * compatibility problems.
+ *
+ * The arguments are: <file>[,<default value>]
+ */
+static struct sample_conv_kw_list sample_conv_kws = {ILH, {
+ { "map", sample_conv_map, ARG2(1,STR,STR), sample_load_map, SMP_T_STR, SMP_T_STR, (void *)PAT_MATCH_STR },
+ { "map_str", sample_conv_map, ARG2(1,STR,STR), sample_load_map, SMP_T_STR, SMP_T_STR, (void *)PAT_MATCH_STR },
+ { "map_beg", sample_conv_map, ARG2(1,STR,STR), sample_load_map, SMP_T_STR, SMP_T_STR, (void *)PAT_MATCH_BEG },
+ { "map_sub", sample_conv_map, ARG2(1,STR,STR), sample_load_map, SMP_T_STR, SMP_T_STR, (void *)PAT_MATCH_SUB },
+ { "map_dir", sample_conv_map, ARG2(1,STR,STR), sample_load_map, SMP_T_STR, SMP_T_STR, (void *)PAT_MATCH_DIR },
+ { "map_dom", sample_conv_map, ARG2(1,STR,STR), sample_load_map, SMP_T_STR, SMP_T_STR, (void *)PAT_MATCH_DOM },
+ { "map_end", sample_conv_map, ARG2(1,STR,STR), sample_load_map, SMP_T_STR, SMP_T_STR, (void *)PAT_MATCH_END },
+ { "map_reg", sample_conv_map, ARG2(1,STR,STR), sample_load_map, SMP_T_STR, SMP_T_STR, (void *)PAT_MATCH_REG },
+ { "map_int", sample_conv_map, ARG2(1,STR,STR), sample_load_map, SMP_T_SINT, SMP_T_STR, (void *)PAT_MATCH_INT },
+ { "map_ip", sample_conv_map, ARG2(1,STR,STR), sample_load_map, SMP_T_ADDR, SMP_T_STR, (void *)PAT_MATCH_IP },
+
+ { "map_str_int", sample_conv_map, ARG2(1,STR,SINT), sample_load_map, SMP_T_STR, SMP_T_SINT, (void *)PAT_MATCH_STR },
+ { "map_beg_int", sample_conv_map, ARG2(1,STR,SINT), sample_load_map, SMP_T_STR, SMP_T_SINT, (void *)PAT_MATCH_BEG },
+ { "map_sub_int", sample_conv_map, ARG2(1,STR,SINT), sample_load_map, SMP_T_STR, SMP_T_SINT, (void *)PAT_MATCH_SUB },
+ { "map_dir_int", sample_conv_map, ARG2(1,STR,SINT), sample_load_map, SMP_T_STR, SMP_T_SINT, (void *)PAT_MATCH_DIR },
+ { "map_dom_int", sample_conv_map, ARG2(1,STR,SINT), sample_load_map, SMP_T_STR, SMP_T_SINT, (void *)PAT_MATCH_DOM },
+ { "map_end_int", sample_conv_map, ARG2(1,STR,SINT), sample_load_map, SMP_T_STR, SMP_T_SINT, (void *)PAT_MATCH_END },
+ { "map_reg_int", sample_conv_map, ARG2(1,STR,SINT), sample_load_map, SMP_T_STR, SMP_T_SINT, (void *)PAT_MATCH_REG },
+ { "map_int_int", sample_conv_map, ARG2(1,STR,SINT), sample_load_map, SMP_T_SINT, SMP_T_SINT, (void *)PAT_MATCH_INT },
+ { "map_ip_int", sample_conv_map, ARG2(1,STR,SINT), sample_load_map, SMP_T_ADDR, SMP_T_SINT, (void *)PAT_MATCH_IP },
+
+ { "map_str_ip", sample_conv_map, ARG2(1,STR,STR), sample_load_map, SMP_T_STR, SMP_T_ADDR, (void *)PAT_MATCH_STR },
+ { "map_beg_ip", sample_conv_map, ARG2(1,STR,STR), sample_load_map, SMP_T_STR, SMP_T_ADDR, (void *)PAT_MATCH_BEG },
+ { "map_sub_ip", sample_conv_map, ARG2(1,STR,STR), sample_load_map, SMP_T_STR, SMP_T_ADDR, (void *)PAT_MATCH_SUB },
+ { "map_dir_ip", sample_conv_map, ARG2(1,STR,STR), sample_load_map, SMP_T_STR, SMP_T_ADDR, (void *)PAT_MATCH_DIR },
+ { "map_dom_ip", sample_conv_map, ARG2(1,STR,STR), sample_load_map, SMP_T_STR, SMP_T_ADDR, (void *)PAT_MATCH_DOM },
+ { "map_end_ip", sample_conv_map, ARG2(1,STR,STR), sample_load_map, SMP_T_STR, SMP_T_ADDR, (void *)PAT_MATCH_END },
+ { "map_reg_ip", sample_conv_map, ARG2(1,STR,STR), sample_load_map, SMP_T_STR, SMP_T_ADDR, (void *)PAT_MATCH_REG },
+ { "map_int_ip", sample_conv_map, ARG2(1,STR,STR), sample_load_map, SMP_T_SINT, SMP_T_ADDR, (void *)PAT_MATCH_INT },
+ { "map_ip_ip", sample_conv_map, ARG2(1,STR,STR), sample_load_map, SMP_T_ADDR, SMP_T_ADDR, (void *)PAT_MATCH_IP },
+
+ { /* END */ },
+}};
+
+__attribute__((constructor))
+static void __map_init(void)
+{
+ /* register format conversion keywords */
+ sample_register_convs(&sample_conv_kws);
+}
--- /dev/null
+/*
+ * Memory management functions.
+ *
+ * Copyright 2000-2007 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <types/global.h>
+#include <common/config.h>
+#include <common/debug.h>
+#include <common/memory.h>
+#include <common/mini-clist.h>
+#include <common/standard.h>
+
+#include <proto/log.h>
+
+static struct list pools = LIST_HEAD_INIT(pools);
+int mem_poison_byte = -1;
+
+/* Tries to find an existing shared pool with the same characteristics and
+ * returns it, otherwise creates a new one. NULL is returned if no memory
+ * is available for the creation.
+ */
+struct pool_head *create_pool(char *name, unsigned int size, unsigned int flags)
+{
+ struct pool_head *pool;
+ struct pool_head *entry;
+ struct list *start;
+ unsigned int align;
+
+ /* We need to store at least a (void *) in the chunks. Since we know
+ * that the malloc() function will never return such a small size,
+ * let's round the size up to something slightly bigger, in order to
+ * ease merging of entries. Note that the rounding is a power of two.
+ */
+
+ align = 16;
+ size = (size + align - 1) & -align;
+
+ start = &pools;
+ pool = NULL;
+
+ list_for_each_entry(entry, &pools, list) {
+ if (entry->size == size) {
+ /* either we can share this place and we take it, or
+ * we look for a sharable one or for the next position
+ * before which we will insert a new one.
+ */
+ if (flags & entry->flags & MEM_F_SHARED) {
+ /* we can share this one */
+ pool = entry;
+ DPRINTF(stderr, "Sharing %s with %s\n", name, pool->name);
+ break;
+ }
+ }
+ else if (entry->size > size) {
+ /* insert before this one */
+ start = &entry->list;
+ break;
+ }
+ }
+
+ if (!pool) {
+ pool = CALLOC(1, sizeof(*pool));
+ if (!pool)
+ return NULL;
+ if (name)
+ strlcpy2(pool->name, name, sizeof(pool->name));
+ pool->size = size;
+ pool->flags = flags;
+ LIST_ADDQ(start, &pool->list);
+ }
+ pool->users++;
+ return pool;
+}
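create_pool() rounds the requested size up with `(size + align - 1) & -align`, which works for any power-of-two alignment because `-align` is exactly the mask with the low bits cleared. Isolated for clarity:

```c
#include <assert.h>

/* Round size up to the next multiple of align (align must be a power of
 * two): adding align-1 crosses the boundary if needed, and masking with
 * -align clears the low bits back down to the multiple. */
static unsigned int round_up(unsigned int size, unsigned int align)
{
	return (size + align - 1) & -align;
}
```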
+
+/* Allocates new entries for pool <pool> until there are at least <avail> + 1
+ * available, then returns the last one for immediate use, so that at least
+ * <avail> are left available in the pool upon return. NULL is returned if the
+ * last entry could not be allocated. It's important to note that at least one
+ * allocation is always performed even if there are enough entries in the pool.
+ * A call to the garbage collector is performed at most once in case malloc()
+ * returns an error, before returning NULL.
+ */
+void *pool_refill_alloc(struct pool_head *pool, unsigned int avail)
+{
+ void *ptr = NULL;
+ int failed = 0;
+
+ /* stop point */
+ avail += pool->used;
+
+ while (1) {
+ if (pool->limit && pool->allocated >= pool->limit)
+ return NULL;
+
+ ptr = MALLOC(pool->size);
+ if (!ptr) {
+ if (failed)
+ return NULL;
+ failed++;
+ pool_gc2();
+ continue;
+ }
+ if (++pool->allocated > avail)
+ break;
+
+ *(void **)ptr = (void *)pool->free_list;
+ pool->free_list = ptr;
+ }
+ pool->used++;
+ return ptr;
+}
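pool_refill_alloc() chains free chunks through their own first word (`*(void **)ptr = pool->free_list`), so the free list needs no side storage at all. A minimal standalone sketch of that intrusive list (helper names are illustrative, not the pool API):

```c
#include <assert.h>
#include <stdlib.h>

static void *free_head;  /* head of the intrusive free list */

/* Put a chunk back: its first sizeof(void *) bytes become the link. */
static void chunk_release(void *ptr)
{
	*(void **)ptr = free_head;
	free_head = ptr;
}

/* Take the most recently released chunk, or NULL if the list is empty. */
static void *chunk_take(void)
{
	void *ptr = free_head;

	if (ptr)
		free_head = *(void **)ptr;
	return ptr;
}
```

Chunks come back in LIFO order, which also tends to hand out cache-warm memory, one reason pools beat raw malloc()/free() churn.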
+
+/*
+ * This function frees whatever can be freed in pool <pool>.
+ */
+void pool_flush2(struct pool_head *pool)
+{
+ void *temp, *next;
+ if (!pool)
+ return;
+
+ next = pool->free_list;
+ while (next) {
+ temp = next;
+ next = *(void **)temp;
+ pool->allocated--;
+ FREE(temp);
+ }
+ pool->free_list = next;
+
+	/* here, we should have pool->allocated == pool->used */
+}
+
+/*
+ * This function frees whatever can be freed in all pools, but respecting
+ * the minimum thresholds imposed by owners. It takes care of avoiding
+ * recursion because it may be called from a signal handler.
+ */
+void pool_gc2()
+{
+ static int recurse;
+ struct pool_head *entry;
+
+ if (recurse++)
+ goto out;
+
+ list_for_each_entry(entry, &pools, list) {
+ void *temp, *next;
+ //qfprintf(stderr, "Flushing pool %s\n", entry->name);
+ next = entry->free_list;
+ while (next &&
+ (int)(entry->allocated - entry->used) > (int)entry->minavail) {
+ temp = next;
+ next = *(void **)temp;
+ entry->allocated--;
+ FREE(temp);
+ }
+ entry->free_list = next;
+ }
+ out:
+ recurse--;
+}
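pool_gc2() protects itself against re-entry from a signal handler with a static counter rather than a lock: a nested call sees the counter already raised and skips the work. The pattern in isolation (the counter increment is a stand-in for the flushing loop):

```c
#include <assert.h>

static int recurse;    /* > 0 while a call is already in progress */
static int work_done;  /* stands in for the actual flushing */

static void guarded_gc(void)
{
	if (recurse++)
		goto out;   /* nested call: skip the work entirely */
	work_done++;        /* the flushing loop would run here */
out:
	recurse--;
}
```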
+
+/*
+ * This function destroys a pool by freeing it completely, unless it's still
+ * in use. This should be called only under extreme circumstances. It always
+ * returns NULL if the resulting pool is empty, easing the clearing of the old
+ * pointer, otherwise it returns the pool.
+ */
+void *pool_destroy2(struct pool_head *pool)
+{
+ if (pool) {
+ pool_flush2(pool);
+ if (pool->used)
+ return pool;
+ pool->users--;
+ if (!pool->users) {
+ LIST_DEL(&pool->list);
+ FREE(pool);
+ }
+ }
+ return NULL;
+}
+
+/* This function dumps memory usage information into the trash buffer. */
+void dump_pools_to_trash()
+{
+ struct pool_head *entry;
+ unsigned long allocated, used;
+ int nbpools;
+
+ allocated = used = nbpools = 0;
+ chunk_printf(&trash, "Dumping pools usage. Use SIGQUIT to flush them.\n");
+ list_for_each_entry(entry, &pools, list) {
+ chunk_appendf(&trash, " - Pool %s (%d bytes) : %d allocated (%u bytes), %d used, %d users%s\n",
+ entry->name, entry->size, entry->allocated,
+ entry->size * entry->allocated, entry->used,
+ entry->users, (entry->flags & MEM_F_SHARED) ? " [SHARED]" : "");
+
+ allocated += entry->allocated * entry->size;
+ used += entry->used * entry->size;
+ nbpools++;
+ }
+ chunk_appendf(&trash, "Total: %d pools, %lu bytes allocated, %lu used.\n",
+ nbpools, allocated, used);
+}
+
+/* Dump statistics on pools usage. */
+void dump_pools(void)
+{
+ dump_pools_to_trash();
+ qfprintf(stderr, "%s", trash.str);
+}
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+#define _GNU_SOURCE
+
+#include <common/namespace.h>
+#include <common/compiler.h>
+#include <common/hash.h>
+#include <common/errors.h>
+#include <proto/log.h>
+#include <types/global.h>
+
+#include <sched.h>
+#include <stdio.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <sys/types.h>
+#include <unistd.h>
+#include <sys/socket.h>
+
+#include <string.h>
+#ifdef CONFIG_HAP_NS
+
+/* Opens the namespace <ns_name> and returns the FD or -1 in case of error
+ * (check errno).
+ */
+static int open_named_namespace(const char *ns_name)
+{
+ if (chunk_printf(&trash, "/var/run/netns/%s", ns_name) < 0)
+ return -1;
+ return open(trash.str, O_RDONLY);
+}
+
+static int default_namespace = -1;
+
+static int init_default_namespace()
+{
+ if (chunk_printf(&trash, "/proc/%d/ns/net", getpid()) < 0)
+ return -1;
+ default_namespace = open(trash.str, O_RDONLY);
+ return default_namespace;
+}
+
+static struct eb_root namespace_tree_root = EB_ROOT;
+
+int netns_init(void)
+{
+ int err_code = 0;
+
+ /* if no namespaces have been defined in the config then
+ * there is no point in trying to initialize anything:
+ * my_socketat() will never be called with a valid namespace
+ * structure and thus switching back to the default namespace
+ * is not needed either */
+ if (!eb_is_empty(&namespace_tree_root)) {
+ if (init_default_namespace() < 0) {
+ Alert("Failed to open the default namespace.\n");
+ err_code |= ERR_ALERT | ERR_FATAL;
+ }
+ }
+
+ return err_code;
+}
+
+struct netns_entry* netns_store_insert(const char *ns_name)
+{
+ struct netns_entry *entry = NULL;
+ int fd = open_named_namespace(ns_name);
+ if (fd == -1)
+ goto out;
+
+ entry = (struct netns_entry *)calloc(1, sizeof(struct netns_entry));
+ if (!entry)
+ goto out;
+ entry->fd = fd;
+ entry->node.key = strdup(ns_name);
+ entry->name_len = strlen(ns_name);
+ ebis_insert(&namespace_tree_root, &entry->node);
+out:
+ return entry;
+}
+
+const struct netns_entry* netns_store_lookup(const char *ns_name, size_t ns_name_len)
+{
+ struct ebpt_node *node;
+
+ node = ebis_lookup_len(&namespace_tree_root, ns_name, ns_name_len);
+ if (node)
+ return ebpt_entry(node, struct netns_entry, node);
+ else
+ return NULL;
+}
+#endif
+
+/* Opens a socket in the namespace described by <ns> with the parameters <domain>,
+ * <type> and <protocol> and returns the FD or -1 in case of error (check errno).
+ */
+int my_socketat(const struct netns_entry *ns, int domain, int type, int protocol)
+{
+ int sock;
+
+#ifdef CONFIG_HAP_NS
+ if (default_namespace >= 0 && ns && setns(ns->fd, CLONE_NEWNET) == -1)
+ return -1;
+#endif
+ sock = socket(domain, type, protocol);
+
+#ifdef CONFIG_HAP_NS
+ if (default_namespace >= 0 && ns && setns(default_namespace, CLONE_NEWNET) == -1) {
+ close(sock);
+ return -1;
+ }
+#endif
+
+ return sock;
+}
--- /dev/null
+/*
+ * Pattern management functions.
+ *
+ * Copyright 2000-2013 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <ctype.h>
+#include <stdio.h>
+
+#include <common/config.h>
+#include <common/standard.h>
+
+#include <types/global.h>
+#include <types/pattern.h>
+
+#include <proto/log.h>
+#include <proto/pattern.h>
+#include <proto/sample.h>
+
+#include <ebsttree.h>
+#include <import/lru.h>
+#include <import/xxhash.h>
+
+char *pat_match_names[PAT_MATCH_NUM] = {
+ [PAT_MATCH_FOUND] = "found",
+ [PAT_MATCH_BOOL] = "bool",
+ [PAT_MATCH_INT] = "int",
+ [PAT_MATCH_IP] = "ip",
+ [PAT_MATCH_BIN] = "bin",
+ [PAT_MATCH_LEN] = "len",
+ [PAT_MATCH_STR] = "str",
+ [PAT_MATCH_BEG] = "beg",
+ [PAT_MATCH_SUB] = "sub",
+ [PAT_MATCH_DIR] = "dir",
+ [PAT_MATCH_DOM] = "dom",
+ [PAT_MATCH_END] = "end",
+ [PAT_MATCH_REG] = "reg",
+};
+
+int (*pat_parse_fcts[PAT_MATCH_NUM])(const char *, struct pattern *, int, char **) = {
+ [PAT_MATCH_FOUND] = pat_parse_nothing,
+ [PAT_MATCH_BOOL] = pat_parse_nothing,
+ [PAT_MATCH_INT] = pat_parse_int,
+ [PAT_MATCH_IP] = pat_parse_ip,
+ [PAT_MATCH_BIN] = pat_parse_bin,
+ [PAT_MATCH_LEN] = pat_parse_int,
+ [PAT_MATCH_STR] = pat_parse_str,
+ [PAT_MATCH_BEG] = pat_parse_str,
+ [PAT_MATCH_SUB] = pat_parse_str,
+ [PAT_MATCH_DIR] = pat_parse_str,
+ [PAT_MATCH_DOM] = pat_parse_str,
+ [PAT_MATCH_END] = pat_parse_str,
+ [PAT_MATCH_REG] = pat_parse_reg,
+};
+
+int (*pat_index_fcts[PAT_MATCH_NUM])(struct pattern_expr *, struct pattern *, char **) = {
+ [PAT_MATCH_FOUND] = pat_idx_list_val,
+ [PAT_MATCH_BOOL] = pat_idx_list_val,
+ [PAT_MATCH_INT] = pat_idx_list_val,
+ [PAT_MATCH_IP] = pat_idx_tree_ip,
+ [PAT_MATCH_BIN] = pat_idx_list_ptr,
+ [PAT_MATCH_LEN] = pat_idx_list_val,
+ [PAT_MATCH_STR] = pat_idx_tree_str,
+ [PAT_MATCH_BEG] = pat_idx_tree_pfx,
+ [PAT_MATCH_SUB] = pat_idx_list_str,
+ [PAT_MATCH_DIR] = pat_idx_list_str,
+ [PAT_MATCH_DOM] = pat_idx_list_str,
+ [PAT_MATCH_END] = pat_idx_list_str,
+ [PAT_MATCH_REG] = pat_idx_list_reg,
+};
+
+void (*pat_delete_fcts[PAT_MATCH_NUM])(struct pattern_expr *, struct pat_ref_elt *) = {
+ [PAT_MATCH_FOUND] = pat_del_list_val,
+ [PAT_MATCH_BOOL] = pat_del_list_val,
+ [PAT_MATCH_INT] = pat_del_list_val,
+ [PAT_MATCH_IP] = pat_del_tree_ip,
+ [PAT_MATCH_BIN] = pat_del_list_ptr,
+ [PAT_MATCH_LEN] = pat_del_list_val,
+ [PAT_MATCH_STR] = pat_del_tree_str,
+ [PAT_MATCH_BEG] = pat_del_tree_str,
+ [PAT_MATCH_SUB] = pat_del_list_ptr,
+ [PAT_MATCH_DIR] = pat_del_list_ptr,
+ [PAT_MATCH_DOM] = pat_del_list_ptr,
+ [PAT_MATCH_END] = pat_del_list_ptr,
+ [PAT_MATCH_REG] = pat_del_list_reg,
+};
+
+void (*pat_prune_fcts[PAT_MATCH_NUM])(struct pattern_expr *) = {
+ [PAT_MATCH_FOUND] = pat_prune_val,
+ [PAT_MATCH_BOOL] = pat_prune_val,
+ [PAT_MATCH_INT] = pat_prune_val,
+ [PAT_MATCH_IP] = pat_prune_val,
+ [PAT_MATCH_BIN] = pat_prune_ptr,
+ [PAT_MATCH_LEN] = pat_prune_val,
+ [PAT_MATCH_STR] = pat_prune_ptr,
+ [PAT_MATCH_BEG] = pat_prune_ptr,
+ [PAT_MATCH_SUB] = pat_prune_ptr,
+ [PAT_MATCH_DIR] = pat_prune_ptr,
+ [PAT_MATCH_DOM] = pat_prune_ptr,
+ [PAT_MATCH_END] = pat_prune_ptr,
+ [PAT_MATCH_REG] = pat_prune_reg,
+};
+
+struct pattern *(*pat_match_fcts[PAT_MATCH_NUM])(struct sample *, struct pattern_expr *, int) = {
+ [PAT_MATCH_FOUND] = NULL,
+ [PAT_MATCH_BOOL] = pat_match_nothing,
+ [PAT_MATCH_INT] = pat_match_int,
+ [PAT_MATCH_IP] = pat_match_ip,
+ [PAT_MATCH_BIN] = pat_match_bin,
+ [PAT_MATCH_LEN] = pat_match_len,
+ [PAT_MATCH_STR] = pat_match_str,
+ [PAT_MATCH_BEG] = pat_match_beg,
+ [PAT_MATCH_SUB] = pat_match_sub,
+ [PAT_MATCH_DIR] = pat_match_dir,
+ [PAT_MATCH_DOM] = pat_match_dom,
+ [PAT_MATCH_END] = pat_match_end,
+ [PAT_MATCH_REG] = pat_match_reg,
+};
+
+/* Just used for checking configuration compatibility */
+int pat_match_types[PAT_MATCH_NUM] = {
+ [PAT_MATCH_FOUND] = SMP_T_SINT,
+ [PAT_MATCH_BOOL] = SMP_T_SINT,
+ [PAT_MATCH_INT] = SMP_T_SINT,
+ [PAT_MATCH_IP] = SMP_T_ADDR,
+ [PAT_MATCH_BIN] = SMP_T_BIN,
+ [PAT_MATCH_LEN] = SMP_T_STR,
+ [PAT_MATCH_STR] = SMP_T_STR,
+ [PAT_MATCH_BEG] = SMP_T_STR,
+ [PAT_MATCH_SUB] = SMP_T_STR,
+ [PAT_MATCH_DIR] = SMP_T_STR,
+ [PAT_MATCH_DOM] = SMP_T_STR,
+ [PAT_MATCH_END] = SMP_T_STR,
+ [PAT_MATCH_REG] = SMP_T_STR,
+};
+
+/* this struct is used to return information */
+static struct pattern static_pattern;
+
+/* This is the root of the list of all available pattern_ref entries. */
+struct list pattern_reference = LIST_HEAD_INIT(pattern_reference);
+
+static struct lru64_head *pat_lru_tree;
+static unsigned long long pat_lru_seed;
+
+/*
+ *
+ * The following functions are not exported and are used internally by the
+ * pattern matching process.
+ *
+ */
+
+/* Background: Fast way to find a zero byte in a word
+ * http://graphics.stanford.edu/~seander/bithacks.html#ZeroInWord
+ * hasZeroByte = (v - 0x01010101UL) & ~v & 0x80808080UL;
+ *
+ * To look for 4 different byte values, xor the word with those bytes and
+ * then check for zero bytes:
+ *
+ * v = (((unsigned char)c * 0x1010101U) ^ delimiter)
+ * where <delimiter> is the 4 byte values to look for (as an uint)
+ * and <c> is the character that is being tested
+ */
+static inline unsigned int is_delimiter(unsigned char c, unsigned int mask)
+{
+ mask ^= (c * 0x01010101); /* propagate the char to all 4 bytes */
+ return (mask - 0x01010101) & ~mask & 0x80808080U;
+}
+
+static inline unsigned int make_4delim(unsigned char d1, unsigned char d2, unsigned char d3, unsigned char d4)
+{
+ return d1 << 24 | d2 << 16 | d3 << 8 | d4;
+}
+
+
+/*
+ *
+ * These functions are exported and may be used by any other component.
+ *
+ * The following functions are used for parsing pattern matching input values.
+ * <text> contains the string to be parsed. <pattern> must be a preallocated
+ * pattern. The pat_parse_* functions fill this structure with the parsed value.
+ * <err> is filled with an error message built with the memprintf() function.
+ * It is allowed to use the trash as temporary storage for the returned
+ * pattern, as the next call after these functions will be pat_idx_*.
+ *
+ * On success, the pat_parse_* functions return 1. On failure, they return 0
+ * and <err> is filled.
+ */
+
+/* ignore the current line */
+int pat_parse_nothing(const char *text, struct pattern *pattern, int mflags, char **err)
+{
+ return 1;
+}
+
+/* Parse a string. The string is only referenced, not duplicated here; the
+ * pat_idx_* functions take care of copying it when needed. */
+int pat_parse_str(const char *text, struct pattern *pattern, int mflags, char **err)
+{
+ pattern->type = SMP_T_STR;
+ pattern->ptr.str = (char *)text;
+ pattern->len = strlen(text);
+ return 1;
+}
+
+/* Parse a binary value written in hexadecimal. The result is stored in a
+ * trash chunk. */
+int pat_parse_bin(const char *text, struct pattern *pattern, int mflags, char **err)
+{
+ struct chunk *trash;
+
+ pattern->type = SMP_T_BIN;
+ trash = get_trash_chunk();
+ pattern->len = trash->size;
+ pattern->ptr.str = trash->str;
+ return !!parse_binary(text, &pattern->ptr.str, &pattern->len, err);
+}
+
+/* Parse a regex. The string is only referenced here; the regex is compiled
+ * during indexing. */
+int pat_parse_reg(const char *text, struct pattern *pattern, int mflags, char **err)
+{
+ pattern->ptr.str = (char *)text;
+ return 1;
+}
+
+/* Parse a range of positive integers delimited by either ':' or '-'. If only
+ * one integer is read, it is set as both min and max. An operator may be
+ * specified as a prefix, among the following five :
+ *
+ *    0:eq, 1:gt, 2:ge, 3:lt, 4:le
+ *
+ * The default operator is "eq". It supports range matching. Ranges are
+ * rejected for the other operators. The operator may be changed at any time
+ * and is stored in the 'opaque' argument.
+ *
+ * If <err> is non-NULL, an error message will be returned there on failure
+ * and the caller will have to free it. The function returns zero on error,
+ * and non-zero on success.
+ */
+int pat_parse_int(const char *text, struct pattern *pattern, int mflags, char **err)
+{
+ const char *ptr = text;
+
+ pattern->type = SMP_T_SINT;
+
+ /* Empty string is not valid */
+ if (!*text)
+ goto not_valid_range;
+
+ /* Search ':' or '-' separator. */
+ while (*ptr != '\0' && *ptr != ':' && *ptr != '-')
+ ptr++;
+
+ /* If separator not found. */
+ if (!*ptr) {
+ if (strl2llrc(text, ptr - text, &pattern->val.range.min) != 0) {
+ memprintf(err, "'%s' is not a number", text);
+ return 0;
+ }
+ pattern->val.range.max = pattern->val.range.min;
+ pattern->val.range.min_set = 1;
+ pattern->val.range.max_set = 1;
+ return 1;
+ }
+
+ /* If the separator is the first character. */
+ if (ptr == text && *(ptr + 1) != '\0') {
+ if (strl2llrc(ptr + 1, strlen(ptr + 1), &pattern->val.range.max) != 0)
+ goto not_valid_range;
+
+ pattern->val.range.min_set = 0;
+ pattern->val.range.max_set = 1;
+ return 1;
+ }
+
+ /* If separator is the last character. */
+ if (*(ptr + 1) == '\0') {
+ if (strl2llrc(text, ptr - text, &pattern->val.range.min) != 0)
+ goto not_valid_range;
+
+ pattern->val.range.min_set = 1;
+ pattern->val.range.max_set = 0;
+ return 1;
+ }
+
+ /* Else, parse two numbers. */
+ if (strl2llrc(text, ptr - text, &pattern->val.range.min) != 0)
+ goto not_valid_range;
+
+ if (strl2llrc(ptr + 1, strlen(ptr + 1), &pattern->val.range.max) != 0)
+ goto not_valid_range;
+
+ if (pattern->val.range.min > pattern->val.range.max)
+ goto not_valid_range;
+
+ pattern->val.range.min_set = 1;
+ pattern->val.range.max_set = 1;
+ return 1;
+
+ not_valid_range:
+ memprintf(err, "'%s' is not a valid number range", text);
+ return 0;
+}
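The four branches above can be summarized by this hypothetical, self-contained re-implementation of the same range grammar, using strtoll() in place of strl2llrc() (all names here are illustrative, not HAProxy APIs):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* sketch of the range grammar accepted by pat_parse_int():
 *   "N"  -> min = max = N      "N:M" -> min = N, max = M
 *   ":M" -> max = M only       "N:"  -> min = N only
 */
static int parse_range(const char *text, long long *min, long long *max,
                       int *min_set, int *max_set)
{
	const char *sep = strpbrk(text, ":-");
	char *end;

	if (!*text)
		return 0;

	*min_set = *max_set = 0;
	if (!sep) {                          /* single number: min == max */
		*min = *max = strtoll(text, &end, 10);
		if (*end)
			return 0;
		*min_set = *max_set = 1;
		return 1;
	}
	if (sep != text) {                   /* a minimum is present */
		*min = strtoll(text, &end, 10);
		if (end != sep)
			return 0;
		*min_set = 1;
	}
	if (sep[1]) {                        /* a maximum is present */
		*max = strtoll(sep + 1, &end, 10);
		if (*end)
			return 0;
		*max_set = 1;
	}
	if (*min_set && *max_set && *min > *max)
		return 0;                    /* inverted range */
	return 1;
}
```

As in the original, a leading separator yields a max-only range and a trailing one a min-only range.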
+
+/* Parse a range of positive 2-component versions delimited by either ':' or
+ * '-'. A version consists of a major and a minor number, both of which must
+ * be smaller than 65536, because internally they will be represented as a
+ * 32-bit integer.
+ * If only one version is read, it is set as both min and max. Just like for
+ * pure integers, an operator may be specified as a prefix, among the
+ * following five :
+ *
+ *    0:eq, 1:gt, 2:ge, 3:lt, 4:le
+ *
+ * The default operator is "eq". It supports range matching. Ranges are
+ * rejected for the other operators. The operator may be changed at any time
+ * and is stored in the 'opaque' argument. This allows constructs such as the
+ * following :
+ *
+ *    acl obsolete_ssl    ssl_req_proto lt 3
+ *    acl unsupported_ssl ssl_req_proto gt 3.1
+ *    acl valid_ssl       ssl_req_proto 3.0-3.1
+ *
+ */
+int pat_parse_dotted_ver(const char *text, struct pattern *pattern, int mflags, char **err)
+{
+ const char *ptr = text;
+
+ pattern->type = SMP_T_SINT;
+
+ /* Search ':' or '-' separator. */
+ while (*ptr != '\0' && *ptr != ':' && *ptr != '-')
+ ptr++;
+
+ /* If separator not found. */
+ if (*ptr == '\0' && ptr > text) {
+ if (strl2llrc_dotted(text, ptr-text, &pattern->val.range.min) != 0) {
+ memprintf(err, "'%s' is not a dotted number", text);
+ return 0;
+ }
+ pattern->val.range.max = pattern->val.range.min;
+ pattern->val.range.min_set = 1;
+ pattern->val.range.max_set = 1;
+ return 1;
+ }
+
+ /* If the separator is the first character. */
+ if (ptr == text && *(ptr+1) != '\0') {
+ if (strl2llrc_dotted(ptr+1, strlen(ptr+1), &pattern->val.range.max) != 0) {
+ memprintf(err, "'%s' is not a valid dotted number range", text);
+ return 0;
+ }
+ pattern->val.range.min_set = 0;
+ pattern->val.range.max_set = 1;
+ return 1;
+ }
+
+ /* If separator is the last character. */
+ if (ptr == &text[strlen(text)-1]) {
+ if (strl2llrc_dotted(text, ptr-text, &pattern->val.range.min) != 0) {
+ memprintf(err, "'%s' is not a valid dotted number range", text);
+ return 0;
+ }
+ pattern->val.range.min_set = 1;
+ pattern->val.range.max_set = 0;
+ return 1;
+ }
+
+ /* Else, parse two numbers. */
+ if (strl2llrc_dotted(text, ptr-text, &pattern->val.range.min) != 0) {
+ memprintf(err, "'%s' is not a valid dotted number range", text);
+ return 0;
+ }
+ if (strl2llrc_dotted(ptr+1, strlen(ptr+1), &pattern->val.range.max) != 0) {
+ memprintf(err, "'%s' is not a valid dotted number range", text);
+ return 0;
+ }
+ if (pattern->val.range.min > pattern->val.range.max) {
+ memprintf(err, "'%s' is not a valid dotted number range", text);
+ return 0;
+ }
+ pattern->val.range.min_set = 1;
+ pattern->val.range.max_set = 1;
+ return 1;
+}
+
+/* Parse an IP address and an optional mask in the form addr[/mask].
+ * The addr may either be an IPv4 address or a hostname. The mask
+ * may either be a dotted mask or a number of bits. Returns 1 if OK,
+ * otherwise 0. NOTE: IP address patterns are typed (IPV4/IPV6).
+ */
+int pat_parse_ip(const char *text, struct pattern *pattern, int mflags, char **err)
+{
+ if (str2net(text, !(mflags & PAT_MF_NO_DNS) && (global.mode & MODE_STARTING),
+ &pattern->val.ipv4.addr, &pattern->val.ipv4.mask)) {
+ pattern->type = SMP_T_IPV4;
+ return 1;
+ }
+ else if (str62net(text, &pattern->val.ipv6.addr, &pattern->val.ipv6.mask)) {
+ pattern->type = SMP_T_IPV6;
+ return 1;
+ }
+ else {
+ memprintf(err, "'%s' is not a valid IPv4 or IPv6 address", text);
+ return 0;
+ }
+}
+
+/*
+ *
+ * These functions are exported and may be used by any other component.
+ *
+ * Each of these functions takes a sample <smp> and checks whether it matches
+ * the pattern <pattern>. They return just PAT_MATCH or PAT_NOMATCH.
+ *
+ */
+
+/* matches if the boolean sample is true, otherwise does not match */
+struct pattern *pat_match_nothing(struct sample *smp, struct pattern_expr *expr, int fill)
+{
+ if (smp->data.u.sint) {
+ if (fill) {
+ static_pattern.data = NULL;
+ static_pattern.ref = NULL;
+ static_pattern.type = 0;
+ static_pattern.ptr.str = NULL;
+ }
+ return &static_pattern;
+ }
+ else
+ return NULL;
+}
+
+
+/* NB: For two strings to be identical, it is required that their lengths match */
+struct pattern *pat_match_str(struct sample *smp, struct pattern_expr *expr, int fill)
+{
+ int icase;
+ struct ebmb_node *node;
+ char prev;
+ struct pattern_tree *elt;
+ struct pattern_list *lst;
+ struct pattern *pattern;
+ struct pattern *ret = NULL;
+ struct lru64 *lru = NULL;
+
+ /* Lookup a string in the expression's pattern tree. */
+ if (!eb_is_empty(&expr->pattern_tree)) {
+ /* we may have to force a trailing zero on the test pattern */
+ prev = smp->data.u.str.str[smp->data.u.str.len];
+ if (prev)
+ smp->data.u.str.str[smp->data.u.str.len] = '\0';
+ node = ebst_lookup(&expr->pattern_tree, smp->data.u.str.str);
+ if (prev)
+ smp->data.u.str.str[smp->data.u.str.len] = prev;
+
+ if (node) {
+ if (fill) {
+ elt = ebmb_entry(node, struct pattern_tree, node);
+ static_pattern.data = elt->data;
+ static_pattern.ref = elt->ref;
+ static_pattern.sflags = PAT_SF_TREE;
+ static_pattern.type = SMP_T_STR;
+ static_pattern.ptr.str = (char *)elt->node.key;
+ }
+ return &static_pattern;
+ }
+ }
+
+ /* look in the list */
+ if (pat_lru_tree) {
+ unsigned long long seed = pat_lru_seed ^ (long)expr;
+
+ lru = lru64_get(XXH64(smp->data.u.str.str, smp->data.u.str.len, seed),
+ pat_lru_tree, expr, expr->revision);
+ if (lru && lru->domain)
+ return lru->data;
+ }
+
+ list_for_each_entry(lst, &expr->patterns, list) {
+ pattern = &lst->pat;
+
+ if (pattern->len != smp->data.u.str.len)
+ continue;
+
+ icase = expr->mflags & PAT_MF_IGNORE_CASE;
+ if ((icase && strncasecmp(pattern->ptr.str, smp->data.u.str.str, smp->data.u.str.len) == 0) ||
+ (!icase && strncmp(pattern->ptr.str, smp->data.u.str.str, smp->data.u.str.len) == 0)) {
+ ret = pattern;
+ break;
+ }
+ }
+
+ if (lru)
+ lru64_commit(lru, ret, expr, expr->revision, NULL);
+
+ return ret;
+}
+
+/* NB: For two binary buffers to be identical, it is required that their lengths match */
+struct pattern *pat_match_bin(struct sample *smp, struct pattern_expr *expr, int fill)
+{
+ struct pattern_list *lst;
+ struct pattern *pattern;
+ struct pattern *ret = NULL;
+ struct lru64 *lru = NULL;
+
+ if (pat_lru_tree) {
+ unsigned long long seed = pat_lru_seed ^ (long)expr;
+
+ lru = lru64_get(XXH64(smp->data.u.str.str, smp->data.u.str.len, seed),
+ pat_lru_tree, expr, expr->revision);
+ if (lru && lru->domain)
+ return lru->data;
+ }
+
+ list_for_each_entry(lst, &expr->patterns, list) {
+ pattern = &lst->pat;
+
+ if (pattern->len != smp->data.u.str.len)
+ continue;
+
+ if (memcmp(pattern->ptr.str, smp->data.u.str.str, smp->data.u.str.len) == 0) {
+ ret = pattern;
+ break;
+ }
+ }
+
+ if (lru)
+ lru64_commit(lru, ret, expr, expr->revision, NULL);
+
+ return ret;
+}
+
+/* Executes a regex. It temporarily changes the data to add a trailing zero,
+ * and restores the previous character when leaving.
+ */
+struct pattern *pat_match_reg(struct sample *smp, struct pattern_expr *expr, int fill)
+{
+ struct pattern_list *lst;
+ struct pattern *pattern;
+ struct pattern *ret = NULL;
+ struct lru64 *lru = NULL;
+
+ if (pat_lru_tree) {
+ unsigned long long seed = pat_lru_seed ^ (long)expr;
+
+ lru = lru64_get(XXH64(smp->data.u.str.str, smp->data.u.str.len, seed),
+ pat_lru_tree, expr, expr->revision);
+ if (lru && lru->domain)
+ return lru->data;
+ }
+
+ list_for_each_entry(lst, &expr->patterns, list) {
+ pattern = &lst->pat;
+
+ if (regex_exec2(pattern->ptr.reg, smp->data.u.str.str, smp->data.u.str.len)) {
+ ret = pattern;
+ break;
+ }
+ }
+
+ if (lru)
+ lru64_commit(lru, ret, expr, expr->revision, NULL);
+
+ return ret;
+}
+
+/* Checks that the pattern matches the beginning of the tested string. */
+struct pattern *pat_match_beg(struct sample *smp, struct pattern_expr *expr, int fill)
+{
+ int icase;
+ struct ebmb_node *node;
+ char prev;
+ struct pattern_tree *elt;
+ struct pattern_list *lst;
+ struct pattern *pattern;
+ struct pattern *ret = NULL;
+ struct lru64 *lru = NULL;
+
+ /* Lookup a string in the expression's pattern tree. */
+ if (!eb_is_empty(&expr->pattern_tree)) {
+ /* we may have to force a trailing zero on the test pattern */
+ prev = smp->data.u.str.str[smp->data.u.str.len];
+ if (prev)
+ smp->data.u.str.str[smp->data.u.str.len] = '\0';
+ node = ebmb_lookup_longest(&expr->pattern_tree, smp->data.u.str.str);
+ if (prev)
+ smp->data.u.str.str[smp->data.u.str.len] = prev;
+
+ if (node) {
+ if (fill) {
+ elt = ebmb_entry(node, struct pattern_tree, node);
+ static_pattern.data = elt->data;
+ static_pattern.ref = elt->ref;
+ static_pattern.sflags = PAT_SF_TREE;
+ static_pattern.type = SMP_T_STR;
+ static_pattern.ptr.str = (char *)elt->node.key;
+ }
+ return &static_pattern;
+ }
+ }
+
+ /* look in the list */
+ if (pat_lru_tree) {
+ unsigned long long seed = pat_lru_seed ^ (long)expr;
+
+ lru = lru64_get(XXH64(smp->data.u.str.str, smp->data.u.str.len, seed),
+ pat_lru_tree, expr, expr->revision);
+ if (lru && lru->domain)
+ return lru->data;
+ }
+
+ list_for_each_entry(lst, &expr->patterns, list) {
+ pattern = &lst->pat;
+
+ if (pattern->len > smp->data.u.str.len)
+ continue;
+
+ icase = expr->mflags & PAT_MF_IGNORE_CASE;
+ if ((icase && strncasecmp(pattern->ptr.str, smp->data.u.str.str, pattern->len) != 0) ||
+ (!icase && strncmp(pattern->ptr.str, smp->data.u.str.str, pattern->len) != 0))
+ continue;
+
+ ret = pattern;
+ break;
+ }
+
+ if (lru)
+ lru64_commit(lru, ret, expr, expr->revision, NULL);
+
+ return ret;
+}
+
+/* Checks that the pattern matches the end of the tested string. */
+struct pattern *pat_match_end(struct sample *smp, struct pattern_expr *expr, int fill)
+{
+ int icase;
+ struct pattern_list *lst;
+ struct pattern *pattern;
+ struct pattern *ret = NULL;
+ struct lru64 *lru = NULL;
+
+ if (pat_lru_tree) {
+ unsigned long long seed = pat_lru_seed ^ (long)expr;
+
+ lru = lru64_get(XXH64(smp->data.u.str.str, smp->data.u.str.len, seed),
+ pat_lru_tree, expr, expr->revision);
+ if (lru && lru->domain)
+ return lru->data;
+ }
+
+ list_for_each_entry(lst, &expr->patterns, list) {
+ pattern = &lst->pat;
+
+ if (pattern->len > smp->data.u.str.len)
+ continue;
+
+ icase = expr->mflags & PAT_MF_IGNORE_CASE;
+ if ((icase && strncasecmp(pattern->ptr.str, smp->data.u.str.str + smp->data.u.str.len - pattern->len, pattern->len) != 0) ||
+ (!icase && strncmp(pattern->ptr.str, smp->data.u.str.str + smp->data.u.str.len - pattern->len, pattern->len) != 0))
+ continue;
+
+ ret = pattern;
+ break;
+ }
+
+ if (lru)
+ lru64_commit(lru, ret, expr, expr->revision, NULL);
+
+ return ret;
+}
+
+/* Checks that the pattern is included inside the tested string.
+ * NB: Suboptimal, should be rewritten using a Boyer-Moore method.
+ */
+struct pattern *pat_match_sub(struct sample *smp, struct pattern_expr *expr, int fill)
+{
+ int icase;
+ char *end;
+ char *c;
+ struct pattern_list *lst;
+ struct pattern *pattern;
+ struct pattern *ret = NULL;
+ struct lru64 *lru = NULL;
+
+ if (pat_lru_tree) {
+ unsigned long long seed = pat_lru_seed ^ (long)expr;
+
+ lru = lru64_get(XXH64(smp->data.u.str.str, smp->data.u.str.len, seed),
+ pat_lru_tree, expr, expr->revision);
+ if (lru && lru->domain)
+ return lru->data;
+ }
+
+ list_for_each_entry(lst, &expr->patterns, list) {
+ pattern = &lst->pat;
+
+ if (pattern->len > smp->data.u.str.len)
+ continue;
+
+ end = smp->data.u.str.str + smp->data.u.str.len - pattern->len;
+ icase = expr->mflags & PAT_MF_IGNORE_CASE;
+ if (icase) {
+ for (c = smp->data.u.str.str; c <= end; c++) {
+ if (tolower(*c) != tolower(*pattern->ptr.str))
+ continue;
+ if (strncasecmp(pattern->ptr.str, c, pattern->len) == 0) {
+ ret = pattern;
+ goto leave;
+ }
+ }
+ } else {
+ for (c = smp->data.u.str.str; c <= end; c++) {
+ if (*c != *pattern->ptr.str)
+ continue;
+ if (strncmp(pattern->ptr.str, c, pattern->len) == 0) {
+ ret = pattern;
+ goto leave;
+ }
+ }
+ }
+ }
+ leave:
+ if (lru)
+ lru64_commit(lru, ret, expr, expr->revision, NULL);
+
+ return ret;
+}
+
+/* This one is used by the real matching functions below. It checks that the
+ * pattern is included inside the tested string, but enclosed between the
+ * specified delimiters or at the beginning or end of the string. The
+ * delimiters are provided as an unsigned int made by make_4delim() and match
+ * up to 4 different delimiters. Delimiters are stripped at the beginning and
+ * end of the pattern.
+ */
+static int match_word(struct sample *smp, struct pattern *pattern, int mflags, unsigned int delimiters)
+{
+ int may_match, icase;
+ char *c, *end;
+ char *ps;
+ int pl;
+
+ pl = pattern->len;
+ ps = pattern->ptr.str;
+
+ while (pl > 0 && is_delimiter(*ps, delimiters)) {
+ pl--;
+ ps++;
+ }
+
+ while (pl > 0 && is_delimiter(ps[pl - 1], delimiters))
+ pl--;
+
+ if (pl > smp->data.u.str.len)
+ return PAT_NOMATCH;
+
+ may_match = 1;
+ icase = mflags & PAT_MF_IGNORE_CASE;
+ end = smp->data.u.str.str + smp->data.u.str.len - pl;
+ for (c = smp->data.u.str.str; c <= end; c++) {
+ if (is_delimiter(*c, delimiters)) {
+ may_match = 1;
+ continue;
+ }
+
+ if (!may_match)
+ continue;
+
+ if (icase) {
+ if ((tolower(*c) == tolower(*ps)) &&
+ (strncasecmp(ps, c, pl) == 0) &&
+ (c == end || is_delimiter(c[pl], delimiters)))
+ return PAT_MATCH;
+ } else {
+ if ((*c == *ps) &&
+ (strncmp(ps, c, pl) == 0) &&
+ (c == end || is_delimiter(c[pl], delimiters)))
+ return PAT_MATCH;
+ }
+ may_match = 0;
+ }
+ return PAT_NOMATCH;
+}
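The delimiter-bounded rule implemented by match_word() can be summarized with this simplified, hypothetical sketch (it skips the case-insensitive path and the stripping of delimiters from the pattern edges, and takes the delimiters as a plain string rather than a packed word):

```c
#include <assert.h>
#include <string.h>

/* a candidate position matches when the pattern is preceded and followed
 * by a delimiter, or touches the start/end of the tested string */
static int word_match(const char *hay, const char *pat, const char *delims)
{
	size_t pl = strlen(pat), hl = strlen(hay), i;

	if (!pl || pl > hl)
		return 0;

	for (i = 0; i + pl <= hl; i++) {
		if (i > 0 && !strchr(delims, hay[i - 1]))
			continue;                  /* not after a delimiter */
		if (strncmp(hay + i, pat, pl) != 0)
			continue;                  /* pattern text mismatch */
		if (i + pl == hl || strchr(delims, hay[i + pl]))
			return 1;                  /* bounded on the right too */
	}
	return 0;
}
```

For instance, with delimiters "/?", "img" matches inside "/img/logo.png" but not inside "/images/logo.png".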
+
+/* Checks that the pattern is included inside the tested string, but enclosed
+ * between the delimiters '?' or '/' or at the beginning or end of the string.
+ * Delimiters at the beginning or end of the pattern are ignored.
+ */
+struct pattern *pat_match_dir(struct sample *smp, struct pattern_expr *expr, int fill)
+{
+ struct pattern_list *lst;
+ struct pattern *pattern;
+
+ list_for_each_entry(lst, &expr->patterns, list) {
+ pattern = &lst->pat;
+ if (match_word(smp, pattern, expr->mflags, make_4delim('/', '?', '?', '?')))
+ return pattern;
+ }
+ return NULL;
+}
+
+/* Checks that the pattern is included inside the tested string, but enclosed
+ * between the delimiters '/', '?', '.' or ':' or at the beginning or end of
+ * the string. Delimiters at the beginning or end of the pattern are ignored.
+ */
+struct pattern *pat_match_dom(struct sample *smp, struct pattern_expr *expr, int fill)
+{
+ struct pattern_list *lst;
+ struct pattern *pattern;
+
+ list_for_each_entry(lst, &expr->patterns, list) {
+ pattern = &lst->pat;
+ if (match_word(smp, pattern, expr->mflags, make_4delim('/', '?', '.', ':')))
+ return pattern;
+ }
+ return NULL;
+}
+
+/* Checks that the integer in <test> is included between min and max */
+struct pattern *pat_match_int(struct sample *smp, struct pattern_expr *expr, int fill)
+{
+ struct pattern_list *lst;
+ struct pattern *pattern;
+
+ list_for_each_entry(lst, &expr->patterns, list) {
+ pattern = &lst->pat;
+ if ((!pattern->val.range.min_set || pattern->val.range.min <= smp->data.u.sint) &&
+ (!pattern->val.range.max_set || smp->data.u.sint <= pattern->val.range.max))
+ return pattern;
+ }
+ return NULL;
+}
+
+/* Checks that the length of the pattern in <test> is included between min and max */
+struct pattern *pat_match_len(struct sample *smp, struct pattern_expr *expr, int fill)
+{
+ struct pattern_list *lst;
+ struct pattern *pattern;
+
+ list_for_each_entry(lst, &expr->patterns, list) {
+ pattern = &lst->pat;
+ if ((!pattern->val.range.min_set || pattern->val.range.min <= smp->data.u.str.len) &&
+ (!pattern->val.range.max_set || smp->data.u.str.len <= pattern->val.range.max))
+ return pattern;
+ }
+ return NULL;
+}
+
+struct pattern *pat_match_ip(struct sample *smp, struct pattern_expr *expr, int fill)
+{
+ unsigned int v4; /* in network byte order */
+ struct in6_addr tmp6;
+ struct in_addr *s;
+ struct ebmb_node *node;
+ struct pattern_tree *elt;
+ struct pattern_list *lst;
+ struct pattern *pattern;
+
+ /* The input sample is IPv4. Try to match in the trees. */
+ if (smp->data.type == SMP_T_IPV4) {
+ /* Lookup an IPv4 address in the expression's pattern tree using
+ * the longest match method.
+ */
+ s = &smp->data.u.ipv4;
+ node = ebmb_lookup_longest(&expr->pattern_tree, &s->s_addr);
+ if (node) {
+ if (fill) {
+ elt = ebmb_entry(node, struct pattern_tree, node);
+ static_pattern.data = elt->data;
+ static_pattern.ref = elt->ref;
+ static_pattern.sflags = PAT_SF_TREE;
+ static_pattern.type = SMP_T_IPV4;
+ memcpy(&static_pattern.val.ipv4.addr.s_addr, elt->node.key, 4);
+ if (!cidr2dotted(elt->node.node.pfx, &static_pattern.val.ipv4.mask))
+ return NULL;
+ }
+ return &static_pattern;
+ }
+
+ /* The IPv4 sample doesn't match the IPv4 tree. Convert the IPv4
+ * sample address to IPv6 using the mapping method with the ::ffff:
+ * prefix, and try a lookup in the IPv6 tree.
+ */
+ memset(&tmp6, 0, 10);
+ *(uint16_t*)&tmp6.s6_addr[10] = htons(0xffff);
+ *(uint32_t*)&tmp6.s6_addr[12] = smp->data.u.ipv4.s_addr;
+ node = ebmb_lookup_longest(&expr->pattern_tree_2, &tmp6);
+ if (node) {
+ if (fill) {
+ elt = ebmb_entry(node, struct pattern_tree, node);
+ static_pattern.data = elt->data;
+ static_pattern.ref = elt->ref;
+ static_pattern.sflags = PAT_SF_TREE;
+ static_pattern.type = SMP_T_IPV6;
+ memcpy(&static_pattern.val.ipv6.addr, elt->node.key, 16);
+ static_pattern.val.ipv6.mask = elt->node.node.pfx;
+ }
+ return &static_pattern;
+ }
+ }
+
+ /* The input sample is IPv6. Try to match in the trees. */
+ if (smp->data.type == SMP_T_IPV6) {
+ /* Lookup an IPv6 address in the expression's pattern tree using
+ * the longest match method.
+ */
+ node = ebmb_lookup_longest(&expr->pattern_tree_2, &smp->data.u.ipv6);
+ if (node) {
+ if (fill) {
+ elt = ebmb_entry(node, struct pattern_tree, node);
+ static_pattern.data = elt->data;
+ static_pattern.ref = elt->ref;
+ static_pattern.sflags = PAT_SF_TREE;
+ static_pattern.type = SMP_T_IPV6;
+ memcpy(&static_pattern.val.ipv6.addr, elt->node.key, 16);
+ static_pattern.val.ipv6.mask = elt->node.node.pfx;
+ }
+ return &static_pattern;
+ }
+
+ /* Try to convert 6 to 4 when the start of the IPv6 address matches
+ * one of the following forms :
+ * - ::ffff:ip:v4 (ipv4 mapped)
+ * - ::0000:ip:v4 (old ipv4 mapped)
+ * - 2002:ip:v4:: (6to4)
+ */
+ if ((*(uint32_t*)&smp->data.u.ipv6.s6_addr[0] == 0 &&
+ *(uint32_t*)&smp->data.u.ipv6.s6_addr[4] == 0 &&
+ (*(uint32_t*)&smp->data.u.ipv6.s6_addr[8] == 0 ||
+ *(uint32_t*)&smp->data.u.ipv6.s6_addr[8] == htonl(0xFFFF))) ||
+ *(uint16_t*)&smp->data.u.ipv6.s6_addr[0] == htons(0x2002)) {
+ if (*(uint32_t*)&smp->data.u.ipv6.s6_addr[0] == 0)
+ v4 = *(uint32_t*)&smp->data.u.ipv6.s6_addr[12];
+ else
+ v4 = htonl((ntohs(*(uint16_t*)&smp->data.u.ipv6.s6_addr[2]) << 16) +
+ ntohs(*(uint16_t*)&smp->data.u.ipv6.s6_addr[4]));
+
+ /* Lookup an IPv4 address in the expression's pattern tree using the longest
+ * match method.
+ */
+ node = ebmb_lookup_longest(&expr->pattern_tree, &v4);
+ if (node) {
+ if (fill) {
+ elt = ebmb_entry(node, struct pattern_tree, node);
+ static_pattern.data = elt->data;
+ static_pattern.ref = elt->ref;
+ static_pattern.sflags = PAT_SF_TREE;
+ static_pattern.type = SMP_T_IPV4;
+ memcpy(&static_pattern.val.ipv4.addr.s_addr, elt->node.key, 4);
+ if (!cidr2dotted(elt->node.node.pfx, &static_pattern.val.ipv4.mask))
+ return NULL;
+ }
+ return &static_pattern;
+ }
+ }
+ }
+
+ /* Look up in the list. The list contains only IPv4 patterns. */
+ list_for_each_entry(lst, &expr->patterns, list) {
+ pattern = &lst->pat;
+
+ /* The input sample is IPv4, use it as is. */
+ if (smp->data.type == SMP_T_IPV4) {
+ v4 = smp->data.u.ipv4.s_addr;
+ }
+ else if (smp->data.type == SMP_T_IPV6) {
+ /* IPv4 match on an IPv6 sample. We want to check at least for
+ * the following forms :
+ * - ::ffff:ip:v4 (ipv4 mapped)
+ * - ::0000:ip:v4 (old ipv4 mapped)
+ * - 2002:ip:v4:: (6to4)
+ */
+ if (*(uint32_t*)&smp->data.u.ipv6.s6_addr[0] == 0 &&
+ *(uint32_t*)&smp->data.u.ipv6.s6_addr[4] == 0 &&
+ (*(uint32_t*)&smp->data.u.ipv6.s6_addr[8] == 0 ||
+ *(uint32_t*)&smp->data.u.ipv6.s6_addr[8] == htonl(0xFFFF))) {
+ v4 = *(uint32_t*)&smp->data.u.ipv6.s6_addr[12];
+ }
+ else if (*(uint16_t*)&smp->data.u.ipv6.s6_addr[0] == htons(0x2002)) {
+ v4 = htonl((ntohs(*(uint16_t*)&smp->data.u.ipv6.s6_addr[2]) << 16) +
+ ntohs(*(uint16_t*)&smp->data.u.ipv6.s6_addr[4]));
+ }
+ else
+ continue;
+ }
+
+ /* Check if the input sample match the current pattern. */
+ if (((v4 ^ pattern->val.ipv4.addr.s_addr) & pattern->val.ipv4.mask.s_addr) == 0)
+ return pattern;
+ }
+ return NULL;
+}
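The IPv6-to-IPv4 extraction performed in both lookup paths of pat_match_ip() can be isolated into a small sketch (a hypothetical helper, not an HAProxy API):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* extract an IPv4 address (network byte order) from an IPv6 one when it
 * uses one of the three embeddings handled by pat_match_ip():
 *   ::ffff:a.b.c.d (mapped), ::a.b.c.d (old mapped), 2002:AABB:CCDD:: (6to4)
 */
static int v6_to_v4(const unsigned char a[16], uint32_t *v4)
{
	uint32_t w0, w1, w2;

	memcpy(&w0, a, 4);
	memcpy(&w1, a + 4, 4);
	memcpy(&w2, a + 8, 4);

	if (w0 == 0 && w1 == 0 && (w2 == 0 || w2 == htonl(0xFFFF))) {
		memcpy(v4, a + 12, 4);   /* v4 sits in the last 4 bytes */
		return 1;
	}
	if (a[0] == 0x20 && a[1] == 0x02) {
		memcpy(v4, a + 2, 4);    /* 6to4: v4 follows the 2002: prefix */
		return 1;
	}
	return 0;
}
```

The memcpy() of bytes 2..5 in the 6to4 case is equivalent to the shift-and-htonl() arithmetic used above, since both produce the embedded bytes in network order.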
+
+void free_pattern_tree(struct eb_root *root)
+{
+ struct eb_node *node, *next;
+ struct pattern_tree *elt;
+
+ node = eb_first(root);
+ while (node) {
+ next = eb_next(node);
+ eb_delete(node);
+ elt = container_of(node, struct pattern_tree, node);
+ free(elt->data);
+ free(elt);
+ node = next;
+ }
+}
+
+void pat_prune_val(struct pattern_expr *expr)
+{
+ struct pattern_list *pat, *tmp;
+
+ list_for_each_entry_safe(pat, tmp, &expr->patterns, list) {
+ free(pat->pat.data);
+ free(pat);
+ }
+
+ free_pattern_tree(&expr->pattern_tree);
+ free_pattern_tree(&expr->pattern_tree_2);
+ LIST_INIT(&expr->patterns);
+}
+
+void pat_prune_ptr(struct pattern_expr *expr)
+{
+ struct pattern_list *pat, *tmp;
+
+ list_for_each_entry_safe(pat, tmp, &expr->patterns, list) {
+ free(pat->pat.ptr.ptr);
+ free(pat->pat.data);
+ free(pat);
+ }
+
+ free_pattern_tree(&expr->pattern_tree);
+ free_pattern_tree(&expr->pattern_tree_2);
+ LIST_INIT(&expr->patterns);
+}
+
+void pat_prune_reg(struct pattern_expr *expr)
+{
+ struct pattern_list *pat, *tmp;
+
+ list_for_each_entry_safe(pat, tmp, &expr->patterns, list) {
+ regex_free(pat->pat.ptr.ptr);
+ free(pat->pat.data);
+ free(pat);
+ }
+
+ free_pattern_tree(&expr->pattern_tree);
+ free_pattern_tree(&expr->pattern_tree_2);
+ LIST_INIT(&expr->patterns);
+}
+
+/*
+ *
+ * The following functions are used for pattern indexing.
+ *
+ */
+
+int pat_idx_list_val(struct pattern_expr *expr, struct pattern *pat, char **err)
+{
+ struct pattern_list *patl;
+
+ /* allocate pattern */
+ patl = calloc(1, sizeof(*patl));
+ if (!patl) {
+ memprintf(err, "out of memory while indexing pattern");
+ return 0;
+ }
+
+ /* duplicate pattern */
+ memcpy(&patl->pat, pat, sizeof(*pat));
+
+ /* chain pattern in the expression */
+ LIST_ADDQ(&expr->patterns, &patl->list);
+ expr->revision = rdtsc();
+
+ /* that's ok */
+ return 1;
+}
+
+int pat_idx_list_ptr(struct pattern_expr *expr, struct pattern *pat, char **err)
+{
+ struct pattern_list *patl;
+
+ /* allocate pattern */
+ patl = calloc(1, sizeof(*patl));
+ if (!patl) {
+ memprintf(err, "out of memory while indexing pattern");
+ return 0;
+ }
+
+ /* duplicate pattern */
+ memcpy(&patl->pat, pat, sizeof(*pat));
+ patl->pat.ptr.ptr = malloc(patl->pat.len);
+ if (!patl->pat.ptr.ptr) {
+ free(patl);
+ memprintf(err, "out of memory while indexing pattern");
+ return 0;
+ }
+ memcpy(patl->pat.ptr.ptr, pat->ptr.ptr, pat->len);
+
+ /* chain pattern in the expression */
+ LIST_ADDQ(&expr->patterns, &patl->list);
+ expr->revision = rdtsc();
+
+ /* that's ok */
+ return 1;
+}
+
+int pat_idx_list_str(struct pattern_expr *expr, struct pattern *pat, char **err)
+{
+ struct pattern_list *patl;
+
+ /* allocate pattern */
+ patl = calloc(1, sizeof(*patl));
+ if (!patl) {
+ memprintf(err, "out of memory while indexing pattern");
+ return 0;
+ }
+
+ /* duplicate pattern */
+ memcpy(&patl->pat, pat, sizeof(*pat));
+ patl->pat.ptr.str = malloc(patl->pat.len + 1);
+ if (!patl->pat.ptr.str) {
+ free(patl);
+ memprintf(err, "out of memory while indexing pattern");
+ return 0;
+ }
+ memcpy(patl->pat.ptr.ptr, pat->ptr.ptr, pat->len);
+ patl->pat.ptr.str[patl->pat.len] = '\0';
+
+ /* chain pattern in the expression */
+ LIST_ADDQ(&expr->patterns, &patl->list);
+ expr->revision = rdtsc();
+
+ /* that's ok */
+ return 1;
+}
+
+int pat_idx_list_reg(struct pattern_expr *expr, struct pattern *pat, char **err)
+{
+ struct pattern_list *patl;
+
+ /* allocate pattern */
+ patl = calloc(1, sizeof(*patl));
+ if (!patl) {
+ memprintf(err, "out of memory while indexing pattern");
+ return 0;
+ }
+
+ /* duplicate pattern */
+ memcpy(&patl->pat, pat, sizeof(*pat));
+
+ /* allocate regex */
+ patl->pat.ptr.reg = calloc(1, sizeof(*patl->pat.ptr.reg));
+ if (!patl->pat.ptr.reg) {
+ free(patl);
+ memprintf(err, "out of memory while indexing pattern");
+ return 0;
+ }
+
+ /* compile regex */
+ if (!regex_comp(pat->ptr.str, patl->pat.ptr.reg, !(expr->mflags & PAT_MF_IGNORE_CASE), 0, err)) {
+ free(patl->pat.ptr.reg);
+ free(patl);
+ return 0;
+ }
+
+ /* chain pattern in the expression */
+ LIST_ADDQ(&expr->patterns, &patl->list);
+ expr->revision = rdtsc();
+
+ /* that's ok */
+ return 1;
+}
+
+int pat_idx_tree_ip(struct pattern_expr *expr, struct pattern *pat, char **err)
+{
+ unsigned int mask;
+ struct pattern_tree *node;
+
+ /* IPv4 case */
+ if (pat->type == SMP_T_IPV4) {
+ /* in IPv4 case, check if the mask is contiguous so that we can
+ * insert the network into the tree. A contiguous mask has only
+ * ones on the left. This means that adding the mask's lowest set
+ * bit to the mask wraps around to zero.
+ */
+ mask = ntohl(pat->val.ipv4.mask.s_addr);
+ if (mask + (mask & -mask) == 0) {
+ mask = mask ? 33 - flsnz(mask & -mask) : 0; /* equals cidr value */
+
+ /* node memory allocation */
+ node = calloc(1, sizeof(*node) + 4);
+ if (!node) {
+ memprintf(err, "out of memory while loading pattern");
+ return 0;
+ }
+
+ /* copy the pointer to sample associated to this node */
+ node->data = pat->data;
+ node->ref = pat->ref;
+
+ /* FIXME: insert <addr>/<mask> into the tree here */
+ memcpy(node->node.key, &pat->val.ipv4.addr, 4); /* network byte order */
+ node->node.node.pfx = mask;
+
+ /* Insert the entry. */
+ ebmb_insert_prefix(&expr->pattern_tree, &node->node, 4);
+ expr->revision = rdtsc();
+
+ /* that's ok */
+ return 1;
+ }
+ else {
+ /* If the mask is not contiguous, just add the pattern to the list */
+ return pat_idx_list_val(expr, pat, err);
+ }
+ }
+ else if (pat->type == SMP_T_IPV6) {
+ /* IPv6 can be indexed as well */
+ node = calloc(1, sizeof(*node) + 16);
+ if (!node) {
+ memprintf(err, "out of memory while loading pattern");
+ return 0;
+ }
+
+ /* copy the pointer to sample associated to this node */
+ node->data = pat->data;
+ node->ref = pat->ref;
+
+ /* FIXME: insert <addr>/<mask> into the tree here */
+ memcpy(node->node.key, &pat->val.ipv6.addr, 16); /* network byte order */
+ node->node.node.pfx = pat->val.ipv6.mask;
+
+ /* Insert the entry. */
+ ebmb_insert_prefix(&expr->pattern_tree_2, &node->node, 16);
+ expr->revision = rdtsc();
+
+ /* that's ok */
+ return 1;
+ }
+
+ return 0;
+}
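As an aside, the contiguous-mask test used above (`mask + (mask & -mask) == 0`) can be exercised in isolation. The helpers below are a standalone sketch, not HAProxy code; `mask_to_cidr()` mirrors the `33 - flsnz(mask & -mask)` computation using the GCC/Clang builtin `__builtin_ctz()` instead of `flsnz()`.

```c
#include <assert.h>
#include <stdint.h>

/* A netmask is contiguous when all its set bits are packed on the left.
 * Adding the lowest set bit then carries all the way out of the 32-bit
 * word, leaving zero. A zero mask (match-all) counts as contiguous. */
static int mask_is_contiguous(uint32_t mask)
{
    return (uint32_t)(mask + (mask & -mask)) == 0;
}

/* CIDR prefix length of a contiguous mask: 32 minus the number of
 * trailing zero bits; a zero mask maps to /0. */
static int mask_to_cidr(uint32_t mask)
{
    return mask ? 32 - __builtin_ctz(mask) : 0;
}
```

For example, 0xffffff00 is contiguous and maps to /24, while 0xff00ff00 fails the test and would fall back to the list-based matcher.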
+
+int pat_idx_tree_str(struct pattern_expr *expr, struct pattern *pat, char **err)
+{
+ int len;
+ struct pattern_tree *node;
+
+ /* Only strings can be indexed */
+ if (pat->type != SMP_T_STR) {
+ memprintf(err, "internal error: string expected, but the type is '%s'",
+ smp_to_type[pat->type]);
+ return 0;
+ }
+
+ /* If the flag PAT_MF_IGNORE_CASE is set, we cannot use trees */
+ if (expr->mflags & PAT_MF_IGNORE_CASE)
+ return pat_idx_list_str(expr, pat, err);
+
+ /* Process the key len */
+ len = strlen(pat->ptr.str) + 1;
+
+ /* node memory allocation */
+ node = calloc(1, sizeof(*node) + len);
+ if (!node) {
+ memprintf(err, "out of memory while loading pattern");
+ return 0;
+ }
+
+ /* copy the pointer to sample associated to this node */
+ node->data = pat->data;
+ node->ref = pat->ref;
+
+ /* copy the string */
+ memcpy(node->node.key, pat->ptr.str, len);
+
+ /* index the new node */
+ ebst_insert(&expr->pattern_tree, &node->node);
+ expr->revision = rdtsc();
+
+ /* that's ok */
+ return 1;
+}
+
+int pat_idx_tree_pfx(struct pattern_expr *expr, struct pattern *pat, char **err)
+{
+ int len;
+ struct pattern_tree *node;
+
+ /* Only strings can be indexed */
+ if (pat->type != SMP_T_STR) {
+ memprintf(err, "internal error: string expected, but the type is '%s'",
+ smp_to_type[pat->type]);
+ return 0;
+ }
+
+ /* If the flag PAT_MF_IGNORE_CASE is set, we cannot use trees */
+ if (expr->mflags & PAT_MF_IGNORE_CASE)
+ return pat_idx_list_str(expr, pat, err);
+
+ /* Process the key len */
+ len = strlen(pat->ptr.str);
+
+ /* node memory allocation */
+ node = calloc(1, sizeof(*node) + len + 1);
+ if (!node) {
+ memprintf(err, "out of memory while loading pattern");
+ return 0;
+ }
+
+ /* copy the pointer to sample associated to this node */
+ node->data = pat->data;
+ node->ref = pat->ref;
+
+ /* copy the string and the trailing zero */
+ memcpy(node->node.key, pat->ptr.str, len + 1);
+ node->node.node.pfx = len * 8;
+
+ /* index the new node */
+ ebmb_insert_prefix(&expr->pattern_tree, &node->node, len);
+ expr->revision = rdtsc();
+
+ /* that's ok */
+ return 1;
+}
+
+void pat_del_list_val(struct pattern_expr *expr, struct pat_ref_elt *ref)
+{
+ struct pattern_list *pat;
+ struct pattern_list *safe;
+
+ list_for_each_entry_safe(pat, safe, &expr->patterns, list) {
+ /* Check equality. */
+ if (pat->pat.ref != ref)
+ continue;
+
+ /* Delete and free entry. */
+ LIST_DEL(&pat->list);
+ free(pat->pat.data);
+ free(pat);
+ }
+ expr->revision = rdtsc();
+}
+
+void pat_del_tree_ip(struct pattern_expr *expr, struct pat_ref_elt *ref)
+{
+ struct ebmb_node *node, *next_node;
+ struct pattern_tree *elt;
+
+ /* browse each node of the tree for IPv4 addresses. */
+ for (node = ebmb_first(&expr->pattern_tree), next_node = node ? ebmb_next(node) : NULL;
+ node;
+ node = next_node, next_node = node ? ebmb_next(node) : NULL) {
+ /* Extract container of the tree node. */
+ elt = container_of(node, struct pattern_tree, node);
+
+ /* Check equality. */
+ if (elt->ref != ref)
+ continue;
+
+ /* Delete and free entry. */
+ ebmb_delete(node);
+ free(elt->data);
+ free(elt);
+ }
+
+ /* Browse the list (IPv4 patterns with non-contiguous masks). */
+ pat_del_list_val(expr, ref);
+
+ /* browse each node of the tree for IPv6 addresses. */
+ for (node = ebmb_first(&expr->pattern_tree_2), next_node = node ? ebmb_next(node) : NULL;
+ node;
+ node = next_node, next_node = node ? ebmb_next(node) : NULL) {
+ /* Extract container of the tree node. */
+ elt = container_of(node, struct pattern_tree, node);
+
+ /* Check equality. */
+ if (elt->ref != ref)
+ continue;
+
+ /* Delete and free entry. */
+ ebmb_delete(node);
+ free(elt->data);
+ free(elt);
+ }
+ expr->revision = rdtsc();
+}
+
+void pat_del_list_ptr(struct pattern_expr *expr, struct pat_ref_elt *ref)
+{
+ struct pattern_list *pat;
+ struct pattern_list *safe;
+
+ list_for_each_entry_safe(pat, safe, &expr->patterns, list) {
+ /* Check equality. */
+ if (pat->pat.ref != ref)
+ continue;
+
+ /* Delete and free entry. */
+ LIST_DEL(&pat->list);
+ free(pat->pat.ptr.ptr);
+ free(pat->pat.data);
+ free(pat);
+ }
+ expr->revision = rdtsc();
+}
+
+void pat_del_tree_str(struct pattern_expr *expr, struct pat_ref_elt *ref)
+{
+ struct ebmb_node *node, *next_node;
+ struct pattern_tree *elt;
+
+ /* If the flag PAT_MF_IGNORE_CASE is set, we cannot use trees */
+ if (expr->mflags & PAT_MF_IGNORE_CASE)
+ return pat_del_list_ptr(expr, ref);
+
+ /* browse each node of the tree. */
+ for (node = ebmb_first(&expr->pattern_tree), next_node = node ? ebmb_next(node) : NULL;
+ node;
+ node = next_node, next_node = node ? ebmb_next(node) : NULL) {
+ /* Extract container of the tree node. */
+ elt = container_of(node, struct pattern_tree, node);
+
+ /* Check equality. */
+ if (elt->ref != ref)
+ continue;
+
+ /* Delete and free entry. */
+ ebmb_delete(node);
+ free(elt->data);
+ free(elt);
+ }
+ expr->revision = rdtsc();
+}
+
+void pat_del_list_reg(struct pattern_expr *expr, struct pat_ref_elt *ref)
+{
+ struct pattern_list *pat;
+ struct pattern_list *safe;
+
+ list_for_each_entry_safe(pat, safe, &expr->patterns, list) {
+ /* Check equality. */
+ if (pat->pat.ref != ref)
+ continue;
+
+ /* Delete and free entry. */
+ LIST_DEL(&pat->list);
+ regex_free(pat->pat.ptr.ptr);
+ free(pat->pat.data);
+ free(pat);
+ }
+ expr->revision = rdtsc();
+}
+
+void pattern_init_expr(struct pattern_expr *expr)
+{
+ LIST_INIT(&expr->patterns);
+ expr->revision = 0;
+ expr->pattern_tree = EB_ROOT;
+ expr->pattern_tree_2 = EB_ROOT;
+}
+
+void pattern_init_head(struct pattern_head *head)
+{
+ LIST_INIT(&head->head);
+}
+
+/* The following functions are relative to the management of the reference
+ * lists. These lists are used to store the original pattern and its
+ * associated value in string form.
+ *
+ * This is used with modifiable ACLs and MAPs.
+ *
+ * Pattern references are stored with two identifiers: the unique_id and
+ * the reference.
+ *
+ * The reference identifies a file. Every use of the same file name points
+ * to the same reference, so one file can be registered several times. If
+ * the file is modified, all its users are modified too. The reference can
+ * be used with maps or ACLs.
+ *
+ * The unique_id identifies an inline ACL. The unique id is unique for each
+ * ACL. You cannot force the same id in the configuration file, because
+ * doing so reports an error.
+ *
+ * A particular case appears when the filename is a number. In this case,
+ * the unique_id is set to the number represented by the filename and the
+ * reference is also set. This method prevents duplicate unique_ids.
+ *
+ */
+
+/* This function looks up a reference. If the reference is found, it returns
+ * a pointer to the struct pat_ref, otherwise it returns NULL.
+ */
+struct pat_ref *pat_ref_lookup(const char *reference)
+{
+ struct pat_ref *ref;
+
+ list_for_each_entry(ref, &pattern_reference, list)
+ if (ref->reference && strcmp(reference, ref->reference) == 0)
+ return ref;
+ return NULL;
+}
+
+/* This function looks up a unique id. If the reference is found, it returns
+ * a pointer to the struct pat_ref, otherwise it returns NULL.
+ */
+struct pat_ref *pat_ref_lookupid(int unique_id)
+{
+ struct pat_ref *ref;
+
+ list_for_each_entry(ref, &pattern_reference, list)
+ if (ref->unique_id == unique_id)
+ return ref;
+ return NULL;
+}
+
+/* This function removes all patterns matching the pointer <refelt> from
+ * the reference and from each expr member of the reference. It returns 1
+ * if the deletion is done, and 0 if the entry is not found.
+ */
+int pat_ref_delete_by_id(struct pat_ref *ref, struct pat_ref_elt *refelt)
+{
+ struct pattern_expr *expr;
+ struct pat_ref_elt *elt, *safe;
+
+ /* delete pattern from reference */
+ list_for_each_entry_safe(elt, safe, &ref->head, list) {
+ if (elt == refelt) {
+ list_for_each_entry(expr, &ref->pat, list)
+ pattern_delete(expr, elt);
+
+ LIST_DEL(&elt->list);
+ free(elt->sample);
+ free(elt->pattern);
+ free(elt);
+ return 1;
+ }
+ }
+ return 0;
+}
+
+/* This function removes all patterns matching <key> from the reference
+ * and from each expr member of the reference. It returns 1 if the
+ * deletion is done, and 0 if the entry is not found.
+ */
+int pat_ref_delete(struct pat_ref *ref, const char *key)
+{
+ struct pattern_expr *expr;
+ struct pat_ref_elt *elt, *safe;
+ int found = 0;
+
+ /* delete pattern from reference */
+ list_for_each_entry_safe(elt, safe, &ref->head, list) {
+ if (strcmp(key, elt->pattern) == 0) {
+ list_for_each_entry(expr, &ref->pat, list)
+ pattern_delete(expr, elt);
+
+ LIST_DEL(&elt->list);
+ free(elt->sample);
+ free(elt->pattern);
+ free(elt);
+
+ found = 1;
+ }
+ }
+
+ if (!found)
+ return 0;
+ return 1;
+}
+
+/*
+ * Find and return an element <elt> matching <key> in reference <ref>;
+ * return NULL if not found.
+ */
+struct pat_ref_elt *pat_ref_find_elt(struct pat_ref *ref, const char *key)
+{
+ struct pat_ref_elt *elt;
+
+ list_for_each_entry(elt, &ref->head, list) {
+ if (strcmp(key, elt->pattern) == 0)
+ return elt;
+ }
+
+ return NULL;
+}
+
+
+/* This function modifies the sample associated with pattern element <elt>. */
+static inline int pat_ref_set_elt(struct pat_ref *ref, struct pat_ref_elt *elt,
+ const char *value, char **err)
+{
+ struct pattern_expr *expr;
+ struct sample_data **data;
+ char *sample;
+ struct sample_data test;
+
+ /* Try all needed converters. */
+ list_for_each_entry(expr, &ref->pat, list) {
+ if (!expr->pat_head->parse_smp)
+ continue;
+
+ if (!expr->pat_head->parse_smp(value, &test)) {
+ memprintf(err, "unable to parse '%s'", value);
+ return 0;
+ }
+ }
+
+ /* Modify pattern from reference. */
+ sample = strdup(value);
+ if (!sample) {
+ memprintf(err, "out of memory error");
+ return 0;
+ }
+ free(elt->sample);
+ elt->sample = sample;
+
+ /* Load the sample in each expression. All the conversions were
+ * tested above, so normally these calls don't fail.
+ */
+ list_for_each_entry(expr, &ref->pat, list) {
+ if (!expr->pat_head->parse_smp)
+ continue;
+
+ data = pattern_find_smp(expr, elt);
+ if (data && *data && !expr->pat_head->parse_smp(sample, *data))
+ *data = NULL;
+ }
+
+ return 1;
+}
+
+/* This function modifies the sample of the pattern identified by <refelt>. */
+int pat_ref_set_by_id(struct pat_ref *ref, struct pat_ref_elt *refelt, const char *value, char **err)
+{
+ struct pat_ref_elt *elt;
+
+ /* Look for pattern in the reference. */
+ list_for_each_entry(elt, &ref->head, list) {
+ if (elt == refelt) {
+ if (!pat_ref_set_elt(ref, elt, value, err))
+ return 0;
+ return 1;
+ }
+ }
+
+ memprintf(err, "key or pattern not found");
+ return 0;
+}
+
+/* This function modifies the sample of every pattern matching <key>. */
+int pat_ref_set(struct pat_ref *ref, const char *key, const char *value, char **err)
+{
+ struct pat_ref_elt *elt;
+ int found = 0;
+ char *_merr;
+ char **merr;
+
+ if (err) {
+ merr = &_merr;
+ *merr = NULL;
+ }
+ else
+ merr = NULL;
+
+ /* Look for pattern in the reference. */
+ list_for_each_entry(elt, &ref->head, list) {
+ if (strcmp(key, elt->pattern) == 0) {
+ if (!pat_ref_set_elt(ref, elt, value, merr)) {
+ /* <merr> is only usable if the caller passed <err> */
+ if (err) {
+ if (!found) {
+ /* transfer ownership of the message to <err> */
+ *err = *merr;
+ *merr = NULL;
+ }
+ else {
+ memprintf(err, "%s, %s", *err, *merr);
+ free(*merr);
+ *merr = NULL;
+ }
+ }
+ }
+ found = 1;
+ }
+ }
+
+ if (!found) {
+ memprintf(err, "entry not found");
+ return 0;
+ }
+ return 1;
+}
+
+/* This function creates a new reference. <reference> is the reference name.
+ * <flags> are PAT_REF_*. /!\ The reference is not checked for uniqueness;
+ * the caller must check it with "pat_ref_lookup()" before calling this
+ * function. If the function fails, it returns NULL, otherwise it returns
+ * the new struct pat_ref.
+ */
+struct pat_ref *pat_ref_new(const char *reference, const char *display, unsigned int flags)
+{
+ struct pat_ref *ref;
+
+ ref = malloc(sizeof(*ref));
+ if (!ref)
+ return NULL;
+
+ if (display) {
+ ref->display = strdup(display);
+ if (!ref->display) {
+ free(ref);
+ return NULL;
+ }
+ }
+ else
+ ref->display = NULL;
+
+ ref->reference = strdup(reference);
+ if (!ref->reference) {
+ free(ref->display);
+ free(ref);
+ return NULL;
+ }
+
+ ref->flags = flags;
+ ref->unique_id = -1;
+
+ LIST_INIT(&ref->head);
+ LIST_INIT(&ref->pat);
+
+ LIST_ADDQ(&pattern_reference, &ref->list);
+
+ return ref;
+}
+
+/* This function creates a new reference. <unique_id> is the unique id. If
+ * its value is -1, the unique id is computed later. <flags> are PAT_REF_*.
+ * /!\ The reference is not checked for uniqueness; the caller must check it
+ * with "pat_ref_lookup()" or "pat_ref_lookupid()" before calling this
+ * function. If the function fails, it returns NULL, otherwise it returns
+ * the new struct pat_ref.
+ */
+ */
+struct pat_ref *pat_ref_newid(int unique_id, const char *display, unsigned int flags)
+{
+ struct pat_ref *ref;
+
+ ref = malloc(sizeof(*ref));
+ if (!ref)
+ return NULL;
+
+ if (display) {
+ ref->display = strdup(display);
+ if (!ref->display) {
+ free(ref);
+ return NULL;
+ }
+ }
+ else
+ ref->display = NULL;
+
+ ref->reference = NULL;
+ ref->flags = flags;
+ ref->unique_id = unique_id;
+ LIST_INIT(&ref->head);
+ LIST_INIT(&ref->pat);
+
+ LIST_ADDQ(&pattern_reference, &ref->list);
+
+ return ref;
+}
+
+/* This function adds an entry to <ref>. It may fail on a memory error,
+ * in which case it returns 0.
+ */
+int pat_ref_append(struct pat_ref *ref, char *pattern, char *sample, int line)
+{
+ struct pat_ref_elt *elt;
+
+ elt = malloc(sizeof(*elt));
+ if (!elt)
+ return 0;
+
+ elt->line = line;
+
+ elt->pattern = strdup(pattern);
+ if (!elt->pattern) {
+ free(elt);
+ return 0;
+ }
+
+ if (sample) {
+ elt->sample = strdup(sample);
+ if (!elt->sample) {
+ free(elt->pattern);
+ free(elt);
+ return 0;
+ }
+ }
+ else
+ elt->sample = NULL;
+
+ LIST_ADDQ(&ref->head, &elt->list);
+
+ return 1;
+}
+
+/* This function creates the sample found in <elt>, parses the pattern also
+ * found in <elt> and inserts it into <expr>. The <patflags> are copied into
+ * <expr>. If the function fails, it returns 0 and <err> is filled. On
+ * success, the function returns 1.
+ */
+static inline
+int pat_ref_push(struct pat_ref_elt *elt, struct pattern_expr *expr,
+ int patflags, char **err)
+{
+ struct sample_data *data;
+ struct pattern pattern;
+
+ /* Create sample */
+ if (elt->sample && expr->pat_head->parse_smp) {
+ /* New sample. */
+ data = malloc(sizeof(*data));
+ if (!data)
+ return 0;
+
+ /* Parse value. */
+ if (!expr->pat_head->parse_smp(elt->sample, data)) {
+ memprintf(err, "unable to parse '%s'", elt->sample);
+ free(data);
+ return 0;
+ }
+
+ }
+ else
+ data = NULL;
+
+ /* initialise pattern */
+ memset(&pattern, 0, sizeof(pattern));
+ pattern.data = data;
+ pattern.ref = elt;
+
+ /* parse pattern */
+ if (!expr->pat_head->parse(elt->pattern, &pattern, expr->mflags, err)) {
+ free(data);
+ return 0;
+ }
+
+ /* index pattern */
+ if (!expr->pat_head->index(expr, &pattern, err)) {
+ free(data);
+ return 0;
+ }
+
+ return 1;
+}
+
+/* This function adds an entry to <ref>. It may fail on a memory error. The
+ * new entry is added to all the pattern_expr registered on this reference.
+ * The function stops on the first error encountered, in which case it
+ * returns 0, <err> is filled and the whole add operation is cancelled.
+ * If the insertion succeeds, the function returns 1.
+ */
+ */
+int pat_ref_add(struct pat_ref *ref,
+ const char *pattern, const char *sample,
+ char **err)
+{
+ struct pat_ref_elt *elt;
+ struct pattern_expr *expr;
+
+ elt = malloc(sizeof(*elt));
+ if (!elt) {
+ memprintf(err, "out of memory error");
+ return 0;
+ }
+
+ elt->line = -1;
+
+ elt->pattern = strdup(pattern);
+ if (!elt->pattern) {
+ free(elt);
+ memprintf(err, "out of memory error");
+ return 0;
+ }
+
+ if (sample) {
+ elt->sample = strdup(sample);
+ if (!elt->sample) {
+ free(elt->pattern);
+ free(elt);
+ memprintf(err, "out of memory error");
+ return 0;
+ }
+ }
+ else
+ elt->sample = NULL;
+
+ LIST_ADDQ(&ref->head, &elt->list);
+
+ list_for_each_entry(expr, &ref->pat, list) {
+ if (!pat_ref_push(elt, expr, 0, err)) {
+ /* If the insertion fails, try to delete all the added entries. */
+ pat_ref_delete_by_id(ref, elt);
+ return 0;
+ }
+ }
+ return 1;
+}
+
+/* This function prunes <ref>, replaces all its entries with the entries
+ * of <replace>, and reindexes all the new values.
+ *
+ * The patterns are loaded in best-effort mode: errors are ignored but
+ * written to the logs.
+ */
+void pat_ref_reload(struct pat_ref *ref, struct pat_ref *replace)
+{
+ struct pattern_expr *expr;
+ struct pat_ref_elt *elt;
+ char *err = NULL;
+
+ pat_ref_prune(ref);
+
+ LIST_ADD(&replace->head, &ref->head);
+ LIST_DEL(&replace->head);
+
+ list_for_each_entry(elt, &ref->head, list) {
+ list_for_each_entry(expr, &ref->pat, list) {
+ if (!pat_ref_push(elt, expr, 0, &err)) {
+ send_log(NULL, LOG_NOTICE, "%s", err);
+ free(err);
+ err = NULL;
+ }
+ }
+ }
+}
+
+/* This function prunes all entries of <ref>, as well as the associated
+ * pattern_expr.
+ */
+void pat_ref_prune(struct pat_ref *ref)
+{
+ struct pat_ref_elt *elt, *safe;
+ struct pattern_expr *expr;
+
+ list_for_each_entry_safe(elt, safe, &ref->head, list) {
+ LIST_DEL(&elt->list);
+ free(elt->pattern);
+ free(elt->sample);
+ free(elt);
+ }
+
+ list_for_each_entry(expr, &ref->pat, list)
+ expr->pat_head->prune(expr);
+}
+
+/* This function looks up existing reference <ref> in pattern_head <head>. */
+struct pattern_expr *pattern_lookup_expr(struct pattern_head *head, struct pat_ref *ref)
+{
+ struct pattern_expr_list *expr;
+
+ list_for_each_entry(expr, &head->head, list)
+ if (expr->expr->ref == ref)
+ return expr->expr;
+ return NULL;
+}
+
+/* This function creates a new pattern_expr associated with reference <ref>.
+ * <ref> may be NULL. If an error occurs, the function returns NULL and
+ * <err> is filled. Otherwise, the function returns the new pattern_expr
+ * linked with <head> and <ref>.
+ *
+ * The returned value may be an already filled pattern list, in which case
+ * the flag <reuse> is set.
+ */
+struct pattern_expr *pattern_new_expr(struct pattern_head *head, struct pat_ref *ref,
+ char **err, int *reuse)
+{
+ struct pattern_expr *expr;
+ struct pattern_expr_list *list;
+
+ if (reuse)
+ *reuse = 0;
+
+ /* Memory and initialization of the chain element. */
+ list = malloc(sizeof(*list));
+ if (!list) {
+ memprintf(err, "out of memory");
+ return NULL;
+ }
+
+ /* Look for an existing similar expr. Note that only the index, parse
+ * and parse_smp functions must be identical for two patterns to be
+ * considered similar; the other functions depend on these three.
+ */
+ if (ref) {
+ list_for_each_entry(expr, &ref->pat, list)
+ if (expr->pat_head->index == head->index &&
+ expr->pat_head->parse == head->parse &&
+ expr->pat_head->parse_smp == head->parse_smp)
+ break;
+ if (&expr->list == &ref->pat)
+ expr = NULL;
+ }
+ else
+ expr = NULL;
+
+ /* If no similar expr was found, we create new expr. */
+ if (!expr) {
+ /* Allocate memory for the expr struct. */
+ expr = malloc(sizeof(*expr));
+ if (!expr) {
+ memprintf(err, "out of memory");
+ return NULL;
+ }
+
+ /* Initialize this new expr. */
+ pattern_init_expr(expr);
+
+ /* This new pattern expression references its head. */
+ expr->pat_head = head;
+
+ /* Link with ref, or to self to facilitate LIST_DEL() */
+ if (ref)
+ LIST_ADDQ(&ref->pat, &expr->list);
+ else
+ LIST_INIT(&expr->list);
+
+ expr->ref = ref;
+
+ /* We must free this pattern if it is no longer used. */
+ list->do_free = 1;
+ }
+ else {
+ /* If the pattern used already exists, it is already linked
+ * with ref and we must not free it.
+ */
+ list->do_free = 0;
+ if (reuse)
+ *reuse = 1;
+ }
+
+ /* The new list element references the pattern_expr. */
+ list->expr = expr;
+
+ /* Link the list element with the pattern_head. */
+ LIST_ADDQ(&head->head, &list->list);
+ return expr;
+}
+
+/* Reads patterns from a file. If <err> is non-NULL, an error message will
+ * be returned there on errors and the caller will have to free it.
+ *
+ * The file contains one key + value per line. Lines which start with '#' are
+ * ignored, just like empty lines. Leading tabs/spaces are stripped. The key is
+ * then the first "word" (series of non-space/tabs characters), and the value is
+ * what follows this series of space/tab till the end of the line excluding
+ * trailing spaces/tabs.
+ *
+ * Example :
+ *
+ * # this is a comment and is ignored
+ * 62.212.114.60 1wt.eu \n
+ * <-><-----------><---><----><---->
+ * | | | | `--- trailing spaces ignored
+ * | | | `-------- value
+ * | | `--------------- middle spaces ignored
+ * | `------------------------ key
+ * `-------------------------------- leading spaces ignored
+ *
+ * Return non-zero in case of success, otherwise 0.
+ */
+int pat_ref_read_from_file_smp(struct pat_ref *ref, const char *filename, char **err)
+{
+ FILE *file;
+ char *c;
+ int ret = 0;
+ int line = 0;
+ char *key_beg;
+ char *key_end;
+ char *value_beg;
+ char *value_end;
+
+ file = fopen(filename, "r");
+ if (!file) {
+ memprintf(err, "failed to open pattern file <%s>", filename);
+ return 0;
+ }
+
+ /* now parse all patterns. The file contains one pattern followed by
+ * one value per line. Leading spaces, separator spaces and trailing
+ * spaces are stripped. Lines starting with '#' are comments.
+ */
+ while (fgets(trash.str, trash.size, file) != NULL) {
+ line++;
+ c = trash.str;
+
+ /* ignore comment lines beginning with a sharp ('#') */
+ if (*c == '#')
+ continue;
+
+ /* strip leading spaces and tabs */
+ while (*c == ' ' || *c == '\t')
+ c++;
+
+ /* empty lines are ignored too */
+ if (*c == '\0' || *c == '\r' || *c == '\n')
+ continue;
+
+ /* look for the end of the key */
+ key_beg = c;
+ while (*c && *c != ' ' && *c != '\t' && *c != '\n' && *c != '\r')
+ c++;
+
+ key_end = c;
+
+ /* strip middle spaces and tabs */
+ while (*c == ' ' || *c == '\t')
+ c++;
+
+ /* look for the end of the value, it is the end of the line */
+ value_beg = c;
+ while (*c && *c != '\n' && *c != '\r')
+ c++;
+ value_end = c;
+
+ /* trim possibly trailing spaces and tabs */
+ while (value_end > value_beg && (value_end[-1] == ' ' || value_end[-1] == '\t'))
+ value_end--;
+
+ /* set final \0 and check entries */
+ *key_end = '\0';
+ *value_end = '\0';
+
+ /* insert values */
+ if (!pat_ref_append(ref, key_beg, value_beg, line)) {
+ memprintf(err, "out of memory");
+ goto out_close;
+ }
+ }
+
+ /* success */
+ ret = 1;
+
+ out_close:
+ fclose(file);
+ return ret;
+}
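The key/value splitting performed by the loop above can be sketched as a standalone helper. This is an illustrative reimplementation of the parsing rules (comment lines, blank stripping, key = first word, value = remainder without trailing blanks), not code from HAProxy; `split_kv()` is a hypothetical name.

```c
#include <assert.h>
#include <string.h>

/* Split a writable line into a key and a value following the map-file
 * rules: lines starting with '#' are comments, leading blanks are
 * stripped, the key is the first blank-delimited word, and the value is
 * the remainder with trailing blanks/CR/LF removed. Returns 1 when a key
 * was found, 0 for comments and empty lines. */
static int split_kv(char *line, char **key, char **value)
{
    char *c = line;

    if (*c == '#')                      /* comment line */
        return 0;

    while (*c == ' ' || *c == '\t')     /* strip leading blanks */
        c++;

    if (*c == '\0' || *c == '\r' || *c == '\n')
        return 0;                       /* empty line */

    *key = c;                           /* first word is the key */
    while (*c && *c != ' ' && *c != '\t' && *c != '\r' && *c != '\n')
        c++;
    if (*c)
        *c++ = '\0';                    /* terminate the key */

    while (*c == ' ' || *c == '\t')     /* skip separator blanks */
        c++;

    *value = c;
    c += strlen(c);
    while (c > *value &&                /* trim trailing blanks/CR/LF */
           (c[-1] == ' ' || c[-1] == '\t' || c[-1] == '\r' || c[-1] == '\n'))
        *--c = '\0';
    return 1;
}
```

Applied to the example line from the comment above, `"  62.212.114.60   1wt.eu  \n"` yields the key `"62.212.114.60"` and the value `"1wt.eu"`.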
+
+/* Reads patterns from a file. If <err> is non-NULL, an error message will
+ * be returned there on errors and the caller will have to free it.
+ */
+int pat_ref_read_from_file(struct pat_ref *ref, const char *filename, char **err)
+{
+ FILE *file;
+ char *c;
+ char *arg;
+ int ret = 0;
+ int line = 0;
+
+ file = fopen(filename, "r");
+ if (!file) {
+ memprintf(err, "failed to open pattern file <%s>", filename);
+ return 0;
+ }
+
+ /* now parse all patterns. The file contains one pattern per line.
+ * If the line contains spaces, they will be part of the pattern.
+ * The pattern stops at the first CR, LF or EOF encountered.
+ */
+ while (fgets(trash.str, trash.size, file) != NULL) {
+ line++;
+ c = trash.str;
+
+ /* ignore comment lines beginning with a sharp ('#') */
+ if (*c == '#')
+ continue;
+
+ /* strip leading spaces and tabs */
+ while (*c == ' ' || *c == '\t')
+ c++;
+
+
+ arg = c;
+ while (*c && *c != '\n' && *c != '\r')
+ c++;
+ *c = 0;
+
+ /* empty lines are ignored too */
+ if (c == arg)
+ continue;
+
+ if (!pat_ref_append(ref, arg, NULL, line)) {
+ memprintf(err, "out of memory when loading patterns from file <%s>", filename);
+ goto out_close;
+ }
+ }
+
+ ret = 1; /* success */
+
+ out_close:
+ fclose(file);
+ return ret;
+}
+
+int pattern_read_from_file(struct pattern_head *head, unsigned int refflags,
+ const char *filename, int patflags, int load_smp,
+ char **err, const char *file, int line)
+{
+ struct pat_ref *ref;
+ struct pattern_expr *expr;
+ struct pat_ref_elt *elt;
+ int reuse = 0;
+
+ /* Look up the existing reference. */
+ ref = pat_ref_lookup(filename);
+
+ /* If the reference doesn't exist, create it and load the associated file. */
+ if (!ref) {
+ chunk_printf(&trash,
+ "pattern loaded from file '%s' used by %s at file '%s' line %d",
+ filename, refflags & PAT_REF_MAP ? "map" : "acl", file, line);
+
+ ref = pat_ref_new(filename, trash.str, refflags);
+ if (!ref) {
+ memprintf(err, "out of memory");
+ return 0;
+ }
+
+ if (load_smp) {
+ ref->flags |= PAT_REF_SMP;
+ if (!pat_ref_read_from_file_smp(ref, filename, err))
+ return 0;
+ }
+ else {
+ if (!pat_ref_read_from_file(ref, filename, err))
+ return 0;
+ }
+ }
+ else {
+ /* The reference already exists, check the map compatibility. */
+
+ /* If the load requires samples and the flag PAT_REF_SMP is not set,
+ * the reference doesn't contain samples and cannot be used.
+ */
+ if (load_smp) {
+ if (!(ref->flags & PAT_REF_SMP)) {
+ memprintf(err, "The file \"%s\" is already used as a one-column file "
+ "and cannot be used as a two-column file.",
+ filename);
+ return 0;
+ }
+ }
+ else {
+ /* The load doesn't require samples. If the flag PAT_REF_SMP is
+ * set, the reference contains samples and cannot be used.
+ */
+ if (ref->flags & PAT_REF_SMP) {
+ memprintf(err, "The file \"%s\" is already used as a two-column file "
+ "and cannot be used as a one-column file.",
+ filename);
+ return 0;
+ }
+ }
+
+ /* Extend the display string. */
+ chunk_printf(&trash, "%s", ref->display);
+ chunk_appendf(&trash, ", by %s at file '%s' line %d",
+ refflags & PAT_REF_MAP ? "map" : "acl", file, line);
+ free(ref->display);
+ ref->display = strdup(trash.str);
+ if (!ref->display) {
+ memprintf(err, "out of memory");
+ return 0;
+ }
+
+ /* Merge flags. */
+ ref->flags |= refflags;
+ }
+
+ /* Now we can load patterns from the reference. */
+
+ /* Look up an existing expression in the head. If it doesn't
+ * exist, create it.
+ */
+ expr = pattern_lookup_expr(head, ref);
+ if (!expr || (expr->mflags != patflags)) {
+ expr = pattern_new_expr(head, ref, err, &reuse);
+ if (!expr)
+ return 0;
+ expr->mflags = patflags;
+ }
+
+ /* The returned expression may not be empty, because the function
+ * "pattern_new_expr" looks for a similar pattern list and may
+ * reuse an already filled one. In this case, we must not reload
+ * the patterns.
+ */
+ if (reuse)
+ return 1;
+
+ /* Load reference content in the pattern expression. */
+ list_for_each_entry(elt, &ref->head, list) {
+ if (!pat_ref_push(elt, expr, patflags, err)) {
+ if (elt->line > 0)
+ memprintf(err, "%s at line %d of file '%s'",
+ *err, elt->line, filename);
+ return 0;
+ }
+ }
+
+ return 1;
+}
+
+/* This function executes a pattern match on a sample. It applies the
+ * expressions of <head> to sample <smp>. The function returns NULL if the
+ * sample doesn't match, and non-null if it matches. If <fill> is true and
+ * the sample matches, the function returns the matched pattern. In many
+ * cases, this pattern points to a static buffer.
+ */
+ */
+struct pattern *pattern_exec_match(struct pattern_head *head, struct sample *smp, int fill)
+{
+ struct pattern_expr_list *list;
+ struct pattern *pat;
+
+ if (!head->match) {
+ if (fill) {
+ static_pattern.data = NULL;
+ static_pattern.ref = NULL;
+ static_pattern.sflags = 0;
+ static_pattern.type = SMP_T_SINT;
+ static_pattern.val.i = 1;
+ }
+ return &static_pattern;
+ }
+
+ /* convert the input to the expected type */
+ if (!sample_convert(smp, head->expect_type))
+ return NULL;
+
+ list_for_each_entry(list, &head->head, list) {
+ pat = head->match(smp, list->expr, fill);
+ if (pat)
+ return pat;
+ }
+ return NULL;
+}
+
+/* This function prunes the pattern expressions of <head>. */
+void pattern_prune(struct pattern_head *head)
+{
+ struct pattern_expr_list *list, *safe;
+
+ list_for_each_entry_safe(list, safe, &head->head, list) {
+ LIST_DEL(&list->list);
+ if (list->do_free) {
+ LIST_DEL(&list->expr->list);
+ head->prune(list->expr);
+ free(list->expr);
+ }
+ free(list);
+ }
+}
+
+/* This function looks up the pattern referencing element <ref> in
+ * expression <expr> and returns a pointer to a pointer to its sample
+ * storage. If <ref> is not found, the function returns NULL.
+ */
+ */
+struct sample_data **pattern_find_smp(struct pattern_expr *expr, struct pat_ref_elt *ref)
+{
+ struct ebmb_node *node;
+ struct pattern_tree *elt;
+ struct pattern_list *pat;
+
+ for (node = ebmb_first(&expr->pattern_tree);
+ node;
+ node = ebmb_next(node)) {
+ elt = container_of(node, struct pattern_tree, node);
+ if (elt->ref == ref)
+ return &elt->data;
+ }
+
+ for (node = ebmb_first(&expr->pattern_tree_2);
+ node;
+ node = ebmb_next(node)) {
+ elt = container_of(node, struct pattern_tree, node);
+ if (elt->ref == ref)
+ return &elt->data;
+ }
+
+ list_for_each_entry(pat, &expr->patterns, list)
+ if (pat->pat.ref == ref)
+ return &pat->pat.data;
+
+ return NULL;
+}
+
+/* This function deletes from <expr> all the patterns referencing element
+ * <ref>. It always returns 1.
+ */
+int pattern_delete(struct pattern_expr *expr, struct pat_ref_elt *ref)
+{
+ expr->pat_head->delete(expr, ref);
+ return 1;
+}
+
+/* This function finalizes the configuration parsing. It sets all the
+ * automatic ids.
+ */
+void pattern_finalize_config(void)
+{
+ int i = 0;
+ struct pat_ref *ref, *ref2, *ref3;
+ struct list pr = LIST_HEAD_INIT(pr);
+
+ pat_lru_seed = random();
+ if (global.tune.pattern_cache)
+ pat_lru_tree = lru64_new(global.tune.pattern_cache);
+
+ list_for_each_entry(ref, &pattern_reference, list) {
+ if (ref->unique_id == -1) {
+ /* Look for the first free id. */
+ while (1) {
+ list_for_each_entry(ref2, &pattern_reference, list) {
+ if (ref2->unique_id == i) {
+ i++;
+ break;
+ }
+ }
+ if (&ref2->list == &pattern_reference)
+ break;
+ }
+
+ /* Uses the unique id and increment it for the next entry. */
+ ref->unique_id = i;
+ i++;
+ }
+ }
+
+	/* Sort the reference list by id. */
+ list_for_each_entry_safe(ref, ref2, &pattern_reference, list) {
+ LIST_DEL(&ref->list);
+ list_for_each_entry(ref3, &pr, list) {
+ if (ref->unique_id < ref3->unique_id) {
+ LIST_ADDQ(&ref3->list, &ref->list);
+ break;
+ }
+ }
+ if (&ref3->list == &pr)
+ LIST_ADDQ(&pr, &ref->list);
+ }
+
+ /* swap root */
+ LIST_ADD(&pr, &pattern_reference);
+ LIST_DEL(&pr);
+}
--- /dev/null
+/*
+ * General protocol-agnostic payload-based sample fetches and ACLs
+ *
+ * Copyright 2000-2013 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <stdlib.h>
+#include <string.h>
+
+#include <proto/acl.h>
+#include <proto/arg.h>
+#include <proto/channel.h>
+#include <proto/pattern.h>
+#include <proto/payload.h>
+#include <proto/sample.h>
+
+
+/************************************************************************/
+/* All supported sample fetch functions must be declared here */
+/************************************************************************/
+
+/* wait for more data as long as possible, then return TRUE. This should be
+ * used with content inspection.
+ */
+static int
+smp_fetch_wait_end(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ if (!(smp->opt & SMP_OPT_FINAL)) {
+ smp->flags |= SMP_F_MAY_CHANGE;
+ return 0;
+ }
+ smp->data.type = SMP_T_BOOL;
+ smp->data.u.sint = 1;
+ return 1;
+}
+
+/* return the number of bytes in the request buffer */
+static int
+smp_fetch_len(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct channel *chn;
+
+ chn = ((smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_RES) ? &smp->strm->res : &smp->strm->req;
+ if (!chn->buf)
+ return 0;
+
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = chn->buf->i;
+ smp->flags = SMP_F_VOLATILE | SMP_F_MAY_CHANGE;
+ return 1;
+}
+
+/* Returns 0 if the client did not send a SessionTicket extension,
+ * 1 if it sent an empty SessionTicket extension, and
+ * 2 if it sent a non-zero length SessionTicket.
+ * The returned sample is of type SMP_T_SINT.
+ */
+static int
+smp_fetch_req_ssl_st_ext(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int hs_len, ext_len, bleft;
+ struct channel *chn;
+ unsigned char *data;
+
+ chn = ((smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_RES) ? &smp->strm->res : &smp->strm->req;
+ if (!chn->buf)
+ goto not_ssl_hello;
+
+ bleft = chn->buf->i;
+ data = (unsigned char *)chn->buf->p;
+
+ /* Check for SSL/TLS Handshake */
+ if (!bleft)
+ goto too_short;
+ if (*data != 0x16)
+ goto not_ssl_hello;
+
+	/* Check for SSLv3 or later (SSL version >= 3.0) in the record layer */
+ if (bleft < 3)
+ goto too_short;
+ if (data[1] < 0x03)
+ goto not_ssl_hello;
+
+ if (bleft < 5)
+ goto too_short;
+ hs_len = (data[3] << 8) + data[4];
+ if (hs_len < 1 + 3 + 2 + 32 + 1 + 2 + 2 + 1 + 1 + 2 + 2)
+ goto not_ssl_hello; /* too short to have an extension */
+
+ data += 5; /* enter TLS handshake */
+ bleft -= 5;
+
+ /* Check for a complete client hello starting at <data> */
+ if (bleft < 1)
+ goto too_short;
+ if (data[0] != 0x01) /* msg_type = Client Hello */
+ goto not_ssl_hello;
+
+ /* Check the Hello's length */
+ if (bleft < 4)
+ goto too_short;
+ hs_len = (data[1] << 16) + (data[2] << 8) + data[3];
+ if (hs_len < 2 + 32 + 1 + 2 + 2 + 1 + 1 + 2 + 2)
+ goto not_ssl_hello; /* too short to have an extension */
+
+ /* We want the full handshake here */
+ if (bleft < hs_len)
+ goto too_short;
+
+ data += 4;
+ /* Start of the ClientHello message */
+ if (data[0] < 0x03 || data[1] < 0x01) /* TLSv1 minimum */
+ goto not_ssl_hello;
+
+ ext_len = data[34]; /* session_id_len */
+ if (ext_len > 32 || ext_len > (hs_len - 35)) /* check for correct session_id len */
+ goto not_ssl_hello;
+
+ /* Jump to cipher suite */
+ hs_len -= 35 + ext_len;
+ data += 35 + ext_len;
+
+ if (hs_len < 4 || /* minimum one cipher */
+ (ext_len = (data[0] << 8) + data[1]) < 2 || /* minimum 2 bytes for a cipher */
+ ext_len > hs_len)
+ goto not_ssl_hello;
+
+ /* Jump to the compression methods */
+ hs_len -= 2 + ext_len;
+ data += 2 + ext_len;
+
+	if (hs_len < 2 || /* minimum one compression method */
+	    data[0] < 1 || data[0] > hs_len) /* minimum 1 byte per method */
+ goto not_ssl_hello;
+
+ /* Jump to the extensions */
+ hs_len -= 1 + data[0];
+ data += 1 + data[0];
+
+ if (hs_len < 2 || /* minimum one extension list length */
+ (ext_len = (data[0] << 8) + data[1]) > hs_len - 2) /* list too long */
+ goto not_ssl_hello;
+
+ hs_len = ext_len; /* limit ourselves to the extension length */
+ data += 2;
+
+ while (hs_len >= 4) {
+ int ext_type, ext_len;
+
+ ext_type = (data[0] << 8) + data[1];
+ ext_len = (data[2] << 8) + data[3];
+
+ if (ext_len > hs_len - 4) /* Extension too long */
+ goto not_ssl_hello;
+
+		/* SessionTicket extension */
+ if (ext_type == 35) {
+ smp->data.type = SMP_T_SINT;
+ /* SessionTicket also present */
+ if (ext_len > 0)
+ smp->data.u.sint = 2;
+ /* SessionTicket absent */
+ else
+ smp->data.u.sint = 1;
+ smp->flags = SMP_F_VOLATILE;
+ return 1;
+ }
+
+ hs_len -= 4 + ext_len;
+ data += 4 + ext_len;
+ }
+ /* SessionTicket Extension not found */
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+ smp->flags = SMP_F_VOLATILE;
+ return 1;
+
+
+ too_short:
+ smp->flags = SMP_F_MAY_CHANGE;
+
+ not_ssl_hello:
+ return 0;
+}
+
+/* Returns TRUE if the client sent a Supported Elliptic Curves extension (0x000a).
+ * Mainly used to detect whether the client supports ECC cipher suites.
+ */
+static int
+smp_fetch_req_ssl_ec_ext(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int hs_len, ext_len, bleft;
+ struct channel *chn;
+ unsigned char *data;
+
+ chn = ((smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_RES) ? &smp->strm->res : &smp->strm->req;
+ if (!chn->buf)
+ goto not_ssl_hello;
+
+ bleft = chn->buf->i;
+ data = (unsigned char *)chn->buf->p;
+
+ /* Check for SSL/TLS Handshake */
+ if (!bleft)
+ goto too_short;
+ if (*data != 0x16)
+ goto not_ssl_hello;
+
+	/* Check for SSLv3 or later (SSL version >= 3.0) in the record layer */
+ if (bleft < 3)
+ goto too_short;
+ if (data[1] < 0x03)
+ goto not_ssl_hello;
+
+ if (bleft < 5)
+ goto too_short;
+ hs_len = (data[3] << 8) + data[4];
+ if (hs_len < 1 + 3 + 2 + 32 + 1 + 2 + 2 + 1 + 1 + 2 + 2)
+ goto not_ssl_hello; /* too short to have an extension */
+
+ data += 5; /* enter TLS handshake */
+ bleft -= 5;
+
+ /* Check for a complete client hello starting at <data> */
+ if (bleft < 1)
+ goto too_short;
+ if (data[0] != 0x01) /* msg_type = Client Hello */
+ goto not_ssl_hello;
+
+ /* Check the Hello's length */
+ if (bleft < 4)
+ goto too_short;
+ hs_len = (data[1] << 16) + (data[2] << 8) + data[3];
+ if (hs_len < 2 + 32 + 1 + 2 + 2 + 1 + 1 + 2 + 2)
+ goto not_ssl_hello; /* too short to have an extension */
+
+ /* We want the full handshake here */
+ if (bleft < hs_len)
+ goto too_short;
+
+ data += 4;
+ /* Start of the ClientHello message */
+ if (data[0] < 0x03 || data[1] < 0x01) /* TLSv1 minimum */
+ goto not_ssl_hello;
+
+ ext_len = data[34]; /* session_id_len */
+ if (ext_len > 32 || ext_len > (hs_len - 35)) /* check for correct session_id len */
+ goto not_ssl_hello;
+
+ /* Jump to cipher suite */
+ hs_len -= 35 + ext_len;
+ data += 35 + ext_len;
+
+ if (hs_len < 4 || /* minimum one cipher */
+ (ext_len = (data[0] << 8) + data[1]) < 2 || /* minimum 2 bytes for a cipher */
+ ext_len > hs_len)
+ goto not_ssl_hello;
+
+ /* Jump to the compression methods */
+ hs_len -= 2 + ext_len;
+ data += 2 + ext_len;
+
+	if (hs_len < 2 || /* minimum one compression method */
+	    data[0] < 1 || data[0] > hs_len) /* minimum 1 byte per method */
+ goto not_ssl_hello;
+
+ /* Jump to the extensions */
+ hs_len -= 1 + data[0];
+ data += 1 + data[0];
+
+ if (hs_len < 2 || /* minimum one extension list length */
+ (ext_len = (data[0] << 8) + data[1]) > hs_len - 2) /* list too long */
+ goto not_ssl_hello;
+
+ hs_len = ext_len; /* limit ourselves to the extension length */
+ data += 2;
+
+ while (hs_len >= 4) {
+ int ext_type, ext_len;
+
+ ext_type = (data[0] << 8) + data[1];
+ ext_len = (data[2] << 8) + data[3];
+
+ if (ext_len > hs_len - 4) /* Extension too long */
+ goto not_ssl_hello;
+
+ /* Elliptic curves extension */
+ if (ext_type == 10) {
+ smp->data.type = SMP_T_BOOL;
+ smp->data.u.sint = 1;
+ smp->flags = SMP_F_VOLATILE;
+ return 1;
+ }
+
+ hs_len -= 4 + ext_len;
+ data += 4 + ext_len;
+ }
+	/* Elliptic curves extension not found */
+ goto not_ssl_hello;
+
+ too_short:
+ smp->flags = SMP_F_MAY_CHANGE;
+
+ not_ssl_hello:
+
+ return 0;
+}
+
+/* returns the type of SSL hello message (mainly used to detect an SSL hello) */
+static int
+smp_fetch_ssl_hello_type(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int hs_len;
+ int hs_type, bleft;
+ struct channel *chn;
+ const unsigned char *data;
+
+ chn = ((smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_RES) ? &smp->strm->res : &smp->strm->req;
+ if (!chn->buf)
+ goto not_ssl_hello;
+
+ bleft = chn->buf->i;
+ data = (const unsigned char *)chn->buf->p;
+
+ if (!bleft)
+ goto too_short;
+
+ if ((*data >= 0x14 && *data <= 0x17) || (*data == 0xFF)) {
+ /* SSLv3 header format */
+ if (bleft < 9)
+ goto too_short;
+
+ /* ssl version 3 */
+ if ((data[1] << 16) + data[2] < 0x00030000)
+ goto not_ssl_hello;
+
+		/* the SSL message length must at least cover the handshake type and length */
+		if ((data[3] << 8) + data[4] < 4)
+ goto not_ssl_hello;
+
+ /* format introduced with SSLv3 */
+
+ hs_type = (int)data[5];
+ hs_len = ( data[6] << 16 ) + ( data[7] << 8 ) + data[8];
+
+ /* not a full handshake */
+ if (bleft < (9 + hs_len))
+ goto too_short;
+
+ }
+ else {
+ goto not_ssl_hello;
+ }
+
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = hs_type;
+ smp->flags = SMP_F_VOLATILE;
+
+ return 1;
+
+ too_short:
+ smp->flags = SMP_F_MAY_CHANGE;
+
+ not_ssl_hello:
+
+ return 0;
+}
+
+/* Return the version of the SSL protocol in the request. It supports both
+ * SSLv3 (TLSv1) header format for any message, and SSLv2 header format for
+ * the hello message. The SSLv3 format is described in RFC 2246 p49, and the
+ * SSLv2 format is described here, and completed p67 of RFC 2246 :
+ * http://wp.netscape.com/eng/security/SSL_2.html
+ *
+ * Note: this decoder only works with non-wrapping data.
+ */
+static int
+smp_fetch_req_ssl_ver(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int version, bleft, msg_len;
+ const unsigned char *data;
+ struct channel *req = &smp->strm->req;
+
+ if (!req->buf)
+ return 0;
+
+ msg_len = 0;
+ bleft = req->buf->i;
+ if (!bleft)
+ goto too_short;
+
+ data = (const unsigned char *)req->buf->p;
+ if ((*data >= 0x14 && *data <= 0x17) || (*data == 0xFF)) {
+ /* SSLv3 header format */
+ if (bleft < 11)
+ goto too_short;
+
+ version = (data[1] << 16) + data[2]; /* record layer version: major, minor */
+ msg_len = (data[3] << 8) + data[4]; /* record length */
+
+ /* format introduced with SSLv3 */
+ if (version < 0x00030000)
+ goto not_ssl;
+
+ /* message length between 6 and 2^14 + 2048 */
+ if (msg_len < 6 || msg_len > ((1<<14) + 2048))
+ goto not_ssl;
+
+ bleft -= 5; data += 5;
+
+ /* return the client hello client version, not the record layer version */
+ version = (data[4] << 16) + data[5]; /* client hello version: major, minor */
+ } else {
+ /* SSLv2 header format, only supported for hello (msg type 1) */
+ int rlen, plen, cilen, silen, chlen;
+
+ if (*data & 0x80) {
+ if (bleft < 3)
+ goto too_short;
+ /* short header format : 15 bits for length */
+ rlen = ((data[0] & 0x7F) << 8) | data[1];
+ plen = 0;
+ bleft -= 2; data += 2;
+ } else {
+ if (bleft < 4)
+ goto too_short;
+ /* long header format : 14 bits for length + pad length */
+ rlen = ((data[0] & 0x3F) << 8) | data[1];
+ plen = data[2];
+ bleft -= 3; data += 2;
+ }
+
+ if (*data != 0x01)
+ goto not_ssl;
+ bleft--; data++;
+
+ if (bleft < 8)
+ goto too_short;
+ version = (data[0] << 16) + data[1]; /* version: major, minor */
+ cilen = (data[2] << 8) + data[3]; /* cipher len, multiple of 3 */
+ silen = (data[4] << 8) + data[5]; /* session_id_len: 0 or 16 */
+ chlen = (data[6] << 8) + data[7]; /* 16<=challenge length<=32 */
+
+ bleft -= 8; data += 8;
+ if (cilen % 3 != 0)
+ goto not_ssl;
+ if (silen && silen != 16)
+ goto not_ssl;
+ if (chlen < 16 || chlen > 32)
+ goto not_ssl;
+ if (rlen != 9 + cilen + silen + chlen)
+ goto not_ssl;
+
+ /* focus on the remaining data length */
+ msg_len = cilen + silen + chlen + plen;
+ }
+ /* We could recursively check that the buffer ends exactly on an SSL
+ * fragment boundary and that a possible next segment is still SSL,
+ * but that's a bit pointless. However, we could still check that
+ * all the part of the request which fits in a buffer is already
+ * there.
+ */
+ if (msg_len > channel_recv_limit(req) + req->buf->data - req->buf->p)
+ msg_len = channel_recv_limit(req) + req->buf->data - req->buf->p;
+
+ if (bleft < msg_len)
+ goto too_short;
+
+ /* OK that's enough. We have at least the whole message, and we have
+ * the protocol version.
+ */
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = version;
+ smp->flags = SMP_F_VOLATILE;
+ return 1;
+
+ too_short:
+ smp->flags = SMP_F_MAY_CHANGE;
+ not_ssl:
+ return 0;
+}
+
+/* Try to extract the Server Name Indication that may be presented in a TLS
+ * client hello handshake message. The format of the message is the following
+ * (cf RFC5246 + RFC6066) :
+ * TLS frame :
+ * - uint8 type = 0x16 (Handshake)
+ * - uint16 version >= 0x0301 (TLSv1)
+ * - uint16 length (frame length)
+ * - TLS handshake :
+ * - uint8 msg_type = 0x01 (ClientHello)
+ * - uint24 length (handshake message length)
+ * - ClientHello :
+ * - uint16 client_version >= 0x0301 (TLSv1)
+ * - uint8 Random[32] (4 first ones are timestamp)
+ * - SessionID :
+ * - uint8 session_id_len (0..32) (SessionID len in bytes)
+ * - uint8 session_id[session_id_len]
+ * - CipherSuite :
+ * - uint16 cipher_len >= 2 (Cipher length in bytes)
+ * - uint16 ciphers[cipher_len/2]
+ * - CompressionMethod :
+ * - uint8 compression_len >= 1 (# of supported methods)
+ * - uint8 compression_methods[compression_len]
+ * - optional client_extension_len (in bytes)
+ * - optional sequence of ClientHelloExtensions (as many bytes as above):
+ * - uint16 extension_type = 0 for server_name
+ * - uint16 extension_len
+ * - opaque extension_data[extension_len]
+ * - uint16 server_name_list_len (# of bytes here)
+ * - opaque server_names[server_name_list_len bytes]
+ * - uint8 name_type = 0 for host_name
+ * - uint16 name_len
+ * - opaque hostname[name_len bytes]
+ */
+static int
+smp_fetch_ssl_hello_sni(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int hs_len, ext_len, bleft;
+ struct channel *chn;
+ unsigned char *data;
+
+ chn = ((smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_RES) ? &smp->strm->res : &smp->strm->req;
+ if (!chn->buf)
+ goto not_ssl_hello;
+
+ bleft = chn->buf->i;
+ data = (unsigned char *)chn->buf->p;
+
+ /* Check for SSL/TLS Handshake */
+ if (!bleft)
+ goto too_short;
+ if (*data != 0x16)
+ goto not_ssl_hello;
+
+	/* Check for SSLv3 or later (SSL version >= 3.0) in the record layer */
+ if (bleft < 3)
+ goto too_short;
+ if (data[1] < 0x03)
+ goto not_ssl_hello;
+
+ if (bleft < 5)
+ goto too_short;
+ hs_len = (data[3] << 8) + data[4];
+ if (hs_len < 1 + 3 + 2 + 32 + 1 + 2 + 2 + 1 + 1 + 2 + 2)
+ goto not_ssl_hello; /* too short to have an extension */
+
+ data += 5; /* enter TLS handshake */
+ bleft -= 5;
+
+ /* Check for a complete client hello starting at <data> */
+ if (bleft < 1)
+ goto too_short;
+ if (data[0] != 0x01) /* msg_type = Client Hello */
+ goto not_ssl_hello;
+
+ /* Check the Hello's length */
+ if (bleft < 4)
+ goto too_short;
+ hs_len = (data[1] << 16) + (data[2] << 8) + data[3];
+ if (hs_len < 2 + 32 + 1 + 2 + 2 + 1 + 1 + 2 + 2)
+ goto not_ssl_hello; /* too short to have an extension */
+
+ /* We want the full handshake here */
+ if (bleft < hs_len)
+ goto too_short;
+
+ data += 4;
+ /* Start of the ClientHello message */
+ if (data[0] < 0x03 || data[1] < 0x01) /* TLSv1 minimum */
+ goto not_ssl_hello;
+
+ ext_len = data[34]; /* session_id_len */
+ if (ext_len > 32 || ext_len > (hs_len - 35)) /* check for correct session_id len */
+ goto not_ssl_hello;
+
+ /* Jump to cipher suite */
+ hs_len -= 35 + ext_len;
+ data += 35 + ext_len;
+
+ if (hs_len < 4 || /* minimum one cipher */
+ (ext_len = (data[0] << 8) + data[1]) < 2 || /* minimum 2 bytes for a cipher */
+ ext_len > hs_len)
+ goto not_ssl_hello;
+
+ /* Jump to the compression methods */
+ hs_len -= 2 + ext_len;
+ data += 2 + ext_len;
+
+	if (hs_len < 2 || /* minimum one compression method */
+	    data[0] < 1 || data[0] > hs_len) /* minimum 1 byte per method */
+ goto not_ssl_hello;
+
+ /* Jump to the extensions */
+ hs_len -= 1 + data[0];
+ data += 1 + data[0];
+
+ if (hs_len < 2 || /* minimum one extension list length */
+ (ext_len = (data[0] << 8) + data[1]) > hs_len - 2) /* list too long */
+ goto not_ssl_hello;
+
+ hs_len = ext_len; /* limit ourselves to the extension length */
+ data += 2;
+
+ while (hs_len >= 4) {
+ int ext_type, name_type, srv_len, name_len;
+
+ ext_type = (data[0] << 8) + data[1];
+ ext_len = (data[2] << 8) + data[3];
+
+ if (ext_len > hs_len - 4) /* Extension too long */
+ goto not_ssl_hello;
+
+ if (ext_type == 0) { /* Server name */
+ if (ext_len < 2) /* need one list length */
+ goto not_ssl_hello;
+
+ srv_len = (data[4] << 8) + data[5];
+ if (srv_len < 4 || srv_len > hs_len - 6)
+ goto not_ssl_hello; /* at least 4 bytes per server name */
+
+ name_type = data[6];
+ name_len = (data[7] << 8) + data[8];
+
+ if (name_type == 0) { /* hostname */
+ smp->data.type = SMP_T_STR;
+ smp->data.u.str.str = (char *)data + 9;
+ smp->data.u.str.len = name_len;
+ smp->flags = SMP_F_VOLATILE | SMP_F_CONST;
+ return 1;
+ }
+ }
+
+ hs_len -= 4 + ext_len;
+ data += 4 + ext_len;
+ }
+ /* server name not found */
+ goto not_ssl_hello;
+
+ too_short:
+ smp->flags = SMP_F_MAY_CHANGE;
+
+ not_ssl_hello:
+
+ return 0;
+}
+
+/* Fetch the request RDP cookie identified in <cname>:<clen>, or any cookie if
+ * <clen> is null (<cname> is then ignored). It returns the data into sample
+ * <smp> as a constant string (SMP_T_STR with SMP_F_CONST). Note: this decoder
+ * only works with non-wrapping data.
+ */
+int
+fetch_rdp_cookie_name(struct stream *s, struct sample *smp, const char *cname, int clen)
+{
+ int bleft;
+ const unsigned char *data;
+
+ if (!s->req.buf)
+ return 0;
+
+ smp->flags = SMP_F_CONST;
+ smp->data.type = SMP_T_STR;
+
+ bleft = s->req.buf->i;
+ if (bleft <= 11)
+ goto too_short;
+
+ data = (const unsigned char *)s->req.buf->p + 11;
+ bleft -= 11;
+
+ if (bleft <= 7)
+ goto too_short;
+
+ if (strncasecmp((const char *)data, "Cookie:", 7) != 0)
+ goto not_cookie;
+
+ data += 7;
+ bleft -= 7;
+
+ while (bleft > 0 && *data == ' ') {
+ data++;
+ bleft--;
+ }
+
+ if (clen) {
+ if (bleft <= clen)
+ goto too_short;
+
+ if ((data[clen] != '=') ||
+ strncasecmp(cname, (const char *)data, clen) != 0)
+ goto not_cookie;
+
+ data += clen + 1;
+ bleft -= clen + 1;
+ } else {
+ while (bleft > 0 && *data != '=') {
+ if (*data == '\r' || *data == '\n')
+ goto not_cookie;
+ data++;
+ bleft--;
+ }
+
+ if (bleft < 1)
+ goto too_short;
+
+ if (*data != '=')
+ goto not_cookie;
+
+ data++;
+ bleft--;
+ }
+
+ /* data points to cookie value */
+ smp->data.u.str.str = (char *)data;
+ smp->data.u.str.len = 0;
+
+ while (bleft > 0 && *data != '\r') {
+ data++;
+ bleft--;
+ }
+
+ if (bleft < 2)
+ goto too_short;
+
+ if (data[0] != '\r' || data[1] != '\n')
+ goto not_cookie;
+
+ smp->data.u.str.len = (char *)data - smp->data.u.str.str;
+ smp->flags = SMP_F_VOLATILE | SMP_F_CONST;
+ return 1;
+
+ too_short:
+ smp->flags = SMP_F_MAY_CHANGE | SMP_F_CONST;
+ not_cookie:
+ return 0;
+}
+
+/* Fetch the request RDP cookie identified in the args, or any cookie if no arg
+ * is passed. It is usable both for ACL and for samples. Note: this decoder
+ * only works with non-wrapping data. Accepts either 0 or 1 argument. Argument
+ * is a string (cookie name), other types will lead to undefined behaviour. The
+ * returned sample is a constant string (SMP_T_STR with SMP_F_CONST).
+ */
+int
+smp_fetch_rdp_cookie(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ return fetch_rdp_cookie_name(smp->strm, smp, args ? args->data.str.str : NULL, args ? args->data.str.len : 0);
+}
+
+/* returns either 1 or 0 depending on whether an RDP cookie is found or not */
+static int
+smp_fetch_rdp_cookie_cnt(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int ret;
+
+ ret = smp_fetch_rdp_cookie(args, smp, kw, private);
+
+ if (smp->flags & SMP_F_MAY_CHANGE)
+ return 0;
+
+ smp->flags = SMP_F_VOLATILE;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = ret;
+ return 1;
+}
+
+/* extracts part of a payload with offset and length at a given position */
+static int
+smp_fetch_payload_lv(const struct arg *arg_p, struct sample *smp, const char *kw, void *private)
+{
+ unsigned int len_offset = arg_p[0].data.sint;
+ unsigned int len_size = arg_p[1].data.sint;
+ unsigned int buf_offset;
+ unsigned int buf_size = 0;
+ struct channel *chn;
+ int i;
+
+ /* Format is (len offset, len size, buf offset) or (len offset, len size) */
+ /* by default buf offset == len offset + len size */
+ /* buf offset could be absolute or relative to len offset + len size if prefixed by + or - */
+
+ chn = ((smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_RES) ? &smp->strm->res : &smp->strm->req;
+ if (!chn->buf)
+ return 0;
+
+ if (len_offset + len_size > chn->buf->i)
+ goto too_short;
+
+ for (i = 0; i < len_size; i++) {
+ buf_size = (buf_size << 8) + ((unsigned char *)chn->buf->p)[i + len_offset];
+ }
+
+ /* buf offset may be implicit, absolute or relative. If the LSB
+ * is set, then the offset is relative otherwise it is absolute.
+ */
+ buf_offset = len_offset + len_size;
+ if (arg_p[2].type == ARGT_SINT) {
+ if (arg_p[2].data.sint & 1)
+ buf_offset += arg_p[2].data.sint >> 1;
+ else
+ buf_offset = arg_p[2].data.sint >> 1;
+ }
+
+ if (!buf_size || buf_size > global.tune.bufsize || buf_offset + buf_size > global.tune.bufsize) {
+ /* will never match */
+ smp->flags = 0;
+ return 0;
+ }
+
+ if (buf_offset + buf_size > chn->buf->i)
+ goto too_short;
+
+ /* init chunk as read only */
+ smp->data.type = SMP_T_BIN;
+ smp->flags = SMP_F_VOLATILE | SMP_F_CONST;
+ chunk_initlen(&smp->data.u.str, chn->buf->p + buf_offset, 0, buf_size);
+ return 1;
+
+ too_short:
+ smp->flags = SMP_F_MAY_CHANGE | SMP_F_CONST;
+ return 0;
+}
+
+/* extracts some payload at a fixed position and length */
+static int
+smp_fetch_payload(const struct arg *arg_p, struct sample *smp, const char *kw, void *private)
+{
+ unsigned int buf_offset = arg_p[0].data.sint;
+ unsigned int buf_size = arg_p[1].data.sint;
+ struct channel *chn;
+
+ chn = ((smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_RES) ? &smp->strm->res : &smp->strm->req;
+ if (!chn->buf)
+ return 0;
+
+ if (!buf_size || buf_size > global.tune.bufsize || buf_offset + buf_size > global.tune.bufsize) {
+ /* will never match */
+ smp->flags = 0;
+ return 0;
+ }
+
+ if (buf_offset + buf_size > chn->buf->i)
+ goto too_short;
+
+ /* init chunk as read only */
+ smp->data.type = SMP_T_BIN;
+ smp->flags = SMP_F_VOLATILE | SMP_F_CONST;
+ chunk_initlen(&smp->data.u.str, chn->buf->p + buf_offset, 0, buf_size ? buf_size : (chn->buf->i - buf_offset));
+ if (!buf_size && channel_may_recv(chn) && !channel_input_closed(chn))
+ smp->flags |= SMP_F_MAY_CHANGE;
+
+ return 1;
+
+ too_short:
+ smp->flags = SMP_F_MAY_CHANGE | SMP_F_CONST;
+ return 0;
+}
+
+/* This function is used to validate the arguments passed to a "payload_lv" fetch
+ * keyword. This keyword allows two positive integers and an optional signed one,
+ * with the second one being strictly positive and the third one, if negative,
+ * being no smaller than the opposite of the sum of the two others. It is assumed
+ * that the types are already the correct ones. Returns 0 on error, non-zero if
+ * OK. If <err_msg> is not NULL, it will be filled with a pointer to an error
+ * message in case of error, that the caller is responsible for freeing. The
+ * initial location must either be freeable or NULL.
+ *
+ * Note that offset2 is stored with the SINT type, but it is not directly usable
+ * as-is: the value is contained in the 63 MSBs and the LSB is used as a flag
+ * marking the "relative" property of the value.
+ */
+int val_payload_lv(struct arg *arg, char **err_msg)
+{
+ int relative = 0;
+ const char *str;
+
+ if (arg[0].data.sint < 0) {
+ memprintf(err_msg, "payload offset1 must be positive");
+ return 0;
+ }
+
+ if (!arg[1].data.sint) {
+ memprintf(err_msg, "payload length must be > 0");
+ return 0;
+ }
+
+ if (arg[2].type == ARGT_STR && arg[2].data.str.len > 0) {
+ if (arg[2].data.str.str[0] == '+' || arg[2].data.str.str[0] == '-')
+ relative = 1;
+ str = arg[2].data.str.str;
+ arg[2].type = ARGT_SINT;
+ arg[2].data.sint = read_int64(&str, str + arg[2].data.str.len);
+ if (*str != '\0') {
+ memprintf(err_msg, "payload offset2 is not a number");
+ return 0;
+ }
+ if (arg[0].data.sint + arg[1].data.sint + arg[2].data.sint < 0) {
+ memprintf(err_msg, "payload offset2 too negative");
+ return 0;
+ }
+ if (relative)
+ arg[2].data.sint = ( arg[2].data.sint << 1 ) + 1;
+ }
+ return 1;
+}
+
+/************************************************************************/
+/* All supported sample and ACL keywords must be declared here. */
+/************************************************************************/
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Note: fetches that may return multiple types must be declared as the lowest
+ * common denominator, the type that can be casted into all other ones. For
+ * instance IPv4/IPv6 must be declared IPv4.
+ */
+static struct sample_fetch_kw_list smp_kws = {ILH, {
+ { "payload", smp_fetch_payload, ARG2(2,SINT,SINT), NULL, SMP_T_BIN, SMP_USE_L6REQ|SMP_USE_L6RES },
+ { "payload_lv", smp_fetch_payload_lv, ARG3(2,SINT,SINT,STR), val_payload_lv, SMP_T_BIN, SMP_USE_L6REQ|SMP_USE_L6RES },
+ { "rdp_cookie", smp_fetch_rdp_cookie, ARG1(0,STR), NULL, SMP_T_STR, SMP_USE_L6REQ },
+ { "rdp_cookie_cnt", smp_fetch_rdp_cookie_cnt, ARG1(0,STR), NULL, SMP_T_SINT, SMP_USE_L6REQ },
+ { "rep_ssl_hello_type", smp_fetch_ssl_hello_type, 0, NULL, SMP_T_SINT, SMP_USE_L6RES },
+ { "req_len", smp_fetch_len, 0, NULL, SMP_T_SINT, SMP_USE_L6REQ },
+ { "req_ssl_hello_type", smp_fetch_ssl_hello_type, 0, NULL, SMP_T_SINT, SMP_USE_L6REQ },
+ { "req_ssl_sni", smp_fetch_ssl_hello_sni, 0, NULL, SMP_T_STR, SMP_USE_L6REQ },
+ { "req_ssl_ver", smp_fetch_req_ssl_ver, 0, NULL, SMP_T_SINT, SMP_USE_L6REQ },
+
+ { "req.len", smp_fetch_len, 0, NULL, SMP_T_SINT, SMP_USE_L6REQ },
+ { "req.payload", smp_fetch_payload, ARG2(2,SINT,SINT), NULL, SMP_T_BIN, SMP_USE_L6REQ },
+ { "req.payload_lv", smp_fetch_payload_lv, ARG3(2,SINT,SINT,STR), val_payload_lv, SMP_T_BIN, SMP_USE_L6REQ },
+ { "req.rdp_cookie", smp_fetch_rdp_cookie, ARG1(0,STR), NULL, SMP_T_STR, SMP_USE_L6REQ },
+ { "req.rdp_cookie_cnt", smp_fetch_rdp_cookie_cnt, ARG1(0,STR), NULL, SMP_T_SINT, SMP_USE_L6REQ },
+ { "req.ssl_ec_ext", smp_fetch_req_ssl_ec_ext, 0, NULL, SMP_T_BOOL, SMP_USE_L6REQ },
+ { "req.ssl_st_ext", smp_fetch_req_ssl_st_ext, 0, NULL, SMP_T_SINT, SMP_USE_L6REQ },
+ { "req.ssl_hello_type", smp_fetch_ssl_hello_type, 0, NULL, SMP_T_SINT, SMP_USE_L6REQ },
+ { "req.ssl_sni", smp_fetch_ssl_hello_sni, 0, NULL, SMP_T_STR, SMP_USE_L6REQ },
+ { "req.ssl_ver", smp_fetch_req_ssl_ver, 0, NULL, SMP_T_SINT, SMP_USE_L6REQ },
+ { "res.len", smp_fetch_len, 0, NULL, SMP_T_SINT, SMP_USE_L6RES },
+ { "res.payload", smp_fetch_payload, ARG2(2,SINT,SINT), NULL, SMP_T_BIN, SMP_USE_L6RES },
+ { "res.payload_lv", smp_fetch_payload_lv, ARG3(2,SINT,SINT,STR), val_payload_lv, SMP_T_BIN, SMP_USE_L6RES },
+ { "res.ssl_hello_type", smp_fetch_ssl_hello_type, 0, NULL, SMP_T_SINT, SMP_USE_L6RES },
+ { "wait_end", smp_fetch_wait_end, 0, NULL, SMP_T_BOOL, SMP_USE_INTRN },
+ { /* END */ },
+}};
+
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Please take care of keeping this list alphabetically sorted.
+ */
+static struct acl_kw_list acl_kws = {ILH, {
+ { "payload", "req.payload", PAT_MATCH_BIN },
+ { "payload_lv", "req.payload_lv", PAT_MATCH_BIN },
+ { "req_rdp_cookie", "req.rdp_cookie", PAT_MATCH_STR },
+ { "req_rdp_cookie_cnt", "req.rdp_cookie_cnt", PAT_MATCH_INT },
+ { "req_ssl_sni", "req.ssl_sni", PAT_MATCH_STR },
+ { "req_ssl_ver", "req.ssl_ver", PAT_MATCH_INT, pat_parse_dotted_ver },
+ { "req.ssl_ver", "req.ssl_ver", PAT_MATCH_INT, pat_parse_dotted_ver },
+ { /* END */ },
+}};
+
+
+__attribute__((constructor))
+static void __payload_init(void)
+{
+ sample_register_fetches(&smp_kws);
+ acl_register_keywords(&acl_kws);
+}
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Peer synchro management.
+ *
+ * Copyright 2010 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <errno.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <sys/socket.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/time.h>
+
+#include <types/global.h>
+#include <types/listener.h>
+#include <types/obj_type.h>
+#include <types/peers.h>
+
+#include <proto/acl.h>
+#include <proto/applet.h>
+#include <proto/channel.h>
+#include <proto/fd.h>
+#include <proto/frontend.h>
+#include <proto/log.h>
+#include <proto/hdr_idx.h>
+#include <proto/proto_tcp.h>
+#include <proto/proto_http.h>
+#include <proto/proxy.h>
+#include <proto/session.h>
+#include <proto/stream.h>
+#include <proto/signal.h>
+#include <proto/stick_table.h>
+#include <proto/stream_interface.h>
+#include <proto/task.h>
+
+
+/*******************************/
+/* Current peer learning state */
+/*******************************/
+
+/******************************/
+/* Current peers section resync state */
+/******************************/
+#define PEERS_F_RESYNC_LOCAL 0x00000001 /* Learn from local finished or no more needed */
+#define PEERS_F_RESYNC_REMOTE 0x00000002 /* Learn from remote finished or no more needed */
+#define PEERS_F_RESYNC_ASSIGN 0x00000004 /* A peer was assigned to learn our lesson */
+#define PEERS_F_RESYNC_PROCESS 0x00000008 /* The assigned peer was requested for resync */
+#define PEERS_F_DONOTSTOP 0x00010000 /* Main table sync task blocks the process during soft
+                                        stop to push data to the new process */
+
+#define PEERS_RESYNC_STATEMASK (PEERS_F_RESYNC_LOCAL|PEERS_F_RESYNC_REMOTE)
+#define PEERS_RESYNC_FROMLOCAL 0x00000000
+#define PEERS_RESYNC_FROMREMOTE PEERS_F_RESYNC_LOCAL
+#define PEERS_RESYNC_FINISHED (PEERS_F_RESYNC_LOCAL|PEERS_F_RESYNC_REMOTE)
+
+/***********************************/
+/* Current shared table sync state */
+/***********************************/
+#define SHTABLE_F_TEACH_STAGE1 0x00000001 /* Teach state 1 complete */
+#define SHTABLE_F_TEACH_STAGE2 0x00000002 /* Teach state 2 complete */
+
+/******************************/
+/* Remote peer teaching state */
+/******************************/
+#define PEER_F_TEACH_PROCESS 0x00000001 /* Teach a lesson to current peer */
+#define PEER_F_TEACH_FINISHED 0x00000008 /* Teaching concluded (wait for confirm) */
+#define PEER_F_TEACH_COMPLETE 0x00000010 /* Everything we know was already taught to the current peer, used only for a local peer */
+#define PEER_F_LEARN_ASSIGN 0x00000100 /* Current peer was assigned for a lesson */
+#define PEER_F_LEARN_NOTUP2DATE 0x00000200 /* Learn from peer finished but peer is not up to date */
+
+#define PEER_TEACH_RESET ~(PEER_F_TEACH_PROCESS|PEER_F_TEACH_FINISHED) /* PEER_F_TEACH_COMPLETE should never be reset */
+#define PEER_LEARN_RESET ~(PEER_F_LEARN_ASSIGN|PEER_F_LEARN_NOTUP2DATE)
+
+/*****************************/
+/* Sync message class */
+/*****************************/
+enum {
+ PEER_MSG_CLASS_CONTROL = 0,
+ PEER_MSG_CLASS_ERROR,
+ PEER_MSG_CLASS_STICKTABLE = 10,
+ PEER_MSG_CLASS_RESERVED = 255,
+};
+
+/*****************************/
+/* control message types */
+/*****************************/
+enum {
+ PEER_MSG_CTRL_RESYNCREQ = 0,
+ PEER_MSG_CTRL_RESYNCFINISHED,
+ PEER_MSG_CTRL_RESYNCPARTIAL,
+ PEER_MSG_CTRL_RESYNCCONFIRM,
+};
+
+/*****************************/
+/* error message types */
+/*****************************/
+enum {
+ PEER_MSG_ERR_PROTOCOL = 0,
+ PEER_MSG_ERR_SIZELIMIT,
+};
+
+
+/**************************************/
+/* Stick table sync message types     */
+/* Note: ids >= 128 mean the message  */
+/* also contains data                 */
+/**************************************/
+enum {
+ PEER_MSG_STKT_UPDATE = 128,
+ PEER_MSG_STKT_INCUPDATE,
+ PEER_MSG_STKT_DEFINE,
+ PEER_MSG_STKT_SWITCH,
+ PEER_MSG_STKT_ACK,
+};
+
+/**********************************/
+/* Peer Session IO handler states */
+/**********************************/
+
+enum {
+ PEER_SESS_ST_ACCEPT = 0, /* Initial state for sessions created by an accept, must be zero! */
+ PEER_SESS_ST_GETVERSION, /* Validate supported protocol version */
+ PEER_SESS_ST_GETHOST, /* Validate that the host ID corresponds to the local host id */
+ PEER_SESS_ST_GETPEER, /* Validate that the peer ID corresponds to a known remote peer id */
+ /* after this point, data may have been exchanged */
+ PEER_SESS_ST_SENDSUCCESS, /* Send ret code 200 (success) and wait for message */
+ PEER_SESS_ST_CONNECT, /* Initial state for session create on a connect, push presentation into buffer */
+ PEER_SESS_ST_GETSTATUS, /* Wait for the welcome message */
+ PEER_SESS_ST_WAITMSG, /* Wait for data messages */
+ PEER_SESS_ST_EXIT, /* Exit with status code */
+ PEER_SESS_ST_ERRPROTO, /* Send error proto message before exit */
+ PEER_SESS_ST_ERRSIZE, /* Send error size message before exit */
+ PEER_SESS_ST_END, /* Killed session */
+};
+
+/***************************************************/
+/* Peer Session status code - part of the protocol */
+/***************************************************/
+
+#define PEER_SESS_SC_CONNECTCODE 100 /* connect in progress */
+#define PEER_SESS_SC_CONNECTEDCODE 110 /* tcp connect success */
+
+#define PEER_SESS_SC_SUCCESSCODE 200 /* accept or connect successful */
+
+#define PEER_SESS_SC_TRYAGAIN 300 /* try again later */
+
+#define PEER_SESS_SC_ERRPROTO 501 /* error protocol */
+#define PEER_SESS_SC_ERRVERSION 502 /* unknown protocol version */
+#define PEER_SESS_SC_ERRHOST 503 /* bad host name */
+#define PEER_SESS_SC_ERRPEER 504 /* unknown peer */
+
+#define PEER_SESSION_PROTO_NAME "HAProxyS"
+
+struct peers *peers = NULL;
+static void peer_session_forceshutdown(struct stream *stream);
+
+int intencode(uint64_t i, char **str) {
+ int idx = 0;
+ unsigned char *msg;
+
+ if (!*str)
+ return 0;
+
+ msg = (unsigned char *)*str;
+ if (i < 240) {
+ msg[0] = (unsigned char)i;
+ *str = (char *)&msg[idx+1];
+ return (idx+1);
+ }
+
+ msg[idx] = (unsigned char)i | 240;
+ i = (i - 240) >> 4;
+ while (i >= 128) {
+ msg[++idx] = (unsigned char)i | 128;
+ i = (i - 128) >> 7;
+ }
+ msg[++idx] = (unsigned char)i;
+ *str = (char *)&msg[idx+1];
+ return (idx+1);
+}
+
+
+/* This function returns the decoded integer, or 0
+ if decoding failed.
+ On entry, *str points to the beginning of the integer to decode;
+ after decoding, *str points past the end of the encoded integer,
+ or is set to NULL if the end of the buffer was reached. */
+uint64_t intdecode(char **str, char *end) {
+ uint64_t i;
+ int idx = 0;
+ unsigned char *msg;
+
+ if (!*str)
+ return 0;
+
+ msg = (unsigned char *)*str;
+ if (msg >= (unsigned char *)end) {
+ *str = NULL;
+ return 0;
+ }
+
+ if (msg[idx] < 240) {
+ *str = (char *)&msg[idx+1];
+ return msg[idx];
+ }
+ i = msg[idx];
+ do {
+ idx++;
+ if (&msg[idx] >= (unsigned char *)end) {
+ *str = NULL;
+ return 0;
+ }
+ i += (uint64_t)msg[idx] << (4 + 7*(idx-1));
+ }
+ while (msg[idx] >= 128);
+ *str = (char *)&msg[idx+1];
+ return i;
+}
+
+/*
+ * This prepares the data update message on the stick session <ts>; <st> is the
+ * considered stick table.
+ * <msg> is a buffer of <size> bytes to receive the data message content.
+ * If the function returns 0, the caller should consider we were unable to
+ * encode this message (TODO: check size)
+ */
+static int peer_prepare_updatemsg(struct stksess *ts, struct shared_table *st, char *msg, size_t size, int use_identifier)
+{
+ uint32_t netinteger;
+ unsigned short datalen;
+ char *cursor, *datamsg;
+ unsigned int data_type;
+ void *data_ptr;
+
+ cursor = datamsg = msg + 1 + 5;
+
+ /* construct message */
+
+ /* check if we need to send the update identifier */
+ if (!st->last_pushed || ts->upd.key < st->last_pushed || ((ts->upd.key - st->last_pushed) != 1)) {
+ use_identifier = 1;
+ }
+
+ /* encode update identifier if needed */
+ if (use_identifier) {
+ netinteger = htonl(ts->upd.key);
+ memcpy(cursor, &netinteger, sizeof(netinteger));
+ cursor += sizeof(netinteger);
+ }
+
+ /* encode the key */
+ if (st->table->type == SMP_T_STR) {
+ int stlen = strlen((char *)ts->key.key);
+
+ intencode(stlen, &cursor);
+ memcpy(cursor, ts->key.key, stlen);
+ cursor += stlen;
+ }
+ else if (st->table->type == SMP_T_SINT) {
+ netinteger = htonl(*((uint32_t *)ts->key.key));
+ memcpy(cursor, &netinteger, sizeof(netinteger));
+ cursor += sizeof(netinteger);
+ }
+ else {
+ memcpy(cursor, ts->key.key, st->table->key_size);
+ cursor += st->table->key_size;
+ }
+
+ /* encode values */
+ for (data_type = 0 ; data_type < STKTABLE_DATA_TYPES ; data_type++) {
+
+ data_ptr = stktable_data_ptr(st->table, ts, data_type);
+ if (data_ptr) {
+ switch (stktable_data_types[data_type].std_type) {
+ case STD_T_SINT: {
+ int data;
+
+ data = stktable_data_cast(data_ptr, std_t_sint);
+ intencode(data, &cursor);
+ break;
+ }
+ case STD_T_UINT: {
+ unsigned int data;
+
+ data = stktable_data_cast(data_ptr, std_t_uint);
+ intencode(data, &cursor);
+ break;
+ }
+ case STD_T_ULL: {
+ unsigned long long data;
+
+ data = stktable_data_cast(data_ptr, std_t_ull);
+ intencode(data, &cursor);
+ break;
+ }
+ case STD_T_FRQP: {
+ struct freq_ctr_period *frqp;
+
+ frqp = &stktable_data_cast(data_ptr, std_t_frqp);
+ intencode((unsigned int)(now_ms - frqp->curr_tick), &cursor);
+ intencode(frqp->curr_ctr, &cursor);
+ intencode(frqp->prev_ctr, &cursor);
+ break;
+ }
+ }
+ }
+ }
+
+ /* Compute datalen */
+ datalen = (cursor - datamsg);
+
+ /* prepare message header */
+ msg[0] = PEER_MSG_CLASS_STICKTABLE;
+ if (use_identifier)
+ msg[1] = PEER_MSG_STKT_UPDATE;
+ else
+ msg[1] = PEER_MSG_STKT_INCUPDATE;
+
+ cursor = &msg[2];
+ intencode(datalen, &cursor);
+
+ /* move data after header */
+ memmove(cursor, datamsg, datalen);
+
+ /* return header size + data_len */
+ return (cursor - msg) + datalen;
+}
+
+/*
+ * This prepares the switch table message for the targeted shared table <st>.
+ * <msg> is a buffer of <size> bytes to receive the data message content.
+ * If the function returns 0, the caller should consider we were unable to
+ * encode this message (TODO: check size)
+ */
+static int peer_prepare_switchmsg(struct shared_table *st, char *msg, size_t size)
+{
+ int len;
+ unsigned short datalen;
+ char *cursor, *datamsg;
+ uint64_t data = 0;
+ unsigned int data_type;
+
+ cursor = datamsg = msg + 2 + 5;
+
+ /* Encode data */
+
+ /* encode local id */
+ intencode(st->local_id, &cursor);
+
+ /* encode table name */
+ len = strlen(st->table->id);
+ intencode(len, &cursor);
+ memcpy(cursor, st->table->id, len);
+ cursor += len;
+
+ /* encode table type */
+
+ intencode(st->table->type, &cursor);
+
+ /* encode table key size */
+ intencode(st->table->key_size, &cursor);
+
+ /* encode available known data types in table */
+ for (data_type = 0 ; data_type < STKTABLE_DATA_TYPES ; data_type++) {
+ if (st->table->data_ofs[data_type]) {
+ switch (stktable_data_types[data_type].std_type) {
+ case STD_T_SINT:
+ case STD_T_UINT:
+ case STD_T_ULL:
+ case STD_T_FRQP:
+ data |= 1 << data_type;
+ break;
+ }
+ }
+ }
+ intencode(data, &cursor);
+
+ /* Compute datalen */
+ datalen = (cursor - datamsg);
+
+ /* prepare message header */
+ msg[0] = PEER_MSG_CLASS_STICKTABLE;
+ msg[1] = PEER_MSG_STKT_DEFINE;
+ cursor = &msg[2];
+ intencode(datalen, &cursor);
+
+ /* move data after header */
+ memmove(cursor, datamsg, datalen);
+
+ /* return header size + data_len */
+ return (cursor - msg) + datalen;
+}
+
+/*
+ * This prepares the acknowledgement message for the targeted shared table <st>.
+ * <msg> is a buffer of <size> bytes to receive the data message content.
+ * If the function returns 0, the caller should consider we were unable to
+ * encode this message (TODO: check size)
+ */
+static int peer_prepare_ackmsg(struct shared_table *st, char *msg, size_t size)
+{
+ unsigned short datalen;
+ char *cursor, *datamsg;
+ uint32_t netinteger;
+
+ cursor = datamsg = msg + 2 + 5;
+
+ intencode(st->remote_id, &cursor);
+ netinteger = htonl(st->last_get);
+ memcpy(cursor, &netinteger, sizeof(netinteger));
+ cursor += sizeof(netinteger);
+
+ /* Compute datalen */
+ datalen = (cursor - datamsg);
+
+ /* prepare message header */
+ msg[0] = PEER_MSG_CLASS_STICKTABLE;
+ msg[1] = PEER_MSG_STKT_ACK;
+ cursor = &msg[2];
+ intencode(datalen, &cursor);
+
+ /* move data after header */
+ memmove(cursor, datamsg, datalen);
+
+ /* return header size + data_len */
+ return (cursor - msg) + datalen;
+}
+
+/*
+ * Callback to release a session with a peer
+ */
+static void peer_session_release(struct appctx *appctx)
+{
+ struct stream_interface *si = appctx->owner;
+ struct stream *s = si_strm(si);
+ struct peer *peer = (struct peer *)appctx->ctx.peers.ptr;
+ struct peers *peers = (struct peers *)strm_fe(s)->parent;
+
+ /* appctx->ctx.peers.ptr does not point to a validated peer session yet */
+ if (appctx->st0 < PEER_SESS_ST_SENDSUCCESS)
+ return;
+
+ /* peer session identified */
+ if (peer) {
+ if (peer->stream == s) {
+ /* Re-init current table pointers to force announcement on re-connect */
+ peer->remote_table = peer->last_local_table = NULL;
+ peer->stream = NULL;
+ peer->appctx = NULL;
+ if (peer->flags & PEER_F_LEARN_ASSIGN) {
+ /* unassign current peer for learning */
+ peer->flags &= ~(PEER_F_LEARN_ASSIGN);
+ peers->flags &= ~(PEERS_F_RESYNC_ASSIGN|PEERS_F_RESYNC_PROCESS);
+
+ /* reschedule a resync */
+ peers->resync_timeout = tick_add(now_ms, MS_TO_TICKS(5000));
+ }
+ /* reset teaching and learning flags to 0 */
+ peer->flags &= PEER_TEACH_RESET;
+ peer->flags &= PEER_LEARN_RESET;
+ }
+ task_wakeup(peers->sync_task, TASK_WOKEN_MSG);
+ }
+}
+
+
+/*
+ * IO handler to handle message exchange with a peer
+ */
+static void peer_io_handler(struct appctx *appctx)
+{
+ struct stream_interface *si = appctx->owner;
+ struct stream *s = si_strm(si);
+ struct peers *curpeers = (struct peers *)strm_fe(s)->parent;
+ int reql = 0;
+ int repl = 0;
+
+ while (1) {
+switchstate:
+ switch(appctx->st0) {
+ case PEER_SESS_ST_ACCEPT:
+ appctx->ctx.peers.ptr = NULL;
+ appctx->st0 = PEER_SESS_ST_GETVERSION;
+ /* fall through */
+ case PEER_SESS_ST_GETVERSION:
+ reql = bo_getline(si_oc(si), trash.str, trash.size);
+ if (reql <= 0) { /* closed or EOL not found */
+ if (reql == 0)
+ goto out;
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+ if (trash.str[reql-1] != '\n') {
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+ else if (reql > 1 && (trash.str[reql-2] == '\r'))
+ trash.str[reql-2] = 0;
+ else
+ trash.str[reql-1] = 0;
+
+ bo_skip(si_oc(si), reql);
+
+ /* test version */
+ if (strcmp(PEER_SESSION_PROTO_NAME " 2.0", trash.str) != 0) {
+ appctx->st0 = PEER_SESS_ST_EXIT;
+ appctx->st1 = PEER_SESS_SC_ERRVERSION;
+ /* test protocol */
+ if (strncmp(PEER_SESSION_PROTO_NAME " ", trash.str, strlen(PEER_SESSION_PROTO_NAME)+1) != 0)
+ appctx->st1 = PEER_SESS_SC_ERRPROTO;
+ goto switchstate;
+ }
+
+ appctx->st0 = PEER_SESS_ST_GETHOST;
+ /* fall through */
+ case PEER_SESS_ST_GETHOST:
+ reql = bo_getline(si_oc(si), trash.str, trash.size);
+ if (reql <= 0) { /* closed or EOL not found */
+ if (reql == 0)
+ goto out;
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+ if (trash.str[reql-1] != '\n') {
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+ else if (reql > 1 && (trash.str[reql-2] == '\r'))
+ trash.str[reql-2] = 0;
+ else
+ trash.str[reql-1] = 0;
+
+ bo_skip(si_oc(si), reql);
+
+ /* test hostname match */
+ if (strcmp(localpeer, trash.str) != 0) {
+ appctx->st0 = PEER_SESS_ST_EXIT;
+ appctx->st1 = PEER_SESS_SC_ERRHOST;
+ goto switchstate;
+ }
+
+ appctx->st0 = PEER_SESS_ST_GETPEER;
+ /* fall through */
+ case PEER_SESS_ST_GETPEER: {
+ struct peer *curpeer;
+ char *p;
+ reql = bo_getline(si_oc(si), trash.str, trash.size);
+ if (reql <= 0) { /* closed or EOL not found */
+ if (reql == 0)
+ goto out;
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+ if (trash.str[reql-1] != '\n') {
+ /* Incomplete line, we quit */
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+ else if (reql > 1 && (trash.str[reql-2] == '\r'))
+ trash.str[reql-2] = 0;
+ else
+ trash.str[reql-1] = 0;
+
+ bo_skip(si_oc(si), reql);
+
+ /* parse line "<peer name> <pid> <relative_pid>" */
+ p = strchr(trash.str, ' ');
+ if (!p) {
+ appctx->st0 = PEER_SESS_ST_EXIT;
+ appctx->st1 = PEER_SESS_SC_ERRPROTO;
+ goto switchstate;
+ }
+ *p = 0;
+
+ /* lookup known peer */
+ for (curpeer = curpeers->remote; curpeer; curpeer = curpeer->next) {
+ if (strcmp(curpeer->id, trash.str) == 0)
+ break;
+ }
+
+ /* if unknown peer */
+ if (!curpeer) {
+ appctx->st0 = PEER_SESS_ST_EXIT;
+ appctx->st1 = PEER_SESS_SC_ERRPEER;
+ goto switchstate;
+ }
+
+ if (curpeer->stream && curpeer->stream != s) {
+ if (curpeer->local) {
+ /* Local connection, reply a retry */
+ appctx->st0 = PEER_SESS_ST_EXIT;
+ appctx->st1 = PEER_SESS_SC_TRYAGAIN;
+ goto switchstate;
+ }
+ peer_session_forceshutdown(curpeer->stream);
+ }
+ curpeer->stream = s;
+ curpeer->appctx = appctx;
+ appctx->ctx.peers.ptr = curpeer;
+ appctx->st0 = PEER_SESS_ST_SENDSUCCESS;
+ /* fall through */
+ }
+ case PEER_SESS_ST_SENDSUCCESS: {
+ struct peer *curpeer = (struct peer *)appctx->ctx.peers.ptr;
+ struct shared_table *st;
+
+ repl = snprintf(trash.str, trash.size, "%d\n", PEER_SESS_SC_SUCCESSCODE);
+ repl = bi_putblk(si_ic(si), trash.str, repl);
+ if (repl <= 0) {
+ if (repl == -1)
+ goto full;
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+
+ /* Register status code */
+ curpeer->statuscode = PEER_SESS_SC_SUCCESSCODE;
+
+ /* Awake main task */
+ task_wakeup(curpeers->sync_task, TASK_WOKEN_MSG);
+
+ /* Init confirm counter */
+ curpeer->confirm = 0;
+
+ /* Init cursors */
+ for (st = curpeer->tables; st ; st = st->next) {
+ st->last_get = st->last_acked = 0;
+ st->teaching_origin = st->last_pushed = st->update;
+ }
+
+ /* reset teaching and learning flags to 0 */
+ curpeer->flags &= PEER_TEACH_RESET;
+ curpeer->flags &= PEER_LEARN_RESET;
+
+ /* if current peer is local */
+ if (curpeer->local) {
+ /* if the current host needs a resync from local and no peer is assigned yet */
+ if ((peers->flags & PEERS_RESYNC_STATEMASK) == PEERS_RESYNC_FROMLOCAL &&
+ !(peers->flags & PEERS_F_RESYNC_ASSIGN)) {
+ /* assign local peer for a lesson, consider lesson already requested */
+ curpeer->flags |= PEER_F_LEARN_ASSIGN;
+ peers->flags |= (PEERS_F_RESYNC_ASSIGN|PEERS_F_RESYNC_PROCESS);
+ }
+
+ }
+ else if ((peers->flags & PEERS_RESYNC_STATEMASK) == PEERS_RESYNC_FROMREMOTE &&
+ !(peers->flags & PEERS_F_RESYNC_ASSIGN)) {
+ /* assign peer for a lesson */
+ curpeer->flags |= PEER_F_LEARN_ASSIGN;
+ peers->flags |= PEERS_F_RESYNC_ASSIGN;
+ }
+
+
+ /* switch to waiting message state */
+ appctx->st0 = PEER_SESS_ST_WAITMSG;
+ goto switchstate;
+ }
+ case PEER_SESS_ST_CONNECT: {
+ struct peer *curpeer = (struct peer *)appctx->ctx.peers.ptr;
+
+ /* Send headers */
+ repl = snprintf(trash.str, trash.size,
+ PEER_SESSION_PROTO_NAME " 2.0\n%s\n%s %d %d\n",
+ curpeer->id,
+ localpeer,
+ (int)getpid(),
+ relative_pid);
+
+ if (repl >= trash.size) {
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+
+ repl = bi_putblk(si_ic(si), trash.str, repl);
+ if (repl <= 0) {
+ if (repl == -1)
+ goto full;
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+
+ /* switch to the waiting statuscode state */
+ appctx->st0 = PEER_SESS_ST_GETSTATUS;
+ /* fall through */
+ }
+ case PEER_SESS_ST_GETSTATUS: {
+ struct peer *curpeer = (struct peer *)appctx->ctx.peers.ptr;
+ struct shared_table *st;
+
+ if (si_ic(si)->flags & CF_WRITE_PARTIAL)
+ curpeer->statuscode = PEER_SESS_SC_CONNECTEDCODE;
+
+ reql = bo_getline(si_oc(si), trash.str, trash.size);
+ if (reql <= 0) { /* closed or EOL not found */
+ if (reql == 0)
+ goto out;
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+ if (trash.str[reql-1] != '\n') {
+ /* Incomplete line, we quit */
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+ else if (reql > 1 && (trash.str[reql-2] == '\r'))
+ trash.str[reql-2] = 0;
+ else
+ trash.str[reql-1] = 0;
+
+ bo_skip(si_oc(si), reql);
+
+ /* Register status code */
+ curpeer->statuscode = atoi(trash.str);
+
+ /* Awake main task */
+ task_wakeup(peers->sync_task, TASK_WOKEN_MSG);
+
+ /* If status code is success */
+ if (curpeer->statuscode == PEER_SESS_SC_SUCCESSCODE) {
+ /* Init cursors */
+ for (st = curpeer->tables; st ; st = st->next) {
+ st->last_get = st->last_acked = 0;
+ st->teaching_origin = st->last_pushed = st->update;
+ }
+
+ /* Init confirm counter */
+ curpeer->confirm = 0;
+
+ /* reset teaching and learning flags to 0 */
+ curpeer->flags &= PEER_TEACH_RESET;
+ curpeer->flags &= PEER_LEARN_RESET;
+
+ /* If current peer is local */
+ if (curpeer->local) {
+ /* flag to start to teach lesson */
+ curpeer->flags |= PEER_F_TEACH_PROCESS;
+
+ }
+ else if ((peers->flags & PEERS_RESYNC_STATEMASK) == PEERS_RESYNC_FROMREMOTE &&
+ !(peers->flags & PEERS_F_RESYNC_ASSIGN)) {
+ /* If peer is remote and resync from remote is needed,
+ and no peer currently assigned */
+
+ /* assign peer for a lesson */
+ curpeer->flags |= PEER_F_LEARN_ASSIGN;
+ peers->flags |= PEERS_F_RESYNC_ASSIGN;
+ }
+
+ }
+ else {
+ /* Status code is not success, abort */
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+ appctx->st0 = PEER_SESS_ST_WAITMSG;
+ /* fall through */
+ }
+ case PEER_SESS_ST_WAITMSG: {
+ struct peer *curpeer = (struct peer *)appctx->ctx.peers.ptr;
+ struct stksess *ts, *newts = NULL;
+ uint32_t msg_len = 0;
+ char *msg_cur = trash.str;
+ char *msg_end = trash.str;
+ unsigned char msg_head[7];
+ int totl = 0;
+
+ reql = bo_getblk(si_oc(si), (char *)msg_head, 2*sizeof(unsigned char), totl);
+ if (reql <= 0) /* closed or not enough data */
+ goto incomplete;
+
+ totl += reql;
+
+ if (msg_head[1] >= 128) {
+ /* Read and Decode message length */
+ reql = bo_getblk(si_oc(si), (char *)&msg_head[2], sizeof(unsigned char), totl);
+ if (reql <= 0) /* closed */
+ goto incomplete;
+
+ totl += reql;
+
+ if (msg_head[2] < 240) {
+ msg_len = msg_head[2];
+ }
+ else {
+ int i;
+ char *cur;
+ char *end;
+
+ for (i = 3 ; i < sizeof(msg_head) ; i++) {
+ reql = bo_getblk(si_oc(si), (char *)&msg_head[i], sizeof(char), totl);
+ if (reql <= 0) /* closed */
+ goto incomplete;
+
+ totl += reql;
+
+ if (!(msg_head[i] & 0x80))
+ break;
+ }
+
+ if (i == sizeof(msg_head)) {
+ /* malformed message */
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+
+ }
+ end = (char *)msg_head + sizeof(msg_head);
+ cur = (char *)&msg_head[2];
+ msg_len = intdecode(&cur, end);
+ if (!cur) {
+ /* malformed message */
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+ }
+
+
+ /* Read message content */
+ if (msg_len) {
+ if (msg_len > trash.size) {
+ /* message too large to fit in the buffer, abort */
+ appctx->st0 = PEER_SESS_ST_ERRSIZE;
+ goto switchstate;
+ }
+
+ reql = bo_getblk(si_oc(si), trash.str, msg_len, totl);
+ if (reql <= 0) /* closed */
+ goto incomplete;
+ totl += reql;
+
+ msg_end += msg_len;
+ }
+ }
+
+ if (msg_head[0] == PEER_MSG_CLASS_CONTROL) {
+ if (msg_head[1] == PEER_MSG_CTRL_RESYNCREQ) {
+ struct shared_table *st;
+ /* Reset message: remote need resync */
+
+ /* prepare tables for a global push */
+ for (st = curpeer->tables; st; st = st->next) {
+ st->teaching_origin = st->last_pushed = st->table->update;
+ st->flags = 0;
+ }
+
+ /* reset teaching flags to 0 */
+ curpeer->flags &= PEER_TEACH_RESET;
+
+ /* flag to start to teach lesson */
+ curpeer->flags |= PEER_F_TEACH_PROCESS;
+
+
+ }
+ else if (msg_head[1] == PEER_MSG_CTRL_RESYNCFINISHED) {
+
+ if (curpeer->flags & PEER_F_LEARN_ASSIGN) {
+ curpeer->flags &= ~PEER_F_LEARN_ASSIGN;
+ peers->flags &= ~(PEERS_F_RESYNC_ASSIGN|PEERS_F_RESYNC_PROCESS);
+ peers->flags |= (PEERS_F_RESYNC_LOCAL|PEERS_F_RESYNC_REMOTE);
+ }
+ curpeer->confirm++;
+ }
+ else if (msg_head[1] == PEER_MSG_CTRL_RESYNCPARTIAL) {
+
+ if (curpeer->flags & PEER_F_LEARN_ASSIGN) {
+ curpeer->flags &= ~PEER_F_LEARN_ASSIGN;
+ peers->flags &= ~(PEERS_F_RESYNC_ASSIGN|PEERS_F_RESYNC_PROCESS);
+
+ curpeer->flags |= PEER_F_LEARN_NOTUP2DATE;
+ peers->resync_timeout = tick_add(now_ms, MS_TO_TICKS(5000));
+ task_wakeup(peers->sync_task, TASK_WOKEN_MSG);
+ }
+ curpeer->confirm++;
+ }
+ else if (msg_head[1] == PEER_MSG_CTRL_RESYNCCONFIRM) {
+
+ /* If stopping state */
+ if (stopping) {
+ /* Close session, push resync no more needed */
+ curpeer->flags |= PEER_F_TEACH_COMPLETE;
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+
+ /* reset teaching flags to 0 */
+ curpeer->flags &= PEER_TEACH_RESET;
+ }
+ }
+ else if (msg_head[0] == PEER_MSG_CLASS_STICKTABLE) {
+ if (msg_head[1] == PEER_MSG_STKT_DEFINE) {
+ int table_id_len;
+ struct shared_table *st;
+ int table_type;
+ int table_keylen;
+ int table_id;
+ uint64_t table_data;
+
+ table_id = intdecode(&msg_cur, msg_end);
+ if (!msg_cur) {
+ /* malformed message */
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+
+ table_id_len = intdecode(&msg_cur, msg_end);
+ if (!msg_cur) {
+ /* malformed message */
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+
+ curpeer->remote_table = NULL;
+ if (!table_id_len || (msg_cur + table_id_len) >= msg_end) {
+ /* malformed message */
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+
+ for (st = curpeer->tables; st; st = st->next) {
+ /* Reset IDs */
+ if (st->remote_id == table_id)
+ st->remote_id = 0;
+
+ if (!curpeer->remote_table
+ && (table_id_len == strlen(st->table->id))
+ && (memcmp(st->table->id, msg_cur, table_id_len) == 0)) {
+ curpeer->remote_table = st;
+ }
+ }
+
+ if (!curpeer->remote_table) {
+ goto ignore_msg;
+ }
+
+ msg_cur += table_id_len;
+ if (msg_cur >= msg_end) {
+ /* malformed message */
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+
+ table_type = intdecode(&msg_cur, msg_end);
+ if (!msg_cur) {
+ /* malformed message */
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+
+ table_keylen = intdecode(&msg_cur, msg_end);
+ if (!msg_cur) {
+ /* malformed message */
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+
+ table_data = intdecode(&msg_cur, msg_end);
+ if (!msg_cur) {
+ /* malformed message */
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+
+ if (curpeer->remote_table->table->type != table_type
+ || curpeer->remote_table->table->key_size != table_keylen) {
+ curpeer->remote_table = NULL;
+ goto ignore_msg;
+ }
+
+ curpeer->remote_table->remote_data = table_data;
+ curpeer->remote_table->remote_id = table_id;
+ }
+ else if (msg_head[1] == PEER_MSG_STKT_SWITCH) {
+ struct shared_table *st;
+ int table_id;
+
+ table_id = intdecode(&msg_cur, msg_end);
+ if (!msg_cur) {
+ /* malformed message */
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+ curpeer->remote_table = NULL;
+ for (st = curpeer->tables; st; st = st->next) {
+ if (st->remote_id == table_id) {
+ curpeer->remote_table = st;
+ break;
+ }
+ }
+
+ }
+ else if (msg_head[1] == PEER_MSG_STKT_UPDATE
+ || msg_head[1] == PEER_MSG_STKT_INCUPDATE) {
+ struct shared_table *st = curpeer->remote_table;
+ uint32_t update;
+ unsigned int data_type;
+ void *data_ptr;
+
+ /* Here we have data message */
+ if (!st)
+ goto ignore_msg;
+
+ if (msg_head[1] == PEER_MSG_STKT_UPDATE) {
+ if (msg_len < sizeof(update)) {
+ /* malformed message */
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+ memcpy(&update, msg_cur, sizeof(update));
+ msg_cur += sizeof(update);
+ st->last_get = ntohl(update);
+ }
+ else {
+ st->last_get++;
+ }
+
+ newts = stksess_new(st->table, NULL);
+ if (!newts)
+ goto ignore_msg;
+
+ if (st->table->type == SMP_T_STR) {
+ unsigned int to_read, to_store;
+
+ to_read = intdecode(&msg_cur, msg_end);
+ if (!msg_cur) {
+ /* malformed message */
+ stksess_free(st->table, newts);
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+ to_store = MIN(to_read, st->table->key_size - 1);
+ if (msg_cur + to_store > msg_end) {
+ /* malformed message */
+ stksess_free(st->table, newts);
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+
+ memcpy(newts->key.key, msg_cur, to_store);
+ newts->key.key[to_store] = 0;
+ msg_cur += to_read;
+ }
+ else if (st->table->type == SMP_T_SINT) {
+ unsigned int netinteger;
+
+ if (msg_cur + sizeof(netinteger) > msg_end) {
+ /* malformed message */
+ stksess_free(st->table, newts);
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+ memcpy(&netinteger, msg_cur, sizeof(netinteger));
+ netinteger = ntohl(netinteger);
+ memcpy(newts->key.key, &netinteger, sizeof(netinteger));
+ msg_cur += sizeof(netinteger);
+ }
+ else {
+ if (msg_cur + st->table->key_size > msg_end) {
+ /* malformed message */
+ stksess_free(st->table, newts);
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+ memcpy(newts->key.key, msg_cur, st->table->key_size);
+ msg_cur += st->table->key_size;
+ }
+
+ /* lookup for existing entry */
+ ts = stktable_lookup(st->table, newts);
+ if (ts) {
+ /* the entry already exist, we can free ours */
+ stktable_touch(st->table, ts, 0);
+ stksess_free(st->table, newts);
+ newts = NULL;
+ }
+ else {
+ struct eb32_node *eb;
+
+ /* create new entry */
+ ts = stktable_store(st->table, newts, 0);
+ newts = NULL; /* don't reuse it */
+
+ ts->upd.key = (++st->table->update) + (2147483648U);
+ eb = eb32_insert(&st->table->updates, &ts->upd);
+ if (eb != &ts->upd) {
+ eb32_delete(eb);
+ eb32_insert(&st->table->updates, &ts->upd);
+ }
+ }
+
+ for (data_type = 0 ; data_type < STKTABLE_DATA_TYPES ; data_type++) {
+
+ if ((1 << data_type) & st->remote_data) {
+ switch (stktable_data_types[data_type].std_type) {
+ case STD_T_SINT: {
+ int data;
+
+ data = intdecode(&msg_cur, msg_end);
+ if (!msg_cur) {
+ /* malformed message */
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+
+ data_ptr = stktable_data_ptr(st->table, ts, data_type);
+ if (data_ptr)
+ stktable_data_cast(data_ptr, std_t_sint) = data;
+ break;
+ }
+ case STD_T_UINT: {
+ unsigned int data;
+
+ data = intdecode(&msg_cur, msg_end);
+ if (!msg_cur) {
+ /* malformed message */
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+
+ data_ptr = stktable_data_ptr(st->table, ts, data_type);
+ if (data_ptr)
+ stktable_data_cast(data_ptr, std_t_uint) = data;
+ break;
+ }
+ case STD_T_ULL: {
+ unsigned long long data;
+
+ data = intdecode(&msg_cur, msg_end);
+ if (!msg_cur) {
+ /* malformed message */
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+
+ data_ptr = stktable_data_ptr(st->table, ts, data_type);
+ if (data_ptr)
+ stktable_data_cast(data_ptr, std_t_ull) = data;
+ break;
+ }
+ case STD_T_FRQP: {
+ struct freq_ctr_period data;
+
+ data.curr_tick = tick_add(now_ms, intdecode(&msg_cur, msg_end));
+ if (!msg_cur) {
+ /* malformed message */
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+ data.curr_ctr = intdecode(&msg_cur, msg_end);
+ if (!msg_cur) {
+ /* malformed message */
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+ data.prev_ctr = intdecode(&msg_cur, msg_end);
+ if (!msg_cur) {
+ /* malformed message */
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+
+ data_ptr = stktable_data_ptr(st->table, ts, data_type);
+ if (data_ptr)
+ stktable_data_cast(data_ptr, std_t_frqp) = data;
+ break;
+ }
+ }
+ }
+ }
+ }
+ else if (msg_head[1] == PEER_MSG_STKT_ACK) {
+ /* ack message */
+ uint32_t table_id ;
+ uint32_t update;
+ struct shared_table *st;
+
+ table_id = intdecode(&msg_cur, msg_end);
+ if (!msg_cur || (msg_cur + sizeof(update) > msg_end)) {
+ /* malformed message */
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+ memcpy(&update, msg_cur, sizeof(update));
+ update = ntohl(update);
+
+ for (st = curpeer->tables; st; st = st->next) {
+ if (st->local_id == table_id) {
+ st->update = update;
+ break;
+ }
+ }
+ }
+ }
+ else if (msg_head[0] == PEER_MSG_CLASS_RESERVED) {
+ appctx->st0 = PEER_SESS_ST_ERRPROTO;
+ goto switchstate;
+ }
+
+ignore_msg:
+ /* skip consumed message */
+ bo_skip(si_oc(si), totl);
+ /* loop on that state to peek next message */
+ goto switchstate;
+
+incomplete:
+ /* we get here when a bo_getblk() returns <= 0 in reql */
+
+ if (reql < 0) {
+ /* there was an error */
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+
+
+ /* Confirm finished or partial messages */
+ while (curpeer->confirm) {
+ unsigned char msg[2];
+
+ /* There is a confirm message to send */
+ msg[0] = PEER_MSG_CLASS_CONTROL;
+ msg[1] = PEER_MSG_CTRL_RESYNCCONFIRM;
+
+ /* message to buffer */
+ repl = bi_putblk(si_ic(si), (char *)msg, sizeof(msg));
+ if (repl <= 0) {
+ /* no more write possible */
+ if (repl == -1)
+ goto full;
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+ curpeer->confirm--;
+ }
+
+
+ /* Need to request a resync */
+ if ((curpeer->flags & PEER_F_LEARN_ASSIGN) &&
+ (peers->flags & PEERS_F_RESYNC_ASSIGN) &&
+ !(peers->flags & PEERS_F_RESYNC_PROCESS)) {
+ unsigned char msg[2];
+
+ /* Current peer was elected to request a resync */
+ msg[0] = PEER_MSG_CLASS_CONTROL;
+ msg[1] = PEER_MSG_CTRL_RESYNCREQ;
+
+ /* message to buffer */
+ repl = bi_putblk(si_ic(si), (char *)msg, sizeof(msg));
+ if (repl <= 0) {
+ /* no more write possible */
+ if (repl == -1)
+ goto full;
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+ peers->flags |= PEERS_F_RESYNC_PROCESS;
+ }
+
+ /* Nothing to read, now we start to write */
+
+ if (curpeer->tables) {
+ struct shared_table *st;
+ struct shared_table *last_local_table;
+
+ last_local_table = curpeer->last_local_table;
+ if (!last_local_table)
+ last_local_table = curpeer->tables;
+ st = last_local_table->next;
+
+ while (1) {
+ if (!st)
+ st = curpeer->tables;
+
+ /* Some updates remain to be acked */
+ if (st->last_get != st->last_acked) {
+ int msglen;
+
+ msglen = peer_prepare_ackmsg(st, trash.str, trash.size);
+ if (!msglen) {
+ /* internal error: message does not fit in trash */
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+
+ /* message to buffer */
+ repl = bi_putblk(si_ic(si), trash.str, msglen);
+ if (repl <= 0) {
+ /* no more write possible */
+ if (repl == -1) {
+ goto full;
+ }
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+ st->last_acked = st->last_get;
+ }
+
+ if (!(curpeer->flags & PEER_F_TEACH_PROCESS)) {
+ if (!(curpeer->flags & PEER_F_LEARN_ASSIGN) &&
+ ((int)(st->last_pushed - st->table->localupdate) < 0)) {
+ struct eb32_node *eb;
+ int new_pushed;
+
+ if (st != curpeer->last_local_table) {
+ int msglen;
+
+ msglen = peer_prepare_switchmsg(st, trash.str, trash.size);
+ if (!msglen) {
+ /* internal error: message does not fit in trash */
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+
+ /* message to buffer */
+ repl = bi_putblk(si_ic(si), trash.str, msglen);
+ if (repl <= 0) {
+ /* no more write possible */
+ if (repl == -1) {
+ goto full;
+ }
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+ curpeer->last_local_table = st;
+ }
+
+ /* We force new_pushed to 1 so the identifier is included in the update message */
+ new_pushed = 1;
+ eb = eb32_lookup_ge(&st->table->updates, st->last_pushed+1);
+ while (1) {
+ uint32_t msglen;
+ struct stksess *ts;
+
+ /* push local updates */
+ if (!eb) {
+ eb = eb32_first(&st->table->updates);
+ if (!eb || ((int)(eb->key - st->last_pushed) <= 0)) {
+ st->table->commitupdate = st->last_pushed = st->table->localupdate;
+ break;
+ }
+ }
+
+ if ((int)(eb->key - st->table->localupdate) > 0) {
+ st->table->commitupdate = st->last_pushed = st->table->localupdate;
+ break;
+ }
+
+ ts = eb32_entry(eb, struct stksess, upd);
+ msglen = peer_prepare_updatemsg(ts, st, trash.str, trash.size, new_pushed);
+ if (!msglen) {
+ /* internal error: message does not fit in trash */
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+
+ /* message to buffer */
+ repl = bi_putblk(si_ic(si), trash.str, msglen);
+ if (repl <= 0) {
+ /* no more write possible */
+ if (repl == -1) {
+ goto full;
+ }
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+ st->last_pushed = ts->upd.key;
+ if ((int)(st->last_pushed - st->table->commitupdate) > 0)
+ st->table->commitupdate = st->last_pushed;
+ /* identifier may not be needed in the next update message */
+ new_pushed = 0;
+
+ eb = eb32_next(eb);
+ }
+ }
+ }
+ else {
+ if (!(st->flags & SHTABLE_F_TEACH_STAGE1)) {
+ struct eb32_node *eb;
+ int new_pushed;
+
+ if (st != curpeer->last_local_table) {
+ int msglen;
+
+ msglen = peer_prepare_switchmsg(st, trash.str, trash.size);
+ if (!msglen) {
+ /* internal error: message does not fit in trash */
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+
+ /* message to buffer */
+ repl = bi_putblk(si_ic(si), trash.str, msglen);
+ if (repl <= 0) {
+ /* no more write possible */
+ if (repl == -1) {
+ goto full;
+ }
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+ curpeer->last_local_table = st;
+ }
+
+ /* We force new_pushed to 1 so the identifier is included in the update message */
+ new_pushed = 1;
+ eb = eb32_lookup_ge(&st->table->updates, st->last_pushed+1);
+ while (1) {
+ uint32_t msglen;
+ struct stksess *ts;
+
+ /* push local updates */
+ if (!eb) {
+ st->flags |= SHTABLE_F_TEACH_STAGE1;
+ eb = eb32_first(&st->table->updates);
+ if (eb)
+ st->last_pushed = eb->key - 1;
+ break;
+ }
+
+ ts = eb32_entry(eb, struct stksess, upd);
+ msglen = peer_prepare_updatemsg(ts, st, trash.str, trash.size, new_pushed);
+ if (!msglen) {
+ /* internal error: message does not fit in trash */
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+
+ /* message to buffer */
+ repl = bi_putblk(si_ic(si), trash.str, msglen);
+ if (repl <= 0) {
+ /* no more write possible */
+ if (repl == -1) {
+ goto full;
+ }
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+ st->last_pushed = ts->upd.key;
+ /* identifier may not be needed in the next update message */
+ new_pushed = 0;
+
+ eb = eb32_next(eb);
+ }
+ }
+
+ if (!(st->flags & SHTABLE_F_TEACH_STAGE2)) {
+ struct eb32_node *eb;
+ int new_pushed;
+
+ if (st != curpeer->last_local_table) {
+ int msglen;
+
+ msglen = peer_prepare_switchmsg(st, trash.str, trash.size);
+ if (!msglen) {
+ /* internal error: message does not fit in trash */
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+
+ /* message to buffer */
+ repl = bi_putblk(si_ic(si), trash.str, msglen);
+ if (repl <= 0) {
+ /* no more write possible */
+ if (repl == -1) {
+ goto full;
+ }
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+ curpeer->last_local_table = st;
+ }
+
+ /* We force new_pushed to 1 so the identifier is included in the update message */
+ new_pushed = 1;
+ eb = eb32_lookup_ge(&st->table->updates, st->last_pushed+1);
+ while (1) {
+ uint32_t msglen;
+ struct stksess *ts;
+
+ /* push local updates */
+ if (!eb || eb->key > st->teaching_origin) {
+ st->flags |= SHTABLE_F_TEACH_STAGE2;
+ eb = eb32_first(&st->table->updates);
+ if (eb)
+ st->last_pushed = eb->key - 1;
+ break;
+ }
+
+ ts = eb32_entry(eb, struct stksess, upd);
+ msglen = peer_prepare_updatemsg(ts, st, trash.str, trash.size, new_pushed);
+ if (!msglen) {
+ /* internal error: message does not fit in trash */
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+
+ /* message to buffer */
+ repl = bi_putblk(si_ic(si), trash.str, msglen);
+ if (repl <= 0) {
+ /* no more write possible */
+ if (repl == -1) {
+ goto full;
+ }
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+ st->last_pushed = ts->upd.key;
+ /* identifier may not be needed in the next update message */
+ new_pushed = 0;
+
+ eb = eb32_next(eb);
+ }
+ }
+ }
+
+ if (st == last_local_table)
+ break;
+ st = st->next;
+ }
+ }
+
+
+ if ((curpeer->flags & PEER_F_TEACH_PROCESS) && !(curpeer->flags & PEER_F_TEACH_FINISHED)) {
+ unsigned char msg[2];
+
+ /* Teaching is finished: notify the peer whether the resync was full or partial */
+ msg[0] = PEER_MSG_CLASS_CONTROL;
+ msg[1] = ((peers->flags & PEERS_RESYNC_STATEMASK) == PEERS_RESYNC_FINISHED) ? PEER_MSG_CTRL_RESYNCFINISHED : PEER_MSG_CTRL_RESYNCPARTIAL;
+ /* process final lesson message */
+ repl = bi_putblk(si_ic(si), (char *)msg, sizeof(msg));
+ if (repl <= 0) {
+ /* no more write possible */
+ if (repl == -1)
+ goto full;
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+ /* flag finished message sent */
+ curpeer->flags |= PEER_F_TEACH_FINISHED;
+ }
+
+ /* nothing more to do */
+ goto out;
+ }
+ case PEER_SESS_ST_EXIT:
+ repl = snprintf(trash.str, trash.size, "%d\n", appctx->st1);
+ if (bi_putblk(si_ic(si), trash.str, repl) == -1)
+ goto full;
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ case PEER_SESS_ST_ERRSIZE: {
+ unsigned char msg[2];
+
+ msg[0] = PEER_MSG_CLASS_ERROR;
+ msg[1] = PEER_MSG_ERR_SIZELIMIT;
+
+ if (bi_putblk(si_ic(si), (char *)msg, sizeof(msg)) == -1)
+ goto full;
+ appctx->st0 = PEER_SESS_ST_END;
+ goto switchstate;
+ }
+ case PEER_SESS_ST_ERRPROTO: {
+ unsigned char msg[2];
+
+ msg[0] = PEER_MSG_CLASS_ERROR;
+ msg[1] = PEER_MSG_ERR_PROTOCOL;
+
+ if (bi_putblk(si_ic(si), (char *)msg, sizeof(msg)) == -1)
+ goto full;
+ appctx->st0 = PEER_SESS_ST_END;
+ /* fall through */
+ }
+ case PEER_SESS_ST_END: {
+ si_shutw(si);
+ si_shutr(si);
+ si_ic(si)->flags |= CF_READ_NULL;
+ goto out;
+ }
+ }
+ }
+out:
+ si_oc(si)->flags |= CF_READ_DONTWAIT;
+ return;
+full:
+ si_applet_cant_put(si);
+ goto out;
+}
+
+static struct applet peer_applet = {
+ .obj_type = OBJ_TYPE_APPLET,
+ .name = "<PEER>", /* used for logging */
+ .fct = peer_io_handler,
+ .release = peer_session_release,
+};
+
+/*
+ * Use this function to force a close of a peer session
+ */
+static void peer_session_forceshutdown(struct stream * stream)
+{
+ struct appctx *appctx = NULL;
+ struct peer *ps;
+
+ int i;
+
+ for (i = 0; i <= 1; i++) {
+ appctx = objt_appctx(stream->si[i].end);
+ if (!appctx)
+ continue;
+ if (appctx->applet != &peer_applet)
+ continue;
+ break;
+ }
+
+ if (!appctx)
+ return;
+
+ ps = (struct peer *)appctx->ctx.peers.ptr;
+ /* we're killing a connection, we must apply a random delay before
+ * retrying otherwise the other end will do the same and we can loop
+ * for a while.
+ */
+ if (ps)
+ ps->reconnect = tick_add(now_ms, MS_TO_TICKS(50 + random() % 2000));
+
+ /* call release to reinit resync states if needed */
+ peer_session_release(appctx);
+ appctx->st0 = PEER_SESS_ST_END;
+ appctx->ctx.peers.ptr = NULL;
+ task_wakeup(stream->task, TASK_WOKEN_MSG);
+}
+
+/* Pre-configures a peers frontend to accept incoming connections */
+void peers_setup_frontend(struct proxy *fe)
+{
+ fe->last_change = now.tv_sec;
+ fe->cap = PR_CAP_FE;
+ fe->maxconn = 0;
+ fe->conn_retries = CONN_RETRIES;
+ fe->timeout.client = MS_TO_TICKS(5000);
+ fe->accept = frontend_accept;
+ fe->default_target = &peer_applet.obj_type;
+ fe->options2 |= PR_O2_INDEPSTR | PR_O2_SMARTCON | PR_O2_SMARTACC;
+ fe->bind_proc = 0; /* will be filled by users */
+}
+
+/*
+ * Create a new peer session in assigned state (connect will start automatically)
+ */
+static struct stream *peer_session_create(struct peers *peers, struct peer *peer)
+{
+ struct listener *l = LIST_NEXT(&peers->peers_fe->conf.listeners, struct listener *, by_fe);
+ struct proxy *p = (struct proxy *)l->frontend; /* attached frontend */
+ struct appctx *appctx;
+ struct session *sess;
+ struct stream *s;
+ struct task *t;
+ struct connection *conn;
+
+ peer->reconnect = tick_add(now_ms, MS_TO_TICKS(5000));
+ peer->statuscode = PEER_SESS_SC_CONNECTCODE;
+ s = NULL;
+
+ appctx = appctx_new(&peer_applet);
+ if (!appctx)
+ goto out_close;
+
+ appctx->st0 = PEER_SESS_ST_CONNECT;
+ appctx->ctx.peers.ptr = (void *)peer;
+
+ sess = session_new(p, l, &appctx->obj_type);
+ if (!sess) {
+ Alert("out of memory in peer_session_create().\n");
+ goto out_free_appctx;
+ }
+
+ if ((t = task_new()) == NULL) {
+ Alert("out of memory in peer_session_create().\n");
+ goto out_free_sess;
+ }
+ t->nice = l->nice;
+
+ if ((s = stream_new(sess, t, &appctx->obj_type)) == NULL) {
+ Alert("Failed to initialize stream in peer_session_create().\n");
+ goto out_free_task;
+ }
+
+ /* The steps below are normally performed by fe->accept().
+ */
+ s->flags = SF_ASSIGNED|SF_ADDR_SET;
+
+ /* applet is waiting for data */
+ si_applet_cant_get(&s->si[0]);
+ appctx_wakeup(appctx);
+
+ /* initiate an outgoing connection */
+ si_set_state(&s->si[1], SI_ST_ASS);
+
+ /* automatically prepare the stream interface to connect to the
+ * pre-initialized connection in si->conn.
+ */
+ if (unlikely((conn = conn_new()) == NULL))
+ goto out_free_strm;
+
+ conn_prepare(conn, peer->proto, peer->xprt);
+ si_attach_conn(&s->si[1], conn);
+
+ conn->target = s->target = &s->be->obj_type;
+ memcpy(&conn->addr.to, &peer->addr, sizeof(conn->addr.to));
+ s->do_log = NULL;
+ s->uniq_id = 0;
+
+ s->res.flags |= CF_READ_DONTWAIT;
+
+ l->nbconn++; /* warning! right now, it's up to the handler to decrease this */
+ p->feconn++; /* beconn will be increased later */
+ jobs++;
+ if (!(s->sess->listener->options & LI_O_UNLIMITED))
+ actconn++;
+ totalconn++;
+
+ peer->appctx = appctx;
+ peer->stream = s;
+ return s;
+
+ /* Error unrolling */
+ out_free_strm:
+ LIST_DEL(&s->list);
+ pool_free2(pool2_stream, s);
+ out_free_task:
+ task_free(t);
+ out_free_sess:
+ session_free(sess);
+ out_free_appctx:
+ appctx_free(appctx);
+ out_close:
+ return s;
+}
+
+/*
+ * Task processing function that manages peer reconnection and wakes
+ * peer session tasks up on local updates.
+ */
+static struct task *process_peer_sync(struct task * task)
+{
+ struct peers *peers = (struct peers *)task->context;
+ struct peer *ps;
+ struct shared_table *st;
+
+ task->expire = TICK_ETERNITY;
+
+ if (!peers->peers_fe) {
+ /* this one was never started, kill it */
+ signal_unregister_handler(peers->sighandler);
+ task_delete(peers->sync_task);
+ task_free(peers->sync_task);
+ peers->sync_task = NULL;
+ return NULL;
+ }
+
+ if (!stopping) {
+ /* Normal case (not a soft stop) */
+
+ if (((peers->flags & PEERS_RESYNC_STATEMASK) == PEERS_RESYNC_FROMLOCAL) &&
+ (!nb_oldpids || tick_is_expired(peers->resync_timeout, now_ms)) &&
+ !(peers->flags & PEERS_F_RESYNC_ASSIGN)) {
+ /* Resync from the local peer is needed:
+ * no peer was assigned for the lesson,
+ * and either no old local peer was found
+ * or the resync timeout expired */
+
+ /* flag no more resync from local, to try resync from remotes */
+ peers->flags |= PEERS_F_RESYNC_LOCAL;
+
+ /* reschedule a resync */
+ peers->resync_timeout = tick_add(now_ms, MS_TO_TICKS(5000));
+ }
+
+ /* For each session */
+ for (ps = peers->remote; ps; ps = ps->next) {
+ /* For each remote peer */
+ if (!ps->local) {
+ if (!ps->stream) {
+ /* no active stream */
+ if (ps->statuscode == 0 ||
+ ((ps->statuscode == PEER_SESS_SC_CONNECTCODE ||
+ ps->statuscode == PEER_SESS_SC_SUCCESSCODE ||
+ ps->statuscode == PEER_SESS_SC_CONNECTEDCODE) &&
+ tick_is_expired(ps->reconnect, now_ms))) {
+ /* connection never tried,
+ * or previous stream was established successfully,
+ * or previous stream failed during connection
+ * and the reconnection timer has expired */
+
+ /* retry a connect */
+ ps->stream = peer_session_create(peers, ps);
+ }
+ else if (!tick_is_expired(ps->reconnect, now_ms)) {
+ /* If previous session failed during connection
+ * but reconnection timer is not expired */
+
+ /* reschedule task for reconnect */
+ task->expire = tick_first(task->expire, ps->reconnect);
+ }
+ /* else do nothing */
+ } /* !ps->stream */
+ else if (ps->statuscode == PEER_SESS_SC_SUCCESSCODE) {
+ /* current stream is active and established */
+ if (((peers->flags & PEERS_RESYNC_STATEMASK) == PEERS_RESYNC_FROMREMOTE) &&
+ !(peers->flags & PEERS_F_RESYNC_ASSIGN) &&
+ !(ps->flags & PEER_F_LEARN_NOTUP2DATE)) {
+ /* Resync from a remote is needed
+ * and no peer was assigned for lesson
+ * and current peer may be up2date */
+
+ /* assign peer for the lesson */
+ ps->flags |= PEER_F_LEARN_ASSIGN;
+ peers->flags |= PEERS_F_RESYNC_ASSIGN;
+
+ /* awake peer stream task to handle a request of resync */
+ appctx_wakeup(ps->appctx);
+ }
+ else {
+ /* Awake session if there is data to push */
+ for (st = ps->tables; st ; st = st->next) {
+ if ((int)(st->last_pushed - st->table->localupdate) < 0) {
+ /* awake peer stream task to push local updates */
+ appctx_wakeup(ps->appctx);
+ break;
+ }
+ }
+ }
+ /* else do nothing */
+ } /* SUCCESSCODE */
+ } /* !ps->peer->local */
+ } /* for */
+
+ /* Resync from remotes expired: consider resync is finished */
+ if (((peers->flags & PEERS_RESYNC_STATEMASK) == PEERS_RESYNC_FROMREMOTE) &&
+ !(peers->flags & PEERS_F_RESYNC_ASSIGN) &&
+ tick_is_expired(peers->resync_timeout, now_ms)) {
+ /* Resync from a remote peer is needed,
+ * no peer was assigned for the lesson
+ * and the resync timeout expired */
+
+ /* flag no more resync from remote, consider resync is finished */
+ peers->flags |= PEERS_F_RESYNC_REMOTE;
+ }
+
+ if ((peers->flags & PEERS_RESYNC_STATEMASK) != PEERS_RESYNC_FINISHED) {
+ /* Resync not finished */
+ /* reschedule the task for the resync timeout, to end the resync if needed */
+ task->expire = tick_first(task->expire, peers->resync_timeout);
+ }
+ } /* !stopping */
+ else {
+ /* soft stop case */
+ if (task->state & TASK_WOKEN_SIGNAL) {
+ /* We've just received the signal */
+ if (!(peers->flags & PEERS_F_DONOTSTOP)) {
+ /* add DO NOT STOP flag if not present */
+ jobs++;
+ peers->flags |= PEERS_F_DONOTSTOP;
+ ps = peers->local;
+ for (st = ps->tables; st ; st = st->next)
+ st->table->syncing++;
+ }
+
+ /* disconnect all connected peers */
+ for (ps = peers->remote; ps; ps = ps->next) {
+ if (ps->stream) {
+ peer_session_forceshutdown(ps->stream);
+ ps->stream = NULL;
+ ps->appctx = NULL;
+ }
+ }
+ }
+
+ ps = peers->local;
+ if (ps->flags & PEER_F_TEACH_COMPLETE) {
+ if (peers->flags & PEERS_F_DONOTSTOP) {
+ /* resync of new process was complete, current process can die now */
+ jobs--;
+ peers->flags &= ~PEERS_F_DONOTSTOP;
+ for (st = ps->tables; st ; st = st->next)
+ st->table->syncing--;
+ }
+ }
+ else if (!ps->stream) {
+ /* If stream is not active */
+ if (ps->statuscode == 0 ||
+ ps->statuscode == PEER_SESS_SC_SUCCESSCODE ||
+ ps->statuscode == PEER_SESS_SC_CONNECTEDCODE ||
+ ps->statuscode == PEER_SESS_SC_TRYAGAIN) {
+ /* connection never tried,
+ * or previous stream was successfully established,
+ * or previous TCP connect succeeded but the init state was incomplete,
+ * or during the previous connect the peer replied with a "try again" status code */
+
+ /* connect to the peer */
+ peer_session_create(peers, ps);
+ }
+ else {
+ /* Other error cases */
+ if (peers->flags & PEERS_F_DONOTSTOP) {
+ /* unable to resync new process, current process can die now */
+ jobs--;
+ peers->flags &= ~PEERS_F_DONOTSTOP;
+ for (st = ps->tables; st ; st = st->next)
+ st->table->syncing--;
+ }
+ }
+ }
+ else if (ps->statuscode == PEER_SESS_SC_SUCCESSCODE ) {
+ /* current stream is active and established:
+ * wake the stream up to push the remaining local updates */
+ for (st = ps->tables; st ; st = st->next) {
+ if ((int)(st->last_pushed - st->table->localupdate) < 0) {
+ /* awake peer stream task to push local updates */
+ appctx_wakeup(ps->appctx);
+ break;
+ }
+ }
+ }
+ } /* stopping */
+ /* Wakeup for re-connect */
+ return task;
+}
+
+
+/*
+ * Initialize the sync task and the signal handler for a peers section.
+ */
+void peers_init_sync(struct peers *peers)
+{
+ struct peer * curpeer;
+ struct listener *listener;
+
+ for (curpeer = peers->remote; curpeer; curpeer = curpeer->next) {
+ peers->peers_fe->maxconn += 3;
+ }
+
+ list_for_each_entry(listener, &peers->peers_fe->conf.listeners, by_fe)
+ listener->maxconn = peers->peers_fe->maxconn;
+ peers->sync_task = task_new();
+ peers->sync_task->process = process_peer_sync;
+ peers->sync_task->expire = TICK_ETERNITY;
+ peers->sync_task->context = (void *)peers;
+ peers->sighandler = signal_register_task(0, peers->sync_task, 0);
+ task_wakeup(peers->sync_task, TASK_WOKEN_INIT);
+}
+
+
+
+/*
+ * Function used to register a table for sync on a group of peers.
+ */
+void peers_register_table(struct peers *peers, struct stktable *table)
+{
+ struct shared_table *st;
+ struct peer * curpeer;
+ int id = 0;
+
+ for (curpeer = peers->remote; curpeer; curpeer = curpeer->next) {
+ st = (struct shared_table *)calloc(1,sizeof(struct shared_table));
+ st->table = table;
+ st->next = curpeer->tables;
+ if (curpeer->tables)
+ id = curpeer->tables->local_id;
+ st->local_id = id + 1;
+
+ curpeer->tables = st;
+ }
+
+ table->sync_task = peers->sync_task;
+}
+
--- /dev/null
+/*
+ * Pipe management
+ *
+ * Copyright 2000-2009 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include <common/config.h>
+#include <common/memory.h>
+
+#include <types/global.h>
+#include <types/pipe.h>
+
+struct pool_head *pool2_pipe = NULL;
+struct pipe *pipes_live = NULL; /* pipes which are still ready to use */
+int pipes_used = 0; /* # of pipes in use (2 fds each) */
+int pipes_free = 0; /* # of pipes unused */
+
+/* allocate memory for the pipes */
+static void init_pipe()
+{
+ pool2_pipe = create_pool("pipe", sizeof(struct pipe), MEM_F_SHARED);
+ pipes_used = 0;
+ pipes_free = 0;
+}
+
+/* return a pre-allocated empty pipe. Try to allocate one if there isn't any
+ * left. NULL is returned if a pipe could not be allocated.
+ */
+struct pipe *get_pipe()
+{
+ struct pipe *ret;
+ int pipefd[2];
+
+ if (likely(pipes_live)) {
+ ret = pipes_live;
+ pipes_live = pipes_live->next;
+ pipes_free--;
+ pipes_used++;
+ return ret;
+ }
+
+ if (pipes_used >= global.maxpipes)
+ return NULL;
+
+ ret = pool_alloc2(pool2_pipe);
+ if (!ret)
+ return NULL;
+
+ if (pipe(pipefd) < 0) {
+ pool_free2(pool2_pipe, ret);
+ return NULL;
+ }
+#ifdef F_SETPIPE_SZ
+ if (global.tune.pipesize)
+ fcntl(pipefd[0], F_SETPIPE_SZ, global.tune.pipesize);
+#endif
+ ret->data = 0;
+ ret->prod = pipefd[1];
+ ret->cons = pipefd[0];
+ ret->next = NULL;
+ pipes_used++;
+ return ret;
+}
+
+/* destroy a pipe, possibly because an error was encountered on it. Its FDs
+ * will be closed and it will not be reinjected into the live pool.
+ */
+void kill_pipe(struct pipe *p)
+{
+ close(p->prod);
+ close(p->cons);
+ pool_free2(pool2_pipe, p);
+ pipes_used--;
+ return;
+}
+
+/* put back an unused pipe into the live pool. If it still has data in it, it is
+ * closed and not reinjected into the live pool. The caller is not allowed to
+ * use it once released.
+ */
+void put_pipe(struct pipe *p)
+{
+ if (p->data) {
+ kill_pipe(p);
+ return;
+ }
+ p->next = pipes_live;
+ pipes_live = p;
+ pipes_free++;
+ pipes_used--;
+}
+
+
+__attribute__((constructor))
+static void __pipe_module_init(void)
+{
+ init_pipe();
+}
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * HTTP protocol analyzer
+ *
+ * Copyright 2000-2011 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <ctype.h>
+#include <errno.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <syslog.h>
+#include <time.h>
+
+#include <sys/socket.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+
+#include <netinet/tcp.h>
+
+#include <common/base64.h>
+#include <common/chunk.h>
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/debug.h>
+#include <common/memory.h>
+#include <common/mini-clist.h>
+#include <common/standard.h>
+#include <common/ticks.h>
+#include <common/time.h>
+#include <common/uri_auth.h>
+#include <common/version.h>
+
+#include <types/capture.h>
+#include <types/global.h>
+
+#include <proto/acl.h>
+#include <proto/action.h>
+#include <proto/arg.h>
+#include <proto/auth.h>
+#include <proto/backend.h>
+#include <proto/channel.h>
+#include <proto/checks.h>
+#include <proto/compression.h>
+#include <proto/dumpstats.h>
+#include <proto/fd.h>
+#include <proto/frontend.h>
+#include <proto/log.h>
+#include <proto/hdr_idx.h>
+#include <proto/pattern.h>
+#include <proto/proto_tcp.h>
+#include <proto/proto_http.h>
+#include <proto/proxy.h>
+#include <proto/queue.h>
+#include <proto/sample.h>
+#include <proto/server.h>
+#include <proto/stream.h>
+#include <proto/stream_interface.h>
+#include <proto/task.h>
+#include <proto/pattern.h>
+#include <proto/vars.h>
+
+const char HTTP_100[] =
+ "HTTP/1.1 100 Continue\r\n\r\n";
+
+const struct chunk http_100_chunk = {
+ .str = (char *)&HTTP_100,
+ .len = sizeof(HTTP_100)-1
+};
+
+/* Warning: no "connection" header is provided with the 3xx messages below */
+const char *HTTP_301 =
+ "HTTP/1.1 301 Moved Permanently\r\n"
+ "Content-length: 0\r\n"
+ "Location: "; /* not terminated since it will be concatenated with the URL */
+
+const char *HTTP_302 =
+ "HTTP/1.1 302 Found\r\n"
+ "Cache-Control: no-cache\r\n"
+ "Content-length: 0\r\n"
+ "Location: "; /* not terminated since it will be concatenated with the URL */
+
+/* same as 302 except that the browser MUST retry with the GET method */
+const char *HTTP_303 =
+ "HTTP/1.1 303 See Other\r\n"
+ "Cache-Control: no-cache\r\n"
+ "Content-length: 0\r\n"
+ "Location: "; /* not terminated since it will be concatenated with the URL */
+
+
+/* same as 302 except that the browser MUST retry with the same method */
+const char *HTTP_307 =
+ "HTTP/1.1 307 Temporary Redirect\r\n"
+ "Cache-Control: no-cache\r\n"
+ "Content-length: 0\r\n"
+ "Location: "; /* not terminated since it will be concatenated with the URL */
+
+/* same as 301 except that the browser MUST retry with the same method */
+const char *HTTP_308 =
+ "HTTP/1.1 308 Permanent Redirect\r\n"
+ "Content-length: 0\r\n"
+ "Location: "; /* not terminated since it will be concatenated with the URL */
+
+/* Warning: this one is an sprintf() fmt string, with <realm> as its only argument */
+const char *HTTP_401_fmt =
+ "HTTP/1.0 401 Unauthorized\r\n"
+ "Cache-Control: no-cache\r\n"
+ "Connection: close\r\n"
+ "Content-Type: text/html\r\n"
+ "WWW-Authenticate: Basic realm=\"%s\"\r\n"
+ "\r\n"
+ "<html><body><h1>401 Unauthorized</h1>\nYou need a valid user and password to access this content.\n</body></html>\n";
+
+const char *HTTP_407_fmt =
+ "HTTP/1.0 407 Unauthorized\r\n"
+ "Cache-Control: no-cache\r\n"
+ "Connection: close\r\n"
+ "Content-Type: text/html\r\n"
+ "Proxy-Authenticate: Basic realm=\"%s\"\r\n"
+ "\r\n"
+ "<html><body><h1>407 Unauthorized</h1>\nYou need a valid user and password to access this content.\n</body></html>\n";
+
+
+const int http_err_codes[HTTP_ERR_SIZE] = {
+ [HTTP_ERR_200] = 200, /* used by "monitor-uri" */
+ [HTTP_ERR_400] = 400,
+ [HTTP_ERR_403] = 403,
+ [HTTP_ERR_405] = 405,
+ [HTTP_ERR_408] = 408,
+ [HTTP_ERR_429] = 429,
+ [HTTP_ERR_500] = 500,
+ [HTTP_ERR_502] = 502,
+ [HTTP_ERR_503] = 503,
+ [HTTP_ERR_504] = 504,
+};
+
+static const char *http_err_msgs[HTTP_ERR_SIZE] = {
+ [HTTP_ERR_200] =
+ "HTTP/1.0 200 OK\r\n"
+ "Cache-Control: no-cache\r\n"
+ "Connection: close\r\n"
+ "Content-Type: text/html\r\n"
+ "\r\n"
+ "<html><body><h1>200 OK</h1>\nService ready.\n</body></html>\n",
+
+ [HTTP_ERR_400] =
+ "HTTP/1.0 400 Bad request\r\n"
+ "Cache-Control: no-cache\r\n"
+ "Connection: close\r\n"
+ "Content-Type: text/html\r\n"
+ "\r\n"
+ "<html><body><h1>400 Bad request</h1>\nYour browser sent an invalid request.\n</body></html>\n",
+
+ [HTTP_ERR_403] =
+ "HTTP/1.0 403 Forbidden\r\n"
+ "Cache-Control: no-cache\r\n"
+ "Connection: close\r\n"
+ "Content-Type: text/html\r\n"
+ "\r\n"
+ "<html><body><h1>403 Forbidden</h1>\nRequest forbidden by administrative rules.\n</body></html>\n",
+
+ [HTTP_ERR_405] =
+ "HTTP/1.0 405 Method Not Allowed\r\n"
+ "Cache-Control: no-cache\r\n"
+ "Connection: close\r\n"
+ "Content-Type: text/html\r\n"
+ "\r\n"
+ "<html><body><h1>405 Method Not Allowed</h1>\nA request was made of a resource using a request method not supported by that resource\n</body></html>\n",
+
+ [HTTP_ERR_408] =
+ "HTTP/1.0 408 Request Time-out\r\n"
+ "Cache-Control: no-cache\r\n"
+ "Connection: close\r\n"
+ "Content-Type: text/html\r\n"
+ "\r\n"
+ "<html><body><h1>408 Request Time-out</h1>\nYour browser didn't send a complete request in time.\n</body></html>\n",
+
+ [HTTP_ERR_429] =
+ "HTTP/1.0 429 Too Many Requests\r\n"
+ "Cache-Control: no-cache\r\n"
+ "Connection: close\r\n"
+ "Content-Type: text/html\r\n"
+ "\r\n"
+ "<html><body><h1>429 Too Many Requests</h1>\nYou have sent too many requests in a given amount of time.\n</body></html>\n",
+
+ [HTTP_ERR_500] =
+ "HTTP/1.0 500 Server Error\r\n"
+ "Cache-Control: no-cache\r\n"
+ "Connection: close\r\n"
+ "Content-Type: text/html\r\n"
+ "\r\n"
+ "<html><body><h1>500 Server Error</h1>\nAn internal server error occurred.\n</body></html>\n",
+
+ [HTTP_ERR_502] =
+ "HTTP/1.0 502 Bad Gateway\r\n"
+ "Cache-Control: no-cache\r\n"
+ "Connection: close\r\n"
+ "Content-Type: text/html\r\n"
+ "\r\n"
+ "<html><body><h1>502 Bad Gateway</h1>\nThe server returned an invalid or incomplete response.\n</body></html>\n",
+
+ [HTTP_ERR_503] =
+ "HTTP/1.0 503 Service Unavailable\r\n"
+ "Cache-Control: no-cache\r\n"
+ "Connection: close\r\n"
+ "Content-Type: text/html\r\n"
+ "\r\n"
+ "<html><body><h1>503 Service Unavailable</h1>\nNo server is available to handle this request.\n</body></html>\n",
+
+ [HTTP_ERR_504] =
+ "HTTP/1.0 504 Gateway Time-out\r\n"
+ "Cache-Control: no-cache\r\n"
+ "Connection: close\r\n"
+ "Content-Type: text/html\r\n"
+ "\r\n"
+ "<html><body><h1>504 Gateway Time-out</h1>\nThe server didn't respond in time.\n</body></html>\n",
+
+};
+
+/* status codes available for the stats admin page (strictly 4 chars long) */
+const char *stat_status_codes[STAT_STATUS_SIZE] = {
+ [STAT_STATUS_DENY] = "DENY",
+ [STAT_STATUS_DONE] = "DONE",
+ [STAT_STATUS_ERRP] = "ERRP",
+ [STAT_STATUS_EXCD] = "EXCD",
+ [STAT_STATUS_NONE] = "NONE",
+ [STAT_STATUS_PART] = "PART",
+ [STAT_STATUS_UNKN] = "UNKN",
+};
+
+
+/* List head of all known action keywords for "http-request" */
+struct action_kw_list http_req_keywords = {
+ .list = LIST_HEAD_INIT(http_req_keywords.list)
+};
+
+/* List head of all known action keywords for "http-response" */
+struct action_kw_list http_res_keywords = {
+ .list = LIST_HEAD_INIT(http_res_keywords.list)
+};
+
+/* We must put the messages here since GCC cannot initialize consts depending
+ * on strlen().
+ */
+struct chunk http_err_chunks[HTTP_ERR_SIZE];
+
+/* this struct is used between calls to smp_fetch_hdr() or smp_fetch_cookie() */
+static struct hdr_ctx static_hdr_ctx;
+
+#define FD_SETS_ARE_BITFIELDS
+#ifdef FD_SETS_ARE_BITFIELDS
+/*
+ * This map is used with all the FD_* macros to check whether a particular bit
+ * is set or not. Each bit represents an ASCII code. FD_SET() sets those bytes
+ * which should be encoded. When FD_ISSET() returns non-zero, it means that the
+ * byte should be encoded. Be careful to always pass bytes from 0 to 255
+ * exclusively to the macros.
+ */
+fd_set hdr_encode_map[(sizeof(fd_set) > (256/8)) ? 1 : ((256/8) / sizeof(fd_set))];
+fd_set url_encode_map[(sizeof(fd_set) > (256/8)) ? 1 : ((256/8) / sizeof(fd_set))];
+fd_set http_encode_map[(sizeof(fd_set) > (256/8)) ? 1 : ((256/8) / sizeof(fd_set))];
+
+#else
+#error "Check if your OS uses bitfields for fd_sets"
+#endif
+
+static int http_apply_redirect_rule(struct redirect_rule *rule, struct stream *s, struct http_txn *txn);
+
+/* This function returns the reason phrase associated with an HTTP status
+ * code. It never fails: a message is always returned.
+ */
+const char *get_reason(unsigned int status)
+{
+ switch (status) {
+ case 100: return "Continue";
+ case 101: return "Switching Protocols";
+ case 102: return "Processing";
+ case 200: return "OK";
+ case 201: return "Created";
+ case 202: return "Accepted";
+ case 203: return "Non-Authoritative Information";
+ case 204: return "No Content";
+ case 205: return "Reset Content";
+ case 206: return "Partial Content";
+ case 207: return "Multi-Status";
+ case 210: return "Content Different";
+ case 226: return "IM Used";
+ case 300: return "Multiple Choices";
+ case 301: return "Moved Permanently";
+ case 302: return "Moved Temporarily";
+ case 303: return "See Other";
+ case 304: return "Not Modified";
+ case 305: return "Use Proxy";
+ case 307: return "Temporary Redirect";
+ case 308: return "Permanent Redirect";
+ case 310: return "Too many Redirects";
+ case 400: return "Bad Request";
+ case 401: return "Unauthorized";
+ case 402: return "Payment Required";
+ case 403: return "Forbidden";
+ case 404: return "Not Found";
+ case 405: return "Method Not Allowed";
+ case 406: return "Not Acceptable";
+ case 407: return "Proxy Authentication Required";
+ case 408: return "Request Time-out";
+ case 409: return "Conflict";
+ case 410: return "Gone";
+ case 411: return "Length Required";
+ case 412: return "Precondition Failed";
+ case 413: return "Request Entity Too Large";
+ case 414: return "Request-URI Too Long";
+ case 415: return "Unsupported Media Type";
+ case 416: return "Requested range unsatisfiable";
+ case 417: return "Expectation failed";
+ case 418: return "I'm a teapot";
+ case 422: return "Unprocessable entity";
+ case 423: return "Locked";
+ case 424: return "Method failure";
+ case 425: return "Unordered Collection";
+ case 426: return "Upgrade Required";
+ case 428: return "Precondition Required";
+ case 429: return "Too Many Requests";
+ case 431: return "Request Header Fields Too Large";
+ case 449: return "Retry With";
+ case 450: return "Blocked by Windows Parental Controls";
+ case 451: return "Unavailable For Legal Reasons";
+ case 456: return "Unrecoverable Error";
+ case 499: return "client has closed connection";
+ case 500: return "Internal Server Error";
+ case 501: return "Not Implemented";
+ case 502: return "Bad Gateway or Proxy Error";
+ case 503: return "Service Unavailable";
+ case 504: return "Gateway Time-out";
+ case 505: return "HTTP Version not supported";
+ case 506: return "Variant Also Negotiates";
+ case 507: return "Insufficient storage";
+ case 508: return "Loop detected";
+ case 509: return "Bandwidth Limit Exceeded";
+ case 510: return "Not extended";
+ case 511: return "Network authentication required";
+ case 520: return "Web server is returning an unknown error";
+ default:
+ switch (status) {
+ case 100 ... 199: return "Informational";
+ case 200 ... 299: return "Success";
+ case 300 ... 399: return "Redirection";
+ case 400 ... 499: return "Client Error";
+ case 500 ... 599: return "Server Error";
+ default: return "Other";
+ }
+ }
+}
+
+void init_proto_http()
+{
+ int i;
+ char *tmp;
+ int msg;
+
+ for (msg = 0; msg < HTTP_ERR_SIZE; msg++) {
+ if (!http_err_msgs[msg]) {
+ Alert("Internal error: no message defined for HTTP return code %d. Aborting.\n", msg);
+ abort();
+ }
+
+ http_err_chunks[msg].str = (char *)http_err_msgs[msg];
+ http_err_chunks[msg].len = strlen(http_err_msgs[msg]);
+ }
+
+	/* initialize the log header encoding map: '{|}"#' should be encoded with
+	 * '#' as a prefix, as well as non-printable characters (< 32 or >= 127).
+	 * URL encoding only requires '"' and '#' to be encoded, as well as the
+	 * non-printable characters above.
+ */
+ memset(hdr_encode_map, 0, sizeof(hdr_encode_map));
+ memset(url_encode_map, 0, sizeof(url_encode_map));
+	memset(http_encode_map, 0, sizeof(http_encode_map));
+ for (i = 0; i < 32; i++) {
+ FD_SET(i, hdr_encode_map);
+ FD_SET(i, url_encode_map);
+ }
+ for (i = 127; i < 256; i++) {
+ FD_SET(i, hdr_encode_map);
+ FD_SET(i, url_encode_map);
+ }
+
+ tmp = "\"#{|}";
+ while (*tmp) {
+ FD_SET(*tmp, hdr_encode_map);
+ tmp++;
+ }
+
+ tmp = "\"#";
+ while (*tmp) {
+ FD_SET(*tmp, url_encode_map);
+ tmp++;
+ }
+
+	/* initialize the http header encoding map. The httpbis draft defines the
+ * header content as:
+ *
+ * HTTP-message = start-line
+ * *( header-field CRLF )
+ * CRLF
+ * [ message-body ]
+ * header-field = field-name ":" OWS field-value OWS
+ * field-value = *( field-content / obs-fold )
+ * field-content = field-vchar [ 1*( SP / HTAB ) field-vchar ]
+ * obs-fold = CRLF 1*( SP / HTAB )
+ * field-vchar = VCHAR / obs-text
+ * VCHAR = %x21-7E
+ * obs-text = %x80-FF
+ *
+ * All the chars are encoded except "VCHAR", "obs-text", SP and HTAB.
+	 * The encoded chars are from 0x00 to 0x08, 0x0a to 0x1f and 0x7f. The
+	 * "obs-fold" form is deliberately ignored because haproxy removes it.
+ */
+ memset(http_encode_map, 0, sizeof(http_encode_map));
+ for (i = 0x00; i <= 0x08; i++)
+ FD_SET(i, http_encode_map);
+ for (i = 0x0a; i <= 0x1f; i++)
+ FD_SET(i, http_encode_map);
+ FD_SET(0x7f, http_encode_map);
+
+ /* memory allocations */
+ pool2_http_txn = create_pool("http_txn", sizeof(struct http_txn), MEM_F_SHARED);
+ pool2_requri = create_pool("requri", REQURI_LEN, MEM_F_SHARED);
+ pool2_uniqueid = create_pool("uniqueid", UNIQUEID_LEN, MEM_F_SHARED);
+}
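The encoding maps built above use `FD_SET` as a generic 256-bit bitmap rather than for file descriptors. A standalone sketch of that trick (the `demo_*` names are ours; this assumes `FD_SETSIZE >= 256`, which holds on glibc but is not guaranteed on every platform):

```c
#include <assert.h>
#include <sys/select.h>

/* One bit per byte value: set the bit for every character that must be
 * encoded, then test membership with a single FD_ISSET. */
static fd_set demo_encode_map;

static void demo_init_map(void)
{
	const char *specials = "\"#{|}";
	int i;

	FD_ZERO(&demo_encode_map);
	for (i = 0; i < 32; i++)        /* non-printable low range */
		FD_SET(i, &demo_encode_map);
	for (i = 127; i < 256; i++)     /* DEL and the high range */
		FD_SET(i, &demo_encode_map);
	while (*specials)               /* header-specific specials */
		FD_SET((unsigned char)*specials++, &demo_encode_map);
}

static int needs_encoding(unsigned char c)
{
	return FD_ISSET(c, &demo_encode_map);
}
```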
+
+/*
+ * We have 26 lists of methods (1 per first letter), each of which can have
+ * up to 3 entries (2 valid, 1 null).
+ */
+struct http_method_desc {
+ enum http_meth_t meth;
+ int len;
+ const char text[8];
+};
+
+const struct http_method_desc http_methods[26][3] = {
+ ['C' - 'A'] = {
+ [0] = { .meth = HTTP_METH_CONNECT , .len=7, .text="CONNECT" },
+ },
+ ['D' - 'A'] = {
+ [0] = { .meth = HTTP_METH_DELETE , .len=6, .text="DELETE" },
+ },
+ ['G' - 'A'] = {
+ [0] = { .meth = HTTP_METH_GET , .len=3, .text="GET" },
+ },
+ ['H' - 'A'] = {
+ [0] = { .meth = HTTP_METH_HEAD , .len=4, .text="HEAD" },
+ },
+ ['O' - 'A'] = {
+ [0] = { .meth = HTTP_METH_OPTIONS , .len=7, .text="OPTIONS" },
+ },
+ ['P' - 'A'] = {
+ [0] = { .meth = HTTP_METH_POST , .len=4, .text="POST" },
+ [1] = { .meth = HTTP_METH_PUT , .len=3, .text="PUT" },
+ },
+ ['T' - 'A'] = {
+ [0] = { .meth = HTTP_METH_TRACE , .len=5, .text="TRACE" },
+ },
+ /* rest is empty like this :
+ * [0] = { .meth = HTTP_METH_OTHER , .len=0, .text="" },
+ */
+};
+
+const struct http_method_name http_known_methods[HTTP_METH_OTHER] = {
+ [HTTP_METH_OPTIONS] = { "OPTIONS", 7 },
+ [HTTP_METH_GET] = { "GET", 3 },
+ [HTTP_METH_HEAD] = { "HEAD", 4 },
+ [HTTP_METH_POST] = { "POST", 4 },
+ [HTTP_METH_PUT] = { "PUT", 3 },
+ [HTTP_METH_DELETE] = { "DELETE", 6 },
+ [HTTP_METH_TRACE] = { "TRACE", 5 },
+ [HTTP_METH_CONNECT] = { "CONNECT", 7 },
+};
+
+/* It is about twice as fast on recent architectures to look up a byte in a
+ * table than to perform a boolean AND or OR between two tests. Refer to
+ * RFC2616 for those chars.
+ */
+
+const char http_is_spht[256] = {
+ [' '] = 1, ['\t'] = 1,
+};
+
+const char http_is_crlf[256] = {
+ ['\r'] = 1, ['\n'] = 1,
+};
+
+const char http_is_lws[256] = {
+ [' '] = 1, ['\t'] = 1,
+ ['\r'] = 1, ['\n'] = 1,
+};
+
+const char http_is_sep[256] = {
+ ['('] = 1, [')'] = 1, ['<'] = 1, ['>'] = 1,
+ ['@'] = 1, [','] = 1, [';'] = 1, [':'] = 1,
+ ['"'] = 1, ['/'] = 1, ['['] = 1, [']'] = 1,
+ ['{'] = 1, ['}'] = 1, ['?'] = 1, ['='] = 1,
+ [' '] = 1, ['\t'] = 1, ['\\'] = 1,
+};
+
+const char http_is_ctl[256] = {
+ [0 ... 31] = 1,
+ [127] = 1,
+};
+
+/*
+ * A token is any ASCII char that is neither a separator nor a CTL char.
+ * Do not overwrite values in assignment since gcc-2.95 will not handle
+ * them correctly. Instead, define every non-CTL char's status.
+ */
+const char http_is_token[256] = {
+ [' '] = 0, ['!'] = 1, ['"'] = 0, ['#'] = 1,
+ ['$'] = 1, ['%'] = 1, ['&'] = 1, ['\''] = 1,
+ ['('] = 0, [')'] = 0, ['*'] = 1, ['+'] = 1,
+ [','] = 0, ['-'] = 1, ['.'] = 1, ['/'] = 0,
+ ['0'] = 1, ['1'] = 1, ['2'] = 1, ['3'] = 1,
+ ['4'] = 1, ['5'] = 1, ['6'] = 1, ['7'] = 1,
+ ['8'] = 1, ['9'] = 1, [':'] = 0, [';'] = 0,
+ ['<'] = 0, ['='] = 0, ['>'] = 0, ['?'] = 0,
+ ['@'] = 0, ['A'] = 1, ['B'] = 1, ['C'] = 1,
+ ['D'] = 1, ['E'] = 1, ['F'] = 1, ['G'] = 1,
+ ['H'] = 1, ['I'] = 1, ['J'] = 1, ['K'] = 1,
+ ['L'] = 1, ['M'] = 1, ['N'] = 1, ['O'] = 1,
+ ['P'] = 1, ['Q'] = 1, ['R'] = 1, ['S'] = 1,
+ ['T'] = 1, ['U'] = 1, ['V'] = 1, ['W'] = 1,
+ ['X'] = 1, ['Y'] = 1, ['Z'] = 1, ['['] = 0,
+ ['\\'] = 0, [']'] = 0, ['^'] = 1, ['_'] = 1,
+ ['`'] = 1, ['a'] = 1, ['b'] = 1, ['c'] = 1,
+ ['d'] = 1, ['e'] = 1, ['f'] = 1, ['g'] = 1,
+ ['h'] = 1, ['i'] = 1, ['j'] = 1, ['k'] = 1,
+ ['l'] = 1, ['m'] = 1, ['n'] = 1, ['o'] = 1,
+ ['p'] = 1, ['q'] = 1, ['r'] = 1, ['s'] = 1,
+ ['t'] = 1, ['u'] = 1, ['v'] = 1, ['w'] = 1,
+ ['x'] = 1, ['y'] = 1, ['z'] = 1, ['{'] = 0,
+ ['|'] = 1, ['}'] = 0, ['~'] = 1,
+};
+
+
+/*
+ * An http ver_token is any ASCII char which can be found in an HTTP version,
+ * which includes 'H', 'T', 'P', 'R', 'S', '/', '.' and any digit.
+ */
+const char http_is_ver_token[256] = {
+ ['.'] = 1, ['/'] = 1,
+ ['0'] = 1, ['1'] = 1, ['2'] = 1, ['3'] = 1, ['4'] = 1,
+ ['5'] = 1, ['6'] = 1, ['7'] = 1, ['8'] = 1, ['9'] = 1,
+ ['H'] = 1, ['P'] = 1, ['R'] = 1, ['S'] = 1, ['T'] = 1,
+};
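The comment before these tables claims a byte-indexed lookup beats a chain of boolean tests. A self-contained sketch of the pattern, checking that the two forms agree over the whole byte range (`demo_*` names are illustrative, not HAProxy's):

```c
#include <assert.h>

/* 256-entry classification table indexed by the raw byte, using C99
 * designated initializers like the tables above. */
static const char demo_is_lws[256] = {
	[' '] = 1, ['\t'] = 1, ['\r'] = 1, ['\n'] = 1,
};

static int is_lws_table(unsigned char c)
{
	return demo_is_lws[c];   /* a single memory load */
}

static int is_lws_bool(unsigned char c)
{
	/* the equivalent chain of comparisons */
	return c == ' ' || c == '\t' || c == '\r' || c == '\n';
}
```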
+
+
+/*
+ * Adds a header and its CRLF at the tail of the message's buffer, just before
+ * the last CRLF. The length of <text> is measured first, so it cannot be NULL.
+ * The header is also automatically added to the index <hdr_idx>, and the end
+ * of headers is automatically adjusted. The number of bytes added is returned
+ * on success, otherwise <0 is returned indicating an error.
+ */
+int http_header_add_tail(struct http_msg *msg, struct hdr_idx *hdr_idx, const char *text)
+{
+ int bytes, len;
+
+ len = strlen(text);
+ bytes = buffer_insert_line2(msg->chn->buf, msg->chn->buf->p + msg->eoh, text, len);
+ if (!bytes)
+ return -1;
+ http_msg_move_end(msg, bytes);
+ return hdr_idx_add(len, 1, hdr_idx, hdr_idx->tail);
+}
+
+/*
+ * Adds a header and its CRLF at the tail of the message's buffer, just before
+ * the last CRLF. <len> bytes are copied, not counting the CRLF. If <text> is NULL, then
+ * the buffer is only opened and the space reserved, but nothing is copied.
+ * The header is also automatically added to the index <hdr_idx>, and the end
+ * of headers is automatically adjusted. The number of bytes added is returned
+ * on success, otherwise <0 is returned indicating an error.
+ */
+int http_header_add_tail2(struct http_msg *msg,
+ struct hdr_idx *hdr_idx, const char *text, int len)
+{
+ int bytes;
+
+ bytes = buffer_insert_line2(msg->chn->buf, msg->chn->buf->p + msg->eoh, text, len);
+ if (!bytes)
+ return -1;
+ http_msg_move_end(msg, bytes);
+ return hdr_idx_add(len, 1, hdr_idx, hdr_idx->tail);
+}
+
+/*
+ * Checks if <hdr> is exactly <name> for <len> chars, and ends with a colon.
+ * If so, returns the position of the first non-space character relative to
+ * <hdr>, or <end>-<hdr> if not found before. If no value is found, it tries
+ * to return a pointer to the place after the first space. Returns 0 if the
+ * header name does not match. Checks are case-insensitive.
+ */
+int http_header_match2(const char *hdr, const char *end,
+ const char *name, int len)
+{
+ const char *val;
+
+ if (hdr + len >= end)
+ return 0;
+ if (hdr[len] != ':')
+ return 0;
+ if (strncasecmp(hdr, name, len) != 0)
+ return 0;
+ val = hdr + len + 1;
+ while (val < end && HTTP_IS_SPHT(*val))
+ val++;
+ if ((val >= end) && (len + 2 <= end - hdr))
+ return len + 2; /* we may replace starting from second space */
+ return val - hdr;
+}
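The return-value convention above (0 on mismatch, otherwise an offset into the header line) is easy to misread, so here is a self-contained copy of the same logic exercised on a literal header (`demo_header_match` is an illustrative name, not the HAProxy symbol):

```c
#include <assert.h>
#include <string.h>
#include <strings.h>

/* Returns 0 if <hdr> does not start with <name> followed by ':', otherwise
 * the offset of the first non-space value character (or a fallback offset
 * past the colon when the value is empty). Matching is case-insensitive. */
static int demo_header_match(const char *hdr, const char *end,
                             const char *name, int len)
{
	const char *val;

	if (hdr + len >= end)
		return 0;
	if (hdr[len] != ':')
		return 0;
	if (strncasecmp(hdr, name, len) != 0)
		return 0;
	val = hdr + len + 1;
	while (val < end && (*val == ' ' || *val == '\t'))
		val++;
	if (val >= end && len + 2 <= end - hdr)
		return len + 2; /* we may replace starting from second space */
	return val - hdr;
}
```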
+
+/* Find the first or next occurrence of header <name> in message buffer <sol>
+ * using headers index <idx>, and return it in the <ctx> structure. This
+ * structure holds everything necessary to use the header and find next
+ * occurrence. If its <idx> member is 0, the header is searched from the
+ * beginning. Otherwise, the next occurrence is returned. The function returns
+ * 1 when it finds a value, and 0 when there is no more. It is very similar to
+ * http_find_header2() except that it is designed to work with full-line headers
+ * whose comma is not a delimiter but is part of the syntax. As a special case,
+ * if ctx->val is NULL when searching for a new value of a header, the current
+ * header is rescanned. This allows rescanning after a header deletion.
+ */
+int http_find_full_header2(const char *name, int len,
+ char *sol, struct hdr_idx *idx,
+ struct hdr_ctx *ctx)
+{
+ char *eol, *sov;
+ int cur_idx, old_idx;
+
+ cur_idx = ctx->idx;
+ if (cur_idx) {
+ /* We have previously returned a header, let's search another one */
+ sol = ctx->line;
+ eol = sol + idx->v[cur_idx].len;
+ goto next_hdr;
+ }
+
+ /* first request for this header */
+ sol += hdr_idx_first_pos(idx);
+ old_idx = 0;
+ cur_idx = hdr_idx_first_idx(idx);
+ while (cur_idx) {
+ eol = sol + idx->v[cur_idx].len;
+
+ if (len == 0) {
+			/* No argument was passed, we want any header.
+			 * To achieve this, we simply make <name> and <len>
+			 * match the current header. */
+ while (sol + len < eol && sol[len] != ':')
+ len++;
+ name = sol;
+ }
+
+ if ((len < eol - sol) &&
+ (sol[len] == ':') &&
+ (strncasecmp(sol, name, len) == 0)) {
+ ctx->del = len;
+ sov = sol + len + 1;
+ while (sov < eol && http_is_lws[(unsigned char)*sov])
+ sov++;
+
+ ctx->line = sol;
+ ctx->prev = old_idx;
+ ctx->idx = cur_idx;
+ ctx->val = sov - sol;
+ ctx->tws = 0;
+ while (eol > sov && http_is_lws[(unsigned char)*(eol - 1)]) {
+ eol--;
+ ctx->tws++;
+ }
+ ctx->vlen = eol - sov;
+ return 1;
+ }
+ next_hdr:
+ sol = eol + idx->v[cur_idx].cr + 1;
+ old_idx = cur_idx;
+ cur_idx = idx->v[cur_idx].next;
+ }
+ return 0;
+}
+
+/* Find the first or next header field in message buffer <sol> using headers
+ * index <idx>, and return it in the <ctx> structure. This structure holds
+ * everything necessary to use the header and find next occurrence. If its
+ * <idx> member is 0, the first header is retrieved. Otherwise, the next
+ * occurrence is returned. The function returns 1 when it finds a value, and
+ * 0 when there is no more. It is equivalent to http_find_full_header2() with
+ * no header name.
+ */
+int http_find_next_header(char *sol, struct hdr_idx *idx, struct hdr_ctx *ctx)
+{
+ char *eol, *sov;
+ int cur_idx, old_idx;
+ int len;
+
+ cur_idx = ctx->idx;
+ if (cur_idx) {
+ /* We have previously returned a header, let's search another one */
+ sol = ctx->line;
+ eol = sol + idx->v[cur_idx].len;
+ goto next_hdr;
+ }
+
+ /* first request for this header */
+ sol += hdr_idx_first_pos(idx);
+ old_idx = 0;
+ cur_idx = hdr_idx_first_idx(idx);
+ while (cur_idx) {
+ eol = sol + idx->v[cur_idx].len;
+
+ len = 0;
+ while (1) {
+ if (len >= eol - sol)
+ goto next_hdr;
+ if (sol[len] == ':')
+ break;
+ len++;
+ }
+
+ ctx->del = len;
+ sov = sol + len + 1;
+ while (sov < eol && http_is_lws[(unsigned char)*sov])
+ sov++;
+
+ ctx->line = sol;
+ ctx->prev = old_idx;
+ ctx->idx = cur_idx;
+ ctx->val = sov - sol;
+ ctx->tws = 0;
+
+ while (eol > sov && http_is_lws[(unsigned char)*(eol - 1)]) {
+ eol--;
+ ctx->tws++;
+ }
+ ctx->vlen = eol - sov;
+ return 1;
+
+ next_hdr:
+ sol = eol + idx->v[cur_idx].cr + 1;
+ old_idx = cur_idx;
+ cur_idx = idx->v[cur_idx].next;
+ }
+ return 0;
+}
+
+/* Find the end of the header value contained between <s> and <e>. See RFC2616,
+ * par 2.2 for more information. Note that it requires a valid header to return
+ * a valid result. This works for headers defined as comma-separated lists.
+ */
+char *find_hdr_value_end(char *s, const char *e)
+{
+ int quoted, qdpair;
+
+ quoted = qdpair = 0;
+ for (; s < e; s++) {
+ if (qdpair) qdpair = 0;
+ else if (quoted) {
+ if (*s == '\\') qdpair = 1;
+ else if (*s == '"') quoted = 0;
+ }
+ else if (*s == '"') quoted = 1;
+ else if (*s == ',') return s;
+ }
+ return s;
+}
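The quoted-string handling in `find_hdr_value_end()` is the subtle part: a comma inside double quotes, or after a backslash escape within them, must not end the value. A self-contained copy exercising both cases (`demo_value_end` is an illustrative name):

```c
#include <assert.h>
#include <string.h>

/* Scan [s, e) for the comma ending a header value, skipping commas that
 * appear inside quoted strings or as part of a quoted-pair. */
static const char *demo_value_end(const char *s, const char *e)
{
	int quoted = 0, qdpair = 0;

	for (; s < e; s++) {
		if (qdpair)
			qdpair = 0;         /* second byte of a quoted-pair */
		else if (quoted) {
			if (*s == '\\')
				qdpair = 1; /* start of a quoted-pair */
			else if (*s == '"')
				quoted = 0; /* closing quote */
		}
		else if (*s == '"')
			quoted = 1;         /* opening quote */
		else if (*s == ',')
			return s;           /* unquoted comma ends the value */
	}
	return s;
}
```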
+
+/* Find the first or next occurrence of header <name> in message buffer <sol>
+ * using headers index <idx>, and return it in the <ctx> structure. This
+ * structure holds everything necessary to use the header and find next
+ * occurrence. If its <idx> member is 0, the header is searched from the
+ * beginning. Otherwise, the next occurrence is returned. The function returns
+ * 1 when it finds a value, and 0 when there is no more. It is designed to work
+ * with headers defined as comma-separated lists. As a special case, if ctx->val
+ * is NULL when searching for a new value of a header, the current header is
+ * rescanned. This allows rescanning after a header deletion.
+ */
+int http_find_header2(const char *name, int len,
+ char *sol, struct hdr_idx *idx,
+ struct hdr_ctx *ctx)
+{
+ char *eol, *sov;
+ int cur_idx, old_idx;
+
+ cur_idx = ctx->idx;
+ if (cur_idx) {
+ /* We have previously returned a value, let's search
+ * another one on the same line.
+ */
+ sol = ctx->line;
+ ctx->del = ctx->val + ctx->vlen + ctx->tws;
+ sov = sol + ctx->del;
+ eol = sol + idx->v[cur_idx].len;
+
+ if (sov >= eol)
+ /* no more values in this header */
+ goto next_hdr;
+
+ /* values remaining for this header, skip the comma but save it
+ * for later use (eg: for header deletion).
+ */
+ sov++;
+ while (sov < eol && http_is_lws[(unsigned char)*sov])
+ sov++;
+
+ goto return_hdr;
+ }
+
+ /* first request for this header */
+ sol += hdr_idx_first_pos(idx);
+ old_idx = 0;
+ cur_idx = hdr_idx_first_idx(idx);
+ while (cur_idx) {
+ eol = sol + idx->v[cur_idx].len;
+
+ if (len == 0) {
+			/* No argument was passed, we want any header.
+			 * To achieve this, we simply make <name> and <len>
+			 * match the current header. */
+ while (sol + len < eol && sol[len] != ':')
+ len++;
+ name = sol;
+ }
+
+ if ((len < eol - sol) &&
+ (sol[len] == ':') &&
+ (strncasecmp(sol, name, len) == 0)) {
+ ctx->del = len;
+ sov = sol + len + 1;
+ while (sov < eol && http_is_lws[(unsigned char)*sov])
+ sov++;
+
+ ctx->line = sol;
+ ctx->prev = old_idx;
+ return_hdr:
+ ctx->idx = cur_idx;
+ ctx->val = sov - sol;
+
+ eol = find_hdr_value_end(sov, eol);
+ ctx->tws = 0;
+ while (eol > sov && http_is_lws[(unsigned char)*(eol - 1)]) {
+ eol--;
+ ctx->tws++;
+ }
+ ctx->vlen = eol - sov;
+ return 1;
+ }
+ next_hdr:
+ sol = eol + idx->v[cur_idx].cr + 1;
+ old_idx = cur_idx;
+ cur_idx = idx->v[cur_idx].next;
+ }
+ return 0;
+}
+
+int http_find_header(const char *name,
+ char *sol, struct hdr_idx *idx,
+ struct hdr_ctx *ctx)
+{
+ return http_find_header2(name, strlen(name), sol, idx, ctx);
+}
+
+/* Remove one value of a header. This only works on a <ctx> returned by one of
+ * the http_find_header functions. The value is removed, as well as surrounding
+ * commas if any. If the removed value was alone, the whole header is removed.
+ * The ctx is always updated accordingly, as well as the buffer and HTTP
+ * message <msg>. The new index is returned. If it is zero, it means there is
+ * no more header, so any processing may stop. The ctx is always left in a form
+ * that can be handled by http_find_header2() to find next occurrence.
+ */
+int http_remove_header2(struct http_msg *msg, struct hdr_idx *idx, struct hdr_ctx *ctx)
+{
+ int cur_idx = ctx->idx;
+ char *sol = ctx->line;
+ struct hdr_idx_elem *hdr;
+ int delta, skip_comma;
+
+ if (!cur_idx)
+ return 0;
+
+ hdr = &idx->v[cur_idx];
+ if (sol[ctx->del] == ':' && ctx->val + ctx->vlen + ctx->tws == hdr->len) {
+ /* This was the only value of the header, we must now remove it entirely. */
+ delta = buffer_replace2(msg->chn->buf, sol, sol + hdr->len + hdr->cr + 1, NULL, 0);
+ http_msg_move_end(msg, delta);
+ idx->used--;
+ hdr->len = 0; /* unused entry */
+ idx->v[ctx->prev].next = idx->v[ctx->idx].next;
+ if (idx->tail == ctx->idx)
+ idx->tail = ctx->prev;
+ ctx->idx = ctx->prev; /* walk back to the end of previous header */
+ ctx->line -= idx->v[ctx->idx].len + idx->v[ctx->idx].cr + 1;
+ ctx->val = idx->v[ctx->idx].len; /* point to end of previous header */
+ ctx->tws = ctx->vlen = 0;
+ return ctx->idx;
+ }
+
+ /* This was not the only value of this header. We have to remove between
+ * ctx->del+1 and ctx->val+ctx->vlen+ctx->tws+1 included. If it is the
+ * last entry of the list, we remove the last separator.
+ */
+
+ skip_comma = (ctx->val + ctx->vlen + ctx->tws == hdr->len) ? 0 : 1;
+ delta = buffer_replace2(msg->chn->buf, sol + ctx->del + skip_comma,
+ sol + ctx->val + ctx->vlen + ctx->tws + skip_comma,
+ NULL, 0);
+ hdr->len += delta;
+ http_msg_move_end(msg, delta);
+ ctx->val = ctx->del;
+ ctx->tws = ctx->vlen = 0;
+ return ctx->idx;
+}
+
+/* This function handles a server error at the stream interface level. The
+ * stream interface is assumed to be already in a closed state. An optional
+ * message is copied into the input buffer, and an HTTP status code stored.
+ * The error flags are set to the values in arguments. Any pending request
+ * in this buffer will be lost.
+ */
+static void http_server_error(struct stream *s, struct stream_interface *si,
+ int err, int finst, int status, const struct chunk *msg)
+{
+ channel_auto_read(si_oc(si));
+ channel_abort(si_oc(si));
+ channel_auto_close(si_oc(si));
+ channel_erase(si_oc(si));
+ channel_auto_close(si_ic(si));
+ channel_auto_read(si_ic(si));
+ if (status > 0 && msg) {
+ s->txn->status = status;
+ bo_inject(si_ic(si), msg->str, msg->len);
+ }
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= err;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= finst;
+}
+
+/* This function returns the appropriate error location for the given stream
+ * and message.
+ */
+
+struct chunk *http_error_message(struct stream *s, int msgnum)
+{
+ if (s->be->errmsg[msgnum].str)
+ return &s->be->errmsg[msgnum];
+ else if (strm_fe(s)->errmsg[msgnum].str)
+ return &strm_fe(s)->errmsg[msgnum];
+ else
+ return &http_err_chunks[msgnum];
+}
+
+/*
+ * returns a known method among HTTP_METH_* or HTTP_METH_OTHER for all unknown
+ * ones.
+ */
+enum http_meth_t find_http_meth(const char *str, const int len)
+{
+ unsigned char m;
+ const struct http_method_desc *h;
+
+ m = ((unsigned)*str - 'A');
+
+ if (m < 26) {
+ for (h = http_methods[m]; h->len > 0; h++) {
+ if (unlikely(h->len != len))
+ continue;
+ if (likely(memcmp(str, h->text, h->len) == 0))
+ return h->meth;
+ };
+ }
+ return HTTP_METH_OTHER;
+}
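`find_http_meth()` dispatches on the first letter into the 26-bucket table defined earlier, then compares length before bytes. A reduced self-contained sketch of that technique (all `demo_*` names and the two-method table are illustrative):

```c
#include <assert.h>
#include <string.h>

enum demo_meth { DEMO_GET, DEMO_POST, DEMO_PUT, DEMO_OTHER };

struct demo_method_desc {
	enum demo_meth meth;
	int len;
	const char text[8];
};

/* One bucket per first letter; each bucket is terminated by a
 * zero-filled entry whose len == 0. */
static const struct demo_method_desc demo_methods[26][3] = {
	['G' - 'A'] = {
		{ DEMO_GET,  3, "GET" },
	},
	['P' - 'A'] = {
		{ DEMO_POST, 4, "POST" },
		{ DEMO_PUT,  3, "PUT" },
	},
	/* remaining buckets stay zero-filled */
};

static enum demo_meth demo_find_meth(const char *str, int len)
{
	unsigned char m = (unsigned char)(*str - 'A');
	const struct demo_method_desc *h;

	if (m < 26)
		for (h = demo_methods[m]; h->len > 0; h++)
			if (h->len == len && memcmp(str, h->text, len) == 0)
				return h->meth;
	return DEMO_OTHER;
}
```

Comparing the length first makes the common mismatch case a single integer test, which is why the table stores `len` alongside the text.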
+
+/* Parse the URI from the given transaction (which is assumed to be in request
+ * phase) and look for the "/" beginning the PATH. If not found, NULL is
+ * returned. Otherwise a pointer to the "/" is returned.
+ */
+char *http_get_path(struct http_txn *txn)
+{
+ char *ptr, *end;
+
+ ptr = txn->req.chn->buf->p + txn->req.sl.rq.u;
+ end = ptr + txn->req.sl.rq.u_l;
+
+ if (ptr >= end)
+ return NULL;
+
+ /* RFC2616, par. 5.1.2 :
+ * Request-URI = "*" | absuri | abspath | authority
+ */
+
+ if (*ptr == '*')
+ return NULL;
+
+ if (isalpha((unsigned char)*ptr)) {
+ /* this is a scheme as described by RFC3986, par. 3.1 */
+ ptr++;
+ while (ptr < end &&
+ (isalnum((unsigned char)*ptr) || *ptr == '+' || *ptr == '-' || *ptr == '.'))
+ ptr++;
+ /* skip '://' */
+ if (ptr == end || *ptr++ != ':')
+ return NULL;
+ if (ptr == end || *ptr++ != '/')
+ return NULL;
+ if (ptr == end || *ptr++ != '/')
+ return NULL;
+ }
+ /* skip [user[:passwd]@]host[:[port]] */
+
+ while (ptr < end && *ptr != '/')
+ ptr++;
+
+ if (ptr == end)
+ return NULL;
+
+ /* OK, we got the '/' ! */
+ return ptr;
+}
+
+/* Parse the URI from the given string and look for the "/" beginning the PATH.
+ * If not found, NULL is returned. Otherwise a pointer to the "/" is returned.
+ */
+static char *
+http_get_path_from_string(char *str)
+{
+ char *ptr = str;
+
+ /* RFC2616, par. 5.1.2 :
+ * Request-URI = "*" | absuri | abspath | authority
+ */
+
+ if (*ptr == '*')
+ return NULL;
+
+ if (isalpha((unsigned char)*ptr)) {
+ /* this is a scheme as described by RFC3986, par. 3.1 */
+ ptr++;
+ while (isalnum((unsigned char)*ptr) || *ptr == '+' || *ptr == '-' || *ptr == '.')
+ ptr++;
+ /* skip '://' */
+ if (*ptr == '\0' || *ptr++ != ':')
+ return NULL;
+ if (*ptr == '\0' || *ptr++ != '/')
+ return NULL;
+ if (*ptr == '\0' || *ptr++ != '/')
+ return NULL;
+ }
+ /* skip [user[:passwd]@]host[:[port]] */
+
+ while (*ptr != '\0' && *ptr != ' ' && *ptr != '/')
+ ptr++;
+
+ if (*ptr == '\0' || *ptr == ' ')
+ return NULL;
+
+ /* OK, we got the '/' ! */
+ return ptr;
+}
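The two path-lookup functions above share the same shape: reject `*`, skip an optional `scheme://`, skip the authority, and return the first `/`. A self-contained version of that logic on NUL-terminated strings (`demo_get_path` is an illustrative name, not the static helper above):

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Return a pointer to the "/" beginning the path of <ptr>, or NULL when
 * the URI has no path ("*" form, or an absolute URI with no path). */
static const char *demo_get_path(const char *ptr)
{
	if (*ptr == '*')          /* "*" form has no path */
		return NULL;

	if (isalpha((unsigned char)*ptr)) {
		/* absolute URI: skip the scheme (RFC 3986, par. 3.1) */
		ptr++;
		while (isalnum((unsigned char)*ptr) || *ptr == '+' ||
		       *ptr == '-' || *ptr == '.')
			ptr++;
		/* expect "://" */
		if (*ptr++ != ':' || *ptr++ != '/' || *ptr++ != '/')
			return NULL;
	}

	/* skip [user[:passwd]@]host[:port] */
	while (*ptr != '\0' && *ptr != ' ' && *ptr != '/')
		ptr++;

	if (*ptr == '\0' || *ptr == ' ')
		return NULL;      /* authority only, no path component */
	return ptr;
}
```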
+
+/* Returns a 302 for a redirectable request that reaches a server working
+ * in redirect mode. This may only be called just after the stream interface
+ * has moved to SI_ST_ASS. Unprocessable requests are left unchanged and will
+ * follow normal proxy processing. NOTE: this function is designed to support
+ * being called once data are scheduled for forwarding.
+ */
+void http_perform_server_redirect(struct stream *s, struct stream_interface *si)
+{
+ struct http_txn *txn;
+ struct server *srv;
+ char *path;
+ int len, rewind;
+
+ /* 1: create the response header */
+ trash.len = strlen(HTTP_302);
+ memcpy(trash.str, HTTP_302, trash.len);
+
+ srv = objt_server(s->target);
+
+ /* 2: add the server's prefix */
+ if (trash.len + srv->rdr_len > trash.size)
+ return;
+
+ /* special prefix "/" means don't change URL */
+ if (srv->rdr_len != 1 || *srv->rdr_pfx != '/') {
+ memcpy(trash.str + trash.len, srv->rdr_pfx, srv->rdr_len);
+ trash.len += srv->rdr_len;
+ }
+
+ /* 3: add the request URI. Since it was already forwarded, we need
+ * to temporarily rewind the buffer.
+ */
+ txn = s->txn;
+ b_rew(s->req.buf, rewind = http_hdr_rewind(&txn->req));
+
+ path = http_get_path(txn);
+ len = buffer_count(s->req.buf, path, b_ptr(s->req.buf, txn->req.sl.rq.u + txn->req.sl.rq.u_l));
+
+ b_adv(s->req.buf, rewind);
+
+ if (!path)
+ return;
+
+ if (trash.len + len > trash.size - 4) /* 4 for CRLF-CRLF */
+ return;
+
+ memcpy(trash.str + trash.len, path, len);
+ trash.len += len;
+
+ if (unlikely(txn->flags & TX_USE_PX_CONN)) {
+ memcpy(trash.str + trash.len, "\r\nProxy-Connection: close\r\n\r\n", 29);
+ trash.len += 29;
+ } else {
+ memcpy(trash.str + trash.len, "\r\nConnection: close\r\n\r\n", 23);
+ trash.len += 23;
+ }
+
+ /* prepare to return without error. */
+ si_shutr(si);
+ si_shutw(si);
+ si->err_type = SI_ET_NONE;
+ si->state = SI_ST_CLO;
+
+ /* send the message */
+ http_server_error(s, si, SF_ERR_LOCAL, SF_FINST_C, 302, &trash);
+
+ /* FIXME: we should increase a counter of redirects per server and per backend. */
+ srv_inc_sess_ctr(srv);
+ srv_set_sess_last(srv);
+}
+
+/* Return the error message corresponding to si->err_type. It is assumed
+ * that the server side is closed. Note that err_type is actually a
+ * bitmask, where almost only aborts may be cumulated with other
+ * values. We consider that aborted operations are more important
+ * than timeouts or errors due to the fact that nobody else in the
+ * logs might explain incomplete retries. All others should avoid
+ * being cumulated. It should normally not be possible to have multiple
+ * aborts at once, but just in case, the first one in sequence is reported.
+ * Note that connection errors appearing on the second request of a keep-alive
+ * connection are not reported since this allows the client to retry.
+ */
+void http_return_srv_error(struct stream *s, struct stream_interface *si)
+{
+ int err_type = si->err_type;
+
+ if (err_type & SI_ET_QUEUE_ABRT)
+ http_server_error(s, si, SF_ERR_CLICL, SF_FINST_Q,
+ 503, http_error_message(s, HTTP_ERR_503));
+ else if (err_type & SI_ET_CONN_ABRT)
+ http_server_error(s, si, SF_ERR_CLICL, SF_FINST_C,
+ 503, (s->txn->flags & TX_NOT_FIRST) ? NULL :
+ http_error_message(s, HTTP_ERR_503));
+ else if (err_type & SI_ET_QUEUE_TO)
+ http_server_error(s, si, SF_ERR_SRVTO, SF_FINST_Q,
+ 503, http_error_message(s, HTTP_ERR_503));
+ else if (err_type & SI_ET_QUEUE_ERR)
+ http_server_error(s, si, SF_ERR_SRVCL, SF_FINST_Q,
+ 503, http_error_message(s, HTTP_ERR_503));
+ else if (err_type & SI_ET_CONN_TO)
+ http_server_error(s, si, SF_ERR_SRVTO, SF_FINST_C,
+ 503, (s->txn->flags & TX_NOT_FIRST) ? NULL :
+ http_error_message(s, HTTP_ERR_503));
+ else if (err_type & SI_ET_CONN_ERR)
+ http_server_error(s, si, SF_ERR_SRVCL, SF_FINST_C,
+ 503, (s->flags & SF_SRV_REUSED) ? NULL :
+ http_error_message(s, HTTP_ERR_503));
+ else if (err_type & SI_ET_CONN_RES)
+ http_server_error(s, si, SF_ERR_RESOURCE, SF_FINST_C,
+ 503, (s->txn->flags & TX_NOT_FIRST) ? NULL :
+ http_error_message(s, HTTP_ERR_503));
+ else /* SI_ET_CONN_OTHER and others */
+ http_server_error(s, si, SF_ERR_INTERNAL, SF_FINST_C,
+ 500, http_error_message(s, HTTP_ERR_500));
+}
+
+extern const char sess_term_cond[8];
+extern const char sess_fin_state[8];
+extern const char *monthname[12];
+struct pool_head *pool2_http_txn;
+struct pool_head *pool2_requri;
+struct pool_head *pool2_capture = NULL;
+struct pool_head *pool2_uniqueid;
+
+/*
+ * Capture headers from message starting at <som> according to header list
+ * <cap_hdr>, and fill the <cap> pointers appropriately.
+ */
+void capture_headers(char *som, struct hdr_idx *idx,
+ char **cap, struct cap_hdr *cap_hdr)
+{
+ char *eol, *sol, *col, *sov;
+ int cur_idx;
+ struct cap_hdr *h;
+ int len;
+
+ sol = som + hdr_idx_first_pos(idx);
+ cur_idx = hdr_idx_first_idx(idx);
+
+ while (cur_idx) {
+ eol = sol + idx->v[cur_idx].len;
+
+ col = sol;
+ while (col < eol && *col != ':')
+ col++;
+
+ sov = col + 1;
+ while (sov < eol && http_is_lws[(unsigned char)*sov])
+ sov++;
+
+ for (h = cap_hdr; h; h = h->next) {
+ if (h->namelen && (h->namelen == col - sol) &&
+ (strncasecmp(sol, h->name, h->namelen) == 0)) {
+ if (cap[h->index] == NULL)
+ cap[h->index] =
+ pool_alloc2(h->pool);
+
+ if (cap[h->index] == NULL) {
+ Alert("HTTP capture : out of memory.\n");
+ continue;
+ }
+
+ len = eol - sov;
+ if (len > h->len)
+ len = h->len;
+
+ memcpy(cap[h->index], sov, len);
+ cap[h->index][len]=0;
+ }
+ }
+ sol = eol + idx->v[cur_idx].cr + 1;
+ cur_idx = idx->v[cur_idx].next;
+ }
+}
+
+
+/* either we find an LF at <ptr> or we jump to <bad>.
+ */
+#define EXPECT_LF_HERE(ptr, bad) do { if (unlikely(*(ptr) != '\n')) goto bad; } while (0)
+
+/* plays with variables <ptr>, <end> and <state>. Jumps to <good> if OK,
+ * otherwise to <http_msg_ood> with <state> set to <st>.
+ */
+#define EAT_AND_JUMP_OR_RETURN(good, st) do { \
+ ptr++; \
+ if (likely(ptr < end)) \
+ goto good; \
+ else { \
+ state = (st); \
+ goto http_msg_ood; \
+ } \
+ } while (0)
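`EAT_AND_JUMP_OR_RETURN` implements a resumable parser: when the buffer runs dry mid-token, the current state is saved so a later call continues exactly where parsing stopped. A toy self-contained illustration of that pattern, without the macros (all names are ours, not HAProxy's):

```c
#include <assert.h>

/* Minimal resumable parser: accumulate a decimal number across
 * arbitrarily split input buffers. */
enum demo_state { DEMO_ST_NUM, DEMO_ST_DONE };

struct demo_parser {
	enum demo_state state; /* where to resume on the next call */
	int value;             /* partial result kept across calls */
};

/* Consume bytes from [ptr, end): digits extend the number, any other byte
 * finishes it. Returns the number of bytes consumed; when the buffer is
 * exhausted mid-number, state stays DEMO_ST_NUM so parsing can resume. */
static int demo_parse(struct demo_parser *p, const char *ptr, const char *end)
{
	const char *start = ptr;

	while (ptr < end && p->state == DEMO_ST_NUM) {
		if (*ptr >= '0' && *ptr <= '9')
			p->value = p->value * 10 + (*ptr++ - '0');
		else
			p->state = DEMO_ST_DONE;
	}
	return ptr - start;
}
```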
+
+
+/*
+ * This function parses a status line between <ptr> and <end>, starting with
+ * parser state <state>. Only states HTTP_MSG_RPVER, HTTP_MSG_RPVER_SP,
+ * HTTP_MSG_RPCODE, HTTP_MSG_RPCODE_SP and HTTP_MSG_RPREASON are handled. Others
+ * will give undefined results.
+ * Note that it is the caller's responsibility to ensure that ptr < end,
+ * and that msg->sol points to the beginning of the response.
+ * If a complete line is found (which implies that at least one CR or LF is
+ * found before <end>), the updated <ptr> is returned, otherwise NULL is
+ * returned indicating an incomplete line (which does not mean that parts have
+ * not been updated). In the incomplete case, if <ret_ptr> or <ret_state> are
+ * non-NULL, they are fed with the new <ptr> and <state> values to be passed
+ * upon next call.
+ *
+ * This function was intentionally designed to be called from
+ * http_msg_analyzer() with the lowest overhead. It should integrate perfectly
+ * within its state machine and use the same macros, hence the need for same
+ * labels and variable names. Note that msg->sol is left unchanged.
+ */
+const char *http_parse_stsline(struct http_msg *msg,
+ enum ht_state state, const char *ptr, const char *end,
+ unsigned int *ret_ptr, enum ht_state *ret_state)
+{
+ const char *msg_start = msg->chn->buf->p;
+
+ switch (state) {
+ case HTTP_MSG_RPVER:
+ http_msg_rpver:
+ if (likely(HTTP_IS_VER_TOKEN(*ptr)))
+ EAT_AND_JUMP_OR_RETURN(http_msg_rpver, HTTP_MSG_RPVER);
+
+ if (likely(HTTP_IS_SPHT(*ptr))) {
+ msg->sl.st.v_l = ptr - msg_start;
+ EAT_AND_JUMP_OR_RETURN(http_msg_rpver_sp, HTTP_MSG_RPVER_SP);
+ }
+ state = HTTP_MSG_ERROR;
+ break;
+
+ case HTTP_MSG_RPVER_SP:
+ http_msg_rpver_sp:
+ if (likely(!HTTP_IS_LWS(*ptr))) {
+ msg->sl.st.c = ptr - msg_start;
+ goto http_msg_rpcode;
+ }
+ if (likely(HTTP_IS_SPHT(*ptr)))
+ EAT_AND_JUMP_OR_RETURN(http_msg_rpver_sp, HTTP_MSG_RPVER_SP);
+ /* so it's a CR/LF, this is invalid */
+ state = HTTP_MSG_ERROR;
+ break;
+
+ case HTTP_MSG_RPCODE:
+ http_msg_rpcode:
+ if (likely(!HTTP_IS_LWS(*ptr)))
+ EAT_AND_JUMP_OR_RETURN(http_msg_rpcode, HTTP_MSG_RPCODE);
+
+ if (likely(HTTP_IS_SPHT(*ptr))) {
+ msg->sl.st.c_l = ptr - msg_start - msg->sl.st.c;
+ EAT_AND_JUMP_OR_RETURN(http_msg_rpcode_sp, HTTP_MSG_RPCODE_SP);
+ }
+
+ /* so it's a CR/LF, so there is no reason phrase */
+ msg->sl.st.c_l = ptr - msg_start - msg->sl.st.c;
+ http_msg_rsp_reason:
+		/* FIXME: should we support HTTP responses without any reason phrase? */
+ msg->sl.st.r = ptr - msg_start;
+ msg->sl.st.r_l = 0;
+ goto http_msg_rpline_eol;
+
+ case HTTP_MSG_RPCODE_SP:
+ http_msg_rpcode_sp:
+ if (likely(!HTTP_IS_LWS(*ptr))) {
+ msg->sl.st.r = ptr - msg_start;
+ goto http_msg_rpreason;
+ }
+ if (likely(HTTP_IS_SPHT(*ptr)))
+ EAT_AND_JUMP_OR_RETURN(http_msg_rpcode_sp, HTTP_MSG_RPCODE_SP);
+ /* so it's a CR/LF, so there is no reason phrase */
+ goto http_msg_rsp_reason;
+
+ case HTTP_MSG_RPREASON:
+ http_msg_rpreason:
+ if (likely(!HTTP_IS_CRLF(*ptr)))
+ EAT_AND_JUMP_OR_RETURN(http_msg_rpreason, HTTP_MSG_RPREASON);
+ msg->sl.st.r_l = ptr - msg_start - msg->sl.st.r;
+ http_msg_rpline_eol:
+ /* We have seen the end of line. Note that we do not
+ * necessarily have the \n yet, but at least we know that we
+ * have EITHER \r OR \n, otherwise the response would not be
+ * complete. We can then record the response length and return
+ * to the caller which will be able to register it.
+ */
+ msg->sl.st.l = ptr - msg_start - msg->sol;
+ return ptr;
+
+ default:
+#ifdef DEBUG_FULL
+ fprintf(stderr, "FIXME !!!! impossible state at %s:%d = %d\n", __FILE__, __LINE__, state);
+ exit(1);
+#endif
+ ;
+ }
+
+ http_msg_ood:
+ /* out of valid data */
+ if (ret_state)
+ *ret_state = state;
+ if (ret_ptr)
+ *ret_ptr = ptr - msg_start;
+ return NULL;
+}
+
+/*
+ * This function parses a request line between <ptr> and <end>, starting with
+ * parser state <state>. Only states HTTP_MSG_RQMETH, HTTP_MSG_RQMETH_SP,
+ * HTTP_MSG_RQURI, HTTP_MSG_RQURI_SP and HTTP_MSG_RQVER are handled. Others
+ * will give undefined results.
+ * Note that it is the caller's responsibility to ensure that ptr < end,
+ * and that msg->sol points to the beginning of the request.
+ * If a complete line is found (which implies that at least one CR or LF is
+ * found before <end>), the updated <ptr> is returned, otherwise NULL is
+ * returned indicating an incomplete line (which does not mean that parts have
+ * not been updated). In the incomplete case, if <ret_ptr> or <ret_state> are
+ * non-NULL, they are fed with the new <ptr> and <state> values to be passed
+ * upon next call.
+ *
+ * This function was intentionally designed to be called from
+ * http_msg_analyzer() with the lowest overhead. It should integrate perfectly
+ * within its state machine and use the same macros, hence the need for same
+ * labels and variable names. Note that msg->sol is left unchanged.
+ */
+const char *http_parse_reqline(struct http_msg *msg,
+ enum ht_state state, const char *ptr, const char *end,
+ unsigned int *ret_ptr, enum ht_state *ret_state)
+{
+ const char *msg_start = msg->chn->buf->p;
+
+ switch (state) {
+ case HTTP_MSG_RQMETH:
+ http_msg_rqmeth:
+ if (likely(HTTP_IS_TOKEN(*ptr)))
+ EAT_AND_JUMP_OR_RETURN(http_msg_rqmeth, HTTP_MSG_RQMETH);
+
+ if (likely(HTTP_IS_SPHT(*ptr))) {
+ msg->sl.rq.m_l = ptr - msg_start;
+ EAT_AND_JUMP_OR_RETURN(http_msg_rqmeth_sp, HTTP_MSG_RQMETH_SP);
+ }
+
+ if (likely(HTTP_IS_CRLF(*ptr))) {
+ /* HTTP 0.9 request */
+ msg->sl.rq.m_l = ptr - msg_start;
+ http_msg_req09_uri:
+ msg->sl.rq.u = ptr - msg_start;
+ http_msg_req09_uri_e:
+ msg->sl.rq.u_l = ptr - msg_start - msg->sl.rq.u;
+ http_msg_req09_ver:
+ msg->sl.rq.v = ptr - msg_start;
+ msg->sl.rq.v_l = 0;
+ goto http_msg_rqline_eol;
+ }
+ state = HTTP_MSG_ERROR;
+ break;
+
+ case HTTP_MSG_RQMETH_SP:
+ http_msg_rqmeth_sp:
+ if (likely(!HTTP_IS_LWS(*ptr))) {
+ msg->sl.rq.u = ptr - msg_start;
+ goto http_msg_rquri;
+ }
+ if (likely(HTTP_IS_SPHT(*ptr)))
+ EAT_AND_JUMP_OR_RETURN(http_msg_rqmeth_sp, HTTP_MSG_RQMETH_SP);
+ /* so it's a CR/LF, meaning an HTTP 0.9 request */
+ goto http_msg_req09_uri;
+
+ case HTTP_MSG_RQURI:
+ http_msg_rquri:
+ if (likely((unsigned char)(*ptr - 33) <= 93)) /* 33 to 126 included */
+ EAT_AND_JUMP_OR_RETURN(http_msg_rquri, HTTP_MSG_RQURI);
+
+ if (likely(HTTP_IS_SPHT(*ptr))) {
+ msg->sl.rq.u_l = ptr - msg_start - msg->sl.rq.u;
+ EAT_AND_JUMP_OR_RETURN(http_msg_rquri_sp, HTTP_MSG_RQURI_SP);
+ }
+
+ if (likely((unsigned char)*ptr >= 128)) {
+ /* non-ASCII chars are forbidden unless option
+ * accept-invalid-http-request is enabled in the frontend.
+ * In any case, we capture the faulty char.
+ */
+ if (msg->err_pos < -1)
+ goto invalid_char;
+ if (msg->err_pos == -1)
+ msg->err_pos = ptr - msg_start;
+ EAT_AND_JUMP_OR_RETURN(http_msg_rquri, HTTP_MSG_RQURI);
+ }
+
+ if (likely(HTTP_IS_CRLF(*ptr))) {
+ /* so it's a CR/LF, meaning an HTTP 0.9 request */
+ goto http_msg_req09_uri_e;
+ }
+
+ /* OK forbidden chars, 0..31 or 127 */
+ invalid_char:
+ msg->err_pos = ptr - msg_start;
+ state = HTTP_MSG_ERROR;
+ break;
+
+ case HTTP_MSG_RQURI_SP:
+ http_msg_rquri_sp:
+ if (likely(!HTTP_IS_LWS(*ptr))) {
+ msg->sl.rq.v = ptr - msg_start;
+ goto http_msg_rqver;
+ }
+ if (likely(HTTP_IS_SPHT(*ptr)))
+ EAT_AND_JUMP_OR_RETURN(http_msg_rquri_sp, HTTP_MSG_RQURI_SP);
+ /* so it's a CR/LF, meaning an HTTP 0.9 request */
+ goto http_msg_req09_ver;
+
+ case HTTP_MSG_RQVER:
+ http_msg_rqver:
+ if (likely(HTTP_IS_VER_TOKEN(*ptr)))
+ EAT_AND_JUMP_OR_RETURN(http_msg_rqver, HTTP_MSG_RQVER);
+
+ if (likely(HTTP_IS_CRLF(*ptr))) {
+ msg->sl.rq.v_l = ptr - msg_start - msg->sl.rq.v;
+ http_msg_rqline_eol:
+ /* We have seen the end of line. Note that we do not
+ * necessarily have the \n yet, but at least we know that we
+ * have EITHER \r OR \n, otherwise the request would not be
+ * complete. We can then record the request length and return
+ * to the caller which will be able to register it.
+ */
+ msg->sl.rq.l = ptr - msg_start - msg->sol;
+ return ptr;
+ }
+
+ /* neither an HTTP_VER token nor a CRLF */
+ state = HTTP_MSG_ERROR;
+ break;
+
+ default:
+#ifdef DEBUG_FULL
+ fprintf(stderr, "FIXME !!!! impossible state at %s:%d = %d\n", __FILE__, __LINE__, state);
+ exit(1);
+#endif
+ ;
+ }
+
+ http_msg_ood:
+ /* out of valid data */
+ if (ret_state)
+ *ret_state = state;
+ if (ret_ptr)
+ *ret_ptr = ptr - msg_start;
+ return NULL;
+}
+
+/*
+ * Returns the data from Authorization header. Function may be called more
+ * than once so data is stored in txn->auth_data. When no header is found
+ * or the auth method is unknown, auth_method is set to HTTP_AUTH_WRONG to
+ * avoid searching again for something we are unable to find anyway. However,
+ * if the result is valid, the cache is not reused because we would risk
+ * having the credentials overwritten by another stream in parallel.
+ */
+
+/* This buffer is initialized in the file 'src/haproxy.c'. Its length is
+ * set according to global.tune.bufsize.
+ */
+char *get_http_auth_buff;
+
+int
+get_http_auth(struct stream *s)
+{
+
+ struct http_txn *txn = s->txn;
+ struct chunk auth_method;
+ struct hdr_ctx ctx;
+ char *h, *p;
+ int len;
+
+#ifdef DEBUG_AUTH
+ printf("Auth for stream %p: %d\n", s, txn->auth.method);
+#endif
+
+ if (txn->auth.method == HTTP_AUTH_WRONG)
+ return 0;
+
+ txn->auth.method = HTTP_AUTH_WRONG;
+
+ ctx.idx = 0;
+
+ if (txn->flags & TX_USE_PX_CONN) {
+ h = "Proxy-Authorization";
+ len = strlen(h);
+ } else {
+ h = "Authorization";
+ len = strlen(h);
+ }
+
+ if (!http_find_header2(h, len, s->req.buf->p, &txn->hdr_idx, &ctx))
+ return 0;
+
+ h = ctx.line + ctx.val;
+
+ p = memchr(h, ' ', ctx.vlen);
+ if (!p || p == h)
+ return 0;
+
+ chunk_initlen(&auth_method, h, 0, p-h);
+ chunk_initlen(&txn->auth.method_data, p+1, 0, ctx.vlen-(p-h)-1);
+
+ if (!strncasecmp("Basic", auth_method.str, auth_method.len)) {
+
+ len = base64dec(txn->auth.method_data.str, txn->auth.method_data.len,
+ get_http_auth_buff, global.tune.bufsize - 1);
+
+ if (len < 0)
+ return 0;
+
+
+ get_http_auth_buff[len] = '\0';
+
+ p = strchr(get_http_auth_buff, ':');
+
+ if (!p)
+ return 0;
+
+ txn->auth.user = get_http_auth_buff;
+ *p = '\0';
+ txn->auth.pass = p+1;
+
+ txn->auth.method = HTTP_AUTH_BASIC;
+ return 1;
+ }
+
+ return 0;
+}
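As a standalone illustration of the credential split performed above (after base64dec() has produced a "user:password" string), here is a minimal sketch. The helper name is hypothetical and not part of HAProxy:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical helper: split an already-decoded "user:password"
 * credential in place, as the code above does after base64dec().
 * Returns 1 on success, 0 if no colon separator is present.
 */
static int split_basic_credentials(char *decoded, char **user, char **pass)
{
	char *p = strchr(decoded, ':');

	if (!p)
		return 0;
	*p = '\0';          /* terminate the user part in place */
	*user = decoded;
	*pass = p + 1;
	return 1;
}
```

Like the real code, this modifies the decoded buffer in place, which is why the parsed credentials cannot be cached across streams.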
+
+
+/*
+ * This function parses an HTTP message, either a request or a response,
+ * depending on the initial msg->msg_state. The caller is responsible for
+ * ensuring that the message does not wrap. The function can be preempted
+ * everywhere when data are missing and recalled at the exact same location
+ * with no information loss. The message may even be realigned between two
+ * calls. The header index is re-initialized when switching from
+ * MSG_R[PQ]BEFORE to MSG_RPVER|MSG_RQMETH. It modifies msg->sol among other
+ * fields. Note that msg->sol will be initialized after completing the first
+ * state, so that none of the msg pointers has to be initialized prior to the
+ * first call.
+ */
+void http_msg_analyzer(struct http_msg *msg, struct hdr_idx *idx)
+{
+ enum ht_state state; /* updated only when leaving the FSM */
+ register char *ptr, *end; /* request pointers, to avoid dereferences */
+ struct buffer *buf;
+
+ state = msg->msg_state;
+ buf = msg->chn->buf;
+ ptr = buf->p + msg->next;
+ end = buf->p + buf->i;
+
+ if (unlikely(ptr >= end))
+ goto http_msg_ood;
+
+ switch (state) {
+ /*
+ * First, states that are specific to the response only.
+ * We check them first so that request and headers are
+ * closer to each other (accessed more often).
+ */
+ case HTTP_MSG_RPBEFORE:
+ http_msg_rpbefore:
+ if (likely(HTTP_IS_TOKEN(*ptr))) {
+ /* we have a start of message, but we have to check
+ * first if we need to remove some CRLF. We can only
+ * do this when o=0.
+ */
+ if (unlikely(ptr != buf->p)) {
+ if (buf->o)
+ goto http_msg_ood;
+ /* Remove empty leading lines, as recommended by RFC2616. */
+ bi_fast_delete(buf, ptr - buf->p);
+ }
+ msg->sol = 0;
+ msg->sl.st.l = 0; /* used in debug mode */
+ hdr_idx_init(idx);
+ state = HTTP_MSG_RPVER;
+ goto http_msg_rpver;
+ }
+
+ if (unlikely(!HTTP_IS_CRLF(*ptr)))
+ goto http_msg_invalid;
+
+ if (unlikely(*ptr == '\n'))
+ EAT_AND_JUMP_OR_RETURN(http_msg_rpbefore, HTTP_MSG_RPBEFORE);
+ EAT_AND_JUMP_OR_RETURN(http_msg_rpbefore_cr, HTTP_MSG_RPBEFORE_CR);
+ /* stop here */
+
+ case HTTP_MSG_RPBEFORE_CR:
+ http_msg_rpbefore_cr:
+ EXPECT_LF_HERE(ptr, http_msg_invalid);
+ EAT_AND_JUMP_OR_RETURN(http_msg_rpbefore, HTTP_MSG_RPBEFORE);
+ /* stop here */
+
+ case HTTP_MSG_RPVER:
+ http_msg_rpver:
+ case HTTP_MSG_RPVER_SP:
+ case HTTP_MSG_RPCODE:
+ case HTTP_MSG_RPCODE_SP:
+ case HTTP_MSG_RPREASON:
+ ptr = (char *)http_parse_stsline(msg,
+ state, ptr, end,
+ &msg->next, &msg->msg_state);
+ if (unlikely(!ptr))
+ return;
+
+ /* we have a full response and we know that we have either a CR
+ * or an LF at <ptr>.
+ */
+ hdr_idx_set_start(idx, msg->sl.st.l, *ptr == '\r');
+
+ msg->sol = ptr - buf->p;
+ if (likely(*ptr == '\r'))
+ EAT_AND_JUMP_OR_RETURN(http_msg_rpline_end, HTTP_MSG_RPLINE_END);
+ goto http_msg_rpline_end;
+
+ case HTTP_MSG_RPLINE_END:
+ http_msg_rpline_end:
+ /* msg->sol must point to the first of CR or LF. */
+ EXPECT_LF_HERE(ptr, http_msg_invalid);
+ EAT_AND_JUMP_OR_RETURN(http_msg_hdr_first, HTTP_MSG_HDR_FIRST);
+ /* stop here */
+
+ /*
+ * Second, states that are specific to the request only
+ */
+ case HTTP_MSG_RQBEFORE:
+ http_msg_rqbefore:
+ if (likely(HTTP_IS_TOKEN(*ptr))) {
+ /* we have a start of message, but we have to check
+ * first if we need to remove some CRLF. We can only
+ * do this when o=0.
+ */
+ if (likely(ptr != buf->p)) {
+ if (buf->o)
+ goto http_msg_ood;
+ /* Remove empty leading lines, as recommended by RFC2616. */
+ bi_fast_delete(buf, ptr - buf->p);
+ }
+ msg->sol = 0;
+ msg->sl.rq.l = 0; /* used in debug mode */
+ state = HTTP_MSG_RQMETH;
+ goto http_msg_rqmeth;
+ }
+
+ if (unlikely(!HTTP_IS_CRLF(*ptr)))
+ goto http_msg_invalid;
+
+ if (unlikely(*ptr == '\n'))
+ EAT_AND_JUMP_OR_RETURN(http_msg_rqbefore, HTTP_MSG_RQBEFORE);
+ EAT_AND_JUMP_OR_RETURN(http_msg_rqbefore_cr, HTTP_MSG_RQBEFORE_CR);
+ /* stop here */
+
+ case HTTP_MSG_RQBEFORE_CR:
+ http_msg_rqbefore_cr:
+ EXPECT_LF_HERE(ptr, http_msg_invalid);
+ EAT_AND_JUMP_OR_RETURN(http_msg_rqbefore, HTTP_MSG_RQBEFORE);
+ /* stop here */
+
+ case HTTP_MSG_RQMETH:
+ http_msg_rqmeth:
+ case HTTP_MSG_RQMETH_SP:
+ case HTTP_MSG_RQURI:
+ case HTTP_MSG_RQURI_SP:
+ case HTTP_MSG_RQVER:
+ ptr = (char *)http_parse_reqline(msg,
+ state, ptr, end,
+ &msg->next, &msg->msg_state);
+ if (unlikely(!ptr))
+ return;
+
+ /* we have a full request and we know that we have either a CR
+ * or an LF at <ptr>.
+ */
+ hdr_idx_set_start(idx, msg->sl.rq.l, *ptr == '\r');
+
+ msg->sol = ptr - buf->p;
+ if (likely(*ptr == '\r'))
+ EAT_AND_JUMP_OR_RETURN(http_msg_rqline_end, HTTP_MSG_RQLINE_END);
+ goto http_msg_rqline_end;
+
+ case HTTP_MSG_RQLINE_END:
+ http_msg_rqline_end:
+ /* check for HTTP/0.9 request : no version information available.
+ * msg->sol must point to the first of CR or LF.
+ */
+ if (unlikely(msg->sl.rq.v_l == 0))
+ goto http_msg_last_lf;
+
+ EXPECT_LF_HERE(ptr, http_msg_invalid);
+ EAT_AND_JUMP_OR_RETURN(http_msg_hdr_first, HTTP_MSG_HDR_FIRST);
+ /* stop here */
+
+ /*
+ * Common states below
+ */
+ case HTTP_MSG_HDR_FIRST:
+ http_msg_hdr_first:
+ msg->sol = ptr - buf->p;
+ if (likely(!HTTP_IS_CRLF(*ptr))) {
+ goto http_msg_hdr_name;
+ }
+
+ if (likely(*ptr == '\r'))
+ EAT_AND_JUMP_OR_RETURN(http_msg_last_lf, HTTP_MSG_LAST_LF);
+ goto http_msg_last_lf;
+
+ case HTTP_MSG_HDR_NAME:
+ http_msg_hdr_name:
+ /* assumes msg->sol points to the first char */
+ if (likely(HTTP_IS_TOKEN(*ptr)))
+ EAT_AND_JUMP_OR_RETURN(http_msg_hdr_name, HTTP_MSG_HDR_NAME);
+
+ if (likely(*ptr == ':'))
+ EAT_AND_JUMP_OR_RETURN(http_msg_hdr_l1_sp, HTTP_MSG_HDR_L1_SP);
+
+ if (likely(msg->err_pos < -1) || *ptr == '\n')
+ goto http_msg_invalid;
+
+ if (msg->err_pos == -1) /* capture error pointer */
+ msg->err_pos = ptr - buf->p; /* >= 0 now */
+
+ /* and we still accept this non-token character */
+ EAT_AND_JUMP_OR_RETURN(http_msg_hdr_name, HTTP_MSG_HDR_NAME);
+
+ case HTTP_MSG_HDR_L1_SP:
+ http_msg_hdr_l1_sp:
+ /* assumes msg->sol points to the first char */
+ if (likely(HTTP_IS_SPHT(*ptr)))
+ EAT_AND_JUMP_OR_RETURN(http_msg_hdr_l1_sp, HTTP_MSG_HDR_L1_SP);
+
+ /* header value can be basically anything except CR/LF */
+ msg->sov = ptr - buf->p;
+
+ if (likely(!HTTP_IS_CRLF(*ptr))) {
+ goto http_msg_hdr_val;
+ }
+
+ if (likely(*ptr == '\r'))
+ EAT_AND_JUMP_OR_RETURN(http_msg_hdr_l1_lf, HTTP_MSG_HDR_L1_LF);
+ goto http_msg_hdr_l1_lf;
+
+ case HTTP_MSG_HDR_L1_LF:
+ http_msg_hdr_l1_lf:
+ EXPECT_LF_HERE(ptr, http_msg_invalid);
+ EAT_AND_JUMP_OR_RETURN(http_msg_hdr_l1_lws, HTTP_MSG_HDR_L1_LWS);
+
+ case HTTP_MSG_HDR_L1_LWS:
+ http_msg_hdr_l1_lws:
+ if (likely(HTTP_IS_SPHT(*ptr))) {
+ /* replace HT,CR,LF with spaces */
+ for (; buf->p + msg->sov < ptr; msg->sov++)
+ buf->p[msg->sov] = ' ';
+ goto http_msg_hdr_l1_sp;
+ }
+ /* we had a header consisting only of spaces ! */
+ msg->eol = msg->sov;
+ goto http_msg_complete_header;
+
+ case HTTP_MSG_HDR_VAL:
+ http_msg_hdr_val:
+ /* assumes msg->sol points to the first char, and msg->sov
+ * points to the first character of the value.
+ */
+ if (likely(!HTTP_IS_CRLF(*ptr)))
+ EAT_AND_JUMP_OR_RETURN(http_msg_hdr_val, HTTP_MSG_HDR_VAL);
+
+ msg->eol = ptr - buf->p;
+ /* Note: we could also copy eol into ->eoh so that we have the
+ * real header end in case it ends with lots of LWS, but is this
+ * really needed ?
+ */
+ if (likely(*ptr == '\r'))
+ EAT_AND_JUMP_OR_RETURN(http_msg_hdr_l2_lf, HTTP_MSG_HDR_L2_LF);
+ goto http_msg_hdr_l2_lf;
+
+ case HTTP_MSG_HDR_L2_LF:
+ http_msg_hdr_l2_lf:
+ EXPECT_LF_HERE(ptr, http_msg_invalid);
+ EAT_AND_JUMP_OR_RETURN(http_msg_hdr_l2_lws, HTTP_MSG_HDR_L2_LWS);
+
+ case HTTP_MSG_HDR_L2_LWS:
+ http_msg_hdr_l2_lws:
+ if (unlikely(HTTP_IS_SPHT(*ptr))) {
+ /* LWS: replace HT,CR,LF with spaces */
+ for (; buf->p + msg->eol < ptr; msg->eol++)
+ buf->p[msg->eol] = ' ';
+ goto http_msg_hdr_val;
+ }
+ http_msg_complete_header:
+ /*
+ * It was a new header, so the last one is finished.
+ * Assumes msg->sol points to the first char, msg->sov points
+ * to the first character of the value and msg->eol to the
+ * first CR or LF so we know how the line ends. We insert last
+ * header into the index.
+ */
+ if (unlikely(hdr_idx_add(msg->eol - msg->sol, buf->p[msg->eol] == '\r',
+ idx, idx->tail) < 0))
+ goto http_msg_invalid;
+
+ msg->sol = ptr - buf->p;
+ if (likely(!HTTP_IS_CRLF(*ptr))) {
+ goto http_msg_hdr_name;
+ }
+
+ if (likely(*ptr == '\r'))
+ EAT_AND_JUMP_OR_RETURN(http_msg_last_lf, HTTP_MSG_LAST_LF);
+ goto http_msg_last_lf;
+
+ case HTTP_MSG_LAST_LF:
+ http_msg_last_lf:
+ /* Assumes msg->sol points to the first of either CR or LF.
+ * Sets ->sov and ->next to the total header length, ->eoh to
+ * the last CRLF, and ->eol to the last CRLF length (1 or 2).
+ */
+ EXPECT_LF_HERE(ptr, http_msg_invalid);
+ ptr++;
+ msg->sov = msg->next = ptr - buf->p;
+ msg->eoh = msg->sol;
+ msg->sol = 0;
+ msg->eol = msg->sov - msg->eoh;
+ msg->msg_state = HTTP_MSG_BODY;
+ return;
+
+ case HTTP_MSG_ERROR:
+ /* this may only happen if we call http_msg_analyzer() twice with an error */
+ break;
+
+ default:
+#ifdef DEBUG_FULL
+ fprintf(stderr, "FIXME !!!! impossible state at %s:%d = %d\n", __FILE__, __LINE__, state);
+ exit(1);
+#endif
+ ;
+ }
+ http_msg_ood:
+ /* out of data */
+ msg->msg_state = state;
+ msg->next = ptr - buf->p;
+ return;
+
+ http_msg_invalid:
+ /* invalid message */
+ msg->msg_state = HTTP_MSG_ERROR;
+ msg->next = ptr - buf->p;
+ return;
+}
+
+/* convert an HTTP/0.9 request into an HTTP/1.0 request. Returns 1 if the
+ * conversion succeeded, 0 in case of error. If the request was already 1.X,
+ * nothing is done and 1 is returned.
+ */
+static int http_upgrade_v09_to_v10(struct http_txn *txn)
+{
+ int delta;
+ char *cur_end;
+ struct http_msg *msg = &txn->req;
+
+ if (msg->sl.rq.v_l != 0)
+ return 1;
+
+ /* RFC 1945 allows only GET for HTTP/0.9 requests */
+ if (txn->meth != HTTP_METH_GET)
+ return 0;
+
+ cur_end = msg->chn->buf->p + msg->sl.rq.l;
+ delta = 0;
+
+ if (msg->sl.rq.u_l == 0) {
+ /* HTTP/0.9 requests *must* have a request URI, per RFC 1945 */
+ return 0;
+ }
+ /* add HTTP version */
+ delta = buffer_replace2(msg->chn->buf, cur_end, cur_end, " HTTP/1.0\r\n", 11);
+ http_msg_move_end(msg, delta);
+ cur_end += delta;
+ cur_end = (char *)http_parse_reqline(msg,
+ HTTP_MSG_RQMETH,
+ msg->chn->buf->p, cur_end + 1,
+ NULL, NULL);
+ if (unlikely(!cur_end))
+ return 0;
+
+ /* we have a full HTTP/1.0 request now and we know that
+ * we have either a CR or an LF at <ptr>.
+ */
+ hdr_idx_set_start(&txn->hdr_idx, msg->sl.rq.l, *cur_end == '\r');
+ return 1;
+}
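The upgrade above can be sketched in isolation: append " HTTP/1.0" to a bare HTTP/0.9 request line. This is a simplified, hypothetical helper; the real function also adjusts buffer pointers, re-parses the request line and re-indexes the headers:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch of the HTTP/0.9 -> HTTP/1.0 upgrade: append the
 * version token to a bare request line. RFC 1945 only allows GET for
 * HTTP/0.9, so anything else is rejected. Returns 1 on success.
 */
static int upgrade_v09_line(char *line, size_t cap)
{
	static const char ver[] = " HTTP/1.0";

	if (strncmp(line, "GET ", 4) != 0)
		return 0;
	if (strlen(line) + sizeof(ver) > cap)  /* sizeof counts the '\0' */
		return 0;
	strcat(line, ver);
	return 1;
}
```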
+
+/* Parse the Connection: header of an HTTP request, looking for both "close"
+ * and "keep-alive" values. If we already know that some headers may safely
+ * be removed, we remove them now. The <to_del> flags are used for that :
+ * - bit 0 means remove "close" headers (in HTTP/1.0 requests/responses)
+ * - bit 1 means remove "keep-alive" headers (in HTTP/1.1 reqs/resp to 1.1).
+ * Presence of the "Upgrade" token is also checked and reported.
+ * The TX_HDR_CONN_* flags are adjusted in txn->flags depending on what was
+ * found, and TX_CON_*_SET is adjusted depending on what is left so only
+ * harmless combinations may be removed. Do not call this function after
+ * changes have been processed.
+ */
+void http_parse_connection_header(struct http_txn *txn, struct http_msg *msg, int to_del)
+{
+ struct hdr_ctx ctx;
+ const char *hdr_val = "Connection";
+ int hdr_len = 10;
+
+ if (txn->flags & TX_HDR_CONN_PRS)
+ return;
+
+ if (unlikely(txn->flags & TX_USE_PX_CONN)) {
+ hdr_val = "Proxy-Connection";
+ hdr_len = 16;
+ }
+
+ ctx.idx = 0;
+ txn->flags &= ~(TX_CON_KAL_SET|TX_CON_CLO_SET);
+ while (http_find_header2(hdr_val, hdr_len, msg->chn->buf->p, &txn->hdr_idx, &ctx)) {
+ if (ctx.vlen >= 10 && word_match(ctx.line + ctx.val, ctx.vlen, "keep-alive", 10)) {
+ txn->flags |= TX_HDR_CONN_KAL;
+ if (to_del & 2)
+ http_remove_header2(msg, &txn->hdr_idx, &ctx);
+ else
+ txn->flags |= TX_CON_KAL_SET;
+ }
+ else if (ctx.vlen >= 5 && word_match(ctx.line + ctx.val, ctx.vlen, "close", 5)) {
+ txn->flags |= TX_HDR_CONN_CLO;
+ if (to_del & 1)
+ http_remove_header2(msg, &txn->hdr_idx, &ctx);
+ else
+ txn->flags |= TX_CON_CLO_SET;
+ }
+ else if (ctx.vlen >= 7 && word_match(ctx.line + ctx.val, ctx.vlen, "upgrade", 7)) {
+ txn->flags |= TX_HDR_CONN_UPG;
+ }
+ }
+
+ txn->flags |= TX_HDR_CONN_PRS;
+ return;
+}
+
+/* Apply desired changes on the Connection: header. Values may be removed and/or
+ * added depending on the <wanted> flags, which are exclusively composed of
+ * TX_CON_CLO_SET and TX_CON_KAL_SET, depending on what flags are desired. The
+ * TX_CON_*_SET flags are adjusted in txn->flags depending on what is left.
+ */
+void http_change_connection_header(struct http_txn *txn, struct http_msg *msg, int wanted)
+{
+ struct hdr_ctx ctx;
+ const char *hdr_val = "Connection";
+ int hdr_len = 10;
+
+ ctx.idx = 0;
+
+
+ if (unlikely(txn->flags & TX_USE_PX_CONN)) {
+ hdr_val = "Proxy-Connection";
+ hdr_len = 16;
+ }
+
+ txn->flags &= ~(TX_CON_CLO_SET | TX_CON_KAL_SET);
+ while (http_find_header2(hdr_val, hdr_len, msg->chn->buf->p, &txn->hdr_idx, &ctx)) {
+ if (ctx.vlen >= 10 && word_match(ctx.line + ctx.val, ctx.vlen, "keep-alive", 10)) {
+ if (wanted & TX_CON_KAL_SET)
+ txn->flags |= TX_CON_KAL_SET;
+ else
+ http_remove_header2(msg, &txn->hdr_idx, &ctx);
+ }
+ else if (ctx.vlen >= 5 && word_match(ctx.line + ctx.val, ctx.vlen, "close", 5)) {
+ if (wanted & TX_CON_CLO_SET)
+ txn->flags |= TX_CON_CLO_SET;
+ else
+ http_remove_header2(msg, &txn->hdr_idx, &ctx);
+ }
+ }
+
+ if (wanted == (txn->flags & (TX_CON_CLO_SET|TX_CON_KAL_SET)))
+ return;
+
+ if ((wanted & TX_CON_CLO_SET) && !(txn->flags & TX_CON_CLO_SET)) {
+ txn->flags |= TX_CON_CLO_SET;
+ hdr_val = "Connection: close";
+ hdr_len = 17;
+ if (unlikely(txn->flags & TX_USE_PX_CONN)) {
+ hdr_val = "Proxy-Connection: close";
+ hdr_len = 23;
+ }
+ http_header_add_tail2(msg, &txn->hdr_idx, hdr_val, hdr_len);
+ }
+
+ if ((wanted & TX_CON_KAL_SET) && !(txn->flags & TX_CON_KAL_SET)) {
+ txn->flags |= TX_CON_KAL_SET;
+ hdr_val = "Connection: keep-alive";
+ hdr_len = 22;
+ if (unlikely(txn->flags & TX_USE_PX_CONN)) {
+ hdr_val = "Proxy-Connection: keep-alive";
+ hdr_len = 28;
+ }
+ http_header_add_tail2(msg, &txn->hdr_idx, hdr_val, hdr_len);
+ }
+ return;
+}
+
+/* Parse the chunk size at msg->next. Once done, it adjusts ->next to point to
+ * the first byte of data after the chunk size, so that we know we can forward
+ * exactly msg->next bytes. msg->sol contains the exact number of bytes forming
+ * the chunk size. That way it is always possible to differentiate between the
+ * start of the body and the start of the data.
+ * Return >0 on success, 0 when some data is missing, <0 on error.
+ * Note: this function is designed to parse wrapped CRLF at the end of the buffer.
+ */
+static inline int http_parse_chunk_size(struct http_msg *msg)
+{
+ const struct buffer *buf = msg->chn->buf;
+ const char *ptr = b_ptr(buf, msg->next);
+ const char *ptr_old = ptr;
+ const char *end = buf->data + buf->size;
+ const char *stop = bi_end(buf);
+ unsigned int chunk = 0;
+
+ /* The chunk size is in the following form, though we are only
+ * interested in the size and CRLF :
+ * 1*HEXDIGIT *WSP *[ ';' extensions ] CRLF
+ */
+ while (1) {
+ int c;
+ if (ptr == stop)
+ return 0;
+ c = hex2i(*ptr);
+ if (c < 0) /* not a hex digit anymore */
+ break;
+ if (unlikely(++ptr >= end))
+ ptr = buf->data;
+ if (chunk & 0xF8000000) /* integer overflow will occur if result >= 2GB */
+ goto error;
+ chunk = (chunk << 4) + c;
+ }
+
+ /* empty size not allowed */
+ if (unlikely(ptr == ptr_old))
+ goto error;
+
+ while (http_is_spht[(unsigned char)*ptr]) {
+ if (++ptr >= end)
+ ptr = buf->data;
+ if (unlikely(ptr == stop))
+ return 0;
+ }
+
+ /* Up to there, we know that at least one byte is present at *ptr. Check
+ * for the end of chunk size.
+ */
+ while (1) {
+ if (likely(HTTP_IS_CRLF(*ptr))) {
+ /* we now have a CR or an LF at ptr */
+ if (likely(*ptr == '\r')) {
+ if (++ptr >= end)
+ ptr = buf->data;
+ if (ptr == stop)
+ return 0;
+ }
+
+ if (*ptr != '\n')
+ goto error;
+ if (++ptr >= end)
+ ptr = buf->data;
+ /* done */
+ break;
+ }
+ else if (*ptr == ';') {
+ /* chunk extension, ends at next CRLF */
+ if (++ptr >= end)
+ ptr = buf->data;
+ if (ptr == stop)
+ return 0;
+
+ while (!HTTP_IS_CRLF(*ptr)) {
+ if (++ptr >= end)
+ ptr = buf->data;
+ if (ptr == stop)
+ return 0;
+ }
+ /* we have a CRLF now, loop above */
+ continue;
+ }
+ else
+ goto error;
+ }
+
+ /* OK we found our CRLF and now <ptr> points to the next byte,
+ * which may or may not be present. We save that into ->next,
+ * and the number of bytes parsed into msg->sol.
+ */
+ msg->sol = ptr - ptr_old;
+ if (unlikely(ptr < ptr_old))
+ msg->sol += buf->size;
+ msg->next = buffer_count(buf, buf->p, ptr);
+ msg->chunk_len = chunk;
+ msg->body_len += chunk;
+ msg->msg_state = chunk ? HTTP_MSG_DATA : HTTP_MSG_TRAILERS;
+ return 1;
+ error:
+ msg->err_pos = buffer_count(buf, buf->p, ptr);
+ return -1;
+}
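For illustration, the chunk-size grammar handled above can be reduced to a linear-buffer sketch. This is a hypothetical simplification, not HAProxy's parser: it does not handle buffer wrapping or LF-only line endings, and it treats incomplete input as an error:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified chunk-size parser for a linear buffer:
 *   1*HEXDIGIT *WSP *[ ';' extensions ] CRLF
 * Returns the chunk size on success (with *consumed set to the number
 * of bytes eaten), or -1 on malformed or incomplete input.
 */
static long parse_chunk_size(const char *buf, size_t len, size_t *consumed)
{
	size_t i = 0;
	unsigned long chunk = 0;
	int digits = 0;

	/* 1*HEXDIGIT */
	for (; i < len; i++) {
		int c = buf[i], v;

		if (c >= '0' && c <= '9') v = c - '0';
		else if (c >= 'a' && c <= 'f') v = c - 'a' + 10;
		else if (c >= 'A' && c <= 'F') v = c - 'A' + 10;
		else break;
		if (chunk & 0xF8000000UL) /* result would reach 2GB */
			return -1;
		chunk = (chunk << 4) + (unsigned long)v;
		digits++;
	}
	if (!digits) /* empty size not allowed */
		return -1;

	/* skip optional whitespace and extension up to the CR */
	while (i < len && buf[i] != '\r')
		i++;
	if (i + 1 >= len || buf[i + 1] != '\n')
		return -1; /* incomplete or malformed line */
	*consumed = i + 2;
	return (long)chunk;
}
```

The `0xF8000000` test mirrors the overflow guard in the real parser: if any of the top five bits is already set, the next shift-by-4 would produce a size of 2GB or more.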
+
+/* This function skips trailers in the buffer associated with HTTP
+ * message <msg>. The first visited position is msg->next. If the end of
+ * the trailers is found, it is automatically scheduled to be forwarded,
+ * msg->msg_state switches to HTTP_MSG_DONE, and the function returns >0.
+ * If not enough data are available, the function does not change anything
+ * except maybe msg->next if it could parse some lines, and returns zero.
+ * If a parse error is encountered, the function returns < 0 and does not
+ * change anything except maybe msg->next. Note that the message must
+ * already be in HTTP_MSG_TRAILERS state before calling this function,
+ * which implies that all non-trailers data have already been scheduled for
+ * forwarding, and that msg->next exactly matches the length of trailers
+ * already parsed and not forwarded. It is also important to note that this
+ * function is designed to be able to parse wrapped headers at end of buffer.
+ */
+static int http_forward_trailers(struct http_msg *msg)
+{
+ const struct buffer *buf = msg->chn->buf;
+
+ /* we have msg->next which points to next line. Look for CRLF. */
+ while (1) {
+ const char *p1 = NULL, *p2 = NULL;
+ const char *ptr = b_ptr(buf, msg->next);
+ const char *stop = bi_end(buf);
+ int bytes;
+
+ /* scan current line and stop at LF or CRLF */
+ while (1) {
+ if (ptr == stop)
+ return 0;
+
+ if (*ptr == '\n') {
+ if (!p1)
+ p1 = ptr;
+ p2 = ptr;
+ break;
+ }
+
+ if (*ptr == '\r') {
+ if (p1) {
+ msg->err_pos = buffer_count(buf, buf->p, ptr);
+ return -1;
+ }
+ p1 = ptr;
+ }
+
+ ptr++;
+ if (ptr >= buf->data + buf->size)
+ ptr = buf->data;
+ }
+
+ /* after LF; point to beginning of next line */
+ p2++;
+ if (p2 >= buf->data + buf->size)
+ p2 = buf->data;
+
+ bytes = p2 - b_ptr(buf, msg->next);
+ if (bytes < 0)
+ bytes += buf->size;
+
+ if (p1 == b_ptr(buf, msg->next)) {
+ /* LF/CRLF at beginning of line => end of trailers at p2.
+ * Everything was scheduled for forwarding, there's nothing
+ * left from this message.
+ */
+ msg->next = buffer_count(buf, buf->p, p2);
+ msg->msg_state = HTTP_MSG_DONE;
+ return 1;
+ }
+ /* OK, next line then */
+ msg->next = buffer_count(buf, buf->p, p2);
+ }
+}
+
+/* This function may be called only in HTTP_MSG_CHUNK_CRLF. It reads the CRLF
+ * or a possible LF alone at the end of a chunk. It automatically adjusts
+ * msg->next in order to include this part into the next forwarding phase.
+ * Note that the caller must ensure that ->p points to the first byte to parse.
+ * It also sets msg_state to HTTP_MSG_CHUNK_SIZE and returns >0 on success. If
+ * not enough data are available, the function does not change anything and
+ * returns zero. If a parse error is encountered, the function returns < 0 and
+ * does not change anything. Note: this function is designed to parse wrapped
+ * CRLF at the end of the buffer.
+ */
+static inline int http_skip_chunk_crlf(struct http_msg *msg)
+{
+ const struct buffer *buf = msg->chn->buf;
+ const char *ptr;
+ int bytes;
+
+ /* NB: we'll check data availability at the end. It's not a
+ * problem because whatever we match first will be checked
+ * against the correct length.
+ */
+ bytes = 1;
+ ptr = b_ptr(buf, msg->next);
+ if (*ptr == '\r') {
+ bytes++;
+ ptr++;
+ if (ptr >= buf->data + buf->size)
+ ptr = buf->data;
+ }
+
+ if (msg->next + bytes > buf->i)
+ return 0;
+
+ if (*ptr != '\n') {
+ msg->err_pos = buffer_count(buf, buf->p, ptr);
+ return -1;
+ }
+
+ ptr++;
+ if (unlikely(ptr >= buf->data + buf->size))
+ ptr = buf->data;
+ /* Advance ->next to allow the CRLF to be forwarded */
+ msg->next += bytes;
+ msg->msg_state = HTTP_MSG_CHUNK_SIZE;
+ return 1;
+}
+
+/* Parses a qvalue and returns it multiplied by 1000, from 0 to 1000. If the
+ * value is larger than 1000, it is bound to 1000. The parser consumes up to
+ * 1 digit, one dot and 3 digits and stops on the first invalid character.
+ * Unparsable qvalues return 1000 as "q=1.000".
+ */
+int parse_qvalue(const char *qvalue, const char **end)
+{
+ int q = 1000;
+
+ if (!isdigit((unsigned char)*qvalue))
+ goto out;
+ q = (*qvalue++ - '0') * 1000;
+
+ if (*qvalue++ != '.')
+ goto out;
+
+ if (!isdigit((unsigned char)*qvalue))
+ goto out;
+ q += (*qvalue++ - '0') * 100;
+
+ if (!isdigit((unsigned char)*qvalue))
+ goto out;
+ q += (*qvalue++ - '0') * 10;
+
+ if (!isdigit((unsigned char)*qvalue))
+ goto out;
+ q += (*qvalue++ - '0') * 1;
+ out:
+ if (q > 1000)
+ q = 1000;
+ if (end)
+ *end = qvalue;
+ return q;
+}
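The qvalue rule above is self-contained and easy to exercise standalone. The sketch below reproduces the same semantics (one digit, a dot, up to three digits, scaled by 1000 and clamped; unparsable input yields 1000) under a hypothetical name:

```c
#include <assert.h>
#include <ctype.h>

/* Minimal re-implementation of the qvalue rule above, for illustration:
 * parse up to one digit, a dot and three digits, scale by 1000, clamp
 * to 1000. Unparsable input yields 1000 (i.e. "q=1.000").
 */
static int qvalue_x1000(const char *s)
{
	int q = 1000;

	if (!isdigit((unsigned char)*s))
		goto out;
	q = (*s++ - '0') * 1000;

	if (*s++ != '.')
		goto out;
	if (!isdigit((unsigned char)*s))
		goto out;
	q += (*s++ - '0') * 100;
	if (!isdigit((unsigned char)*s))
		goto out;
	q += (*s++ - '0') * 10;
	if (!isdigit((unsigned char)*s))
		goto out;
	q += (*s++ - '0');
 out:
	return q > 1000 ? 1000 : q;
}
```

For example "0.5" maps to 500, "1" to 1000, and an out-of-range "1.5" is clamped back to 1000, matching the caller's use of 1000 as "q=1.000".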
+
+/*
+ * Selects a compression algorithm depending on the client request.
+ */
+int select_compression_request_header(struct stream *s, struct buffer *req)
+{
+ struct http_txn *txn = s->txn;
+ struct http_msg *msg = &txn->req;
+ struct hdr_ctx ctx;
+ struct comp_algo *comp_algo = NULL;
+ struct comp_algo *comp_algo_back = NULL;
+
+ /* Disable compression for older user agents announcing themselves as "Mozilla/4"
+ * unless they are known good (MSIE 6 with XP SP2, or MSIE 7 and later).
+ * See http://zoompf.com/2012/02/lose-the-wait-http-compression for more details.
+ */
+ ctx.idx = 0;
+ if (http_find_header2("User-Agent", 10, req->p, &txn->hdr_idx, &ctx) &&
+ ctx.vlen >= 9 &&
+ memcmp(ctx.line + ctx.val, "Mozilla/4", 9) == 0 &&
+ (ctx.vlen < 31 ||
+ memcmp(ctx.line + ctx.val + 25, "MSIE ", 5) != 0 ||
+ ctx.line[ctx.val + 30] < '6' ||
+ (ctx.line[ctx.val + 30] == '6' &&
+ (ctx.vlen < 54 || memcmp(ctx.line + 51, "SV1", 3) != 0)))) {
+ s->comp_algo = NULL;
+ return 0;
+ }
+
+ /* search for the algo in the backend in priority or the frontend */
+ if ((s->be->comp && (comp_algo_back = s->be->comp->algos)) || (strm_fe(s)->comp && (comp_algo_back = strm_fe(s)->comp->algos))) {
+ int best_q = 0;
+
+ ctx.idx = 0;
+ while (http_find_header2("Accept-Encoding", 15, req->p, &txn->hdr_idx, &ctx)) {
+ const char *qval;
+ int q;
+ int toklen;
+
+ /* try to isolate the token from the optional q-value */
+ toklen = 0;
+ while (toklen < ctx.vlen && http_is_token[(unsigned char)*(ctx.line + ctx.val + toklen)])
+ toklen++;
+
+ qval = ctx.line + ctx.val + toklen;
+ while (1) {
+ while (qval < ctx.line + ctx.val + ctx.vlen && http_is_lws[(unsigned char)*qval])
+ qval++;
+
+ if (qval >= ctx.line + ctx.val + ctx.vlen || *qval != ';') {
+ qval = NULL;
+ break;
+ }
+ qval++;
+
+ while (qval < ctx.line + ctx.val + ctx.vlen && http_is_lws[(unsigned char)*qval])
+ qval++;
+
+ if (qval >= ctx.line + ctx.val + ctx.vlen) {
+ qval = NULL;
+ break;
+ }
+ if (strncmp(qval, "q=", MIN(ctx.line + ctx.val + ctx.vlen - qval, 2)) == 0)
+ break;
+
+ while (qval < ctx.line + ctx.val + ctx.vlen && *qval != ';')
+ qval++;
+ }
+
+ /* here we have qval pointing to the first "q=" attribute or NULL if not found */
+ q = qval ? parse_qvalue(qval + 2, NULL) : 1000;
+
+ if (q <= best_q)
+ continue;
+
+ for (comp_algo = comp_algo_back; comp_algo; comp_algo = comp_algo->next) {
+ if (*(ctx.line + ctx.val) == '*' ||
+ word_match(ctx.line + ctx.val, toklen, comp_algo->ua_name, comp_algo->ua_name_len)) {
+ s->comp_algo = comp_algo;
+ best_q = q;
+ break;
+ }
+ }
+ }
+ }
+
+ /* remove all occurrences of the header when "compression offload" is set */
+ if (s->comp_algo) {
+ if ((s->be->comp && s->be->comp->offload) || (strm_fe(s)->comp && strm_fe(s)->comp->offload)) {
+ http_remove_header2(msg, &txn->hdr_idx, &ctx);
+ ctx.idx = 0;
+ while (http_find_header2("Accept-Encoding", 15, req->p, &txn->hdr_idx, &ctx)) {
+ http_remove_header2(msg, &txn->hdr_idx, &ctx);
+ }
+ }
+ return 1;
+ }
+
+ /* identity is implicit and does not require headers */
+ if ((s->be->comp && (comp_algo_back = s->be->comp->algos)) || (strm_fe(s)->comp && (comp_algo_back = strm_fe(s)->comp->algos))) {
+ for (comp_algo = comp_algo_back; comp_algo; comp_algo = comp_algo->next) {
+ if (comp_algo->cfg_name_len == 8 && memcmp(comp_algo->cfg_name, "identity", 8) == 0) {
+ s->comp_algo = comp_algo;
+ return 1;
+ }
+ }
+ }
+
+ s->comp_algo = NULL;
+ return 0;
+}
+
+/*
+ * Selects a compression algorithm depending on the server response.
+ */
+int select_compression_response_header(struct stream *s, struct buffer *res)
+{
+ struct http_txn *txn = s->txn;
+ struct http_msg *msg = &txn->rsp;
+ struct hdr_ctx ctx;
+ struct comp_type *comp_type;
+
+ /* no common compression algorithm was found in request header */
+ if (s->comp_algo == NULL)
+ goto fail;
+
+ /* HTTP < 1.1 should not be compressed */
+ if (!(msg->flags & HTTP_MSGF_VER_11) || !(txn->req.flags & HTTP_MSGF_VER_11))
+ goto fail;
+
+ /* compress 200,201,202,203 responses only */
+ if ((txn->status != 200) &&
+ (txn->status != 201) &&
+ (txn->status != 202) &&
+ (txn->status != 203))
+ goto fail;
+
+ /* Content-Length is zero : nothing to compress */
+ if (!(msg->flags & HTTP_MSGF_TE_CHNK) && msg->body_len == 0)
+ goto fail;
+
+ /* content is already compressed */
+ ctx.idx = 0;
+ if (http_find_header2("Content-Encoding", 16, res->p, &txn->hdr_idx, &ctx))
+ goto fail;
+
+ /* no compression when Cache-Control: no-transform is present in the message */
+ ctx.idx = 0;
+ while (http_find_header2("Cache-Control", 13, res->p, &txn->hdr_idx, &ctx)) {
+ if (word_match(ctx.line + ctx.val, ctx.vlen, "no-transform", 12))
+ goto fail;
+ }
+
+ comp_type = NULL;
+
+ /* we don't want to compress multipart content-types, nor content-types that are
+ * not listed in the "compression type" directive if any. If no content-type was
+ * found but configuration requires one, we don't compress either. Backend has
+ * the priority.
+ */
+ ctx.idx = 0;
+ if (http_find_header2("Content-Type", 12, res->p, &txn->hdr_idx, &ctx)) {
+ if (ctx.vlen >= 9 && strncasecmp("multipart", ctx.line+ctx.val, 9) == 0)
+ goto fail;
+
+ if ((s->be->comp && (comp_type = s->be->comp->types)) ||
+ (strm_fe(s)->comp && (comp_type = strm_fe(s)->comp->types))) {
+ for (; comp_type; comp_type = comp_type->next) {
+ if (ctx.vlen >= comp_type->name_len &&
+ strncasecmp(ctx.line+ctx.val, comp_type->name, comp_type->name_len) == 0)
+ /* this Content-Type should be compressed */
+ break;
+ }
+ /* this Content-Type should not be compressed */
+ if (comp_type == NULL)
+ goto fail;
+ }
+ }
+ else { /* no content-type header */
+ if ((s->be->comp && s->be->comp->types) || (strm_fe(s)->comp && strm_fe(s)->comp->types))
+ goto fail; /* a content-type was required */
+ }
+
+ /* limit compression rate */
+ if (global.comp_rate_lim > 0)
+ if (read_freq_ctr(&global.comp_bps_in) > global.comp_rate_lim)
+ goto fail;
+
+ /* limit cpu usage */
+ if (idle_pct < compress_min_idle)
+ goto fail;
+
+ /* initialize compression */
+ if (s->comp_algo->init(&s->comp_ctx, global.tune.comp_maxlevel) < 0)
+ goto fail;
+
+ s->flags |= SF_COMP_READY;
+
+ /* remove Content-Length header */
+ ctx.idx = 0;
+ if ((msg->flags & HTTP_MSGF_CNT_LEN) && http_find_header2("Content-Length", 14, res->p, &txn->hdr_idx, &ctx))
+ http_remove_header2(msg, &txn->hdr_idx, &ctx);
+
+ /* add Transfer-Encoding header */
+ if (!(msg->flags & HTTP_MSGF_TE_CHNK))
+ http_header_add_tail2(&txn->rsp, &txn->hdr_idx, "Transfer-Encoding: chunked", 26);
+
+ /*
+ * Add Content-Encoding header when it's not identity encoding.
+ * RFC 2616 : Identity encoding: This content-coding is used only in the
+ * Accept-Encoding header, and SHOULD NOT be used in the Content-Encoding
+ * header.
+ */
+ if (s->comp_algo->cfg_name_len != 8 || memcmp(s->comp_algo->cfg_name, "identity", 8) != 0) {
+ trash.len = 18;
+ memcpy(trash.str, "Content-Encoding: ", trash.len);
+ memcpy(trash.str + trash.len, s->comp_algo->ua_name, s->comp_algo->ua_name_len);
+ trash.len += s->comp_algo->ua_name_len;
+ trash.str[trash.len] = '\0';
+ http_header_add_tail2(&txn->rsp, &txn->hdr_idx, trash.str, trash.len);
+ }
+ return 1;
+
+fail:
+ s->comp_algo = NULL;
+ return 0;
+}
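The Cache-Control scan above relies on word_match() to detect "no-transform" among comma-separated tokens. As a rough standalone illustration of such a token matcher (a sketch, not HAProxy's actual word_match() implementation):

```c
#include <ctype.h>
#include <string.h>
#include <strings.h>

/* Return 1 if the comma/space-separated list <hay> of length <len>
 * contains <word> of length <wlen> as a whole token, 0 otherwise.
 * Matching is case-insensitive, like HTTP header token comparison.
 */
static int list_has_word(const char *hay, size_t len,
                         const char *word, size_t wlen)
{
	const char *end = hay + len;

	while (hay < end) {
		const char *tok;
		size_t tlen;

		/* skip token separators */
		while (hay < end && (*hay == ',' || isspace((unsigned char)*hay)))
			hay++;
		tok = hay;
		while (hay < end && *hay != ',' && !isspace((unsigned char)*hay))
			hay++;
		tlen = (size_t)(hay - tok);
		if (tlen == wlen && strncasecmp(tok, word, wlen) == 0)
			return 1;
	}
	return 0;
}
```

Note that a plain substring search would wrongly match "no-transforms"; the token boundaries matter.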
+
+void http_adjust_conn_mode(struct stream *s, struct http_txn *txn, struct http_msg *msg)
+{
+ struct proxy *fe = strm_fe(s);
+ int tmp = TX_CON_WANT_KAL;
+
+ if (!((fe->options2|s->be->options2) & PR_O2_FAKE_KA)) {
+ if ((fe->options & PR_O_HTTP_MODE) == PR_O_HTTP_TUN ||
+ (s->be->options & PR_O_HTTP_MODE) == PR_O_HTTP_TUN)
+ tmp = TX_CON_WANT_TUN;
+
+ if ((fe->options & PR_O_HTTP_MODE) == PR_O_HTTP_PCL ||
+ (s->be->options & PR_O_HTTP_MODE) == PR_O_HTTP_PCL)
+ tmp = TX_CON_WANT_TUN;
+ }
+
+ if ((fe->options & PR_O_HTTP_MODE) == PR_O_HTTP_SCL ||
+ (s->be->options & PR_O_HTTP_MODE) == PR_O_HTTP_SCL) {
+ /* option httpclose + server_close => forceclose */
+ if ((fe->options & PR_O_HTTP_MODE) == PR_O_HTTP_PCL ||
+ (s->be->options & PR_O_HTTP_MODE) == PR_O_HTTP_PCL)
+ tmp = TX_CON_WANT_CLO;
+ else
+ tmp = TX_CON_WANT_SCL;
+ }
+
+ if ((fe->options & PR_O_HTTP_MODE) == PR_O_HTTP_FCL ||
+ (s->be->options & PR_O_HTTP_MODE) == PR_O_HTTP_FCL)
+ tmp = TX_CON_WANT_CLO;
+
+ if ((txn->flags & TX_CON_WANT_MSK) < tmp)
+ txn->flags = (txn->flags & ~TX_CON_WANT_MSK) | tmp;
+
+ if (!(txn->flags & TX_HDR_CONN_PRS) &&
+ (txn->flags & TX_CON_WANT_MSK) != TX_CON_WANT_TUN) {
+ /* parse the Connection header and possibly clean it */
+ int to_del = 0;
+ if ((msg->flags & HTTP_MSGF_VER_11) ||
+ ((txn->flags & TX_CON_WANT_MSK) >= TX_CON_WANT_SCL &&
+ !((fe->options2|s->be->options2) & PR_O2_FAKE_KA)))
+ to_del |= 2; /* remove "keep-alive" */
+ if (!(msg->flags & HTTP_MSGF_VER_11))
+ to_del |= 1; /* remove "close" */
+ http_parse_connection_header(txn, msg, to_del);
+ }
+
+ /* check if client or config asks for explicit close in KAL/SCL */
+ if (((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_KAL ||
+ (txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_SCL) &&
+ ((txn->flags & TX_HDR_CONN_CLO) || /* "connection: close" */
+ (!(msg->flags & HTTP_MSGF_VER_11) && !(txn->flags & TX_HDR_CONN_KAL)) || /* no "connection: k-a" in 1.0 */
+ !(msg->flags & HTTP_MSGF_XFER_LEN) || /* no length known => close */
+ fe->state == PR_STSTOPPED)) /* frontend is stopping */
+ txn->flags = (txn->flags & ~TX_CON_WANT_MSK) | TX_CON_WANT_CLO;
+}
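http_adjust_conn_mode() only ever upgrades the desired connection mode: the TX_CON_WANT_* values are ordered from keep-alive to forced close, so the test `(txn->flags & TX_CON_WANT_MSK) < tmp` always moves toward a stricter mode and never relaxes one. A minimal sketch of that upgrade-only comparison, using illustrative values rather than HAProxy's real flag encoding:

```c
/* Desired connection modes, ordered from least to most restrictive.
 * (Illustrative values; HAProxy packs these into txn->flags.)
 */
enum conn_want { WANT_KAL = 0, WANT_TUN = 1, WANT_SCL = 2, WANT_CLO = 3 };

#define WANT_MSK 0x3u

/* Upgrade the mode bits in <flags> to <tmp> only if <tmp> is stricter;
 * other bits in <flags> are preserved.
 */
static unsigned int upgrade_conn_mode(unsigned int flags, enum conn_want tmp)
{
	if ((flags & WANT_MSK) < (unsigned int)tmp)
		flags = (flags & ~WANT_MSK) | (unsigned int)tmp;
	return flags;
}
```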
+
+/* This stream analyser waits for a complete HTTP request. It returns 1 if the
+ * processing can continue on next analysers, or zero if it either needs more
+ * data or wants to immediately abort the request (eg: timeout, error, ...). It
+ * is tied to AN_REQ_WAIT_HTTP and may remove itself from s->req.analysers
+ * when it has nothing left to do, and may remove any analyser when it wants to
+ * abort.
+ */
+int http_wait_for_request(struct stream *s, struct channel *req, int an_bit)
+{
+ /*
+ * We will parse the partial (or complete) lines.
+ * We will check the request syntax, and also join multi-line
+ * headers. An index of all the lines will be elaborated while
+ * parsing.
+ *
+ * For the parsing, we use a 28-state FSM.
+ *
+ * Here is the information we currently have :
+ * req->buf->p = beginning of request
+ * req->buf->p + msg->eoh = end of processed headers / start of current one
+ * req->buf->p + req->buf->i = end of input data
+ * msg->eol = end of current header or line (LF or CRLF)
+ * msg->next = first non-visited byte
+ *
+ * At end of parsing, we may perform a capture of the error (if any), and
+ * we will set a few fields (txn->meth, s->flags/SF_REDIRECTABLE).
+ * We also check for monitor-uri, logging, HTTP/0.9 to 1.0 conversion, and
+ * finally headers capture.
+ */
+
+ int cur_idx;
+ struct session *sess = s->sess;
+ struct http_txn *txn = s->txn;
+ struct http_msg *msg = &txn->req;
+ struct hdr_ctx ctx;
+
+ DPRINTF(stderr,"[%u] %s: stream=%p b=%p, exp(r,w)=%u,%u bf=%08x bh=%d analysers=%02x\n",
+ now_ms, __FUNCTION__,
+ s,
+ req,
+ req->rex, req->wex,
+ req->flags,
+ req->buf->i,
+ req->analysers);
+
+ /* we're speaking HTTP here, so let's speak HTTP to the client */
+ s->srv_error = http_return_srv_error;
+
+ /* There's a protected area at the end of the buffer for rewriting
+ * purposes. We don't want to start to parse the request if the
+ * protected area is affected, because we may have to move processed
+ * data later, which is much more complicated.
+ */
+ if (buffer_not_empty(req->buf) && msg->msg_state < HTTP_MSG_ERROR) {
+ if (txn->flags & TX_NOT_FIRST) {
+ if (unlikely(!channel_is_rewritable(req))) {
+ if (req->flags & (CF_SHUTW|CF_SHUTW_NOW|CF_WRITE_ERROR|CF_WRITE_TIMEOUT))
+ goto failed_keep_alive;
+ /* some data has still not left the buffer, wake us once that's done */
+ channel_dont_connect(req);
+ req->flags |= CF_READ_DONTWAIT; /* try to get back here ASAP */
+ req->flags |= CF_WAKE_WRITE;
+ return 0;
+ }
+ if (unlikely(bi_end(req->buf) < b_ptr(req->buf, msg->next) ||
+ bi_end(req->buf) > req->buf->data + req->buf->size - global.tune.maxrewrite))
+ buffer_slow_realign(req->buf);
+ }
+
+ /* Note that we have the same problem with the response ; we
+ * may want to send a redirect, error or anything which requires
+ * some spare space. So we'll ensure that we have at least
+ * maxrewrite bytes available in the response buffer before
+ * processing that one. This will only affect pipelined
+ * keep-alive requests.
+ */
+ if ((txn->flags & TX_NOT_FIRST) &&
+ unlikely(!channel_is_rewritable(&s->res) ||
+ bi_end(s->res.buf) < b_ptr(s->res.buf, txn->rsp.next) ||
+ bi_end(s->res.buf) > s->res.buf->data + s->res.buf->size - global.tune.maxrewrite)) {
+ if (s->res.buf->o) {
+ if (s->res.flags & (CF_SHUTW|CF_SHUTW_NOW|CF_WRITE_ERROR|CF_WRITE_TIMEOUT))
+ goto failed_keep_alive;
+ /* don't let a connection request be initiated */
+ channel_dont_connect(req);
+ s->res.flags &= ~CF_EXPECT_MORE; /* speed up sending a previous response */
+ s->res.flags |= CF_WAKE_WRITE;
+ s->res.analysers |= an_bit; /* wake us up once it changes */
+ return 0;
+ }
+ }
+
+ if (likely(msg->next < req->buf->i)) /* some unparsed data are available */
+ http_msg_analyzer(msg, &txn->hdr_idx);
+ }
+
+ /* 1: we might have to print this header in debug mode */
+ if (unlikely((global.mode & MODE_DEBUG) &&
+ (!(global.mode & MODE_QUIET) || (global.mode & MODE_VERBOSE)) &&
+ msg->msg_state >= HTTP_MSG_BODY)) {
+ char *eol, *sol;
+
+ sol = req->buf->p;
+ /* this is a bit complex : in case of error on the request line,
+ * we know that rq.l is still zero, so we display only the part
+ * up to the end of the line (truncated by debug_hdr).
+ */
+ eol = sol + (msg->sl.rq.l ? msg->sl.rq.l : req->buf->i);
+ debug_hdr("clireq", s, sol, eol);
+
+ sol += hdr_idx_first_pos(&txn->hdr_idx);
+ cur_idx = hdr_idx_first_idx(&txn->hdr_idx);
+
+ while (cur_idx) {
+ eol = sol + txn->hdr_idx.v[cur_idx].len;
+ debug_hdr("clihdr", s, sol, eol);
+ sol = eol + txn->hdr_idx.v[cur_idx].cr + 1;
+ cur_idx = txn->hdr_idx.v[cur_idx].next;
+ }
+ }
+
+
+ /*
+ * Now we quickly check if we have found a full valid request.
+ * If not so, we check the FD and buffer states before leaving.
+ * A full request is indicated by the fact that we have seen
+ * the double LF/CRLF, so the state is >= HTTP_MSG_BODY. Invalid
+ * requests are checked first. When waiting for a second request
+ * on a keep-alive stream, if we encounter an error, close or timeout,
+ * we note the error in the stream flags but don't set any state.
+ * Since the error will be noted there, it will not be counted by
+ * process_stream() as a frontend error.
+ * Last, we may increase some tracked counters' http request errors on
+ * the cases that are deliberately the client's fault. For instance,
+ * a timeout or connection reset is not counted as an error. However
+ * a bad request is.
+ */
+
+ if (unlikely(msg->msg_state < HTTP_MSG_BODY)) {
+ /*
+ * First, let's catch bad requests.
+ */
+ if (unlikely(msg->msg_state == HTTP_MSG_ERROR)) {
+ stream_inc_http_req_ctr(s);
+ stream_inc_http_err_ctr(s);
+ proxy_inc_fe_req_ctr(sess->fe);
+ goto return_bad_req;
+ }
+
+ /* 1: Since we are in header mode, if there's no space
+ * left for headers, we won't be able to free more
+ * later, so the stream will never terminate. We
+ * must terminate it now.
+ */
+ if (unlikely(buffer_full(req->buf, global.tune.maxrewrite))) {
+ /* FIXME: check if URI is set and return Status
+ * 414 Request URI too long instead.
+ */
+ stream_inc_http_req_ctr(s);
+ stream_inc_http_err_ctr(s);
+ proxy_inc_fe_req_ctr(sess->fe);
+ if (msg->err_pos < 0)
+ msg->err_pos = req->buf->i;
+ goto return_bad_req;
+ }
+
+ /* 2: have we encountered a read error ? */
+ else if (req->flags & CF_READ_ERROR) {
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_CLICL;
+
+ if (txn->flags & TX_WAIT_NEXT_RQ)
+ goto failed_keep_alive;
+
+ if (sess->fe->options & PR_O_IGNORE_PRB)
+ goto failed_keep_alive;
+
+ /* we cannot return any message on error */
+ if (msg->err_pos >= 0) {
+ http_capture_bad_message(&sess->fe->invalid_req, s, msg, msg->msg_state, sess->fe);
+ stream_inc_http_err_ctr(s);
+ }
+
+ txn->status = 400;
+ stream_int_retnclose(&s->si[0], NULL);
+ msg->msg_state = HTTP_MSG_ERROR;
+ req->analysers = 0;
+
+ stream_inc_http_req_ctr(s);
+ proxy_inc_fe_req_ctr(sess->fe);
+ sess->fe->fe_counters.failed_req++;
+ if (sess->listener->counters)
+ sess->listener->counters->failed_req++;
+
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_R;
+ return 0;
+ }
+
+ /* 3: has the read timeout expired ? */
+ else if (req->flags & CF_READ_TIMEOUT || tick_is_expired(req->analyse_exp, now_ms)) {
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_CLITO;
+
+ if (txn->flags & TX_WAIT_NEXT_RQ)
+ goto failed_keep_alive;
+
+ if (sess->fe->options & PR_O_IGNORE_PRB)
+ goto failed_keep_alive;
+
+ /* read timeout : give up with an error message. */
+ if (msg->err_pos >= 0) {
+ http_capture_bad_message(&sess->fe->invalid_req, s, msg, msg->msg_state, sess->fe);
+ stream_inc_http_err_ctr(s);
+ }
+ txn->status = 408;
+ stream_int_retnclose(&s->si[0], http_error_message(s, HTTP_ERR_408));
+ msg->msg_state = HTTP_MSG_ERROR;
+ req->analysers = 0;
+
+ stream_inc_http_req_ctr(s);
+ proxy_inc_fe_req_ctr(sess->fe);
+ sess->fe->fe_counters.failed_req++;
+ if (sess->listener->counters)
+ sess->listener->counters->failed_req++;
+
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_R;
+ return 0;
+ }
+
+ /* 4: have we encountered a close ? */
+ else if (req->flags & CF_SHUTR) {
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_CLICL;
+
+ if (txn->flags & TX_WAIT_NEXT_RQ)
+ goto failed_keep_alive;
+
+ if (sess->fe->options & PR_O_IGNORE_PRB)
+ goto failed_keep_alive;
+
+ if (msg->err_pos >= 0)
+ http_capture_bad_message(&sess->fe->invalid_req, s, msg, msg->msg_state, sess->fe);
+ txn->status = 400;
+ stream_int_retnclose(&s->si[0], http_error_message(s, HTTP_ERR_400));
+ msg->msg_state = HTTP_MSG_ERROR;
+ req->analysers = 0;
+
+ stream_inc_http_err_ctr(s);
+ stream_inc_http_req_ctr(s);
+ proxy_inc_fe_req_ctr(sess->fe);
+ sess->fe->fe_counters.failed_req++;
+ if (sess->listener->counters)
+ sess->listener->counters->failed_req++;
+
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_R;
+ return 0;
+ }
+
+ channel_dont_connect(req);
+ req->flags |= CF_READ_DONTWAIT; /* try to get back here ASAP */
+ s->res.flags &= ~CF_EXPECT_MORE; /* speed up sending a previous response */
+#ifdef TCP_QUICKACK
+ if (sess->listener->options & LI_O_NOQUICKACK && req->buf->i &&
+ objt_conn(sess->origin) && conn_ctrl_ready(__objt_conn(sess->origin))) {
+ /* We need more data, we have to re-enable quick-ack in case we
+ * previously disabled it, otherwise we might cause the client
+ * to delay next data.
+ */
+ setsockopt(__objt_conn(sess->origin)->t.sock.fd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof(one));
+ }
+#endif
+
+ if ((msg->msg_state != HTTP_MSG_RQBEFORE) && (txn->flags & TX_WAIT_NEXT_RQ)) {
+ /* If the client starts to talk, let's fall back to
+ * request timeout processing.
+ */
+ txn->flags &= ~TX_WAIT_NEXT_RQ;
+ req->analyse_exp = TICK_ETERNITY;
+ }
+
+ /* just set the request timeout once at the beginning of the request */
+ if (!tick_isset(req->analyse_exp)) {
+ if ((msg->msg_state == HTTP_MSG_RQBEFORE) &&
+ (txn->flags & TX_WAIT_NEXT_RQ) &&
+ tick_isset(s->be->timeout.httpka))
+ req->analyse_exp = tick_add(now_ms, s->be->timeout.httpka);
+ else
+ req->analyse_exp = tick_add_ifset(now_ms, s->be->timeout.httpreq);
+ }
+
+ /* we're not ready yet */
+ return 0;
+
+ failed_keep_alive:
+ /* Here we process low-level errors for keep-alive requests. In
+ * short, if the request is not the first one and it experiences
+ * a timeout, read error or shutdown, we just silently close so
+ * that the client can try again.
+ */
+ txn->status = 0;
+ msg->msg_state = HTTP_MSG_RQBEFORE;
+ req->analysers = 0;
+ s->logs.logwait = 0;
+ s->logs.level = 0;
+ s->res.flags &= ~CF_EXPECT_MORE; /* speed up sending a previous response */
+ stream_int_retnclose(&s->si[0], NULL);
+ return 0;
+ }
+
+ /* OK now we have a complete HTTP request with indexed headers. Let's
+ * complete the request parsing by setting a few fields we will need
+ * later. At this point, we have the last CRLF at req->buf->data + msg->eoh.
+ * If the request is in HTTP/0.9 form, the rule is still true, and eoh
+ * points to the CRLF of the request line. msg->next points to the first
+ * byte after the last LF. msg->sov points to the first byte of data.
+ * msg->eol cannot be trusted because it may have been left uninitialized
+ * (for instance in the absence of headers).
+ */
+
+ stream_inc_http_req_ctr(s);
+ proxy_inc_fe_req_ctr(sess->fe); /* one more valid request for this FE */
+
+ if (txn->flags & TX_WAIT_NEXT_RQ) {
+ /* kill the pending keep-alive timeout */
+ txn->flags &= ~TX_WAIT_NEXT_RQ;
+ req->analyse_exp = TICK_ETERNITY;
+ }
+
+
+ /* Maybe we found an invalid header name while we were configured not
+ * to block on that, so we have to capture it now.
+ */
+ if (unlikely(msg->err_pos >= 0))
+ http_capture_bad_message(&sess->fe->invalid_req, s, msg, msg->msg_state, sess->fe);
+
+ /*
+ * 1: identify the method
+ */
+ txn->meth = find_http_meth(req->buf->p, msg->sl.rq.m_l);
+
+ /* we can make use of server redirect on GET and HEAD */
+ if (txn->meth == HTTP_METH_GET || txn->meth == HTTP_METH_HEAD)
+ s->flags |= SF_REDIRECTABLE;
+
+ /*
+ * 2: check if the URI matches the monitor_uri.
+ * We have to do this for every request which gets in, because
+ * the monitor-uri is defined by the frontend.
+ */
+ if (unlikely((sess->fe->monitor_uri_len != 0) &&
+ (sess->fe->monitor_uri_len == msg->sl.rq.u_l) &&
+ !memcmp(req->buf->p + msg->sl.rq.u,
+ sess->fe->monitor_uri,
+ sess->fe->monitor_uri_len))) {
+ /*
+ * We have found the monitor URI
+ */
+ struct acl_cond *cond;
+
+ s->flags |= SF_MONITOR;
+ sess->fe->fe_counters.intercepted_req++;
+
+ /* Check if we want to fail this monitor request or not */
+ list_for_each_entry(cond, &sess->fe->mon_fail_cond, list) {
+ int ret = acl_exec_cond(cond, sess->fe, sess, s, SMP_OPT_DIR_REQ|SMP_OPT_FINAL);
+
+ ret = acl_pass(ret);
+ if (cond->pol == ACL_COND_UNLESS)
+ ret = !ret;
+
+ if (ret) {
+ /* we fail this request, let's return 503 service unavail */
+ txn->status = 503;
+ stream_int_retnclose(&s->si[0], http_error_message(s, HTTP_ERR_503));
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_LOCAL; /* we don't want a real error here */
+ goto return_prx_cond;
+ }
+ }
+
+ /* nothing to fail, let's reply normally */
+ txn->status = 200;
+ stream_int_retnclose(&s->si[0], http_error_message(s, HTTP_ERR_200));
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_LOCAL; /* we don't want a real error here */
+ goto return_prx_cond;
+ }
+
+ /*
+ * 3: Maybe we have to copy the original REQURI for the logs ?
+ * Note: we cannot log anymore if the request has been
+ * classified as invalid.
+ */
+ if (unlikely(s->logs.logwait & LW_REQ)) {
+ /* we have a complete HTTP request that we must log */
+ if ((txn->uri = pool_alloc2(pool2_requri)) != NULL) {
+ int urilen = msg->sl.rq.l;
+
+ if (urilen >= REQURI_LEN)
+ urilen = REQURI_LEN - 1;
+ memcpy(txn->uri, req->buf->p, urilen);
+ txn->uri[urilen] = 0;
+
+ if (!(s->logs.logwait &= ~(LW_REQ|LW_INIT)))
+ s->do_log(s);
+ } else {
+ Alert("HTTP logging : out of memory.\n");
+ }
+ }
+
+ /* RFC7230#2.6 mandates the format of the HTTP version string: exactly
+ * one digit "." one digit. This check may be disabled using
+ * option accept-invalid-http-request.
+ */
+ if (!(sess->fe->options2 & PR_O2_REQBUG_OK)) {
+ if (msg->sl.rq.v_l != 8) {
+ msg->err_pos = msg->sl.rq.v;
+ goto return_bad_req;
+ }
+
+ if (req->buf->p[msg->sl.rq.v + 4] != '/' ||
+ !isdigit((unsigned char)req->buf->p[msg->sl.rq.v + 5]) ||
+ req->buf->p[msg->sl.rq.v + 6] != '.' ||
+ !isdigit((unsigned char)req->buf->p[msg->sl.rq.v + 7])) {
+ msg->err_pos = msg->sl.rq.v + 4;
+ goto return_bad_req;
+ }
+ }
+ else {
+ /* 4. We may have to convert HTTP/0.9 requests to HTTP/1.0 */
+ if (unlikely(msg->sl.rq.v_l == 0) && !http_upgrade_v09_to_v10(txn))
+ goto return_bad_req;
+ }
+
+ /* ... and check if the request is HTTP/1.1 or above */
+ if ((msg->sl.rq.v_l == 8) &&
+ ((req->buf->p[msg->sl.rq.v + 5] > '1') ||
+ ((req->buf->p[msg->sl.rq.v + 5] == '1') &&
+ (req->buf->p[msg->sl.rq.v + 7] >= '1'))))
+ msg->flags |= HTTP_MSGF_VER_11;
+
+ /* "connection" has not been parsed yet */
+ txn->flags &= ~(TX_HDR_CONN_PRS | TX_HDR_CONN_CLO | TX_HDR_CONN_KAL | TX_HDR_CONN_UPG);
+
+ /* if the frontend has "option http-use-proxy-header", we'll check if
+ * we have what looks like a proxied connection instead of a direct one,
+ * and in this case set the TX_USE_PX_CONN flag to use Proxy-connection.
+ * Note that this is *not* RFC-compliant, however browsers and proxies
+ * happen to do that despite being non-standard :-(
+ * We consider that a request not beginning with either '/' or '*' is
+ * a proxied connection, which covers both "scheme://location" and
+ * CONNECT ip:port.
+ */
+ if ((sess->fe->options2 & PR_O2_USE_PXHDR) &&
+ req->buf->p[msg->sl.rq.u] != '/' && req->buf->p[msg->sl.rq.u] != '*')
+ txn->flags |= TX_USE_PX_CONN;
+
+ /* transfer length unknown */
+ msg->flags &= ~HTTP_MSGF_XFER_LEN;
+
+ /* 5: we may need to capture headers */
+ if (unlikely((s->logs.logwait & LW_REQHDR) && s->req_cap))
+ capture_headers(req->buf->p, &txn->hdr_idx,
+ s->req_cap, sess->fe->req_cap);
+
+ /* 6: determine the transfer-length according to RFC2616 #4.4, updated
+ * by RFC7230#3.3.3 :
+ *
+ * The length of a message body is determined by one of the following
+ * (in order of precedence):
+ *
+ * 1. Any response to a HEAD request and any response with a 1xx
+ * (Informational), 204 (No Content), or 304 (Not Modified) status
+ * code is always terminated by the first empty line after the
+ * header fields, regardless of the header fields present in the
+ * message, and thus cannot contain a message body.
+ *
+ * 2. Any 2xx (Successful) response to a CONNECT request implies that
+ * the connection will become a tunnel immediately after the empty
+ * line that concludes the header fields. A client MUST ignore any
+ * Content-Length or Transfer-Encoding header fields received in
+ * such a message.
+ *
+ * 3. If a Transfer-Encoding header field is present and the chunked
+ * transfer coding (Section 4.1) is the final encoding, the message
+ * body length is determined by reading and decoding the chunked
+ * data until the transfer coding indicates the data is complete.
+ *
+ * If a Transfer-Encoding header field is present in a response and
+ * the chunked transfer coding is not the final encoding, the
+ * message body length is determined by reading the connection until
+ * it is closed by the server. If a Transfer-Encoding header field
+ * is present in a request and the chunked transfer coding is not
+ * the final encoding, the message body length cannot be determined
+ * reliably; the server MUST respond with the 400 (Bad Request)
+ * status code and then close the connection.
+ *
+ * If a message is received with both a Transfer-Encoding and a
+ * Content-Length header field, the Transfer-Encoding overrides the
+ * Content-Length. Such a message might indicate an attempt to
+ * perform request smuggling (Section 9.5) or response splitting
+ * (Section 9.4) and ought to be handled as an error. A sender MUST
+ * remove the received Content-Length field prior to forwarding such
+ * a message downstream.
+ *
+ * 4. If a message is received without Transfer-Encoding and with
+ * either multiple Content-Length header fields having differing
+ * field-values or a single Content-Length header field having an
+ * invalid value, then the message framing is invalid and the
+ * recipient MUST treat it as an unrecoverable error. If this is a
+ * request message, the server MUST respond with a 400 (Bad Request)
+ * status code and then close the connection. If this is a response
+ * message received by a proxy, the proxy MUST close the connection
+ * to the server, discard the received response, and send a 502 (Bad
+ * Gateway) response to the client. If this is a response message
+ * received by a user agent, the user agent MUST close the
+ * connection to the server and discard the received response.
+ *
+ * 5. If a valid Content-Length header field is present without
+ * Transfer-Encoding, its decimal value defines the expected message
+ * body length in octets. If the sender closes the connection or
+ * the recipient times out before the indicated number of octets are
+ * received, the recipient MUST consider the message to be
+ * incomplete and close the connection.
+ *
+ * 6. If this is a request message and none of the above are true, then
+ * the message body length is zero (no message body is present).
+ *
+ * 7. Otherwise, this is a response message without a declared message
+ * body length, so the message body length is determined by the
+ * number of octets received prior to the server closing the
+ * connection.
+ */
+
+ ctx.idx = 0;
+ /* set TE_CHNK and XFER_LEN only if "chunked" is seen last */
+ while (http_find_header2("Transfer-Encoding", 17, req->buf->p, &txn->hdr_idx, &ctx)) {
+ if (ctx.vlen == 7 && strncasecmp(ctx.line + ctx.val, "chunked", 7) == 0)
+ msg->flags |= (HTTP_MSGF_TE_CHNK | HTTP_MSGF_XFER_LEN);
+ else if (msg->flags & HTTP_MSGF_TE_CHNK) {
+ /* chunked not last, return badreq */
+ goto return_bad_req;
+ }
+ }
+
+ /* Chunked requests must have their content-length removed */
+ ctx.idx = 0;
+ if (msg->flags & HTTP_MSGF_TE_CHNK) {
+ while (http_find_header2("Content-Length", 14, req->buf->p, &txn->hdr_idx, &ctx))
+ http_remove_header2(msg, &txn->hdr_idx, &ctx);
+ }
+ else while (http_find_header2("Content-Length", 14, req->buf->p, &txn->hdr_idx, &ctx)) {
+ signed long long cl;
+
+ if (!ctx.vlen) {
+ msg->err_pos = ctx.line + ctx.val - req->buf->p;
+ goto return_bad_req;
+ }
+
+ if (strl2llrc(ctx.line + ctx.val, ctx.vlen, &cl)) {
+ msg->err_pos = ctx.line + ctx.val - req->buf->p;
+ goto return_bad_req; /* parse failure */
+ }
+
+ if (cl < 0) {
+ msg->err_pos = ctx.line + ctx.val - req->buf->p;
+ goto return_bad_req;
+ }
+
+ if ((msg->flags & HTTP_MSGF_CNT_LEN) && (msg->chunk_len != cl)) {
+ msg->err_pos = ctx.line + ctx.val - req->buf->p;
+ goto return_bad_req; /* already specified, was different */
+ }
+
+ msg->flags |= HTTP_MSGF_CNT_LEN | HTTP_MSGF_XFER_LEN;
+ msg->body_len = msg->chunk_len = cl;
+ }
+
+ /* even bodyless requests have a known length */
+ msg->flags |= HTTP_MSGF_XFER_LEN;
+
+ /* Until set to anything else, the connection mode is set as Keep-Alive. It will
+ * only change if both the request and the config reference something else.
+ * Option httpclose by itself sets tunnel mode where headers are mangled.
+ * However, if another mode is set, it will affect it (eg: server-close/
+ * keep-alive + httpclose = close). Note that we avoid redoing the same work
+ * if FE and BE have the same settings (common). The method consists in
+ * checking if options changed between the two calls (implying that either
+ * one is non-null, or one of them is non-null and we are there for the first
+ * time).
+ */
+ if (!(txn->flags & TX_HDR_CONN_PRS) ||
+ ((sess->fe->options & PR_O_HTTP_MODE) != (s->be->options & PR_O_HTTP_MODE)))
+ http_adjust_conn_mode(s, txn, msg);
+
+ /* we may have to wait for the request's body */
+ if ((s->be->options & PR_O_WREQ_BODY) &&
+ (msg->body_len || (msg->flags & HTTP_MSGF_TE_CHNK)))
+ req->analysers |= AN_REQ_HTTP_BODY;
+
+ /* end of job, return OK */
+ req->analysers &= ~an_bit;
+ req->analyse_exp = TICK_ETERNITY;
+ return 1;
+
+ return_bad_req:
+ /* We centralize bad requests processing here */
+ if (unlikely(msg->msg_state == HTTP_MSG_ERROR) || msg->err_pos >= 0) {
+ /* we detected a parsing error. We want to archive this request
+ * in the dedicated proxy area for later troubleshooting.
+ */
+ http_capture_bad_message(&sess->fe->invalid_req, s, msg, msg->msg_state, sess->fe);
+ }
+
+ txn->req.msg_state = HTTP_MSG_ERROR;
+ txn->status = 400;
+ stream_int_retnclose(&s->si[0], http_error_message(s, HTTP_ERR_400));
+
+ sess->fe->fe_counters.failed_req++;
+ if (sess->listener->counters)
+ sess->listener->counters->failed_req++;
+
+ return_prx_cond:
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_PRXCOND;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_R;
+
+ req->analysers = 0;
+ req->analyse_exp = TICK_ETERNITY;
+ return 0;
+}
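The version check in http_wait_for_request() accepts only a version string of exactly eight bytes with "/" digit "." digit in the last four positions, per RFC 7230. The same test, condensed into a standalone helper (a hypothetical sketch that also verifies the literal "HTTP" prefix, which at this point in HAProxy the request parser has already validated):

```c
#include <ctype.h>
#include <string.h>

/* Return 1 if <v> of length <len> is a valid RFC 7230 HTTP-version
 * ("HTTP" "/" DIGIT "." DIGIT), 0 otherwise.
 */
static int valid_http_version(const char *v, size_t len)
{
	return len == 8 &&
	       memcmp(v, "HTTP", 4) == 0 &&
	       v[4] == '/' &&
	       isdigit((unsigned char)v[5]) &&
	       v[6] == '.' &&
	       isdigit((unsigned char)v[7]);
}
```

This is why "HTTP/1.10" or "HTTP/1" are rejected unless option accept-invalid-http-request is set.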
+
+
+/* This function prepares an applet to handle the stats. It can deal with the
+ * "100-continue" expectation, check that admin rules are met for POST requests,
+ * and program a response message if something was unexpected. It cannot fail
+ * and always relies on the stats applet to complete the job. It does not touch
+ * analysers nor counters, which are left to the caller. It does not touch
+ * s->target which is supposed to already point to the stats applet. The caller
+ * is expected to have already assigned an appctx to the stream.
+ */
+int http_handle_stats(struct stream *s, struct channel *req)
+{
+ struct stats_admin_rule *stats_admin_rule;
+ struct stream_interface *si = &s->si[1];
+ struct session *sess = s->sess;
+ struct http_txn *txn = s->txn;
+ struct http_msg *msg = &txn->req;
+ struct uri_auth *uri_auth = s->be->uri_auth;
+ const char *uri, *h, *lookup;
+ struct appctx *appctx;
+
+ appctx = si_appctx(si);
+ memset(&appctx->ctx.stats, 0, sizeof(appctx->ctx.stats));
+ appctx->st1 = appctx->st2 = 0;
+ appctx->ctx.stats.st_code = STAT_STATUS_INIT;
+ appctx->ctx.stats.flags |= STAT_FMT_HTML; /* assume HTML mode by default */
+ if ((msg->flags & HTTP_MSGF_VER_11) && (s->txn->meth != HTTP_METH_HEAD))
+ appctx->ctx.stats.flags |= STAT_CHUNKED;
+
+ uri = msg->chn->buf->p + msg->sl.rq.u;
+ lookup = uri + uri_auth->uri_len;
+
+ for (h = lookup; h <= uri + msg->sl.rq.u_l - 3; h++) {
+ if (memcmp(h, ";up", 3) == 0) {
+ appctx->ctx.stats.flags |= STAT_HIDE_DOWN;
+ break;
+ }
+ }
+
+ if (uri_auth->refresh) {
+ for (h = lookup; h <= uri + msg->sl.rq.u_l - 10; h++) {
+ if (memcmp(h, ";norefresh", 10) == 0) {
+ appctx->ctx.stats.flags |= STAT_NO_REFRESH;
+ break;
+ }
+ }
+ }
+
+ for (h = lookup; h <= uri + msg->sl.rq.u_l - 4; h++) {
+ if (memcmp(h, ";csv", 4) == 0) {
+ appctx->ctx.stats.flags &= ~STAT_FMT_HTML;
+ break;
+ }
+ }
+
+ for (h = lookup; h <= uri + msg->sl.rq.u_l - 8; h++) {
+ if (memcmp(h, ";st=", 4) == 0) {
+ int i;
+ h += 4;
+ appctx->ctx.stats.st_code = STAT_STATUS_UNKN;
+ for (i = STAT_STATUS_INIT + 1; i < STAT_STATUS_SIZE; i++) {
+ if (strncmp(stat_status_codes[i], h, 4) == 0) {
+ appctx->ctx.stats.st_code = i;
+ break;
+ }
+ }
+ break;
+ }
+ }
+
+ appctx->ctx.stats.scope_str = 0;
+ appctx->ctx.stats.scope_len = 0;
+ for (h = lookup; h <= uri + msg->sl.rq.u_l - 8; h++) {
+ if (memcmp(h, STAT_SCOPE_INPUT_NAME "=", strlen(STAT_SCOPE_INPUT_NAME) + 1) == 0) {
+ int itx = 0;
+ const char *h2;
+ char scope_txt[STAT_SCOPE_TXT_MAXLEN + 1];
+ const char *err;
+
+ h += strlen(STAT_SCOPE_INPUT_NAME) + 1;
+ h2 = h;
+ appctx->ctx.stats.scope_str = h2 - msg->chn->buf->p;
+ while (*h != ';' && *h != '\0' && *h != '&' && *h != ' ' && *h != '\n') {
+ itx++;
+ h++;
+ }
+
+ if (itx > STAT_SCOPE_TXT_MAXLEN)
+ itx = STAT_SCOPE_TXT_MAXLEN;
+ appctx->ctx.stats.scope_len = itx;
+
+ /* scope_txt = search query, appctx->ctx.stats.scope_len is always <= STAT_SCOPE_TXT_MAXLEN */
+ memcpy(scope_txt, h2, itx);
+ scope_txt[itx] = '\0';
+ err = invalid_char(scope_txt);
+ if (err) {
+ /* bad char in search text => clear scope */
+ appctx->ctx.stats.scope_str = 0;
+ appctx->ctx.stats.scope_len = 0;
+ }
+ break;
+ }
+ }
+
+ /* now check whether we have some admin rules for this request */
+ list_for_each_entry(stats_admin_rule, &uri_auth->admin_rules, list) {
+ int ret = 1;
+
+ if (stats_admin_rule->cond) {
+ ret = acl_exec_cond(stats_admin_rule->cond, s->be, sess, s, SMP_OPT_DIR_REQ|SMP_OPT_FINAL);
+ ret = acl_pass(ret);
+ if (stats_admin_rule->cond->pol == ACL_COND_UNLESS)
+ ret = !ret;
+ }
+
+ if (ret) {
+ /* no rule, or the rule matches */
+ appctx->ctx.stats.flags |= STAT_ADMIN;
+ break;
+ }
+ }
+
+ /* Was the status page requested with a POST ? */
+ if (unlikely(txn->meth == HTTP_METH_POST && txn->req.body_len > 0)) {
+ if (appctx->ctx.stats.flags & STAT_ADMIN) {
+ /* we'll need the request body, possibly after sending 100-continue */
+ if (msg->msg_state < HTTP_MSG_CHUNK_SIZE)
+ req->analysers |= AN_REQ_HTTP_BODY;
+ appctx->st0 = STAT_HTTP_POST;
+ }
+ else {
+ appctx->ctx.stats.st_code = STAT_STATUS_DENY;
+ appctx->st0 = STAT_HTTP_LAST;
+ }
+ }
+ else {
+ /* So it was another method (GET/HEAD) */
+ appctx->st0 = STAT_HTTP_HEAD;
+ }
+
+ s->task->nice = -32; /* small boost for HTTP statistics */
+ return 1;
+}
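The option scans in http_handle_stats() all follow the same pattern: slide a pointer over the URI tail and memcmp() against a fixed token such as ";up", ";csv" or ";norefresh". A generic standalone version of that sliding scan (illustrative only, not part of HAProxy):

```c
#include <string.h>

/* Return 1 if the token <opt> (e.g. ";csv") appears anywhere in the
 * first <len> bytes of <uri>, mimicking the sliding memcmp() scans
 * used by the stats URI parser.
 */
static int uri_has_option(const char *uri, size_t len, const char *opt)
{
	size_t olen = strlen(opt);
	const char *h;

	if (len < olen)
		return 0;
	for (h = uri; h <= uri + len - olen; h++)
		if (memcmp(h, opt, olen) == 0)
			return 1;
	return 0;
}
```

The leading ';' in each token is what keeps the scan from matching inside unrelated path components.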
+
+/* Sets the TOS header in IPv4 and the traffic class header in IPv6 packets
+ * (as per RFC3260 #4 and BCP37 #4.2 and #5.2).
+ */
+void inet_set_tos(int fd, struct sockaddr_storage from, int tos)
+{
+#ifdef IP_TOS
+ if (from.ss_family == AF_INET)
+ setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
+#endif
+#ifdef IPV6_TCLASS
+ if (from.ss_family == AF_INET6) {
+ if (IN6_IS_ADDR_V4MAPPED(&((struct sockaddr_in6 *)&from)->sin6_addr))
+ /* v4-mapped addresses need IP_TOS */
+ setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
+ else
+ setsockopt(fd, IPPROTO_IPV6, IPV6_TCLASS, &tos, sizeof(tos));
+ }
+#endif
+}
+
+int http_transform_header_str(struct stream* s, struct http_msg *msg,
+ const char* name, unsigned int name_len,
+ const char *str, struct my_regex *re,
+ int action)
+{
+ struct hdr_ctx ctx;
+ char *buf = msg->chn->buf->p;
+ struct hdr_idx *idx = &s->txn->hdr_idx;
+ int (*http_find_hdr_func)(const char *name, int len, char *sol,
+ struct hdr_idx *idx, struct hdr_ctx *ctx);
+ struct chunk *output = get_trash_chunk();
+
+ ctx.idx = 0;
+
+ /* Choose the header browsing function. */
+ switch (action) {
+ case ACT_HTTP_REPLACE_VAL:
+ http_find_hdr_func = http_find_header2;
+ break;
+ case ACT_HTTP_REPLACE_HDR:
+ http_find_hdr_func = http_find_full_header2;
+ break;
+ default: /* impossible */
+ return -1;
+ }
+
+ while (http_find_hdr_func(name, name_len, buf, idx, &ctx)) {
+ struct hdr_idx_elem *hdr = idx->v + ctx.idx;
+ int delta;
+ char *val = ctx.line + ctx.val;
+ char* val_end = val + ctx.vlen;
+
+ if (!regex_exec_match2(re, val, val_end-val, MAX_MATCH, pmatch, 0))
+ continue;
+
+ output->len = exp_replace(output->str, output->size, val, str, pmatch);
+ if (output->len == -1)
+ return -1;
+
+ delta = buffer_replace2(msg->chn->buf, val, val_end, output->str, output->len);
+
+ hdr->len += delta;
+ http_msg_move_end(msg, delta);
+
+ /* Adjust the length of the current value of the index. */
+ ctx.vlen += delta;
+ }
+
+ return 0;
+}
+
+static int http_transform_header(struct stream* s, struct http_msg *msg,
+ const char* name, unsigned int name_len,
+ struct list *fmt, struct my_regex *re,
+ int action)
+{
+ struct chunk *replace = get_trash_chunk();
+
+ replace->len = build_logline(s, replace->str, replace->size, fmt);
+ if (replace->len >= replace->size - 1)
+ return -1;
+
+ return http_transform_header_str(s, msg, name, name_len, replace->str, re, action);
+}
+
+/* Executes the http-request rules <rules> for stream <s>, proxy <px> and
+ * transaction <txn>. Returns the verdict of the first rule that prevents
+ * further processing of the request (auth, deny, ...), HTTP_RULE_RES_STOP if
+ * it stopped on an explicit allow, or HTTP_RULE_RES_CONT if all rules were
+ * evaluated. It may set the TX_CLTARPIT flag on txn->flags if it encounters
+ * a tarpit rule.
+ */
+enum rule_result
+http_req_get_intercept_rule(struct proxy *px, struct list *rules, struct stream *s)
+{
+ struct session *sess = strm_sess(s);
+ struct http_txn *txn = s->txn;
+ struct connection *cli_conn;
+ struct act_rule *rule;
+ struct hdr_ctx ctx;
+ const char *auth_realm;
+ int act_flags = 0;
+
+ /* If the current_rule_list matches the executed rule list, we are in
+ * a resume condition. If a resume is needed, it is always in the action
+ * and never in the ACL or converters. In this case, we initialise the
+ * current rule and go to the action execution point.
+ */
+ if (s->current_rule) {
+ rule = s->current_rule;
+ s->current_rule = NULL;
+ if (s->current_rule_list == rules)
+ goto resume_execution;
+ }
+ s->current_rule_list = rules;
+
+ list_for_each_entry(rule, rules, list) {
+
+ /* check optional condition */
+ if (rule->cond) {
+ int ret;
+
+ ret = acl_exec_cond(rule->cond, px, sess, s, SMP_OPT_DIR_REQ|SMP_OPT_FINAL);
+ ret = acl_pass(ret);
+
+ if (rule->cond->pol == ACL_COND_UNLESS)
+ ret = !ret;
+
+ if (!ret) /* condition not matched */
+ continue;
+ }
+
+ act_flags |= ACT_FLAG_FIRST;
+resume_execution:
+ switch (rule->action) {
+ case ACT_ACTION_ALLOW:
+ return HTTP_RULE_RES_STOP;
+
+ case ACT_ACTION_DENY:
+ txn->rule_deny_status = rule->deny_status;
+ return HTTP_RULE_RES_DENY;
+
+ case ACT_HTTP_REQ_TARPIT:
+ txn->flags |= TX_CLTARPIT;
+ txn->rule_deny_status = rule->deny_status;
+ return HTTP_RULE_RES_DENY;
+
+ case ACT_HTTP_REQ_AUTH:
+ /* Auth might be performed on regular http-req rules as well as on stats */
+ auth_realm = rule->arg.auth.realm;
+ if (!auth_realm) {
+ if (px->uri_auth && rules == &px->uri_auth->http_req_rules)
+ auth_realm = STATS_DEFAULT_REALM;
+ else
+ auth_realm = px->id;
+ }
+ /* send 401/407 depending on whether we use a proxy or not. We still
+ * count one error, because normal browsing won't significantly
+ * increase the counter but brute force attempts will.
+ */
+ chunk_printf(&trash, (txn->flags & TX_USE_PX_CONN) ? HTTP_407_fmt : HTTP_401_fmt, auth_realm);
+ txn->status = (txn->flags & TX_USE_PX_CONN) ? 407 : 401;
+ stream_int_retnclose(&s->si[0], &trash);
+ stream_inc_http_err_ctr(s);
+ return HTTP_RULE_RES_ABRT;
+
+ case ACT_HTTP_REDIR:
+ if (!http_apply_redirect_rule(rule->arg.redir, s, txn))
+ return HTTP_RULE_RES_BADREQ;
+ return HTTP_RULE_RES_DONE;
+
+ case ACT_HTTP_SET_NICE:
+ s->task->nice = rule->arg.nice;
+ break;
+
+ case ACT_HTTP_SET_TOS:
+ if ((cli_conn = objt_conn(sess->origin)) && conn_ctrl_ready(cli_conn))
+ inet_set_tos(cli_conn->t.sock.fd, cli_conn->addr.from, rule->arg.tos);
+ break;
+
+ case ACT_HTTP_SET_MARK:
+#ifdef SO_MARK
+ if ((cli_conn = objt_conn(sess->origin)) && conn_ctrl_ready(cli_conn))
+ setsockopt(cli_conn->t.sock.fd, SOL_SOCKET, SO_MARK, &rule->arg.mark, sizeof(rule->arg.mark));
+#endif
+ break;
+
+ case ACT_HTTP_SET_LOGL:
+ s->logs.level = rule->arg.loglevel;
+ break;
+
+ case ACT_HTTP_REPLACE_HDR:
+ case ACT_HTTP_REPLACE_VAL:
+ if (http_transform_header(s, &txn->req, rule->arg.hdr_add.name,
+ rule->arg.hdr_add.name_len,
+ &rule->arg.hdr_add.fmt,
+ &rule->arg.hdr_add.re, rule->action))
+ return HTTP_RULE_RES_BADREQ;
+ break;
+
+ case ACT_HTTP_DEL_HDR:
+ ctx.idx = 0;
+ /* remove all occurrences of the header */
+ while (http_find_header2(rule->arg.hdr_add.name, rule->arg.hdr_add.name_len,
+ txn->req.chn->buf->p, &txn->hdr_idx, &ctx)) {
+ http_remove_header2(&txn->req, &txn->hdr_idx, &ctx);
+ }
+ break;
+
+ case ACT_HTTP_SET_HDR:
+ case ACT_HTTP_ADD_HDR:
+ memcpy(trash.str, rule->arg.hdr_add.name, rule->arg.hdr_add.name_len);
+ trash.len = rule->arg.hdr_add.name_len;
+ trash.str[trash.len++] = ':';
+ trash.str[trash.len++] = ' ';
+ trash.len += build_logline(s, trash.str + trash.len, trash.size - trash.len, &rule->arg.hdr_add.fmt);
+
+ if (rule->action == ACT_HTTP_SET_HDR) {
+ /* remove all occurrences of the header */
+ ctx.idx = 0;
+ while (http_find_header2(rule->arg.hdr_add.name, rule->arg.hdr_add.name_len,
+ txn->req.chn->buf->p, &txn->hdr_idx, &ctx)) {
+ http_remove_header2(&txn->req, &txn->hdr_idx, &ctx);
+ }
+ }
+
+ http_header_add_tail2(&txn->req, &txn->hdr_idx, trash.str, trash.len);
+ break;
+
+ case ACT_HTTP_DEL_ACL:
+ case ACT_HTTP_DEL_MAP: {
+ struct pat_ref *ref;
+ char *key;
+ int len;
+
+ /* collect reference */
+ ref = pat_ref_lookup(rule->arg.map.ref);
+ if (!ref)
+ continue;
+
+ /* collect key */
+ len = build_logline(s, trash.str, trash.size, &rule->arg.map.key);
+ key = trash.str;
+ key[len] = '\0';
+
+ /* perform update */
+ /* returned code: 1=ok, 0=ko */
+ pat_ref_delete(ref, key);
+
+ break;
+ }
+
+ case ACT_HTTP_ADD_ACL: {
+ struct pat_ref *ref;
+ char *key;
+ struct chunk *trash_key;
+ int len;
+
+ trash_key = get_trash_chunk();
+
+ /* collect reference */
+ ref = pat_ref_lookup(rule->arg.map.ref);
+ if (!ref)
+ continue;
+
+ /* collect key */
+ len = build_logline(s, trash_key->str, trash_key->size, &rule->arg.map.key);
+ key = trash_key->str;
+ key[len] = '\0';
+
+ /* perform update */
+ /* add entry only if it does not already exist */
+ if (pat_ref_find_elt(ref, key) == NULL)
+ pat_ref_add(ref, key, NULL, NULL);
+
+ break;
+ }
+
+ case ACT_HTTP_SET_MAP: {
+ struct pat_ref *ref;
+ char *key, *value;
+ struct chunk *trash_key, *trash_value;
+ int len;
+
+ trash_key = get_trash_chunk();
+ trash_value = get_trash_chunk();
+
+ /* collect reference */
+ ref = pat_ref_lookup(rule->arg.map.ref);
+ if (!ref)
+ continue;
+
+ /* collect key */
+ len = build_logline(s, trash_key->str, trash_key->size, &rule->arg.map.key);
+ key = trash_key->str;
+ key[len] = '\0';
+
+ /* collect value */
+ len = build_logline(s, trash_value->str, trash_value->size, &rule->arg.map.value);
+ value = trash_value->str;
+ value[len] = '\0';
+
+ /* perform update */
+ if (pat_ref_find_elt(ref, key) != NULL)
+ /* update entry if it exists */
+ pat_ref_set(ref, key, value, NULL);
+ else
+ /* insert a new entry */
+ pat_ref_add(ref, key, value, NULL);
+
+ break;
+ }
+
+ case ACT_CUSTOM:
+ if ((px->options & PR_O_ABRT_CLOSE) && (s->req.flags & (CF_SHUTR|CF_READ_NULL|CF_READ_ERROR)))
+ act_flags |= ACT_FLAG_FINAL;
+
+ switch (rule->action_ptr(rule, px, s->sess, s, act_flags)) {
+ case ACT_RET_ERR:
+ case ACT_RET_CONT:
+ break;
+ case ACT_RET_STOP:
+ return HTTP_RULE_RES_DONE;
+ case ACT_RET_YIELD:
+ s->current_rule = rule;
+ return HTTP_RULE_RES_YIELD;
+ }
+ break;
+
+ case ACT_ACTION_TRK_SC0 ... ACT_ACTION_TRK_SCMAX:
+ /* Note: only the first valid tracking parameter of each
+ * type applies.
+ */
+
+ if (stkctr_entry(&s->stkctr[http_req_trk_idx(rule->action)]) == NULL) {
+ struct stktable *t;
+ struct stksess *ts;
+ struct stktable_key *key;
+ void *ptr;
+
+ t = rule->arg.trk_ctr.table.t;
+ key = stktable_fetch_key(t, s->be, sess, s, SMP_OPT_DIR_REQ | SMP_OPT_FINAL, rule->arg.trk_ctr.expr, NULL);
+
+ if (key && (ts = stktable_get_entry(t, key))) {
+ stream_track_stkctr(&s->stkctr[http_req_trk_idx(rule->action)], t, ts);
+
+ /* let's count a new HTTP request as it's the first time we do it */
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_HTTP_REQ_CNT);
+ if (ptr)
+ stktable_data_cast(ptr, http_req_cnt)++;
+
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_HTTP_REQ_RATE);
+ if (ptr)
+ update_freq_ctr_period(&stktable_data_cast(ptr, http_req_rate),
+ t->data_arg[STKTABLE_DT_HTTP_REQ_RATE].u, 1);
+
+ stkctr_set_flags(&s->stkctr[http_req_trk_idx(rule->action)], STKCTR_TRACK_CONTENT);
+ if (sess->fe != s->be)
+ stkctr_set_flags(&s->stkctr[http_req_trk_idx(rule->action)], STKCTR_TRACK_BACKEND);
+ }
+ }
+ break;
+
+ case ACT_HTTP_REQ_SET_SRC:
+ if ((cli_conn = objt_conn(sess->origin)) && conn_ctrl_ready(cli_conn)) {
+ struct sample *smp;
+
+ smp = sample_fetch_as_type(px, sess, s, SMP_OPT_DIR_REQ|SMP_OPT_FINAL, rule->arg.expr, SMP_T_ADDR);
+
+ if (smp) {
+ if (smp->data.type == SMP_T_IPV4) {
+ ((struct sockaddr_in *)&cli_conn->addr.from)->sin_family = AF_INET;
+ ((struct sockaddr_in *)&cli_conn->addr.from)->sin_addr.s_addr = smp->data.u.ipv4.s_addr;
+ ((struct sockaddr_in *)&cli_conn->addr.from)->sin_port = 0;
+ } else if (smp->data.type == SMP_T_IPV6) {
+ ((struct sockaddr_in6 *)&cli_conn->addr.from)->sin6_family = AF_INET6;
+ memcpy(&((struct sockaddr_in6 *)&cli_conn->addr.from)->sin6_addr, &smp->data.u.ipv6, sizeof(struct in6_addr));
+ ((struct sockaddr_in6 *)&cli_conn->addr.from)->sin6_port = 0;
+ }
+ }
+ }
+ break;
+
+ /* other flags exist, but normally they are never matched. */
+ default:
+ break;
+ }
+ }
+
+ /* we reached the end of the rules, nothing to report */
+ return HTTP_RULE_RES_CONT;
+}
+
+
+/* Executes the http-response rules <rules> for stream <s> and proxy <px>. It
+ * returns one of 5 possible statuses: HTTP_RULE_RES_CONT, HTTP_RULE_RES_STOP,
+ * HTTP_RULE_RES_DONE, HTTP_RULE_RES_YIELD, or HTTP_RULE_RES_BADREQ. If *CONT
+ * is returned, the process can continue the evaluation of the next rule list.
+ * If *STOP or *DONE is returned, the process must stop the evaluation. If
+ * *BADREQ is returned, it means the operation could not be processed and a
+ * server error must be returned. It may set the TX_SVDENY flag on txn->flags
+ * if it encounters a deny rule. If *YIELD is returned, the caller must call
+ * the function again with the same context.
+ */
+static enum rule_result
+http_res_get_intercept_rule(struct proxy *px, struct list *rules, struct stream *s)
+{
+ struct session *sess = strm_sess(s);
+ struct http_txn *txn = s->txn;
+ struct connection *cli_conn;
+ struct act_rule *rule;
+ struct hdr_ctx ctx;
+ int act_flags = 0;
+
+ /* If the current_rule_list matches the executed rule list, we are in
+ * a resume condition. If a resume is needed, it is always in the action
+ * and never in the ACL or converters. In this case, we initialise the
+ * current rule and go to the action execution point.
+ */
+ if (s->current_rule) {
+ rule = s->current_rule;
+ s->current_rule = NULL;
+ if (s->current_rule_list == rules)
+ goto resume_execution;
+ }
+ s->current_rule_list = rules;
+
+ list_for_each_entry(rule, rules, list) {
+
+ /* check optional condition */
+ if (rule->cond) {
+ int ret;
+
+ ret = acl_exec_cond(rule->cond, px, sess, s, SMP_OPT_DIR_RES|SMP_OPT_FINAL);
+ ret = acl_pass(ret);
+
+ if (rule->cond->pol == ACL_COND_UNLESS)
+ ret = !ret;
+
+ if (!ret) /* condition not matched */
+ continue;
+ }
+
+ act_flags |= ACT_FLAG_FIRST;
+resume_execution:
+ switch (rule->action) {
+ case ACT_ACTION_ALLOW:
+ return HTTP_RULE_RES_STOP; /* "allow" rules are OK */
+
+ case ACT_ACTION_DENY:
+ txn->flags |= TX_SVDENY;
+ return HTTP_RULE_RES_STOP;
+
+ case ACT_HTTP_SET_NICE:
+ s->task->nice = rule->arg.nice;
+ break;
+
+ case ACT_HTTP_SET_TOS:
+ if ((cli_conn = objt_conn(sess->origin)) && conn_ctrl_ready(cli_conn))
+ inet_set_tos(cli_conn->t.sock.fd, cli_conn->addr.from, rule->arg.tos);
+ break;
+
+ case ACT_HTTP_SET_MARK:
+#ifdef SO_MARK
+ if ((cli_conn = objt_conn(sess->origin)) && conn_ctrl_ready(cli_conn))
+ setsockopt(cli_conn->t.sock.fd, SOL_SOCKET, SO_MARK, &rule->arg.mark, sizeof(rule->arg.mark));
+#endif
+ break;
+
+ case ACT_HTTP_SET_LOGL:
+ s->logs.level = rule->arg.loglevel;
+ break;
+
+ case ACT_HTTP_REPLACE_HDR:
+ case ACT_HTTP_REPLACE_VAL:
+ if (http_transform_header(s, &txn->rsp, rule->arg.hdr_add.name,
+ rule->arg.hdr_add.name_len,
+ &rule->arg.hdr_add.fmt,
+ &rule->arg.hdr_add.re, rule->action))
+ return HTTP_RULE_RES_STOP; /* note: we should report an error here */
+ break;
+
+ case ACT_HTTP_DEL_HDR:
+ ctx.idx = 0;
+ /* remove all occurrences of the header */
+ while (http_find_header2(rule->arg.hdr_add.name, rule->arg.hdr_add.name_len,
+ txn->rsp.chn->buf->p, &txn->hdr_idx, &ctx)) {
+ http_remove_header2(&txn->rsp, &txn->hdr_idx, &ctx);
+ }
+ break;
+
+ case ACT_HTTP_SET_HDR:
+ case ACT_HTTP_ADD_HDR:
+ memcpy(trash.str, rule->arg.hdr_add.name, rule->arg.hdr_add.name_len);
+ trash.len = rule->arg.hdr_add.name_len;
+ trash.str[trash.len++] = ':';
+ trash.str[trash.len++] = ' ';
+ trash.len += build_logline(s, trash.str + trash.len, trash.size - trash.len, &rule->arg.hdr_add.fmt);
+
+ if (rule->action == ACT_HTTP_SET_HDR) {
+ /* remove all occurrences of the header */
+ ctx.idx = 0;
+ while (http_find_header2(rule->arg.hdr_add.name, rule->arg.hdr_add.name_len,
+ txn->rsp.chn->buf->p, &txn->hdr_idx, &ctx)) {
+ http_remove_header2(&txn->rsp, &txn->hdr_idx, &ctx);
+ }
+ }
+ http_header_add_tail2(&txn->rsp, &txn->hdr_idx, trash.str, trash.len);
+ break;
+
+ case ACT_HTTP_DEL_ACL:
+ case ACT_HTTP_DEL_MAP: {
+ struct pat_ref *ref;
+ char *key;
+ int len;
+
+ /* collect reference */
+ ref = pat_ref_lookup(rule->arg.map.ref);
+ if (!ref)
+ continue;
+
+ /* collect key */
+ len = build_logline(s, trash.str, trash.size, &rule->arg.map.key);
+ key = trash.str;
+ key[len] = '\0';
+
+ /* perform update */
+ /* returned code: 1=ok, 0=ko */
+ pat_ref_delete(ref, key);
+
+ break;
+ }
+
+ case ACT_HTTP_ADD_ACL: {
+ struct pat_ref *ref;
+ char *key;
+ struct chunk *trash_key;
+ int len;
+
+ trash_key = get_trash_chunk();
+
+ /* collect reference */
+ ref = pat_ref_lookup(rule->arg.map.ref);
+ if (!ref)
+ continue;
+
+ /* collect key */
+ len = build_logline(s, trash_key->str, trash_key->size, &rule->arg.map.key);
+ key = trash_key->str;
+ key[len] = '\0';
+
+ /* perform update */
+ /* check if the entry already exists */
+ if (pat_ref_find_elt(ref, key) == NULL)
+ pat_ref_add(ref, key, NULL, NULL);
+
+ break;
+ }
+
+ case ACT_HTTP_SET_MAP: {
+ struct pat_ref *ref;
+ char *key, *value;
+ struct chunk *trash_key, *trash_value;
+ int len;
+
+ trash_key = get_trash_chunk();
+ trash_value = get_trash_chunk();
+
+ /* collect reference */
+ ref = pat_ref_lookup(rule->arg.map.ref);
+ if (!ref)
+ continue;
+
+ /* collect key */
+ len = build_logline(s, trash_key->str, trash_key->size, &rule->arg.map.key);
+ key = trash_key->str;
+ key[len] = '\0';
+
+ /* collect value */
+ len = build_logline(s, trash_value->str, trash_value->size, &rule->arg.map.value);
+ value = trash_value->str;
+ value[len] = '\0';
+
+ /* perform update */
+ if (pat_ref_find_elt(ref, key) != NULL)
+ /* update entry if it exists */
+ pat_ref_set(ref, key, value, NULL);
+ else
+ /* insert a new entry */
+ pat_ref_add(ref, key, value, NULL);
+
+ break;
+ }
+
+ case ACT_HTTP_REDIR:
+ if (!http_apply_redirect_rule(rule->arg.redir, s, txn))
+ return HTTP_RULE_RES_BADREQ;
+ return HTTP_RULE_RES_DONE;
+
+ case ACT_CUSTOM:
+ if ((px->options & PR_O_ABRT_CLOSE) && (s->req.flags & (CF_SHUTR|CF_READ_NULL|CF_READ_ERROR)))
+ act_flags |= ACT_FLAG_FINAL;
+
+ switch (rule->action_ptr(rule, px, s->sess, s, act_flags)) {
+ case ACT_RET_ERR:
+ case ACT_RET_CONT:
+ break;
+ case ACT_RET_STOP:
+ return HTTP_RULE_RES_STOP;
+ case ACT_RET_YIELD:
+ s->current_rule = rule;
+ return HTTP_RULE_RES_YIELD;
+ }
+ break;
+
+ /* other flags exist, but normally they are never matched. */
+ default:
+ break;
+ }
+ }
+
+ /* we reached the end of the rules, nothing to report */
+ return HTTP_RULE_RES_CONT;
+}
+
+
+/* Perform an HTTP redirect based on the information in <rule>. The function
+ * returns non-zero on success, or zero in case of an irrecoverable error such
+ * as too large a request to build a valid response.
+ */
+static int http_apply_redirect_rule(struct redirect_rule *rule, struct stream *s, struct http_txn *txn)
+{
+ struct http_msg *req = &txn->req;
+ struct http_msg *res = &txn->rsp;
+ const char *msg_fmt;
+ const char *location;
+
+ /* build redirect message */
+ switch(rule->code) {
+ case 308:
+ msg_fmt = HTTP_308;
+ break;
+ case 307:
+ msg_fmt = HTTP_307;
+ break;
+ case 303:
+ msg_fmt = HTTP_303;
+ break;
+ case 301:
+ msg_fmt = HTTP_301;
+ break;
+ case 302:
+ default:
+ msg_fmt = HTTP_302;
+ break;
+ }
+
+ if (unlikely(!chunk_strcpy(&trash, msg_fmt)))
+ return 0;
+
+ location = trash.str + trash.len;
+
+ switch(rule->type) {
+ case REDIRECT_TYPE_SCHEME: {
+ const char *path;
+ const char *host;
+ struct hdr_ctx ctx;
+ int pathlen;
+ int hostlen;
+
+ host = "";
+ hostlen = 0;
+ ctx.idx = 0;
+ if (http_find_header2("Host", 4, req->chn->buf->p, &txn->hdr_idx, &ctx)) {
+ host = ctx.line + ctx.val;
+ hostlen = ctx.vlen;
+ }
+
+ path = http_get_path(txn);
+ /* build message using path */
+ if (path) {
+ pathlen = req->sl.rq.u_l + (req->chn->buf->p + req->sl.rq.u) - path;
+ if (rule->flags & REDIRECT_FLAG_DROP_QS) {
+ int qs = 0;
+ while (qs < pathlen) {
+ if (path[qs] == '?') {
+ pathlen = qs;
+ break;
+ }
+ qs++;
+ }
+ }
+ } else {
+ path = "/";
+ pathlen = 1;
+ }
+
+ if (rule->rdr_str) { /* this is an old "redirect" rule */
+ /* check if we can add scheme + "://" + host + path */
+ if (trash.len + rule->rdr_len + 3 + hostlen + pathlen > trash.size - 4)
+ return 0;
+
+ /* add scheme */
+ memcpy(trash.str + trash.len, rule->rdr_str, rule->rdr_len);
+ trash.len += rule->rdr_len;
+ }
+ else {
+ /* add scheme with executing log format */
+ trash.len += build_logline(s, trash.str + trash.len, trash.size - trash.len, &rule->rdr_fmt);
+
+ /* check if we can add scheme + "://" + host + path */
+ if (trash.len + 3 + hostlen + pathlen > trash.size - 4)
+ return 0;
+ }
+ /* add "://" */
+ memcpy(trash.str + trash.len, "://", 3);
+ trash.len += 3;
+
+ /* add host */
+ memcpy(trash.str + trash.len, host, hostlen);
+ trash.len += hostlen;
+
+ /* add path */
+ memcpy(trash.str + trash.len, path, pathlen);
+ trash.len += pathlen;
+
+ /* append a slash at the end of the location if needed and missing */
+ if (trash.len && trash.str[trash.len - 1] != '/' &&
+ (rule->flags & REDIRECT_FLAG_APPEND_SLASH)) {
+ if (trash.len > trash.size - 5)
+ return 0;
+ trash.str[trash.len] = '/';
+ trash.len++;
+ }
+
+ break;
+ }
+ case REDIRECT_TYPE_PREFIX: {
+ const char *path;
+ int pathlen;
+
+ path = http_get_path(txn);
+ /* build message using path */
+ if (path) {
+ pathlen = req->sl.rq.u_l + (req->chn->buf->p + req->sl.rq.u) - path;
+ if (rule->flags & REDIRECT_FLAG_DROP_QS) {
+ int qs = 0;
+ while (qs < pathlen) {
+ if (path[qs] == '?') {
+ pathlen = qs;
+ break;
+ }
+ qs++;
+ }
+ }
+ } else {
+ path = "/";
+ pathlen = 1;
+ }
+
+ if (rule->rdr_str) { /* this is an old "redirect" rule */
+ if (trash.len + rule->rdr_len + pathlen > trash.size - 4)
+ return 0;
+
+ /* add prefix. Note that if prefix == "/", we don't want to
+ * add anything, otherwise it makes it hard for the user to
+ * configure a self-redirection.
+ */
+ if (rule->rdr_len != 1 || *rule->rdr_str != '/') {
+ memcpy(trash.str + trash.len, rule->rdr_str, rule->rdr_len);
+ trash.len += rule->rdr_len;
+ }
+ }
+ else {
+ /* add prefix with executing log format */
+ trash.len += build_logline(s, trash.str + trash.len, trash.size - trash.len, &rule->rdr_fmt);
+
+ /* Check length */
+ if (trash.len + pathlen > trash.size - 4)
+ return 0;
+ }
+
+ /* add path */
+ memcpy(trash.str + trash.len, path, pathlen);
+ trash.len += pathlen;
+
+ /* append a slash at the end of the location if needed and missing */
+ if (trash.len && trash.str[trash.len - 1] != '/' &&
+ (rule->flags & REDIRECT_FLAG_APPEND_SLASH)) {
+ if (trash.len > trash.size - 5)
+ return 0;
+ trash.str[trash.len] = '/';
+ trash.len++;
+ }
+
+ break;
+ }
+ case REDIRECT_TYPE_LOCATION:
+ default:
+ if (rule->rdr_str) { /* this is an old "redirect" rule */
+ if (trash.len + rule->rdr_len > trash.size - 4)
+ return 0;
+
+ /* add location */
+ memcpy(trash.str + trash.len, rule->rdr_str, rule->rdr_len);
+ trash.len += rule->rdr_len;
+ }
+ else {
+ /* add location with executing log format */
+ trash.len += build_logline(s, trash.str + trash.len, trash.size - trash.len, &rule->rdr_fmt);
+
+ /* Check left length */
+ if (trash.len > trash.size - 4)
+ return 0;
+ }
+ break;
+ }
+
+ if (rule->cookie_len) {
+ memcpy(trash.str + trash.len, "\r\nSet-Cookie: ", 14);
+ trash.len += 14;
+ memcpy(trash.str + trash.len, rule->cookie_str, rule->cookie_len);
+ trash.len += rule->cookie_len;
+ memcpy(trash.str + trash.len, "\r\n", 2);
+ trash.len += 2;
+ }
+
+ /* add end of headers and the keep-alive/close status.
+ * We may choose to set keep-alive if the Location begins
+ * with a slash, because the client will come back to the
+ * same server.
+ */
+ txn->status = rule->code;
+ /* let's log the request time */
+ s->logs.tv_request = now;
+
+ if (*location == '/' &&
+ (req->flags & HTTP_MSGF_XFER_LEN) &&
+ ((!(req->flags & HTTP_MSGF_TE_CHNK) && !req->body_len) || (req->msg_state == HTTP_MSG_DONE)) &&
+ ((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_SCL ||
+ (txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_KAL)) {
+ /* keep-alive possible */
+ if (!(req->flags & HTTP_MSGF_VER_11)) {
+ if (unlikely(txn->flags & TX_USE_PX_CONN)) {
+ memcpy(trash.str + trash.len, "\r\nProxy-Connection: keep-alive", 30);
+ trash.len += 30;
+ } else {
+ memcpy(trash.str + trash.len, "\r\nConnection: keep-alive", 24);
+ trash.len += 24;
+ }
+ }
+ memcpy(trash.str + trash.len, "\r\n\r\n", 4);
+ trash.len += 4;
+ bo_inject(res->chn, trash.str, trash.len);
+ /* "eat" the request */
+ bi_fast_delete(req->chn->buf, req->sov);
+ req->next -= req->sov;
+ req->sov = 0;
+ s->req.analysers = AN_REQ_HTTP_XFER_BODY;
+ s->res.analysers = AN_RES_HTTP_XFER_BODY;
+ req->msg_state = HTTP_MSG_CLOSED;
+ res->msg_state = HTTP_MSG_DONE;
+ /* Trim any possible response */
+ res->chn->buf->i = 0;
+ res->next = res->sov = 0;
+ } else {
+ /* keep-alive not possible */
+ if (unlikely(txn->flags & TX_USE_PX_CONN)) {
+ memcpy(trash.str + trash.len, "\r\nProxy-Connection: close\r\n\r\n", 29);
+ trash.len += 29;
+ } else {
+ memcpy(trash.str + trash.len, "\r\nConnection: close\r\n\r\n", 23);
+ trash.len += 23;
+ }
+ stream_int_retnclose(&s->si[0], &trash);
+ req->chn->analysers = 0;
+ }
+
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_LOCAL;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_R;
+
+ return 1;
+}
+
+/* This stream analyser runs all HTTP request processing which is common to
+ * frontends and backends, which means blocking ACLs, filters, connection-close,
+ * reqadd, stats and redirects. This is performed for the designated proxy.
+ * It returns 1 if the processing can continue on next analysers, or zero if it
+ * either needs more data or wants to immediately abort the request (eg: deny,
+ * error, ...).
+ */
+int http_process_req_common(struct stream *s, struct channel *req, int an_bit, struct proxy *px)
+{
+ struct session *sess = s->sess;
+ struct http_txn *txn = s->txn;
+ struct http_msg *msg = &txn->req;
+ struct redirect_rule *rule;
+ struct cond_wordlist *wl;
+ enum rule_result verdict;
+
+ if (unlikely(msg->msg_state < HTTP_MSG_BODY)) {
+ /* we need more data */
+ goto return_prx_yield;
+ }
+
+ DPRINTF(stderr,"[%u] %s: stream=%p b=%p, exp(r,w)=%u,%u bf=%08x bh=%d analysers=%02x\n",
+ now_ms, __FUNCTION__,
+ s,
+ req,
+ req->rex, req->wex,
+ req->flags,
+ req->buf->i,
+ req->analysers);
+
+ /* just in case we have some per-backend tracking */
+ stream_inc_be_http_req_ctr(s);
+
+ /* evaluate http-request rules */
+ if (!LIST_ISEMPTY(&px->http_req_rules)) {
+ verdict = http_req_get_intercept_rule(px, &px->http_req_rules, s);
+
+ switch (verdict) {
+ case HTTP_RULE_RES_YIELD: /* some data is missing, call the function again later. */
+ goto return_prx_yield;
+
+ case HTTP_RULE_RES_CONT:
+ case HTTP_RULE_RES_STOP: /* nothing to do */
+ break;
+
+ case HTTP_RULE_RES_DENY: /* deny or tarpit */
+ if (txn->flags & TX_CLTARPIT)
+ goto tarpit;
+ goto deny;
+
+ case HTTP_RULE_RES_ABRT: /* abort request, response already sent. Eg: auth */
+ goto return_prx_cond;
+
+ case HTTP_RULE_RES_DONE: /* OK, but terminate request processing (eg: redirect) */
+ goto done;
+
+ case HTTP_RULE_RES_BADREQ: /* failed with a bad request */
+ goto return_bad_req;
+ }
+ }
+
+ /* OK at this stage, we know that the request was accepted according to
+ * the http-request rules, we can check for the stats. Note that the
+ * URI is detected *before* the req* rules in order not to be affected
+ * by a possible reqrep, while they are processed *after* so that a
+ * reqdeny can still block them. This clearly needs to change in 1.6!
+ */
+ if (stats_check_uri(&s->si[1], txn, px)) {
+ s->target = &http_stats_applet.obj_type;
+ if (unlikely(!stream_int_register_handler(&s->si[1], objt_applet(s->target)))) {
+ txn->status = 500;
+ s->logs.tv_request = now;
+ stream_int_retnclose(&s->si[0], http_error_message(s, HTTP_ERR_500));
+
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_RESOURCE;
+ goto return_prx_cond;
+ }
+
+ /* parse the whole stats request and extract the relevant information */
+ http_handle_stats(s, req);
+ verdict = http_req_get_intercept_rule(px, &px->uri_auth->http_req_rules, s);
+ /* only a subset of the actions is implemented here: deny, allow, auth */
+
+ if (verdict == HTTP_RULE_RES_DENY) /* stats http-request deny */
+ goto deny;
+
+ if (verdict == HTTP_RULE_RES_ABRT) /* stats auth / stats http-request auth */
+ goto return_prx_cond;
+ }
+
+ /* evaluate the req* rules except reqadd */
+ if (px->req_exp != NULL) {
+ if (apply_filters_to_request(s, req, px) < 0)
+ goto return_bad_req;
+
+ if (txn->flags & TX_CLDENY)
+ goto deny;
+
+ if (txn->flags & TX_CLTARPIT)
+ goto tarpit;
+ }
+
+ /* add request headers from the rule sets in the same order */
+ list_for_each_entry(wl, &px->req_add, list) {
+ if (wl->cond) {
+ int ret = acl_exec_cond(wl->cond, px, sess, s, SMP_OPT_DIR_REQ|SMP_OPT_FINAL);
+ ret = acl_pass(ret);
+ if (((struct acl_cond *)wl->cond)->pol == ACL_COND_UNLESS)
+ ret = !ret;
+ if (!ret)
+ continue;
+ }
+
+ if (unlikely(http_header_add_tail(&txn->req, &txn->hdr_idx, wl->s) < 0))
+ goto return_bad_req;
+ }
+
+
+ /* Proceed with the stats now. */
+ if (unlikely(objt_applet(s->target) == &http_stats_applet)) {
+ /* process the stats request now */
+ if (sess->fe == s->be) /* report it if the request was intercepted by the frontend */
+ sess->fe->fe_counters.intercepted_req++;
+
+ if (!(s->flags & SF_ERR_MASK)) // this is not really an error but it is
+ s->flags |= SF_ERR_LOCAL; // to mark that it comes from the proxy
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_R;
+
+ /* we may want to compress the stats page */
+ if (sess->fe->comp || s->be->comp)
+ select_compression_request_header(s, req->buf);
+
+ /* enable the minimally required analyzers to handle keep-alive and compression on the HTTP response */
+ req->analysers = (req->analysers & AN_REQ_HTTP_BODY) | AN_REQ_HTTP_XFER_BODY;
+ goto done;
+ }
+
+ /* check whether we have some ACLs set to redirect this request */
+ list_for_each_entry(rule, &px->redirect_rules, list) {
+ if (rule->cond) {
+ int ret;
+
+ ret = acl_exec_cond(rule->cond, px, sess, s, SMP_OPT_DIR_REQ|SMP_OPT_FINAL);
+ ret = acl_pass(ret);
+ if (rule->cond->pol == ACL_COND_UNLESS)
+ ret = !ret;
+ if (!ret)
+ continue;
+ }
+ if (!http_apply_redirect_rule(rule, s, txn))
+ goto return_bad_req;
+ goto done;
+ }
+
+ /* POST requests may be accompanied with an "Expect: 100-Continue" header.
+ * If this happens, then the data will not come immediately, so we must
+ * send everything we have without waiting. Note that given the small gain
+ * in waiting for the body of the request, it's easier to simply set the
+ * CF_SEND_DONTWAIT flag every time. It's a one-shot flag so it will remove
+ * itself once used.
+ */
+ req->flags |= CF_SEND_DONTWAIT;
+
+ done: /* done with this analyser, continue with next ones that the calling
+ * points will have set, if any.
+ */
+ req->analyse_exp = TICK_ETERNITY;
+ done_without_exp: /* done with this analyser, but don't reset the analyse_exp. */
+ req->analysers &= ~an_bit;
+ return 1;
+
+ tarpit:
+ /* When a connection is tarpitted, we use the tarpit timeout,
+ * which may be the same as the connect timeout if unspecified.
+ * If unset, then set it to zero because we really want it to
+ * eventually expire. We build the tarpit as an analyser.
+ */
+ channel_erase(&s->req);
+
+ /* wipe the request out so that we can drop the connection early
+ * if the client closes first.
+ */
+ channel_dont_connect(req);
+ req->analysers = 0; /* remove switching rules etc... */
+ req->analysers |= AN_REQ_HTTP_TARPIT;
+ req->analyse_exp = tick_add_ifset(now_ms, s->be->timeout.tarpit);
+ if (!req->analyse_exp)
+ req->analyse_exp = tick_add(now_ms, 0);
+ stream_inc_http_err_ctr(s);
+ sess->fe->fe_counters.denied_req++;
+ if (sess->fe != s->be)
+ s->be->be_counters.denied_req++;
+ if (sess->listener->counters)
+ sess->listener->counters->denied_req++;
+ goto done_without_exp;
+
+ deny: /* this request was blocked (denied) */
+ txn->flags |= TX_CLDENY;
+ txn->status = http_err_codes[txn->rule_deny_status];
+ s->logs.tv_request = now;
+ stream_int_retnclose(&s->si[0], http_error_message(s, txn->rule_deny_status));
+ stream_inc_http_err_ctr(s);
+ sess->fe->fe_counters.denied_req++;
+ if (sess->fe != s->be)
+ s->be->be_counters.denied_req++;
+ if (sess->listener->counters)
+ sess->listener->counters->denied_req++;
+ goto return_prx_cond;
+
+ return_bad_req:
+ /* We centralize bad requests processing here */
+ if (unlikely(msg->msg_state == HTTP_MSG_ERROR) || msg->err_pos >= 0) {
+ /* we detected a parsing error. We want to archive this request
+ * in the dedicated proxy area for later troubleshooting.
+ */
+ http_capture_bad_message(&sess->fe->invalid_req, s, msg, msg->msg_state, sess->fe);
+ }
+
+ txn->req.msg_state = HTTP_MSG_ERROR;
+ txn->status = 400;
+ stream_int_retnclose(&s->si[0], http_error_message(s, HTTP_ERR_400));
+
+ sess->fe->fe_counters.failed_req++;
+ if (sess->listener->counters)
+ sess->listener->counters->failed_req++;
+
+ return_prx_cond:
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_PRXCOND;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_R;
+
+ req->analysers = 0;
+ req->analyse_exp = TICK_ETERNITY;
+ return 0;
+
+ return_prx_yield:
+ channel_dont_connect(req);
+ return 0;
+}
+
+/* This function performs all the processing enabled for the current request.
+ * It returns 1 if the processing can continue on next analysers, or zero if it
+ * needs more data, encounters an error, or wants to immediately abort the
+ * request. It relies on buffers flags, and updates s->req.analysers.
+ */
+int http_process_request(struct stream *s, struct channel *req, int an_bit)
+{
+ struct session *sess = s->sess;
+ struct http_txn *txn = s->txn;
+ struct http_msg *msg = &txn->req;
+ struct connection *cli_conn = objt_conn(strm_sess(s)->origin);
+
+ if (unlikely(msg->msg_state < HTTP_MSG_BODY)) {
+ /* we need more data */
+ channel_dont_connect(req);
+ return 0;
+ }
+
+ DPRINTF(stderr,"[%u] %s: stream=%p b=%p, exp(r,w)=%u,%u bf=%08x bh=%d analysers=%02x\n",
+ now_ms, __FUNCTION__,
+ s,
+ req,
+ req->rex, req->wex,
+ req->flags,
+ req->buf->i,
+ req->analysers);
+
+ if (sess->fe->comp || s->be->comp)
+ select_compression_request_header(s, req->buf);
+
+ /*
+ * Right now, we know that we have processed the entire headers
+ * and that unwanted requests have been filtered out. We can do
+ * whatever we want with the remaining request. Also, now we
+ * may have separate values for ->fe, ->be.
+ */
+
+ /*
+ * If HTTP PROXY is set, we simply get the remote server address by
+ * parsing the incoming request. Note that this requires that a connection is
+ * allocated on the server side.
+ */
+ if ((s->be->options & PR_O_HTTP_PROXY) && !(s->flags & SF_ADDR_SET)) {
+ struct connection *conn;
+ char *path;
+
+ /* Note that for now we don't reuse existing proxy connections */
+ if (unlikely((conn = si_alloc_conn(&s->si[1])) == NULL)) {
+ txn->req.msg_state = HTTP_MSG_ERROR;
+ txn->status = 500;
+ req->analysers = 0;
+ stream_int_retnclose(&s->si[0], http_error_message(s, HTTP_ERR_500));
+
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_RESOURCE;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_R;
+
+ return 0;
+ }
+
+ path = http_get_path(txn);
+ url2sa(req->buf->p + msg->sl.rq.u,
+ path ? path - (req->buf->p + msg->sl.rq.u) : msg->sl.rq.u_l,
+ &conn->addr.to, NULL);
+ /* If the path was found, we have to remove everything between
+ * req->buf->p + msg->sl.rq.u and path (excluded). If it was not
+ * found, we need to replace the u_l characters starting at
+ * req->buf->p + msg->sl.rq.u with a single "/".
+ */
+ if (path) {
+ char *cur_ptr = req->buf->p;
+ char *cur_end = cur_ptr + txn->req.sl.rq.l;
+ int delta;
+
+ delta = buffer_replace2(req->buf, req->buf->p + msg->sl.rq.u, path, NULL, 0);
+ http_msg_move_end(&txn->req, delta);
+ cur_end += delta;
+ if (http_parse_reqline(&txn->req, HTTP_MSG_RQMETH, cur_ptr, cur_end + 1, NULL, NULL) == NULL)
+ goto return_bad_req;
+ }
+ else {
+ char *cur_ptr = req->buf->p;
+ char *cur_end = cur_ptr + txn->req.sl.rq.l;
+ int delta;
+
+ delta = buffer_replace2(req->buf, req->buf->p + msg->sl.rq.u,
+ req->buf->p + msg->sl.rq.u + msg->sl.rq.u_l, "/", 1);
+ http_msg_move_end(&txn->req, delta);
+ cur_end += delta;
+ if (http_parse_reqline(&txn->req, HTTP_MSG_RQMETH, cur_ptr, cur_end + 1, NULL, NULL) == NULL)
+ goto return_bad_req;
+ }
+ }
+
+ /*
+ * 7: Now we can work with the cookies.
+ * Note that doing so might move headers in the request, but
+ * the fields will stay coherent and the URI will not move.
+ * This should only be performed in the backend.
+ */
+ if ((s->be->cookie_name || sess->fe->capture_name)
+ && !(txn->flags & (TX_CLDENY|TX_CLTARPIT)))
+ manage_client_side_cookies(s, req);
+
+ /* add unique-id if "header-unique-id" is specified */
+
+ if (!LIST_ISEMPTY(&sess->fe->format_unique_id)) {
+ if ((s->unique_id = pool_alloc2(pool2_uniqueid)) == NULL)
+ goto return_bad_req;
+ s->unique_id[0] = '\0';
+ build_logline(s, s->unique_id, UNIQUEID_LEN, &sess->fe->format_unique_id);
+ }
+
+ if (sess->fe->header_unique_id && s->unique_id) {
+ chunk_printf(&trash, "%s: %s", sess->fe->header_unique_id, s->unique_id);
+ if (trash.len < 0)
+ goto return_bad_req;
+ if (unlikely(http_header_add_tail2(&txn->req, &txn->hdr_idx, trash.str, trash.len) < 0))
+ goto return_bad_req;
+ }
+
+ /*
+ * 9: add X-Forwarded-For if either the frontend or the backend
+ * asks for it.
+ */
+ if ((sess->fe->options | s->be->options) & PR_O_FWDFOR) {
+ struct hdr_ctx ctx = { .idx = 0 };
+ if (!((sess->fe->options | s->be->options) & PR_O_FF_ALWAYS) &&
+ http_find_header2(s->be->fwdfor_hdr_len ? s->be->fwdfor_hdr_name : sess->fe->fwdfor_hdr_name,
+ s->be->fwdfor_hdr_len ? s->be->fwdfor_hdr_len : sess->fe->fwdfor_hdr_len,
+ req->buf->p, &txn->hdr_idx, &ctx)) {
+ /* The header is only to be added if none is present, and we
+ * just found one, so don't do anything.
+ */
+ }
+ else if (cli_conn && cli_conn->addr.from.ss_family == AF_INET) {
+ /* Add an X-Forwarded-For header unless the source IP is
+ * in the 'except' network range.
+ */
+ if ((!sess->fe->except_mask.s_addr ||
+ (((struct sockaddr_in *)&cli_conn->addr.from)->sin_addr.s_addr & sess->fe->except_mask.s_addr)
+ != sess->fe->except_net.s_addr) &&
+ (!s->be->except_mask.s_addr ||
+ (((struct sockaddr_in *)&cli_conn->addr.from)->sin_addr.s_addr & s->be->except_mask.s_addr)
+ != s->be->except_net.s_addr)) {
+ int len;
+ unsigned char *pn;
+ pn = (unsigned char *)&((struct sockaddr_in *)&cli_conn->addr.from)->sin_addr;
+
+ /* Note: we rely on the backend to get the header name to be used for
+ * x-forwarded-for, because the header is really meant for the backends.
+ * However, if the backend did not specify any option, we have to rely
+ * on the frontend's header name.
+ */
+ if (s->be->fwdfor_hdr_len) {
+ len = s->be->fwdfor_hdr_len;
+ memcpy(trash.str, s->be->fwdfor_hdr_name, len);
+ } else {
+ len = sess->fe->fwdfor_hdr_len;
+ memcpy(trash.str, sess->fe->fwdfor_hdr_name, len);
+ }
+ len += snprintf(trash.str + len, trash.size - len, ": %d.%d.%d.%d", pn[0], pn[1], pn[2], pn[3]);
+
+ if (unlikely(http_header_add_tail2(&txn->req, &txn->hdr_idx, trash.str, len) < 0))
+ goto return_bad_req;
+ }
+ }
+ else if (cli_conn && cli_conn->addr.from.ss_family == AF_INET6) {
+ /* FIXME: for the sake of completeness, we should also support
+ * 'except' here, although it is mostly useless in this case.
+ */
+ int len;
+ char pn[INET6_ADDRSTRLEN];
+ inet_ntop(AF_INET6,
+ (const void *)&((struct sockaddr_in6 *)(&cli_conn->addr.from))->sin6_addr,
+ pn, sizeof(pn));
+
+ /* Note: we rely on the backend to get the header name to be used for
+ * x-forwarded-for, because the header is really meant for the backends.
+ * However, if the backend did not specify any option, we have to rely
+ * on the frontend's header name.
+ */
+ if (s->be->fwdfor_hdr_len) {
+ len = s->be->fwdfor_hdr_len;
+ memcpy(trash.str, s->be->fwdfor_hdr_name, len);
+ } else {
+ len = sess->fe->fwdfor_hdr_len;
+ memcpy(trash.str, sess->fe->fwdfor_hdr_name, len);
+ }
+ len += snprintf(trash.str + len, trash.size - len, ": %s", pn);
+
+ if (unlikely(http_header_add_tail2(&txn->req, &txn->hdr_idx, trash.str, len) < 0))
+ goto return_bad_req;
+ }
+ }
+
+ /*
+ * 10: add X-Original-To if either the frontend or the backend
+ * asks for it.
+ */
+ if ((sess->fe->options | s->be->options) & PR_O_ORGTO) {
+
+ /* FIXME: don't know if IPv6 can handle that case too. */
+ if (cli_conn && cli_conn->addr.from.ss_family == AF_INET) {
+ /* Add an X-Original-To header unless the destination IP is
+ * in the 'except' network range.
+ */
+ conn_get_to_addr(cli_conn);
+
+ if (cli_conn->addr.to.ss_family == AF_INET &&
+ ((!sess->fe->except_mask_to.s_addr ||
+ (((struct sockaddr_in *)&cli_conn->addr.to)->sin_addr.s_addr & sess->fe->except_mask_to.s_addr)
+ != sess->fe->except_to.s_addr) &&
+ (!s->be->except_mask_to.s_addr ||
+ (((struct sockaddr_in *)&cli_conn->addr.to)->sin_addr.s_addr & s->be->except_mask_to.s_addr)
+ != s->be->except_to.s_addr))) {
+ int len;
+ unsigned char *pn;
+ pn = (unsigned char *)&((struct sockaddr_in *)&cli_conn->addr.to)->sin_addr;
+
+ /* Note: we rely on the backend to get the header name to be used for
+ * x-original-to, because the header is really meant for the backends.
+ * However, if the backend did not specify any option, we have to rely
+ * on the frontend's header name.
+ */
+ if (s->be->orgto_hdr_len) {
+ len = s->be->orgto_hdr_len;
+ memcpy(trash.str, s->be->orgto_hdr_name, len);
+ } else {
+ len = sess->fe->orgto_hdr_len;
+ memcpy(trash.str, sess->fe->orgto_hdr_name, len);
+ }
+ len += snprintf(trash.str + len, trash.size - len, ": %d.%d.%d.%d", pn[0], pn[1], pn[2], pn[3]);
+
+ if (unlikely(http_header_add_tail2(&txn->req, &txn->hdr_idx, trash.str, len) < 0))
+ goto return_bad_req;
+ }
+ }
+ }
+
+ /* 11: add "Connection: close" or "Connection: keep-alive" if needed and not yet set.
+ * If an "Upgrade" token is found, the header is left untouched in order not to have
+ * to deal with some server bugs: some of them fail an Upgrade if anything but
+ * "Upgrade" is present in the Connection header.
+ */
+ if (!(txn->flags & TX_HDR_CONN_UPG) &&
+ (((txn->flags & TX_CON_WANT_MSK) != TX_CON_WANT_TUN) ||
+ ((sess->fe->options & PR_O_HTTP_MODE) == PR_O_HTTP_PCL ||
+ (s->be->options & PR_O_HTTP_MODE) == PR_O_HTTP_PCL))) {
+ unsigned int want_flags = 0;
+
+ if (msg->flags & HTTP_MSGF_VER_11) {
+ if (((txn->flags & TX_CON_WANT_MSK) >= TX_CON_WANT_SCL ||
+ ((sess->fe->options & PR_O_HTTP_MODE) == PR_O_HTTP_PCL ||
+ (s->be->options & PR_O_HTTP_MODE) == PR_O_HTTP_PCL)) &&
+ !((sess->fe->options2|s->be->options2) & PR_O2_FAKE_KA))
+ want_flags |= TX_CON_CLO_SET;
+ } else {
+ if (((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_KAL &&
+ ((sess->fe->options & PR_O_HTTP_MODE) != PR_O_HTTP_PCL &&
+ (s->be->options & PR_O_HTTP_MODE) != PR_O_HTTP_PCL)) ||
+ ((sess->fe->options2|s->be->options2) & PR_O2_FAKE_KA))
+ want_flags |= TX_CON_KAL_SET;
+ }
+
+ if (want_flags != (txn->flags & (TX_CON_CLO_SET|TX_CON_KAL_SET)))
+ http_change_connection_header(txn, msg, want_flags);
+ }
+
+
+ /* If we have no server assigned yet and we're balancing on url_param
+ * with a POST request, we may be interested in checking the body for
+ * that parameter. This will be done in another analyser.
+ */
+ if (!(s->flags & (SF_ASSIGNED|SF_DIRECT)) &&
+ s->txn->meth == HTTP_METH_POST && s->be->url_param_name != NULL &&
+ (msg->flags & (HTTP_MSGF_CNT_LEN|HTTP_MSGF_TE_CHNK))) {
+ channel_dont_connect(req);
+ req->analysers |= AN_REQ_HTTP_BODY;
+ }
+
+ if (msg->flags & HTTP_MSGF_XFER_LEN) {
+ req->analysers |= AN_REQ_HTTP_XFER_BODY;
+#ifdef TCP_QUICKACK
+ /* We expect some data from the client. Unless we know for sure
+ * we already have a full request, we have to re-enable quick-ack
+ * in case we previously disabled it, otherwise we might cause
+ * the client to delay further data.
+ */
+ if ((sess->listener->options & LI_O_NOQUICKACK) &&
+ cli_conn && conn_ctrl_ready(cli_conn) &&
+ ((msg->flags & HTTP_MSGF_TE_CHNK) ||
+ (msg->body_len > req->buf->i - txn->req.eoh - 2)))
+ setsockopt(cli_conn->t.sock.fd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof(one));
+#endif
+ }
+
+ /*************************************************************
+ * OK, that's finished for the headers. We have done what we *
+ * could. Let's switch to the DATA state. *
+ ************************************************************/
+ req->analyse_exp = TICK_ETERNITY;
+ req->analysers &= ~an_bit;
+
+ /* if the server closes the connection, we want to immediately react
+ * and close the socket to save packets and syscalls.
+ */
+ if (!(req->analysers & AN_REQ_HTTP_XFER_BODY))
+ s->si[1].flags |= SI_FL_NOHALF;
+
+ s->logs.tv_request = now;
+ /* OK let's go on with the BODY now */
+ return 1;
+
+ return_bad_req: /* let's centralize all bad requests */
+ if (unlikely(msg->msg_state == HTTP_MSG_ERROR) || msg->err_pos >= 0) {
+ /* we detected a parsing error. We want to archive this request
+ * in the dedicated proxy area for later troubleshooting.
+ */
+ http_capture_bad_message(&sess->fe->invalid_req, s, msg, msg->msg_state, sess->fe);
+ }
+
+ txn->req.msg_state = HTTP_MSG_ERROR;
+ txn->status = 400;
+ req->analysers = 0;
+ stream_int_retnclose(&s->si[0], http_error_message(s, HTTP_ERR_400));
+
+ sess->fe->fe_counters.failed_req++;
+ if (sess->listener->counters)
+ sess->listener->counters->failed_req++;
+
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_PRXCOND;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_R;
+ return 0;
+}
+
+/* This function is an analyser which processes the HTTP tarpit. It always
+ * returns zero, at the beginning because it prevents any other processing
+ * from occurring, and at the end because it terminates the request.
+ */
+int http_process_tarpit(struct stream *s, struct channel *req, int an_bit)
+{
+ struct http_txn *txn = s->txn;
+
+ /* This connection is being tarpitted. The CLIENT side has
+ * already set the connect expiration date to the right
+ * timeout. We just have to check that the client is still
+ * there and that the timeout has not expired.
+ */
+ channel_dont_connect(req);
+ if ((req->flags & (CF_SHUTR|CF_READ_ERROR)) == 0 &&
+ !tick_is_expired(req->analyse_exp, now_ms))
+ return 0;
+
+ /* We will set the queue timer to the time spent, just for
+ * logging purposes. We fake a 500 server error, so that the
+ * attacker will not suspect his connection has been tarpitted.
+ * It will not cause trouble to the logs because we can exclude
+ * the tarpitted connections by filtering on the 'PT' status flags.
+ */
+ s->logs.t_queue = tv_ms_elapsed(&s->logs.tv_accept, &now);
+
+ txn->status = 500;
+ if (!(req->flags & CF_READ_ERROR))
+ stream_int_retnclose(&s->si[0], http_error_message(s, HTTP_ERR_500));
+
+ req->analysers = 0;
+ req->analyse_exp = TICK_ETERNITY;
+
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_PRXCOND;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_T;
+ return 0;
+}
+
+/* This function is an analyser which waits for the HTTP request body. It waits
+ * for either the buffer to be full, or the full advertised contents to have
+ * reached the buffer. It must only be called after the standard HTTP request
+ * processing has occurred, because it expects the request to be parsed and will
+ * look for the Expect header. It may send a 100-Continue interim response. It
+ * takes in input any state starting from HTTP_MSG_BODY and leaves with one of
+ * HTTP_MSG_CHK_SIZE, HTTP_MSG_DATA or HTTP_MSG_TRAILERS. It returns zero if it
+ * needs to read more data, or 1 once it has completed its analysis.
+ */
+int http_wait_for_request_body(struct stream *s, struct channel *req, int an_bit)
+{
+ struct session *sess = s->sess;
+ struct http_txn *txn = s->txn;
+ struct http_msg *msg = &s->txn->req;
+
+ /* We have to parse the HTTP request body to find any required data.
+ * "balance url_param check_post" should have been the only way to get
+ * into this. We were brought here after HTTP header analysis, so all
+ * related structures are ready.
+ */
+
+ if (msg->msg_state < HTTP_MSG_CHUNK_SIZE) {
+ /* This is the first call */
+ if (msg->msg_state < HTTP_MSG_BODY)
+ goto missing_data;
+
+ if (msg->msg_state < HTTP_MSG_100_SENT) {
+ /* If we have HTTP/1.1 and Expect: 100-continue, then we must
+ * send an HTTP/1.1 100 Continue intermediate response.
+ */
+ if (msg->flags & HTTP_MSGF_VER_11) {
+ struct hdr_ctx ctx;
+ ctx.idx = 0;
+ /* Expect is allowed in 1.1, look for it */
+ if (http_find_header2("Expect", 6, req->buf->p, &txn->hdr_idx, &ctx) &&
+ unlikely(ctx.vlen == 12 && strncasecmp(ctx.line+ctx.val, "100-continue", 12) == 0)) {
+ bo_inject(&s->res, http_100_chunk.str, http_100_chunk.len);
+ }
+ }
+ msg->msg_state = HTTP_MSG_100_SENT;
+ }
+
+ /* We have msg->sov, which points to the first byte of the message body.
+ * req->buf->p still points to the beginning of the message. We
+ * must save the body in msg->next because it survives buffer
+ * re-alignments.
+ */
+ msg->next = msg->sov;
+
+ if (msg->flags & HTTP_MSGF_TE_CHNK)
+ msg->msg_state = HTTP_MSG_CHUNK_SIZE;
+ else
+ msg->msg_state = HTTP_MSG_DATA;
+ }
+
+ if (!(msg->flags & HTTP_MSGF_TE_CHNK)) {
+ /* We're in content-length mode, we just have to wait for enough data. */
+ if (http_body_bytes(msg) < msg->body_len)
+ goto missing_data;
+
+ /* OK we have everything we need now */
+ goto http_end;
+ }
+
+ /* OK here we're parsing a chunked-encoded message */
+
+ if (msg->msg_state == HTTP_MSG_CHUNK_SIZE) {
+ /* read the chunk size and assign it to ->chunk_len, then
+ * set ->sov and ->next to point to the body and switch to DATA or
+ * TRAILERS state.
+ */
+ int ret = http_parse_chunk_size(msg);
+
+ if (!ret)
+ goto missing_data;
+ else if (ret < 0) {
+ stream_inc_http_err_ctr(s);
+ goto return_bad_req;
+ }
+ }
+
+ /* Now we're in HTTP_MSG_DATA or HTTP_MSG_TRAILERS state.
+ * The first data byte is at msg->sov + msg->sol. We're waiting
+ * for at least a whole chunk, or the whole content length in bytes, after
+ * msg->sov + msg->sol.
+ */
+ if (msg->msg_state == HTTP_MSG_TRAILERS)
+ goto http_end;
+
+ if (http_body_bytes(msg) >= msg->body_len) /* we have enough bytes now */
+ goto http_end;
+
+ missing_data:
+ /* we get here if we need to wait for more data. If the buffer is full,
+ * we have the maximum we can expect.
+ */
+ if (buffer_full(req->buf, global.tune.maxrewrite))
+ goto http_end;
+
+ if ((req->flags & CF_READ_TIMEOUT) || tick_is_expired(req->analyse_exp, now_ms)) {
+ txn->status = 408;
+ stream_int_retnclose(&s->si[0], http_error_message(s, HTTP_ERR_408));
+
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_CLITO;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_D;
+ goto return_err_msg;
+ }
+
+ /* we get here if we need to wait for more data */
+ if (!(req->flags & (CF_SHUTR | CF_READ_ERROR))) {
+ /* Not enough data. We'll re-use the http-request
+ * timeout here. Ideally, we should set the timeout
+ * relative to the accept() date. We just set the
+ * request timeout once at the beginning of the
+ * request.
+ */
+ channel_dont_connect(req);
+ if (!tick_isset(req->analyse_exp))
+ req->analyse_exp = tick_add_ifset(now_ms, s->be->timeout.httpreq);
+ return 0;
+ }
+
+ http_end:
+ /* The situation will not evolve, so let's give up on the analysis. */
+ s->logs.tv_request = now; /* update the request timer to reflect full request */
+ req->analysers &= ~an_bit;
+ req->analyse_exp = TICK_ETERNITY;
+ return 1;
+
+ return_bad_req: /* let's centralize all bad requests */
+ txn->req.msg_state = HTTP_MSG_ERROR;
+ txn->status = 400;
+ stream_int_retnclose(&s->si[0], http_error_message(s, HTTP_ERR_400));
+
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_PRXCOND;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_R;
+
+ return_err_msg:
+ req->analysers = 0;
+ sess->fe->fe_counters.failed_req++;
+ if (sess->listener->counters)
+ sess->listener->counters->failed_req++;
+ return 0;
+}
+
+/* send a server's name with an outgoing request over an established connection.
+ * Note: this function is designed to be called once the request has been scheduled
+ * for being forwarded. This is the reason why it rewinds the buffer before
+ * proceeding.
+ */
+int http_send_name_header(struct http_txn *txn, struct proxy* be, const char* srv_name) {
+
+ struct hdr_ctx ctx;
+
+ char *hdr_name = be->server_id_hdr_name;
+ int hdr_name_len = be->server_id_hdr_len;
+ struct channel *chn = txn->req.chn;
+ char *hdr_val;
+ unsigned int old_o, old_i;
+
+ ctx.idx = 0;
+
+ old_o = http_hdr_rewind(&txn->req);
+ if (old_o) {
+ /* The request was already skipped, let's restore it */
+ b_rew(chn->buf, old_o);
+ txn->req.next += old_o;
+ txn->req.sov += old_o;
+ }
+
+ old_i = chn->buf->i;
+ while (http_find_header2(hdr_name, hdr_name_len, txn->req.chn->buf->p, &txn->hdr_idx, &ctx)) {
+ /* remove any existing values from the header */
+ http_remove_header2(&txn->req, &txn->hdr_idx, &ctx);
+ }
+
+ /* Add the new header requested with the server value */
+ hdr_val = trash.str;
+ memcpy(hdr_val, hdr_name, hdr_name_len);
+ hdr_val += hdr_name_len;
+ *hdr_val++ = ':';
+ *hdr_val++ = ' ';
+ hdr_val += strlcpy2(hdr_val, srv_name, trash.str + trash.size - hdr_val);
+ http_header_add_tail2(&txn->req, &txn->hdr_idx, trash.str, hdr_val - trash.str);
+
+ if (old_o) {
+ /* If this was a forwarded request, we must readjust the amount of
+ * data to be forwarded in order to take into account the size
+ * variations. Note that the current state is >= HTTP_MSG_BODY,
+ * so we don't have to adjust ->sol.
+ */
+ old_o += chn->buf->i - old_i;
+ b_adv(chn->buf, old_o);
+ txn->req.next -= old_o;
+ txn->req.sov -= old_o;
+ }
+
+ return 0;
+}
+
+/* Terminate current transaction and prepare a new one. This is very tricky
+ * right now but it works.
+ */
+void http_end_txn_clean_session(struct stream *s)
+{
+ int prev_status = s->txn->status;
+ struct proxy *fe = strm_fe(s);
+ struct proxy *be = s->be;
+ struct connection *srv_conn;
+ struct server *srv;
+ unsigned int prev_flags = s->txn->flags;
+
+ /* FIXME: We need a more portable way of releasing a backend's and a
+ * server's connections. We need a safer way to reinitialize buffer
+ * flags. We also need a more accurate method for computing per-request
+ * data.
+ */
+ srv_conn = objt_conn(s->si[1].end);
+
+ /* unless we're doing keep-alive, we want to quickly close the connection
+ * to the server.
+ */
+ if (((s->txn->flags & TX_CON_WANT_MSK) != TX_CON_WANT_KAL) ||
+ !si_conn_ready(&s->si[1])) {
+ s->si[1].flags |= SI_FL_NOLINGER | SI_FL_NOHALF;
+ si_shutr(&s->si[1]);
+ si_shutw(&s->si[1]);
+ }
+
+ if (s->flags & SF_BE_ASSIGNED) {
+ be->beconn--;
+ if (unlikely(s->srv_conn))
+ sess_change_server(s, NULL);
+ }
+
+ s->logs.t_close = tv_ms_elapsed(&s->logs.tv_accept, &now);
+ stream_process_counters(s);
+
+ if (s->txn->status) {
+ int n;
+
+ n = s->txn->status / 100;
+ if (n < 1 || n > 5)
+ n = 0;
+
+ if (fe->mode == PR_MODE_HTTP) {
+ fe->fe_counters.p.http.rsp[n]++;
+ if (s->comp_algo && (s->flags & SF_COMP_READY))
+ fe->fe_counters.p.http.comp_rsp++;
+ }
+ if ((s->flags & SF_BE_ASSIGNED) &&
+ (be->mode == PR_MODE_HTTP)) {
+ be->be_counters.p.http.rsp[n]++;
+ be->be_counters.p.http.cum_req++;
+ if (s->comp_algo && (s->flags & SF_COMP_READY))
+ be->be_counters.p.http.comp_rsp++;
+ }
+ }
+
+ /* don't count other requests' data */
+ s->logs.bytes_in -= s->req.buf->i;
+ s->logs.bytes_out -= s->res.buf->i;
+
+ /* let's do a final log if we need it */
+ if (!LIST_ISEMPTY(&fe->logformat) && s->logs.logwait &&
+ !(s->flags & SF_MONITOR) &&
+ (!(fe->options & PR_O_NULLNOLOG) || s->req.total)) {
+ s->do_log(s);
+ }
+
+ /* stop tracking content-based counters */
+ stream_stop_content_counters(s);
+ stream_update_time_stats(s);
+
+ s->logs.accept_date = date; /* user-visible date for logging */
+ s->logs.tv_accept = now; /* corrected date for internal use */
+ tv_zero(&s->logs.tv_request);
+ s->logs.t_queue = -1;
+ s->logs.t_connect = -1;
+ s->logs.t_data = -1;
+ s->logs.t_close = 0;
+ s->logs.prx_queue_size = 0; /* we get the number of pending conns before us */
+ s->logs.srv_queue_size = 0; /* we will get this number soon */
+
+ s->logs.bytes_in = s->req.total = s->req.buf->i;
+ s->logs.bytes_out = s->res.total = s->res.buf->i;
+
+ if (s->pend_pos)
+ pendconn_free(s->pend_pos);
+
+ if (objt_server(s->target)) {
+ if (s->flags & SF_CURR_SESS) {
+ s->flags &= ~SF_CURR_SESS;
+ objt_server(s->target)->cur_sess--;
+ }
+ if (may_dequeue_tasks(objt_server(s->target), be))
+ process_srv_queue(objt_server(s->target));
+ }
+
+ s->target = NULL;
+
+ /* only release our endpoint if we don't intend to reuse the
+ * connection.
+ */
+ if (((s->txn->flags & TX_CON_WANT_MSK) != TX_CON_WANT_KAL) ||
+ !si_conn_ready(&s->si[1])) {
+ si_release_endpoint(&s->si[1]);
+ srv_conn = NULL;
+ }
+
+ s->si[1].state = s->si[1].prev_state = SI_ST_INI;
+ s->si[1].err_type = SI_ET_NONE;
+ s->si[1].conn_retries = 0; /* used for logging too */
+ s->si[1].exp = TICK_ETERNITY;
+ s->si[1].flags &= SI_FL_ISBACK | SI_FL_DONT_WAKE; /* we're in the context of process_stream */
+ s->req.flags &= ~(CF_SHUTW|CF_SHUTW_NOW|CF_AUTO_CONNECT|CF_WRITE_ERROR|CF_STREAMER|CF_STREAMER_FAST|CF_NEVER_WAIT|CF_WAKE_CONNECT|CF_WROTE_DATA);
+ s->res.flags &= ~(CF_SHUTR|CF_SHUTR_NOW|CF_READ_ATTACHED|CF_READ_ERROR|CF_READ_NOEXP|CF_STREAMER|CF_STREAMER_FAST|CF_WRITE_PARTIAL|CF_NEVER_WAIT|CF_WROTE_DATA);
+ s->flags &= ~(SF_DIRECT|SF_ASSIGNED|SF_ADDR_SET|SF_BE_ASSIGNED|SF_FORCE_PRST|SF_IGNORE_PRST);
+ s->flags &= ~(SF_CURR_SESS|SF_REDIRECTABLE|SF_SRV_REUSED);
+ s->flags &= ~(SF_ERR_MASK|SF_FINST_MASK|SF_REDISP);
+
+ s->txn->meth = 0;
+ http_reset_txn(s);
+ s->txn->flags |= TX_NOT_FIRST | TX_WAIT_NEXT_RQ;
+
+ if (prev_status == 401 || prev_status == 407) {
+ /* In HTTP keep-alive mode, if we receive a 401, we still have
+ * a chance of being able to send the visitor again to the same
+ * server over the same connection. This is required by some
+ * broken protocols such as NTLM, and anyway whenever there is
+ * an opportunity for sending the challenge to the proper place,
+ * it's better to do it (at least it helps with debugging).
+ */
+ s->txn->flags |= TX_PREFER_LAST;
+ if (srv_conn)
+ srv_conn->flags |= CO_FL_PRIVATE;
+ }
+
+ if (fe->options2 & PR_O2_INDEPSTR)
+ s->si[1].flags |= SI_FL_INDEP_STR;
+
+ if (fe->options2 & PR_O2_NODELAY) {
+ s->req.flags |= CF_NEVER_WAIT;
+ s->res.flags |= CF_NEVER_WAIT;
+ }
+
+ /* if the request buffer is not empty, it means we're
+ * about to process another request, so send pending
+ * data with MSG_MORE to merge TCP packets when possible.
+ * Just don't do this if the buffer is close to being full,
+ * because the request will wait for it to flush a little
+ * bit before proceeding.
+ */
+ if (s->req.buf->i) {
+ if (s->res.buf->o &&
+ !buffer_full(s->res.buf, global.tune.maxrewrite) &&
+ bi_end(s->res.buf) <= s->res.buf->data + s->res.buf->size - global.tune.maxrewrite)
+ s->res.flags |= CF_EXPECT_MORE;
+ }
+
+ /* we're removing the analysers, we MUST re-enable events detection.
+ * We don't enable close on the response channel since it's either
+ * already closed, or in keep-alive with an idle connection handler.
+ */
+ channel_auto_read(&s->req);
+ channel_auto_close(&s->req);
+ channel_auto_read(&s->res);
+
+ /* we're in keep-alive with an idle connection, monitor it if not already done */
+ if (srv_conn && LIST_ISEMPTY(&srv_conn->list)) {
+ srv = objt_server(srv_conn->target);
+ if (!srv)
+ si_idle_conn(&s->si[1], NULL);
+ else if ((srv_conn->flags & CO_FL_PRIVATE) ||
+ ((be->options & PR_O_REUSE_MASK) == PR_O_REUSE_NEVR))
+ si_idle_conn(&s->si[1], &srv->priv_conns);
+ else if (prev_flags & TX_NOT_FIRST)
+ /* note: we check the request, not the connection, but
+ * this is valid for strategies SAFE and AGGR, and in
+ * case of ALWS, we don't care anyway.
+ */
+ si_idle_conn(&s->si[1], &srv->safe_conns);
+ else
+ si_idle_conn(&s->si[1], &srv->idle_conns);
+ }
+
+ s->req.analysers = strm_li(s) ? strm_li(s)->analysers : 0;
+ s->res.analysers = 0;
+}
+
+
+/* This function updates the request state machine according to the response
+ * state machine and buffer flags. It returns 1 if it changes anything (flag
+ * or state), otherwise zero. It ignores any state before HTTP_MSG_DONE, as
+ * it is only used to find when a request/response couple is complete. Both
+ * this function and its equivalent should loop until both return zero. It
+ * can set its own state to DONE, CLOSING, CLOSED, TUNNEL, ERROR.
+ */
+int http_sync_req_state(struct stream *s)
+{
+ struct channel *chn = &s->req;
+ struct http_txn *txn = s->txn;
+ unsigned int old_flags = chn->flags;
+ unsigned int old_state = txn->req.msg_state;
+
+ if (unlikely(txn->req.msg_state < HTTP_MSG_BODY))
+ return 0;
+
+ if (txn->req.msg_state == HTTP_MSG_DONE) {
+ /* No need to read anymore, the request was completely parsed.
+ * We can shut the read side unless we want to abort_on_close,
+ * or we have a POST request. The issue with POST requests is
+ * that some browsers still send a CRLF after the request, and
+ * this CRLF must be read so that it does not remain in the kernel
+ * buffers, otherwise a close could cause an RST on some systems
+ * (eg: Linux).
+ * Note that if we're using keep-alive on the client side, we'd
+ * rather poll now and keep the polling enabled for the whole
+ * stream's life than enabling/disabling it between each
+ * response and next request.
+ */
+ if (((txn->flags & TX_CON_WANT_MSK) != TX_CON_WANT_SCL) &&
+ ((txn->flags & TX_CON_WANT_MSK) != TX_CON_WANT_KAL) &&
+ !(s->be->options & PR_O_ABRT_CLOSE) &&
+ txn->meth != HTTP_METH_POST)
+ channel_dont_read(chn);
+
+ /* if the server closes the connection, we want to immediately react
+ * and close the socket to save packets and syscalls.
+ */
+ s->si[1].flags |= SI_FL_NOHALF;
+
+ /* In any case we've finished parsing the request so we must
+ * disable Nagle when sending data because 1) we're not going
+ * to shut this side, and 2) the server is waiting for us to
+ * send pending data.
+ */
+ chn->flags |= CF_NEVER_WAIT;
+
+ if (txn->rsp.msg_state == HTTP_MSG_ERROR)
+ goto wait_other_side;
+
+ if (txn->rsp.msg_state < HTTP_MSG_DONE) {
+ /* The server has not finished responding, so we
+ * don't want to move in order not to upset it.
+ */
+ goto wait_other_side;
+ }
+
+ if (txn->rsp.msg_state == HTTP_MSG_TUNNEL) {
+ /* if any side switches to tunnel mode, the other one does too */
+ channel_auto_read(chn);
+ txn->req.msg_state = HTTP_MSG_TUNNEL;
+ goto wait_other_side;
+ }
+
+ /* When we get here, it means that both the request and the
+ * response have finished receiving. Depending on the connection
+ * mode, we'll have to wait for the last bytes to leave in either
+ * direction, and sometimes for a close to be effective.
+ */
+
+ if ((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_SCL) {
+ /* Server-close mode: queue a connection close to the server */
+ if (!(chn->flags & (CF_SHUTW|CF_SHUTW_NOW)))
+ channel_shutw_now(chn);
+ }
+ else if ((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_CLO) {
+ /* Option forceclose is set, or either side wants to close,
+ * let's enforce it now that we're not expecting any new
+ * data to come. The caller knows the stream is complete
+ * once both states are CLOSED.
+ */
+ if (!(chn->flags & (CF_SHUTW|CF_SHUTW_NOW))) {
+ channel_shutr_now(chn);
+ channel_shutw_now(chn);
+ }
+ }
+ else {
+ /* The last possible modes are keep-alive and tunnel. Tunnel mode
+ * will not have any analyser so it needs to poll for reads.
+ */
+ if ((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_TUN) {
+ channel_auto_read(chn);
+ txn->req.msg_state = HTTP_MSG_TUNNEL;
+ }
+ }
+
+ if (chn->flags & (CF_SHUTW|CF_SHUTW_NOW)) {
+ /* if we've just closed an output, let's switch */
+ s->si[1].flags |= SI_FL_NOLINGER; /* we want to close ASAP */
+
+ if (!channel_is_empty(chn)) {
+ txn->req.msg_state = HTTP_MSG_CLOSING;
+ goto http_msg_closing;
+ }
+ else {
+ txn->req.msg_state = HTTP_MSG_CLOSED;
+ goto http_msg_closed;
+ }
+ }
+ goto wait_other_side;
+ }
+
+ if (txn->req.msg_state == HTTP_MSG_CLOSING) {
+ http_msg_closing:
+ /* nothing else to forward, just waiting for the output buffer
+ * to be empty and for the shutw_now to take effect.
+ */
+ if (channel_is_empty(chn)) {
+ txn->req.msg_state = HTTP_MSG_CLOSED;
+ goto http_msg_closed;
+ }
+ else if (chn->flags & CF_SHUTW) {
+ txn->req.msg_state = HTTP_MSG_ERROR;
+ goto wait_other_side;
+ }
+ }
+
+ if (txn->req.msg_state == HTTP_MSG_CLOSED) {
+ http_msg_closed:
+ /* see above in MSG_DONE why we only do this in these states */
+ if (((txn->flags & TX_CON_WANT_MSK) != TX_CON_WANT_SCL) &&
+ ((txn->flags & TX_CON_WANT_MSK) != TX_CON_WANT_KAL) &&
+ !(s->be->options & PR_O_ABRT_CLOSE))
+ channel_dont_read(chn);
+ goto wait_other_side;
+ }
+
+ wait_other_side:
+ return txn->req.msg_state != old_state || chn->flags != old_flags;
+}
+
+
+/* This function updates the response state machine according to the request
+ * state machine and buffer flags. It returns 1 if it changes anything (flag
+ * or state), otherwise zero. It ignores any state before HTTP_MSG_DONE, as
+ * it is only used to find when a request/response couple is complete. Both
+ * this function and its equivalent should loop until both return zero. It
+ * can set its own state to DONE, CLOSING, CLOSED, TUNNEL, ERROR.
+ */
+int http_sync_res_state(struct stream *s)
+{
+ struct channel *chn = &s->res;
+ struct http_txn *txn = s->txn;
+ unsigned int old_flags = chn->flags;
+ unsigned int old_state = txn->rsp.msg_state;
+
+ if (unlikely(txn->rsp.msg_state < HTTP_MSG_BODY))
+ return 0;
+
+ if (txn->rsp.msg_state == HTTP_MSG_DONE) {
+ /* In theory, we don't need to read anymore, but we must
+ * still monitor the server connection for a possible close
+ * while the request is being uploaded, so we don't disable
+ * reading.
+ */
+ /* channel_dont_read(chn); */
+
+ if (txn->req.msg_state == HTTP_MSG_ERROR)
+ goto wait_other_side;
+
+ if (txn->req.msg_state < HTTP_MSG_DONE) {
+ /* The client seems to still be sending data, probably
+ * because we got an error response during an upload.
+ * We have the choice of either breaking the connection
+ * or letting it pass through. Let's do the latter.
+ */
+ goto wait_other_side;
+ }
+
+ if (txn->req.msg_state == HTTP_MSG_TUNNEL) {
+ /* if any side switches to tunnel mode, the other one does too */
+ channel_auto_read(chn);
+ txn->rsp.msg_state = HTTP_MSG_TUNNEL;
+ chn->flags |= CF_NEVER_WAIT;
+ goto wait_other_side;
+ }
+
+ /* When we get here, it means that both the request and the
+ * response have finished receiving. Depending on the connection
+ * mode, we'll have to wait for the last bytes to leave in either
+ * direction, and sometimes for a close to be effective.
+ */
+
+ if ((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_SCL) {
+ /* Server-close mode : shut read and wait for the request
+ * side to close its output buffer. The caller will detect
+ * when we're in DONE and the other is in CLOSED and will
+ * catch that for the final cleanup.
+ */
+ if (!(chn->flags & (CF_SHUTR|CF_SHUTR_NOW)))
+ channel_shutr_now(chn);
+ }
+ else if ((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_CLO) {
+ /* Option forceclose is set, or either side wants to close,
+ * let's enforce it now that we're not expecting any new
+ * data to come. The caller knows the stream is complete
+ * once both states are CLOSED.
+ */
+ if (!(chn->flags & (CF_SHUTW|CF_SHUTW_NOW))) {
+ channel_shutr_now(chn);
+ channel_shutw_now(chn);
+ }
+ }
+ else {
+ /* The last possible modes are keep-alive and tunnel. Tunnel will
+ * need to forward remaining data. Keep-alive will need to monitor
+ * for connection closing.
+ */
+ channel_auto_read(chn);
+ chn->flags |= CF_NEVER_WAIT;
+ if ((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_TUN)
+ txn->rsp.msg_state = HTTP_MSG_TUNNEL;
+ }
+
+ if (chn->flags & (CF_SHUTW|CF_SHUTW_NOW)) {
+ /* if we've just closed an output, let's switch */
+ if (!channel_is_empty(chn)) {
+ txn->rsp.msg_state = HTTP_MSG_CLOSING;
+ goto http_msg_closing;
+ }
+ else {
+ txn->rsp.msg_state = HTTP_MSG_CLOSED;
+ goto http_msg_closed;
+ }
+ }
+ goto wait_other_side;
+ }
+
+ if (txn->rsp.msg_state == HTTP_MSG_CLOSING) {
+ http_msg_closing:
+ /* nothing else to forward, just waiting for the output buffer
+ * to be empty and for the shutw_now to take effect.
+ */
+ if (channel_is_empty(chn)) {
+ txn->rsp.msg_state = HTTP_MSG_CLOSED;
+ goto http_msg_closed;
+ }
+ else if (chn->flags & CF_SHUTW) {
+ txn->rsp.msg_state = HTTP_MSG_ERROR;
+ s->be->be_counters.cli_aborts++;
+ if (objt_server(s->target))
+ objt_server(s->target)->counters.cli_aborts++;
+ goto wait_other_side;
+ }
+ }
+
+ if (txn->rsp.msg_state == HTTP_MSG_CLOSED) {
+ http_msg_closed:
+ /* drop any pending data */
+ channel_truncate(chn);
+ channel_auto_close(chn);
+ channel_auto_read(chn);
+ goto wait_other_side;
+ }
+
+ wait_other_side:
+ /* We force the response to leave immediately if we're waiting for the
+ * other side, since there is no pending shutdown to push it out.
+ */
+ if (!channel_is_empty(chn))
+ chn->flags |= CF_SEND_DONTWAIT;
+ return txn->rsp.msg_state != old_state || chn->flags != old_flags;
+}
+
+
+/* Resync the request and response state machines. Return 1 if either state
+ * changes.
+ */
+int http_resync_states(struct stream *s)
+{
+ struct http_txn *txn = s->txn;
+ int old_req_state = txn->req.msg_state;
+ int old_res_state = txn->rsp.msg_state;
+
+ http_sync_req_state(s);
+ while (1) {
+ if (!http_sync_res_state(s))
+ break;
+ if (!http_sync_req_state(s))
+ break;
+ }
+
+ /* OK, both state machines agree on a compatible state.
+ * There are a few cases we're interested in :
+ * - HTTP_MSG_TUNNEL on either means we have to disable both analysers
+ * - HTTP_MSG_CLOSED on both sides means we've reached the end in both
+ * directions, so let's simply disable both analysers.
+ * - HTTP_MSG_CLOSED on the response only means we must abort the
+ * request.
+ * - HTTP_MSG_CLOSED on the request and HTTP_MSG_DONE on the response
+ * with server-close mode means we've completed one request and we
+ * must re-initialize the server connection.
+ */
+
+ if (txn->req.msg_state == HTTP_MSG_TUNNEL ||
+ txn->rsp.msg_state == HTTP_MSG_TUNNEL ||
+ (txn->req.msg_state == HTTP_MSG_CLOSED &&
+ txn->rsp.msg_state == HTTP_MSG_CLOSED)) {
+ s->req.analysers = 0;
+ channel_auto_close(&s->req);
+ channel_auto_read(&s->req);
+ s->res.analysers = 0;
+ channel_auto_close(&s->res);
+ channel_auto_read(&s->res);
+ }
+ else if ((txn->req.msg_state >= HTTP_MSG_DONE &&
+ (txn->rsp.msg_state == HTTP_MSG_CLOSED || (s->res.flags & CF_SHUTW))) ||
+ txn->rsp.msg_state == HTTP_MSG_ERROR ||
+ txn->req.msg_state == HTTP_MSG_ERROR) {
+ s->res.analysers = 0;
+ channel_auto_close(&s->res);
+ channel_auto_read(&s->res);
+ s->req.analysers = 0;
+ channel_abort(&s->req);
+ channel_auto_close(&s->req);
+ channel_auto_read(&s->req);
+ channel_truncate(&s->req);
+ }
+ else if ((txn->req.msg_state == HTTP_MSG_DONE ||
+ txn->req.msg_state == HTTP_MSG_CLOSED) &&
+ txn->rsp.msg_state == HTTP_MSG_DONE &&
+ ((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_SCL ||
+ (txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_KAL)) {
+ /* server-close/keep-alive: terminate this transaction,
+ * possibly killing the server connection and reinitialize
+ * a fresh-new transaction.
+ */
+ http_end_txn_clean_session(s);
+ }
+
+ return txn->req.msg_state != old_req_state ||
+ txn->rsp.msg_state != old_res_state;
+}
+
+/* This function is an analyser which forwards request body (including chunk
+ * sizes if any). It is called as soon as we must forward, even if we forward
+ * zero byte. The only situation where it must not be called is when we're in
+ * tunnel mode and we want to forward till the close. It's used both to forward
+ * remaining data and to resync after end of body. It expects the msg_state to
+ * be between MSG_BODY and MSG_DONE (inclusive). It returns zero if it needs to
+ * read more data, or 1 once we can go on with next request or end the stream.
+ * When in MSG_DATA or MSG_TRAILERS, it will automatically forward chunk_len
+ * bytes of pending data + the headers if not already done.
+ */
+int http_request_forward_body(struct stream *s, struct channel *req, int an_bit)
+{
+ struct session *sess = s->sess;
+ struct http_txn *txn = s->txn;
+ struct http_msg *msg = &s->txn->req;
+
+ if (unlikely(msg->msg_state < HTTP_MSG_BODY))
+ return 0;
+
+ if ((req->flags & (CF_READ_ERROR|CF_READ_TIMEOUT|CF_WRITE_ERROR|CF_WRITE_TIMEOUT)) ||
+ ((req->flags & CF_SHUTW) && (req->to_forward || req->buf->o))) {
+ /* Output closed while we were sending data. We must abort and
+ * wake the other side up.
+ */
+ msg->msg_state = HTTP_MSG_ERROR;
+ http_resync_states(s);
+ return 1;
+ }
+
+ /* Note that we don't have to send 100-continue back because we don't
+ * need the data to complete our job, and it's up to the server to
+ * decide whether to return 100, 417 or anything else in response to
+ * an "Expect: 100-continue" header.
+ */
+
+ if (msg->sov > 0) {
+ /* we have msg->sov which points to the first byte of message
+ * body, and req->buf.p still points to the beginning of the
+ * message. We forward the headers now, as we don't need them
+ * anymore, and we want to flush them.
+ */
+ b_adv(req->buf, msg->sov);
+ msg->next -= msg->sov;
+ msg->sov = 0;
+
+ /* The previous analysers guarantee that the state is somewhere
+ * between MSG_BODY and the first MSG_DATA. So msg->sol and
+ * msg->next are always correct.
+ */
+ if (msg->msg_state < HTTP_MSG_CHUNK_SIZE) {
+ if (msg->flags & HTTP_MSGF_TE_CHNK)
+ msg->msg_state = HTTP_MSG_CHUNK_SIZE;
+ else
+ msg->msg_state = HTTP_MSG_DATA;
+ }
+ }
+
+ /* Some post-connect processing might want us to refrain from starting to
+ * forward data. Currently, the only reason for this is "balance url_param"
+ * which needs to parse/process the request after we've enabled forwarding.
+ */
+ if (unlikely(msg->flags & HTTP_MSGF_WAIT_CONN)) {
+ if (!(s->res.flags & CF_READ_ATTACHED)) {
+ channel_auto_connect(req);
+ req->flags |= CF_WAKE_CONNECT;
+ goto missing_data;
+ }
+ msg->flags &= ~HTTP_MSGF_WAIT_CONN;
+ }
+
+ /* in most states, we should abort in case of early close */
+ channel_auto_close(req);
+
+ if (req->to_forward) {
+ /* We can't process the buffer's contents yet */
+ req->flags |= CF_WAKE_WRITE;
+ goto missing_data;
+ }
+
+ while (1) {
+ if (msg->msg_state == HTTP_MSG_DATA) {
+ /* must still forward */
+ /* we may have some pending data starting at req->buf->p */
+ if (msg->chunk_len > req->buf->i - msg->next) {
+ req->flags |= CF_WAKE_WRITE;
+ goto missing_data;
+ }
+ msg->next += msg->chunk_len;
+ msg->chunk_len = 0;
+
+ /* nothing left to forward */
+ if (msg->flags & HTTP_MSGF_TE_CHNK)
+ msg->msg_state = HTTP_MSG_CHUNK_CRLF;
+ else
+ msg->msg_state = HTTP_MSG_DONE;
+ }
+ else if (msg->msg_state == HTTP_MSG_CHUNK_SIZE) {
+ /* read the chunk size and assign it to ->chunk_len, then
+ * set ->next to point to the body and switch to DATA or
+ * TRAILERS state.
+ */
+ int ret = http_parse_chunk_size(msg);
+
+ if (ret == 0)
+ goto missing_data;
+ else if (ret < 0) {
+ stream_inc_http_err_ctr(s);
+ if (msg->err_pos >= 0)
+ http_capture_bad_message(&sess->fe->invalid_req, s, msg, HTTP_MSG_CHUNK_SIZE, s->be);
+ goto return_bad_req;
+ }
+ /* otherwise we're in HTTP_MSG_DATA or HTTP_MSG_TRAILERS state */
+ }
+ else if (msg->msg_state == HTTP_MSG_CHUNK_CRLF) {
+ /* we want the CRLF after the data */
+ int ret = http_skip_chunk_crlf(msg);
+
+ if (ret == 0)
+ goto missing_data;
+ else if (ret < 0) {
+ stream_inc_http_err_ctr(s);
+ if (msg->err_pos >= 0)
+ http_capture_bad_message(&sess->fe->invalid_req, s, msg, HTTP_MSG_CHUNK_CRLF, s->be);
+ goto return_bad_req;
+ }
+ /* we're in MSG_CHUNK_SIZE now */
+ }
+ else if (msg->msg_state == HTTP_MSG_TRAILERS) {
+ int ret = http_forward_trailers(msg);
+
+ if (ret == 0)
+ goto missing_data;
+ else if (ret < 0) {
+ stream_inc_http_err_ctr(s);
+ if (msg->err_pos >= 0)
+ http_capture_bad_message(&sess->fe->invalid_req, s, msg, HTTP_MSG_TRAILERS, s->be);
+ goto return_bad_req;
+ }
+ /* we're in HTTP_MSG_DONE now */
+ }
+ else {
+ int old_state = msg->msg_state;
+
+ /* other states, DONE...TUNNEL */
+
+ /* we may have some pending data starting at req->buf->p
+ * such as last chunk of data or trailers.
+ */
+ b_adv(req->buf, msg->next);
+ if (unlikely(!(s->req.flags & CF_WROTE_DATA)))
+ msg->sov -= msg->next;
+ msg->next = 0;
+
+ /* we don't want to forward closes on DONE except in
+ * tunnel mode.
+ */
+ if ((txn->flags & TX_CON_WANT_MSK) != TX_CON_WANT_TUN)
+ channel_dont_close(req);
+ if (http_resync_states(s)) {
+ /* some state changes occurred, maybe the analyser
+ * was disabled too.
+ */
+ if (unlikely(msg->msg_state == HTTP_MSG_ERROR)) {
+ if (req->flags & CF_SHUTW) {
+ /* request errors are most likely due to
+ * the server aborting the transfer.
+ */
+ goto aborted_xfer;
+ }
+ if (msg->err_pos >= 0)
+ http_capture_bad_message(&sess->fe->invalid_req, s, msg, old_state, s->be);
+ goto return_bad_req;
+ }
+ return 1;
+ }
+
+ /* If "option abortonclose" is set on the backend, we
+ * want to monitor the client's connection and forward
+ * any shutdown notification to the server, which will
+ * decide whether to close or to go on processing the
+ * request. We only do that in tunnel mode, and not in
+ * other modes since it can be abused to exhaust source
+ * ports.
+ */
+ if (s->be->options & PR_O_ABRT_CLOSE) {
+ channel_auto_read(req);
+ if ((req->flags & (CF_SHUTR|CF_READ_NULL)) &&
+ ((txn->flags & TX_CON_WANT_MSK) != TX_CON_WANT_TUN))
+ s->si[1].flags |= SI_FL_NOLINGER;
+ channel_auto_close(req);
+ }
+ else if (s->txn->meth == HTTP_METH_POST) {
+ /* POST requests may require to read extra CRLF
+ * sent by broken browsers and which could cause
+ * an RST to be sent upon close on some systems
+ * (eg: Linux).
+ */
+ channel_auto_read(req);
+ }
+
+ return 0;
+ }
+ }
+
+ missing_data:
+ /* we may have some pending data starting at req->buf->p */
+ b_adv(req->buf, msg->next);
+ if (unlikely(!(s->req.flags & CF_WROTE_DATA)))
+ msg->sov -= msg->next + MIN(msg->chunk_len, req->buf->i);
+
+ msg->next = 0;
+ msg->chunk_len -= channel_forward(req, msg->chunk_len);
+
+ /* stop waiting for data if the input is closed before the end */
+ if (req->flags & CF_SHUTR) {
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_CLICL;
+ if (!(s->flags & SF_FINST_MASK)) {
+ if (txn->rsp.msg_state < HTTP_MSG_ERROR)
+ s->flags |= SF_FINST_H;
+ else
+ s->flags |= SF_FINST_D;
+ }
+
+ sess->fe->fe_counters.cli_aborts++;
+ s->be->be_counters.cli_aborts++;
+ if (objt_server(s->target))
+ objt_server(s->target)->counters.cli_aborts++;
+
+ goto return_bad_req_stats_ok;
+ }
+
+ /* waiting for the last bits to leave the buffer */
+ if (req->flags & CF_SHUTW)
+ goto aborted_xfer;
+
+ /* When TE: chunked is used, we need to get there again to parse remaining
+ * chunks even if the client has closed, so we don't want to set CF_DONTCLOSE.
+ */
+ if (msg->flags & HTTP_MSGF_TE_CHNK)
+ channel_dont_close(req);
+
+ /* We know that more data are expected, but we couldn't send more than
+ * what we did. So we always set the CF_EXPECT_MORE flag so that the
+ * system knows it must not set a PUSH on this first part. Interactive
+ * modes are already handled by the stream sock layer. We must not do
+ * this in content-length mode because it could present the MSG_MORE
+ * flag with the last block of forwarded data, which would cause an
+ * additional delay to be observed by the receiver.
+ */
+ if (msg->flags & HTTP_MSGF_TE_CHNK)
+ req->flags |= CF_EXPECT_MORE;
+
+ return 0;
+
+ return_bad_req: /* let's centralize all bad requests */
+ sess->fe->fe_counters.failed_req++;
+ if (sess->listener->counters)
+ sess->listener->counters->failed_req++;
+
+ return_bad_req_stats_ok:
+ /* we may have some pending data starting at req->buf->p */
+ b_adv(req->buf, msg->next);
+ msg->next = 0;
+
+ txn->req.msg_state = HTTP_MSG_ERROR;
+ if (txn->status) {
+ /* Note: we don't send any error if some data were already sent */
+ stream_int_retnclose(&s->si[0], NULL);
+ } else {
+ txn->status = 400;
+ stream_int_retnclose(&s->si[0], http_error_message(s, HTTP_ERR_400));
+ }
+ req->analysers = 0;
+ s->res.analysers = 0; /* we're in data phase, we want to abort both directions */
+
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_PRXCOND;
+ if (!(s->flags & SF_FINST_MASK)) {
+ if (txn->rsp.msg_state < HTTP_MSG_ERROR)
+ s->flags |= SF_FINST_H;
+ else
+ s->flags |= SF_FINST_D;
+ }
+ return 0;
+
+ aborted_xfer:
+ txn->req.msg_state = HTTP_MSG_ERROR;
+ if (txn->status) {
+ /* Note: we don't send any error if some data were already sent */
+ stream_int_retnclose(&s->si[0], NULL);
+ } else {
+ txn->status = 502;
+ stream_int_retnclose(&s->si[0], http_error_message(s, HTTP_ERR_502));
+ }
+ req->analysers = 0;
+ s->res.analysers = 0; /* we're in data phase, we want to abort both directions */
+
+ sess->fe->fe_counters.srv_aborts++;
+ s->be->be_counters.srv_aborts++;
+ if (objt_server(s->target))
+ objt_server(s->target)->counters.srv_aborts++;
+
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_SRVCL;
+ if (!(s->flags & SF_FINST_MASK)) {
+ if (txn->rsp.msg_state < HTTP_MSG_ERROR)
+ s->flags |= SF_FINST_H;
+ else
+ s->flags |= SF_FINST_D;
+ }
+ return 0;
+}
+
+/* This stream analyser waits for a complete HTTP response. It returns 1 if the
+ * processing can continue on next analysers, or zero if it either needs more
+ * data or wants to immediately abort the response (eg: timeout, error, ...). It
+ * is tied to AN_RES_WAIT_HTTP and may remove itself from s->res.analysers
+ * when it has nothing left to do, and may remove any analyser when it wants to
+ * abort.
+ */
+int http_wait_for_response(struct stream *s, struct channel *rep, int an_bit)
+{
+ struct session *sess = s->sess;
+ struct http_txn *txn = s->txn;
+ struct http_msg *msg = &txn->rsp;
+ struct hdr_ctx ctx;
+ int use_close_only;
+ int cur_idx;
+ int n;
+
+ DPRINTF(stderr,"[%u] %s: stream=%p b=%p, exp(r,w)=%u,%u bf=%08x bh=%d analysers=%02x\n",
+ now_ms, __FUNCTION__,
+ s,
+ rep,
+ rep->rex, rep->wex,
+ rep->flags,
+ rep->buf->i,
+ rep->analysers);
+
+ /*
+ * Now parse the partial (or complete) lines.
+ * We will check the response syntax, and also join multi-line
+ * headers. An index of all the lines will be elaborated while
+ * parsing.
+ *
+ * For the parsing, we use a 28-state FSM.
+ *
+ * Here is the information we currently have :
+ * rep->buf->p = beginning of response
+ * rep->buf->p + msg->eoh = end of processed headers / start of current one
+ * rep->buf->p + rep->buf->i = end of input data
+ * msg->eol = end of current header or line (LF or CRLF)
+ * msg->next = first non-visited byte
+ */
+
+ next_one:
+ /* There's a protected area at the end of the buffer for rewriting
+ * purposes. We don't want to start to parse the request if the
+ * protected area is affected, because we may have to move processed
+ * data later, which is much more complicated.
+ */
+ if (buffer_not_empty(rep->buf) && msg->msg_state < HTTP_MSG_ERROR) {
+ if (unlikely(!channel_is_rewritable(rep))) {
+ /* some data has still not left the buffer, wake us once that's done */
+ if (rep->flags & (CF_SHUTW|CF_SHUTW_NOW|CF_WRITE_ERROR|CF_WRITE_TIMEOUT))
+ goto abort_response;
+ channel_dont_close(rep);
+ rep->flags |= CF_READ_DONTWAIT; /* try to get back here ASAP */
+ rep->flags |= CF_WAKE_WRITE;
+ return 0;
+ }
+
+ if (unlikely(bi_end(rep->buf) < b_ptr(rep->buf, msg->next) ||
+ bi_end(rep->buf) > rep->buf->data + rep->buf->size - global.tune.maxrewrite))
+ buffer_slow_realign(rep->buf);
+
+ if (likely(msg->next < rep->buf->i))
+ http_msg_analyzer(msg, &txn->hdr_idx);
+ }
+
+ /* 1: we might have to print this header in debug mode */
+ if (unlikely((global.mode & MODE_DEBUG) &&
+ (!(global.mode & MODE_QUIET) || (global.mode & MODE_VERBOSE)) &&
+ msg->msg_state >= HTTP_MSG_BODY)) {
+ char *eol, *sol;
+
+ sol = rep->buf->p;
+ eol = sol + (msg->sl.st.l ? msg->sl.st.l : rep->buf->i);
+ debug_hdr("srvrep", s, sol, eol);
+
+ sol += hdr_idx_first_pos(&txn->hdr_idx);
+ cur_idx = hdr_idx_first_idx(&txn->hdr_idx);
+
+ while (cur_idx) {
+ eol = sol + txn->hdr_idx.v[cur_idx].len;
+ debug_hdr("srvhdr", s, sol, eol);
+ sol = eol + txn->hdr_idx.v[cur_idx].cr + 1;
+ cur_idx = txn->hdr_idx.v[cur_idx].next;
+ }
+ }
+
+ /*
+ * Now we quickly check if we have found a full valid response.
+ * If not so, we check the FD and buffer states before leaving.
+ * A full response is indicated by the fact that we have seen
+ * the double LF/CRLF, so the state is >= HTTP_MSG_BODY. Invalid
+ * responses are checked first.
+ *
+ * Depending on whether the client is still there or not, we
+ * may send an error response back or not. Note that normally
+ * we should only check for HTTP status there, and check I/O
+ * errors somewhere else.
+ */
+
+ if (unlikely(msg->msg_state < HTTP_MSG_BODY)) {
+ /* Invalid response */
+ if (unlikely(msg->msg_state == HTTP_MSG_ERROR)) {
+ /* we detected a parsing error. We want to archive this response
+ * in the dedicated proxy area for later troubleshooting.
+ */
+ hdr_response_bad:
+ if (msg->msg_state == HTTP_MSG_ERROR || msg->err_pos >= 0)
+ http_capture_bad_message(&s->be->invalid_rep, s, msg, msg->msg_state, sess->fe);
+
+ s->be->be_counters.failed_resp++;
+ if (objt_server(s->target)) {
+ objt_server(s->target)->counters.failed_resp++;
+ health_adjust(objt_server(s->target), HANA_STATUS_HTTP_HDRRSP);
+ }
+ abort_response:
+ channel_auto_close(rep);
+ rep->analysers = 0;
+ txn->status = 502;
+ s->si[1].flags |= SI_FL_NOLINGER;
+ channel_truncate(rep);
+ stream_int_retnclose(&s->si[0], http_error_message(s, HTTP_ERR_502));
+
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_PRXCOND;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_H;
+
+ return 0;
+ }
+
+ /* too large response does not fit in buffer. */
+ else if (buffer_full(rep->buf, global.tune.maxrewrite)) {
+ if (msg->err_pos < 0)
+ msg->err_pos = rep->buf->i;
+ goto hdr_response_bad;
+ }
+
+ /* read error */
+ else if (rep->flags & CF_READ_ERROR) {
+ if (msg->err_pos >= 0)
+ http_capture_bad_message(&s->be->invalid_rep, s, msg, msg->msg_state, sess->fe);
+ else if (txn->flags & TX_NOT_FIRST)
+ goto abort_keep_alive;
+
+ s->be->be_counters.failed_resp++;
+ if (objt_server(s->target)) {
+ objt_server(s->target)->counters.failed_resp++;
+ health_adjust(objt_server(s->target), HANA_STATUS_HTTP_READ_ERROR);
+ }
+
+ channel_auto_close(rep);
+ rep->analysers = 0;
+ txn->status = 502;
+ s->si[1].flags |= SI_FL_NOLINGER;
+ channel_truncate(rep);
+ stream_int_retnclose(&s->si[0], http_error_message(s, HTTP_ERR_502));
+
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_SRVCL;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_H;
+ return 0;
+ }
+
+ /* read timeout : return a 504 to the client. */
+ else if (rep->flags & CF_READ_TIMEOUT) {
+ if (msg->err_pos >= 0)
+ http_capture_bad_message(&s->be->invalid_rep, s, msg, msg->msg_state, sess->fe);
+
+ s->be->be_counters.failed_resp++;
+ if (objt_server(s->target)) {
+ objt_server(s->target)->counters.failed_resp++;
+ health_adjust(objt_server(s->target), HANA_STATUS_HTTP_READ_TIMEOUT);
+ }
+
+ channel_auto_close(rep);
+ rep->analysers = 0;
+ txn->status = 504;
+ s->si[1].flags |= SI_FL_NOLINGER;
+ channel_truncate(rep);
+ stream_int_retnclose(&s->si[0], http_error_message(s, HTTP_ERR_504));
+
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_SRVTO;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_H;
+ return 0;
+ }
+
+ /* client abort with an abortonclose */
+ else if ((rep->flags & CF_SHUTR) && ((s->req.flags & (CF_SHUTR|CF_SHUTW)) == (CF_SHUTR|CF_SHUTW))) {
+ sess->fe->fe_counters.cli_aborts++;
+ s->be->be_counters.cli_aborts++;
+ if (objt_server(s->target))
+ objt_server(s->target)->counters.cli_aborts++;
+
+ rep->analysers = 0;
+ channel_auto_close(rep);
+
+ txn->status = 400;
+ channel_truncate(rep);
+ stream_int_retnclose(&s->si[0], http_error_message(s, HTTP_ERR_400));
+
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_CLICL;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_H;
+
+ /* process_stream() will take care of the error */
+ return 0;
+ }
+
+ /* close from server, capture the response if the server has started to respond */
+ else if (rep->flags & CF_SHUTR) {
+ if (msg->msg_state >= HTTP_MSG_RPVER || msg->err_pos >= 0)
+ http_capture_bad_message(&s->be->invalid_rep, s, msg, msg->msg_state, sess->fe);
+ else if (txn->flags & TX_NOT_FIRST)
+ goto abort_keep_alive;
+
+ s->be->be_counters.failed_resp++;
+ if (objt_server(s->target)) {
+ objt_server(s->target)->counters.failed_resp++;
+ health_adjust(objt_server(s->target), HANA_STATUS_HTTP_BROKEN_PIPE);
+ }
+
+ channel_auto_close(rep);
+ rep->analysers = 0;
+ txn->status = 502;
+ s->si[1].flags |= SI_FL_NOLINGER;
+ channel_truncate(rep);
+ stream_int_retnclose(&s->si[0], http_error_message(s, HTTP_ERR_502));
+
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_SRVCL;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_H;
+ return 0;
+ }
+
+ /* write error to client (we don't send any message then) */
+ else if (rep->flags & CF_WRITE_ERROR) {
+ if (msg->err_pos >= 0)
+ http_capture_bad_message(&s->be->invalid_rep, s, msg, msg->msg_state, sess->fe);
+ else if (txn->flags & TX_NOT_FIRST)
+ goto abort_keep_alive;
+
+ s->be->be_counters.failed_resp++;
+ rep->analysers = 0;
+ channel_auto_close(rep);
+
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_CLICL;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_H;
+
+ /* process_stream() will take care of the error */
+ return 0;
+ }
+
+ channel_dont_close(rep);
+ rep->flags |= CF_READ_DONTWAIT; /* try to get back here ASAP */
+ return 0;
+ }
+
+ /* More interesting part now : we know that we have a complete
+ * response which at least looks like HTTP. We have an indicator
+ * of each header's length, so we can parse them quickly.
+ */
+
+ if (unlikely(msg->err_pos >= 0))
+ http_capture_bad_message(&s->be->invalid_rep, s, msg, msg->msg_state, sess->fe);
+
+ /*
+ * 1: get the status code
+ */
+ n = rep->buf->p[msg->sl.st.c] - '0';
+ if (n < 1 || n > 5)
+ n = 0;
+ /* when the client triggers a 4xx from the server, it's most often due
+ * to a missing object or permission. These events should be tracked
+ * because if they happen often, it may indicate a brute force or a
+ * vulnerability scan.
+ */
+ if (n == 4)
+ stream_inc_http_err_ctr(s);
+
+ if (objt_server(s->target))
+ objt_server(s->target)->counters.p.http.rsp[n]++;
+
+ /* RFC7230#2.6 has enforced the format of the HTTP version string to be
+ * exactly one digit "." one digit. This check may be disabled using
+ * option accept-invalid-http-response.
+ */
+ if (!(s->be->options2 & PR_O2_RSPBUG_OK)) {
+ if (msg->sl.st.v_l != 8) {
+ msg->err_pos = 0;
+ goto hdr_response_bad;
+ }
+
+ if (rep->buf->p[4] != '/' ||
+ !isdigit((unsigned char)rep->buf->p[5]) ||
+ rep->buf->p[6] != '.' ||
+ !isdigit((unsigned char)rep->buf->p[7])) {
+ msg->err_pos = 4;
+ goto hdr_response_bad;
+ }
+ }
+
+ /* check if the response is HTTP/1.1 or above */
+ if ((msg->sl.st.v_l == 8) &&
+ ((rep->buf->p[5] > '1') ||
+ ((rep->buf->p[5] == '1') && (rep->buf->p[7] >= '1'))))
+ msg->flags |= HTTP_MSGF_VER_11;
+
+ /* "connection" has not been parsed yet */
+ txn->flags &= ~(TX_HDR_CONN_PRS|TX_HDR_CONN_CLO|TX_HDR_CONN_KAL|TX_HDR_CONN_UPG|TX_CON_CLO_SET|TX_CON_KAL_SET);
+
+ /* transfer length unknown */
+ msg->flags &= ~HTTP_MSGF_XFER_LEN;
+
+ txn->status = strl2ui(rep->buf->p + msg->sl.st.c, msg->sl.st.c_l);
+
+ /* Adjust server's health based on status code. Note: status codes 501
+ * and 505 are triggered on demand by client request, so we must not
+ * count them as server failures.
+ */
+ if (objt_server(s->target)) {
+ if (txn->status >= 100 && (txn->status < 500 || txn->status == 501 || txn->status == 505))
+ health_adjust(objt_server(s->target), HANA_STATUS_HTTP_OK);
+ else
+ health_adjust(objt_server(s->target), HANA_STATUS_HTTP_STS);
+ }
+
+ /*
+ * 2: check for cacheability.
+ */
+
+ switch (txn->status) {
+ case 100:
+ /*
+ * We may be facing a 100-continue response, in which case this
+ * is not the right response, and we're waiting for the next one.
+ * Let's allow this response to go to the client and wait for the
+ * next one.
+ */
+ hdr_idx_init(&txn->hdr_idx);
+ msg->next -= channel_forward(rep, msg->next);
+ msg->msg_state = HTTP_MSG_RPBEFORE;
+ txn->status = 0;
+ s->logs.t_data = -1; /* was not a response yet */
+ goto next_one;
+
+ case 200:
+ case 203:
+ case 206:
+ case 300:
+ case 301:
+ case 410:
+ /* RFC2616 @13.4:
+ * "A response received with a status code of
+ * 200, 203, 206, 300, 301 or 410 MAY be stored
+ * by a cache (...) unless a cache-control
+ * directive prohibits caching."
+ *
+ * RFC2616 @9.5: POST method :
+ * "Responses to this method are not cacheable,
+ * unless the response includes appropriate
+ * Cache-Control or Expires header fields."
+ */
+ if (likely(txn->meth != HTTP_METH_POST) &&
+ ((s->be->options & PR_O_CHK_CACHE) || (s->be->ck_opts & PR_CK_NOC)))
+ txn->flags |= TX_CACHEABLE | TX_CACHE_COOK;
+ break;
+ default:
+ break;
+ }
+
+ /*
+ * 3: we may need to capture headers
+ */
+ s->logs.logwait &= ~LW_RESP;
+ if (unlikely((s->logs.logwait & LW_RSPHDR) && s->res_cap))
+ capture_headers(rep->buf->p, &txn->hdr_idx,
+ s->res_cap, sess->fe->rsp_cap);
+
+ /* 4: determine the transfer-length according to RFC2616 #4.4, updated
+ * by RFC7230#3.3.3 :
+ *
+ * The length of a message body is determined by one of the following
+ * (in order of precedence):
+ *
+ * 1. Any response to a HEAD request and any response with a 1xx
+ * (Informational), 204 (No Content), or 304 (Not Modified) status
+ * code is always terminated by the first empty line after the
+ * header fields, regardless of the header fields present in the
+ * message, and thus cannot contain a message body.
+ *
+ * 2. Any 2xx (Successful) response to a CONNECT request implies that
+ * the connection will become a tunnel immediately after the empty
+ * line that concludes the header fields. A client MUST ignore any
+ * Content-Length or Transfer-Encoding header fields received in
+ * such a message.
+ *
+ * 3. If a Transfer-Encoding header field is present and the chunked
+ * transfer coding (Section 4.1) is the final encoding, the message
+ * body length is determined by reading and decoding the chunked
+ * data until the transfer coding indicates the data is complete.
+ *
+ * If a Transfer-Encoding header field is present in a response and
+ * the chunked transfer coding is not the final encoding, the
+ * message body length is determined by reading the connection until
+ * it is closed by the server. If a Transfer-Encoding header field
+ * is present in a request and the chunked transfer coding is not
+ * the final encoding, the message body length cannot be determined
+ * reliably; the server MUST respond with the 400 (Bad Request)
+ * status code and then close the connection.
+ *
+ * If a message is received with both a Transfer-Encoding and a
+ * Content-Length header field, the Transfer-Encoding overrides the
+ * Content-Length. Such a message might indicate an attempt to
+ * perform request smuggling (Section 9.5) or response splitting
+ * (Section 9.4) and ought to be handled as an error. A sender MUST
+ * remove the received Content-Length field prior to forwarding such
+ * a message downstream.
+ *
+ * 4. If a message is received without Transfer-Encoding and with
+ * either multiple Content-Length header fields having differing
+ * field-values or a single Content-Length header field having an
+ * invalid value, then the message framing is invalid and the
+ * recipient MUST treat it as an unrecoverable error. If this is a
+ * request message, the server MUST respond with a 400 (Bad Request)
+ * status code and then close the connection. If this is a response
+ * message received by a proxy, the proxy MUST close the connection
+ * to the server, discard the received response, and send a 502 (Bad
+ * Gateway) response to the client. If this is a response message
+ * received by a user agent, the user agent MUST close the
+ * connection to the server and discard the received response.
+ *
+ * 5. If a valid Content-Length header field is present without
+ * Transfer-Encoding, its decimal value defines the expected message
+ * body length in octets. If the sender closes the connection or
+ * the recipient times out before the indicated number of octets are
+ * received, the recipient MUST consider the message to be
+ * incomplete and close the connection.
+ *
+ * 6. If this is a request message and none of the above are true, then
+ * the message body length is zero (no message body is present).
+ *
+ * 7. Otherwise, this is a response message without a declared message
+ * body length, so the message body length is determined by the
+ * number of octets received prior to the server closing the
+ * connection.
+ */
+
+ /* Skip parsing if no content length is possible. The response flags
+ * remain 0 as well as the chunk_len, which may or may not mirror
+ * the real header value, and we note that we know the response's length.
+ * FIXME: should we parse anyway and return an error on chunked encoding ?
+ */
+ if (txn->meth == HTTP_METH_HEAD ||
+ (txn->status >= 100 && txn->status < 200) ||
+ txn->status == 204 || txn->status == 304) {
+ msg->flags |= HTTP_MSGF_XFER_LEN;
+ s->comp_algo = NULL;
+ goto skip_content_length;
+ }
+
+ use_close_only = 0;
+ ctx.idx = 0;
+ while (http_find_header2("Transfer-Encoding", 17, rep->buf->p, &txn->hdr_idx, &ctx)) {
+ if (ctx.vlen == 7 && strncasecmp(ctx.line + ctx.val, "chunked", 7) == 0)
+ msg->flags |= (HTTP_MSGF_TE_CHNK | HTTP_MSGF_XFER_LEN);
+ else if (msg->flags & HTTP_MSGF_TE_CHNK) {
+ /* bad transfer-encoding (chunked followed by something else) */
+ use_close_only = 1;
+ msg->flags &= ~(HTTP_MSGF_TE_CHNK | HTTP_MSGF_XFER_LEN);
+ break;
+ }
+ }
+
+ /* Chunked responses must have their content-length removed */
+ ctx.idx = 0;
+ if (use_close_only || (msg->flags & HTTP_MSGF_TE_CHNK)) {
+ while (http_find_header2("Content-Length", 14, rep->buf->p, &txn->hdr_idx, &ctx))
+ http_remove_header2(msg, &txn->hdr_idx, &ctx);
+ }
+ else while (http_find_header2("Content-Length", 14, rep->buf->p, &txn->hdr_idx, &ctx)) {
+ signed long long cl;
+
+ if (!ctx.vlen) {
+ msg->err_pos = ctx.line + ctx.val - rep->buf->p;
+ goto hdr_response_bad;
+ }
+
+ if (strl2llrc(ctx.line + ctx.val, ctx.vlen, &cl)) {
+ msg->err_pos = ctx.line + ctx.val - rep->buf->p;
+ goto hdr_response_bad; /* parse failure */
+ }
+
+ if (cl < 0) {
+ msg->err_pos = ctx.line + ctx.val - rep->buf->p;
+ goto hdr_response_bad;
+ }
+
+ if ((msg->flags & HTTP_MSGF_CNT_LEN) && (msg->chunk_len != cl)) {
+ msg->err_pos = ctx.line + ctx.val - rep->buf->p;
+ goto hdr_response_bad; /* already specified, was different */
+ }
+
+ msg->flags |= HTTP_MSGF_CNT_LEN | HTTP_MSGF_XFER_LEN;
+ msg->body_len = msg->chunk_len = cl;
+ }
+
+ if (sess->fe->comp || s->be->comp)
+ select_compression_response_header(s, rep->buf);
+
+skip_content_length:
+ /* Now we have to check if we need to modify the Connection header.
+ * This is more difficult on the response than it is on the request,
+ * because we can have two different HTTP versions and we don't know
+ * how the client will interpret a response. For instance, let's say
+ * that the client sends a keep-alive request in HTTP/1.0 and gets an
+ * HTTP/1.1 response without any header. Maybe it will restrict itself
+ * to HTTP/1.0 because it only knows about it, and will consider the lack
+ * of header as a close, or maybe it knows HTTP/1.1 and can consider
+ * the lack of header as a keep-alive. Thus we will use two flags
+ * indicating how a request MAY be understood by the client. In case
+ * of multiple possibilities, we'll fix the header to be explicit. If
+ * ambiguous cases such as both close and keepalive are seen, then we
+ * will fall back to explicit close. Note that we won't take risks with
+ * HTTP/1.0 clients which may not necessarily understand keep-alive.
+ * See doc/internals/connection-header.txt for the complete matrix.
+ */
+
+ if (unlikely((txn->meth == HTTP_METH_CONNECT && txn->status == 200) ||
+ txn->status == 101)) {
+ /* Either we've established an explicit tunnel, or we're
+ * switching the protocol. In both cases, we're very unlikely
+ * to understand the next protocols. We have to switch to tunnel
+ * mode, so that we transfer the request and responses then let
+ * this protocol pass unmodified. When we later implement specific
+ * parsers for such protocols, we'll want to check the Upgrade
+ * header which contains information about that protocol for
+ * responses with status 101 (eg: see RFC2817 about TLS).
+ */
+ txn->flags = (txn->flags & ~TX_CON_WANT_MSK) | TX_CON_WANT_TUN;
+ }
+ else if ((txn->status >= 200) && !(txn->flags & TX_HDR_CONN_PRS) &&
+ ((txn->flags & TX_CON_WANT_MSK) != TX_CON_WANT_TUN ||
+ ((sess->fe->options & PR_O_HTTP_MODE) == PR_O_HTTP_PCL ||
+ (s->be->options & PR_O_HTTP_MODE) == PR_O_HTTP_PCL))) {
+ int to_del = 0;
+
+ /* this situation happens when combining pretend-keepalive with httpclose. */
+ if ((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_KAL &&
+ ((sess->fe->options & PR_O_HTTP_MODE) == PR_O_HTTP_PCL ||
+ (s->be->options & PR_O_HTTP_MODE) == PR_O_HTTP_PCL))
+ txn->flags = (txn->flags & ~TX_CON_WANT_MSK) | TX_CON_WANT_CLO;
+
+ /* on unknown transfer length, we must close */
+ if (!(msg->flags & HTTP_MSGF_XFER_LEN) &&
+ (txn->flags & TX_CON_WANT_MSK) != TX_CON_WANT_TUN)
+ txn->flags = (txn->flags & ~TX_CON_WANT_MSK) | TX_CON_WANT_CLO;
+
+ /* now adjust header transformations depending on current state */
+ if ((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_TUN ||
+ (txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_CLO) {
+ to_del |= 2; /* remove "keep-alive" on any response */
+ if (!(msg->flags & HTTP_MSGF_VER_11))
+ to_del |= 1; /* remove "close" for HTTP/1.0 responses */
+ }
+ else { /* SCL / KAL */
+ to_del |= 1; /* remove "close" on any response */
+ if (txn->req.flags & msg->flags & HTTP_MSGF_VER_11)
+ to_del |= 2; /* remove "keep-alive" on pure 1.1 responses */
+ }
+
+ /* Parse and remove some headers from the connection header */
+ http_parse_connection_header(txn, msg, to_del);
+
+ /* Some keep-alive responses are converted to Server-close if
+ * the server wants to close.
+ */
+ if ((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_KAL) {
+ if ((txn->flags & TX_HDR_CONN_CLO) ||
+ (!(txn->flags & TX_HDR_CONN_KAL) && !(msg->flags & HTTP_MSGF_VER_11)))
+ txn->flags = (txn->flags & ~TX_CON_WANT_MSK) | TX_CON_WANT_SCL;
+ }
+ }
+
+ /* we want to have the response time before we start processing it */
+ s->logs.t_data = tv_ms_elapsed(&s->logs.tv_accept, &now);
+
+ /* end of job, return OK */
+ rep->analysers &= ~an_bit;
+ rep->analyse_exp = TICK_ETERNITY;
+ channel_auto_close(rep);
+ return 1;
+
+ abort_keep_alive:
+ /* A keep-alive request to the server failed on a network error.
+ * The client is required to retry. We need to close without returning
+ * any other information so that the client retries.
+ */
+ txn->status = 0;
+ rep->analysers = 0;
+ s->req.analysers = 0;
+ channel_auto_close(rep);
+ s->logs.logwait = 0;
+ s->logs.level = 0;
+ s->res.flags &= ~CF_EXPECT_MORE; /* speed up sending a previous response */
+ channel_truncate(rep);
+ stream_int_retnclose(&s->si[0], NULL);
+ return 0;
+}
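The Content-Length loop above implements rule 4 of the RFC excerpt: duplicate headers are tolerated only when their values agree, and empty, unparsable or negative values are framing errors. The same logic can be sketched as a standalone helper (hypothetical names, using strtoll instead of HAProxy's strl2llrc):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Hypothetical helper (not HAProxy code): validate a set of Content-Length
 * values according to RFC 7230 rule 4. Returns 0 and stores the length on
 * success, -1 on a framing error. *out is left at -1 when no value was seen.
 */
static int check_content_length(const char *const *values, int n, long long *out)
{
	long long cl = -1;
	int i;

	for (i = 0; i < n; i++) {
		char *end;
		long long v;

		if (!*values[i])
			return -1;          /* empty value: framing error */
		errno = 0;
		v = strtoll(values[i], &end, 10);
		if (errno || *end || v < 0)
			return -1;          /* parse failure or negative length */
		if (cl >= 0 && cl != v)
			return -1;          /* duplicates must carry the same value */
		cl = v;
	}
	*out = cl;
	return 0;
}
```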
+
+/* This function performs all the processing enabled for the current response.
+ * It normally returns 1 unless it wants to break. It relies on buffers flags,
+ * and updates s->res.analysers. It might make sense to explode it into several
+ * other functions. It works like process_request (see indications above).
+ */
+int http_process_res_common(struct stream *s, struct channel *rep, int an_bit, struct proxy *px)
+{
+ struct session *sess = s->sess;
+ struct http_txn *txn = s->txn;
+ struct http_msg *msg = &txn->rsp;
+ struct proxy *cur_proxy;
+ struct cond_wordlist *wl;
+ enum rule_result ret = HTTP_RULE_RES_CONT;
+
+ DPRINTF(stderr,"[%u] %s: stream=%p b=%p, exp(r,w)=%u,%u bf=%08x bh=%d analysers=%02x\n",
+ now_ms, __FUNCTION__,
+ s,
+ rep,
+ rep->rex, rep->wex,
+ rep->flags,
+ rep->buf->i,
+ rep->analysers);
+
+ if (unlikely(msg->msg_state < HTTP_MSG_BODY)) /* we need more data */
+ return 0;
+
+ /* The stats applet needs to adjust the Connection header but we don't
+ * apply any filter there.
+ */
+ if (unlikely(objt_applet(s->target) == &http_stats_applet)) {
+ rep->analysers &= ~an_bit;
+ rep->analyse_exp = TICK_ETERNITY;
+ goto skip_filters;
+ }
+
+ /*
+ * We will have to evaluate the filters.
+ * As opposed to version 1.2, now they will be evaluated in the
+ * filters order and not in the header order. This means that
+ * each filter has to be validated among all headers.
+ *
+ * Filters are tried with ->be first, then with ->fe if it is
+ * different from ->be.
+ *
+ * Maybe we are in a resume condition. In this case we choose the
+ * "struct proxy" which contains the rule list matching the resume
+ * pointer. If none of these "struct proxy" matches, we initialize
+ * the process with the first one.
+ *
+ * In fact, we only check correspondence between the current list
+ * pointer and the ->fe rule list. If it doesn't match, we initialize
+ * the loop with the ->be.
+ */
+ if (s->current_rule_list == &sess->fe->http_res_rules)
+ cur_proxy = sess->fe;
+ else
+ cur_proxy = s->be;
+ while (1) {
+ struct proxy *rule_set = cur_proxy;
+
+ /* evaluate http-response rules */
+ if (ret == HTTP_RULE_RES_CONT) {
+ ret = http_res_get_intercept_rule(cur_proxy, &cur_proxy->http_res_rules, s);
+
+ if (ret == HTTP_RULE_RES_BADREQ)
+ goto return_srv_prx_502;
+
+ if (ret == HTTP_RULE_RES_DONE) {
+ rep->analysers &= ~an_bit;
+ rep->analyse_exp = TICK_ETERNITY;
+ return 1;
+ }
+ }
+
+ /* we need to be called again. */
+ if (ret == HTTP_RULE_RES_YIELD) {
+ channel_dont_close(rep);
+ return 0;
+ }
+
+ /* try headers filters */
+ if (rule_set->rsp_exp != NULL) {
+ if (apply_filters_to_response(s, rep, rule_set) < 0) {
+ return_bad_resp:
+ if (objt_server(s->target)) {
+ objt_server(s->target)->counters.failed_resp++;
+ health_adjust(objt_server(s->target), HANA_STATUS_HTTP_RSP);
+ }
+ s->be->be_counters.failed_resp++;
+ return_srv_prx_502:
+ rep->analysers = 0;
+ txn->status = 502;
+ s->logs.t_data = -1; /* was not a valid response */
+ s->si[1].flags |= SI_FL_NOLINGER;
+ channel_truncate(rep);
+ stream_int_retnclose(&s->si[0], http_error_message(s, HTTP_ERR_502));
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_PRXCOND;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_H;
+ return 0;
+ }
+ }
+
+ /* has the response been denied ? */
+ if (txn->flags & TX_SVDENY) {
+ if (objt_server(s->target))
+ objt_server(s->target)->counters.failed_secu++;
+
+ s->be->be_counters.denied_resp++;
+ sess->fe->fe_counters.denied_resp++;
+ if (sess->listener->counters)
+ sess->listener->counters->denied_resp++;
+
+ goto return_srv_prx_502;
+ }
+
+ /* add response headers from the rule sets in the same order */
+ list_for_each_entry(wl, &rule_set->rsp_add, list) {
+ if (txn->status < 200 && txn->status != 101)
+ break;
+ if (wl->cond) {
+ int ret = acl_exec_cond(wl->cond, px, sess, s, SMP_OPT_DIR_RES|SMP_OPT_FINAL);
+ ret = acl_pass(ret);
+ if (((struct acl_cond *)wl->cond)->pol == ACL_COND_UNLESS)
+ ret = !ret;
+ if (!ret)
+ continue;
+ }
+ if (unlikely(http_header_add_tail(&txn->rsp, &txn->hdr_idx, wl->s) < 0))
+ goto return_bad_resp;
+ }
+
+ /* check whether we're already working on the frontend */
+ if (cur_proxy == sess->fe)
+ break;
+ cur_proxy = sess->fe;
+ }
+
+ /* After this point, this analyzer can't return yield, so we can
+ * remove the bit corresponding to this analyzer from the list.
+ *
+ * Note that the intermediate returns and goto found previously
+ * reset the analyzers.
+ */
+ rep->analysers &= ~an_bit;
+ rep->analyse_exp = TICK_ETERNITY;
+
+ /* OK that's all we can do for 1xx responses */
+ if (unlikely(txn->status < 200 && txn->status != 101))
+ goto skip_header_mangling;
+
+ /*
+ * Now check for a server cookie.
+ */
+ if (s->be->cookie_name || sess->fe->capture_name || (s->be->options & PR_O_CHK_CACHE))
+ manage_server_side_cookies(s, rep);
+
+ /*
+ * Check for cache-control or pragma headers if required.
+ */
+ if (((s->be->options & PR_O_CHK_CACHE) || (s->be->ck_opts & PR_CK_NOC)) && txn->status != 101)
+ check_response_for_cacheability(s, rep);
+
+ /*
+ * Add server cookie in the response if needed
+ */
+ if (objt_server(s->target) && (s->be->ck_opts & PR_CK_INS) &&
+ !((txn->flags & TX_SCK_FOUND) && (s->be->ck_opts & PR_CK_PSV)) &&
+ (!(s->flags & SF_DIRECT) ||
+ ((s->be->cookie_maxidle || txn->cookie_last_date) &&
+ (!txn->cookie_last_date || (txn->cookie_last_date - date.tv_sec) < 0)) ||
+ (s->be->cookie_maxlife && !txn->cookie_first_date) || // set the first_date
+ (!s->be->cookie_maxlife && txn->cookie_first_date)) && // remove the first_date
+ (!(s->be->ck_opts & PR_CK_POST) || (txn->meth == HTTP_METH_POST)) &&
+ !(s->flags & SF_IGNORE_PRST)) {
+ /* the server is known, it's not the one the client requested, or the
+ * cookie's last seen date needs to be refreshed. We have to
+ * insert a set-cookie here, except if we want to insert only on POST
+ * requests and this one isn't. Note that servers which don't have cookies
+ * (eg: some backup servers) will return a full cookie removal request.
+ */
+ if (!objt_server(s->target)->cookie) {
+ chunk_printf(&trash,
+ "Set-Cookie: %s=; Expires=Thu, 01-Jan-1970 00:00:01 GMT; path=/",
+ s->be->cookie_name);
+ }
+ else {
+ chunk_printf(&trash, "Set-Cookie: %s=%s", s->be->cookie_name, objt_server(s->target)->cookie);
+
+ if (s->be->cookie_maxidle || s->be->cookie_maxlife) {
+ /* emit last_date, which is mandatory */
+ trash.str[trash.len++] = COOKIE_DELIM_DATE;
+ s30tob64((date.tv_sec+3) >> 2, trash.str + trash.len);
+ trash.len += 5;
+
+ if (s->be->cookie_maxlife) {
+ /* emit first_date, which is either the original one or
+ * the current date.
+ */
+ trash.str[trash.len++] = COOKIE_DELIM_DATE;
+ s30tob64(txn->cookie_first_date ?
+ txn->cookie_first_date >> 2 :
+ (date.tv_sec+3) >> 2, trash.str + trash.len);
+ trash.len += 5;
+ }
+ }
+ chunk_appendf(&trash, "; path=/");
+ }
+
+ if (s->be->cookie_domain)
+ chunk_appendf(&trash, "; domain=%s", s->be->cookie_domain);
+
+ if (s->be->ck_opts & PR_CK_HTTPONLY)
+ chunk_appendf(&trash, "; HttpOnly");
+
+ if (s->be->ck_opts & PR_CK_SECURE)
+ chunk_appendf(&trash, "; Secure");
+
+ if (unlikely(http_header_add_tail2(&txn->rsp, &txn->hdr_idx, trash.str, trash.len) < 0))
+ goto return_bad_resp;
+
+ txn->flags &= ~TX_SCK_MASK;
+ if (objt_server(s->target)->cookie && (s->flags & SF_DIRECT))
+ /* the server did not change, only the date was updated */
+ txn->flags |= TX_SCK_UPDATED;
+ else
+ txn->flags |= TX_SCK_INSERTED;
+
+ /* Here, we will tell any cache on the client side that we don't
+ * want it to cache this reply because HTTP/1.0 caches also cache cookies !
+ * Some caches understand the correct form: 'no-cache="set-cookie"', but
+ * others don't (eg: apache <= 1.3.26). So we use 'private' instead.
+ */
+ if ((s->be->ck_opts & PR_CK_NOC) && (txn->flags & TX_CACHEABLE)) {
+
+ txn->flags &= ~TX_CACHEABLE & ~TX_CACHE_COOK;
+
+ if (unlikely(http_header_add_tail2(&txn->rsp, &txn->hdr_idx,
+ "Cache-control: private", 22) < 0))
+ goto return_bad_resp;
+ }
+ }
+
+ /*
+ * Check if result will be cacheable with a cookie.
+ * We'll block the response if security checks have caught
+ * nasty things such as a cacheable cookie.
+ */
+ if (((txn->flags & (TX_CACHEABLE | TX_CACHE_COOK | TX_SCK_PRESENT)) ==
+ (TX_CACHEABLE | TX_CACHE_COOK | TX_SCK_PRESENT)) &&
+ (s->be->options & PR_O_CHK_CACHE)) {
+ /* we're in presence of a cacheable response containing
+ * a set-cookie header. We'll block it as requested by
+ * the 'checkcache' option, and send an alert.
+ */
+ if (objt_server(s->target))
+ objt_server(s->target)->counters.failed_secu++;
+
+ s->be->be_counters.denied_resp++;
+ sess->fe->fe_counters.denied_resp++;
+ if (sess->listener->counters)
+ sess->listener->counters->denied_resp++;
+
+ Alert("Blocking cacheable cookie in response from instance %s, server %s.\n",
+ s->be->id, objt_server(s->target) ? objt_server(s->target)->id : "<dispatch>");
+ send_log(s->be, LOG_ALERT,
+ "Blocking cacheable cookie in response from instance %s, server %s.\n",
+ s->be->id, objt_server(s->target) ? objt_server(s->target)->id : "<dispatch>");
+ goto return_srv_prx_502;
+ }
+
+ skip_filters:
+ /*
+ * Adjust "Connection: close" or "Connection: keep-alive" if needed.
+ * If an "Upgrade" token is found, the header is left untouched in order
+ * not to have to deal with some client bugs: some of them fail an upgrade
+ * if anything but "Upgrade" is present in the Connection header. We don't
+ * want to touch any 101 response either since it's switching to another
+ * protocol.
+ */
+ if ((txn->status != 101) && !(txn->flags & TX_HDR_CONN_UPG) &&
+ (((txn->flags & TX_CON_WANT_MSK) != TX_CON_WANT_TUN) ||
+ ((sess->fe->options & PR_O_HTTP_MODE) == PR_O_HTTP_PCL ||
+ (s->be->options & PR_O_HTTP_MODE) == PR_O_HTTP_PCL))) {
+ unsigned int want_flags = 0;
+
+ if ((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_KAL ||
+ (txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_SCL) {
+ /* we want a keep-alive response here. Keep-alive header
+ * required if either side is not 1.1.
+ */
+ if (!(txn->req.flags & msg->flags & HTTP_MSGF_VER_11))
+ want_flags |= TX_CON_KAL_SET;
+ }
+ else {
+ /* we want a close response here. Close header required if
+ * the server is 1.1, regardless of the client.
+ */
+ if (msg->flags & HTTP_MSGF_VER_11)
+ want_flags |= TX_CON_CLO_SET;
+ }
+
+ if (want_flags != (txn->flags & (TX_CON_CLO_SET|TX_CON_KAL_SET)))
+ http_change_connection_header(txn, msg, want_flags);
+ }
+
+ skip_header_mangling:
+ if ((msg->flags & HTTP_MSGF_XFER_LEN) ||
+ (txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_TUN)
+ rep->analysers |= AN_RES_HTTP_XFER_BODY;
+
+ /* if the user wants to log as soon as possible, without counting
+ * bytes from the server, then this is the right moment. We have
+ * to temporarily assign bytes_out to log what we currently have.
+ */
+ if (!LIST_ISEMPTY(&sess->fe->logformat) && !(s->logs.logwait & LW_BYTES)) {
+ s->logs.t_close = s->logs.t_data; /* to get a valid end date */
+ s->logs.bytes_out = txn->rsp.eoh;
+ s->do_log(s);
+ s->logs.bytes_out = 0;
+ }
+ return 1;
+}
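The cookie date fields above are emitted with s30tob64(), which packs a 30-bit value such as `(date.tv_sec+3) >> 2` (the date rounded up to 4-second resolution) into 5 characters of 6 bits each. An illustrative sketch of that packing, using the standard base64 alphabet rather than HAProxy's own table:

```c
#include <assert.h>
#include <string.h>

static const char b64tab[] =
	"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/* Illustrative sketch (not HAProxy's s30tob64): encode a 30-bit value into
 * 5 characters of 6 bits each (5 * 6 = 30 bits), most significant first.
 * <out> must hold at least 6 bytes.
 */
static void s30_to_b64(unsigned int in, char *out)
{
	int i;

	for (i = 0; i < 5; i++) {
		out[i] = b64tab[(in >> 24) & 0x3f]; /* take the top 6 of 30 bits */
		in <<= 6;
	}
	out[5] = '\0';
}
```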
+
+/* This function is an analyser which forwards response body (including chunk
+ * sizes if any). It is called as soon as we must forward, even if we forward
+ * zero byte. The only situation where it must not be called is when we're in
+ * tunnel mode and we want to forward till the close. It's used both to forward
+ * remaining data and to resync after end of body. It expects the msg_state to
+ * be between MSG_BODY and MSG_DONE (inclusive). It returns zero if it needs to
+ * read more data, or 1 once we can go on with next request or end the stream.
+ *
+ * It is capable of compressing response data both in content-length mode and
+ * in chunked mode. The state machines follows different flows depending on
+ * whether content-length and chunked modes are used, since there are no
+ * trailers in content-length :
+ *
+ * chk-mode cl-mode
+ * ,----- BODY -----.
+ * / \
+ * V size > 0 V chk-mode
+ * .--> SIZE -------------> DATA -------------> CRLF
+ * | | size == 0 | last byte |
+ * | v final crlf v inspected |
+ * | TRAILERS -----------> DONE |
+ * | |
+ * `----------------------------------------------'
+ *
+ * Compression only happens in the DATA state, and must be flushed in final
+ * states (TRAILERS/DONE) or when leaving on missing data. Normal forwarding
+ * is performed at once on final states for all bytes parsed, or when leaving
+ * on missing data.
+ */
+int http_response_forward_body(struct stream *s, struct channel *res, int an_bit)
+{
+ struct session *sess = s->sess;
+ struct http_txn *txn = s->txn;
+ struct http_msg *msg = &s->txn->rsp;
+ static struct buffer *tmpbuf = &buf_empty;
+ int compressing = 0;
+ int ret;
+
+ if (unlikely(msg->msg_state < HTTP_MSG_BODY))
+ return 0;
+
+ if ((res->flags & (CF_READ_ERROR|CF_READ_TIMEOUT|CF_WRITE_ERROR|CF_WRITE_TIMEOUT)) ||
+ ((res->flags & CF_SHUTW) && (res->to_forward || res->buf->o)) ||
+ !s->req.analysers) {
+ /* Output closed while we were sending data. We must abort and
+ * wake the other side up.
+ */
+ msg->msg_state = HTTP_MSG_ERROR;
+ http_resync_states(s);
+ return 1;
+ }
+
+ /* in most states, we should abort in case of early close */
+ channel_auto_close(res);
+
+ if (msg->sov > 0) {
+ /* we have msg->sov which points to the first byte of message
+ * body, and res->buf.p still points to the beginning of the
+ * message. We forward the headers now, as we don't need them
+ * anymore, and we want to flush them.
+ */
+ b_adv(res->buf, msg->sov);
+ msg->next -= msg->sov;
+ msg->sov = 0;
+
+ /* The previous analysers guarantee that the state is somewhere
+ * between MSG_BODY and the first MSG_DATA. So msg->sol and
+ * msg->next are always correct.
+ */
+ if (msg->msg_state < HTTP_MSG_CHUNK_SIZE) {
+ if (msg->flags & HTTP_MSGF_TE_CHNK)
+ msg->msg_state = HTTP_MSG_CHUNK_SIZE;
+ else
+ msg->msg_state = HTTP_MSG_DATA;
+ }
+ }
+
+ if (res->to_forward) {
+ /* We can't process the buffer's contents yet */
+ res->flags |= CF_WAKE_WRITE;
+ goto missing_data;
+ }
+
+ if (unlikely(s->comp_algo != NULL) && msg->msg_state < HTTP_MSG_TRAILERS) {
+ /* We need a compression buffer in the DATA state to put the
+ * output of compressed data, and in CRLF state to let the
+ * TRAILERS state finish the job of removing the trailing CRLF.
+ */
+ if (unlikely(!tmpbuf->size)) {
+ /* this is the first time we need the compression buffer */
+ if (b_alloc(&tmpbuf) == NULL)
+ goto aborted_xfer; /* no memory */
+ }
+
+ ret = http_compression_buffer_init(s, res->buf, tmpbuf);
+ if (ret < 0) {
+ res->flags |= CF_WAKE_WRITE;
+ goto missing_data; /* not enough space in buffers */
+ }
+ compressing = 1;
+ }
+
+ while (1) {
+ switch (msg->msg_state - HTTP_MSG_DATA) {
+ case HTTP_MSG_DATA - HTTP_MSG_DATA: /* must still forward */
+ /* we may have some pending data starting at res->buf->p */
+ if (unlikely(s->comp_algo)) {
+ ret = http_compression_buffer_add_data(s, res->buf, tmpbuf);
+ if (ret < 0)
+ goto aborted_xfer;
+
+ if (msg->chunk_len) {
+ /* input empty or output full */
+ if (res->buf->i > msg->next)
+ res->flags |= CF_WAKE_WRITE;
+ goto missing_data;
+ }
+ }
+ else {
+ if (msg->chunk_len > res->buf->i - msg->next) {
+ /* output full */
+ res->flags |= CF_WAKE_WRITE;
+ goto missing_data;
+ }
+ msg->next += msg->chunk_len;
+ msg->chunk_len = 0;
+ }
+
+ /* nothing left to forward */
+ if (msg->flags & HTTP_MSGF_TE_CHNK) {
+ msg->msg_state = HTTP_MSG_CHUNK_CRLF;
+ } else {
+ msg->msg_state = HTTP_MSG_DONE;
+ break;
+ }
+ /* fall through for HTTP_MSG_CHUNK_CRLF */
+
+ case HTTP_MSG_CHUNK_CRLF - HTTP_MSG_DATA:
+ /* we want the CRLF after the data */
+
+ ret = http_skip_chunk_crlf(msg);
+ if (ret == 0)
+ goto missing_data;
+ else if (ret < 0) {
+ if (msg->err_pos >= 0)
+ http_capture_bad_message(&s->be->invalid_rep, s, msg, HTTP_MSG_CHUNK_CRLF, sess->fe);
+ goto return_bad_res;
+ }
+ /* we're in MSG_CHUNK_SIZE now, fall through */
+
+ case HTTP_MSG_CHUNK_SIZE - HTTP_MSG_DATA:
+ /* read the chunk size and assign it to ->chunk_len, then
+ * set ->next to point to the body and switch to DATA or
+ * TRAILERS state.
+ */
+
+ ret = http_parse_chunk_size(msg);
+ if (ret == 0)
+ goto missing_data;
+ else if (ret < 0) {
+ if (msg->err_pos >= 0)
+ http_capture_bad_message(&s->be->invalid_rep, s, msg, HTTP_MSG_CHUNK_SIZE, sess->fe);
+ goto return_bad_res;
+ }
+ /* otherwise we're in HTTP_MSG_DATA or HTTP_MSG_TRAILERS state */
+ break;
+
+ case HTTP_MSG_TRAILERS - HTTP_MSG_DATA:
+ if (unlikely(compressing)) {
+ /* we need to flush output contents before syncing FSMs */
+ http_compression_buffer_end(s, &res->buf, &tmpbuf, 1);
+ compressing = 0;
+ }
+
+ ret = http_forward_trailers(msg);
+ if (ret == 0)
+ goto missing_data;
+ else if (ret < 0) {
+ if (msg->err_pos >= 0)
+ http_capture_bad_message(&s->be->invalid_rep, s, msg, HTTP_MSG_TRAILERS, sess->fe);
+ goto return_bad_res;
+ }
+ /* we're in HTTP_MSG_DONE now, fall through */
+
+ default:
+ /* other states, DONE...TUNNEL */
+ if (unlikely(compressing)) {
+ /* we need to flush output contents before syncing FSMs */
+ http_compression_buffer_end(s, &res->buf, &tmpbuf, 1);
+ compressing = 0;
+ }
+
+ /* we may have some pending data starting at res->buf->p
+ * such as a last chunk of data or trailers.
+ */
+ b_adv(res->buf, msg->next);
+ msg->next = 0;
+
+ ret = msg->msg_state;
+ /* for keep-alive we don't want to forward closes on DONE */
+ if ((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_KAL ||
+ (txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_SCL)
+ channel_dont_close(res);
+
+ if (http_resync_states(s)) {
+ /* some state changes occurred, maybe the analyser
+ * was disabled too.
+ */
+ if (unlikely(msg->msg_state == HTTP_MSG_ERROR)) {
+ if (res->flags & CF_SHUTW) {
+ /* response errors are most likely due to
+ * the client aborting the transfer.
+ */
+ goto aborted_xfer;
+ }
+ if (msg->err_pos >= 0)
+ http_capture_bad_message(&s->be->invalid_rep, s, msg, ret, sess->fe);
+ goto return_bad_res;
+ }
+ return 1;
+ }
+ return 0;
+ }
+ }
+
+ missing_data:
+ /* we may have some pending data starting at res->buf->p */
+ if (unlikely(compressing)) {
+ http_compression_buffer_end(s, &res->buf, &tmpbuf, msg->msg_state >= HTTP_MSG_TRAILERS);
+ compressing = 0;
+ }
+
+ if ((s->comp_algo == NULL || msg->msg_state >= HTTP_MSG_TRAILERS)) {
+ b_adv(res->buf, msg->next);
+ msg->next = 0;
+ msg->chunk_len -= channel_forward(res, msg->chunk_len);
+ }
+
+ if (res->flags & CF_SHUTW)
+ goto aborted_xfer;
+
+ /* stop waiting for data if the input is closed before the end. If the
+ * client side was already closed, it means that the client has aborted,
+ * so we don't want to count this as a server abort. Otherwise it's a
+ * server abort.
+ */
+ if (res->flags & CF_SHUTR) {
+ if ((s->req.flags & (CF_SHUTR|CF_SHUTW)) == (CF_SHUTR|CF_SHUTW))
+ goto aborted_xfer;
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_SRVCL;
+ s->be->be_counters.srv_aborts++;
+ if (objt_server(s->target))
+ objt_server(s->target)->counters.srv_aborts++;
+ goto return_bad_res_stats_ok;
+ }
+
+ /* we need to obey the req analyser, so if it leaves, we must too */
+ if (!s->req.analysers)
+ goto return_bad_res;
+
+ /* When TE: chunked is used, we need to get there again to parse remaining
+ * chunks even if the server has closed, so we don't want to set CF_DONTCLOSE.
+ * Similarly, with keep-alive on the client side, we don't want to forward a
+ * close.
+ */
+ if ((msg->flags & HTTP_MSGF_TE_CHNK) || s->comp_algo ||
+ (txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_KAL ||
+ (txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_SCL)
+ channel_dont_close(res);
+
+ /* We know that more data are expected, but we couldn't send more than
+ * what we did. So we always set the CF_EXPECT_MORE flag so that the
+ * system knows it must not set a PUSH on this first part. Interactive
+ * modes are already handled by the stream sock layer. We must not do
+ * this in content-length mode because it could present the MSG_MORE
+ * flag with the last block of forwarded data, which would cause an
+ * additional delay to be observed by the receiver.
+ */
+ if ((msg->flags & HTTP_MSGF_TE_CHNK) || s->comp_algo)
+ res->flags |= CF_EXPECT_MORE;
+
+ /* the stream handler will take care of timeouts and errors */
+ return 0;
+
+ return_bad_res: /* let's centralize all bad responses */
+ s->be->be_counters.failed_resp++;
+ if (objt_server(s->target))
+ objt_server(s->target)->counters.failed_resp++;
+
+ return_bad_res_stats_ok:
+ if (unlikely(compressing)) {
+ http_compression_buffer_end(s, &res->buf, &tmpbuf, msg->msg_state >= HTTP_MSG_TRAILERS);
+ compressing = 0;
+ }
+
+ /* we may have some pending data starting at res->buf->p */
+ if (s->comp_algo == NULL) {
+ b_adv(res->buf, msg->next);
+ msg->next = 0;
+ }
+
+ txn->rsp.msg_state = HTTP_MSG_ERROR;
+ /* don't send any error message as we're in the body */
+ stream_int_retnclose(&s->si[0], NULL);
+ res->analysers = 0;
+ s->req.analysers = 0; /* we're in data phase, we want to abort both directions */
+ if (objt_server(s->target))
+ health_adjust(objt_server(s->target), HANA_STATUS_HTTP_HDRRSP);
+
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_PRXCOND;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_D;
+ return 0;
+
+ aborted_xfer:
+ if (unlikely(compressing)) {
+ http_compression_buffer_end(s, &res->buf, &tmpbuf, msg->msg_state >= HTTP_MSG_TRAILERS);
+ compressing = 0;
+ }
+
+ txn->rsp.msg_state = HTTP_MSG_ERROR;
+ /* don't send any error message as we're in the body */
+ stream_int_retnclose(&s->si[0], NULL);
+ res->analysers = 0;
+ s->req.analysers = 0; /* we're in data phase, we want to abort both directions */
+
+ sess->fe->fe_counters.cli_aborts++;
+ s->be->be_counters.cli_aborts++;
+ if (objt_server(s->target))
+ objt_server(s->target)->counters.cli_aborts++;
+
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_CLICL;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_D;
+ return 0;
+}
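The chunked-mode states above rely on http_parse_chunk_size() to read the hexadecimal chunk-size line. A simplified standalone sketch of such a parser (hypothetical, ignoring chunk extensions), following the same "0 = need more data, -1 = error" convention used in the forwarding loop:

```c
#include <assert.h>
#include <ctype.h>

/* Hypothetical sketch (not HAProxy's http_parse_chunk_size): parse the
 * hexadecimal chunk-size line of a "Transfer-Encoding: chunked" body.
 * Returns the number of bytes consumed, 0 if more data is needed, or -1
 * on a syntax error. Chunk extensions (";name=value") are not handled.
 */
static int parse_chunk_size(const char *buf, int len, unsigned int *out)
{
	unsigned int size = 0;
	int i = 0;

	while (i < len && isxdigit((unsigned char)buf[i])) {
		int c = buf[i];

		size = (size << 4) |
		       (unsigned int)(c <= '9' ? c - '0' : (c | 0x20) - 'a' + 10);
		i++;
	}
	if (i == 0)
		return len ? -1 : 0;  /* no hex digit at all */
	if (i + 1 >= len)
		return 0;             /* wait for the CRLF */
	if (buf[i] != '\r' || buf[i + 1] != '\n')
		return -1;            /* malformed chunk-size line */
	*out = size;
	return i + 2;                 /* digits plus CRLF */
}
```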
+
+/* Iterate the same filter through all request headers.
+ * Returns 1 if this filter can be stopped upon return, otherwise 0.
+ * Since it can manage the switch to another backend, it updates the per-proxy
+ * DENY stats.
+ */
+int apply_filter_to_req_headers(struct stream *s, struct channel *req, struct hdr_exp *exp)
+{
+ char *cur_ptr, *cur_end, *cur_next;
+ int cur_idx, old_idx, last_hdr;
+ struct http_txn *txn = s->txn;
+ struct hdr_idx_elem *cur_hdr;
+ int delta;
+
+ last_hdr = 0;
+
+ cur_next = req->buf->p + hdr_idx_first_pos(&txn->hdr_idx);
+ old_idx = 0;
+
+ while (!last_hdr) {
+ if (unlikely(txn->flags & (TX_CLDENY | TX_CLTARPIT)))
+ return 1;
+ else if (unlikely(txn->flags & TX_CLALLOW) &&
+ (exp->action == ACT_ALLOW ||
+ exp->action == ACT_DENY ||
+ exp->action == ACT_TARPIT))
+ return 0;
+
+ cur_idx = txn->hdr_idx.v[old_idx].next;
+ if (!cur_idx)
+ break;
+
+ cur_hdr = &txn->hdr_idx.v[cur_idx];
+ cur_ptr = cur_next;
+ cur_end = cur_ptr + cur_hdr->len;
+ cur_next = cur_end + cur_hdr->cr + 1;
+
+ /* Now we have one header between cur_ptr and cur_end,
+ * and the next header starts at cur_next.
+ */
+
+ if (regex_exec_match2(exp->preg, cur_ptr, cur_end-cur_ptr, MAX_MATCH, pmatch, 0)) {
+ switch (exp->action) {
+ case ACT_ALLOW:
+ txn->flags |= TX_CLALLOW;
+ last_hdr = 1;
+ break;
+
+ case ACT_DENY:
+ txn->flags |= TX_CLDENY;
+ last_hdr = 1;
+ break;
+
+ case ACT_TARPIT:
+ txn->flags |= TX_CLTARPIT;
+ last_hdr = 1;
+ break;
+
+ case ACT_REPLACE:
+ trash.len = exp_replace(trash.str, trash.size, cur_ptr, exp->replace, pmatch);
+ if (trash.len < 0)
+ return -1;
+
+ delta = buffer_replace2(req->buf, cur_ptr, cur_end, trash.str, trash.len);
+ /* FIXME: if the user adds a newline in the replacement, the
+ * index will not be recalculated for now, and the new line
+ * will not be counted as a new header.
+ */
+
+ cur_end += delta;
+ cur_next += delta;
+ cur_hdr->len += delta;
+ http_msg_move_end(&txn->req, delta);
+ break;
+
+ case ACT_REMOVE:
+ delta = buffer_replace2(req->buf, cur_ptr, cur_next, NULL, 0);
+ cur_next += delta;
+
+ http_msg_move_end(&txn->req, delta);
+ txn->hdr_idx.v[old_idx].next = cur_hdr->next;
+ txn->hdr_idx.used--;
+ cur_hdr->len = 0;
+ cur_end = NULL; /* null-term has been rewritten */
+ cur_idx = old_idx;
+ break;
+
+ }
+ }
+
+ /* keep the link from this header to next one in case of later
+ * removal of next header.
+ */
+ old_idx = cur_idx;
+ }
+ return 0;
+}
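The ACT_REPLACE branch above substitutes regex matches into headers via exp_replace(). A much simpler standalone illustration of the same idea using POSIX regex (hypothetical helper; the format string is assumed to contain exactly one "%.*s" receiving the first capture group):

```c
#include <assert.h>
#include <regex.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch: rewrite a header line when it matches <pattern>,
 * substituting the first capture group into <fmt>, similar in spirit to
 * the ACT_REPLACE case above. Returns 1 if a rewrite happened, else 0.
 */
static int rewrite_header(const char *line, const char *pattern,
                          const char *fmt, char *out, size_t outlen)
{
	regex_t re;
	regmatch_t m[2];
	int ret = 0;

	if (regcomp(&re, pattern, REG_EXTENDED))
		return 0;
	if (regexec(&re, line, 2, m, 0) == 0 && m[1].rm_so >= 0) {
		/* pass the capture group as a length-bounded string */
		snprintf(out, outlen, fmt,
		         (int)(m[1].rm_eo - m[1].rm_so), line + m[1].rm_so);
		ret = 1;
	}
	regfree(&re);
	return ret;
}
```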
+
+
+/* Apply the filter to the request line.
+ * Returns 0 if nothing has been done, 1 if the filter has been applied,
+ * or -1 if a replacement resulted in an invalid request line.
+ * Since it can manage the switch to another backend, it updates the per-proxy
+ * DENY stats.
+ */
+int apply_filter_to_req_line(struct stream *s, struct channel *req, struct hdr_exp *exp)
+{
+ char *cur_ptr, *cur_end;
+ int done;
+ struct http_txn *txn = s->txn;
+ int delta;
+
+ if (unlikely(txn->flags & (TX_CLDENY | TX_CLTARPIT)))
+ return 1;
+ else if (unlikely(txn->flags & TX_CLALLOW) &&
+ (exp->action == ACT_ALLOW ||
+ exp->action == ACT_DENY ||
+ exp->action == ACT_TARPIT))
+ return 0;
+ else if (exp->action == ACT_REMOVE)
+ return 0;
+
+ done = 0;
+
+ cur_ptr = req->buf->p;
+ cur_end = cur_ptr + txn->req.sl.rq.l;
+
+ /* Now we have the request line between cur_ptr and cur_end */
+
+ if (regex_exec_match2(exp->preg, cur_ptr, cur_end-cur_ptr, MAX_MATCH, pmatch, 0)) {
+ switch (exp->action) {
+ case ACT_ALLOW:
+ txn->flags |= TX_CLALLOW;
+ done = 1;
+ break;
+
+ case ACT_DENY:
+ txn->flags |= TX_CLDENY;
+ done = 1;
+ break;
+
+ case ACT_TARPIT:
+ txn->flags |= TX_CLTARPIT;
+ done = 1;
+ break;
+
+ case ACT_REPLACE:
+ trash.len = exp_replace(trash.str, trash.size, cur_ptr, exp->replace, pmatch);
+ if (trash.len < 0)
+ return -1;
+
+ delta = buffer_replace2(req->buf, cur_ptr, cur_end, trash.str, trash.len);
+ /* FIXME: if the user adds a newline in the replacement, the
+ * index will not be recalculated for now, and the new line
+ * will not be counted as a new header.
+ */
+
+ http_msg_move_end(&txn->req, delta);
+ cur_end += delta;
+ cur_end = (char *)http_parse_reqline(&txn->req,
+ HTTP_MSG_RQMETH,
+ cur_ptr, cur_end + 1,
+ NULL, NULL);
+ if (unlikely(!cur_end))
+ return -1;
+
+ /* we have a full request and we know that we have either a CR
+ * or an LF at <ptr>.
+ */
+ txn->meth = find_http_meth(cur_ptr, txn->req.sl.rq.m_l);
+ hdr_idx_set_start(&txn->hdr_idx, txn->req.sl.rq.l, *cur_end == '\r');
+ /* there is no point trying this regex on headers */
+ return 1;
+ }
+ }
+ return done;
+}
+
+
+
+/*
+ * Apply all the req filters of proxy <px> to all headers in buffer <req> of stream <s>.
+ * Returns 0 if everything is alright, or -1 in case a replacement leads to an
+ * unparsable request. Since it can manage the switch to another backend, it
+ * updates the per-proxy DENY stats.
+ */
+int apply_filters_to_request(struct stream *s, struct channel *req, struct proxy *px)
+{
+ struct session *sess = s->sess;
+ struct http_txn *txn = s->txn;
+ struct hdr_exp *exp;
+
+ for (exp = px->req_exp; exp; exp = exp->next) {
+ int ret;
+
+ /*
+ * The interleaving of transformations and verdicts
+ * makes it difficult to decide to continue or stop
+ * the evaluation.
+ */
+
+ if (txn->flags & (TX_CLDENY|TX_CLTARPIT))
+ break;
+
+ if ((txn->flags & TX_CLALLOW) &&
+ (exp->action == ACT_ALLOW || exp->action == ACT_DENY ||
+ exp->action == ACT_TARPIT || exp->action == ACT_PASS))
+ continue;
+
+ /* if this filter had a condition, evaluate it now and skip to
+ * next filter if the condition does not match.
+ */
+ if (exp->cond) {
+ ret = acl_exec_cond(exp->cond, px, sess, s, SMP_OPT_DIR_REQ|SMP_OPT_FINAL);
+ ret = acl_pass(ret);
+ if (((struct acl_cond *)exp->cond)->pol == ACL_COND_UNLESS)
+ ret = !ret;
+
+ if (!ret)
+ continue;
+ }
+
+ /* Apply the filter to the request line. */
+ ret = apply_filter_to_req_line(s, req, exp);
+ if (unlikely(ret < 0))
+ return -1;
+
+ if (likely(ret == 0)) {
+ /* The filter did not match the request, so it can be
+ * iterated through all headers.
+ */
+ if (unlikely(apply_filter_to_req_headers(s, req, exp) < 0))
+ return -1;
+ }
+ }
+ return 0;
+}
+
+
+/* Find the end of a cookie value contained between <s> and <e>. It works the
+ * same way as with headers above except that the semi-colon also ends a token.
+ * See RFC2965 for more information. Note that it requires a valid header to
+ * return a valid result.
+ */
+char *find_cookie_value_end(char *s, const char *e)
+{
+ int quoted, qdpair;
+
+ quoted = qdpair = 0;
+ for (; s < e; s++) {
+ if (qdpair) qdpair = 0;
+ else if (quoted) {
+ if (*s == '\\') qdpair = 1;
+ else if (*s == '"') quoted = 0;
+ }
+ else if (*s == '"') quoted = 1;
+ else if (*s == ',' || *s == ';') return s;
+ }
+ return s;
+}
+
+/* Delete a value in a header between delimiters <from> and <next> in buffer
+ * <buf>. The number of characters displaced is returned, and the pointer to
+ * the first delimiter is updated if required. The function tries as much as
+ * possible to respect the following principles :
+ * - replace <from> delimiter by the <next> one unless <from> points to a
+ * colon, in which case <next> is simply removed
+ * - set exactly one space character after the new first delimiter, unless
+ * there are not enough characters in the block being moved to do so.
+ * - remove unneeded spaces before the previous delimiter and after the new
+ * one.
+ *
+ * It is the caller's responsibility to ensure that :
+ * - <from> points to a valid delimiter or the colon ;
+ * - <next> points to a valid delimiter or the final CR/LF ;
+ * - there are non-space chars before <from> ;
+ * - there is a CR/LF at or after <next>.
+ */
+int del_hdr_value(struct buffer *buf, char **from, char *next)
+{
+ char *prev = *from;
+
+ if (*prev == ':') {
+ /* We're removing the first value, preserve the colon and add a
+ * space if possible.
+ */
+ if (!http_is_crlf[(unsigned char)*next])
+ next++;
+ prev++;
+ if (prev < next)
+ *prev++ = ' ';
+
+ while (http_is_spht[(unsigned char)*next])
+ next++;
+ } else {
+ /* Remove useless spaces before the old delimiter. */
+ while (http_is_spht[(unsigned char)*(prev-1)])
+ prev--;
+ *from = prev;
+
+ /* copy the delimiter and if possible a space if we're
+ * not at the end of the line.
+ */
+ if (!http_is_crlf[(unsigned char)*next]) {
+ *prev++ = *next++;
+ if (prev + 1 < next)
+ *prev++ = ' ';
+ while (http_is_spht[(unsigned char)*next])
+ next++;
+ }
+ }
+ return buffer_replace2(buf, prev, next, NULL, 0);
+}
+
+/*
+ * Manage client-side cookies. It can impact performance by about 2% so it is
+ * desirable to call it only when needed. This code is quite complex because
+ * of the multiple very crappy and ambiguous syntaxes we have to support. It is
+ * highly recommended not to touch this part without a good reason!
+ */
+void manage_client_side_cookies(struct stream *s, struct channel *req)
+{
+ struct http_txn *txn = s->txn;
+ struct session *sess = s->sess;
+ int preserve_hdr;
+ int cur_idx, old_idx;
+ char *hdr_beg, *hdr_end, *hdr_next, *del_from;
+ char *prev, *att_beg, *att_end, *equal, *val_beg, *val_end, *next;
+
+ /* Iterate through the headers, we start with the start line. */
+ old_idx = 0;
+ hdr_next = req->buf->p + hdr_idx_first_pos(&txn->hdr_idx);
+
+ while ((cur_idx = txn->hdr_idx.v[old_idx].next)) {
+ struct hdr_idx_elem *cur_hdr;
+ int val;
+
+ cur_hdr = &txn->hdr_idx.v[cur_idx];
+ hdr_beg = hdr_next;
+ hdr_end = hdr_beg + cur_hdr->len;
+ hdr_next = hdr_end + cur_hdr->cr + 1;
+
+ /* We have one full header between hdr_beg and hdr_end, and the
+ * next header starts at hdr_next. We're only interested in
+ * "Cookie:" headers.
+ */
+
+ val = http_header_match2(hdr_beg, hdr_end, "Cookie", 6);
+ if (!val) {
+ old_idx = cur_idx;
+ continue;
+ }
+
+ del_from = NULL; /* nothing to be deleted */
+ preserve_hdr = 0; /* assume we may kill the whole header */
+
+ /* Now look for cookies. Conforming to RFC2109, we have to support
+ * attributes whose name begin with a '$', and associate them with
+ * the right cookie, if we want to delete this cookie.
+ * So there are 3 cases for each cookie read :
+ * 1) it's a special attribute, beginning with a '$' : ignore it.
+ * 2) it's a server id cookie that we *MAY* want to delete : save
+ * some pointers on it (last semi-colon, beginning of cookie...)
+ * 3) it's an application cookie : we *MAY* have to delete a previous
+ * "special" cookie.
+ * At the end of loop, if a "special" cookie remains, we may have to
+ * remove it. If no application cookie persists in the header, we
+ * *MUST* delete it.
+ *
+ * Note: RFC2965 is unclear about the processing of spaces around
+ * the equal sign in the ATTR=VALUE form. A careful inspection of
+ * the RFC explicitly allows spaces before it, and not within the
+ * tokens (attrs or values). An inspection of RFC2109 allows that
+ * too but section 10.1.3 lets one think that spaces may be allowed
+ * after the equal sign too, resulting in some (rare) buggy
+ * implementations trying to do that. So let's do what servers do.
+ * Latest ietf draft forbids spaces all around. Also, earlier RFCs
+ * allowed quoted strings in values, with any possible character
+ * after a backslash, including control chars and delimiters, which
+ * causes parsing to become ambiguous. Browsers also allow spaces
+ * within values even without quotes.
+ *
+ * We have to keep multiple pointers in order to support cookie
+ * removal at the beginning, middle or end of header without
+ * corrupting the header. All of these headers are valid :
+ *
+ * Cookie:NAME1=VALUE1;NAME2=VALUE2;NAME3=VALUE3\r\n
+ * Cookie:NAME1=VALUE1;NAME2_ONLY ;NAME3=VALUE3\r\n
+ * Cookie: NAME1 = VALUE 1 ; NAME2 = VALUE2 ; NAME3 = VALUE3\r\n
+ * | | | | | | | | |
+ * | | | | | | | | hdr_end <--+
+ * | | | | | | | +--> next
+ * | | | | | | +----> val_end
+ * | | | | | +-----------> val_beg
+ * | | | | +--------------> equal
+ * | | | +----------------> att_end
+ * | | +---------------------> att_beg
+ * | +--------------------------> prev
+ * +--------------------------------> hdr_beg
+ */
+
+ for (prev = hdr_beg + 6; prev < hdr_end; prev = next) {
+ /* Iterate through all cookies on this line */
+
+ /* find att_beg */
+ att_beg = prev + 1;
+ while (att_beg < hdr_end && http_is_spht[(unsigned char)*att_beg])
+ att_beg++;
+
+ /* find att_end : this is the first character after the last non
+ * space before the equal. It may be equal to hdr_end.
+ */
+ equal = att_end = att_beg;
+
+ while (equal < hdr_end) {
+ if (*equal == '=' || *equal == ',' || *equal == ';')
+ break;
+ if (http_is_spht[(unsigned char)*equal++])
+ continue;
+ att_end = equal;
+ }
+
+ /* here, <equal> points to '=', a delimiter or the end. <att_end>
+ * is between <att_beg> and <equal>, both may be identical.
+ */
+
+ /* look for end of cookie if there is an equal sign */
+ if (equal < hdr_end && *equal == '=') {
+ /* look for the beginning of the value */
+ val_beg = equal + 1;
+ while (val_beg < hdr_end && http_is_spht[(unsigned char)*val_beg])
+ val_beg++;
+
+ /* find the end of the value, respecting quotes */
+ next = find_cookie_value_end(val_beg, hdr_end);
+
+ /* make val_end point to the first white space or delimiter after the value */
+ val_end = next;
+ while (val_end > val_beg && http_is_spht[(unsigned char)*(val_end - 1)])
+ val_end--;
+ } else {
+ val_beg = val_end = next = equal;
+ }
+
+ /* We have nothing to do with attributes beginning with '$'. However,
+ * they will automatically be removed if a cookie before them is removed,
+ * since they're supposed to be linked together.
+ */
+ if (*att_beg == '$')
+ continue;
+
+ /* Ignore cookies with no equal sign */
+ if (equal == next) {
+ /* This is not our cookie, so we must preserve it. But if we already
+ * scheduled another cookie for removal, we cannot remove the
+ * complete header, but we can remove the previous block itself.
+ */
+ preserve_hdr = 1;
+ if (del_from != NULL) {
+ int delta = del_hdr_value(req->buf, &del_from, prev);
+ val_end += delta;
+ next += delta;
+ hdr_end += delta;
+ hdr_next += delta;
+ cur_hdr->len += delta;
+ http_msg_move_end(&txn->req, delta);
+ prev = del_from;
+ del_from = NULL;
+ }
+ continue;
+ }
+
+ /* if there are spaces around the equal sign, we need to
+ * strip them otherwise we'll get trouble for cookie captures,
+ * or even for rewrites. Since this happens extremely rarely,
+ * it does not hurt performance.
+ */
+ if (unlikely(att_end != equal || val_beg > equal + 1)) {
+ int stripped_before = 0;
+ int stripped_after = 0;
+
+ if (att_end != equal) {
+ stripped_before = buffer_replace2(req->buf, att_end, equal, NULL, 0);
+ equal += stripped_before;
+ val_beg += stripped_before;
+ }
+
+ if (val_beg > equal + 1) {
+ stripped_after = buffer_replace2(req->buf, equal + 1, val_beg, NULL, 0);
+ val_beg += stripped_after;
+ stripped_before += stripped_after;
+ }
+
+ val_end += stripped_before;
+ next += stripped_before;
+ hdr_end += stripped_before;
+ hdr_next += stripped_before;
+ cur_hdr->len += stripped_before;
+ http_msg_move_end(&txn->req, stripped_before);
+ }
+ /* now everything is as on the diagram above */
+
+ /* First, let's see if we want to capture this cookie. We check
+ * that we don't already have a client side cookie, because we
+ * can only capture one. Also as an optimisation, we ignore
+ * cookies shorter than the declared name.
+ */
+ if (sess->fe->capture_name != NULL && txn->cli_cookie == NULL &&
+ (val_end - att_beg >= sess->fe->capture_namelen) &&
+ memcmp(att_beg, sess->fe->capture_name, sess->fe->capture_namelen) == 0) {
+ int log_len = val_end - att_beg;
+
+ if ((txn->cli_cookie = pool_alloc2(pool2_capture)) == NULL) {
+ Alert("HTTP logging : out of memory.\n");
+ } else {
+ if (log_len > sess->fe->capture_len)
+ log_len = sess->fe->capture_len;
+ memcpy(txn->cli_cookie, att_beg, log_len);
+ txn->cli_cookie[log_len] = 0;
+ }
+ }
+
+ /* Persistence cookies in passive, rewrite or insert mode have the
+ * following form :
+ *
+ * Cookie: NAME=SRV[|<lastseen>[|<firstseen>]]
+ *
+ * For cookies in prefix mode, the form is :
+ *
+ * Cookie: NAME=SRV~VALUE
+ */
+ if ((att_end - att_beg == s->be->cookie_len) && (s->be->cookie_name != NULL) &&
+ (memcmp(att_beg, s->be->cookie_name, att_end - att_beg) == 0)) {
+ struct server *srv = s->be->srv;
+ char *delim;
+
+ /* if we're in cookie prefix mode, we'll search the delimiter so that we
+ * have the server ID between val_beg and delim, and the original cookie between
+ * delim+1 and val_end. Otherwise, delim==val_end :
+ *
+ * Cookie: NAME=SRV; # in all but prefix modes
+ * Cookie: NAME=SRV~OPAQUE ; # in prefix mode
+ * | || || | |+-> next
+ * | || || | +--> val_end
+ * | || || +---------> delim
+ * | || |+------------> val_beg
+ * | || +-------------> att_end = equal
+ * | |+-----------------> att_beg
+ * | +------------------> prev
+ * +-------------------------> hdr_beg
+ */
+
+ if (s->be->ck_opts & PR_CK_PFX) {
+ for (delim = val_beg; delim < val_end; delim++)
+ if (*delim == COOKIE_DELIM)
+ break;
+ } else {
+ char *vbar1;
+ delim = val_end;
+ /* Now check if the cookie contains a date field, which would
+ * appear after a vertical bar ('|') just after the server name
+ * and before the delimiter.
+ */
+ vbar1 = memchr(val_beg, COOKIE_DELIM_DATE, val_end - val_beg);
+ if (vbar1) {
+ /* OK, so left of the bar is the server's cookie and
+ * right is the last seen date. It is a base64 encoded
+ * 30-bit value representing the UNIX date since the
+ * epoch in 4-second quantities.
+ */
+ int val;
+ delim = vbar1++;
+ if (val_end - vbar1 >= 5) {
+ val = b64tos30(vbar1);
+ if (val > 0)
+ txn->cookie_last_date = val << 2;
+ }
+ /* look for a second vertical bar */
+ vbar1 = memchr(vbar1, COOKIE_DELIM_DATE, val_end - vbar1);
+ if (vbar1 && (val_end - vbar1 > 5)) {
+ val = b64tos30(vbar1 + 1);
+ if (val > 0)
+ txn->cookie_first_date = val << 2;
+ }
+ }
+ }
+
+ /* if the cookie has an expiration date and the proxy wants to check
+ * it, then we do that now. We first check if the cookie is too old,
+ * then only if it has expired. We detect strict overflow because the
+ * time resolution here is not great (4 seconds). Cookies with dates
+ * in the future are ignored if their offset is beyond one day. This
+ * allows an admin to fix timezone issues without expiring everyone
+ * and at the same time avoids keeping unwanted side effects for too
+ * long.
+ */
+ if (txn->cookie_first_date && s->be->cookie_maxlife &&
+ (((signed)(date.tv_sec - txn->cookie_first_date) > (signed)s->be->cookie_maxlife) ||
+ ((signed)(txn->cookie_first_date - date.tv_sec) > 86400))) {
+ txn->flags &= ~TX_CK_MASK;
+ txn->flags |= TX_CK_OLD;
+ delim = val_beg; // let's pretend we have not found the cookie
+ txn->cookie_first_date = 0;
+ txn->cookie_last_date = 0;
+ }
+ else if (txn->cookie_last_date && s->be->cookie_maxidle &&
+ (((signed)(date.tv_sec - txn->cookie_last_date) > (signed)s->be->cookie_maxidle) ||
+ ((signed)(txn->cookie_last_date - date.tv_sec) > 86400))) {
+ txn->flags &= ~TX_CK_MASK;
+ txn->flags |= TX_CK_EXPIRED;
+ delim = val_beg; // let's pretend we have not found the cookie
+ txn->cookie_first_date = 0;
+ txn->cookie_last_date = 0;
+ }
+
+ /* Here, we'll look for the first running server which supports the cookie.
+ * This allows sharing the same cookie between several servers, for example
+ * to dedicate backup servers to specific servers only.
+ * However, to prevent clients from sticking to a cookie-less backup server
+ * when they have incidentally learned an empty cookie, we simply ignore
+ * empty cookies and mark them as invalid.
+ * The same behaviour is applied when persistence must be ignored.
+ */
+ if ((delim == val_beg) || (s->flags & (SF_IGNORE_PRST | SF_ASSIGNED)))
+ srv = NULL;
+
+ while (srv) {
+ if (srv->cookie && (srv->cklen == delim - val_beg) &&
+ !memcmp(val_beg, srv->cookie, delim - val_beg)) {
+ if ((srv->state != SRV_ST_STOPPED) ||
+ (s->be->options & PR_O_PERSIST) ||
+ (s->flags & SF_FORCE_PRST)) {
+ /* we found the server and we can use it */
+ txn->flags &= ~TX_CK_MASK;
+ txn->flags |= (srv->state != SRV_ST_STOPPED) ? TX_CK_VALID : TX_CK_DOWN;
+ s->flags |= SF_DIRECT | SF_ASSIGNED;
+ s->target = &srv->obj_type;
+ break;
+ } else {
+ /* we found a server, but it's down,
+ * mark it as such and go on in case
+ * another one is available.
+ */
+ txn->flags &= ~TX_CK_MASK;
+ txn->flags |= TX_CK_DOWN;
+ }
+ }
+ srv = srv->next;
+ }
+
+ if (!srv && !(txn->flags & (TX_CK_DOWN|TX_CK_EXPIRED|TX_CK_OLD))) {
+ /* no server matched this cookie or we deliberately skipped it */
+ txn->flags &= ~TX_CK_MASK;
+ if ((s->flags & (SF_IGNORE_PRST | SF_ASSIGNED)))
+ txn->flags |= TX_CK_UNUSED;
+ else
+ txn->flags |= TX_CK_INVALID;
+ }
+
+ /* depending on the cookie mode, we may have to either :
+ * - delete the complete cookie if we're in insert+indirect mode, so that
+ * the server never sees it ;
+ * - remove the server id from the cookie value, and tag the cookie as an
+ * application cookie so that it does not get accidentally removed later,
+ * if we're in cookie prefix mode
+ */
+ if ((s->be->ck_opts & PR_CK_PFX) && (delim != val_end)) {
+ int delta; /* negative */
+
+ delta = buffer_replace2(req->buf, val_beg, delim + 1, NULL, 0);
+ val_end += delta;
+ next += delta;
+ hdr_end += delta;
+ hdr_next += delta;
+ cur_hdr->len += delta;
+ http_msg_move_end(&txn->req, delta);
+
+ del_from = NULL;
+ preserve_hdr = 1; /* we want to keep this cookie */
+ }
+ else if (del_from == NULL &&
+ (s->be->ck_opts & (PR_CK_INS | PR_CK_IND)) == (PR_CK_INS | PR_CK_IND)) {
+ del_from = prev;
+ }
+ } else {
+ /* This is not our cookie, so we must preserve it. But if we already
+ * scheduled another cookie for removal, we cannot remove the
+ * complete header, but we can remove the previous block itself.
+ */
+ preserve_hdr = 1;
+
+ if (del_from != NULL) {
+ int delta = del_hdr_value(req->buf, &del_from, prev);
+ if (att_beg >= del_from)
+ att_beg += delta;
+ if (att_end >= del_from)
+ att_end += delta;
+ val_beg += delta;
+ val_end += delta;
+ next += delta;
+ hdr_end += delta;
+ hdr_next += delta;
+ cur_hdr->len += delta;
+ http_msg_move_end(&txn->req, delta);
+ prev = del_from;
+ del_from = NULL;
+ }
+ }
+
+ /* continue with next cookie on this header line */
+ att_beg = next;
+ } /* for each cookie */
+
+ /* There are no more cookies on this line.
+ * We may still have one (or several) marked for deletion at the
+ * end of the line. We must do this now in two ways :
+ * - if some cookies must be preserved, we only delete from the
+ * mark to the end of line ;
+ * - if nothing needs to be preserved, simply delete the whole header
+ */
+ if (del_from) {
+ int delta;
+ if (preserve_hdr) {
+ delta = del_hdr_value(req->buf, &del_from, hdr_end);
+ hdr_end = del_from;
+ cur_hdr->len += delta;
+ } else {
+ delta = buffer_replace2(req->buf, hdr_beg, hdr_next, NULL, 0);
+
+ /* FIXME: this should be a separate function */
+ txn->hdr_idx.v[old_idx].next = cur_hdr->next;
+ txn->hdr_idx.used--;
+ cur_hdr->len = 0;
+ cur_idx = old_idx;
+ }
+ hdr_next += delta;
+ http_msg_move_end(&txn->req, delta);
+ }
+
+ /* check next header */
+ old_idx = cur_idx;
+ }
+}
+
+
+/* Iterate the same filter through all response headers contained in <rtr>.
+ * Returns 1 if this filter can be stopped upon return, otherwise 0.
+ */
+int apply_filter_to_resp_headers(struct stream *s, struct channel *rtr, struct hdr_exp *exp)
+{
+ char *cur_ptr, *cur_end, *cur_next;
+ int cur_idx, old_idx, last_hdr;
+ struct http_txn *txn = s->txn;
+ struct hdr_idx_elem *cur_hdr;
+ int delta;
+
+ last_hdr = 0;
+
+ cur_next = rtr->buf->p + hdr_idx_first_pos(&txn->hdr_idx);
+ old_idx = 0;
+
+ while (!last_hdr) {
+ if (unlikely(txn->flags & TX_SVDENY))
+ return 1;
+ else if (unlikely(txn->flags & TX_SVALLOW) &&
+ (exp->action == ACT_ALLOW ||
+ exp->action == ACT_DENY))
+ return 0;
+
+ cur_idx = txn->hdr_idx.v[old_idx].next;
+ if (!cur_idx)
+ break;
+
+ cur_hdr = &txn->hdr_idx.v[cur_idx];
+ cur_ptr = cur_next;
+ cur_end = cur_ptr + cur_hdr->len;
+ cur_next = cur_end + cur_hdr->cr + 1;
+
+ /* Now we have one header between cur_ptr and cur_end,
+ * and the next header starts at cur_next.
+ */
+
+ if (regex_exec_match2(exp->preg, cur_ptr, cur_end-cur_ptr, MAX_MATCH, pmatch, 0)) {
+ switch (exp->action) {
+ case ACT_ALLOW:
+ txn->flags |= TX_SVALLOW;
+ last_hdr = 1;
+ break;
+
+ case ACT_DENY:
+ txn->flags |= TX_SVDENY;
+ last_hdr = 1;
+ break;
+
+ case ACT_REPLACE:
+ trash.len = exp_replace(trash.str, trash.size, cur_ptr, exp->replace, pmatch);
+ if (trash.len < 0)
+ return -1;
+
+ delta = buffer_replace2(rtr->buf, cur_ptr, cur_end, trash.str, trash.len);
+ /* FIXME: if the user adds a newline in the replacement, the
+ * index will not be recalculated for now, and the new line
+ * will not be counted as a new header.
+ */
+
+ cur_end += delta;
+ cur_next += delta;
+ cur_hdr->len += delta;
+ http_msg_move_end(&txn->rsp, delta);
+ break;
+
+ case ACT_REMOVE:
+ delta = buffer_replace2(rtr->buf, cur_ptr, cur_next, NULL, 0);
+ cur_next += delta;
+
+ http_msg_move_end(&txn->rsp, delta);
+ txn->hdr_idx.v[old_idx].next = cur_hdr->next;
+ txn->hdr_idx.used--;
+ cur_hdr->len = 0;
+ cur_end = NULL; /* null-term has been rewritten */
+ cur_idx = old_idx;
+ break;
+
+ }
+ }
+
+ /* keep the link from this header to next one in case of later
+ * removal of next header.
+ */
+ old_idx = cur_idx;
+ }
+ return 0;
+}
+
+
+/* Apply the filter to the status line in the response buffer <rtr>.
+ * Returns 0 if nothing has been done, 1 if the filter has been applied,
+ * or -1 if a replacement resulted in an invalid status line.
+ */
+int apply_filter_to_sts_line(struct stream *s, struct channel *rtr, struct hdr_exp *exp)
+{
+ char *cur_ptr, *cur_end;
+ int done;
+ struct http_txn *txn = s->txn;
+ int delta;
+
+
+ if (unlikely(txn->flags & TX_SVDENY))
+ return 1;
+ else if (unlikely(txn->flags & TX_SVALLOW) &&
+ (exp->action == ACT_ALLOW ||
+ exp->action == ACT_DENY))
+ return 0;
+ else if (exp->action == ACT_REMOVE)
+ return 0;
+
+ done = 0;
+
+ cur_ptr = rtr->buf->p;
+ cur_end = cur_ptr + txn->rsp.sl.st.l;
+
+ /* Now we have the status line between cur_ptr and cur_end */
+
+ if (regex_exec_match2(exp->preg, cur_ptr, cur_end-cur_ptr, MAX_MATCH, pmatch, 0)) {
+ switch (exp->action) {
+ case ACT_ALLOW:
+ txn->flags |= TX_SVALLOW;
+ done = 1;
+ break;
+
+ case ACT_DENY:
+ txn->flags |= TX_SVDENY;
+ done = 1;
+ break;
+
+ case ACT_REPLACE:
+ trash.len = exp_replace(trash.str, trash.size, cur_ptr, exp->replace, pmatch);
+ if (trash.len < 0)
+ return -1;
+
+ delta = buffer_replace2(rtr->buf, cur_ptr, cur_end, trash.str, trash.len);
+ /* FIXME: if the user adds a newline in the replacement, the
+ * index will not be recalculated for now, and the new line
+ * will not be counted as a new header.
+ */
+
+ http_msg_move_end(&txn->rsp, delta);
+ cur_end += delta;
+ cur_end = (char *)http_parse_stsline(&txn->rsp,
+ HTTP_MSG_RPVER,
+ cur_ptr, cur_end + 1,
+ NULL, NULL);
+ if (unlikely(!cur_end))
+ return -1;
+
+ /* we have a full response and we know that we have either a CR
+ * or an LF at <ptr>.
+ */
+ txn->status = strl2ui(rtr->buf->p + txn->rsp.sl.st.c, txn->rsp.sl.st.c_l);
+ hdr_idx_set_start(&txn->hdr_idx, txn->rsp.sl.st.l, *cur_end == '\r');
+ /* there is no point trying this regex on headers */
+ return 1;
+ }
+ }
+ return done;
+}
+
+
+
+/*
+ * Apply all the resp filters of proxy <px> to all headers in buffer <rtr> of stream <s>.
+ * Returns 0 if everything is alright, or -1 in case a replacement leads to an
+ * unparsable response.
+ */
+int apply_filters_to_response(struct stream *s, struct channel *rtr, struct proxy *px)
+{
+ struct session *sess = s->sess;
+ struct http_txn *txn = s->txn;
+ struct hdr_exp *exp;
+
+ for (exp = px->rsp_exp; exp; exp = exp->next) {
+ int ret;
+
+ /*
+ * The interleaving of transformations and verdicts
+ * makes it difficult to decide to continue or stop
+ * the evaluation.
+ */
+
+ if (txn->flags & TX_SVDENY)
+ break;
+
+ if ((txn->flags & TX_SVALLOW) &&
+ (exp->action == ACT_ALLOW || exp->action == ACT_DENY ||
+ exp->action == ACT_PASS))
+ continue;
+
+ /* if this filter had a condition, evaluate it now and skip to
+ * next filter if the condition does not match.
+ */
+ if (exp->cond) {
+ ret = acl_exec_cond(exp->cond, px, sess, s, SMP_OPT_DIR_RES|SMP_OPT_FINAL);
+ ret = acl_pass(ret);
+ if (((struct acl_cond *)exp->cond)->pol == ACL_COND_UNLESS)
+ ret = !ret;
+ if (!ret)
+ continue;
+ }
+
+ /* Apply the filter to the status line. */
+ ret = apply_filter_to_sts_line(s, rtr, exp);
+ if (unlikely(ret < 0))
+ return -1;
+
+ if (likely(ret == 0)) {
+ /* The filter did not match the response, so it can be
+ * iterated through all headers.
+ */
+ if (unlikely(apply_filter_to_resp_headers(s, rtr, exp) < 0))
+ return -1;
+ }
+ }
+ return 0;
+}
+
+
+/*
+ * Manage server-side cookies. It can impact performance by about 2% so it is
+ * desirable to call it only when needed. This function is also used when we
+ * just need to know if there is a cookie (eg: for check-cache).
+ */
+void manage_server_side_cookies(struct stream *s, struct channel *res)
+{
+ struct http_txn *txn = s->txn;
+ struct session *sess = s->sess;
+ struct server *srv;
+ int is_cookie2;
+ int cur_idx, old_idx, delta;
+ char *hdr_beg, *hdr_end, *hdr_next;
+ char *prev, *att_beg, *att_end, *equal, *val_beg, *val_end, *next;
+
+ /* Iterate through the headers.
+ * we start with the start line.
+ */
+ old_idx = 0;
+ hdr_next = res->buf->p + hdr_idx_first_pos(&txn->hdr_idx);
+
+ while ((cur_idx = txn->hdr_idx.v[old_idx].next)) {
+ struct hdr_idx_elem *cur_hdr;
+ int val;
+
+ cur_hdr = &txn->hdr_idx.v[cur_idx];
+ hdr_beg = hdr_next;
+ hdr_end = hdr_beg + cur_hdr->len;
+ hdr_next = hdr_end + cur_hdr->cr + 1;
+
+ /* We have one full header between hdr_beg and hdr_end, and the
+ * next header starts at hdr_next. We're only interested in
+ * "Set-Cookie" and "Set-Cookie2" headers.
+ */
+
+ is_cookie2 = 0;
+ prev = hdr_beg + 10;
+ val = http_header_match2(hdr_beg, hdr_end, "Set-Cookie", 10);
+ if (!val) {
+ val = http_header_match2(hdr_beg, hdr_end, "Set-Cookie2", 11);
+ if (!val) {
+ old_idx = cur_idx;
+ continue;
+ }
+ is_cookie2 = 1;
+ prev = hdr_beg + 11;
+ }
+
+ /* OK, right now we know we have a Set-Cookie* at hdr_beg, and
+ * <prev> points to the colon.
+ */
+ txn->flags |= TX_SCK_PRESENT;
+
+ /* Maybe we only wanted to see if there was a Set-Cookie (eg:
+ * check-cache is enabled) and we are not interested in checking
+ * them. Warning, the cookie capture is declared in the frontend.
+ */
+ if (s->be->cookie_name == NULL && sess->fe->capture_name == NULL)
+ return;
+
+ /* OK so now we know we have to process this response cookie.
+ * The format of the Set-Cookie header is slightly different
+ * from the format of the Cookie header in that it does not
+ * support the comma as a cookie delimiter (thus the header
+ * cannot be folded) because the Expires attribute described in
+ * the original Netscape's spec may contain an unquoted date
+ * with a comma inside. We have to live with this because
+ * many browsers don't support Max-Age and some browsers don't
+ * support quoted strings. However the Set-Cookie2 header is
+ * clean.
+ *
+ * We have to keep multiple pointers in order to support cookie
+ * removal at the beginning, middle or end of header without
+ * corrupting the header (in case of set-cookie2). A special
+ * pointer, <scav> points to the beginning of the set-cookie-av
+ * fields after the first semi-colon. The <next> pointer points
+ * either to the end of line (set-cookie) or next unquoted comma
+ * (set-cookie2). All of these headers are valid :
+ *
+ * Set-Cookie: NAME1 = VALUE 1 ; Secure; Path="/"\r\n
+ * Set-Cookie:NAME=VALUE; Secure; Expires=Thu, 01-Jan-1970 00:00:01 GMT\r\n
+ * Set-Cookie: NAME = VALUE ; Secure; Expires=Thu, 01-Jan-1970 00:00:01 GMT\r\n
+ * Set-Cookie2: NAME1 = VALUE 1 ; Max-Age=0, NAME2=VALUE2; Discard\r\n
+ * | | | | | | | | | |
+ * | | | | | | | | +-> next hdr_end <--+
+ * | | | | | | | +------------> scav
+ * | | | | | | +--------------> val_end
+ * | | | | | +--------------------> val_beg
+ * | | | | +----------------------> equal
+ * | | | +------------------------> att_end
+ * | | +----------------------------> att_beg
+ * | +------------------------------> prev
+ * +-----------------------------------------> hdr_beg
+ */
+
+ for (; prev < hdr_end; prev = next) {
+ /* Iterate through all cookies on this line */
+
+ /* find att_beg */
+ att_beg = prev + 1;
+ while (att_beg < hdr_end && http_is_spht[(unsigned char)*att_beg])
+ att_beg++;
+
+ /* find att_end : this is the first character after the last non
+ * space before the equal. It may be equal to hdr_end.
+ */
+ equal = att_end = att_beg;
+
+ while (equal < hdr_end) {
+ if (*equal == '=' || *equal == ';' || (is_cookie2 && *equal == ','))
+ break;
+ if (http_is_spht[(unsigned char)*equal++])
+ continue;
+ att_end = equal;
+ }
+
+ /* here, <equal> points to '=', a delimiter or the end. <att_end>
+ * is between <att_beg> and <equal>, both may be identical.
+ */
+
+ /* look for end of cookie if there is an equal sign */
+ if (equal < hdr_end && *equal == '=') {
+ /* look for the beginning of the value */
+ val_beg = equal + 1;
+ while (val_beg < hdr_end && http_is_spht[(unsigned char)*val_beg])
+ val_beg++;
+
+ /* find the end of the value, respecting quotes */
+ next = find_cookie_value_end(val_beg, hdr_end);
+
+ /* make val_end point to the first white space or delimiter after the value */
+ val_end = next;
+ while (val_end > val_beg && http_is_spht[(unsigned char)*(val_end - 1)])
+ val_end--;
+ } else {
+ /* <equal> points to next comma, semi-colon or EOL */
+ val_beg = val_end = next = equal;
+ }
+
+ if (next < hdr_end) {
+ /* Set-Cookie2 supports multiple cookies, and <next> points to
+ * a comma or semi-colon before the end. So skip all attr-value
+ * pairs and look for the next comma. For Set-Cookie, since
+ * commas are permitted in values, skip to the end.
+ */
+ if (is_cookie2)
+ next = find_hdr_value_end(next, hdr_end);
+ else
+ next = hdr_end;
+ }
+
+ /* Now everything is as on the diagram above */
+
+ /* Ignore cookies with no equal sign */
+ if (equal == val_end)
+ continue;
+
+ /* If there are spaces around the equal sign, we need to
+ * strip them otherwise we'll get trouble for cookie captures,
+ * or even for rewrites. Since this happens extremely rarely,
+ * it does not hurt performance.
+ */
+ if (unlikely(att_end != equal || val_beg > equal + 1)) {
+ int stripped_before = 0;
+ int stripped_after = 0;
+
+ if (att_end != equal) {
+ stripped_before = buffer_replace2(res->buf, att_end, equal, NULL, 0);
+ equal += stripped_before;
+ val_beg += stripped_before;
+ }
+
+ if (val_beg > equal + 1) {
+ stripped_after = buffer_replace2(res->buf, equal + 1, val_beg, NULL, 0);
+ val_beg += stripped_after;
+ stripped_before += stripped_after;
+ }
+
+ val_end += stripped_before;
+ next += stripped_before;
+ hdr_end += stripped_before;
+ hdr_next += stripped_before;
+ cur_hdr->len += stripped_before;
+ http_msg_move_end(&txn->rsp, stripped_before);
+ }
+
+ /* First, let's see if we want to capture this cookie. We check
+ * that we don't already have a server side cookie, because we
+ * can only capture one. Also as an optimisation, we ignore
+ * cookies shorter than the declared name.
+ */
+ if (sess->fe->capture_name != NULL &&
+ txn->srv_cookie == NULL &&
+ (val_end - att_beg >= sess->fe->capture_namelen) &&
+ memcmp(att_beg, sess->fe->capture_name, sess->fe->capture_namelen) == 0) {
+ int log_len = val_end - att_beg;
+ if ((txn->srv_cookie = pool_alloc2(pool2_capture)) == NULL) {
+ Alert("HTTP logging : out of memory.\n");
+ }
+ else {
+ if (log_len > sess->fe->capture_len)
+ log_len = sess->fe->capture_len;
+ memcpy(txn->srv_cookie, att_beg, log_len);
+ txn->srv_cookie[log_len] = 0;
+ }
+ }
+
+ srv = objt_server(s->target);
+ /* now check if we need to process it for persistence */
+ if (!(s->flags & SF_IGNORE_PRST) &&
+ (att_end - att_beg == s->be->cookie_len) && (s->be->cookie_name != NULL) &&
+ (memcmp(att_beg, s->be->cookie_name, att_end - att_beg) == 0)) {
+ /* assume passive cookie by default */
+ txn->flags &= ~TX_SCK_MASK;
+ txn->flags |= TX_SCK_FOUND;
+
+ /* If the cookie is in insert mode on a known server, we'll delete
+ * this occurrence because we'll insert another one later.
+ * We'll delete it too if the "indirect" option is set and we're in
+ * a direct access.
+ */
+ if (s->be->ck_opts & PR_CK_PSV) {
+ /* The "preserve" flag was set, we don't want to touch the
+ * server's cookie.
+ */
+ }
+ else if ((srv && (s->be->ck_opts & PR_CK_INS)) ||
+ ((s->flags & SF_DIRECT) && (s->be->ck_opts & PR_CK_IND))) {
+ /* this cookie must be deleted */
+ if (*prev == ':' && next == hdr_end) {
+ /* whole header */
+ delta = buffer_replace2(res->buf, hdr_beg, hdr_next, NULL, 0);
+ txn->hdr_idx.v[old_idx].next = cur_hdr->next;
+ txn->hdr_idx.used--;
+ cur_hdr->len = 0;
+ cur_idx = old_idx;
+ hdr_next += delta;
+ http_msg_move_end(&txn->rsp, delta);
+ /* note: while both invalid now, <next> and <hdr_end>
+ * are still equal, so the for() will stop as expected.
+ */
+ } else {
+ /* just remove the value */
+ int delta = del_hdr_value(res->buf, &prev, next);
+ next = prev;
+ hdr_end += delta;
+ hdr_next += delta;
+ cur_hdr->len += delta;
+ http_msg_move_end(&txn->rsp, delta);
+ }
+ txn->flags &= ~TX_SCK_MASK;
+ txn->flags |= TX_SCK_DELETED;
+ /* and go on with next cookie */
+ }
+ else if (srv && srv->cookie && (s->be->ck_opts & PR_CK_RW)) {
+ /* replace bytes val_beg->val_end with the cookie name associated
+ * with this server since we know it.
+ */
+ delta = buffer_replace2(res->buf, val_beg, val_end, srv->cookie, srv->cklen);
+ next += delta;
+ hdr_end += delta;
+ hdr_next += delta;
+ cur_hdr->len += delta;
+ http_msg_move_end(&txn->rsp, delta);
+
+ txn->flags &= ~TX_SCK_MASK;
+ txn->flags |= TX_SCK_REPLACED;
+ }
+ else if (srv && srv->cookie && (s->be->ck_opts & PR_CK_PFX)) {
+ /* insert the cookie name associated with this server
+ * before the existing cookie, and insert a delimiter between them.
+ */
+ delta = buffer_replace2(res->buf, val_beg, val_beg, srv->cookie, srv->cklen + 1);
+ next += delta;
+ hdr_end += delta;
+ hdr_next += delta;
+ cur_hdr->len += delta;
+ http_msg_move_end(&txn->rsp, delta);
+
+ val_beg[srv->cklen] = COOKIE_DELIM;
+ txn->flags &= ~TX_SCK_MASK;
+ txn->flags |= TX_SCK_REPLACED;
+ }
+ }
+ /* that's done for this cookie, check the next one on the same
+ * line when next != hdr_end (only if is_cookie2).
+ */
+ }
+ /* check next header */
+ old_idx = cur_idx;
+ }
+}
+
+
+/*
+ * Check if response is cacheable or not. Updates s->flags.
+ */
+void check_response_for_cacheability(struct stream *s, struct channel *rtr)
+{
+ struct http_txn *txn = s->txn;
+ char *p1, *p2;
+
+ char *cur_ptr, *cur_end, *cur_next;
+ int cur_idx;
+
+ if (!(txn->flags & TX_CACHEABLE))
+ return;
+
+ /* Iterate through the headers.
+ * We start with the start line.
+ */
+ cur_idx = 0;
+ cur_next = rtr->buf->p + hdr_idx_first_pos(&txn->hdr_idx);
+
+ while ((cur_idx = txn->hdr_idx.v[cur_idx].next)) {
+ struct hdr_idx_elem *cur_hdr;
+ int val;
+
+ cur_hdr = &txn->hdr_idx.v[cur_idx];
+ cur_ptr = cur_next;
+ cur_end = cur_ptr + cur_hdr->len;
+ cur_next = cur_end + cur_hdr->cr + 1;
+
+ /* We have one full header between cur_ptr and cur_end, and the
+ * next header starts at cur_next. We're only interested in
+ * "Pragma" and "Cache-control" headers.
+ */
+
+ val = http_header_match2(cur_ptr, cur_end, "Pragma", 6);
+ if (val) {
+ if ((cur_end - (cur_ptr + val) >= 8) &&
+ strncasecmp(cur_ptr + val, "no-cache", 8) == 0) {
+ txn->flags &= ~TX_CACHEABLE & ~TX_CACHE_COOK;
+ return;
+ }
+ }
+
+ val = http_header_match2(cur_ptr, cur_end, "Cache-control", 13);
+ if (!val)
+ continue;
+
+ /* OK, right now we know we have a cache-control header at cur_ptr */
+
+ p1 = cur_ptr + val; /* first non-space char after 'cache-control:' */
+
+ if (p1 >= cur_end) /* no more info */
+ continue;
+
+ /* p1 is at the beginning of the value */
+ p2 = p1;
+
+ while (p2 < cur_end && *p2 != '=' && *p2 != ',' && !isspace((unsigned char)*p2))
+ p2++;
+
+ /* we have a complete value between p1 and p2 */
+ if (p2 < cur_end && *p2 == '=') {
+ /* we have something of the form no-cache="set-cookie" */
+ if ((cur_end - p1 >= 21) &&
+ strncasecmp(p1, "no-cache=\"set-cookie", 20) == 0
+ && (p1[20] == '"' || p1[20] == ','))
+ txn->flags &= ~TX_CACHE_COOK;
+ continue;
+ }
+
+ /* OK, so we know that either p2 points to the end of string or to a comma */
+ if (((p2 - p1 == 7) && strncasecmp(p1, "private", 7) == 0) ||
+ ((p2 - p1 == 8) && strncasecmp(p1, "no-cache", 8) == 0) ||
+ ((p2 - p1 == 8) && strncasecmp(p1, "no-store", 8) == 0) ||
+ ((p2 - p1 == 9) && strncasecmp(p1, "max-age=0", 9) == 0) ||
+ ((p2 - p1 == 10) && strncasecmp(p1, "s-maxage=0", 10) == 0)) {
+ txn->flags &= ~TX_CACHEABLE & ~TX_CACHE_COOK;
+ return;
+ }
+
+ if ((p2 - p1 == 6) && strncasecmp(p1, "public", 6) == 0) {
+ txn->flags |= TX_CACHEABLE | TX_CACHE_COOK;
+ continue;
+ }
+ }
+}
+
+
+/*
+ * In a GET, HEAD or POST request, check if the requested URI matches the stats uri
+ * for the current backend.
+ *
+ * It is assumed that the request is either a HEAD, GET, or POST and that the
+ * uri_auth field is valid.
+ *
+ * Returns 1 if stats should be provided, otherwise 0.
+ */
+int stats_check_uri(struct stream_interface *si, struct http_txn *txn, struct proxy *backend)
+{
+ struct uri_auth *uri_auth = backend->uri_auth;
+ struct http_msg *msg = &txn->req;
+ const char *uri = msg->chn->buf->p + msg->sl.rq.u;
+
+ if (!uri_auth)
+ return 0;
+
+ if (txn->meth != HTTP_METH_GET && txn->meth != HTTP_METH_HEAD && txn->meth != HTTP_METH_POST)
+ return 0;
+
+ /* check URI size */
+ if (uri_auth->uri_len > msg->sl.rq.u_l)
+ return 0;
+
+ if (memcmp(uri, uri_auth->uri_prefix, uri_auth->uri_len) != 0)
+ return 0;
+
+ return 1;
+}
+
+/*
+ * Capture a bad request or response and archive it in the proxy's structure.
+ * By default it tries to report the error position as msg->err_pos. However if
+ * this one is not set, it will then report msg->next, which is the last known
+ * parsing point. The function is able to deal with wrapping buffers. It always
+ * displays buffers as a contiguous area starting at buf->p.
+ */
+void http_capture_bad_message(struct error_snapshot *es, struct stream *s,
+ struct http_msg *msg,
+ enum ht_state state, struct proxy *other_end)
+{
+ struct session *sess = strm_sess(s);
+ struct channel *chn = msg->chn;
+ int len1, len2;
+
+ es->len = MIN(chn->buf->i, sizeof(es->buf));
+ len1 = chn->buf->data + chn->buf->size - chn->buf->p;
+ len1 = MIN(len1, es->len);
+ len2 = es->len - len1; /* remaining data if buffer wraps */
+
+ memcpy(es->buf, chn->buf->p, len1);
+ if (len2)
+ memcpy(es->buf + len1, chn->buf->data, len2);
+
+ if (msg->err_pos >= 0)
+ es->pos = msg->err_pos;
+ else
+ es->pos = msg->next;
+
+ es->when = date; // user-visible date
+ es->sid = s->uniq_id;
+ es->srv = objt_server(s->target);
+ es->oe = other_end;
+ if (objt_conn(sess->origin))
+ es->src = __objt_conn(sess->origin)->addr.from;
+ else
+ memset(&es->src, 0, sizeof(es->src));
+
+ es->state = state;
+ es->ev_id = error_snapshot_id++;
+ es->b_flags = chn->flags;
+ es->s_flags = s->flags;
+ es->t_flags = s->txn->flags;
+ es->m_flags = msg->flags;
+ es->b_out = chn->buf->o;
+ es->b_wrap = chn->buf->data + chn->buf->size - chn->buf->p;
+ es->b_tot = chn->total;
+ es->m_clen = msg->chunk_len;
+ es->m_blen = msg->body_len;
+}
+
+/* Return in <vptr> and <vlen> the pointer and length of occurrence <occ> of
+ * header whose name is <hname> of length <hlen>. If <ctx> is null, lookup is
+ * performed over the whole headers. Otherwise it must contain a valid header
+ * context, initialised with ctx->idx=0 for the first lookup in a series. If
+ * <occ> is positive or null, occurrence #occ from the beginning (or last ctx)
+ * is returned. Occ #0 and #1 are equivalent. If <occ> is negative (and no less
+ * than -MAX_HDR_HISTORY), the occurrence is counted from the last one which is
+ * -1. The value fetch stops at commas, so this function is suited for use with
+ * list headers.
+ * The return value is 0 if nothing was found, or non-zero otherwise.
+ */
+unsigned int http_get_hdr(const struct http_msg *msg, const char *hname, int hlen,
+ struct hdr_idx *idx, int occ,
+ struct hdr_ctx *ctx, char **vptr, int *vlen)
+{
+ struct hdr_ctx local_ctx;
+ char *ptr_hist[MAX_HDR_HISTORY];
+ int len_hist[MAX_HDR_HISTORY];
+ unsigned int hist_ptr;
+ int found;
+
+ if (!ctx) {
+ local_ctx.idx = 0;
+ ctx = &local_ctx;
+ }
+
+ if (occ >= 0) {
+ /* search from the beginning */
+ while (http_find_header2(hname, hlen, msg->chn->buf->p, idx, ctx)) {
+ occ--;
+ if (occ <= 0) {
+ *vptr = ctx->line + ctx->val;
+ *vlen = ctx->vlen;
+ return 1;
+ }
+ }
+ return 0;
+ }
+
+ /* negative occurrence, we scan all the list then walk back */
+ if (-occ > MAX_HDR_HISTORY)
+ return 0;
+
+ found = hist_ptr = 0;
+ while (http_find_header2(hname, hlen, msg->chn->buf->p, idx, ctx)) {
+ ptr_hist[hist_ptr] = ctx->line + ctx->val;
+ len_hist[hist_ptr] = ctx->vlen;
+ if (++hist_ptr >= MAX_HDR_HISTORY)
+ hist_ptr = 0;
+ found++;
+ }
+ if (-occ > found)
+ return 0;
+ /* OK now we have the last occurrence in [hist_ptr-1], and we need to
+ * find occurrence -occ. 0 <= hist_ptr < MAX_HDR_HISTORY, and we have
+ * -MAX_HDR_HISTORY <= occ <= -1, so we add MAX_HDR_HISTORY to keep
+ * the sum non-negative before wrapping it back into the valid range.
+ */
+ */
+ hist_ptr += occ + MAX_HDR_HISTORY;
+ if (hist_ptr >= MAX_HDR_HISTORY)
+ hist_ptr -= MAX_HDR_HISTORY;
+ *vptr = ptr_hist[hist_ptr];
+ *vlen = len_hist[hist_ptr];
+ return 1;
+}
+
+/* Return in <vptr> and <vlen> the pointer and length of occurrence <occ> of
+ * header whose name is <hname> of length <hlen>. If <ctx> is null, lookup is
+ * performed over the whole headers. Otherwise it must contain a valid header
+ * context, initialised with ctx->idx=0 for the first lookup in a series. If
+ * <occ> is positive or null, occurrence #occ from the beginning (or last ctx)
+ * is returned. Occ #0 and #1 are equivalent. If <occ> is negative (and no less
+ * than -MAX_HDR_HISTORY), the occurrence is counted from the last one which is
+ * -1. This function differs from http_get_hdr() in that it only returns full
+ * line header values and does not stop at commas.
+ * The return value is 0 if nothing was found, or non-zero otherwise.
+ */
+unsigned int http_get_fhdr(const struct http_msg *msg, const char *hname, int hlen,
+ struct hdr_idx *idx, int occ,
+ struct hdr_ctx *ctx, char **vptr, int *vlen)
+{
+ struct hdr_ctx local_ctx;
+ char *ptr_hist[MAX_HDR_HISTORY];
+ int len_hist[MAX_HDR_HISTORY];
+ unsigned int hist_ptr;
+ int found;
+
+ if (!ctx) {
+ local_ctx.idx = 0;
+ ctx = &local_ctx;
+ }
+
+ if (occ >= 0) {
+ /* search from the beginning */
+ while (http_find_full_header2(hname, hlen, msg->chn->buf->p, idx, ctx)) {
+ occ--;
+ if (occ <= 0) {
+ *vptr = ctx->line + ctx->val;
+ *vlen = ctx->vlen;
+ return 1;
+ }
+ }
+ return 0;
+ }
+
+ /* negative occurrence, we scan all the list then walk back */
+ if (-occ > MAX_HDR_HISTORY)
+ return 0;
+
+ found = hist_ptr = 0;
+ while (http_find_full_header2(hname, hlen, msg->chn->buf->p, idx, ctx)) {
+ ptr_hist[hist_ptr] = ctx->line + ctx->val;
+ len_hist[hist_ptr] = ctx->vlen;
+ if (++hist_ptr >= MAX_HDR_HISTORY)
+ hist_ptr = 0;
+ found++;
+ }
+ if (-occ > found)
+ return 0;
+ /* OK now we have the last occurrence in [hist_ptr-1], and we need to
+ * find occurrence -occ. As in http_get_hdr(), we add MAX_HDR_HISTORY
+ * to keep the sum non-negative before wrapping: <hist_ptr> is
+ * unsigned, so [hist_ptr+occ] alone would wrap around when the sum
+ * is negative and index out of the history array.
+ */
+ hist_ptr += occ + MAX_HDR_HISTORY;
+ if (hist_ptr >= MAX_HDR_HISTORY)
+ hist_ptr -= MAX_HDR_HISTORY;
+ *vptr = ptr_hist[hist_ptr];
+ *vlen = len_hist[hist_ptr];
+ return 1;
+}
+
+/*
+ * Print a debug line with a header. Always stop at the first CR or LF char,
+ * so it is safe to pass it a full buffer if needed.
+ */
+void debug_hdr(const char *dir, struct stream *s, const char *start, const char *end)
+{
+ struct session *sess = strm_sess(s);
+ int max;
+
+ chunk_printf(&trash, "%08x:%s.%s[%04x:%04x]: ", s->uniq_id, s->be->id,
+ dir,
+ objt_conn(sess->origin) ? (unsigned short)objt_conn(sess->origin)->t.sock.fd : -1,
+ objt_conn(s->si[1].end) ? (unsigned short)objt_conn(s->si[1].end)->t.sock.fd : -1);
+
+ for (max = 0; start + max < end; max++)
+ if (start[max] == '\r' || start[max] == '\n')
+ break;
+
+ UBOUND(max, trash.size - trash.len - 3);
+ trash.len += strlcpy2(trash.str + trash.len, start, max + 1);
+ trash.str[trash.len++] = '\n';
+ shut_your_big_mouth_gcc(write(1, trash.str, trash.len));
+}
+
+
+/* Allocate a new HTTP transaction for stream <s> unless there is one already.
+ * The hdr_idx is allocated as well. In case of allocation failure, everything
+ * allocated is freed and NULL is returned. Otherwise the new transaction is
+ * assigned to the stream and returned.
+ */
+struct http_txn *http_alloc_txn(struct stream *s)
+{
+ struct http_txn *txn = s->txn;
+
+ if (txn)
+ return txn;
+
+ txn = pool_alloc2(pool2_http_txn);
+ if (!txn)
+ return txn;
+
+ txn->hdr_idx.size = global.tune.max_http_hdr;
+ txn->hdr_idx.v = pool_alloc2(pool2_hdr_idx);
+ if (!txn->hdr_idx.v) {
+ pool_free2(pool2_http_txn, txn);
+ return NULL;
+ }
+
+ s->txn = txn;
+ return txn;
+}
+
+void http_txn_reset_req(struct http_txn *txn)
+{
+ txn->req.flags = 0;
+ txn->req.sol = txn->req.eol = txn->req.eoh = 0; /* relative to the buffer */
+ txn->req.next = 0;
+ txn->req.chunk_len = 0LL;
+ txn->req.body_len = 0LL;
+ txn->req.msg_state = HTTP_MSG_RQBEFORE; /* at the very beginning of the request */
+}
+
+void http_txn_reset_res(struct http_txn *txn)
+{
+ txn->rsp.flags = 0;
+ txn->rsp.sol = txn->rsp.eol = txn->rsp.eoh = 0; /* relative to the buffer */
+ txn->rsp.next = 0;
+ txn->rsp.chunk_len = 0LL;
+ txn->rsp.body_len = 0LL;
+ txn->rsp.msg_state = HTTP_MSG_RPBEFORE; /* at the very beginning of the response */
+}
+
+/*
+ * Initialize a new HTTP transaction for stream <s>. It is assumed that all
+ * the required fields are properly allocated and that we only need to (re)init
+ * them. This should be used before processing any new request.
+ */
+void http_init_txn(struct stream *s)
+{
+ struct http_txn *txn = s->txn;
+ struct proxy *fe = strm_fe(s);
+
+ txn->flags = 0;
+ txn->status = -1;
+
+ txn->cookie_first_date = 0;
+ txn->cookie_last_date = 0;
+
+ txn->srv_cookie = NULL;
+ txn->cli_cookie = NULL;
+ txn->uri = NULL;
+
+ http_txn_reset_req(txn);
+ http_txn_reset_res(txn);
+
+ txn->req.chn = &s->req;
+ txn->rsp.chn = &s->res;
+
+ txn->auth.method = HTTP_AUTH_UNKNOWN;
+
+ txn->req.err_pos = txn->rsp.err_pos = -2; /* block buggy requests/responses */
+ if (fe->options2 & PR_O2_REQBUG_OK)
+ txn->req.err_pos = -1; /* let buggy requests pass */
+
+ if (txn->hdr_idx.v)
+ hdr_idx_init(&txn->hdr_idx);
+
+ vars_init(&s->vars_txn, SCOPE_TXN);
+ vars_init(&s->vars_reqres, SCOPE_REQ);
+}
+
+/* to be used at the end of a transaction */
+void http_end_txn(struct stream *s)
+{
+ struct http_txn *txn = s->txn;
+ struct proxy *fe = strm_fe(s);
+
+ /* release any possible compression context */
+ if (s->flags & SF_COMP_READY)
+ s->comp_algo->end(&s->comp_ctx);
+ s->comp_algo = NULL;
+ s->flags &= ~SF_COMP_READY;
+
+ /* these ones will have been dynamically allocated */
+ pool_free2(pool2_requri, txn->uri);
+ pool_free2(pool2_capture, txn->cli_cookie);
+ pool_free2(pool2_capture, txn->srv_cookie);
+ pool_free2(pool2_uniqueid, s->unique_id);
+
+ s->unique_id = NULL;
+ txn->uri = NULL;
+ txn->srv_cookie = NULL;
+ txn->cli_cookie = NULL;
+
+ if (s->req_cap) {
+ struct cap_hdr *h;
+ for (h = fe->req_cap; h; h = h->next)
+ pool_free2(h->pool, s->req_cap[h->index]);
+ memset(s->req_cap, 0, fe->nb_req_cap * sizeof(void *));
+ }
+
+ if (s->res_cap) {
+ struct cap_hdr *h;
+ for (h = fe->rsp_cap; h; h = h->next)
+ pool_free2(h->pool, s->res_cap[h->index]);
+ memset(s->res_cap, 0, fe->nb_rsp_cap * sizeof(void *));
+ }
+
+ vars_prune(&s->vars_txn, s);
+ vars_prune(&s->vars_reqres, s);
+}
+
+/* to be used at the end of a transaction to prepare a new one */
+void http_reset_txn(struct stream *s)
+{
+ http_end_txn(s);
+ http_init_txn(s);
+
+ /* reinitialise the current rule list pointer to NULL. We are sure
+ * that no rule list will ever match the NULL pointer.
+ */
+ s->current_rule_list = NULL;
+
+ s->be = strm_fe(s);
+ s->logs.logwait = strm_fe(s)->to_log;
+ s->logs.level = 0;
+ stream_del_srv_conn(s);
+ s->target = NULL;
+ /* re-init store persistence */
+ s->store_count = 0;
+ s->uniq_id = global.req_count++;
+
+ s->pend_pos = NULL;
+
+ s->req.flags |= CF_READ_DONTWAIT; /* one read is usually enough */
+
+ /* We must trim any excess data from the response buffer, because we
+ * may have blocked an invalid response from a server that we don't
+ * want to accidentally forward once we disable the analysers, nor do
+ * we want that data to come along with the next response. A typical
+ * example of such data would be from a buggy server responding to
+ * a HEAD with some data, or sending more than the advertised
+ * content-length.
+ */
+ if (unlikely(s->res.buf->i))
+ s->res.buf->i = 0;
+
+ s->req.rto = strm_fe(s)->timeout.client;
+ s->req.wto = TICK_ETERNITY;
+
+ s->res.rto = TICK_ETERNITY;
+ s->res.wto = strm_fe(s)->timeout.client;
+
+ s->req.rex = TICK_ETERNITY;
+ s->req.wex = TICK_ETERNITY;
+ s->req.analyse_exp = TICK_ETERNITY;
+ s->res.rex = TICK_ETERNITY;
+ s->res.wex = TICK_ETERNITY;
+ s->res.analyse_exp = TICK_ETERNITY;
+}
+
+void free_http_res_rules(struct list *r)
+{
+ struct act_rule *tr, *pr;
+
+ list_for_each_entry_safe(pr, tr, r, list) {
+ LIST_DEL(&pr->list);
+ regex_free(&pr->arg.hdr_add.re);
+ free(pr);
+ }
+}
+
+void free_http_req_rules(struct list *r)
+{
+ struct act_rule *tr, *pr;
+
+ list_for_each_entry_safe(pr, tr, r, list) {
+ LIST_DEL(&pr->list);
+ if (pr->action == ACT_HTTP_REQ_AUTH)
+ free(pr->arg.auth.realm);
+
+ regex_free(&pr->arg.hdr_add.re);
+ free(pr);
+ }
+}
+
+/* parse an "http-request" rule */
+struct act_rule *parse_http_req_cond(const char **args, const char *file, int linenum, struct proxy *proxy)
+{
+ struct act_rule *rule;
+ struct action_kw *custom = NULL;
+ int cur_arg;
+ char *error;
+
+ rule = (struct act_rule*)calloc(1, sizeof(struct act_rule));
+ if (!rule) {
+ Alert("parsing [%s:%d]: out of memory.\n", file, linenum);
+ goto out_err;
+ }
+
+ rule->deny_status = HTTP_ERR_403;
+ if (!strcmp(args[0], "allow")) {
+ rule->action = ACT_ACTION_ALLOW;
+ cur_arg = 1;
+ } else if (!strcmp(args[0], "deny") || !strcmp(args[0], "block")) {
+ int code;
+ int hc;
+
+ rule->action = ACT_ACTION_DENY;
+ cur_arg = 1;
+ if (strcmp(args[cur_arg], "deny_status") == 0) {
+ cur_arg++;
+ if (!args[cur_arg]) {
+ Alert("parsing [%s:%d] : error detected in %s '%s' while parsing 'http-request %s' rule : missing status code.\n",
+ file, linenum, proxy_type_str(proxy), proxy->id, args[0]);
+ goto out_err;
+ }
+
+ code = atol(args[cur_arg]);
+ cur_arg++;
+ for (hc = 0; hc < HTTP_ERR_SIZE; hc++) {
+ if (http_err_codes[hc] == code) {
+ rule->deny_status = hc;
+ break;
+ }
+ }
+
+ if (hc >= HTTP_ERR_SIZE) {
+ Warning("parsing [%s:%d] : status code %d not handled, using default code 403.\n",
+ file, linenum, code);
+ }
+ }
+ } else if (!strcmp(args[0], "tarpit")) {
+ rule->action = ACT_HTTP_REQ_TARPIT;
+ cur_arg = 1;
+ } else if (!strcmp(args[0], "auth")) {
+ rule->action = ACT_HTTP_REQ_AUTH;
+ cur_arg = 1;
+
+ while (*args[cur_arg]) {
+ if (!strcmp(args[cur_arg], "realm")) {
+ rule->arg.auth.realm = strdup(args[cur_arg + 1]);
+ cur_arg += 2;
+ continue;
+ } else
+ break;
+ }
+ } else if (!strcmp(args[0], "set-nice")) {
+ rule->action = ACT_HTTP_SET_NICE;
+ cur_arg = 1;
+
+ if (!*args[cur_arg] ||
+ (*args[cur_arg + 1] && strcmp(args[cur_arg + 1], "if") != 0 && strcmp(args[cur_arg + 1], "unless") != 0)) {
+ Alert("parsing [%s:%d]: 'http-request %s' expects exactly 1 argument (integer value).\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+ rule->arg.nice = atoi(args[cur_arg]);
+ if (rule->arg.nice < -1024)
+ rule->arg.nice = -1024;
+ else if (rule->arg.nice > 1024)
+ rule->arg.nice = 1024;
+ cur_arg++;
+ } else if (!strcmp(args[0], "set-tos")) {
+#ifdef IP_TOS
+ char *err;
+ rule->action = ACT_HTTP_SET_TOS;
+ cur_arg = 1;
+
+ if (!*args[cur_arg] ||
+ (*args[cur_arg + 1] && strcmp(args[cur_arg + 1], "if") != 0 && strcmp(args[cur_arg + 1], "unless") != 0)) {
+ Alert("parsing [%s:%d]: 'http-request %s' expects exactly 1 argument (integer/hex value).\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+
+ rule->arg.tos = strtol(args[cur_arg], &err, 0);
+ if (err && *err != '\0') {
+ Alert("parsing [%s:%d]: invalid character starting at '%s' in 'http-request %s' (integer/hex value expected).\n",
+ file, linenum, err, args[0]);
+ goto out_err;
+ }
+ cur_arg++;
+#else
+ Alert("parsing [%s:%d]: 'http-request %s' is not supported on this platform (IP_TOS undefined).\n", file, linenum, args[0]);
+ goto out_err;
+#endif
+ } else if (!strcmp(args[0], "set-mark")) {
+#ifdef SO_MARK
+ char *err;
+ rule->action = ACT_HTTP_SET_MARK;
+ cur_arg = 1;
+
+ if (!*args[cur_arg] ||
+ (*args[cur_arg + 1] && strcmp(args[cur_arg + 1], "if") != 0 && strcmp(args[cur_arg + 1], "unless") != 0)) {
+ Alert("parsing [%s:%d]: 'http-request %s' expects exactly 1 argument (integer/hex value).\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+
+ rule->arg.mark = strtoul(args[cur_arg], &err, 0);
+ if (err && *err != '\0') {
+ Alert("parsing [%s:%d]: invalid character starting at '%s' in 'http-request %s' (integer/hex value expected).\n",
+ file, linenum, err, args[0]);
+ goto out_err;
+ }
+ cur_arg++;
+ global.last_checks |= LSTCHK_NETADM;
+#else
+ Alert("parsing [%s:%d]: 'http-request %s' is not supported on this platform (SO_MARK undefined).\n", file, linenum, args[0]);
+ goto out_err;
+#endif
+ } else if (!strcmp(args[0], "set-log-level")) {
+ rule->action = ACT_HTTP_SET_LOGL;
+ cur_arg = 1;
+
+ if (!*args[cur_arg] ||
+ (*args[cur_arg + 1] && strcmp(args[cur_arg + 1], "if") != 0 && strcmp(args[cur_arg + 1], "unless") != 0)) {
+ bad_log_level:
+ Alert("parsing [%s:%d]: 'http-request %s' expects exactly 1 argument (log level name or 'silent').\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+ if (strcmp(args[cur_arg], "silent") == 0)
+ rule->arg.loglevel = -1;
+ else if ((rule->arg.loglevel = get_log_level(args[cur_arg]) + 1) == 0)
+ goto bad_log_level;
+ cur_arg++;
+ } else if (strcmp(args[0], "add-header") == 0 || strcmp(args[0], "set-header") == 0) {
+ rule->action = *args[0] == 'a' ? ACT_HTTP_ADD_HDR : ACT_HTTP_SET_HDR;
+ cur_arg = 1;
+
+ if (!*args[cur_arg] || !*args[cur_arg+1] ||
+ (*args[cur_arg+2] && strcmp(args[cur_arg+2], "if") != 0 && strcmp(args[cur_arg+2], "unless") != 0)) {
+ Alert("parsing [%s:%d]: 'http-request %s' expects exactly 2 arguments.\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+
+ rule->arg.hdr_add.name = strdup(args[cur_arg]);
+ rule->arg.hdr_add.name_len = strlen(rule->arg.hdr_add.name);
+ LIST_INIT(&rule->arg.hdr_add.fmt);
+
+ proxy->conf.args.ctx = ARGC_HRQ;
+ parse_logformat_string(args[cur_arg + 1], proxy, &rule->arg.hdr_add.fmt, LOG_OPT_HTTP,
+ (proxy->cap & PR_CAP_FE) ? SMP_VAL_FE_HRQ_HDR : SMP_VAL_BE_HRQ_HDR,
+ file, linenum);
+ free(proxy->conf.lfs_file);
+ proxy->conf.lfs_file = strdup(proxy->conf.args.file);
+ proxy->conf.lfs_line = proxy->conf.args.line;
+ cur_arg += 2;
+ } else if (strcmp(args[0], "replace-header") == 0 || strcmp(args[0], "replace-value") == 0) {
+ rule->action = args[0][8] == 'h' ? ACT_HTTP_REPLACE_HDR : ACT_HTTP_REPLACE_VAL;
+ cur_arg = 1;
+
+ if (!*args[cur_arg] || !*args[cur_arg+1] || !*args[cur_arg+2] ||
+ (*args[cur_arg+3] && strcmp(args[cur_arg+3], "if") != 0 && strcmp(args[cur_arg+3], "unless") != 0)) {
+ Alert("parsing [%s:%d]: 'http-request %s' expects exactly 3 arguments.\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+
+ rule->arg.hdr_add.name = strdup(args[cur_arg]);
+ rule->arg.hdr_add.name_len = strlen(rule->arg.hdr_add.name);
+ LIST_INIT(&rule->arg.hdr_add.fmt);
+
+ error = NULL;
+ if (!regex_comp(args[cur_arg + 1], &rule->arg.hdr_add.re, 1, 1, &error)) {
+ Alert("parsing [%s:%d] : '%s' : %s.\n", file, linenum,
+ args[cur_arg + 1], error);
+ free(error);
+ goto out_err;
+ }
+
+ proxy->conf.args.ctx = ARGC_HRQ;
+ parse_logformat_string(args[cur_arg + 2], proxy, &rule->arg.hdr_add.fmt, LOG_OPT_HTTP,
+ (proxy->cap & PR_CAP_FE) ? SMP_VAL_FE_HRQ_HDR : SMP_VAL_BE_HRQ_HDR,
+ file, linenum);
+
+ free(proxy->conf.lfs_file);
+ proxy->conf.lfs_file = strdup(proxy->conf.args.file);
+ proxy->conf.lfs_line = proxy->conf.args.line;
+ cur_arg += 3;
+ } else if (strcmp(args[0], "del-header") == 0) {
+ rule->action = ACT_HTTP_DEL_HDR;
+ cur_arg = 1;
+
+ if (!*args[cur_arg] ||
+ (*args[cur_arg+1] && strcmp(args[cur_arg+1], "if") != 0 && strcmp(args[cur_arg+1], "unless") != 0)) {
+ Alert("parsing [%s:%d]: 'http-request %s' expects exactly 1 argument.\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+
+ rule->arg.hdr_add.name = strdup(args[cur_arg]);
+ rule->arg.hdr_add.name_len = strlen(rule->arg.hdr_add.name);
+
+ proxy->conf.args.ctx = ARGC_HRQ;
+ free(proxy->conf.lfs_file);
+ proxy->conf.lfs_file = strdup(proxy->conf.args.file);
+ proxy->conf.lfs_line = proxy->conf.args.line;
+ cur_arg += 1;
+ } else if (strncmp(args[0], "track-sc", 8) == 0 &&
+ args[0][8] >= '0' && args[0][8] < '0' + MAX_SESS_STKCTR &&
+ args[0][9] == '\0') { /* track-sc 0..9 */
+ struct sample_expr *expr;
+ unsigned int where;
+ char *err = NULL;
+
+ cur_arg = 1;
+ proxy->conf.args.ctx = ARGC_TRK;
+
+ expr = sample_parse_expr((char **)args, &cur_arg, file, linenum, &err, &proxy->conf.args);
+ if (!expr) {
+ Alert("parsing [%s:%d] : error detected in %s '%s' while parsing 'http-request %s' rule : %s.\n",
+ file, linenum, proxy_type_str(proxy), proxy->id, args[0], err);
+ free(err);
+ goto out_err;
+ }
+
+ where = 0;
+ if (proxy->cap & PR_CAP_FE)
+ where |= SMP_VAL_FE_HRQ_HDR;
+ if (proxy->cap & PR_CAP_BE)
+ where |= SMP_VAL_BE_HRQ_HDR;
+
+ if (!(expr->fetch->val & where)) {
+ Alert("parsing [%s:%d] : error detected in %s '%s' while parsing 'http-request %s' rule :"
+ " fetch method '%s' extracts information from '%s', none of which is available here.\n",
+ file, linenum, proxy_type_str(proxy), proxy->id, args[0],
+ args[cur_arg-1], sample_src_names(expr->fetch->use));
+ free(expr);
+ goto out_err;
+ }
+
+ if (strcmp(args[cur_arg], "table") == 0) {
+ cur_arg++;
+ if (!args[cur_arg]) {
+ Alert("parsing [%s:%d] : error detected in %s '%s' while parsing 'http-request %s' rule : missing table name.\n",
+ file, linenum, proxy_type_str(proxy), proxy->id, args[0]);
+ free(expr);
+ goto out_err;
+ }
+ /* we copy the table name for now, it will be resolved later */
+ rule->arg.trk_ctr.table.n = strdup(args[cur_arg]);
+ cur_arg++;
+ }
+ rule->arg.trk_ctr.expr = expr;
+ rule->action = ACT_ACTION_TRK_SC0 + args[0][8] - '0';
+ } else if (strcmp(args[0], "redirect") == 0) {
+ struct redirect_rule *redir;
+ char *errmsg = NULL;
+
+ if ((redir = http_parse_redirect_rule(file, linenum, proxy, (const char **)args + 1, &errmsg, 1, 0)) == NULL) {
+ Alert("parsing [%s:%d] : error detected in %s '%s' while parsing 'http-request %s' rule : %s.\n",
+ file, linenum, proxy_type_str(proxy), proxy->id, args[0], errmsg);
+ goto out_err;
+ }
+
+ /* this redirect rule might already contain a parsed condition which
+ * we'll pass to the http-request rule.
+ */
+ rule->action = ACT_HTTP_REDIR;
+ rule->arg.redir = redir;
+ rule->cond = redir->cond;
+ redir->cond = NULL;
+ cur_arg = 2;
+ return rule;
+ } else if (strncmp(args[0], "add-acl", 7) == 0) {
+ /* http-request add-acl(<reference (acl name)>) <key pattern> */
+ rule->action = ACT_HTTP_ADD_ACL;
+ /*
+ * '+ 8' for 'add-acl('
+ * '- 9' for 'add-acl(' + trailing ')'
+ */
+ rule->arg.map.ref = my_strndup(args[0] + 8, strlen(args[0]) - 9);
+
+ cur_arg = 1;
+
+ if (!*args[cur_arg] ||
+ (*args[cur_arg+1] && strcmp(args[cur_arg+1], "if") != 0 && strcmp(args[cur_arg+1], "unless") != 0)) {
+ Alert("parsing [%s:%d]: 'http-request %s' expects exactly 1 argument.\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+
+ LIST_INIT(&rule->arg.map.key);
+ proxy->conf.args.ctx = ARGC_HRQ;
+ parse_logformat_string(args[cur_arg], proxy, &rule->arg.map.key, LOG_OPT_HTTP,
+ (proxy->cap & PR_CAP_FE) ? SMP_VAL_FE_HRQ_HDR : SMP_VAL_BE_HRQ_HDR,
+ file, linenum);
+ free(proxy->conf.lfs_file);
+ proxy->conf.lfs_file = strdup(proxy->conf.args.file);
+ proxy->conf.lfs_line = proxy->conf.args.line;
+ cur_arg += 1;
+ } else if (strncmp(args[0], "del-acl", 7) == 0) {
+ /* http-request del-acl(<reference (acl name)>) <key pattern> */
+ rule->action = ACT_HTTP_DEL_ACL;
+ /*
+ * '+ 8' for 'del-acl('
+ * '- 9' for 'del-acl(' + trailing ')'
+ */
+ rule->arg.map.ref = my_strndup(args[0] + 8, strlen(args[0]) - 9);
+
+ cur_arg = 1;
+
+ if (!*args[cur_arg] ||
+ (*args[cur_arg+1] && strcmp(args[cur_arg+1], "if") != 0 && strcmp(args[cur_arg+1], "unless") != 0)) {
+ Alert("parsing [%s:%d]: 'http-request %s' expects exactly 1 argument.\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+
+ LIST_INIT(&rule->arg.map.key);
+ proxy->conf.args.ctx = ARGC_HRQ;
+ parse_logformat_string(args[cur_arg], proxy, &rule->arg.map.key, LOG_OPT_HTTP,
+ (proxy->cap & PR_CAP_FE) ? SMP_VAL_FE_HRQ_HDR : SMP_VAL_BE_HRQ_HDR,
+ file, linenum);
+ free(proxy->conf.lfs_file);
+ proxy->conf.lfs_file = strdup(proxy->conf.args.file);
+ proxy->conf.lfs_line = proxy->conf.args.line;
+ cur_arg += 1;
+ } else if (strncmp(args[0], "del-map", 7) == 0) {
+ /* http-request del-map(<reference (map name)>) <key pattern> */
+ rule->action = ACT_HTTP_DEL_MAP;
+ /*
+ * '+ 8' for 'del-map('
+ * '- 9' for 'del-map(' + trailing ')'
+ */
+ rule->arg.map.ref = my_strndup(args[0] + 8, strlen(args[0]) - 9);
+
+ cur_arg = 1;
+
+ if (!*args[cur_arg] ||
+ (*args[cur_arg+1] && strcmp(args[cur_arg+1], "if") != 0 && strcmp(args[cur_arg+1], "unless") != 0)) {
+ Alert("parsing [%s:%d]: 'http-request %s' expects exactly 1 argument.\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+
+ LIST_INIT(&rule->arg.map.key);
+ proxy->conf.args.ctx = ARGC_HRQ;
+ parse_logformat_string(args[cur_arg], proxy, &rule->arg.map.key, LOG_OPT_HTTP,
+ (proxy->cap & PR_CAP_FE) ? SMP_VAL_FE_HRQ_HDR : SMP_VAL_BE_HRQ_HDR,
+ file, linenum);
+ free(proxy->conf.lfs_file);
+ proxy->conf.lfs_file = strdup(proxy->conf.args.file);
+ proxy->conf.lfs_line = proxy->conf.args.line;
+ cur_arg += 1;
+ } else if (strncmp(args[0], "set-map", 7) == 0) {
+ /* http-request set-map(<reference (map name)>) <key pattern> <value pattern> */
+ rule->action = ACT_HTTP_SET_MAP;
+ /*
+ * '+ 8' for 'set-map('
+ * '- 9' for 'set-map(' + trailing ')'
+ */
+ rule->arg.map.ref = my_strndup(args[0] + 8, strlen(args[0]) - 9);
+
+ cur_arg = 1;
+
+ if (!*args[cur_arg] || !*args[cur_arg+1] ||
+ (*args[cur_arg+2] && strcmp(args[cur_arg+2], "if") != 0 && strcmp(args[cur_arg+2], "unless") != 0)) {
+ Alert("parsing [%s:%d]: 'http-request %s' expects exactly 2 arguments.\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+
+ LIST_INIT(&rule->arg.map.key);
+ LIST_INIT(&rule->arg.map.value);
+ proxy->conf.args.ctx = ARGC_HRQ;
+
+ /* key pattern */
+ parse_logformat_string(args[cur_arg], proxy, &rule->arg.map.key, LOG_OPT_HTTP,
+ (proxy->cap & PR_CAP_FE) ? SMP_VAL_FE_HRQ_HDR : SMP_VAL_BE_HRQ_HDR,
+ file, linenum);
+
+ /* value pattern */
+ parse_logformat_string(args[cur_arg + 1], proxy, &rule->arg.map.value, LOG_OPT_HTTP,
+ (proxy->cap & PR_CAP_FE) ? SMP_VAL_FE_HRQ_HDR : SMP_VAL_BE_HRQ_HDR,
+ file, linenum);
+ free(proxy->conf.lfs_file);
+ proxy->conf.lfs_file = strdup(proxy->conf.args.file);
+ proxy->conf.lfs_line = proxy->conf.args.line;
+
+ cur_arg += 2;
+ } else if (strncmp(args[0], "set-src", 7) == 0) {
+ struct sample_expr *expr;
+ unsigned int where;
+ char *err = NULL;
+
+ cur_arg = 1;
+ proxy->conf.args.ctx = ARGC_HRQ;
+
+ expr = sample_parse_expr((char **)args, &cur_arg, file, linenum, &err, &proxy->conf.args);
+ if (!expr) {
+ Alert("parsing [%s:%d] : error detected in %s '%s' while parsing 'http-request %s' rule : %s.\n",
+ file, linenum, proxy_type_str(proxy), proxy->id, args[0], err);
+ free(err);
+ goto out_err;
+ }
+
+ where = 0;
+ if (proxy->cap & PR_CAP_FE)
+ where |= SMP_VAL_FE_HRQ_HDR;
+ if (proxy->cap & PR_CAP_BE)
+ where |= SMP_VAL_BE_HRQ_HDR;
+
+ if (!(expr->fetch->val & where)) {
+ Alert("parsing [%s:%d] : error detected in %s '%s' while parsing 'http-request %s' rule :"
+ " fetch method '%s' extracts information from '%s', none of which is available here.\n",
+ file, linenum, proxy_type_str(proxy), proxy->id, args[0],
+ args[cur_arg-1], sample_src_names(expr->fetch->use));
+ free(expr);
+ goto out_err;
+ }
+
+ rule->arg.expr = expr;
+ rule->action = ACT_HTTP_REQ_SET_SRC;
+ } else if (((custom = action_http_req_custom(args[0])) != NULL)) {
+ char *errmsg = NULL;
+ cur_arg = 1;
+ /* try in the module list */
+ rule->from = ACT_F_HTTP_REQ;
+ rule->kw = custom;
+ if (custom->parse(args, &cur_arg, proxy, rule, &errmsg) == ACT_RET_PRS_ERR) {
+ Alert("parsing [%s:%d] : error detected in %s '%s' while parsing 'http-request %s' rule : %s.\n",
+ file, linenum, proxy_type_str(proxy), proxy->id, args[0], errmsg);
+ free(errmsg);
+ goto out_err;
+ }
+ } else {
+ action_build_list(&http_req_keywords.list, &trash);
+ Alert("parsing [%s:%d]: 'http-request' expects 'allow', 'deny', 'auth', 'redirect', "
+ "'tarpit', 'add-header', 'set-header', 'replace-header', 'replace-value', 'set-nice', "
+ "'set-tos', 'set-mark', 'set-log-level', 'add-acl', 'del-acl', 'del-map', 'set-map', "
+ "'set-src'%s%s, but got '%s'%s.\n",
+ file, linenum, *trash.str ? ", " : "", trash.str, args[0], *args[0] ? "" : " (missing argument)");
+ goto out_err;
+ }
+
+ if (strcmp(args[cur_arg], "if") == 0 || strcmp(args[cur_arg], "unless") == 0) {
+ struct acl_cond *cond;
+ char *errmsg = NULL;
+
+ if ((cond = build_acl_cond(file, linenum, proxy, args+cur_arg, &errmsg)) == NULL) {
+ Alert("parsing [%s:%d] : error detected while parsing an 'http-request %s' condition : %s.\n",
+ file, linenum, args[0], errmsg);
+ free(errmsg);
+ goto out_err;
+ }
+ rule->cond = cond;
+ }
+ else if (*args[cur_arg]) {
+ Alert("parsing [%s:%d]: 'http-request %s' expects 'realm' for 'auth' or"
+ " either 'if' or 'unless' followed by a condition but found '%s'.\n",
+ file, linenum, args[0], args[cur_arg]);
+ goto out_err;
+ }
+
+ return rule;
+ out_err:
+ free(rule);
+ return NULL;
+}
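+
+/* Illustrative haproxy.cfg usage of the map/ACL actions parsed above; the
+ * file paths and header name are example placeholders, not part of this
+ * patch:
+ *
+ *   http-request set-map(/etc/haproxy/rates.map) %[src] %[req.fhdr(X-Rate)]
+ *   http-request del-acl(/etc/haproxy/blocked.acl) %[src]
+ */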
+
+/* parse an "http-respose" rule */
+struct act_rule *parse_http_res_cond(const char **args, const char *file, int linenum, struct proxy *proxy)
+{
+ struct act_rule *rule;
+ struct action_kw *custom = NULL;
+ int cur_arg;
+ char *error;
+
+ rule = calloc(1, sizeof(*rule));
+ if (!rule) {
+ Alert("parsing [%s:%d]: out of memory.\n", file, linenum);
+ goto out_err;
+ }
+
+ if (!strcmp(args[0], "allow")) {
+ rule->action = ACT_ACTION_ALLOW;
+ cur_arg = 1;
+ } else if (!strcmp(args[0], "deny")) {
+ rule->action = ACT_ACTION_DENY;
+ cur_arg = 1;
+ } else if (!strcmp(args[0], "set-nice")) {
+ rule->action = ACT_HTTP_SET_NICE;
+ cur_arg = 1;
+
+ if (!*args[cur_arg] ||
+ (*args[cur_arg + 1] && strcmp(args[cur_arg + 1], "if") != 0 && strcmp(args[cur_arg + 1], "unless") != 0)) {
+ Alert("parsing [%s:%d]: 'http-response %s' expects exactly 1 argument (integer value).\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+ rule->arg.nice = atoi(args[cur_arg]);
+ if (rule->arg.nice < -1024)
+ rule->arg.nice = -1024;
+ else if (rule->arg.nice > 1024)
+ rule->arg.nice = 1024;
+ cur_arg++;
+ } else if (!strcmp(args[0], "set-tos")) {
+#ifdef IP_TOS
+ char *err;
+ rule->action = ACT_HTTP_SET_TOS;
+ cur_arg = 1;
+
+ if (!*args[cur_arg] ||
+ (*args[cur_arg + 1] && strcmp(args[cur_arg + 1], "if") != 0 && strcmp(args[cur_arg + 1], "unless") != 0)) {
+ Alert("parsing [%s:%d]: 'http-response %s' expects exactly 1 argument (integer/hex value).\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+
+ rule->arg.tos = strtol(args[cur_arg], &err, 0);
+ if (err && *err != '\0') {
+ Alert("parsing [%s:%d]: invalid character starting at '%s' in 'http-response %s' (integer/hex value expected).\n",
+ file, linenum, err, args[0]);
+ goto out_err;
+ }
+ cur_arg++;
+#else
+ Alert("parsing [%s:%d]: 'http-response %s' is not supported on this platform (IP_TOS undefined).\n", file, linenum, args[0]);
+ goto out_err;
+#endif
+ } else if (!strcmp(args[0], "set-mark")) {
+#ifdef SO_MARK
+ char *err;
+ rule->action = ACT_HTTP_SET_MARK;
+ cur_arg = 1;
+
+ if (!*args[cur_arg] ||
+ (*args[cur_arg + 1] && strcmp(args[cur_arg + 1], "if") != 0 && strcmp(args[cur_arg + 1], "unless") != 0)) {
+ Alert("parsing [%s:%d]: 'http-response %s' expects exactly 1 argument (integer/hex value).\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+
+ rule->arg.mark = strtoul(args[cur_arg], &err, 0);
+ if (err && *err != '\0') {
+ Alert("parsing [%s:%d]: invalid character starting at '%s' in 'http-response %s' (integer/hex value expected).\n",
+ file, linenum, err, args[0]);
+ goto out_err;
+ }
+ cur_arg++;
+ global.last_checks |= LSTCHK_NETADM;
+#else
+ Alert("parsing [%s:%d]: 'http-response %s' is not supported on this platform (SO_MARK undefined).\n", file, linenum, args[0]);
+ goto out_err;
+#endif
+ } else if (!strcmp(args[0], "set-log-level")) {
+ rule->action = ACT_HTTP_SET_LOGL;
+ cur_arg = 1;
+
+ if (!*args[cur_arg] ||
+ (*args[cur_arg + 1] && strcmp(args[cur_arg + 1], "if") != 0 && strcmp(args[cur_arg + 1], "unless") != 0)) {
+ bad_log_level:
+ Alert("parsing [%s:%d]: 'http-response %s' expects exactly 1 argument (log level name or 'silent').\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+ if (strcmp(args[cur_arg], "silent") == 0)
+ rule->arg.loglevel = -1;
+ else if ((rule->arg.loglevel = get_log_level(args[cur_arg]) + 1) == 0)
+ goto bad_log_level;
+ cur_arg++;
+ } else if (strcmp(args[0], "add-header") == 0 || strcmp(args[0], "set-header") == 0) {
+ rule->action = *args[0] == 'a' ? ACT_HTTP_ADD_HDR : ACT_HTTP_SET_HDR;
+ cur_arg = 1;
+
+ if (!*args[cur_arg] || !*args[cur_arg+1] ||
+ (*args[cur_arg+2] && strcmp(args[cur_arg+2], "if") != 0 && strcmp(args[cur_arg+2], "unless") != 0)) {
+ Alert("parsing [%s:%d]: 'http-response %s' expects exactly 2 arguments.\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+
+ rule->arg.hdr_add.name = strdup(args[cur_arg]);
+ rule->arg.hdr_add.name_len = strlen(rule->arg.hdr_add.name);
+ LIST_INIT(&rule->arg.hdr_add.fmt);
+
+ proxy->conf.args.ctx = ARGC_HRS;
+ parse_logformat_string(args[cur_arg + 1], proxy, &rule->arg.hdr_add.fmt, LOG_OPT_HTTP,
+ (proxy->cap & PR_CAP_BE) ? SMP_VAL_BE_HRS_HDR : SMP_VAL_FE_HRS_HDR,
+ file, linenum);
+ free(proxy->conf.lfs_file);
+ proxy->conf.lfs_file = strdup(proxy->conf.args.file);
+ proxy->conf.lfs_line = proxy->conf.args.line;
+ cur_arg += 2;
+ } else if (strcmp(args[0], "replace-header") == 0 || strcmp(args[0], "replace-value") == 0) {
+ rule->action = args[0][8] == 'h' ? ACT_HTTP_REPLACE_HDR : ACT_HTTP_REPLACE_VAL;
+ cur_arg = 1;
+
+ if (!*args[cur_arg] || !*args[cur_arg+1] || !*args[cur_arg+2] ||
+ (*args[cur_arg+3] && strcmp(args[cur_arg+3], "if") != 0 && strcmp(args[cur_arg+3], "unless") != 0)) {
+ Alert("parsing [%s:%d]: 'http-response %s' expects exactly 3 arguments.\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+
+ rule->arg.hdr_add.name = strdup(args[cur_arg]);
+ rule->arg.hdr_add.name_len = strlen(rule->arg.hdr_add.name);
+ LIST_INIT(&rule->arg.hdr_add.fmt);
+
+ error = NULL;
+ if (!regex_comp(args[cur_arg + 1], &rule->arg.hdr_add.re, 1, 1, &error)) {
+ Alert("parsing [%s:%d] : '%s' : %s.\n", file, linenum,
+ args[cur_arg + 1], error);
+ free(error);
+ goto out_err;
+ }
+
+ proxy->conf.args.ctx = ARGC_HRS;
+ parse_logformat_string(args[cur_arg + 2], proxy, &rule->arg.hdr_add.fmt, LOG_OPT_HTTP,
+ (proxy->cap & PR_CAP_BE) ? SMP_VAL_BE_HRS_HDR : SMP_VAL_FE_HRS_HDR,
+ file, linenum);
+
+ free(proxy->conf.lfs_file);
+ proxy->conf.lfs_file = strdup(proxy->conf.args.file);
+ proxy->conf.lfs_line = proxy->conf.args.line;
+ cur_arg += 3;
+ } else if (strcmp(args[0], "del-header") == 0) {
+ rule->action = ACT_HTTP_DEL_HDR;
+ cur_arg = 1;
+
+ if (!*args[cur_arg] ||
+ (*args[cur_arg+1] && strcmp(args[cur_arg+1], "if") != 0 && strcmp(args[cur_arg+1], "unless") != 0)) {
+ Alert("parsing [%s:%d]: 'http-response %s' expects exactly 1 argument.\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+
+ rule->arg.hdr_add.name = strdup(args[cur_arg]);
+ rule->arg.hdr_add.name_len = strlen(rule->arg.hdr_add.name);
+
+ proxy->conf.args.ctx = ARGC_HRS;
+ free(proxy->conf.lfs_file);
+ proxy->conf.lfs_file = strdup(proxy->conf.args.file);
+ proxy->conf.lfs_line = proxy->conf.args.line;
+ cur_arg += 1;
+ } else if (strncmp(args[0], "add-acl", 7) == 0) {
+ /* http-response add-acl(<reference (acl name)>) <key pattern> */
+ rule->action = ACT_HTTP_ADD_ACL;
+ /*
+ * '+ 8' for 'add-acl('
+ * '- 9' for 'add-acl(' + trailing ')'
+ */
+ rule->arg.map.ref = my_strndup(args[0] + 8, strlen(args[0]) - 9);
+
+ cur_arg = 1;
+
+ if (!*args[cur_arg] ||
+ (*args[cur_arg+1] && strcmp(args[cur_arg+1], "if") != 0 && strcmp(args[cur_arg+1], "unless") != 0)) {
+ Alert("parsing [%s:%d]: 'http-response %s' expects exactly 1 argument.\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+
+ LIST_INIT(&rule->arg.map.key);
+ proxy->conf.args.ctx = ARGC_HRS;
+ parse_logformat_string(args[cur_arg], proxy, &rule->arg.map.key, LOG_OPT_HTTP,
+ (proxy->cap & PR_CAP_BE) ? SMP_VAL_BE_HRS_HDR : SMP_VAL_FE_HRS_HDR,
+ file, linenum);
+ free(proxy->conf.lfs_file);
+ proxy->conf.lfs_file = strdup(proxy->conf.args.file);
+ proxy->conf.lfs_line = proxy->conf.args.line;
+
+ cur_arg += 1;
+ } else if (strncmp(args[0], "del-acl", 7) == 0) {
+ /* http-response del-acl(<reference (acl name)>) <key pattern> */
+ rule->action = ACT_HTTP_DEL_ACL;
+ /*
+ * '+ 8' for 'del-acl('
+ * '- 9' for 'del-acl(' + trailing ')'
+ */
+ rule->arg.map.ref = my_strndup(args[0] + 8, strlen(args[0]) - 9);
+
+ cur_arg = 1;
+
+ if (!*args[cur_arg] ||
+ (*args[cur_arg+1] && strcmp(args[cur_arg+1], "if") != 0 && strcmp(args[cur_arg+1], "unless") != 0)) {
+ Alert("parsing [%s:%d]: 'http-response %s' expects exactly 1 argument.\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+
+ LIST_INIT(&rule->arg.map.key);
+ proxy->conf.args.ctx = ARGC_HRS;
+ parse_logformat_string(args[cur_arg], proxy, &rule->arg.map.key, LOG_OPT_HTTP,
+ (proxy->cap & PR_CAP_BE) ? SMP_VAL_BE_HRS_HDR : SMP_VAL_FE_HRS_HDR,
+ file, linenum);
+ free(proxy->conf.lfs_file);
+ proxy->conf.lfs_file = strdup(proxy->conf.args.file);
+ proxy->conf.lfs_line = proxy->conf.args.line;
+ cur_arg += 1;
+ } else if (strncmp(args[0], "del-map", 7) == 0) {
+ /* http-response del-map(<reference (map name)>) <key pattern> */
+ rule->action = ACT_HTTP_DEL_MAP;
+ /*
+ * '+ 8' for 'del-map('
+ * '- 9' for 'del-map(' + trailing ')'
+ */
+ rule->arg.map.ref = my_strndup(args[0] + 8, strlen(args[0]) - 9);
+
+ cur_arg = 1;
+
+ if (!*args[cur_arg] ||
+ (*args[cur_arg+1] && strcmp(args[cur_arg+1], "if") != 0 && strcmp(args[cur_arg+1], "unless") != 0)) {
+ Alert("parsing [%s:%d]: 'http-response %s' expects exactly 1 argument.\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+
+ LIST_INIT(&rule->arg.map.key);
+ proxy->conf.args.ctx = ARGC_HRS;
+ parse_logformat_string(args[cur_arg], proxy, &rule->arg.map.key, LOG_OPT_HTTP,
+ (proxy->cap & PR_CAP_BE) ? SMP_VAL_BE_HRS_HDR : SMP_VAL_FE_HRS_HDR,
+ file, linenum);
+ free(proxy->conf.lfs_file);
+ proxy->conf.lfs_file = strdup(proxy->conf.args.file);
+ proxy->conf.lfs_line = proxy->conf.args.line;
+ cur_arg += 1;
+ } else if (strncmp(args[0], "set-map", 7) == 0) {
+ /* http-response set-map(<reference (map name)>) <key pattern> <value pattern> */
+ rule->action = ACT_HTTP_SET_MAP;
+ /*
+ * '+ 8' for 'set-map('
+ * '- 9' for 'set-map(' + trailing ')'
+ */
+ rule->arg.map.ref = my_strndup(args[0] + 8, strlen(args[0]) - 9);
+
+ cur_arg = 1;
+
+ if (!*args[cur_arg] || !*args[cur_arg+1] ||
+ (*args[cur_arg+2] && strcmp(args[cur_arg+2], "if") != 0 && strcmp(args[cur_arg+2], "unless") != 0)) {
+ Alert("parsing [%s:%d]: 'http-response %s' expects exactly 2 arguments.\n",
+ file, linenum, args[0]);
+ goto out_err;
+ }
+
+ LIST_INIT(&rule->arg.map.key);
+ LIST_INIT(&rule->arg.map.value);
+
+ proxy->conf.args.ctx = ARGC_HRS;
+
+ /* key pattern */
+ parse_logformat_string(args[cur_arg], proxy, &rule->arg.map.key, LOG_OPT_HTTP,
+ (proxy->cap & PR_CAP_BE) ? SMP_VAL_BE_HRS_HDR : SMP_VAL_FE_HRS_HDR,
+ file, linenum);
+
+ /* value pattern */
+ parse_logformat_string(args[cur_arg + 1], proxy, &rule->arg.map.value, LOG_OPT_HTTP,
+ (proxy->cap & PR_CAP_BE) ? SMP_VAL_BE_HRS_HDR : SMP_VAL_FE_HRS_HDR,
+ file, linenum);
+
+ free(proxy->conf.lfs_file);
+ proxy->conf.lfs_file = strdup(proxy->conf.args.file);
+ proxy->conf.lfs_line = proxy->conf.args.line;
+
+ cur_arg += 2;
+ } else if (strcmp(args[0], "redirect") == 0) {
+ struct redirect_rule *redir;
+ char *errmsg = NULL;
+
+ if ((redir = http_parse_redirect_rule(file, linenum, proxy, (const char **)args + 1, &errmsg, 1, 1)) == NULL) {
+ Alert("parsing [%s:%d] : error detected in %s '%s' while parsing 'http-response %s' rule : %s.\n",
+ file, linenum, proxy_type_str(proxy), proxy->id, args[0], errmsg);
+ goto out_err;
+ }
+
+ /* this redirect rule might already contain a parsed condition which
+ * we'll pass to the http-response rule.
+ */
+ rule->action = ACT_HTTP_REDIR;
+ rule->arg.redir = redir;
+ rule->cond = redir->cond;
+ redir->cond = NULL;
+ cur_arg = 2;
+ return rule;
+ } else if (((custom = action_http_res_custom(args[0])) != NULL)) {
+ char *errmsg = NULL;
+ cur_arg = 1;
+ /* try in the module list */
+ rule->from = ACT_F_HTTP_RES;
+ rule->kw = custom;
+ if (custom->parse(args, &cur_arg, proxy, rule, &errmsg) == ACT_RET_PRS_ERR) {
+ Alert("parsing [%s:%d] : error detected in %s '%s' while parsing 'http-response %s' rule : %s.\n",
+ file, linenum, proxy_type_str(proxy), proxy->id, args[0], errmsg);
+ free(errmsg);
+ goto out_err;
+ }
+ } else {
+ action_build_list(&http_res_keywords.list, &trash);
+ Alert("parsing [%s:%d]: 'http-response' expects 'allow', 'deny', 'redirect', "
+ "'add-header', 'del-header', 'set-header', 'replace-header', 'replace-value', 'set-nice', "
+ "'set-tos', 'set-mark', 'set-log-level', 'add-acl', 'del-acl', 'del-map', "
+ "'set-map'%s%s, but got '%s'%s.\n",
+ file, linenum, *trash.str ? ", " : "", trash.str, args[0], *args[0] ? "" : " (missing argument)");
+ goto out_err;
+ }
+
+ if (strcmp(args[cur_arg], "if") == 0 || strcmp(args[cur_arg], "unless") == 0) {
+ struct acl_cond *cond;
+ char *errmsg = NULL;
+
+ if ((cond = build_acl_cond(file, linenum, proxy, args+cur_arg, &errmsg)) == NULL) {
+ Alert("parsing [%s:%d] : error detected while parsing an 'http-response %s' condition : %s.\n",
+ file, linenum, args[0], errmsg);
+ free(errmsg);
+ goto out_err;
+ }
+ rule->cond = cond;
+ }
+ else if (*args[cur_arg]) {
+ Alert("parsing [%s:%d]: 'http-response %s' expects"
+ " either 'if' or 'unless' followed by a condition but found '%s'.\n",
+ file, linenum, args[0], args[cur_arg]);
+ goto out_err;
+ }
+
+ return rule;
+ out_err:
+ free(rule);
+ return NULL;
+}
+
+/* Parses a redirect rule. Returns the redirect rule on success or NULL on error,
+ * with <err> filled with the error message. If <use_fmt> is not null, builds a
+ * dynamic log-format rule instead of a static string. Parameter <dir> indicates
+ * the direction of the rule, and equals 0 for request, non-zero for responses.
+ */
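+/* Illustrative haproxy.cfg lines this parser accepts; paths, codes and
+ * cookie names are example values, not taken from this patch:
+ *
+ *   redirect prefix /new code 301 if { path_beg /old }
+ *   redirect scheme https code 302 if !{ ssl_fc }
+ *   redirect location / clear-cookie SESSID= drop-query
+ */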
+struct redirect_rule *http_parse_redirect_rule(const char *file, int linenum, struct proxy *curproxy,
+ const char **args, char **errmsg, int use_fmt, int dir)
+{
+ struct redirect_rule *rule;
+ int cur_arg;
+ int type = REDIRECT_TYPE_NONE;
+ int code = 302;
+ const char *destination = NULL;
+ const char *cookie = NULL;
+ int cookie_set = 0;
+ unsigned int flags = REDIRECT_FLAG_NONE;
+ struct acl_cond *cond = NULL;
+
+ cur_arg = 0;
+ while (*(args[cur_arg])) {
+ if (strcmp(args[cur_arg], "location") == 0) {
+ if (!*args[cur_arg + 1])
+ goto missing_arg;
+
+ type = REDIRECT_TYPE_LOCATION;
+ cur_arg++;
+ destination = args[cur_arg];
+ }
+ else if (strcmp(args[cur_arg], "prefix") == 0) {
+ if (!*args[cur_arg + 1])
+ goto missing_arg;
+ type = REDIRECT_TYPE_PREFIX;
+ cur_arg++;
+ destination = args[cur_arg];
+ }
+ else if (strcmp(args[cur_arg], "scheme") == 0) {
+ if (!*args[cur_arg + 1])
+ goto missing_arg;
+
+ type = REDIRECT_TYPE_SCHEME;
+ cur_arg++;
+ destination = args[cur_arg];
+ }
+ else if (strcmp(args[cur_arg], "set-cookie") == 0) {
+ if (!*args[cur_arg + 1])
+ goto missing_arg;
+
+ cur_arg++;
+ cookie = args[cur_arg];
+ cookie_set = 1;
+ }
+ else if (strcmp(args[cur_arg], "clear-cookie") == 0) {
+ if (!*args[cur_arg + 1])
+ goto missing_arg;
+
+ cur_arg++;
+ cookie = args[cur_arg];
+ cookie_set = 0;
+ }
+ else if (strcmp(args[cur_arg], "code") == 0) {
+ if (!*args[cur_arg + 1])
+ goto missing_arg;
+
+ cur_arg++;
+ code = atol(args[cur_arg]);
+ if (code < 301 || code > 308 || (code > 303 && code < 307)) {
+ memprintf(errmsg,
+ "'%s': unsupported HTTP code '%s' (must be one of 301, 302, 303, 307 or 308)",
+ args[cur_arg - 1], args[cur_arg]);
+ return NULL;
+ }
+ }
+ else if (!strcmp(args[cur_arg],"drop-query")) {
+ flags |= REDIRECT_FLAG_DROP_QS;
+ }
+ else if (!strcmp(args[cur_arg],"append-slash")) {
+ flags |= REDIRECT_FLAG_APPEND_SLASH;
+ }
+ else if (strcmp(args[cur_arg], "if") == 0 ||
+ strcmp(args[cur_arg], "unless") == 0) {
+ cond = build_acl_cond(file, linenum, curproxy, (const char **)args + cur_arg, errmsg);
+ if (!cond) {
+ memprintf(errmsg, "error in condition: %s", *errmsg);
+ return NULL;
+ }
+ break;
+ }
+ else {
+ memprintf(errmsg,
+ "expects 'code', 'prefix', 'location', 'scheme', 'set-cookie', 'clear-cookie', 'drop-query' or 'append-slash' (was '%s')",
+ args[cur_arg]);
+ return NULL;
+ }
+ cur_arg++;
+ }
+
+ if (type == REDIRECT_TYPE_NONE) {
+ memprintf(errmsg, "redirection type expected ('prefix', 'location', or 'scheme')");
+ return NULL;
+ }
+
+ if (dir && type != REDIRECT_TYPE_LOCATION) {
+ memprintf(errmsg, "response only supports redirect type 'location'");
+ return NULL;
+ }
+
+ rule = calloc(1, sizeof(*rule));
+ if (!rule) {
+ memprintf(errmsg, "out of memory");
+ return NULL;
+ }
+ rule->cond = cond;
+ LIST_INIT(&rule->rdr_fmt);
+
+ if (!use_fmt) {
+ /* old-style static redirect rule */
+ rule->rdr_str = strdup(destination);
+ rule->rdr_len = strlen(destination);
+ }
+ else {
+ /* log-format based redirect rule */
+
+ /* Parse destination. Note that in the REDIRECT_TYPE_PREFIX case,
+ * if prefix == "/", we don't want to add anything, otherwise it
+ * makes it hard for the user to configure a self-redirection.
+ */
+ curproxy->conf.args.ctx = ARGC_RDR;
+ if (!(type == REDIRECT_TYPE_PREFIX && destination[0] == '/' && destination[1] == '\0')) {
+ parse_logformat_string(destination, curproxy, &rule->rdr_fmt, LOG_OPT_HTTP,
+ dir ? (curproxy->cap & PR_CAP_FE) ? SMP_VAL_FE_HRS_HDR : SMP_VAL_BE_HRS_HDR
+ : (curproxy->cap & PR_CAP_FE) ? SMP_VAL_FE_HRQ_HDR : SMP_VAL_BE_HRQ_HDR,
+ file, linenum);
+ free(curproxy->conf.lfs_file);
+ curproxy->conf.lfs_file = strdup(curproxy->conf.args.file);
+ curproxy->conf.lfs_line = curproxy->conf.args.line;
+ }
+ }
+
+ if (cookie) {
+ /* depending on cookie_set, either we want to set the cookie, or to clear it.
+ * a clear consists in appending "; path=/; Max-Age=0;" at the end.
+ */
+ rule->cookie_len = strlen(cookie);
+ if (cookie_set) {
+ rule->cookie_str = malloc(rule->cookie_len + 10);
+ memcpy(rule->cookie_str, cookie, rule->cookie_len);
+ memcpy(rule->cookie_str + rule->cookie_len, "; path=/;", 10);
+ rule->cookie_len += 9;
+ } else {
+ rule->cookie_str = malloc(rule->cookie_len + 21);
+ memcpy(rule->cookie_str, cookie, rule->cookie_len);
+ memcpy(rule->cookie_str + rule->cookie_len, "; path=/; Max-Age=0;", 21);
+ rule->cookie_len += 20;
+ }
+ }
+ rule->type = type;
+ rule->code = code;
+ rule->flags = flags;
+ LIST_INIT(&rule->list);
+ return rule;
+
+ missing_arg:
+ memprintf(errmsg, "missing argument for '%s'", args[cur_arg]);
+ return NULL;
+}
+
+/************************************************************************/
+/* The code below is dedicated to ACL parsing and matching */
+/************************************************************************/
+
+
+/* This function ensures that the prerequisites for an L7 fetch are ready,
+ * which means that a request or response is ready. If some data is missing,
+ * a parsing attempt is made. This is useful in TCP-based ACLs which are able
+ * to extract data from L7. If <req_vol> is non-null during a request prefetch,
+ * another test is made to ensure the required information is not gone.
+ *
+ * The function returns :
+ * 0 with SMP_F_MAY_CHANGE in the sample flags if some data is missing to
+ * decide whether or not an HTTP message is present ;
+ * 0 if the requested data cannot be fetched or if it is certain that
+ * we'll never have any HTTP message there ;
+ * 1 if an HTTP message is ready
+ */
+int smp_prefetch_http(struct proxy *px, struct stream *s, unsigned int opt,
+ const struct arg *args, struct sample *smp, int req_vol)
+{
+ struct http_txn *txn;
+ struct http_msg *msg;
+
+ /* Note: this function may only be used from places where
+ * http_init_txn() has already been done, and implies that <s>,
+ * <txn>, and <hdr_idx.v> are properly set. An extra check protects
+ * against an eventual mistake in the fetch capability matrix.
+ */
+
+ if (!s)
+ return 0;
+ if (!s->txn) {
+ if (unlikely(!http_alloc_txn(s)))
+ return 0; /* not enough memory */
+ http_init_txn(s);
+ }
+ txn = s->txn;
+ msg = &txn->req;
+
+ /* Check for a dependency on a request */
+ smp->data.type = SMP_T_BOOL;
+
+ if ((opt & SMP_OPT_DIR) == SMP_OPT_DIR_REQ) {
+ /* If the buffer does not leave enough free space at the end,
+ * we must first realign it.
+ */
+ if (s->req.buf->p > s->req.buf->data &&
+ s->req.buf->i + s->req.buf->p > s->req.buf->data + s->req.buf->size - global.tune.maxrewrite)
+ buffer_slow_realign(s->req.buf);
+
+ if (unlikely(txn->req.msg_state < HTTP_MSG_BODY)) {
+ if (msg->msg_state == HTTP_MSG_ERROR)
+ return 0;
+
+ /* Try to decode HTTP request */
+ if (likely(msg->next < s->req.buf->i))
+ http_msg_analyzer(msg, &txn->hdr_idx);
+
+ /* Still no valid request ? */
+ if (unlikely(msg->msg_state < HTTP_MSG_BODY)) {
+ if ((msg->msg_state == HTTP_MSG_ERROR) ||
+ buffer_full(s->req.buf, global.tune.maxrewrite)) {
+ return 0;
+ }
+ /* wait for final state */
+ smp->flags |= SMP_F_MAY_CHANGE;
+ return 0;
+ }
+
+ /* OK we just got a valid HTTP request. We have some minor
+ * preparation to perform so that further checks can rely
+ * on HTTP tests.
+ */
+
+ /* If the request was parsed but was too large, we must absolutely
+ * return an error so that it is not processed. At the moment this
+ * cannot happen, but if the parsers are to change in the future,
+ * we want this check to be maintained.
+ */
+ if (unlikely(s->req.buf->i + s->req.buf->p >
+ s->req.buf->data + s->req.buf->size - global.tune.maxrewrite)) {
+ msg->msg_state = HTTP_MSG_ERROR;
+ smp->data.u.sint = 1;
+ return 1;
+ }
+
+ txn->meth = find_http_meth(msg->chn->buf->p, msg->sl.rq.m_l);
+ if (txn->meth == HTTP_METH_GET || txn->meth == HTTP_METH_HEAD)
+ s->flags |= SF_REDIRECTABLE;
+
+ if (unlikely(msg->sl.rq.v_l == 0) && !http_upgrade_v09_to_v10(txn))
+ return 0;
+ }
+
+ if (req_vol && txn->rsp.msg_state != HTTP_MSG_RPBEFORE) {
+ return 0; /* data might have moved and indexes changed */
+ }
+
+ /* otherwise everything's ready for the request */
+ }
+ else {
+ /* Check for a dependency on a response */
+ if (txn->rsp.msg_state < HTTP_MSG_BODY) {
+ smp->flags |= SMP_F_MAY_CHANGE;
+ return 0;
+ }
+ }
+
+ /* everything's OK */
+ smp->data.u.sint = 1;
+ return 1;
+}
+
+/* 1. Check on METHOD
+ * We use the pre-parsed method if it is known, and store its number as an
+ * integer. If it is unknown, we use the pointer and the length.
+ */
+static int pat_parse_meth(const char *text, struct pattern *pattern, int mflags, char **err)
+{
+ int len, meth;
+
+ len = strlen(text);
+ meth = find_http_meth(text, len);
+
+ pattern->val.i = meth;
+ if (meth == HTTP_METH_OTHER) {
+ pattern->ptr.str = (char *)text;
+ pattern->len = len;
+ }
+ else {
+ pattern->ptr.str = NULL;
+ pattern->len = 0;
+ }
+ return 1;
+}
+
+/* This function fetches the method of current HTTP request and stores
+ * it in the global pattern struct as a chunk. There are two possibilities :
+ * - if the method is known (not HTTP_METH_OTHER), its identifier is stored
+ * in <len> and <ptr> is NULL ;
+ * - if the method is unknown (HTTP_METH_OTHER), <ptr> points to the text and
+ * <len> to its length.
+ * This is intended to be used with pat_match_meth() only.
+ */
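+/* Illustrative configuration using this fetch; ACL names are example
+ * placeholders. A well-known method matches by identifier, any other
+ * (e.g. PURGE) falls back to the string comparison in pat_match_meth():
+ *
+ *   acl is_post  method POST
+ *   acl is_purge method PURGE
+ *   http-request deny if is_purge
+ */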
+static int
+smp_fetch_meth(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int meth;
+ struct http_txn *txn = smp->strm->txn;
+
+ CHECK_HTTP_MESSAGE_FIRST_PERM();
+
+ meth = txn->meth;
+ smp->data.type = SMP_T_METH;
+ smp->data.u.meth.meth = meth;
+ if (meth == HTTP_METH_OTHER) {
+ if (txn->rsp.msg_state != HTTP_MSG_RPBEFORE)
+ /* ensure the indexes are not affected */
+ return 0;
+ smp->flags |= SMP_F_CONST;
+ smp->data.u.meth.str.len = txn->req.sl.rq.m_l;
+ smp->data.u.meth.str.str = txn->req.chn->buf->p;
+ }
+ smp->flags |= SMP_F_VOL_1ST;
+ return 1;
+}
+
+/* See above how the method is stored in the global pattern */
+static struct pattern *pat_match_meth(struct sample *smp, struct pattern_expr *expr, int fill)
+{
+ int icase;
+ struct pattern_list *lst;
+ struct pattern *pattern;
+
+ list_for_each_entry(lst, &expr->patterns, list) {
+ pattern = &lst->pat;
+
+ /* well-known method */
+ if (pattern->val.i != HTTP_METH_OTHER) {
+ if (smp->data.u.meth.meth == pattern->val.i)
+ return pattern;
+ else
+ continue;
+ }
+
+ /* Other method, we must compare the strings */
+ if (pattern->len != smp->data.u.meth.str.len)
+ continue;
+
+ icase = expr->mflags & PAT_MF_IGNORE_CASE;
+ if ((icase && strncasecmp(pattern->ptr.str, smp->data.u.meth.str.str, smp->data.u.meth.str.len) == 0) ||
+ (!icase && strncmp(pattern->ptr.str, smp->data.u.meth.str.str, smp->data.u.meth.str.len) == 0))
+ return pattern;
+ }
+ return NULL;
+}
+
+static int
+smp_fetch_rqver(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct http_txn *txn = smp->strm->txn;
+ char *ptr;
+ int len;
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ len = txn->req.sl.rq.v_l;
+ ptr = txn->req.chn->buf->p + txn->req.sl.rq.v;
+
+ while ((len-- > 0) && (*ptr++ != '/'));
+ if (len <= 0)
+ return 0;
+
+ smp->data.type = SMP_T_STR;
+ smp->data.u.str.str = ptr;
+ smp->data.u.str.len = len;
+
+ smp->flags = SMP_F_VOL_1ST | SMP_F_CONST;
+ return 1;
+}
+
+static int
+smp_fetch_stver(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct http_txn *txn;
+ char *ptr;
+ int len;
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ txn = smp->strm->txn;
+ if (txn->rsp.msg_state < HTTP_MSG_BODY)
+ return 0;
+
+ len = txn->rsp.sl.st.v_l;
+ ptr = txn->rsp.chn->buf->p;
+
+ while ((len-- > 0) && (*ptr++ != '/'));
+ if (len <= 0)
+ return 0;
+
+ smp->data.type = SMP_T_STR;
+ smp->data.u.str.str = ptr;
+ smp->data.u.str.len = len;
+
+ smp->flags = SMP_F_VOL_1ST | SMP_F_CONST;
+ return 1;
+}
+
+/* 3. Check on Status Code. We manipulate integers here. */
+static int
+smp_fetch_stcode(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct http_txn *txn;
+ char *ptr;
+ int len;
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ txn = smp->strm->txn;
+ if (txn->rsp.msg_state < HTTP_MSG_BODY)
+ return 0;
+
+ len = txn->rsp.sl.st.c_l;
+ ptr = txn->rsp.chn->buf->p + txn->rsp.sl.st.c;
+
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = __strl2ui(ptr, len);
+ smp->flags = SMP_F_VOL_1ST;
+ return 1;
+}
+
+/* returns the longest available part of the body. This requires that the body
+ * has been waited for using http-buffer-request.
+ */
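+/* Illustrative usage: the body must be buffered first, then it can be
+ * matched; the pattern below is an example value:
+ *
+ *   option http-buffer-request
+ *   http-request deny if { req.body -m sub "<script" }
+ */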
+static int
+smp_fetch_body(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct http_txn *txn = smp->strm->txn;
+ struct http_msg *msg;
+ unsigned long len;
+ unsigned long block1;
+ char *body;
+ struct chunk *temp;
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ if ((smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_REQ)
+ msg = &txn->req;
+ else
+ msg = &txn->rsp;
+
+ len = http_body_bytes(msg);
+ body = b_ptr(msg->chn->buf, -http_data_rewind(msg));
+
+ block1 = len;
+ if (block1 > msg->chn->buf->data + msg->chn->buf->size - body)
+ block1 = msg->chn->buf->data + msg->chn->buf->size - body;
+
+ if (block1 == len) {
+ /* buffer is not wrapped (or empty) */
+ smp->data.type = SMP_T_BIN;
+ smp->data.u.str.str = body;
+ smp->data.u.str.len = len;
+ smp->flags = SMP_F_VOL_TEST | SMP_F_CONST;
+ }
+ else {
+ /* buffer is wrapped, we need to defragment it */
+ temp = get_trash_chunk();
+ memcpy(temp->str, body, block1);
+ memcpy(temp->str + block1, msg->chn->buf->data, len - block1);
+ smp->data.type = SMP_T_BIN;
+ smp->data.u.str.str = temp->str;
+ smp->data.u.str.len = len;
+ smp->flags = SMP_F_VOL_TEST;
+ }
+ return 1;
+}
+
+
+/* returns the available length of the body. This requires that the body
+ * has been waited for using http-buffer-request.
+ */
+static int
+smp_fetch_body_len(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct http_txn *txn = smp->strm->txn;
+ struct http_msg *msg;
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ if ((smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_REQ)
+ msg = &txn->req;
+ else
+ msg = &txn->rsp;
+
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = http_body_bytes(msg);
+
+ smp->flags = SMP_F_VOL_TEST;
+ return 1;
+}
+
+
+/* returns the advertised length of the body, or the advertised size of the
+ * chunks available in the buffer. This requires that the body has been waited
+ * for using http-buffer-request.
+ */
+static int
+smp_fetch_body_size(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct http_txn *txn = smp->strm->txn;
+ struct http_msg *msg;
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ if ((smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_REQ)
+ msg = &txn->req;
+ else
+ msg = &txn->rsp;
+
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = msg->body_len;
+
+ smp->flags = SMP_F_VOL_TEST;
+ return 1;
+}
+
+
+/* 4. Check on URL/URI. A pointer to the URI is stored. */
+static int
+smp_fetch_url(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct http_txn *txn;
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ txn = smp->strm->txn;
+ smp->data.type = SMP_T_STR;
+ smp->data.u.str.len = txn->req.sl.rq.u_l;
+ smp->data.u.str.str = txn->req.chn->buf->p + txn->req.sl.rq.u;
+ smp->flags = SMP_F_VOL_1ST | SMP_F_CONST;
+ return 1;
+}
+
+static int
+smp_fetch_url_ip(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct http_txn *txn;
+ struct sockaddr_storage addr;
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ txn = smp->strm->txn;
+ url2sa(txn->req.chn->buf->p + txn->req.sl.rq.u, txn->req.sl.rq.u_l, &addr, NULL);
+ if (((struct sockaddr_in *)&addr)->sin_family != AF_INET)
+ return 0;
+
+ smp->data.type = SMP_T_IPV4;
+ smp->data.u.ipv4 = ((struct sockaddr_in *)&addr)->sin_addr;
+ smp->flags = 0;
+ return 1;
+}
+
+static int
+smp_fetch_url_port(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct http_txn *txn;
+ struct sockaddr_storage addr;
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ txn = smp->strm->txn;
+ url2sa(txn->req.chn->buf->p + txn->req.sl.rq.u, txn->req.sl.rq.u_l, &addr, NULL);
+ if (((struct sockaddr_in *)&addr)->sin_family != AF_INET)
+ return 0;
+
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = ntohs(((struct sockaddr_in *)&addr)->sin_port);
+ smp->flags = 0;
+ return 1;
+}
+
+/* Fetch an HTTP header. A pointer to the beginning of the value is returned.
+ * Accepts an optional argument of type string containing the header field name,
+ * and an optional argument of type signed or unsigned integer to request an
+ * explicit occurrence of the header. Note that in the event of a missing name,
+ * headers are considered from the first one. It does not stop on commas and
+ * returns full lines instead (useful for User-Agent or Date for example).
+ */
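+/* Illustrative usage: full-line fetch, useful for headers whose values
+ * legitimately contain commas; header names below are example values:
+ *
+ *   http-request set-header X-UA %[req.fhdr(User-Agent)]
+ *   acl has_date req.fhdr(Date) -m found
+ */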
+static int
+smp_fetch_fhdr(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct hdr_idx *idx;
+ struct hdr_ctx *ctx = smp->ctx.a[0];
+ const struct http_msg *msg;
+ int occ = 0;
+ const char *name_str = NULL;
+ int name_len = 0;
+
+ if (!ctx) {
+ /* first call */
+ ctx = &static_hdr_ctx;
+ ctx->idx = 0;
+ smp->ctx.a[0] = ctx;
+ }
+
+ if (args) {
+ if (args[0].type != ARGT_STR)
+ return 0;
+ name_str = args[0].data.str.str;
+ name_len = args[0].data.str.len;
+
+ if (args[1].type == ARGT_SINT)
+ occ = args[1].data.sint;
+ }
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ idx = &smp->strm->txn->hdr_idx;
+ msg = ((smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_REQ) ? &smp->strm->txn->req : &smp->strm->txn->rsp;
+
+ if (ctx && !(smp->flags & SMP_F_NOT_LAST))
+ /* search for header from the beginning */
+ ctx->idx = 0;
+
+ if (!occ && !(smp->opt & SMP_OPT_ITERATE))
+ /* no explicit occurrence and single fetch => last header by default */
+ occ = -1;
+
+ if (!occ)
+ /* prepare to report multiple occurrences for ACL fetches */
+ smp->flags |= SMP_F_NOT_LAST;
+
+ smp->data.type = SMP_T_STR;
+ smp->flags |= SMP_F_VOL_HDR | SMP_F_CONST;
+ if (http_get_fhdr(msg, name_str, name_len, idx, occ, ctx, &smp->data.u.str.str, &smp->data.u.str.len))
+ return 1;
+
+ smp->flags &= ~SMP_F_NOT_LAST;
+ return 0;
+}
+
+/* 6. Check on HTTP header count. The number of occurrences is returned.
+ * Accepts exactly 1 argument of type string. It does not stop on commas and
+ * returns full lines instead (useful for User-Agent or Date for example).
+ */
+static int
+smp_fetch_fhdr_cnt(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct hdr_idx *idx;
+ struct hdr_ctx ctx;
+ const struct http_msg *msg;
+ int cnt;
+ const char *name = NULL;
+ int len = 0;
+
+ if (args && args->type == ARGT_STR) {
+ name = args->data.str.str;
+ len = args->data.str.len;
+ }
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ idx = &smp->strm->txn->hdr_idx;
+ msg = ((smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_REQ) ? &smp->strm->txn->req : &smp->strm->txn->rsp;
+
+ ctx.idx = 0;
+ cnt = 0;
+ while (http_find_full_header2(name, len, msg->chn->buf->p, idx, &ctx))
+ cnt++;
+
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = cnt;
+ smp->flags = SMP_F_VOL_HDR;
+ return 1;
+}
+
+static int
+smp_fetch_hdr_names(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct hdr_idx *idx;
+ struct hdr_ctx ctx;
+ const struct http_msg *msg;
+ struct chunk *temp;
+ char del = ',';
+
+ if (args && args->type == ARGT_STR)
+ del = *args[0].data.str.str;
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ idx = &smp->strm->txn->hdr_idx;
+ msg = ((smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_REQ) ? &smp->strm->txn->req : &smp->strm->txn->rsp;
+
+ temp = get_trash_chunk();
+
+ ctx.idx = 0;
+ while (http_find_next_header(msg->chn->buf->p, idx, &ctx)) {
+ if (temp->len)
+ temp->str[temp->len++] = del;
+ memcpy(temp->str + temp->len, ctx.line, ctx.del);
+ temp->len += ctx.del;
+ }
+
+ smp->data.type = SMP_T_STR;
+ smp->data.u.str.str = temp->str;
+ smp->data.u.str.len = temp->len;
+ smp->flags = SMP_F_VOL_HDR;
+ return 1;
+}
+
+/* Fetch an HTTP header. A pointer to the beginning of the value is returned.
+ * Accepts an optional argument of type string containing the header field name,
+ * and an optional argument of type signed or unsigned integer to request an
+ * explicit occurrence of the header. Note that in the event of a missing name,
+ * headers are considered from the first one.
+ */
+static int
+smp_fetch_hdr(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct hdr_idx *idx;
+ struct hdr_ctx *ctx = smp->ctx.a[0];
+ const struct http_msg *msg;
+ int occ = 0;
+ const char *name_str = NULL;
+ int name_len = 0;
+
+ if (!ctx) {
+ /* first call */
+ ctx = &static_hdr_ctx;
+ ctx->idx = 0;
+ smp->ctx.a[0] = ctx;
+ }
+
+ if (args) {
+ if (args[0].type != ARGT_STR)
+ return 0;
+ name_str = args[0].data.str.str;
+ name_len = args[0].data.str.len;
+
+ if (args[1].type == ARGT_SINT)
+ occ = args[1].data.sint;
+ }
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ idx = &smp->strm->txn->hdr_idx;
+ msg = ((smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_REQ) ? &smp->strm->txn->req : &smp->strm->txn->rsp;
+
+ if (ctx && !(smp->flags & SMP_F_NOT_LAST))
+ /* search for header from the beginning */
+ ctx->idx = 0;
+
+ if (!occ && !(smp->opt & SMP_OPT_ITERATE))
+ /* no explicit occurrence and single fetch => last header by default */
+ occ = -1;
+
+ if (!occ)
+ /* prepare to report multiple occurrences for ACL fetches */
+ smp->flags |= SMP_F_NOT_LAST;
+
+ smp->data.type = SMP_T_STR;
+ smp->flags |= SMP_F_VOL_HDR | SMP_F_CONST;
+ if (http_get_hdr(msg, name_str, name_len, idx, occ, ctx, &smp->data.u.str.str, &smp->data.u.str.len))
+ return 1;
+
+ smp->flags &= ~SMP_F_NOT_LAST;
+ return 0;
+}
+
+/* 6. Check on HTTP header count. The number of occurrences is returned.
+ * Accepts exactly 1 argument of type string.
+ */
+static int
+smp_fetch_hdr_cnt(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct hdr_idx *idx;
+ struct hdr_ctx ctx;
+ const struct http_msg *msg;
+ int cnt;
+ const char *name = NULL;
+ int len = 0;
+
+ if (args && args->type == ARGT_STR) {
+ name = args->data.str.str;
+ len = args->data.str.len;
+ }
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ idx = &smp->strm->txn->hdr_idx;
+ msg = ((smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_REQ) ? &smp->strm->txn->req : &smp->strm->txn->rsp;
+
+ ctx.idx = 0;
+ cnt = 0;
+ while (http_find_header2(name, len, msg->chn->buf->p, idx, &ctx))
+ cnt++;
+
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = cnt;
+ smp->flags = SMP_F_VOL_HDR;
+ return 1;
+}
+
+/* Fetch an HTTP header's integer value. The integer value is returned. It
+ * takes a mandatory argument of type string and an optional one of type int
+ * to designate a specific occurrence. It returns an unsigned integer, which
+ * may or may not be appropriate for everything.
+ */
+static int
+smp_fetch_hdr_val(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int ret = smp_fetch_hdr(args, smp, kw, private);
+
+ if (ret > 0) {
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = strl2ic(smp->data.u.str.str, smp->data.u.str.len);
+ }
+
+ return ret;
+}
+
+/* Fetch an HTTP header's IP value. It takes a mandatory argument of type
+ * string and an optional one of type int to designate a specific occurrence.
+ * It returns an IPv4 or IPv6 address.
+ */
+static int
+smp_fetch_hdr_ip(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int ret;
+
+ while ((ret = smp_fetch_hdr(args, smp, kw, private)) > 0) {
+ if (url2ipv4((char *)smp->data.u.str.str, &smp->data.u.ipv4)) {
+ smp->data.type = SMP_T_IPV4;
+ break;
+ } else {
+ struct chunk *temp = get_trash_chunk();
+ if (smp->data.u.str.len < temp->size - 1) {
+ memcpy(temp->str, smp->data.u.str.str, smp->data.u.str.len);
+ temp->str[smp->data.u.str.len] = '\0';
+ if (inet_pton(AF_INET6, temp->str, &smp->data.u.ipv6)) {
+ smp->data.type = SMP_T_IPV6;
+ break;
+ }
+ }
+ }
+
+ /* if the header doesn't match an IP address, fetch next one */
+ if (!(smp->flags & SMP_F_NOT_LAST))
+ return 0;
+ }
+ return ret;
+}
+
+/* 8. Check on URI PATH. A pointer to the PATH is stored. The path starts at
+ * the first '/' after the possible hostname, and ends before the possible '?'.
+ */
+static int
+smp_fetch_path(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct http_txn *txn;
+ char *ptr, *end;
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ txn = smp->strm->txn;
+ end = txn->req.chn->buf->p + txn->req.sl.rq.u + txn->req.sl.rq.u_l;
+ ptr = http_get_path(txn);
+ if (!ptr)
+ return 0;
+
+ /* OK, we got the '/' ! */
+ smp->data.type = SMP_T_STR;
+ smp->data.u.str.str = ptr;
+
+ while (ptr < end && *ptr != '?')
+ ptr++;
+
+ smp->data.u.str.len = ptr - smp->data.u.str.str;
+ smp->flags = SMP_F_VOL_1ST | SMP_F_CONST;
+ return 1;
+}
+
+/* This produces a concatenation of the first occurrence of the Host header
+ * followed by the path component if it begins with a slash ('/'). This means
+ * that '*' will not be added, resulting in exactly the first Host entry.
+ * If no Host header is found, then the path is returned as-is. The returned
+ * value is stored in the trash so it does not need to be marked constant.
+ * The returned sample is of type string.
+ */
+static int
+smp_fetch_base(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct http_txn *txn;
+ char *ptr, *end, *beg;
+ struct hdr_ctx ctx;
+ struct chunk *temp;
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ txn = smp->strm->txn;
+ ctx.idx = 0;
+ if (!http_find_header2("Host", 4, txn->req.chn->buf->p, &txn->hdr_idx, &ctx) || !ctx.vlen)
+ return smp_fetch_path(args, smp, kw, private);
+
+ /* OK we have the header value in ctx.line+ctx.val for ctx.vlen bytes */
+ temp = get_trash_chunk();
+ memcpy(temp->str, ctx.line + ctx.val, ctx.vlen);
+ smp->data.type = SMP_T_STR;
+ smp->data.u.str.str = temp->str;
+ smp->data.u.str.len = ctx.vlen;
+
+ /* now retrieve the path */
+ end = txn->req.chn->buf->p + txn->req.sl.rq.u + txn->req.sl.rq.u_l;
+ beg = http_get_path(txn);
+ if (!beg)
+ beg = end;
+
+ for (ptr = beg; ptr < end && *ptr != '?'; ptr++);
+
+ if (beg < ptr && *beg == '/') {
+ memcpy(smp->data.u.str.str + smp->data.u.str.len, beg, ptr - beg);
+ smp->data.u.str.len += ptr - beg;
+ }
+
+ smp->flags = SMP_F_VOL_1ST;
+ return 1;
+}
+
+/* This produces a 32-bit hash of the concatenation of the first occurrence of
+ * the Host header followed by the path component if it begins with a slash ('/').
+ * This means that '*' will not be added, resulting in exactly the first Host
+ * entry. If no Host header is found, then the path is used. The resulting value
+ * is hashed using the path hash followed by a full avalanche hash and provides a
+ * 32-bit integer value. This fetch is useful for tracking per-path activity on
+ * high-traffic sites without having to store whole paths.
+ */
+int
+smp_fetch_base32(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct http_txn *txn;
+ struct hdr_ctx ctx;
+ unsigned int hash = 0;
+ char *ptr, *beg, *end;
+ int len;
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ txn = smp->strm->txn;
+ ctx.idx = 0;
+ if (http_find_header2("Host", 4, txn->req.chn->buf->p, &txn->hdr_idx, &ctx)) {
+ /* OK we have the header value in ctx.line+ctx.val for ctx.vlen bytes */
+ ptr = ctx.line + ctx.val;
+ len = ctx.vlen;
+ while (len--)
+ hash = *(ptr++) + (hash << 6) + (hash << 16) - hash;
+ }
+
+ /* now retrieve the path */
+ end = txn->req.chn->buf->p + txn->req.sl.rq.u + txn->req.sl.rq.u_l;
+ beg = http_get_path(txn);
+ if (!beg)
+ beg = end;
+
+ for (ptr = beg; ptr < end && *ptr != '?'; ptr++);
+
+ if (beg < ptr && *beg == '/') {
+ while (beg < ptr)
+ hash = *(beg++) + (hash << 6) + (hash << 16) - hash;
+ }
+ hash = full_hash(hash);
+
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = hash;
+ smp->flags = SMP_F_VOL_1ST;
+ return 1;
+}
+
+/* This concatenates the source address with the 32-bit hash of the Host and
+ * path as returned by smp_fetch_base32(). The idea is to have per-source and
+ * per-path counters. The result is a binary block from 8 to 20 bytes depending
+ * on the source address length. The path hash is stored before the address so
+ * that in environments where IPv6 is insignificant, truncating the output to
+ * 8 bytes would still work.
+ */
+static int
+smp_fetch_base32_src(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct chunk *temp;
+ struct connection *cli_conn = objt_conn(smp->sess->origin);
+
+ if (!cli_conn)
+ return 0;
+
+ if (!smp_fetch_base32(args, smp, kw, private))
+ return 0;
+
+ temp = get_trash_chunk();
+ *(unsigned int *)temp->str = htonl(smp->data.u.sint);
+ temp->len += sizeof(unsigned int);
+
+ switch (cli_conn->addr.from.ss_family) {
+ case AF_INET:
+ memcpy(temp->str + temp->len, &((struct sockaddr_in *)&cli_conn->addr.from)->sin_addr, 4);
+ temp->len += 4;
+ break;
+ case AF_INET6:
+ memcpy(temp->str + temp->len, &((struct sockaddr_in6 *)&cli_conn->addr.from)->sin6_addr, 16);
+ temp->len += 16;
+ break;
+ default:
+ return 0;
+ }
+
+ smp->data.u.str = *temp;
+ smp->data.type = SMP_T_BIN;
+ return 1;
+}
+
+/* Extracts the query string, which comes after the question mark '?'. If no
+ * question mark is found, nothing is returned. Otherwise it returns a sample
+ * of type string carrying the whole query string.
+ */
+static int
+smp_fetch_query(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct http_txn *txn;
+ char *ptr, *end;
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ txn = smp->strm->txn;
+ ptr = txn->req.chn->buf->p + txn->req.sl.rq.u;
+ end = ptr + txn->req.sl.rq.u_l;
+
+ /* look up the '?' */
+ do {
+ if (ptr == end)
+ return 0;
+ } while (*ptr++ != '?');
+
+ smp->data.type = SMP_T_STR;
+ smp->data.u.str.str = ptr;
+ smp->data.u.str.len = end - ptr;
+ smp->flags = SMP_F_VOL_1ST | SMP_F_CONST;
+ return 1;
+}
+
+static int
+smp_fetch_proto_http(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ /* Note: hdr_idx.v cannot be NULL in this ACL because the ACL is tagged
+ * as a layer7 ACL, which involves automatic allocation of hdr_idx.
+ */
+
+ CHECK_HTTP_MESSAGE_FIRST_PERM();
+
+ smp->data.type = SMP_T_BOOL;
+ smp->data.u.sint = 1;
+ return 1;
+}
+
+/* return a valid test if the current request is the first one on the connection */
+static int
+smp_fetch_http_first_req(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->data.type = SMP_T_BOOL;
+ smp->data.u.sint = !(smp->strm->txn->flags & TX_NOT_FIRST);
+ return 1;
+}
+
+/* Accepts exactly 1 argument of type userlist */
+static int
+smp_fetch_http_auth(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+
+ if (!args || args->type != ARGT_USR)
+ return 0;
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ if (!get_http_auth(smp->strm))
+ return 0;
+
+ smp->data.type = SMP_T_BOOL;
+ smp->data.u.sint = check_user(args->data.usr, smp->strm->txn->auth.user,
+ smp->strm->txn->auth.pass);
+ return 1;
+}
+
+/* Accepts exactly 1 argument of type userlist */
+static int
+smp_fetch_http_auth_grp(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ if (!args || args->type != ARGT_USR)
+ return 0;
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ if (!get_http_auth(smp->strm))
+ return 0;
+
+ /* if the user does not belong to the userlist or has a wrong password,
+ * report that it unconditionally does not match. Otherwise we return
+ * a string containing the username.
+ */
+ if (!check_user(args->data.usr, smp->strm->txn->auth.user,
+ smp->strm->txn->auth.pass))
+ return 0;
+
+ /* pat_match_auth() will need the user list */
+ smp->ctx.a[0] = args->data.usr;
+
+ smp->data.type = SMP_T_STR;
+ smp->flags = SMP_F_CONST;
+ smp->data.u.str.str = smp->strm->txn->auth.user;
+ smp->data.u.str.len = strlen(smp->strm->txn->auth.user);
+
+ return 1;
+}
+
+/* Try to find the next occurrence of a cookie name in a cookie header value.
+ * The lookup begins at <hdr>. The pointer and size of the next occurrence of
+ * the cookie value is returned into *value and *value_l, and the function
+ * returns a pointer to the next pointer to search from if the value was found.
+ * Otherwise if the cookie was not found, NULL is returned and neither value
+ * nor value_l are touched. The input <hdr> string should first point to the
+ * header's value, and the <hdr_end> pointer must point to the first character
+ * not part of the value. <list> must be non-zero if value may represent a list
+ * of values (cookie headers). This makes it faster to abort parsing when no
+ * list is expected.
+ */
+char *
+extract_cookie_value(char *hdr, const char *hdr_end,
+ char *cookie_name, size_t cookie_name_l, int list,
+ char **value, int *value_l)
+{
+ char *equal, *att_end, *att_beg, *val_beg, *val_end;
+ char *next;
+
+	/* we search at least a cookie name followed by an equal sign, and more
+	 * generally something like this:
+ * Cookie: NAME1 = VALUE 1 ; NAME2 = VALUE2 ; NAME3 = VALUE3\r\n
+ */
+ for (att_beg = hdr; att_beg + cookie_name_l + 1 < hdr_end; att_beg = next + 1) {
+ /* Iterate through all cookies on this line */
+
+ while (att_beg < hdr_end && http_is_spht[(unsigned char)*att_beg])
+ att_beg++;
+
+ /* find att_end : this is the first character after the last non
+ * space before the equal. It may be equal to hdr_end.
+ */
+ equal = att_end = att_beg;
+
+ while (equal < hdr_end) {
+ if (*equal == '=' || *equal == ';' || (list && *equal == ','))
+ break;
+ if (http_is_spht[(unsigned char)*equal++])
+ continue;
+ att_end = equal;
+ }
+
+		/* here, <equal> points to '=', a delimiter or the end. <att_end>
+		 * is between <att_beg> and <equal>, both may be identical.
+ */
+
+ /* look for end of cookie if there is an equal sign */
+ if (equal < hdr_end && *equal == '=') {
+ /* look for the beginning of the value */
+ val_beg = equal + 1;
+ while (val_beg < hdr_end && http_is_spht[(unsigned char)*val_beg])
+ val_beg++;
+
+ /* find the end of the value, respecting quotes */
+ next = find_cookie_value_end(val_beg, hdr_end);
+
+			/* make val_end point to the first white space or delimiter after the value */
+ val_end = next;
+ while (val_end > val_beg && http_is_spht[(unsigned char)*(val_end - 1)])
+ val_end--;
+ } else {
+ val_beg = val_end = next = equal;
+ }
+
+ /* We have nothing to do with attributes beginning with '$'. However,
+ * they will automatically be removed if a header before them is removed,
+ * since they're supposed to be linked together.
+ */
+ if (*att_beg == '$')
+ continue;
+
+ /* Ignore cookies with no equal sign */
+ if (equal == next)
+ continue;
+
+ /* Now we have the cookie name between att_beg and att_end, and
+ * its value between val_beg and val_end.
+ */
+
+ if (att_end - att_beg == cookie_name_l &&
+ memcmp(att_beg, cookie_name, cookie_name_l) == 0) {
+ /* let's return this value and indicate where to go on from */
+ *value = val_beg;
+ *value_l = val_end - val_beg;
+ return next + 1;
+ }
+
+ /* Set-Cookie headers only have the name in the first attr=value part */
+ if (!list)
+ break;
+ }
+
+ return NULL;
+}
+
+/* Fetch a captured HTTP request header. The index is the position of
+ * the "capture" option in the configuration file
+ */
+static int
+smp_fetch_capture_header_req(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct proxy *fe = strm_fe(smp->strm);
+ int idx;
+
+ if (!args || args->type != ARGT_SINT)
+ return 0;
+
+ idx = args->data.sint;
+
+ if (idx > (fe->nb_req_cap - 1) || smp->strm->req_cap == NULL || smp->strm->req_cap[idx] == NULL)
+ return 0;
+
+ smp->data.type = SMP_T_STR;
+ smp->flags |= SMP_F_CONST;
+ smp->data.u.str.str = smp->strm->req_cap[idx];
+ smp->data.u.str.len = strlen(smp->strm->req_cap[idx]);
+
+ return 1;
+}
+
+/* Fetch a captured HTTP response header. The index is the position of
+ * the "capture" option in the configuration file
+ */
+static int
+smp_fetch_capture_header_res(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct proxy *fe = strm_fe(smp->strm);
+ int idx;
+
+ if (!args || args->type != ARGT_SINT)
+ return 0;
+
+ idx = args->data.sint;
+
+ if (idx > (fe->nb_rsp_cap - 1) || smp->strm->res_cap == NULL || smp->strm->res_cap[idx] == NULL)
+ return 0;
+
+ smp->data.type = SMP_T_STR;
+ smp->flags |= SMP_F_CONST;
+ smp->data.u.str.str = smp->strm->res_cap[idx];
+ smp->data.u.str.len = strlen(smp->strm->res_cap[idx]);
+
+ return 1;
+}
+
+/* Extracts the METHOD in the HTTP request, the txn->uri should be filled before the call */
+static int
+smp_fetch_capture_req_method(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct chunk *temp;
+ struct http_txn *txn = smp->strm->txn;
+ char *ptr;
+
+ if (!txn || !txn->uri)
+ return 0;
+
+ ptr = txn->uri;
+
+ while (*ptr != ' ' && *ptr != '\0') /* find first space */
+ ptr++;
+
+ temp = get_trash_chunk();
+ temp->str = txn->uri;
+ temp->len = ptr - txn->uri;
+ smp->data.u.str = *temp;
+ smp->data.type = SMP_T_STR;
+ smp->flags = SMP_F_CONST;
+
+ return 1;
+
+}
+
+/* Extracts the path in the HTTP request, the txn->uri should be filled before the call */
+static int
+smp_fetch_capture_req_uri(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct chunk *temp;
+ struct http_txn *txn = smp->strm->txn;
+ char *ptr;
+
+ if (!txn || !txn->uri)
+ return 0;
+
+ ptr = txn->uri;
+
+ while (*ptr != ' ' && *ptr != '\0') /* find first space */
+ ptr++;
+
+ if (!*ptr)
+ return 0;
+
+ ptr++; /* skip the space */
+
+ temp = get_trash_chunk();
+ ptr = temp->str = http_get_path_from_string(ptr);
+ if (!ptr)
+ return 0;
+ while (*ptr != ' ' && *ptr != '\0') /* find space after URI */
+ ptr++;
+
+ smp->data.u.str = *temp;
+ smp->data.u.str.len = ptr - temp->str;
+ smp->data.type = SMP_T_STR;
+ smp->flags = SMP_F_CONST;
+
+ return 1;
+}
+
+/* Retrieves the HTTP version from the request (either 1.0 or 1.1) and emits it
+ * as a string (either "HTTP/1.0" or "HTTP/1.1").
+ */
+static int
+smp_fetch_capture_req_ver(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct http_txn *txn = smp->strm->txn;
+
+ if (!txn || txn->req.msg_state < HTTP_MSG_HDR_FIRST)
+ return 0;
+
+ if (txn->req.flags & HTTP_MSGF_VER_11)
+ smp->data.u.str.str = "HTTP/1.1";
+ else
+ smp->data.u.str.str = "HTTP/1.0";
+
+ smp->data.u.str.len = 8;
+ smp->data.type = SMP_T_STR;
+ smp->flags = SMP_F_CONST;
+ return 1;
+
+}
+
+/* Retrieves the HTTP version from the response (either 1.0 or 1.1) and emits it
+ * as a string (either "HTTP/1.0" or "HTTP/1.1").
+ */
+static int
+smp_fetch_capture_res_ver(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct http_txn *txn = smp->strm->txn;
+
+ if (!txn || txn->rsp.msg_state < HTTP_MSG_HDR_FIRST)
+ return 0;
+
+ if (txn->rsp.flags & HTTP_MSGF_VER_11)
+ smp->data.u.str.str = "HTTP/1.1";
+ else
+ smp->data.u.str.str = "HTTP/1.0";
+
+ smp->data.u.str.len = 8;
+ smp->data.type = SMP_T_STR;
+ smp->flags = SMP_F_CONST;
+ return 1;
+
+}
+
+
+/* Iterate over all cookies present in a message. The context is stored in
+ * smp->ctx.a[0] for the in-header position, smp->ctx.a[1] for the
+ * end-of-header-value, and smp->ctx.a[2] for the hdr_ctx. Depending on
+ * the direction, multiple cookies may be parsed on the same line or not.
+ * The cookie name is in args and the name length in args->data.str.len.
+ * Accepts exactly 1 argument of type string. If the input options indicate
+ * that no iterating is desired, then only the last value is fetched, if any.
+ * The returned sample is of type CSTR. This function can be used to parse
+ * cookies in other files.
+ */
+int smp_fetch_cookie(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct http_txn *txn;
+ struct hdr_idx *idx;
+ struct hdr_ctx *ctx = smp->ctx.a[2];
+ const struct http_msg *msg;
+ const char *hdr_name;
+ int hdr_name_len;
+ char *sol;
+ int occ = 0;
+ int found = 0;
+
+ if (!args || args->type != ARGT_STR)
+ return 0;
+
+ if (!ctx) {
+ /* first call */
+ ctx = &static_hdr_ctx;
+ ctx->idx = 0;
+ smp->ctx.a[2] = ctx;
+ }
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ txn = smp->strm->txn;
+ idx = &smp->strm->txn->hdr_idx;
+
+ if ((smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_REQ) {
+ msg = &txn->req;
+ hdr_name = "Cookie";
+ hdr_name_len = 6;
+ } else {
+ msg = &txn->rsp;
+ hdr_name = "Set-Cookie";
+ hdr_name_len = 10;
+ }
+
+ if (!occ && !(smp->opt & SMP_OPT_ITERATE))
+ /* no explicit occurrence and single fetch => last cookie by default */
+ occ = -1;
+
+ /* OK so basically here, either we want only one value and it's the
+ * last one, or we want to iterate over all of them and we fetch the
+ * next one.
+ */
+
+ sol = msg->chn->buf->p;
+ if (!(smp->flags & SMP_F_NOT_LAST)) {
+ /* search for the header from the beginning, we must first initialize
+ * the search parameters.
+ */
+ smp->ctx.a[0] = NULL;
+ ctx->idx = 0;
+ }
+
+ smp->flags |= SMP_F_VOL_HDR;
+
+ while (1) {
+ /* Note: smp->ctx.a[0] == NULL every time we need to fetch a new header */
+ if (!smp->ctx.a[0]) {
+ if (!http_find_header2(hdr_name, hdr_name_len, sol, idx, ctx))
+ goto out;
+
+ if (ctx->vlen < args->data.str.len + 1)
+ continue;
+
+ smp->ctx.a[0] = ctx->line + ctx->val;
+ smp->ctx.a[1] = smp->ctx.a[0] + ctx->vlen;
+ }
+
+ smp->data.type = SMP_T_STR;
+ smp->flags |= SMP_F_CONST;
+ smp->ctx.a[0] = extract_cookie_value(smp->ctx.a[0], smp->ctx.a[1],
+ args->data.str.str, args->data.str.len,
+ (smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_REQ,
+ &smp->data.u.str.str,
+ &smp->data.u.str.len);
+ if (smp->ctx.a[0]) {
+ found = 1;
+ if (occ >= 0) {
+ /* one value was returned into smp->data.u.str.{str,len} */
+ smp->flags |= SMP_F_NOT_LAST;
+ return 1;
+ }
+ }
+ /* if we're looking for last occurrence, let's loop */
+ }
+ /* all cookie headers and values were scanned. If we're looking for the
+ * last occurrence, we may return it now.
+ */
+ out:
+ smp->flags &= ~SMP_F_NOT_LAST;
+ return found;
+}
+
+/* Iterate over all cookies present in a request to count how many occurrences
+ * match the name in args and args->data.str.len. If <multi> is non-null, then
+ * multiple cookies may be parsed on the same line. The returned sample is of
+ * type UINT. Accepts exactly 1 argument of type string.
+ */
+static int
+smp_fetch_cookie_cnt(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct http_txn *txn;
+ struct hdr_idx *idx;
+ struct hdr_ctx ctx;
+ const struct http_msg *msg;
+ const char *hdr_name;
+ int hdr_name_len;
+ int cnt;
+ char *val_beg, *val_end;
+ char *sol;
+
+ if (!args || args->type != ARGT_STR)
+ return 0;
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ txn = smp->strm->txn;
+ idx = &smp->strm->txn->hdr_idx;
+
+ if ((smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_REQ) {
+ msg = &txn->req;
+ hdr_name = "Cookie";
+ hdr_name_len = 6;
+ } else {
+ msg = &txn->rsp;
+ hdr_name = "Set-Cookie";
+ hdr_name_len = 10;
+ }
+
+ sol = msg->chn->buf->p;
+ val_end = val_beg = NULL;
+ ctx.idx = 0;
+ cnt = 0;
+
+ while (1) {
+ /* Note: val_beg == NULL every time we need to fetch a new header */
+ if (!val_beg) {
+ if (!http_find_header2(hdr_name, hdr_name_len, sol, idx, &ctx))
+ break;
+
+ if (ctx.vlen < args->data.str.len + 1)
+ continue;
+
+ val_beg = ctx.line + ctx.val;
+ val_end = val_beg + ctx.vlen;
+ }
+
+ smp->data.type = SMP_T_STR;
+ smp->flags |= SMP_F_CONST;
+ while ((val_beg = extract_cookie_value(val_beg, val_end,
+ args->data.str.str, args->data.str.len,
+ (smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_REQ,
+ &smp->data.u.str.str,
+ &smp->data.u.str.len))) {
+ cnt++;
+ }
+ }
+
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = cnt;
+ smp->flags |= SMP_F_VOL_HDR;
+ return 1;
+}
+
+/* Fetch a cookie's integer value. The integer value is returned. It
+ * takes a mandatory argument of type string. It relies on smp_fetch_cookie().
+ */
+static int
+smp_fetch_cookie_val(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int ret = smp_fetch_cookie(args, smp, kw, private);
+
+ if (ret > 0) {
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = strl2ic(smp->data.u.str.str, smp->data.u.str.len);
+ }
+
+ return ret;
+}
+
+/************************************************************************/
+/* The code below is dedicated to sample fetches */
+/************************************************************************/
+
+/*
+ * Given a path string and its length, find the position of the beginning of
+ * the query string. Returns NULL if no query string is found in the path.
+ *
+ * Example: if path = "/foo/bar/fubar?yo=mama;ye=daddy", and n = 22:
+ *
+ * find_param_list(path, n, '?') points to the "yo=mama;ye=daddy" string.
+ */
+static inline char *find_param_list(char *path, size_t path_l, char delim)
+{
+ char *p;
+
+ p = memchr(path, delim, path_l);
+ return p ? p + 1 : NULL;
+}
+
+static inline int is_param_delimiter(char c, char delim)
+{
+ return c == '&' || c == ';' || c == delim;
+}
+
+/* After increasing a pointer value, it can exceed the end of the first
+ * buffer. This function adjusts the value of <ptr> according to the
+ * expected position. <chunks> is an array describing the one or two
+ * available chunks: the first value is the start of the first chunk, the
+ * second value is the end+1 of the first chunk, the third value is NULL
+ * or the start of the second chunk, and the fourth value is the end+1 of
+ * the second chunk. The function returns 1 if it performs a wrap,
+ * otherwise it returns 0.
+ */
+static inline int fix_pointer_if_wrap(const char **chunks, const char **ptr)
+{
+ if (*ptr < chunks[1])
+ return 0;
+ if (!chunks[2])
+ return 0;
+ *ptr = chunks[2] + ( *ptr - chunks[1] );
+ return 1;
+}
+
+/*
+ * Given a url parameter, find the starting position of the first occurrence,
+ * or NULL if the parameter is not found.
+ *
+ * Example: if query_string is "yo=mama;ye=daddy" and url_param_name is "ye",
+ * the function will return query_string+8.
+ *
+ * Warning: this function returns a pointer that may point to the first chunk
+ * or the second chunk. The caller must check the position before using the
+ * result.
+ */
+static const char *
+find_url_param_pos(const char **chunks,
+ const char* url_param_name, size_t url_param_name_l,
+ char delim)
+{
+ const char *pos, *last, *equal;
+ const char **bufs = chunks;
+ int l1, l2;
+
+
+ pos = bufs[0];
+ last = bufs[1];
+ while (pos <= last) {
+ /* Check the equal. */
+ equal = pos + url_param_name_l;
+ if (fix_pointer_if_wrap(chunks, &equal)) {
+ if (equal >= chunks[3])
+ return NULL;
+ } else {
+ if (equal >= chunks[1])
+ return NULL;
+ }
+ if (*equal == '=') {
+ if (pos + url_param_name_l > last) {
+				/* Wrap case detected. In this case, the comparison is
+				 * performed in two parts.
+				 */
+
+				/* This is the end, we don't have any other chunk. */
+ if (bufs != chunks || !bufs[2])
+ return NULL;
+
+ /* Compute the length of each part of the comparison. */
+ l1 = last - pos;
+ l2 = url_param_name_l - l1;
+
+ /* The second buffer is too short to contain the compared string. */
+ if (bufs[2] + l2 > bufs[3])
+ return NULL;
+
+ if (memcmp(pos, url_param_name, l1) == 0 &&
+ memcmp(bufs[2], url_param_name+l1, l2) == 0)
+ return pos;
+
+				/* Perform wrapping and skip the string that failed the comparison. */
+ bufs += 2;
+ pos = bufs[0] + l2;
+ last = bufs[1];
+
+ } else {
+ /* process a simple comparison. */
+ if (memcmp(pos, url_param_name, url_param_name_l) == 0) {
+ return pos; }
+ pos += url_param_name_l + 1;
+ if (fix_pointer_if_wrap(chunks, &pos))
+ last = bufs[2];
+ }
+ }
+
+ while (1) {
+ /* Look for the next delimiter. */
+ while (pos <= last && !is_param_delimiter(*pos, delim))
+ pos++;
+ if (pos < last)
+ break;
+ /* process buffer wrapping. */
+ if (bufs != chunks || !bufs[2])
+ return NULL;
+ bufs += 2;
+ pos = bufs[0];
+ last = bufs[1];
+ }
+ pos++;
+ }
+ return NULL;
+}
+
+/*
+ * Given a url parameter name and a query string, find the next value.
+ * An empty url_param_name matches the first available parameter.
+ * If the parameter is found, 1 is returned and *vstart / *vend are updated to
+ * respectively provide a pointer to the value and its end.
+ * Otherwise, 0 is returned and vstart/vend are not modified.
+ */
+static int
+find_next_url_param(const char **chunks,
+ const char* url_param_name, size_t url_param_name_l,
+ const char **vstart, const char **vend, char delim)
+{
+ const char *arg_start, *qs_end;
+ const char *value_start, *value_end;
+
+ arg_start = chunks[0];
+ qs_end = chunks[1];
+ if (url_param_name_l) {
+ /* Looks for an argument name. */
+ arg_start = find_url_param_pos(chunks,
+ url_param_name, url_param_name_l,
+ delim);
+ /* Check for wrapping. */
+ if (arg_start > qs_end)
+ qs_end = chunks[3];
+ }
+ if (!arg_start)
+ return 0;
+
+ if (!url_param_name_l) {
+ while (1) {
+ /* looks for the first argument. */
+ value_start = memchr(arg_start, '=', qs_end - arg_start);
+ if (!value_start) {
+
+ /* Check for wrapping. */
+ if (arg_start >= chunks[0] &&
+ arg_start <= chunks[1] &&
+ chunks[2]) {
+ arg_start = chunks[2];
+ qs_end = chunks[3];
+ continue;
+ }
+ return 0;
+ }
+ break;
+ }
+ value_start++;
+ }
+ else {
+ /* Jump the argument length. */
+ value_start = arg_start + url_param_name_l + 1;
+
+ /* Check for pointer wrapping. */
+ if (fix_pointer_if_wrap(chunks, &value_start)) {
+ /* Update the end pointer. */
+ qs_end = chunks[3];
+
+ /* Check for overflow. */
+ if (value_start > qs_end)
+ return 0;
+ }
+ }
+
+ value_end = value_start;
+
+ while (1) {
+ while ((value_end < qs_end) && !is_param_delimiter(*value_end, delim))
+ value_end++;
+ if (value_end < qs_end)
+ break;
+ /* process buffer wrapping. */
+ if (value_end >= chunks[0] &&
+ value_end <= chunks[1] &&
+ chunks[2]) {
+ value_end = chunks[2];
+ qs_end = chunks[3];
+ continue;
+ }
+ break;
+ }
+
+ *vstart = value_start;
+ *vend = value_end;
+ return 1;
+}
+
+/* This scans a URL-encoded query string. It takes an optionally wrapping
+ * string whose first contiguous chunk has its beginning in ctx->a[0] and end
+ * in ctx->a[1], and the optional second part in (ctx->a[2]..ctx->a[3]). The
+ * pointers are updated for the next iteration before leaving.
+ */
+static int
+smp_fetch_param(char delim, const char *name, int name_len, const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ const char *vstart, *vend;
+ struct chunk *temp;
+ const char **chunks = (const char **)smp->ctx.a;
+
+ if (!find_next_url_param(chunks,
+ name, name_len,
+ &vstart, &vend,
+ delim))
+ return 0;
+
+ /* Create sample. If the value is contiguous, return the pointer as CONST;
+ * if the value is wrapped, copy it into a buffer.
+ */
+ smp->data.type = SMP_T_STR;
+ if (chunks[2] &&
+ vstart >= chunks[0] && vstart <= chunks[1] &&
+ vend >= chunks[2] && vend <= chunks[3]) {
+ /* Wrapped case. */
+ temp = get_trash_chunk();
+ memcpy(temp->str, vstart, chunks[1] - vstart);
+ memcpy(temp->str + ( chunks[1] - vstart ), chunks[2], vend - chunks[2]);
+ smp->data.u.str.str = temp->str;
+ smp->data.u.str.len = ( chunks[1] - vstart ) + ( vend - chunks[2] );
+ } else {
+ /* Contiguous case. */
+ smp->data.u.str.str = (char *)vstart;
+ smp->data.u.str.len = vend - vstart;
+ smp->flags = SMP_F_VOL_1ST | SMP_F_CONST;
+ }
+
+ /* Update context, check wrapping. */
+ chunks[0] = vend;
+ if (chunks[2] && vend >= chunks[2] && vend <= chunks[3]) {
+ chunks[1] = chunks[3];
+ chunks[2] = NULL;
+ }
+
+ if (chunks[0] < chunks[1])
+ smp->flags |= SMP_F_NOT_LAST;
+
+ return 1;
+}
+
+/* This function iterates over each parameter of the query string. It uses
+ * ctx->a[0] and ctx->a[1] to store the beginning and end of the current
+ * parameter. Since it uses smp_fetch_param(), ctx->a[2..3] are both NULL.
+ * An optional parameter name is passed in args[0], otherwise any parameter is
+ * considered. It supports an optional delimiter argument for the beginning of
+ * the string in args[1], which defaults to "?".
+ */
+static int
+smp_fetch_url_param(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct http_msg *msg;
+ char delim = '?';
+ const char *name;
+ int name_len;
+
+ if (!args ||
+ (args[0].type && args[0].type != ARGT_STR) ||
+ (args[1].type && args[1].type != ARGT_STR))
+ return 0;
+
+ name = "";
+ name_len = 0;
+ if (args->type == ARGT_STR) {
+ name = args->data.str.str;
+ name_len = args->data.str.len;
+ }
+
+ if (args[1].type)
+ delim = *args[1].data.str.str;
+
+ if (!smp->ctx.a[0]) { // first call, find the query string
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ msg = &smp->strm->txn->req;
+
+ smp->ctx.a[0] = find_param_list(msg->chn->buf->p + msg->sl.rq.u,
+ msg->sl.rq.u_l, delim);
+ if (!smp->ctx.a[0])
+ return 0;
+
+ smp->ctx.a[1] = msg->chn->buf->p + msg->sl.rq.u + msg->sl.rq.u_l;
+
+ /* Assume that the context is filled with NULL pointer
+ * before the first call.
+ * smp->ctx.a[2] = NULL;
+ * smp->ctx.a[3] = NULL;
+ */
+ }
+
+ return smp_fetch_param(delim, name, name_len, args, smp, kw, private);
+}
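+
+/* Example configuration using the url_param fetch above (a sketch; the
+ * backend name and the "sid" parameter name are illustrative only):
+ *
+ *   acl has_sid     url_param(sid) -m found
+ *   use_backend app1 if has_sid
+ *   stick on        url_param(sid)
+ */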
+
+/* This function iterates over each parameter of the body. This requires
+ * that the body has been waited for using http-buffer-request. It uses
+ * ctx->a[0] and ctx->a[1] to store the beginning and end of the first
+ * contiguous part of the body, and optionally ctx->a[2..3] to reference the
+ * optional second part if the body wraps at the end of the buffer. An optional
+ * parameter name is passed in args[0], otherwise any parameter is considered.
+ */
+static int
+smp_fetch_body_param(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct http_txn *txn = smp->strm->txn;
+ struct http_msg *msg;
+ unsigned long len;
+ unsigned long block1;
+ char *body;
+ const char *name;
+ int name_len;
+
+ if (!args || (args[0].type && args[0].type != ARGT_STR))
+ return 0;
+
+ name = "";
+ name_len = 0;
+ if (args[0].type == ARGT_STR) {
+ name = args[0].data.str.str;
+ name_len = args[0].data.str.len;
+ }
+
+ if (!smp->ctx.a[0]) { // first call, find the query string
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ if ((smp->opt & SMP_OPT_DIR) == SMP_OPT_DIR_REQ)
+ msg = &txn->req;
+ else
+ msg = &txn->rsp;
+
+ len = http_body_bytes(msg);
+ body = b_ptr(msg->chn->buf, -http_data_rewind(msg));
+
+ block1 = len;
+ if (block1 > msg->chn->buf->data + msg->chn->buf->size - body)
+ block1 = msg->chn->buf->data + msg->chn->buf->size - body;
+
+ if (block1 == len) {
+ /* buffer is not wrapped (or empty) */
+ smp->ctx.a[0] = body;
+ smp->ctx.a[1] = body + len;
+
+ /* Assume that the context is filled with NULL pointer
+ * before the first call.
+ * smp->ctx.a[2] = NULL;
+ * smp->ctx.a[3] = NULL;
+ */
+ }
+ else {
+ /* buffer is wrapped, we need to defragment it */
+ smp->ctx.a[0] = body;
+ smp->ctx.a[1] = body + block1;
+ smp->ctx.a[2] = msg->chn->buf->data;
+ smp->ctx.a[3] = msg->chn->buf->data + ( len - block1 );
+ }
+ }
+ return smp_fetch_param('&', name, name_len, args, smp, kw, private);
+}
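+
+/* Example configuration using the body parameter fetch above (a sketch; the
+ * "action" parameter name is illustrative). Note that the body must have
+ * been buffered first with "option http-buffer-request":
+ *
+ *   option http-buffer-request
+ *   http-request deny if { req.body_param(action) -m str delete }
+ */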
+
+/* Return the signed integer value for the specified url parameter (see url_param
+ * above).
+ */
+static int
+smp_fetch_url_param_val(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int ret = smp_fetch_url_param(args, smp, kw, private);
+
+ if (ret > 0) {
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = strl2ic(smp->data.u.str.str, smp->data.u.str.len);
+ }
+
+ return ret;
+}
+
+/* This produces a 32-bit hash of the concatenation of the first occurrence of
+ * the Host header followed by the path component if it begins with a slash ('/').
+ * This means that '*' will not be added, resulting in exactly the first Host
+ * entry. If no Host header is found, then the path is used. The resulting value
+ * is hashed using the url hash followed by a full avalanche hash and provides a
+ * 32-bit integer value. This fetch is useful for tracking per-URL activity on
+ * high-traffic sites without having to store whole paths.
+ * This differs from the base32 fetch in that it includes the URL parameters
+ * as well as the path.
+ */
+static int
+smp_fetch_url32(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct http_txn *txn;
+ struct hdr_ctx ctx;
+ unsigned int hash = 0;
+ char *ptr, *beg, *end;
+ int len;
+
+ CHECK_HTTP_MESSAGE_FIRST();
+
+ txn = smp->strm->txn;
+ ctx.idx = 0;
+ if (http_find_header2("Host", 4, txn->req.chn->buf->p, &txn->hdr_idx, &ctx)) {
+ /* OK we have the header value in ctx.line+ctx.val for ctx.vlen bytes */
+ ptr = ctx.line + ctx.val;
+ len = ctx.vlen;
+ while (len--)
+ hash = *(ptr++) + (hash << 6) + (hash << 16) - hash;
+ }
+
+ /* now retrieve the path */
+ end = txn->req.chn->buf->p + txn->req.sl.rq.u + txn->req.sl.rq.u_l;
+ beg = http_get_path(txn);
+ if (!beg)
+ beg = end;
+
+ for (ptr = beg; ptr < end ; ptr++);
+
+ if (beg < ptr && *beg == '/') {
+ while (beg < ptr)
+ hash = *(beg++) + (hash << 6) + (hash << 16) - hash;
+ }
+ hash = full_hash(hash);
+
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = hash;
+ smp->flags = SMP_F_VOL_1ST;
+ return 1;
+}
+
+/* This concatenates the source address with the 32-bit hash of the Host and
+ * URL as returned by smp_fetch_base32(). The idea is to have per-source and
+ * per-url counters. The result is a binary block from 8 to 20 bytes depending
+ * on the source address length. The URL hash is stored before the address so
+ * that in environments where IPv6 is insignificant, truncating the output to
+ * 8 bytes would still work.
+ */
+static int
+smp_fetch_url32_src(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct chunk *temp;
+ struct connection *cli_conn = objt_conn(smp->sess->origin);
+ unsigned int hash;
+
+ if (!smp_fetch_url32(args, smp, kw, private))
+ return 0;
+
+ /* The returned hash is a 32-bit integer. */
+ hash = smp->data.u.sint;
+
+ temp = get_trash_chunk();
+ memcpy(temp->str + temp->len, &hash, sizeof(hash));
+ temp->len += sizeof(hash);
+
+ switch (cli_conn->addr.from.ss_family) {
+ case AF_INET:
+ memcpy(temp->str + temp->len, &((struct sockaddr_in *)&cli_conn->addr.from)->sin_addr, 4);
+ temp->len += 4;
+ break;
+ case AF_INET6:
+ memcpy(temp->str + temp->len, &((struct sockaddr_in6 *)&cli_conn->addr.from)->sin6_addr, 16);
+ temp->len += 16;
+ break;
+ default:
+ return 0;
+ }
+
+ smp->data.u.str = *temp;
+ smp->data.type = SMP_T_BIN;
+ return 1;
+}
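+
+/* Example configuration using the url32+src fetch above for per-source,
+ * per-URL request rate tracking (a sketch; table sizes and rates are
+ * illustrative). The 8-byte key keeps only the hash plus an IPv4 address,
+ * as described in the comment above:
+ *
+ *   stick-table type binary len 8 size 1m expire 1h store http_req_rate(10s)
+ *   http-request track-sc0 url32+src
+ */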
+
+/* This function is used to validate the arguments passed to any "hdr" fetch
+ * keyword. These keywords support an optional positive or negative occurrence
+ * number. We must ensure that the number is greater than -MAX_HDR_HISTORY. It
+ * is assumed that the types are already the correct ones. Returns 0 on error,
+ * non-zero if OK. If <err> is not NULL, it will be filled with a pointer to an
+ * error message in case of error, that the caller is responsible for freeing.
+ * The initial location must either be freeable or NULL.
+ */
+int val_hdr(struct arg *arg, char **err_msg)
+{
+ if (arg && arg[1].type == ARGT_SINT && arg[1].data.sint < -MAX_HDR_HISTORY) {
+ memprintf(err_msg, "header occurrence must be >= %d", -MAX_HDR_HISTORY);
+ return 0;
+ }
+ return 1;
+}
+
+/* takes a UINT value on input supposed to represent the time since EPOCH,
+ * adds an optional offset found in args[0] and emits a string representing
+ * the date in RFC-1123/5322 format.
+ */
+static int sample_conv_http_date(const struct arg *args, struct sample *smp, void *private)
+{
+ const char day[7][4] = { "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun" };
+ const char mon[12][4] = { "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec" };
+ struct chunk *temp;
+ struct tm *tm;
+ /* With high numbers, the date returned can be negative; the 55-bit mask prevents this. */
+ time_t curr_date = smp->data.u.sint & 0x007fffffffffffffLL;
+
+ /* add offset */
+ if (args && (args[0].type == ARGT_SINT))
+ curr_date += args[0].data.sint;
+
+ tm = gmtime(&curr_date);
+ if (!tm)
+ return 0;
+
+ temp = get_trash_chunk();
+ temp->len = snprintf(temp->str, temp->size - temp->len,
+ "%s, %02d %s %04d %02d:%02d:%02d GMT",
+ day[tm->tm_wday], tm->tm_mday, mon[tm->tm_mon], 1900+tm->tm_year,
+ tm->tm_hour, tm->tm_min, tm->tm_sec);
+
+ smp->data.u.str = *temp;
+ smp->data.type = SMP_T_STR;
+ return 1;
+}
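+
+/* Example configuration using the http_date converter above to emit an
+ * expiration date one hour in the future (a sketch):
+ *
+ *   http-response set-header Expires %[date(3600),http_date]
+ */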
+
+/* Match language range with language tag. RFC2616 14.4:
+ *
+ * A language-range matches a language-tag if it exactly equals
+ * the tag, or if it exactly equals a prefix of the tag such
+ * that the first tag character following the prefix is "-".
+ *
+ * Return 1 if the strings match, else return 0.
+ */
+static inline int language_range_match(const char *range, int range_len,
+ const char *tag, int tag_len)
+{
+ const char *end = range + range_len;
+ const char *tend = tag + tag_len;
+ while (range < end) {
+ if (*range == '-' && tag == tend)
+ return 1;
+ if (*range != *tag || tag == tend)
+ return 0;
+ range++;
+ tag++;
+ }
+ /* Return true only if the last char of the tag is matched. */
+ return tag == tend;
+}
+
+/* Arguments: the list of accepted languages and an optional default value. */
+static int sample_conv_q_prefered(const struct arg *args, struct sample *smp, void *private)
+{
+ const char *al = smp->data.u.str.str;
+ const char *end = al + smp->data.u.str.len;
+ const char *token;
+ int toklen;
+ int qvalue;
+ const char *str;
+ const char *w;
+ int best_q = 0;
+
+ /* Set the CONST flag on the sample, because the output of the
+ * function will point into the constant configuration string.
+ */
+ smp->flags |= SMP_F_CONST;
+ smp->data.u.str.size = 0;
+ smp->data.u.str.str = "";
+ smp->data.u.str.len = 0;
+
+ /* Parse the accept language */
+ while (1) {
+
+ /* Jump spaces, quit if the end is detected. */
+ while (al < end && isspace((unsigned char)*al))
+ al++;
+ if (al >= end)
+ break;
+
+ /* Start of the first word. */
+ token = al;
+
+ /* Look for separator: isspace(), ',' or ';'. Next value if 0 length word. */
+ while (al < end && *al != ';' && *al != ',' && !isspace((unsigned char)*al))
+ al++;
+ if (al == token)
+ goto expect_comma;
+
+ /* Length of the token. */
+ toklen = al - token;
+ qvalue = 1000;
+
+ /* Check if the token exists in the list. If the token does not
+ * exist, jump to the next token.
+ */
+ str = args[0].data.str.str;
+ w = str;
+ while (1) {
+ if (*str == ';' || *str == '\0') {
+ if (language_range_match(token, toklen, w, str-w))
+ goto look_for_q;
+ if (*str == '\0')
+ goto expect_comma;
+ w = str + 1;
+ }
+ str++;
+ }
+ goto expect_comma;
+
+look_for_q:
+
+ /* Jump spaces, quit if the end is detected. */
+ while (al < end && isspace((unsigned char)*al))
+ al++;
+ if (al >= end)
+ goto process_value;
+
+ /* If ',' is found, process the result */
+ if (*al == ',')
+ goto process_value;
+
+ /* If the character is different from ';', look
+ * for the end of the header part in best effort.
+ */
+ if (*al != ';')
+ goto expect_comma;
+
+ /* Assumes that the char is ';', now expect "q=". */
+ al++;
+
+ /* Jump spaces, process value if the end is detected. */
+ while (al < end && isspace((unsigned char)*al))
+ al++;
+ if (al >= end)
+ goto process_value;
+
+ /* Expect 'q'. If no 'q', continue in best effort */
+ if (*al != 'q')
+ goto process_value;
+ al++;
+
+ /* Jump spaces, process value if the end is detected. */
+ while (al < end && isspace((unsigned char)*al))
+ al++;
+ if (al >= end)
+ goto process_value;
+
+ /* Expect '='. If no '=', continue in best effort */
+ if (*al != '=')
+ goto process_value;
+ al++;
+
+ /* Jump spaces, process value if the end is detected. */
+ while (al < end && isspace((unsigned char)*al))
+ al++;
+ if (al >= end)
+ goto process_value;
+
+ /* Parse the q value. */
+ qvalue = parse_qvalue(al, &al);
+
+process_value:
+
+ /* If the new q value is the best q value, then store the associated
+ * language in the response. If qvalue is the biggest value (1000),
+ * break the process.
+ */
+ if (qvalue > best_q) {
+ smp->data.u.str.str = (char *)w;
+ smp->data.u.str.len = str - w;
+ if (qvalue >= 1000)
+ break;
+ best_q = qvalue;
+ }
+
+expect_comma:
+
+ /* Expect comma or end. If the end is detected, quit the loop. */
+ while (al < end && *al != ',')
+ al++;
+ if (al >= end)
+ break;
+
+ /* Comma is found, jump it and restart the analyzer. */
+ al++;
+ }
+
+ /* Set default value if required. */
+ if (smp->data.u.str.len == 0 && args[1].type == ARGT_STR) {
+ smp->data.u.str.str = args[1].data.str.str;
+ smp->data.u.str.len = args[1].data.str.len;
+ }
+
+ /* Return true only if a matching language was found. */
+ return smp->data.u.str.len != 0;
+}
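+
+/* Example configuration using the "language" converter implemented above to
+ * select a backend matching the client's preferred language (a sketch; the
+ * backend name is illustrative):
+ *
+ *   use_backend english if { req.fhdr(accept-language),language(en;fr;de,en) -m str en }
+ */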
+
+/* This converter url-decodes the input string. */
+static int sample_conv_url_dec(const struct arg *args, struct sample *smp, void *private)
+{
+ /* If the constant flag is set or if no space is available at
+ * the end of the buffer, copy the string into another buffer
+ * before decoding.
+ */
+ if (smp->flags & SMP_F_CONST || smp->data.u.str.size <= smp->data.u.str.len) {
+ struct chunk *str = get_trash_chunk();
+ memcpy(str->str, smp->data.u.str.str, smp->data.u.str.len);
+ smp->data.u.str.str = str->str;
+ smp->data.u.str.size = str->size;
+ smp->flags &= ~SMP_F_CONST;
+ }
+
+ /* Add final \0 required by url_decode(), and convert the input string. */
+ smp->data.u.str.str[smp->data.u.str.len] = '\0';
+ smp->data.u.str.len = url_decode(smp->data.u.str.str);
+ return 1;
+}
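+
+/* Example: capture the url-decoded value of a query parameter using the
+ * url_dec converter above (a sketch; the "q" parameter name is illustrative):
+ *
+ *   http-request capture url_param(q),url_dec len 32
+ */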
+
+static int smp_conv_req_capture(const struct arg *args, struct sample *smp, void *private)
+{
+ struct proxy *fe = strm_fe(smp->strm);
+ int idx, i;
+ struct cap_hdr *hdr;
+ int len;
+
+ if (!args || args->type != ARGT_SINT)
+ return 0;
+
+ idx = args->data.sint;
+
+ /* Check the availability of the capture id. */
+ if (idx > fe->nb_req_cap - 1)
+ return 0;
+
+ /* Look for the original configuration. */
+ for (hdr = fe->req_cap, i = fe->nb_req_cap - 1;
+ hdr != NULL && i != idx ;
+ i--, hdr = hdr->next);
+ if (!hdr)
+ return 0;
+
+ /* check for the memory allocation */
+ if (smp->strm->req_cap[hdr->index] == NULL)
+ smp->strm->req_cap[hdr->index] = pool_alloc2(hdr->pool);
+ if (smp->strm->req_cap[hdr->index] == NULL)
+ return 0;
+
+ /* Check length. */
+ len = smp->data.u.str.len;
+ if (len > hdr->len)
+ len = hdr->len;
+
+ /* Capture input data. */
+ memcpy(smp->strm->req_cap[idx], smp->data.u.str.str, len);
+ smp->strm->req_cap[idx][len] = '\0';
+
+ return 1;
+}
+
+static int smp_conv_res_capture(const struct arg *args, struct sample *smp, void *private)
+{
+ struct proxy *fe = strm_fe(smp->strm);
+ int idx, i;
+ struct cap_hdr *hdr;
+ int len;
+
+ if (!args || args->type != ARGT_SINT)
+ return 0;
+
+ idx = args->data.sint;
+
+ /* Check the availability of the capture id. */
+ if (idx > fe->nb_rsp_cap - 1)
+ return 0;
+
+ /* Look for the original configuration. */
+ for (hdr = fe->rsp_cap, i = fe->nb_rsp_cap - 1;
+ hdr != NULL && i != idx ;
+ i--, hdr = hdr->next);
+ if (!hdr)
+ return 0;
+
+ /* check for the memory allocation */
+ if (smp->strm->res_cap[hdr->index] == NULL)
+ smp->strm->res_cap[hdr->index] = pool_alloc2(hdr->pool);
+ if (smp->strm->res_cap[hdr->index] == NULL)
+ return 0;
+
+ /* Check length. */
+ len = smp->data.u.str.len;
+ if (len > hdr->len)
+ len = hdr->len;
+
+ /* Capture input data. */
+ memcpy(smp->strm->res_cap[idx], smp->data.u.str.str, len);
+ smp->strm->res_cap[idx][len] = '\0';
+
+ return 1;
+}
+
+/* This function executes one of the set-{method,path,query,uri} actions. It
+ * takes the string from the variable 'replace' with length 'len', then modifies
+ * the relevant part of the request line accordingly. Then it updates various
+ * pointers to the next elements which were moved, and the total buffer length.
+ * It finds the action to be performed in p[2], previously filled by function
+ * parse_set_req_line(). It returns 0 in case of success, -1 in case of internal
+ * error, though this can be revisited when this code is finally exploited.
+ *
+ * 'action' can be '0' to replace the method, '1' to replace the path, '2' to
+ * replace the query string and '3' to replace the URI.
+ *
+ * In the query string case, the question mark '?' must be set at the start of
+ * the string by the caller, even if the replacement query string is empty.
+ */
+int http_replace_req_line(int action, const char *replace, int len,
+ struct proxy *px, struct stream *s)
+{
+ struct http_txn *txn = s->txn;
+ char *cur_ptr, *cur_end;
+ int offset = 0;
+ int delta;
+
+ switch (action) {
+ case 0: // method
+ cur_ptr = s->req.buf->p;
+ cur_end = cur_ptr + txn->req.sl.rq.m_l;
+
+ /* adjust req line offsets and lengths */
+ delta = len - offset - (cur_end - cur_ptr);
+ txn->req.sl.rq.m_l += delta;
+ txn->req.sl.rq.u += delta;
+ txn->req.sl.rq.v += delta;
+ break;
+
+ case 1: // path
+ cur_ptr = http_get_path(txn);
+ if (!cur_ptr)
+ cur_ptr = s->req.buf->p + txn->req.sl.rq.u;
+
+ cur_end = cur_ptr;
+ while (cur_end < s->req.buf->p + txn->req.sl.rq.u + txn->req.sl.rq.u_l && *cur_end != '?')
+ cur_end++;
+
+ /* adjust req line offsets and lengths */
+ delta = len - offset - (cur_end - cur_ptr);
+ txn->req.sl.rq.u_l += delta;
+ txn->req.sl.rq.v += delta;
+ break;
+
+ case 2: // query
+ offset = 1;
+ cur_ptr = s->req.buf->p + txn->req.sl.rq.u;
+ cur_end = cur_ptr + txn->req.sl.rq.u_l;
+ while (cur_ptr < cur_end && *cur_ptr != '?')
+ cur_ptr++;
+
+ /* skip the question mark or indicate that we must insert it
+ * (but only if the format string is not empty then).
+ */
+ if (cur_ptr < cur_end)
+ cur_ptr++;
+ else if (len > 1)
+ offset = 0;
+
+ /* adjust req line offsets and lengths */
+ delta = len - offset - (cur_end - cur_ptr);
+ txn->req.sl.rq.u_l += delta;
+ txn->req.sl.rq.v += delta;
+ break;
+
+ case 3: // uri
+ cur_ptr = s->req.buf->p + txn->req.sl.rq.u;
+ cur_end = cur_ptr + txn->req.sl.rq.u_l;
+
+ /* adjust req line offsets and lengths */
+ delta = len - offset - (cur_end - cur_ptr);
+ txn->req.sl.rq.u_l += delta;
+ txn->req.sl.rq.v += delta;
+ break;
+
+ default:
+ return -1;
+ }
+
+ /* commit changes and adjust end of message */
+ delta = buffer_replace2(s->req.buf, cur_ptr, cur_end, replace + offset, len - offset);
+ txn->req.sl.rq.l += delta;
+ txn->hdr_idx.v[0].len += delta;
+ http_msg_move_end(&txn->req, delta);
+ return 0;
+}
+
+/* This function replaces the HTTP status code and the associated message. The
+ * variable <status> contains the new status code. This function never fails.
+ */
+void http_set_status(unsigned int status, struct stream *s)
+{
+ struct http_txn *txn = s->txn;
+ char *cur_ptr, *cur_end;
+ int delta;
+ char *res;
+ int c_l;
+ const char *msg;
+ int msg_len;
+
+ chunk_reset(&trash);
+
+ res = ultoa_o(status, trash.str, trash.size);
+ c_l = res - trash.str;
+
+ trash.str[c_l] = ' ';
+ trash.len = c_l + 1;
+
+ msg = get_reason(status);
+ msg_len = strlen(msg);
+
+ strncpy(&trash.str[trash.len], msg, trash.size - trash.len);
+ trash.len += msg_len;
+
+ cur_ptr = s->res.buf->p + txn->rsp.sl.st.c;
+ cur_end = s->res.buf->p + txn->rsp.sl.st.r + txn->rsp.sl.st.r_l;
+
+ /* commit changes and adjust message */
+ delta = buffer_replace2(s->res.buf, cur_ptr, cur_end, trash.str, trash.len);
+
+ /* adjust res line offsets and lengths */
+ txn->rsp.sl.st.r += c_l - txn->rsp.sl.st.c_l;
+ txn->rsp.sl.st.c_l = c_l;
+ txn->rsp.sl.st.r_l = msg_len;
+
+ delta = trash.len - (cur_end - cur_ptr);
+ txn->rsp.sl.st.l += delta;
+ txn->hdr_idx.v[0].len += delta;
+ http_msg_move_end(&txn->rsp, delta);
+}
+
+/* This function executes one of the set-{method,path,query,uri} actions. It
+ * builds a string in the trash from the specified format string. It finds
+ * the action to be performed in <http.action>, previously filled by function
+ * parse_set_req_line(). The replacement itself is performed by the function
+ * http_replace_req_line(). It always returns ACT_RET_CONT. If an error
+ * occurs, the action is canceled, but the rule processing continues.
+ */
+enum act_return http_action_set_req_line(struct act_rule *rule, struct proxy *px,
+ struct session *sess, struct stream *s, int flags)
+{
+ chunk_reset(&trash);
+
+ /* If we have to create a query string, prepare a '?'. */
+ if (rule->arg.http.action == 2)
+ trash.str[trash.len++] = '?';
+ trash.len += build_logline(s, trash.str + trash.len, trash.size - trash.len, &rule->arg.http.logfmt);
+
+ http_replace_req_line(rule->arg.http.action, trash.str, trash.len, px, s);
+ return ACT_RET_CONT;
+}
+
+/* This function is just a compliant action wrapper for "set-status". */
+enum act_return action_http_set_status(struct act_rule *rule, struct proxy *px,
+ struct session *sess, struct stream *s, int flags)
+{
+ http_set_status(rule->arg.status.code, s);
+ return ACT_RET_CONT;
+}
+
+/* parse an http-request action among :
+ * set-method
+ * set-path
+ * set-query
+ * set-uri
+ *
+ * All of them accept a single argument of type string representing a log-format.
+ * The resulting rule makes use of arg->act.p[0..1] to store the log-format list
+ * head, and p[2] to store the action as an int (0=method, 1=path, 2=query, 3=uri).
+ * It returns ACT_RET_PRS_OK on success, ACT_RET_PRS_ERR on error.
+ */
+enum act_parse_ret parse_set_req_line(const char **args, int *orig_arg, struct proxy *px,
+ struct act_rule *rule, char **err)
+{
+ int cur_arg = *orig_arg;
+
+ rule->action = ACT_CUSTOM;
+
+ switch (args[0][4]) {
+ case 'm' :
+ rule->arg.http.action = 0;
+ rule->action_ptr = http_action_set_req_line;
+ break;
+ case 'p' :
+ rule->arg.http.action = 1;
+ rule->action_ptr = http_action_set_req_line;
+ break;
+ case 'q' :
+ rule->arg.http.action = 2;
+ rule->action_ptr = http_action_set_req_line;
+ break;
+ case 'u' :
+ rule->arg.http.action = 3;
+ rule->action_ptr = http_action_set_req_line;
+ break;
+ default:
+ memprintf(err, "internal error: unhandled action '%s'", args[0]);
+ return ACT_RET_PRS_ERR;
+ }
+
+ if (!*args[cur_arg] ||
+ (*args[cur_arg + 1] && strcmp(args[cur_arg + 1], "if") != 0 && strcmp(args[cur_arg + 1], "unless") != 0)) {
+ memprintf(err, "expects exactly 1 argument <format>");
+ return ACT_RET_PRS_ERR;
+ }
+
+ LIST_INIT(&rule->arg.http.logfmt);
+ px->conf.args.ctx = ARGC_HRQ;
+ parse_logformat_string(args[cur_arg], px, &rule->arg.http.logfmt, LOG_OPT_HTTP,
+ (px->cap & PR_CAP_FE) ? SMP_VAL_FE_HRQ_HDR : SMP_VAL_BE_HRQ_HDR,
+ px->conf.args.file, px->conf.args.line);
+
+ (*orig_arg)++;
+ return ACT_RET_PRS_OK;
+}
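+
+/* Example configuration using the set-{method,path,query,uri} actions parsed
+ * above (a sketch; the paths and parameter names are illustrative):
+ *
+ *   http-request set-path /%[hdr(host)]%[path]
+ *   http-request set-query foo=%[url_param(foo)]
+ */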
+
+/* parse set-status action:
+ * This action accepts a single argument of type int representing
+ * an http status code. It returns ACT_RET_PRS_OK on success,
+ * ACT_RET_PRS_ERR on error.
+ */
+enum act_parse_ret parse_http_set_status(const char **args, int *orig_arg, struct proxy *px,
+ struct act_rule *rule, char **err)
+{
+ char *error;
+
+ rule->action = ACT_CUSTOM;
+ rule->action_ptr = action_http_set_status;
+
+ /* Check if an argument is available */
+ if (!*args[*orig_arg]) {
+ memprintf(err, "expects exactly 1 argument <status>");
+ return ACT_RET_PRS_ERR;
+ }
+
+ /* convert status code as integer */
+ rule->arg.status.code = strtol(args[*orig_arg], &error, 10);
+ if (*error != '\0' || rule->arg.status.code < 100 || rule->arg.status.code > 999) {
+ memprintf(err, "expects an integer status code between 100 and 999");
+ return ACT_RET_PRS_ERR;
+ }
+
+ (*orig_arg)++;
+ return ACT_RET_PRS_OK;
+}
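+
+/* Example configuration using the set-status action implemented above
+ * (a sketch):
+ *
+ *   http-response set-status 503 if { status eq 500 }
+ */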
+
+/* This function executes the "capture" action. It executes a fetch expression,
+ * turns the result into a string and puts it in a capture slot. It always
+ * returns ACT_RET_CONT. If an error occurs the action is cancelled, but the
+ * rule processing continues.
+ */
+enum act_return http_action_req_capture(struct act_rule *rule, struct proxy *px,
+ struct session *sess, struct stream *s, int flags)
+{
+ struct sample *key;
+ struct cap_hdr *h = rule->arg.cap.hdr;
+ char **cap = s->req_cap;
+ int len;
+
+ key = sample_fetch_as_type(s->be, sess, s, SMP_OPT_DIR_REQ|SMP_OPT_FINAL, rule->arg.cap.expr, SMP_T_STR);
+ if (!key)
+ return ACT_RET_CONT;
+
+ if (cap[h->index] == NULL)
+ cap[h->index] = pool_alloc2(h->pool);
+
+ if (cap[h->index] == NULL) /* no more capture memory */
+ return ACT_RET_CONT;
+
+ len = key->data.u.str.len;
+ if (len > h->len)
+ len = h->len;
+
+ memcpy(cap[h->index], key->data.u.str.str, len);
+ cap[h->index][len] = 0;
+ return ACT_RET_CONT;
+}
+
+/* This function executes the "capture" action and stores the result in a
+ * capture slot if one exists. It executes a fetch expression, turns the result
+ * into a string and puts it in a capture slot. It always returns ACT_RET_CONT.
+ * If an error occurs the action is cancelled, but the rule processing continues.
+ */
+enum act_return http_action_req_capture_by_id(struct act_rule *rule, struct proxy *px,
+ struct session *sess, struct stream *s, int flags)
+{
+ struct sample *key;
+ struct cap_hdr *h;
+ char **cap = s->req_cap;
+ struct proxy *fe = strm_fe(s);
+ int len;
+ int i;
+
+ /* Look for the original configuration. */
+ for (h = fe->req_cap, i = fe->nb_req_cap - 1;
+ h != NULL && i != rule->arg.capid.idx ;
+ i--, h = h->next);
+ if (!h)
+ return ACT_RET_CONT;
+
+ key = sample_fetch_as_type(s->be, sess, s, SMP_OPT_DIR_REQ|SMP_OPT_FINAL, rule->arg.capid.expr, SMP_T_STR);
+ if (!key)
+ return ACT_RET_CONT;
+
+ if (cap[h->index] == NULL)
+ cap[h->index] = pool_alloc2(h->pool);
+
+ if (cap[h->index] == NULL) /* no more capture memory */
+ return ACT_RET_CONT;
+
+ len = key->data.u.str.len;
+ if (len > h->len)
+ len = h->len;
+
+ memcpy(cap[h->index], key->data.u.str.str, len);
+ cap[h->index][len] = 0;
+ return ACT_RET_CONT;
+}
+
+/* parse an "http-request capture" action. It takes a single argument which is
+ * a sample fetch expression. It stores the expression into arg->act.p[0] and
+ * the allocated cap_hdr struct or the preallocated "id" into arg->act.p[1].
+ * It returns ACT_RET_PRS_OK on success, ACT_RET_PRS_ERR on error.
+ */
+enum act_parse_ret parse_http_req_capture(const char **args, int *orig_arg, struct proxy *px,
+ struct act_rule *rule, char **err)
+{
+ struct sample_expr *expr;
+ struct cap_hdr *hdr;
+ int cur_arg;
+ int len = 0;
+
+ for (cur_arg = *orig_arg; cur_arg < *orig_arg + 3 && *args[cur_arg]; cur_arg++)
+ if (strcmp(args[cur_arg], "if") == 0 ||
+ strcmp(args[cur_arg], "unless") == 0)
+ break;
+
+ if (cur_arg < *orig_arg + 3) {
+ memprintf(err, "expects <expression> [ 'len' <length> | 'id' <idx> ]");
+ return ACT_RET_PRS_ERR;
+ }
+
+ cur_arg = *orig_arg;
+ expr = sample_parse_expr((char **)args, &cur_arg, px->conf.args.file, px->conf.args.line, err, &px->conf.args);
+ if (!expr)
+ return ACT_RET_PRS_ERR;
+
+ if (!(expr->fetch->val & SMP_VAL_FE_HRQ_HDR)) {
+ memprintf(err,
+ "fetch method '%s' extracts information from '%s', none of which is available here",
+ args[cur_arg-1], sample_src_names(expr->fetch->use));
+ free(expr);
+ return ACT_RET_PRS_ERR;
+ }
+
+ if (!args[cur_arg] || !*args[cur_arg]) {
+ memprintf(err, "expects 'len' or 'id'");
+ free(expr);
+ return ACT_RET_PRS_ERR;
+ }
+
+ if (strcmp(args[cur_arg], "len") == 0) {
+ cur_arg++;
+
+ if (!(px->cap & PR_CAP_FE)) {
+ memprintf(err, "proxy '%s' has no frontend capability", px->id);
+ return ACT_RET_PRS_ERR;
+ }
+
+ px->conf.args.ctx = ARGC_CAP;
+
+ if (!args[cur_arg]) {
+ memprintf(err, "missing length value");
+ free(expr);
+ return ACT_RET_PRS_ERR;
+ }
+ /* parse the length and check its validity */
+ len = atoi(args[cur_arg]);
+ if (len <= 0) {
+ memprintf(err, "length must be > 0");
+ free(expr);
+ return ACT_RET_PRS_ERR;
+ }
+ cur_arg++;
+
+ if (!len) {
+ memprintf(err, "a positive 'len' argument is mandatory");
+ free(expr);
+ return ACT_RET_PRS_ERR;
+ }
+
+ hdr = calloc(1, sizeof(struct cap_hdr));
+ hdr->next = px->req_cap;
+ hdr->name = NULL; /* not a header capture */
+ hdr->namelen = 0;
+ hdr->len = len;
+ hdr->pool = create_pool("caphdr", hdr->len + 1, MEM_F_SHARED);
+ hdr->index = px->nb_req_cap++;
+
+ px->req_cap = hdr;
+ px->to_log |= LW_REQHDR;
+
+ rule->action = ACT_CUSTOM;
+ rule->action_ptr = http_action_req_capture;
+ rule->arg.cap.expr = expr;
+ rule->arg.cap.hdr = hdr;
+ }
+
+ else if (strcmp(args[cur_arg], "id") == 0) {
+ int id;
+ char *error;
+
+ cur_arg++;
+
+ if (!args[cur_arg]) {
+ memprintf(err, "missing id value");
+ free(expr);
+ return ACT_RET_PRS_ERR;
+ }
+
+ id = strtol(args[cur_arg], &error, 10);
+ if (*error != '\0') {
+ memprintf(err, "cannot parse id '%s'", args[cur_arg]);
+ free(expr);
+ return ACT_RET_PRS_ERR;
+ }
+ cur_arg++;
+
+ px->conf.args.ctx = ARGC_CAP;
+
+ rule->action = ACT_CUSTOM;
+ rule->action_ptr = http_action_req_capture_by_id;
+ rule->arg.capid.expr = expr;
+ rule->arg.capid.idx = id;
+ }
+
+ else {
+ memprintf(err, "expects 'len' or 'id', found '%s'", args[cur_arg]);
+ free(expr);
+ return ACT_RET_PRS_ERR;
+ }
+
+ *orig_arg = cur_arg;
+ return ACT_RET_PRS_OK;
+}
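+
+/* Example configuration using "http-request capture" with both the 'len' and
+ * 'id' forms parsed above (a sketch; the cookie name is illustrative):
+ *
+ *   declare capture request len 32
+ *   http-request capture req.cook(sessionid) id 0
+ *   http-request capture req.hdr(user-agent) len 64
+ */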
+
+/* This function executes the "capture" action and stores the result in a
+ * capture slot if one exists. It executes a fetch expression, turns the result
+ * into a string and puts it in a capture slot. It always returns ACT_RET_CONT.
+ * If an error occurs the action is cancelled, but the rule processing continues.
+ */
+enum act_return http_action_res_capture_by_id(struct act_rule *rule, struct proxy *px,
+ struct session *sess, struct stream *s, int flags)
+{
+ struct sample *key;
+ struct cap_hdr *h;
+ char **cap = s->res_cap;
+ struct proxy *fe = strm_fe(s);
+ int len;
+ int i;
+
+ /* Look for the original configuration. */
+ for (h = fe->rsp_cap, i = fe->nb_rsp_cap - 1;
+ h != NULL && i != rule->arg.capid.idx ;
+ i--, h = h->next);
+ if (!h)
+ return ACT_RET_CONT;
+
+ key = sample_fetch_as_type(s->be, sess, s, SMP_OPT_DIR_RES|SMP_OPT_FINAL, rule->arg.capid.expr, SMP_T_STR);
+ if (!key)
+ return ACT_RET_CONT;
+
+ if (cap[h->index] == NULL)
+ cap[h->index] = pool_alloc2(h->pool);
+
+ if (cap[h->index] == NULL) /* no more capture memory */
+ return ACT_RET_CONT;
+
+ len = key->data.u.str.len;
+ if (len > h->len)
+ len = h->len;
+
+ memcpy(cap[h->index], key->data.u.str.str, len);
+ cap[h->index][len] = 0;
+ return ACT_RET_CONT;
+}
+
+/* parse an "http-response capture" action. It takes a single argument which is
+ * a sample fetch expression. It stores the expression into arg->act.p[0] and
+ * the allocated hdr_cap struct or the preallocated id into arg->act.p[1].
+ * It returns ACT_RET_PRS_OK on success, ACT_RET_PRS_ERR on error.
+ */
+ */
+enum act_parse_ret parse_http_res_capture(const char **args, int *orig_arg, struct proxy *px,
+ struct act_rule *rule, char **err)
+{
+ struct sample_expr *expr;
+ int cur_arg;
+ int id;
+ char *error;
+
+ for (cur_arg = *orig_arg; cur_arg < *orig_arg + 3 && *args[cur_arg]; cur_arg++)
+ if (strcmp(args[cur_arg], "if") == 0 ||
+ strcmp(args[cur_arg], "unless") == 0)
+ break;
+
+ if (cur_arg < *orig_arg + 3) {
+ memprintf(err, "expects <expression> [ 'len' <length> | 'id' <idx> ]");
+ return ACT_RET_PRS_ERR;
+ }
+
+ cur_arg = *orig_arg;
+ expr = sample_parse_expr((char **)args, &cur_arg, px->conf.args.file, px->conf.args.line, err, &px->conf.args);
+ if (!expr)
+ return ACT_RET_PRS_ERR;
+
+ if (!(expr->fetch->val & SMP_VAL_FE_HRS_HDR)) {
+ memprintf(err,
+ "fetch method '%s' extracts information from '%s', none of which is available here",
+ args[cur_arg-1], sample_src_names(expr->fetch->use));
+ free(expr);
+ return ACT_RET_PRS_ERR;
+ }
+
+ if (!args[cur_arg] || !*args[cur_arg]) {
+ memprintf(err, "expects 'len' or 'id'");
+ free(expr);
+ return ACT_RET_PRS_ERR;
+ }
+
+ if (strcmp(args[cur_arg], "id") != 0) {
+ memprintf(err, "expects 'id', found '%s'", args[cur_arg]);
+ free(expr);
+ return ACT_RET_PRS_ERR;
+ }
+
+ cur_arg++;
+
+ if (!args[cur_arg]) {
+ memprintf(err, "missing id value");
+ free(expr);
+ return ACT_RET_PRS_ERR;
+ }
+
+ id = strtol(args[cur_arg], &error, 10);
+ if (*error != '\0') {
+ memprintf(err, "cannot parse id '%s'", args[cur_arg]);
+ free(expr);
+ return ACT_RET_PRS_ERR;
+ }
+ cur_arg++;
+
+ px->conf.args.ctx = ARGC_CAP;
+
+ rule->action = ACT_CUSTOM;
+ rule->action_ptr = http_action_res_capture_by_id;
+ rule->arg.capid.expr = expr;
+ rule->arg.capid.idx = id;
+
+ *orig_arg = cur_arg;
+ return ACT_RET_PRS_OK;
+}
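+
+/* Illustrative configuration parsed by the function above (a hedged sketch:
+ * the header name and slot size are arbitrary examples; "id 0" must reference
+ * a response capture slot declared in the frontend):
+ *
+ *     declare capture response len 32
+ *     http-response capture res.hdr(Server) id 0
+ */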
+
+/*
+ * Return the struct action_kw associated to a keyword.
+ */
+struct action_kw *action_http_req_custom(const char *kw)
+{
+ return action_lookup(&http_req_keywords.list, kw);
+}
+
+/*
+ * Return the struct action_kw associated to a keyword.
+ */
+struct action_kw *action_http_res_custom(const char *kw)
+{
+ return action_lookup(&http_res_keywords.list, kw);
+}
+
+/************************************************************************/
+/* All supported ACL keywords must be declared here. */
+/************************************************************************/
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Please take care of keeping this list alphabetically sorted.
+ */
+static struct acl_kw_list acl_kws = {ILH, {
+ { "base", "base", PAT_MATCH_STR },
+ { "base_beg", "base", PAT_MATCH_BEG },
+ { "base_dir", "base", PAT_MATCH_DIR },
+ { "base_dom", "base", PAT_MATCH_DOM },
+ { "base_end", "base", PAT_MATCH_END },
+ { "base_len", "base", PAT_MATCH_LEN },
+ { "base_reg", "base", PAT_MATCH_REG },
+ { "base_sub", "base", PAT_MATCH_SUB },
+
+ { "cook", "req.cook", PAT_MATCH_STR },
+ { "cook_beg", "req.cook", PAT_MATCH_BEG },
+ { "cook_dir", "req.cook", PAT_MATCH_DIR },
+ { "cook_dom", "req.cook", PAT_MATCH_DOM },
+ { "cook_end", "req.cook", PAT_MATCH_END },
+ { "cook_len", "req.cook", PAT_MATCH_LEN },
+ { "cook_reg", "req.cook", PAT_MATCH_REG },
+ { "cook_sub", "req.cook", PAT_MATCH_SUB },
+
+ { "hdr", "req.hdr", PAT_MATCH_STR },
+ { "hdr_beg", "req.hdr", PAT_MATCH_BEG },
+ { "hdr_dir", "req.hdr", PAT_MATCH_DIR },
+ { "hdr_dom", "req.hdr", PAT_MATCH_DOM },
+ { "hdr_end", "req.hdr", PAT_MATCH_END },
+ { "hdr_len", "req.hdr", PAT_MATCH_LEN },
+ { "hdr_reg", "req.hdr", PAT_MATCH_REG },
+ { "hdr_sub", "req.hdr", PAT_MATCH_SUB },
+
+ /* these two declarations use strings with list storage (in place
+ * of tree storage). The basic match is PAT_MATCH_STR, but the indexing
+ * and delete functions are relative to the list management. The parse
+ * and match methods are related to the corresponding fetch methods.
+ * This is a very particular ACL declaration mode.
+ */
+ { "http_auth_group", NULL, PAT_MATCH_STR, NULL, pat_idx_list_str, pat_del_list_ptr, NULL, pat_match_auth },
+ { "method", NULL, PAT_MATCH_STR, pat_parse_meth, pat_idx_list_str, pat_del_list_ptr, NULL, pat_match_meth },
+
+ { "path", "path", PAT_MATCH_STR },
+ { "path_beg", "path", PAT_MATCH_BEG },
+ { "path_dir", "path", PAT_MATCH_DIR },
+ { "path_dom", "path", PAT_MATCH_DOM },
+ { "path_end", "path", PAT_MATCH_END },
+ { "path_len", "path", PAT_MATCH_LEN },
+ { "path_reg", "path", PAT_MATCH_REG },
+ { "path_sub", "path", PAT_MATCH_SUB },
+
+ { "req_ver", "req.ver", PAT_MATCH_STR },
+ { "resp_ver", "res.ver", PAT_MATCH_STR },
+
+ { "scook", "res.cook", PAT_MATCH_STR },
+ { "scook_beg", "res.cook", PAT_MATCH_BEG },
+ { "scook_dir", "res.cook", PAT_MATCH_DIR },
+ { "scook_dom", "res.cook", PAT_MATCH_DOM },
+ { "scook_end", "res.cook", PAT_MATCH_END },
+ { "scook_len", "res.cook", PAT_MATCH_LEN },
+ { "scook_reg", "res.cook", PAT_MATCH_REG },
+ { "scook_sub", "res.cook", PAT_MATCH_SUB },
+
+ { "shdr", "res.hdr", PAT_MATCH_STR },
+ { "shdr_beg", "res.hdr", PAT_MATCH_BEG },
+ { "shdr_dir", "res.hdr", PAT_MATCH_DIR },
+ { "shdr_dom", "res.hdr", PAT_MATCH_DOM },
+ { "shdr_end", "res.hdr", PAT_MATCH_END },
+ { "shdr_len", "res.hdr", PAT_MATCH_LEN },
+ { "shdr_reg", "res.hdr", PAT_MATCH_REG },
+ { "shdr_sub", "res.hdr", PAT_MATCH_SUB },
+
+ { "url", "url", PAT_MATCH_STR },
+ { "url_beg", "url", PAT_MATCH_BEG },
+ { "url_dir", "url", PAT_MATCH_DIR },
+ { "url_dom", "url", PAT_MATCH_DOM },
+ { "url_end", "url", PAT_MATCH_END },
+ { "url_len", "url", PAT_MATCH_LEN },
+ { "url_reg", "url", PAT_MATCH_REG },
+ { "url_sub", "url", PAT_MATCH_SUB },
+
+ { "urlp", "urlp", PAT_MATCH_STR },
+ { "urlp_beg", "urlp", PAT_MATCH_BEG },
+ { "urlp_dir", "urlp", PAT_MATCH_DIR },
+ { "urlp_dom", "urlp", PAT_MATCH_DOM },
+ { "urlp_end", "urlp", PAT_MATCH_END },
+ { "urlp_len", "urlp", PAT_MATCH_LEN },
+ { "urlp_reg", "urlp", PAT_MATCH_REG },
+ { "urlp_sub", "urlp", PAT_MATCH_SUB },
+
+ { /* END */ },
+}};
+
+/************************************************************************/
+/* All supported pattern keywords must be declared here. */
+/************************************************************************/
+/* Note: must not be declared <const> as its list will be overwritten */
+static struct sample_fetch_kw_list sample_fetch_keywords = {ILH, {
+ { "base", smp_fetch_base, 0, NULL, SMP_T_STR, SMP_USE_HRQHV },
+ { "base32", smp_fetch_base32, 0, NULL, SMP_T_SINT, SMP_USE_HRQHV },
+ { "base32+src", smp_fetch_base32_src, 0, NULL, SMP_T_BIN, SMP_USE_HRQHV },
+
+ /* captures are allocated and are permanent in the stream */
+ { "capture.req.hdr", smp_fetch_capture_header_req, ARG1(1,SINT), NULL, SMP_T_STR, SMP_USE_HRQHP },
+
+ /* retrieve these captures from the HTTP logs */
+ { "capture.req.method", smp_fetch_capture_req_method, 0, NULL, SMP_T_STR, SMP_USE_HRQHP },
+ { "capture.req.uri", smp_fetch_capture_req_uri, 0, NULL, SMP_T_STR, SMP_USE_HRQHP },
+ { "capture.req.ver", smp_fetch_capture_req_ver, 0, NULL, SMP_T_STR, SMP_USE_HRQHP },
+
+ { "capture.res.hdr", smp_fetch_capture_header_res, ARG1(1,SINT), NULL, SMP_T_STR, SMP_USE_HRSHP },
+ { "capture.res.ver", smp_fetch_capture_res_ver, 0, NULL, SMP_T_STR, SMP_USE_HRQHP },
+
+ /* cookie is valid in both directions (eg: for "stick ...") but cook*
+ * are only here to match the ACL's name, are request-only and are used
+ * for ACL compatibility only.
+ */
+ { "cook", smp_fetch_cookie, ARG1(0,STR), NULL, SMP_T_STR, SMP_USE_HRQHV },
+ { "cookie", smp_fetch_cookie, ARG1(0,STR), NULL, SMP_T_STR, SMP_USE_HRQHV|SMP_USE_HRSHV },
+ { "cook_cnt", smp_fetch_cookie_cnt, ARG1(0,STR), NULL, SMP_T_SINT, SMP_USE_HRQHV },
+ { "cook_val", smp_fetch_cookie_val, ARG1(0,STR), NULL, SMP_T_SINT, SMP_USE_HRQHV },
+
+ /* hdr is valid in both directions (eg: for "stick ...") but hdr_* are
+ * only here to match the ACL's name, are request-only and are used for
+ * ACL compatibility only.
+ */
+ { "hdr", smp_fetch_hdr, ARG2(0,STR,SINT), val_hdr, SMP_T_STR, SMP_USE_HRQHV|SMP_USE_HRSHV },
+ { "hdr_cnt", smp_fetch_hdr_cnt, ARG1(0,STR), NULL, SMP_T_SINT, SMP_USE_HRQHV },
+ { "hdr_ip", smp_fetch_hdr_ip, ARG2(0,STR,SINT), val_hdr, SMP_T_IPV4, SMP_USE_HRQHV },
+ { "hdr_val", smp_fetch_hdr_val, ARG2(0,STR,SINT), val_hdr, SMP_T_SINT, SMP_USE_HRQHV },
+
+ { "http_auth", smp_fetch_http_auth, ARG1(1,USR), NULL, SMP_T_BOOL, SMP_USE_HRQHV },
+ { "http_auth_group", smp_fetch_http_auth_grp, ARG1(1,USR), NULL, SMP_T_STR, SMP_USE_HRQHV },
+ { "http_first_req", smp_fetch_http_first_req, 0, NULL, SMP_T_BOOL, SMP_USE_HRQHP },
+ { "method", smp_fetch_meth, 0, NULL, SMP_T_METH, SMP_USE_HRQHP },
+ { "path", smp_fetch_path, 0, NULL, SMP_T_STR, SMP_USE_HRQHV },
+ { "query", smp_fetch_query, 0, NULL, SMP_T_STR, SMP_USE_HRQHV },
+
+ /* HTTP protocol on the request path */
+ { "req.proto_http", smp_fetch_proto_http, 0, NULL, SMP_T_BOOL, SMP_USE_HRQHP },
+ { "req_proto_http", smp_fetch_proto_http, 0, NULL, SMP_T_BOOL, SMP_USE_HRQHP },
+
+ /* HTTP version on the request path */
+ { "req.ver", smp_fetch_rqver, 0, NULL, SMP_T_STR, SMP_USE_HRQHV },
+ { "req_ver", smp_fetch_rqver, 0, NULL, SMP_T_STR, SMP_USE_HRQHV },
+
+ { "req.body", smp_fetch_body, 0, NULL, SMP_T_BIN, SMP_USE_HRQHV },
+ { "req.body_len", smp_fetch_body_len, 0, NULL, SMP_T_SINT, SMP_USE_HRQHV },
+ { "req.body_size", smp_fetch_body_size, 0, NULL, SMP_T_SINT, SMP_USE_HRQHV },
+ { "req.body_param", smp_fetch_body_param, ARG1(0,STR), NULL, SMP_T_BIN, SMP_USE_HRQHV },
+
+ /* HTTP version on the response path */
+ { "res.ver", smp_fetch_stver, 0, NULL, SMP_T_STR, SMP_USE_HRSHV },
+ { "resp_ver", smp_fetch_stver, 0, NULL, SMP_T_STR, SMP_USE_HRSHV },
+
+ /* explicit req.{cook,hdr} are used to force the fetch direction to be request-only */
+ { "req.cook", smp_fetch_cookie, ARG1(0,STR), NULL, SMP_T_STR, SMP_USE_HRQHV },
+ { "req.cook_cnt", smp_fetch_cookie_cnt, ARG1(0,STR), NULL, SMP_T_SINT, SMP_USE_HRQHV },
+ { "req.cook_val", smp_fetch_cookie_val, ARG1(0,STR), NULL, SMP_T_SINT, SMP_USE_HRQHV },
+
+ { "req.fhdr", smp_fetch_fhdr, ARG2(0,STR,SINT), val_hdr, SMP_T_STR, SMP_USE_HRQHV },
+ { "req.fhdr_cnt", smp_fetch_fhdr_cnt, ARG1(0,STR), NULL, SMP_T_SINT, SMP_USE_HRQHV },
+ { "req.hdr", smp_fetch_hdr, ARG2(0,STR,SINT), val_hdr, SMP_T_STR, SMP_USE_HRQHV },
+ { "req.hdr_cnt", smp_fetch_hdr_cnt, ARG1(0,STR), NULL, SMP_T_SINT, SMP_USE_HRQHV },
+ { "req.hdr_ip", smp_fetch_hdr_ip, ARG2(0,STR,SINT), val_hdr, SMP_T_IPV4, SMP_USE_HRQHV },
+ { "req.hdr_names", smp_fetch_hdr_names, ARG1(0,STR), NULL, SMP_T_STR, SMP_USE_HRQHV },
+ { "req.hdr_val", smp_fetch_hdr_val, ARG2(0,STR,SINT), val_hdr, SMP_T_SINT, SMP_USE_HRQHV },
+
+ /* explicit res.{cook,hdr} are used to force the fetch direction to be response-only */
+ { "res.cook", smp_fetch_cookie, ARG1(0,STR), NULL, SMP_T_STR, SMP_USE_HRSHV },
+ { "res.cook_cnt", smp_fetch_cookie_cnt, ARG1(0,STR), NULL, SMP_T_SINT, SMP_USE_HRSHV },
+ { "res.cook_val", smp_fetch_cookie_val, ARG1(0,STR), NULL, SMP_T_SINT, SMP_USE_HRSHV },
+
+ { "res.fhdr", smp_fetch_fhdr, ARG2(0,STR,SINT), val_hdr, SMP_T_STR, SMP_USE_HRSHV },
+ { "res.fhdr_cnt", smp_fetch_fhdr_cnt, ARG1(0,STR), NULL, SMP_T_SINT, SMP_USE_HRSHV },
+ { "res.hdr", smp_fetch_hdr, ARG2(0,STR,SINT), val_hdr, SMP_T_STR, SMP_USE_HRSHV },
+ { "res.hdr_cnt", smp_fetch_hdr_cnt, ARG1(0,STR), NULL, SMP_T_SINT, SMP_USE_HRSHV },
+ { "res.hdr_ip", smp_fetch_hdr_ip, ARG2(0,STR,SINT), val_hdr, SMP_T_IPV4, SMP_USE_HRSHV },
+ { "res.hdr_names", smp_fetch_hdr_names, ARG1(0,STR), NULL, SMP_T_STR, SMP_USE_HRSHV },
+ { "res.hdr_val", smp_fetch_hdr_val, ARG2(0,STR,SINT), val_hdr, SMP_T_SINT, SMP_USE_HRSHV },
+
+ /* scook is valid only on the response and is used for ACL compatibility */
+ { "scook", smp_fetch_cookie, ARG1(0,STR), NULL, SMP_T_STR, SMP_USE_HRSHV },
+ { "scook_cnt", smp_fetch_cookie_cnt, ARG1(0,STR), NULL, SMP_T_SINT, SMP_USE_HRSHV },
+ { "scook_val", smp_fetch_cookie_val, ARG1(0,STR), NULL, SMP_T_SINT, SMP_USE_HRSHV },
+ { "set-cookie", smp_fetch_cookie, ARG1(0,STR), NULL, SMP_T_STR, SMP_USE_HRSHV }, /* deprecated */
+
+ /* shdr is valid only on the response and is used for ACL compatibility */
+ { "shdr", smp_fetch_hdr, ARG2(0,STR,SINT), val_hdr, SMP_T_STR, SMP_USE_HRSHV },
+ { "shdr_cnt", smp_fetch_hdr_cnt, ARG1(0,STR), NULL, SMP_T_SINT, SMP_USE_HRSHV },
+ { "shdr_ip", smp_fetch_hdr_ip, ARG2(0,STR,SINT), val_hdr, SMP_T_IPV4, SMP_USE_HRSHV },
+ { "shdr_val", smp_fetch_hdr_val, ARG2(0,STR,SINT), val_hdr, SMP_T_SINT, SMP_USE_HRSHV },
+
+ { "status", smp_fetch_stcode, 0, NULL, SMP_T_SINT, SMP_USE_HRSHP },
+ { "url", smp_fetch_url, 0, NULL, SMP_T_STR, SMP_USE_HRQHV },
+ { "url32", smp_fetch_url32, 0, NULL, SMP_T_SINT, SMP_USE_HRQHV },
+ { "url32+src", smp_fetch_url32_src, 0, NULL, SMP_T_BIN, SMP_USE_HRQHV },
+ { "url_ip", smp_fetch_url_ip, 0, NULL, SMP_T_IPV4, SMP_USE_HRQHV },
+ { "url_port", smp_fetch_url_port, 0, NULL, SMP_T_SINT, SMP_USE_HRQHV },
+ { "url_param", smp_fetch_url_param, ARG2(0,STR,STR), NULL, SMP_T_STR, SMP_USE_HRQHV },
+ { "urlp" , smp_fetch_url_param, ARG2(0,STR,STR), NULL, SMP_T_STR, SMP_USE_HRQHV },
+ { "urlp_val", smp_fetch_url_param_val, ARG2(0,STR,STR), NULL, SMP_T_SINT, SMP_USE_HRQHV },
+ { /* END */ },
+}};
+
+
+/************************************************************************/
+/* All supported converter keywords must be declared here. */
+/************************************************************************/
+/* Note: must not be declared <const> as its list will be overwritten */
+static struct sample_conv_kw_list sample_conv_kws = {ILH, {
+ { "http_date", sample_conv_http_date, ARG1(0,SINT), NULL, SMP_T_SINT, SMP_T_STR},
+ { "language", sample_conv_q_prefered, ARG2(1,STR,STR), NULL, SMP_T_STR, SMP_T_STR},
+ { "capture-req", smp_conv_req_capture, ARG1(1,SINT), NULL, SMP_T_STR, SMP_T_STR},
+ { "capture-res", smp_conv_res_capture, ARG1(1,SINT), NULL, SMP_T_STR, SMP_T_STR},
+ { "url_dec", sample_conv_url_dec, 0, NULL, SMP_T_STR, SMP_T_STR},
+ { /* END */ },
+}};
+
+
+/************************************************************************/
+/* All supported http-request action keywords must be declared here. */
+/************************************************************************/
+struct action_kw_list http_req_actions = {
+ .kw = {
+ { "capture", parse_http_req_capture },
+ { "set-method", parse_set_req_line },
+ { "set-path", parse_set_req_line },
+ { "set-query", parse_set_req_line },
+ { "set-uri", parse_set_req_line },
+ { NULL, NULL }
+ }
+};
+
+struct action_kw_list http_res_actions = {
+ .kw = {
+ { "capture", parse_http_res_capture },
+ { "set-status", parse_http_set_status },
+ { NULL, NULL }
+ }
+};
+
+__attribute__((constructor))
+static void __http_protocol_init(void)
+{
+ acl_register_keywords(&acl_kws);
+ sample_register_fetches(&sample_fetch_keywords);
+ sample_register_convs(&sample_conv_kws);
+ http_req_keywords_register(&http_req_actions);
+ http_res_keywords_register(&http_res_actions);
+}
+
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * AF_INET/AF_INET6 SOCK_STREAM protocol layer (tcp)
+ *
+ * Copyright 2000-2013 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <ctype.h>
+#include <errno.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <time.h>
+
+#include <sys/param.h>
+#include <sys/socket.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+#include <sys/un.h>
+
+#include <netinet/tcp.h>
+#include <netinet/in.h>
+#include <netinet/ip.h>
+
+#include <common/cfgparse.h>
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/debug.h>
+#include <common/errors.h>
+#include <common/mini-clist.h>
+#include <common/standard.h>
+#include <common/namespace.h>
+
+#include <types/global.h>
+#include <types/capture.h>
+#include <types/connection.h>
+
+#include <proto/acl.h>
+#include <proto/action.h>
+#include <proto/arg.h>
+#include <proto/channel.h>
+#include <proto/connection.h>
+#include <proto/fd.h>
+#include <proto/listener.h>
+#include <proto/log.h>
+#include <proto/port_range.h>
+#include <proto/protocol.h>
+#include <proto/proto_http.h>
+#include <proto/proto_tcp.h>
+#include <proto/proxy.h>
+#include <proto/sample.h>
+#include <proto/server.h>
+#include <proto/stream.h>
+#include <proto/stick_table.h>
+#include <proto/stream_interface.h>
+#include <proto/task.h>
+
+static int tcp_bind_listeners(struct protocol *proto, char *errmsg, int errlen);
+static int tcp_bind_listener(struct listener *listener, char *errmsg, int errlen);
+
+/* List heads of all known action keywords for "tcp-request connection",
+ * "tcp-request content" and "tcp-response content" rules.
+ */
+struct list tcp_req_conn_keywords = LIST_HEAD_INIT(tcp_req_conn_keywords);
+struct list tcp_req_cont_keywords = LIST_HEAD_INIT(tcp_req_cont_keywords);
+struct list tcp_res_cont_keywords = LIST_HEAD_INIT(tcp_res_cont_keywords);
+
+/* Note: must not be declared <const> as its list will be overwritten */
+static struct protocol proto_tcpv4 = {
+ .name = "tcpv4",
+ .sock_domain = AF_INET,
+ .sock_type = SOCK_STREAM,
+ .sock_prot = IPPROTO_TCP,
+ .sock_family = AF_INET,
+ .sock_addrlen = sizeof(struct sockaddr_in),
+ .l3_addrlen = 32/8,
+ .accept = &listener_accept,
+ .connect = tcp_connect_server,
+ .bind = tcp_bind_listener,
+ .bind_all = tcp_bind_listeners,
+ .unbind_all = unbind_all_listeners,
+ .enable_all = enable_all_listeners,
+ .get_src = tcp_get_src,
+ .get_dst = tcp_get_dst,
+ .drain = tcp_drain,
+ .pause = tcp_pause_listener,
+ .listeners = LIST_HEAD_INIT(proto_tcpv4.listeners),
+ .nb_listeners = 0,
+};
+
+/* Note: must not be declared <const> as its list will be overwritten */
+static struct protocol proto_tcpv6 = {
+ .name = "tcpv6",
+ .sock_domain = AF_INET6,
+ .sock_type = SOCK_STREAM,
+ .sock_prot = IPPROTO_TCP,
+ .sock_family = AF_INET6,
+ .sock_addrlen = sizeof(struct sockaddr_in6),
+ .l3_addrlen = 128/8,
+ .accept = &listener_accept,
+ .connect = tcp_connect_server,
+ .bind = tcp_bind_listener,
+ .bind_all = tcp_bind_listeners,
+ .unbind_all = unbind_all_listeners,
+ .enable_all = enable_all_listeners,
+ .get_src = tcp_get_src,
+ .get_dst = tcp_get_dst,
+ .drain = tcp_drain,
+ .pause = tcp_pause_listener,
+ .listeners = LIST_HEAD_INIT(proto_tcpv6.listeners),
+ .nb_listeners = 0,
+};
+
+/*
+ * Register keywords.
+ */
+void tcp_req_conn_keywords_register(struct action_kw_list *kw_list)
+{
+ LIST_ADDQ(&tcp_req_conn_keywords, &kw_list->list);
+}
+
+void tcp_req_cont_keywords_register(struct action_kw_list *kw_list)
+{
+ LIST_ADDQ(&tcp_req_cont_keywords, &kw_list->list);
+}
+
+void tcp_res_cont_keywords_register(struct action_kw_list *kw_list)
+{
+ LIST_ADDQ(&tcp_res_cont_keywords, &kw_list->list);
+}
+
+/*
+ * Return the struct action_kw associated to a keyword.
+ */
+static struct action_kw *tcp_req_conn_action(const char *kw)
+{
+ return action_lookup(&tcp_req_conn_keywords, kw);
+}
+
+static struct action_kw *tcp_req_cont_action(const char *kw)
+{
+ return action_lookup(&tcp_req_cont_keywords, kw);
+}
+
+static struct action_kw *tcp_res_cont_action(const char *kw)
+{
+ return action_lookup(&tcp_res_cont_keywords, kw);
+}
+
+/* Binds ipv4/ipv6 address <local> to socket <fd>, unless <flags> is set, in
+ * which case we try to bind <remote>. <flags> is a 2-bit field consisting of :
+ * - 0 : ignore remote address (may even be a NULL pointer)
+ * - 1 : use provided address
+ * - 2 : use provided port
+ * - 3 : use both
+ *
+ * The function supports multiple foreign binding methods :
+ * - linux_tproxy: we directly bind to the foreign address
+ * This function returns 0 when everything's OK, 1 if it could not bind to the
+ * local address, 2 if it could not bind to the foreign address.
+ */
+int tcp_bind_socket(int fd, int flags, struct sockaddr_storage *local, struct sockaddr_storage *remote)
+{
+ struct sockaddr_storage bind_addr;
+ int foreign_ok = 0;
+ int ret;
+ static int ip_transp_working = 1;
+ static int ip6_transp_working = 1;
+
+ switch (local->ss_family) {
+ case AF_INET:
+ if (flags && ip_transp_working) {
+ /* This deserves some explanation. Some platforms will support
+ * multiple combinations of certain methods, so we try the
+ * supported ones until one succeeds.
+ */
+ if (0
+#if defined(IP_TRANSPARENT)
+ || (setsockopt(fd, SOL_IP, IP_TRANSPARENT, &one, sizeof(one)) == 0)
+#endif
+#if defined(IP_FREEBIND)
+ || (setsockopt(fd, SOL_IP, IP_FREEBIND, &one, sizeof(one)) == 0)
+#endif
+#if defined(IP_BINDANY)
+ || (setsockopt(fd, IPPROTO_IP, IP_BINDANY, &one, sizeof(one)) == 0)
+#endif
+#if defined(SO_BINDANY)
+ || (setsockopt(fd, SOL_SOCKET, SO_BINDANY, &one, sizeof(one)) == 0)
+#endif
+ )
+ foreign_ok = 1;
+ else
+ ip_transp_working = 0;
+ }
+ break;
+ case AF_INET6:
+ if (flags && ip6_transp_working) {
+ if (0
+#if defined(IPV6_TRANSPARENT)
+ || (setsockopt(fd, SOL_IPV6, IPV6_TRANSPARENT, &one, sizeof(one)) == 0)
+#endif
+#if defined(IP_FREEBIND)
+ || (setsockopt(fd, SOL_IP, IP_FREEBIND, &one, sizeof(one)) == 0)
+#endif
+#if defined(IPV6_BINDANY)
+ || (setsockopt(fd, IPPROTO_IPV6, IPV6_BINDANY, &one, sizeof(one)) == 0)
+#endif
+#if defined(SO_BINDANY)
+ || (setsockopt(fd, SOL_SOCKET, SO_BINDANY, &one, sizeof(one)) == 0)
+#endif
+ )
+ foreign_ok = 1;
+ else
+ ip6_transp_working = 0;
+ }
+ break;
+ }
+
+ if (flags) {
+ memset(&bind_addr, 0, sizeof(bind_addr));
+ bind_addr.ss_family = remote->ss_family;
+ switch (remote->ss_family) {
+ case AF_INET:
+ if (flags & 1)
+ ((struct sockaddr_in *)&bind_addr)->sin_addr = ((struct sockaddr_in *)remote)->sin_addr;
+ if (flags & 2)
+ ((struct sockaddr_in *)&bind_addr)->sin_port = ((struct sockaddr_in *)remote)->sin_port;
+ break;
+ case AF_INET6:
+ if (flags & 1)
+ ((struct sockaddr_in6 *)&bind_addr)->sin6_addr = ((struct sockaddr_in6 *)remote)->sin6_addr;
+ if (flags & 2)
+ ((struct sockaddr_in6 *)&bind_addr)->sin6_port = ((struct sockaddr_in6 *)remote)->sin6_port;
+ break;
+ default:
+ /* we don't want to try to bind to an unknown address family */
+ foreign_ok = 0;
+ }
+ }
+
+ setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
+ if (foreign_ok) {
+ if (is_inet_addr(&bind_addr)) {
+ ret = bind(fd, (struct sockaddr *)&bind_addr, get_addr_len(&bind_addr));
+ if (ret < 0)
+ return 2;
+ }
+ }
+ else {
+ if (is_inet_addr(local)) {
+ ret = bind(fd, (struct sockaddr *)local, get_addr_len(local));
+ if (ret < 0)
+ return 1;
+ }
+ }
+
+ if (!flags)
+ return 0;
+
+ if (!foreign_ok)
+ /* we could not bind to a foreign address */
+ return 2;
+
+ return 0;
+}
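+
+/* Usage sketch for tcp_bind_socket() (illustrative only; this mirrors the
+ * transparent-proxy call made further below in tcp_connect_server()):
+ *
+ *     // flags=3: spoof both the client's address and port
+ *     if (tcp_bind_socket(fd, 3, &src->source_addr, &conn->addr.from) != 0)
+ *             conn->err_code = CO_ER_CANT_BIND;
+ *
+ * With flags=1 only the client's address is reused and the kernel (or the
+ * configured source port range) picks the local port.
+ */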
+
+static int create_server_socket(struct connection *conn)
+{
+ const struct netns_entry *ns = NULL;
+
+#ifdef CONFIG_HAP_NS
+ if (objt_server(conn->target)) {
+ if (__objt_server(conn->target)->flags & SRV_F_USE_NS_FROM_PP)
+ ns = conn->proxy_netns;
+ else
+ ns = __objt_server(conn->target)->netns;
+ }
+#endif
+ return my_socketat(ns, conn->addr.to.ss_family, SOCK_STREAM, IPPROTO_TCP);
+}
+
+/*
+ * This function initiates a TCP connection establishment to the target assigned
+ * to connection <conn> using (si->{target,addr.to}). A source address may be
+ * pointed to by conn->addr.from in case of transparent proxying. Normal source
+ * bind addresses are still determined locally (due to the possible need of a
+ * source port). conn->target may point either to a valid server or to a
+ * backend, depending on its type; only OBJ_TYPE_PROXY and OBJ_TYPE_SERVER are
+ * supported. The <data> parameter is a boolean indicating whether there are data
+ * waiting to be sent or not, in order to adjust data write polling and, on
+ * some platforms, the ability to avoid an empty initial ACK. The <delack> argument
+ * allows the caller to force using a delayed ACK when establishing the connection :
+ * - 0 = no delayed ACK unless data are advertised and backend has tcp-smart-connect
+ * - 1 = delayed ACK if backend has tcp-smart-connect, regardless of data
+ * - 2 = delayed ACK regardless of backend options
+ *
+ * Note that a pending send_proxy message accounts for data.
+ *
+ * It can return one of :
+ * - SF_ERR_NONE if everything's OK
+ * - SF_ERR_SRVTO if there are no more servers
+ * - SF_ERR_SRVCL if the connection was refused by the server
+ * - SF_ERR_PRXCOND if the connection has been limited by the proxy (maxconn)
+ * - SF_ERR_RESOURCE if a system resource is lacking (eg: fd limits, ports, ...)
+ * - SF_ERR_INTERNAL for any other purely internal errors
+ * Additionally, in the case of SF_ERR_RESOURCE, an emergency log will be emitted.
+ *
+ * The connection's fd is inserted only when SF_ERR_NONE is returned, otherwise
+ * it's invalid and the caller has nothing to do.
+ */
+
+int tcp_connect_server(struct connection *conn, int data, int delack)
+{
+ int fd;
+ struct server *srv;
+ struct proxy *be;
+ struct conn_src *src;
+
+ conn->flags = CO_FL_WAIT_L4_CONN; /* connection in progress */
+
+ switch (obj_type(conn->target)) {
+ case OBJ_TYPE_PROXY:
+ be = objt_proxy(conn->target);
+ srv = NULL;
+ break;
+ case OBJ_TYPE_SERVER:
+ srv = objt_server(conn->target);
+ be = srv->proxy;
+ break;
+ default:
+ conn->flags |= CO_FL_ERROR;
+ return SF_ERR_INTERNAL;
+ }
+
+ fd = conn->t.sock.fd = create_server_socket(conn);
+
+ if (fd == -1) {
+ qfprintf(stderr, "Cannot get a server socket.\n");
+
+ if (errno == ENFILE) {
+ conn->err_code = CO_ER_SYS_FDLIM;
+ send_log(be, LOG_EMERG,
+ "Proxy %s reached system FD limit at %d. Please check system tunables.\n",
+ be->id, maxfd);
+ }
+ else if (errno == EMFILE) {
+ conn->err_code = CO_ER_PROC_FDLIM;
+ send_log(be, LOG_EMERG,
+ "Proxy %s reached process FD limit at %d. Please check 'ulimit-n' and restart.\n",
+ be->id, maxfd);
+ }
+ else if (errno == ENOBUFS || errno == ENOMEM) {
+ conn->err_code = CO_ER_SYS_MEMLIM;
+ send_log(be, LOG_EMERG,
+ "Proxy %s reached system memory limit at %d sockets. Please check system tunables.\n",
+ be->id, maxfd);
+ }
+ else if (errno == EAFNOSUPPORT || errno == EPROTONOSUPPORT) {
+ conn->err_code = CO_ER_NOPROTO;
+ }
+ else
+ conn->err_code = CO_ER_SOCK_ERR;
+
+ /* this is a resource error */
+ conn->flags |= CO_FL_ERROR;
+ return SF_ERR_RESOURCE;
+ }
+
+ if (fd >= global.maxsock) {
+ /* the configured socket limit was reached: report it so the
+ * administrator can raise the -n argument, and give up.
+ */
+ Alert("socket(): not enough free sockets. Raise -n argument. Giving up.\n");
+ close(fd);
+ conn->err_code = CO_ER_CONF_FDLIM;
+ conn->flags |= CO_FL_ERROR;
+ return SF_ERR_PRXCOND; /* it is a configuration limit */
+ }
+
+ if ((fcntl(fd, F_SETFL, O_NONBLOCK)==-1) ||
+ (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) == -1)) {
+ qfprintf(stderr,"Cannot set client socket to non blocking mode.\n");
+ close(fd);
+ conn->err_code = CO_ER_SOCK_ERR;
+ conn->flags |= CO_FL_ERROR;
+ return SF_ERR_INTERNAL;
+ }
+
+ if (be->options & PR_O_TCP_SRV_KA)
+ setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &one, sizeof(one));
+
+ /* allow specific binding :
+ * - server-specific at first
+ * - proxy-specific next
+ */
+ if (srv && srv->conn_src.opts & CO_SRC_BIND)
+ src = &srv->conn_src;
+ else if (be->conn_src.opts & CO_SRC_BIND)
+ src = &be->conn_src;
+ else
+ src = NULL;
+
+ if (src) {
+ int ret, flags = 0;
+
+ if (is_inet_addr(&conn->addr.from)) {
+ switch (src->opts & CO_SRC_TPROXY_MASK) {
+ case CO_SRC_TPROXY_CLI:
+ conn->flags |= CO_FL_PRIVATE;
+ /* fall through */
+ case CO_SRC_TPROXY_ADDR:
+ flags = 3;
+ break;
+ case CO_SRC_TPROXY_CIP:
+ case CO_SRC_TPROXY_DYN:
+ conn->flags |= CO_FL_PRIVATE;
+ flags = 1;
+ break;
+ }
+ }
+
+#ifdef SO_BINDTODEVICE
+ /* Note: this might fail if not CAP_NET_RAW */
+ if (src->iface_name)
+ setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE, src->iface_name, src->iface_len + 1);
+#endif
+
+ if (src->sport_range) {
+ int attempts = 10; /* should be more than enough to find a spare port */
+ struct sockaddr_storage sa;
+
+ ret = 1;
+ sa = src->source_addr;
+
+ do {
+ /* note: in case of retry, we may have to release a previously
+ * allocated port, hence this loop's construct.
+ */
+ port_range_release_port(fdinfo[fd].port_range, fdinfo[fd].local_port);
+ fdinfo[fd].port_range = NULL;
+
+ if (!attempts)
+ break;
+ attempts--;
+
+ fdinfo[fd].local_port = port_range_alloc_port(src->sport_range);
+ if (!fdinfo[fd].local_port) {
+ conn->err_code = CO_ER_PORT_RANGE;
+ break;
+ }
+
+ fdinfo[fd].port_range = src->sport_range;
+ set_host_port(&sa, fdinfo[fd].local_port);
+
+ ret = tcp_bind_socket(fd, flags, &sa, &conn->addr.from);
+ if (ret != 0)
+ conn->err_code = CO_ER_CANT_BIND;
+ } while (ret != 0); /* binding NOK */
+ }
+ else {
+ ret = tcp_bind_socket(fd, flags, &src->source_addr, &conn->addr.from);
+ if (ret != 0)
+ conn->err_code = CO_ER_CANT_BIND;
+ }
+
+ if (unlikely(ret != 0)) {
+ port_range_release_port(fdinfo[fd].port_range, fdinfo[fd].local_port);
+ fdinfo[fd].port_range = NULL;
+ close(fd);
+
+ if (ret == 1) {
+ Alert("Cannot bind to source address before connect() for backend %s. Aborting.\n",
+ be->id);
+ send_log(be, LOG_EMERG,
+ "Cannot bind to source address before connect() for backend %s.\n",
+ be->id);
+ } else {
+ Alert("Cannot bind to tproxy source address before connect() for backend %s. Aborting.\n",
+ be->id);
+ send_log(be, LOG_EMERG,
+ "Cannot bind to tproxy source address before connect() for backend %s.\n",
+ be->id);
+ }
+ conn->flags |= CO_FL_ERROR;
+ return SF_ERR_RESOURCE;
+ }
+ }
+
+#if defined(TCP_QUICKACK)
+ /* disabling tcp quick ack now allows the first request to leave the
+ * machine with the first ACK. We only do this if there are pending
+ * data in the buffer.
+ */
+ if (delack == 2 || ((delack || data || conn->send_proxy_ofs) && (be->options2 & PR_O2_SMARTCON)))
+ setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &zero, sizeof(zero));
+#endif
+
+#ifdef TCP_USER_TIMEOUT
+ /* there is not much more we can do here when it fails, it's still minor */
+ if (srv && srv->tcp_ut)
+ setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT, &srv->tcp_ut, sizeof(srv->tcp_ut));
+#endif
+ if (global.tune.server_sndbuf)
+ setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &global.tune.server_sndbuf, sizeof(global.tune.server_sndbuf));
+
+ if (global.tune.server_rcvbuf)
+ setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &global.tune.server_rcvbuf, sizeof(global.tune.server_rcvbuf));
+
+ if ((connect(fd, (struct sockaddr *)&conn->addr.to, get_addr_len(&conn->addr.to)) == -1) &&
+ (errno != EINPROGRESS) && (errno != EALREADY) && (errno != EISCONN)) {
+
+ if (errno == EAGAIN || errno == EADDRINUSE || errno == EADDRNOTAVAIL) {
+ char *msg;
+ if (errno == EAGAIN || errno == EADDRNOTAVAIL) {
+ msg = "no free ports";
+ conn->err_code = CO_ER_FREE_PORTS;
+ }
+ else {
+ msg = "local address already in use";
+ conn->err_code = CO_ER_ADDR_INUSE;
+ }
+
+ qfprintf(stderr,"Connect() failed for backend %s: %s.\n", be->id, msg);
+ port_range_release_port(fdinfo[fd].port_range, fdinfo[fd].local_port);
+ fdinfo[fd].port_range = NULL;
+ close(fd);
+ send_log(be, LOG_ERR, "Connect() failed for backend %s: %s.\n", be->id, msg);
+ conn->flags |= CO_FL_ERROR;
+ return SF_ERR_RESOURCE;
+ } else if (errno == ETIMEDOUT) {
+ //qfprintf(stderr,"Connect(): ETIMEDOUT");
+ port_range_release_port(fdinfo[fd].port_range, fdinfo[fd].local_port);
+ fdinfo[fd].port_range = NULL;
+ close(fd);
+ conn->err_code = CO_ER_SOCK_ERR;
+ conn->flags |= CO_FL_ERROR;
+ return SF_ERR_SRVTO;
+ } else {
+ // (errno == ECONNREFUSED || errno == ENETUNREACH || errno == EACCES || errno == EPERM)
+ //qfprintf(stderr,"Connect(): %d", errno);
+ port_range_release_port(fdinfo[fd].port_range, fdinfo[fd].local_port);
+ fdinfo[fd].port_range = NULL;
+ close(fd);
+ conn->err_code = CO_ER_SOCK_ERR;
+ conn->flags |= CO_FL_ERROR;
+ return SF_ERR_SRVCL;
+ }
+ }
+
+ conn->flags |= CO_FL_ADDR_TO_SET;
+
+ /* Prepare to send a few handshakes related to the on-wire protocol. */
+ if (conn->send_proxy_ofs)
+ conn->flags |= CO_FL_SEND_PROXY;
+
+ conn_ctrl_init(conn); /* registers the FD */
+ fdtab[fd].linger_risk = 1; /* close hard if needed */
+ conn_sock_want_send(conn); /* for connect status */
+
+ if (conn_xprt_init(conn) < 0) {
+ conn_force_close(conn);
+ conn->flags |= CO_FL_ERROR;
+ return SF_ERR_RESOURCE;
+ }
+
+ if (data)
+ conn_data_want_send(conn); /* prepare to send data if any */
+
+ return SF_ERR_NONE; /* connection is OK */
+}
+
+
+/*
+ * Retrieves the source address for the socket <fd>, with <dir> indicating
+ * if we're a listener (=0) or an initiator (!=0). It returns 0 in case of
+ * success, -1 in case of error. The socket's source address is stored in
+ * <sa> for <salen> bytes.
+ */
+int tcp_get_src(int fd, struct sockaddr *sa, socklen_t salen, int dir)
+{
+ if (dir)
+ return getsockname(fd, sa, &salen);
+ else
+ return getpeername(fd, sa, &salen);
+}
+
+
+/*
+ * Retrieves the original destination address for the socket <fd>, with <dir>
+ * indicating if we're a listener (=0) or an initiator (!=0). In the case of a
+ * listener, if the original destination address was translated, the original
+ * address is retrieved. It returns 0 in case of success, -1 in case of error.
+ * The socket's destination address is stored in <sa> for <salen> bytes.
+ */
+int tcp_get_dst(int fd, struct sockaddr *sa, socklen_t salen, int dir)
+{
+ if (dir)
+ return getpeername(fd, sa, &salen);
+ else {
+ int ret = getsockname(fd, sa, &salen);
+
+ if (ret < 0)
+ return ret;
+
+#if defined(TPROXY) && defined(SO_ORIGINAL_DST)
+ /* For TPROXY and Netfilter's NAT, we can retrieve the original
+ * IPv4 address before DNAT/REDIRECT. We must not do that with
+ * other families because v6-mapped IPv4 addresses are still
+ * reported as v4.
+ */
+ if (((struct sockaddr_storage *)sa)->ss_family == AF_INET
+ && getsockopt(fd, SOL_IP, SO_ORIGINAL_DST, sa, &salen) == 0)
+ return 0;
+#endif
+ return ret;
+ }
+}
+
+/* Tries to drain any pending incoming data from the socket to reach the
+ * receive shutdown. Returns positive if the shutdown was found, negative
+ * if EAGAIN was hit, otherwise zero. This is useful to decide whether we
+ * can close a connection cleanly or we must kill it hard.
+ */
+int tcp_drain(int fd)
+{
+ int turns = 2;
+ int len;
+
+ while (turns) {
+#ifdef MSG_TRUNC_CLEARS_INPUT
+ len = recv(fd, NULL, INT_MAX, MSG_DONTWAIT | MSG_NOSIGNAL | MSG_TRUNC);
+ if (len == -1 && errno == EFAULT)
+#endif
+ len = recv(fd, trash.str, trash.size, MSG_DONTWAIT | MSG_NOSIGNAL);
+
+ if (len == 0) {
+ /* cool, shutdown received */
+ fdtab[fd].linger_risk = 0;
+ return 1;
+ }
+
+ if (len < 0) {
+ if (errno == EAGAIN) {
+ /* connection not closed yet */
+ fd_cant_recv(fd);
+ return -1;
+ }
+ if (errno == EINTR) /* oops, try again */
+ continue;
+ /* other errors indicate a dead connection, fine. */
+ fdtab[fd].linger_risk = 0;
+ return 1;
+ }
+ /* OK we read some data, let's try again once */
+ turns--;
+ }
+ /* some data are still present, give up */
+ return 0;
+}
+
+/* This is the callback which is set when a connection establishment is pending
+ * and we have nothing to send. It updates the FD polling status. It returns 0
+ * if it fails in a fatal way or needs to poll to go further, otherwise it
+ * returns non-zero and removes the CO_FL_WAIT_L4_CONN flag from the connection's
+ * flags. In case of error, it sets CO_FL_ERROR and leaves the error code in
+ * errno. The error checking is done in two passes in order to limit the number
+ * of syscalls in the normal case :
+ * - if POLL_ERR was reported by the poller, we check for a pending error on
+ * the socket before proceeding. If found, it's assigned to errno so that
+ * upper layers can see it.
+ * - otherwise connect() is used to check the connection state again, since
+ * the getsockopt return cannot reliably be used to know if the connection
+ * is still pending or ready. This one may often return an error as well,
+ * since we don't always have POLL_ERR (eg: OSX or cached events).
+ */
+int tcp_connect_probe(struct connection *conn)
+{
+ int fd = conn->t.sock.fd;
+ socklen_t lskerr;
+ int skerr;
+
+ if (conn->flags & CO_FL_ERROR)
+ return 0;
+
+ if (!conn_ctrl_ready(conn))
+ return 0;
+
+ if (!(conn->flags & CO_FL_WAIT_L4_CONN))
+ return 1; /* strange we were called while ready */
+
+ if (!fd_send_ready(fd))
+ return 0;
+
+ /* we might be the first witness of FD_POLL_ERR. Note that FD_POLL_HUP
+ * without FD_POLL_IN also indicates a hangup without input data meaning
+ * there was no connection.
+ */
+ if (fdtab[fd].ev & FD_POLL_ERR ||
+ (fdtab[fd].ev & (FD_POLL_IN|FD_POLL_HUP)) == FD_POLL_HUP) {
+ skerr = 0;
+ lskerr = sizeof(skerr);
+ getsockopt(fd, SOL_SOCKET, SO_ERROR, &skerr, &lskerr);
+ errno = skerr;
+ if (errno == EAGAIN)
+ errno = 0;
+ if (errno)
+ goto out_error;
+ }
+
+ /* Use connect() to check the state of the socket. This has the
+ * advantage of giving us the following info :
+ * - error
+ * - connecting (EALREADY, EINPROGRESS)
+ * - connected (EISCONN, 0)
+ */
+ if (connect(fd, (struct sockaddr *)&conn->addr.to, get_addr_len(&conn->addr.to)) < 0) {
+ if (errno == EALREADY || errno == EINPROGRESS) {
+ __conn_sock_stop_recv(conn);
+ fd_cant_send(fd);
+ return 0;
+ }
+
+ if (errno && errno != EISCONN)
+ goto out_error;
+
+ /* otherwise we're connected */
+ }
+
+ /* The FD is ready now, we'll mark the connection as complete and
+ * forward the event to the transport layer which will notify the
+ * data layer.
+ */
+ conn->flags &= ~CO_FL_WAIT_L4_CONN;
+ return 1;
+
+ out_error:
+ /* Write error on the file descriptor. Report it to the connection
+ * and disable polling on this FD.
+ */
+ fdtab[fd].linger_risk = 0;
+ conn->flags |= CO_FL_ERROR | CO_FL_SOCK_RD_SH | CO_FL_SOCK_WR_SH;
+ __conn_sock_stop_both(conn);
+ return 0;
+}
+
+
+/* This function tries to bind a TCPv4/v6 listener. It may return a warning or
+ * an error message in <errmsg> if the message is at most <errlen> bytes long
+ * (including '\0'). Note that <errmsg> may be NULL if <errlen> is also zero.
+ * The return value is composed from ERR_ABORT, ERR_WARN,
+ * ERR_ALERT, ERR_RETRYABLE and ERR_FATAL. ERR_NONE indicates that everything
+ * was alright and that no message was returned. ERR_RETRYABLE means that an
+ * error occurred but that it may vanish after a retry (eg: port in use), and
+ * ERR_FATAL indicates a non-fixable error. ERR_WARN and ERR_ALERT do not alter
+ * the meaning of the error, but just indicate that a message is present which
+ * should be displayed with the respective level. Last, ERR_ABORT indicates
+ * that it's pointless to try to start other listeners. No error message is
+ * returned if <errlen> is zero.
+ */
+int tcp_bind_listener(struct listener *listener, char *errmsg, int errlen)
+{
+ __label__ tcp_return, tcp_close_return;
+ int fd, err;
+ int ext, ready;
+ socklen_t ready_len;
+ const char *msg = NULL;
+
+ /* ensure we never return garbage */
+ if (errlen)
+ *errmsg = 0;
+
+ if (listener->state != LI_ASSIGNED)
+ return ERR_NONE; /* already bound */
+
+ err = ERR_NONE;
+
+ /* if the listener already has an fd assigned, then we were offered the
+ * fd by an external process (most likely the parent), and we don't want
+ * to create a new socket. However we still want to set a few flags on
+ * the socket.
+ */
+ fd = listener->fd;
+ ext = (fd >= 0);
+
+ if (!ext) {
+ fd = my_socketat(listener->netns, listener->addr.ss_family, SOCK_STREAM, IPPROTO_TCP);
+
+ if (fd == -1) {
+ err |= ERR_RETRYABLE | ERR_ALERT;
+ msg = "cannot create listening socket";
+ goto tcp_return;
+ }
+ }
+
+ if (fd >= global.maxsock) {
+ err |= ERR_FATAL | ERR_ABORT | ERR_ALERT;
+ msg = "not enough free sockets (raise '-n' parameter)";
+ goto tcp_close_return;
+ }
+
+ if (fcntl(fd, F_SETFL, O_NONBLOCK) == -1) {
+ err |= ERR_FATAL | ERR_ALERT;
+ msg = "cannot make socket non-blocking";
+ goto tcp_close_return;
+ }
+
+ if (!ext && setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one)) == -1) {
+ /* not fatal but should be reported */
+ msg = "cannot do so_reuseaddr";
+ err |= ERR_ALERT;
+ }
+
+ if (listener->options & LI_O_NOLINGER)
+ setsockopt(fd, SOL_SOCKET, SO_LINGER, &nolinger, sizeof(struct linger));
+
+#ifdef SO_REUSEPORT
+ /* OpenBSD supports this. As it's present in old libc versions of Linux,
+ * it might return an error that we will silently ignore.
+ */
+ if (!ext)
+ setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
+#endif
+
+ if (!ext && (listener->options & LI_O_FOREIGN)) {
+ switch (listener->addr.ss_family) {
+ case AF_INET:
+ if (1
+#if defined(IP_TRANSPARENT)
+ && (setsockopt(fd, SOL_IP, IP_TRANSPARENT, &one, sizeof(one)) == -1)
+#endif
+#if defined(IP_FREEBIND)
+ && (setsockopt(fd, SOL_IP, IP_FREEBIND, &one, sizeof(one)) == -1)
+#endif
+#if defined(IP_BINDANY)
+ && (setsockopt(fd, IPPROTO_IP, IP_BINDANY, &one, sizeof(one)) == -1)
+#endif
+#if defined(SO_BINDANY)
+ && (setsockopt(fd, SOL_SOCKET, SO_BINDANY, &one, sizeof(one)) == -1)
+#endif
+ ) {
+ msg = "cannot make listening socket transparent";
+ err |= ERR_ALERT;
+ }
+ break;
+ case AF_INET6:
+ if (1
+#if defined(IPV6_TRANSPARENT)
+ && (setsockopt(fd, SOL_IPV6, IPV6_TRANSPARENT, &one, sizeof(one)) == -1)
+#endif
+#if defined(IP_FREEBIND)
+ && (setsockopt(fd, SOL_IP, IP_FREEBIND, &one, sizeof(one)) == -1)
+#endif
+#if defined(IPV6_BINDANY)
+ && (setsockopt(fd, IPPROTO_IPV6, IPV6_BINDANY, &one, sizeof(one)) == -1)
+#endif
+#if defined(SO_BINDANY)
+ && (setsockopt(fd, SOL_SOCKET, SO_BINDANY, &one, sizeof(one)) == -1)
+#endif
+ ) {
+ msg = "cannot make listening socket transparent";
+ err |= ERR_ALERT;
+ }
+ break;
+ }
+ }
+
+#ifdef SO_BINDTODEVICE
+	/* Note: this might fail if the process lacks CAP_NET_RAW */
+ if (!ext && listener->interface) {
+ if (setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE,
+ listener->interface, strlen(listener->interface) + 1) == -1) {
+ msg = "cannot bind listener to device";
+ err |= ERR_WARN;
+ }
+ }
+#endif
+#if defined(TCP_MAXSEG)
+ if (listener->maxseg > 0) {
+ if (setsockopt(fd, IPPROTO_TCP, TCP_MAXSEG,
+ &listener->maxseg, sizeof(listener->maxseg)) == -1) {
+ msg = "cannot set MSS";
+ err |= ERR_WARN;
+ }
+ }
+#endif
+#if defined(TCP_USER_TIMEOUT)
+ if (listener->tcp_ut) {
+ if (setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT,
+ &listener->tcp_ut, sizeof(listener->tcp_ut)) == -1) {
+ msg = "cannot set TCP User Timeout";
+ err |= ERR_WARN;
+ }
+ }
+#endif
+#if defined(TCP_DEFER_ACCEPT)
+ if (listener->options & LI_O_DEF_ACCEPT) {
+ /* defer accept by up to one second */
+ int accept_delay = 1;
+ if (setsockopt(fd, IPPROTO_TCP, TCP_DEFER_ACCEPT, &accept_delay, sizeof(accept_delay)) == -1) {
+ msg = "cannot enable DEFER_ACCEPT";
+ err |= ERR_WARN;
+ }
+ }
+#endif
+#if defined(TCP_FASTOPEN)
+ if (listener->options & LI_O_TCP_FO) {
+ /* TFO needs a queue length, let's use the configured backlog */
+ int qlen = listener->backlog ? listener->backlog : listener->maxconn;
+ if (setsockopt(fd, IPPROTO_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen)) == -1) {
+ msg = "cannot enable TCP_FASTOPEN";
+ err |= ERR_WARN;
+ }
+ }
+#endif
+#if defined(IPV6_V6ONLY)
+ if (listener->options & LI_O_V6ONLY)
+ setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &one, sizeof(one));
+ else if (listener->options & LI_O_V4V6)
+ setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &zero, sizeof(zero));
+#endif
+
+ if (!ext && bind(fd, (struct sockaddr *)&listener->addr, listener->proto->sock_addrlen) == -1) {
+ err |= ERR_RETRYABLE | ERR_ALERT;
+ msg = "cannot bind socket";
+ goto tcp_close_return;
+ }
+
+ ready = 0;
+ ready_len = sizeof(ready);
+ if (getsockopt(fd, SOL_SOCKET, SO_ACCEPTCONN, &ready, &ready_len) == -1)
+ ready = 0;
+
+ if (!(ext && ready) && /* only listen if not already done by external process */
+ listen(fd, listener->backlog ? listener->backlog : listener->maxconn) == -1) {
+ err |= ERR_RETRYABLE | ERR_ALERT;
+ msg = "cannot listen to socket";
+ goto tcp_close_return;
+ }
+
+#if defined(TCP_QUICKACK)
+ if (listener->options & LI_O_NOQUICKACK)
+ setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &zero, sizeof(zero));
+#endif
+
+ /* the socket is ready */
+ listener->fd = fd;
+ listener->state = LI_LISTEN;
+
+ fdtab[fd].owner = listener; /* reference the listener instead of a task */
+ fdtab[fd].iocb = listener->proto->accept;
+ fd_insert(fd);
+
+ tcp_return:
+ if (msg && errlen) {
+ char pn[INET6_ADDRSTRLEN];
+
+ addr_to_str(&listener->addr, pn, sizeof(pn));
+ snprintf(errmsg, errlen, "%s [%s:%d]", msg, pn, get_host_port(&listener->addr));
+ }
+ return err;
+
+ tcp_close_return:
+ close(fd);
+ goto tcp_return;
+}
+
+/* This function creates all TCP sockets bound to the protocol entry <proto>.
+ * It is intended to be used as the protocol's bind_all() function.
+ * The sockets will be registered but not added to any fd_set, in order not to
+ * lose them across the fork(). A call to enable_all_listeners() is needed
+ * to complete initialization. The return value is composed from ERR_*.
+ */
+static int tcp_bind_listeners(struct protocol *proto, char *errmsg, int errlen)
+{
+ struct listener *listener;
+ int err = ERR_NONE;
+
+ list_for_each_entry(listener, &proto->listeners, proto_list) {
+ err |= tcp_bind_listener(listener, errmsg, errlen);
+ if (err & ERR_ABORT)
+ break;
+ }
+
+ return err;
+}
+
+/* Add listener to the list of tcpv4 listeners. The listener's state
+ * is automatically updated from LI_INIT to LI_ASSIGNED. The number of
+ * listeners is updated. This is the function to use to add a new listener.
+ */
+void tcpv4_add_listener(struct listener *listener)
+{
+ if (listener->state != LI_INIT)
+ return;
+ listener->state = LI_ASSIGNED;
+ listener->proto = &proto_tcpv4;
+ LIST_ADDQ(&proto_tcpv4.listeners, &listener->proto_list);
+ proto_tcpv4.nb_listeners++;
+}
+
+/* Add listener to the list of tcpv6 listeners. The listener's state
+ * is automatically updated from LI_INIT to LI_ASSIGNED. The number of
+ * listeners is updated. This is the function to use to add a new listener.
+ */
+void tcpv6_add_listener(struct listener *listener)
+{
+ if (listener->state != LI_INIT)
+ return;
+ listener->state = LI_ASSIGNED;
+ listener->proto = &proto_tcpv6;
+ LIST_ADDQ(&proto_tcpv6.listeners, &listener->proto_list);
+ proto_tcpv6.nb_listeners++;
+}
+
+/* Pause a listener. Returns < 0 in case of failure, 0 if the listener
+ * was totally stopped, or > 0 if correctly paused.
+ */
+int tcp_pause_listener(struct listener *l)
+{
+ if (shutdown(l->fd, SHUT_WR) != 0)
+ return -1; /* Solaris dies here */
+
+ if (listen(l->fd, l->backlog ? l->backlog : l->maxconn) != 0)
+ return -1; /* OpenBSD dies here */
+
+ if (shutdown(l->fd, SHUT_RD) != 0)
+ return -1; /* should always be OK */
+ return 1;
+}
+
+/* This function performs the TCP request analysis on the current request. It
+ * returns 1 if the processing can continue on next analysers, or zero if it
+ * needs more data, encounters an error, or wants to immediately abort the
+ * request. It relies on buffer flags, and updates s->req->analysers. The
+ * function may be called for frontend rules and backend rules. It only relies
+ * on the backend pointer so this works for both cases.
+ */
+int tcp_inspect_request(struct stream *s, struct channel *req, int an_bit)
+{
+ struct session *sess = s->sess;
+ struct act_rule *rule;
+ struct stksess *ts;
+ struct stktable *t;
+ int partial;
+ int act_flags = 0;
+
+ DPRINTF(stderr,"[%u] %s: stream=%p b=%p, exp(r,w)=%u,%u bf=%08x bh=%d analysers=%02x\n",
+ now_ms, __FUNCTION__,
+ s,
+ req,
+ req->rex, req->wex,
+ req->flags,
+ req->buf->i,
+ req->analysers);
+
+ /* We don't know whether we have enough data, so must proceed
+ * this way :
+ * - iterate through all rules in their declaration order
+ * - if one rule returns MISS, it means the inspect delay is
+ * not over yet, then return immediately, otherwise consider
+ * it as a non-match.
+ * - if one rule returns OK, then return OK
+ * - if one rule returns KO, then return KO
+ */
+
+ if ((req->flags & CF_SHUTR) || buffer_full(req->buf, global.tune.maxrewrite) ||
+ !s->be->tcp_req.inspect_delay || tick_is_expired(req->analyse_exp, now_ms))
+ partial = SMP_OPT_FINAL;
+ else
+ partial = 0;
+
+	/* If the current_rule_list matches the executed rule list, we are in
+	 * a resume condition. If a resume is needed it is always in the action
+ * and never in the ACL or converters. In this case, we initialise the
+ * current rule, and go to the action execution point.
+ */
+ if (s->current_rule) {
+ rule = s->current_rule;
+ s->current_rule = NULL;
+ if (s->current_rule_list == &s->be->tcp_req.inspect_rules)
+ goto resume_execution;
+ }
+ s->current_rule_list = &s->be->tcp_req.inspect_rules;
+
+ list_for_each_entry(rule, &s->be->tcp_req.inspect_rules, list) {
+ enum acl_test_res ret = ACL_TEST_PASS;
+
+ if (rule->cond) {
+ ret = acl_exec_cond(rule->cond, s->be, sess, s, SMP_OPT_DIR_REQ | partial);
+ if (ret == ACL_TEST_MISS)
+ goto missing_data;
+
+ ret = acl_pass(ret);
+ if (rule->cond->pol == ACL_COND_UNLESS)
+ ret = !ret;
+ }
+
+ if (ret) {
+ act_flags |= ACT_FLAG_FIRST;
+resume_execution:
+ /* we have a matching rule. */
+ if (rule->action == ACT_ACTION_ALLOW) {
+ break;
+ }
+ else if (rule->action == ACT_ACTION_DENY) {
+ channel_abort(req);
+ channel_abort(&s->res);
+ req->analysers = 0;
+
+ s->be->be_counters.denied_req++;
+ sess->fe->fe_counters.denied_req++;
+ if (sess->listener->counters)
+ sess->listener->counters->denied_req++;
+
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_PRXCOND;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_R;
+ return 0;
+ }
+ else if (rule->action >= ACT_ACTION_TRK_SC0 && rule->action <= ACT_ACTION_TRK_SCMAX) {
+				/* Note: only the first valid tracking parameter of each
+				 * counter applies; subsequent ones are silently ignored.
+ */
+ struct stktable_key *key;
+ struct sample smp;
+
+ if (stkctr_entry(&s->stkctr[tcp_trk_idx(rule->action)]))
+ continue;
+
+ t = rule->arg.trk_ctr.table.t;
+ key = stktable_fetch_key(t, s->be, sess, s, SMP_OPT_DIR_REQ | partial, rule->arg.trk_ctr.expr, &smp);
+
+ if ((smp.flags & SMP_F_MAY_CHANGE) && !(partial & SMP_OPT_FINAL))
+ goto missing_data; /* key might appear later */
+
+ if (key && (ts = stktable_get_entry(t, key))) {
+ stream_track_stkctr(&s->stkctr[tcp_trk_idx(rule->action)], t, ts);
+ stkctr_set_flags(&s->stkctr[tcp_trk_idx(rule->action)], STKCTR_TRACK_CONTENT);
+ if (sess->fe != s->be)
+ stkctr_set_flags(&s->stkctr[tcp_trk_idx(rule->action)], STKCTR_TRACK_BACKEND);
+ }
+ }
+ else if (rule->action == ACT_TCP_CAPTURE) {
+ struct sample *key;
+ struct cap_hdr *h = rule->arg.cap.hdr;
+ char **cap = s->req_cap;
+ int len;
+
+ key = sample_fetch_as_type(s->be, sess, s, SMP_OPT_DIR_REQ | partial, rule->arg.cap.expr, SMP_T_STR);
+ if (!key)
+ continue;
+
+ if (key->flags & SMP_F_MAY_CHANGE)
+ goto missing_data;
+
+ if (cap[h->index] == NULL)
+ cap[h->index] = pool_alloc2(h->pool);
+
+ if (cap[h->index] == NULL) /* no more capture memory */
+ continue;
+
+ len = key->data.u.str.len;
+ if (len > h->len)
+ len = h->len;
+
+ memcpy(cap[h->index], key->data.u.str.str, len);
+ cap[h->index][len] = 0;
+ }
+ else {
+ /* Custom keywords. */
+ if (!rule->action_ptr)
+ continue;
+
+ if (partial & SMP_OPT_FINAL)
+ act_flags |= ACT_FLAG_FINAL;
+
+ switch (rule->action_ptr(rule, s->be, s->sess, s, act_flags)) {
+ case ACT_RET_ERR:
+ case ACT_RET_CONT:
+ continue;
+ case ACT_RET_STOP:
+ break;
+ case ACT_RET_YIELD:
+ s->current_rule = rule;
+ goto missing_data;
+ }
+ break; /* ACT_RET_STOP */
+ }
+ }
+ }
+
+ /* if we get there, it means we have no rule which matches, or
+ * we have an explicit accept, so we apply the default accept.
+ */
+ req->analysers &= ~an_bit;
+ req->analyse_exp = TICK_ETERNITY;
+ return 1;
+
+ missing_data:
+ channel_dont_connect(req);
+ /* just set the request timeout once at the beginning of the request */
+ if (!tick_isset(req->analyse_exp) && s->be->tcp_req.inspect_delay)
+ req->analyse_exp = tick_add(now_ms, s->be->tcp_req.inspect_delay);
+ return 0;
+
+}
+
+/* This function performs the TCP response analysis on the current response. It
+ * returns 1 if the processing can continue on next analysers, or zero if it
+ * needs more data, encounters an error, or wants to immediately abort the
+ * response. It relies on buffer flags, and updates s->rep->analysers. The
+ * function may be called for backend rules.
+ */
+int tcp_inspect_response(struct stream *s, struct channel *rep, int an_bit)
+{
+ struct session *sess = s->sess;
+ struct act_rule *rule;
+ int partial;
+ int act_flags = 0;
+
+ DPRINTF(stderr,"[%u] %s: stream=%p b=%p, exp(r,w)=%u,%u bf=%08x bh=%d analysers=%02x\n",
+ now_ms, __FUNCTION__,
+ s,
+ rep,
+ rep->rex, rep->wex,
+ rep->flags,
+ rep->buf->i,
+ rep->analysers);
+
+ /* We don't know whether we have enough data, so must proceed
+ * this way :
+ * - iterate through all rules in their declaration order
+ * - if one rule returns MISS, it means the inspect delay is
+ * not over yet, then return immediately, otherwise consider
+ * it as a non-match.
+ * - if one rule returns OK, then return OK
+ * - if one rule returns KO, then return KO
+ */
+
+ if (rep->flags & CF_SHUTR || tick_is_expired(rep->analyse_exp, now_ms))
+ partial = SMP_OPT_FINAL;
+ else
+ partial = 0;
+
+	/* If the current_rule_list matches the executed rule list, we are in
+	 * a resume condition. If a resume is needed it is always in the action
+ * and never in the ACL or converters. In this case, we initialise the
+ * current rule, and go to the action execution point.
+ */
+ if (s->current_rule) {
+ rule = s->current_rule;
+ s->current_rule = NULL;
+ if (s->current_rule_list == &s->be->tcp_rep.inspect_rules)
+ goto resume_execution;
+ }
+ s->current_rule_list = &s->be->tcp_rep.inspect_rules;
+
+ list_for_each_entry(rule, &s->be->tcp_rep.inspect_rules, list) {
+ enum acl_test_res ret = ACL_TEST_PASS;
+
+ if (rule->cond) {
+ ret = acl_exec_cond(rule->cond, s->be, sess, s, SMP_OPT_DIR_RES | partial);
+ if (ret == ACL_TEST_MISS) {
+ /* just set the analyser timeout once at the beginning of the response */
+ if (!tick_isset(rep->analyse_exp) && s->be->tcp_rep.inspect_delay)
+ rep->analyse_exp = tick_add(now_ms, s->be->tcp_rep.inspect_delay);
+ return 0;
+ }
+
+ ret = acl_pass(ret);
+ if (rule->cond->pol == ACL_COND_UNLESS)
+ ret = !ret;
+ }
+
+ if (ret) {
+ act_flags |= ACT_FLAG_FIRST;
+resume_execution:
+ /* we have a matching rule. */
+ if (rule->action == ACT_ACTION_ALLOW) {
+ break;
+ }
+ else if (rule->action == ACT_ACTION_DENY) {
+ channel_abort(rep);
+ channel_abort(&s->req);
+ rep->analysers = 0;
+
+ s->be->be_counters.denied_resp++;
+ sess->fe->fe_counters.denied_resp++;
+ if (sess->listener->counters)
+ sess->listener->counters->denied_resp++;
+
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_PRXCOND;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_D;
+ return 0;
+ }
+ else if (rule->action == ACT_TCP_CLOSE) {
+ chn_prod(rep)->flags |= SI_FL_NOLINGER | SI_FL_NOHALF;
+ si_shutr(chn_prod(rep));
+ si_shutw(chn_prod(rep));
+ break;
+ }
+ else {
+ /* Custom keywords. */
+ if (!rule->action_ptr)
+ continue;
+
+ if (partial & SMP_OPT_FINAL)
+ act_flags |= ACT_FLAG_FINAL;
+
+ switch (rule->action_ptr(rule, s->be, s->sess, s, act_flags)) {
+ case ACT_RET_ERR:
+ case ACT_RET_CONT:
+ continue;
+ case ACT_RET_STOP:
+ break;
+ case ACT_RET_YIELD:
+ channel_dont_close(rep);
+ s->current_rule = rule;
+ return 0;
+ }
+ break; /* ACT_RET_STOP */
+ }
+ }
+ }
+
+ /* if we get there, it means we have no rule which matches, or
+ * we have an explicit accept, so we apply the default accept.
+ */
+ rep->analysers &= ~an_bit;
+ rep->analyse_exp = TICK_ETERNITY;
+ return 1;
+}
+
+
+/* This function performs the TCP layer4 analysis on the current request. It
+ * returns 0 if a reject rule matches, otherwise 1 if either an accept rule
+ * matches or if no more rule matches. It can only use rules which don't need
+ * any data. This only works on connection-based client-facing stream interfaces.
+ */
+int tcp_exec_req_rules(struct session *sess)
+{
+ struct act_rule *rule;
+ struct stksess *ts;
+ struct stktable *t = NULL;
+ struct connection *conn = objt_conn(sess->origin);
+ int result = 1;
+ enum acl_test_res ret;
+
+ if (!conn)
+ return result;
+
+ list_for_each_entry(rule, &sess->fe->tcp_req.l4_rules, list) {
+ ret = ACL_TEST_PASS;
+
+ if (rule->cond) {
+ ret = acl_exec_cond(rule->cond, sess->fe, sess, NULL, SMP_OPT_DIR_REQ|SMP_OPT_FINAL);
+ ret = acl_pass(ret);
+ if (rule->cond->pol == ACL_COND_UNLESS)
+ ret = !ret;
+ }
+
+ if (ret) {
+ /* we have a matching rule. */
+ if (rule->action == ACT_ACTION_ALLOW) {
+ break;
+ }
+ else if (rule->action == ACT_ACTION_DENY) {
+ sess->fe->fe_counters.denied_conn++;
+ if (sess->listener->counters)
+ sess->listener->counters->denied_conn++;
+
+ result = 0;
+ break;
+ }
+ else if (rule->action >= ACT_ACTION_TRK_SC0 && rule->action <= ACT_ACTION_TRK_SCMAX) {
+				/* Note: only the first valid tracking parameter of each
+				 * counter applies; subsequent ones are silently ignored.
+ */
+ struct stktable_key *key;
+
+ if (stkctr_entry(&sess->stkctr[tcp_trk_idx(rule->action)]))
+ continue;
+
+ t = rule->arg.trk_ctr.table.t;
+ key = stktable_fetch_key(t, sess->fe, sess, NULL, SMP_OPT_DIR_REQ|SMP_OPT_FINAL, rule->arg.trk_ctr.expr, NULL);
+
+ if (key && (ts = stktable_get_entry(t, key)))
+ stream_track_stkctr(&sess->stkctr[tcp_trk_idx(rule->action)], t, ts);
+ }
+ else if (rule->action == ACT_TCP_EXPECT_PX) {
+ conn->flags |= CO_FL_ACCEPT_PROXY;
+ conn_sock_want_recv(conn);
+ }
+ else {
+ /* Custom keywords. */
+ if (!rule->action_ptr)
+ break;
+ switch (rule->action_ptr(rule, sess->fe, sess, NULL, ACT_FLAG_FINAL | ACT_FLAG_FIRST)) {
+ case ACT_RET_YIELD:
+				/* yield is not allowed at this point. If this return code is
+				 * used it is a bug; report it and fall through to stop.
+ */
+ send_log(sess->fe, LOG_WARNING,
+ "Internal error: yield not allowed with tcp-request connection actions.");
+ case ACT_RET_STOP:
+ break;
+ case ACT_RET_CONT:
+ continue;
+ case ACT_RET_ERR:
+ result = 0;
+ break;
+ }
+ break; /* ACT_RET_STOP */
+ }
+ }
+ }
+ return result;
+}
+
+/* Executes the "silent-drop" action. May be called from {tcp,http}{request,response} */
+static enum act_return tcp_exec_action_silent_drop(struct act_rule *rule, struct proxy *px, struct session *sess, struct stream *strm, int flags)
+{
+ struct connection *conn = objt_conn(sess->origin);
+
+ if (!conn)
+ goto out;
+
+ if (!conn_ctrl_ready(conn))
+ goto out;
+
+#ifdef TCP_QUICKACK
+ /* drain is needed only to send the quick ACK */
+ conn_sock_drain(conn);
+
+ /* re-enable quickack if it was disabled to ack all data and avoid
+ * retransmits from the client that might trigger a real reset.
+ */
+ setsockopt(conn->t.sock.fd, SOL_TCP, TCP_QUICKACK, &one, sizeof(one));
+#endif
+ /* lingering must absolutely be disabled so that we don't send a
+ * shutdown(), this is critical to the TCP_REPAIR trick. When no stream
+ * is present, returning with ERR will cause lingering to be disabled.
+ */
+ if (strm)
+ strm->si[0].flags |= SI_FL_NOLINGER;
+
+ /* We're on the client-facing side, we must force to disable lingering to
+ * ensure we will use an RST exclusively and kill any pending data.
+ */
+ fdtab[conn->t.sock.fd].linger_risk = 1;
+
+#ifdef TCP_REPAIR
+ if (setsockopt(conn->t.sock.fd, SOL_TCP, TCP_REPAIR, &one, sizeof(one)) == 0) {
+ /* socket will be quiet now */
+ goto out;
+ }
+#endif
+ /* either TCP_REPAIR is not defined or it failed (eg: permissions).
+ * Let's fall back on the TTL trick, though it only works for routed
+	 * networks and has no effect on the local network.
+ */
+#ifdef IP_TTL
+ setsockopt(conn->t.sock.fd, SOL_IP, IP_TTL, &one, sizeof(one));
+#endif
+ out:
+ /* kill the stream if any */
+ if (strm) {
+ channel_abort(&strm->req);
+ channel_abort(&strm->res);
+ strm->req.analysers = 0;
+ strm->res.analysers = 0;
+ strm->be->be_counters.denied_req++;
+ if (!(strm->flags & SF_ERR_MASK))
+ strm->flags |= SF_ERR_PRXCOND;
+ if (!(strm->flags & SF_FINST_MASK))
+ strm->flags |= SF_FINST_R;
+ }
+
+ sess->fe->fe_counters.denied_req++;
+ if (sess->listener->counters)
+ sess->listener->counters->denied_req++;
+
+ return ACT_RET_STOP;
+}
+
+/* Parse a tcp-response rule. Return a negative value in case of failure */
+static int tcp_parse_response_rule(char **args, int arg, int section_type,
+ struct proxy *curpx, struct proxy *defpx,
+ struct act_rule *rule, char **err,
+ unsigned int where,
+ const char *file, int line)
+{
+ if (curpx == defpx || !(curpx->cap & PR_CAP_BE)) {
+ memprintf(err, "%s %s is only allowed in 'backend' sections",
+ args[0], args[1]);
+ return -1;
+ }
+
+ if (strcmp(args[arg], "accept") == 0) {
+ arg++;
+ rule->action = ACT_ACTION_ALLOW;
+ }
+ else if (strcmp(args[arg], "reject") == 0) {
+ arg++;
+ rule->action = ACT_ACTION_DENY;
+ }
+ else if (strcmp(args[arg], "close") == 0) {
+ arg++;
+ rule->action = ACT_TCP_CLOSE;
+ }
+ else {
+ struct action_kw *kw;
+ kw = tcp_res_cont_action(args[arg]);
+ if (kw) {
+ arg++;
+ rule->from = ACT_F_TCP_RES_CNT;
+ rule->kw = kw;
+ if (kw->parse((const char **)args, &arg, curpx, rule, err) == ACT_RET_PRS_ERR)
+ return -1;
+ } else {
+ action_build_list(&tcp_res_cont_keywords, &trash);
+ memprintf(err,
+ "'%s %s' expects 'accept', 'close', 'reject', %s in %s '%s' (got '%s')",
+ args[0], args[1], trash.str, proxy_type_str(curpx), curpx->id, args[arg]);
+ return -1;
+ }
+ }
+
+ if (strcmp(args[arg], "if") == 0 || strcmp(args[arg], "unless") == 0) {
+ if ((rule->cond = build_acl_cond(file, line, curpx, (const char **)args+arg, err)) == NULL) {
+ memprintf(err,
+ "'%s %s %s' : error detected in %s '%s' while parsing '%s' condition : %s",
+ args[0], args[1], args[2], proxy_type_str(curpx), curpx->id, args[arg], *err);
+ return -1;
+ }
+ }
+ else if (*args[arg]) {
+ memprintf(err,
+ "'%s %s %s' only accepts 'if' or 'unless', in %s '%s' (got '%s')",
+ args[0], args[1], args[2], proxy_type_str(curpx), curpx->id, args[arg]);
+ return -1;
+ }
+ return 0;
+}
+
+
+
+/* Parse a tcp-request rule. Return a negative value in case of failure */
+static int tcp_parse_request_rule(char **args, int arg, int section_type,
+ struct proxy *curpx, struct proxy *defpx,
+ struct act_rule *rule, char **err,
+ unsigned int where, const char *file, int line)
+{
+ if (curpx == defpx) {
+ memprintf(err, "%s %s is not allowed in 'defaults' sections",
+ args[0], args[1]);
+ return -1;
+ }
+
+ if (!strcmp(args[arg], "accept")) {
+ arg++;
+ rule->action = ACT_ACTION_ALLOW;
+ }
+ else if (!strcmp(args[arg], "reject")) {
+ arg++;
+ rule->action = ACT_ACTION_DENY;
+ }
+ else if (strcmp(args[arg], "capture") == 0) {
+ struct sample_expr *expr;
+ struct cap_hdr *hdr;
+ int kw = arg;
+ int len = 0;
+
+ if (!(curpx->cap & PR_CAP_FE)) {
+ memprintf(err,
+ "'%s %s %s' : proxy '%s' has no frontend capability",
+ args[0], args[1], args[kw], curpx->id);
+ return -1;
+ }
+
+ if (!(where & SMP_VAL_FE_REQ_CNT)) {
+ memprintf(err,
+ "'%s %s' is not allowed in '%s %s' rules in %s '%s'",
+ args[arg], args[arg+1], args[0], args[1], proxy_type_str(curpx), curpx->id);
+ return -1;
+ }
+
+ arg++;
+
+ curpx->conf.args.ctx = ARGC_CAP;
+ expr = sample_parse_expr(args, &arg, file, line, err, &curpx->conf.args);
+ if (!expr) {
+ memprintf(err,
+ "'%s %s %s' : %s",
+ args[0], args[1], args[kw], *err);
+ return -1;
+ }
+
+ if (!(expr->fetch->val & where)) {
+ memprintf(err,
+ "'%s %s %s' : fetch method '%s' extracts information from '%s', none of which is available here",
+ args[0], args[1], args[kw], args[arg-1], sample_src_names(expr->fetch->use));
+ free(expr);
+ return -1;
+ }
+
+ if (strcmp(args[arg], "len") == 0) {
+ arg++;
+ if (!args[arg]) {
+ memprintf(err,
+ "'%s %s %s' : missing length value",
+ args[0], args[1], args[kw]);
+ free(expr);
+ return -1;
+ }
+ /* parse the capture length */
+ len = atoi(args[arg]);
+ if (len <= 0) {
+ memprintf(err,
+ "'%s %s %s' : length must be > 0",
+ args[0], args[1], args[kw]);
+ free(expr);
+ return -1;
+ }
+ arg++;
+ }
+
+ if (!len) {
+ memprintf(err,
+ "'%s %s %s' : a positive 'len' argument is mandatory",
+ args[0], args[1], args[kw]);
+ free(expr);
+ return -1;
+ }
+
+ hdr = calloc(1, sizeof(*hdr));
+ if (!hdr) {
+ memprintf(err,
+ "'%s %s %s' : out of memory",
+ args[0], args[1], args[kw]);
+ free(expr);
+ return -1;
+ }
+ hdr->next = curpx->req_cap;
+ hdr->name = NULL; /* not a header capture */
+ hdr->namelen = 0;
+ hdr->len = len;
+ hdr->pool = create_pool("caphdr", hdr->len + 1, MEM_F_SHARED);
+ hdr->index = curpx->nb_req_cap++;
+
+ curpx->req_cap = hdr;
+ curpx->to_log |= LW_REQHDR;
+
+ /* check if we need to allocate an hdr_idx struct for HTTP parsing */
+ curpx->http_needed |= !!(expr->fetch->use & SMP_USE_HTTP_ANY);
+
+ rule->arg.cap.expr = expr;
+ rule->arg.cap.hdr = hdr;
+ rule->action = ACT_TCP_CAPTURE;
+ }
+ else if (strncmp(args[arg], "track-sc", 8) == 0 &&
+ args[arg][8] >= '0' && args[arg][8] < '0' + MAX_SESS_STKCTR &&
+ args[arg][9] == '\0') { /* track-sc0 .. track-sc<MAX_SESS_STKCTR-1> */
+ struct sample_expr *expr;
+ int kw = arg;
+
+ arg++;
+
+ curpx->conf.args.ctx = ARGC_TRK;
+ expr = sample_parse_expr(args, &arg, file, line, err, &curpx->conf.args);
+ if (!expr) {
+ memprintf(err,
+ "'%s %s %s' : %s",
+ args[0], args[1], args[kw], *err);
+ return -1;
+ }
+
+ if (!(expr->fetch->val & where)) {
+ memprintf(err,
+ "'%s %s %s' : fetch method '%s' extracts information from '%s', none of which is available here",
+ args[0], args[1], args[kw], args[arg-1], sample_src_names(expr->fetch->use));
+ free(expr);
+ return -1;
+ }
+
+ /* check if we need to allocate an hdr_idx struct for HTTP parsing */
+ curpx->http_needed |= !!(expr->fetch->use & SMP_USE_HTTP_ANY);
+
+ if (strcmp(args[arg], "table") == 0) {
+ arg++;
+ if (!args[arg]) {
+ memprintf(err,
+ "'%s %s %s' : missing table name",
+ args[0], args[1], args[kw]);
+ free(expr);
+ return -1;
+ }
+ /* we copy the table name for now, it will be resolved later */
+ rule->arg.trk_ctr.table.n = strdup(args[arg]);
+ arg++;
+ }
+ rule->arg.trk_ctr.expr = expr;
+ rule->action = ACT_ACTION_TRK_SC0 + args[kw][8] - '0';
+ }
+ else if (strcmp(args[arg], "expect-proxy") == 0) {
+ if (strcmp(args[arg+1], "layer4") != 0) {
+ memprintf(err,
+ "'%s %s %s' only supports 'layer4' in %s '%s' (got '%s')",
+ args[0], args[1], args[arg], proxy_type_str(curpx), curpx->id, args[arg+1]);
+ return -1;
+ }
+
+ if (!(where & SMP_VAL_FE_CON_ACC)) {
+ memprintf(err,
+ "'%s %s' is not allowed in '%s %s' rules in %s '%s'",
+ args[arg], args[arg+1], args[0], args[1], proxy_type_str(curpx), curpx->id);
+ return -1;
+ }
+
+ arg += 2;
+ rule->action = ACT_TCP_EXPECT_PX;
+ }
+ else {
+ struct action_kw *kw;
+ if (where & SMP_VAL_FE_CON_ACC) {
+ kw = tcp_req_conn_action(args[arg]);
+ rule->kw = kw;
+ rule->from = ACT_F_TCP_REQ_CON;
+ } else {
+ kw = tcp_req_cont_action(args[arg]);
+ rule->kw = kw;
+ rule->from = ACT_F_TCP_REQ_CNT;
+ }
+ if (kw) {
+ arg++;
+ if (kw->parse((const char **)args, &arg, curpx, rule, err) == ACT_RET_PRS_ERR)
+ return -1;
+ } else {
+ if (where & SMP_VAL_FE_CON_ACC)
+ action_build_list(&tcp_req_conn_keywords, &trash);
+ else
+ action_build_list(&tcp_req_cont_keywords, &trash);
+ memprintf(err,
+ "'%s %s' expects 'accept', 'reject', 'track-sc0' ... 'track-sc%d', %s "
+ "in %s '%s' (got '%s')",
+ args[0], args[1], MAX_SESS_STKCTR-1, trash.str, proxy_type_str(curpx),
+ curpx->id, args[arg]);
+ return -1;
+ }
+ }
+
+ if (strcmp(args[arg], "if") == 0 || strcmp(args[arg], "unless") == 0) {
+ if ((rule->cond = build_acl_cond(file, line, curpx, (const char **)args+arg, err)) == NULL) {
+ memprintf(err,
+ "'%s %s %s' : error detected in %s '%s' while parsing '%s' condition : %s",
+ args[0], args[1], args[2], proxy_type_str(curpx), curpx->id, args[arg], *err);
+ return -1;
+ }
+ }
+ else if (*args[arg]) {
+ memprintf(err,
+ "'%s %s %s' only accepts 'if' or 'unless', in %s '%s' (got '%s')",
+ args[0], args[1], args[2], proxy_type_str(curpx), curpx->id, args[arg]);
+ return -1;
+ }
+ return 0;
+}
+
+/* This function should be called to parse a line starting with the "tcp-response"
+ * keyword.
+ */
+static int tcp_parse_tcp_rep(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ const char *ptr = NULL;
+ unsigned int val;
+ int warn = 0;
+ int arg;
+ struct act_rule *rule;
+ unsigned int where;
+ const struct acl *acl;
+ const char *kw;
+
+ if (!*args[1]) {
+ memprintf(err, "missing argument for '%s' in %s '%s'",
+ args[0], proxy_type_str(curpx), curpx->id);
+ return -1;
+ }
+
+ if (strcmp(args[1], "inspect-delay") == 0) {
+ if (curpx == defpx || !(curpx->cap & PR_CAP_BE)) {
+ memprintf(err, "%s %s is only allowed in 'backend' sections",
+ args[0], args[1]);
+ return -1;
+ }
+
+ if (!*args[2] || (ptr = parse_time_err(args[2], &val, TIME_UNIT_MS))) {
+ memprintf(err,
+ "'%s %s' expects a positive delay in milliseconds, in %s '%s'",
+ args[0], args[1], proxy_type_str(curpx), curpx->id);
+ if (ptr)
+ memprintf(err, "%s (unexpected character '%c')", *err, *ptr);
+ return -1;
+ }
+
+ if (curpx->tcp_rep.inspect_delay) {
+ memprintf(err, "ignoring %s %s (was already defined) in %s '%s'",
+ args[0], args[1], proxy_type_str(curpx), curpx->id);
+ return 1;
+ }
+ curpx->tcp_rep.inspect_delay = val;
+ return 0;
+ }
+
+ rule = calloc(1, sizeof(*rule));
+ if (!rule) {
+ memprintf(err, "parsing [%s:%d] : out of memory", file, line);
+ return -1;
+ }
+ LIST_INIT(&rule->list);
+ arg = 1;
+ where = 0;
+
+ if (strcmp(args[1], "content") == 0) {
+ arg++;
+
+ if (curpx->cap & PR_CAP_FE)
+ where |= SMP_VAL_FE_RES_CNT;
+ if (curpx->cap & PR_CAP_BE)
+ where |= SMP_VAL_BE_RES_CNT;
+
+ if (tcp_parse_response_rule(args, arg, section_type, curpx, defpx, rule, err, where, file, line) < 0)
+ goto error;
+
+ acl = rule->cond ? acl_cond_conflicts(rule->cond, where) : NULL;
+ if (acl) {
+ if (acl->name && *acl->name)
+ memprintf(err,
+ "acl '%s' will never match in '%s %s' because it only involves keywords that are incompatible with '%s'",
+ acl->name, args[0], args[1], sample_ckp_names(where));
+ else
+ memprintf(err,
+ "anonymous acl will never match in '%s %s' because it uses keyword '%s' which is incompatible with '%s'",
+ args[0], args[1],
+ LIST_ELEM(acl->expr.n, struct acl_expr *, list)->kw,
+ sample_ckp_names(where));
+
+ warn++;
+ }
+ else if (rule->cond && acl_cond_kw_conflicts(rule->cond, where, &acl, &kw)) {
+ if (acl->name && *acl->name)
+ memprintf(err,
+ "acl '%s' involves keyword '%s' which is incompatible with '%s'",
+ acl->name, kw, sample_ckp_names(where));
+ else
+ memprintf(err,
+ "anonymous acl involves keyword '%s' which is incompatible with '%s'",
+ kw, sample_ckp_names(where));
+ warn++;
+ }
+
+ LIST_ADDQ(&curpx->tcp_rep.inspect_rules, &rule->list);
+ }
+ else {
+ memprintf(err,
+ "'%s' expects 'inspect-delay' or 'content' in %s '%s' (got '%s')",
+ args[0], proxy_type_str(curpx), curpx->id, args[1]);
+ goto error;
+ }
+
+ return warn;
+ error:
+ free(rule);
+ return -1;
+}
+
+
+/* This function should be called to parse a line starting with the "tcp-request"
+ * keyword.
+ */
+static int tcp_parse_tcp_req(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ const char *ptr = NULL;
+ unsigned int val;
+ int warn = 0;
+ int arg;
+ struct act_rule *rule;
+ unsigned int where;
+ const struct acl *acl;
+ const char *kw;
+
+ if (!*args[1]) {
+ if (curpx == defpx)
+ memprintf(err, "missing argument for '%s' in defaults section", args[0]);
+ else
+ memprintf(err, "missing argument for '%s' in %s '%s'",
+ args[0], proxy_type_str(curpx), curpx->id);
+ return -1;
+ }
+
+ if (!strcmp(args[1], "inspect-delay")) {
+ if (curpx == defpx) {
+ memprintf(err, "%s %s is not allowed in 'defaults' sections",
+ args[0], args[1]);
+ return -1;
+ }
+
+ if (!*args[2] || (ptr = parse_time_err(args[2], &val, TIME_UNIT_MS))) {
+ memprintf(err,
+ "'%s %s' expects a positive delay in milliseconds, in %s '%s'",
+ args[0], args[1], proxy_type_str(curpx), curpx->id);
+ if (ptr)
+ memprintf(err, "%s (unexpected character '%c')", *err, *ptr);
+ return -1;
+ }
+
+ if (curpx->tcp_req.inspect_delay) {
+ memprintf(err, "ignoring %s %s (was already defined) in %s '%s'",
+ args[0], args[1], proxy_type_str(curpx), curpx->id);
+ return 1;
+ }
+ curpx->tcp_req.inspect_delay = val;
+ return 0;
+ }
+
+ rule = calloc(1, sizeof(*rule));
+ if (!rule) {
+ memprintf(err, "parsing [%s:%d] : out of memory", file, line);
+ return -1;
+ }
+ LIST_INIT(&rule->list);
+ arg = 1;
+ where = 0;
+
+ if (strcmp(args[1], "content") == 0) {
+ arg++;
+
+ if (curpx->cap & PR_CAP_FE)
+ where |= SMP_VAL_FE_REQ_CNT;
+ if (curpx->cap & PR_CAP_BE)
+ where |= SMP_VAL_BE_REQ_CNT;
+
+ if (tcp_parse_request_rule(args, arg, section_type, curpx, defpx, rule, err, where, file, line) < 0)
+ goto error;
+
+ acl = rule->cond ? acl_cond_conflicts(rule->cond, where) : NULL;
+ if (acl) {
+ if (acl->name && *acl->name)
+ memprintf(err,
+ "acl '%s' will never match in '%s %s' because it only involves keywords that are incompatible with '%s'",
+ acl->name, args[0], args[1], sample_ckp_names(where));
+ else
+ memprintf(err,
+ "anonymous acl will never match in '%s %s' because it uses keyword '%s' which is incompatible with '%s'",
+ args[0], args[1],
+ LIST_ELEM(acl->expr.n, struct acl_expr *, list)->kw,
+ sample_ckp_names(where));
+
+ warn++;
+ }
+ else if (rule->cond && acl_cond_kw_conflicts(rule->cond, where, &acl, &kw)) {
+ if (acl->name && *acl->name)
+ memprintf(err,
+ "acl '%s' involves keyword '%s' which is incompatible with '%s'",
+ acl->name, kw, sample_ckp_names(where));
+ else
+ memprintf(err,
+ "anonymous acl involves keyword '%s' which is incompatible with '%s'",
+ kw, sample_ckp_names(where));
+ warn++;
+ }
+
+ /* the following function directly emits the warning */
+ warnif_misplaced_tcp_cont(curpx, file, line, args[0]);
+ LIST_ADDQ(&curpx->tcp_req.inspect_rules, &rule->list);
+ }
+ else if (strcmp(args[1], "connection") == 0) {
+ arg++;
+
+ if (!(curpx->cap & PR_CAP_FE)) {
+ memprintf(err, "%s %s is not allowed because %s %s is not a frontend",
+ args[0], args[1], proxy_type_str(curpx), curpx->id);
+ goto error;
+ }
+
+ where |= SMP_VAL_FE_CON_ACC;
+
+ if (tcp_parse_request_rule(args, arg, section_type, curpx, defpx, rule, err, where, file, line) < 0)
+ goto error;
+
+ acl = rule->cond ? acl_cond_conflicts(rule->cond, where) : NULL;
+ if (acl) {
+ if (acl->name && *acl->name)
+ memprintf(err,
+ "acl '%s' will never match in '%s %s' because it only involves keywords that are incompatible with '%s'",
+ acl->name, args[0], args[1], sample_ckp_names(where));
+ else
+ memprintf(err,
+ "anonymous acl will never match in '%s %s' because it uses keyword '%s' which is incompatible with '%s'",
+ args[0], args[1],
+ LIST_ELEM(acl->expr.n, struct acl_expr *, list)->kw,
+ sample_ckp_names(where));
+
+ warn++;
+ }
+ else if (rule->cond && acl_cond_kw_conflicts(rule->cond, where, &acl, &kw)) {
+ if (acl->name && *acl->name)
+ memprintf(err,
+ "acl '%s' involves keyword '%s' which is incompatible with '%s'",
+ acl->name, kw, sample_ckp_names(where));
+ else
+ memprintf(err,
+ "anonymous acl involves keyword '%s' which is incompatible with '%s'",
+ kw, sample_ckp_names(where));
+ warn++;
+ }
+
+ /* the following function directly emits the warning */
+ warnif_misplaced_tcp_conn(curpx, file, line, args[0]);
+ LIST_ADDQ(&curpx->tcp_req.l4_rules, &rule->list);
+ }
+ else {
+ if (curpx == defpx)
+ memprintf(err,
+ "'%s' expects 'inspect-delay', 'connection', or 'content' in defaults section (got '%s')",
+ args[0], args[1]);
+ else
+ memprintf(err,
+ "'%s' expects 'inspect-delay', 'connection', or 'content' in %s '%s' (got '%s')",
+ args[0], proxy_type_str(curpx), curpx->id, args[1]);
+ goto error;
+ }
+
+ return warn;
+ error:
+ free(rule);
+ return -1;
+}
+
+/* Parse a "silent-drop" action. It takes no argument and always returns
+ * ACT_RET_PRS_OK.
+ */
+static enum act_parse_ret tcp_parse_silent_drop(const char **args, int *orig_arg, struct proxy *px,
+ struct act_rule *rule, char **err)
+{
+ rule->action = ACT_CUSTOM;
+ rule->action_ptr = tcp_exec_action_silent_drop;
+ return ACT_RET_PRS_OK;
+}
+
+
+/************************************************************************/
+/* All supported sample fetch functions must be declared here */
+/************************************************************************/
+
+/* fetch the connection's source IPv4/IPv6 address */
+int smp_fetch_src(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct connection *cli_conn = objt_conn(smp->sess->origin);
+
+ if (!cli_conn)
+ return 0;
+
+ switch (cli_conn->addr.from.ss_family) {
+ case AF_INET:
+ smp->data.u.ipv4 = ((struct sockaddr_in *)&cli_conn->addr.from)->sin_addr;
+ smp->data.type = SMP_T_IPV4;
+ break;
+ case AF_INET6:
+ smp->data.u.ipv6 = ((struct sockaddr_in6 *)&cli_conn->addr.from)->sin6_addr;
+ smp->data.type = SMP_T_IPV6;
+ break;
+ default:
+ return 0;
+ }
+
+ smp->flags = 0;
+ return 1;
+}
+
+/* set temp integer to the connection's source port */
+static int
+smp_fetch_sport(const struct arg *args, struct sample *smp, const char *k, void *private)
+{
+ struct connection *cli_conn = objt_conn(smp->sess->origin);
+
+ if (!cli_conn)
+ return 0;
+
+ smp->data.type = SMP_T_SINT;
+ if (!(smp->data.u.sint = get_host_port(&cli_conn->addr.from)))
+ return 0;
+
+ smp->flags = 0;
+ return 1;
+}
+
+/* fetch the connection's destination IPv4/IPv6 address */
+static int
+smp_fetch_dst(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct connection *cli_conn = objt_conn(smp->sess->origin);
+
+ if (!cli_conn)
+ return 0;
+
+ conn_get_to_addr(cli_conn);
+
+ switch (cli_conn->addr.to.ss_family) {
+ case AF_INET:
+ smp->data.u.ipv4 = ((struct sockaddr_in *)&cli_conn->addr.to)->sin_addr;
+ smp->data.type = SMP_T_IPV4;
+ break;
+ case AF_INET6:
+ smp->data.u.ipv6 = ((struct sockaddr_in6 *)&cli_conn->addr.to)->sin6_addr;
+ smp->data.type = SMP_T_IPV6;
+ break;
+ default:
+ return 0;
+ }
+
+ smp->flags = 0;
+ return 1;
+}
+
+/* set temp integer to the frontend connection's destination port */
+static int
+smp_fetch_dport(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct connection *cli_conn = objt_conn(smp->sess->origin);
+
+ if (!cli_conn)
+ return 0;
+
+ conn_get_to_addr(cli_conn);
+
+ smp->data.type = SMP_T_SINT;
+ if (!(smp->data.u.sint = get_host_port(&cli_conn->addr.to)))
+ return 0;
+
+ smp->flags = 0;
+ return 1;
+}
+
+#ifdef IPV6_V6ONLY
+/* parse the "v4v6" bind keyword */
+static int bind_parse_v4v6(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ struct listener *l;
+
+ list_for_each_entry(l, &conf->listeners, by_bind) {
+ if (l->addr.ss_family == AF_INET6)
+ l->options |= LI_O_V4V6;
+ }
+
+ return 0;
+}
+
+/* parse the "v6only" bind keyword */
+static int bind_parse_v6only(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ struct listener *l;
+
+ list_for_each_entry(l, &conf->listeners, by_bind) {
+ if (l->addr.ss_family == AF_INET6)
+ l->options |= LI_O_V6ONLY;
+ }
+
+ return 0;
+}
+#endif
+
+#ifdef CONFIG_HAP_TRANSPARENT
+/* parse the "transparent" bind keyword */
+static int bind_parse_transparent(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ struct listener *l;
+
+ list_for_each_entry(l, &conf->listeners, by_bind) {
+ if (l->addr.ss_family == AF_INET || l->addr.ss_family == AF_INET6)
+ l->options |= LI_O_FOREIGN;
+ }
+
+ return 0;
+}
+#endif
+
+#ifdef TCP_DEFER_ACCEPT
+/* parse the "defer-accept" bind keyword */
+static int bind_parse_defer_accept(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ struct listener *l;
+
+ list_for_each_entry(l, &conf->listeners, by_bind) {
+ if (l->addr.ss_family == AF_INET || l->addr.ss_family == AF_INET6)
+ l->options |= LI_O_DEF_ACCEPT;
+ }
+
+ return 0;
+}
+#endif
+
+#ifdef TCP_FASTOPEN
+/* parse the "tfo" bind keyword */
+static int bind_parse_tfo(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ struct listener *l;
+
+ list_for_each_entry(l, &conf->listeners, by_bind) {
+ if (l->addr.ss_family == AF_INET || l->addr.ss_family == AF_INET6)
+ l->options |= LI_O_TCP_FO;
+ }
+
+ return 0;
+}
+#endif
+
+#ifdef TCP_MAXSEG
+/* parse the "mss" bind keyword */
+static int bind_parse_mss(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ struct listener *l;
+ int mss;
+
+ if (!*args[cur_arg + 1]) {
+ memprintf(err, "'%s' : missing MSS value", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ mss = atoi(args[cur_arg + 1]);
+ if (!mss || abs(mss) > 65535) {
+ memprintf(err, "'%s' : expects an MSS with an absolute value between 1 and 65535", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ list_for_each_entry(l, &conf->listeners, by_bind) {
+ if (l->addr.ss_family == AF_INET || l->addr.ss_family == AF_INET6)
+ l->maxseg = mss;
+ }
+
+ return 0;
+}
+#endif
+
+#ifdef TCP_USER_TIMEOUT
+/* parse the "tcp-ut" bind keyword */
+static int bind_parse_tcp_ut(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ const char *ptr = NULL;
+ struct listener *l;
+ unsigned int timeout;
+
+ if (!*args[cur_arg + 1]) {
+ memprintf(err, "'%s' : missing TCP User Timeout value", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ ptr = parse_time_err(args[cur_arg + 1], &timeout, TIME_UNIT_MS);
+ if (ptr) {
+ memprintf(err, "'%s' : expects a positive delay in milliseconds", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ list_for_each_entry(l, &conf->listeners, by_bind) {
+ if (l->addr.ss_family == AF_INET || l->addr.ss_family == AF_INET6)
+ l->tcp_ut = timeout;
+ }
+
+ return 0;
+}
+#endif
+
+#ifdef SO_BINDTODEVICE
+/* parse the "interface" bind keyword */
+static int bind_parse_interface(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ struct listener *l;
+
+ if (!*args[cur_arg + 1]) {
+ memprintf(err, "'%s' : missing interface name", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ list_for_each_entry(l, &conf->listeners, by_bind) {
+ if (l->addr.ss_family == AF_INET || l->addr.ss_family == AF_INET6)
+ l->interface = strdup(args[cur_arg + 1]);
+ }
+
+ global.last_checks |= LSTCHK_NETADM;
+ return 0;
+}
+#endif
+
+#ifdef CONFIG_HAP_NS
+/* parse the "namespace" bind keyword */
+static int bind_parse_namespace(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ struct listener *l;
+ char *namespace = NULL;
+
+ if (!*args[cur_arg + 1]) {
+ memprintf(err, "'%s' : missing namespace id", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+ namespace = args[cur_arg + 1];
+
+ list_for_each_entry(l, &conf->listeners, by_bind) {
+ l->netns = netns_store_lookup(namespace, strlen(namespace));
+
+ if (l->netns == NULL)
+ l->netns = netns_store_insert(namespace);
+
+ if (l->netns == NULL) {
+ Alert("Cannot open namespace '%s'.\n", args[cur_arg + 1]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+ }
+ return 0;
+}
+#endif
+
+#ifdef TCP_USER_TIMEOUT
+/* parse the "tcp-ut" server keyword */
+static int srv_parse_tcp_ut(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+ const char *ptr = NULL;
+ unsigned int timeout;
+
+ if (!*args[*cur_arg + 1]) {
+ memprintf(err, "'%s' : missing TCP User Timeout value", args[*cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ ptr = parse_time_err(args[*cur_arg + 1], &timeout, TIME_UNIT_MS);
+ if (ptr) {
+ memprintf(err, "'%s' : expects a positive delay in milliseconds", args[*cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ if (newsrv->addr.ss_family == AF_INET || newsrv->addr.ss_family == AF_INET6)
+ newsrv->tcp_ut = timeout;
+
+ return 0;
+}
+#endif
+
+static struct cfg_kw_list cfg_kws = {ILH, {
+ { CFG_LISTEN, "tcp-request", tcp_parse_tcp_req },
+ { CFG_LISTEN, "tcp-response", tcp_parse_tcp_rep },
+ { 0, NULL, NULL },
+}};
+
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Please take care of keeping this list alphabetically sorted.
+ */
+static struct acl_kw_list acl_kws = {ILH, {
+ { /* END */ },
+}};
+
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Note: fetches that may return multiple types must be declared as the lowest
+ * common denominator, the type that can be casted into all other ones. For
+ * instance v4/v6 must be declared v4.
+ */
+static struct sample_fetch_kw_list sample_fetch_keywords = {ILH, {
+ { "dst", smp_fetch_dst, 0, NULL, SMP_T_IPV4, SMP_USE_L4CLI },
+ { "dst_port", smp_fetch_dport, 0, NULL, SMP_T_SINT, SMP_USE_L4CLI },
+ { "src", smp_fetch_src, 0, NULL, SMP_T_IPV4, SMP_USE_L4CLI },
+ { "src_port", smp_fetch_sport, 0, NULL, SMP_T_SINT, SMP_USE_L4CLI },
+ { /* END */ },
+}};
+
+/************************************************************************/
+/* All supported bind keywords must be declared here. */
+/************************************************************************/
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Please take care of keeping this list alphabetically sorted, doing so helps
+ * all code contributors.
+ * Optional keywords are also declared with a NULL ->parse() function so that
+ * the config parser can report an appropriate error when a known keyword was
+ * not enabled.
+ */
+static struct bind_kw_list bind_kws = { "TCP", { }, {
+#ifdef TCP_DEFER_ACCEPT
+ { "defer-accept", bind_parse_defer_accept, 0 }, /* wait for some data for 1 second max before doing accept */
+#endif
+#ifdef SO_BINDTODEVICE
+ { "interface", bind_parse_interface, 1 }, /* specifically bind to this interface */
+#endif
+#ifdef TCP_MAXSEG
+ { "mss", bind_parse_mss, 1 }, /* set MSS of listening socket */
+#endif
+#ifdef CONFIG_HAP_NS
+ { "namespace", bind_parse_namespace, 1 }, /* set the socket's network namespace */
+#endif
+#ifdef TCP_USER_TIMEOUT
+ { "tcp-ut", bind_parse_tcp_ut, 1 }, /* set User Timeout on listening socket */
+#endif
+#ifdef TCP_FASTOPEN
+ { "tfo", bind_parse_tfo, 0 }, /* enable TCP_FASTOPEN of listening socket */
+#endif
+#ifdef CONFIG_HAP_TRANSPARENT
+ { "transparent", bind_parse_transparent, 0 }, /* transparently bind to the specified addresses */
+#endif
+#ifdef IPV6_V6ONLY
+ { "v4v6", bind_parse_v4v6, 0 }, /* force socket to bind to IPv4+IPv6 */
+ { "v6only", bind_parse_v6only, 0 }, /* force socket to bind to IPv6 only */
+#endif
+ /* the versions with the NULL parse function */
+ { "defer-accept", NULL, 0 },
+ { "interface", NULL, 1 },
+ { "mss", NULL, 1 },
+ { "namespace", NULL, 1 },
+ { "tcp-ut", NULL, 1 },
+ { "tfo", NULL, 0 },
+ { "transparent", NULL, 0 },
+ { "v4v6", NULL, 0 },
+ { "v6only", NULL, 0 },
+ { NULL, NULL, 0 },
+}};
+
+static struct srv_kw_list srv_kws = { "TCP", { }, {
+#ifdef TCP_USER_TIMEOUT
+ { "tcp-ut", srv_parse_tcp_ut, 1, 0 }, /* set TCP user timeout on server */
+#endif
+ { NULL, NULL, 0 },
+}};
+
+static struct action_kw_list tcp_req_conn_actions = {ILH, {
+ { "silent-drop", tcp_parse_silent_drop },
+ { /* END */ }
+}};
+
+static struct action_kw_list tcp_req_cont_actions = {ILH, {
+ { "silent-drop", tcp_parse_silent_drop },
+ { /* END */ }
+}};
+
+static struct action_kw_list tcp_res_cont_actions = {ILH, {
+ { "silent-drop", tcp_parse_silent_drop },
+ { /* END */ }
+}};
+
+static struct action_kw_list http_req_actions = {ILH, {
+ { "silent-drop", tcp_parse_silent_drop },
+ { /* END */ }
+}};
+
+static struct action_kw_list http_res_actions = {ILH, {
+ { "silent-drop", tcp_parse_silent_drop },
+ { /* END */ }
+}};
+
+
+__attribute__((constructor))
+static void __tcp_protocol_init(void)
+{
+ protocol_register(&proto_tcpv4);
+ protocol_register(&proto_tcpv6);
+ sample_register_fetches(&sample_fetch_keywords);
+ cfg_register_keywords(&cfg_kws);
+ acl_register_keywords(&acl_kws);
+ bind_register_keywords(&bind_kws);
+ srv_register_keywords(&srv_kws);
+ tcp_req_conn_keywords_register(&tcp_req_conn_actions);
+ tcp_req_cont_keywords_register(&tcp_req_cont_actions);
+ tcp_res_cont_keywords_register(&tcp_res_cont_actions);
+ http_req_keywords_register(&http_req_actions);
+ http_res_keywords_register(&http_res_actions);
+}
+
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * UDP protocol related functions
+ *
+ * Copyright 2014 Baptiste Assmann <bedis9@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <types/global.h>
+#include <types/fd.h>
+#include <types/proto_udp.h>
+
+#include <proto/fd.h>
+
+/* datagram handler callback */
+int dgram_fd_handler(int fd)
+{
+ struct dgram_conn *dgram = fdtab[fd].owner;
+
+ if (unlikely(!dgram))
+ return 0;
+
+ if (fd_recv_ready(fd))
+ dgram->data->recv(dgram);
+ else if (fd_send_ready(fd))
+ dgram->data->send(dgram);
+
+ return 0;
+}
--- /dev/null
+/*
+ * UNIX SOCK_STREAM protocol layer (uxst)
+ *
+ * Copyright 2000-2010 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <ctype.h>
+#include <errno.h>
+#include <fcntl.h>
+#include <pwd.h>
+#include <grp.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <syslog.h>
+#include <time.h>
+
+#include <sys/socket.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+#include <sys/un.h>
+
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/debug.h>
+#include <common/errors.h>
+#include <common/mini-clist.h>
+#include <common/standard.h>
+#include <common/time.h>
+#include <common/version.h>
+
+#include <types/global.h>
+
+#include <proto/connection.h>
+#include <proto/fd.h>
+#include <proto/listener.h>
+#include <proto/log.h>
+#include <proto/protocol.h>
+#include <proto/proto_uxst.h>
+#include <proto/task.h>
+
+static int uxst_bind_listener(struct listener *listener, char *errmsg, int errlen);
+static int uxst_bind_listeners(struct protocol *proto, char *errmsg, int errlen);
+static int uxst_unbind_listeners(struct protocol *proto);
+static int uxst_connect_server(struct connection *conn, int data, int delack);
+
+/* Note: must not be declared <const> as its list will be overwritten */
+static struct protocol proto_unix = {
+ .name = "unix_stream",
+ .sock_domain = PF_UNIX,
+ .sock_type = SOCK_STREAM,
+ .sock_prot = 0,
+ .sock_family = AF_UNIX,
+ .sock_addrlen = sizeof(struct sockaddr_un),
+ .l3_addrlen = sizeof(((struct sockaddr_un*)0)->sun_path), /* path len */
+ .accept = &listener_accept,
+ .connect = &uxst_connect_server,
+ .bind = uxst_bind_listener,
+ .bind_all = uxst_bind_listeners,
+ .unbind_all = uxst_unbind_listeners,
+ .enable_all = enable_all_listeners,
+ .disable_all = disable_all_listeners,
+ .get_src = uxst_get_src,
+ .get_dst = uxst_get_dst,
+ .pause = uxst_pause_listener,
+ .listeners = LIST_HEAD_INIT(proto_unix.listeners),
+ .nb_listeners = 0,
+};
+
+/********************************
+ * 1) low-level socket functions
+ ********************************/
+
+/*
+ * Retrieves the source address for the socket <fd>, with <dir> indicating
+ * if we're a listener (=0) or an initiator (!=0). It returns 0 in case of
+ * success, -1 in case of error. The socket's source address is stored in
+ * <sa> for <salen> bytes.
+ */
+int uxst_get_src(int fd, struct sockaddr *sa, socklen_t salen, int dir)
+{
+ if (dir)
+ return getsockname(fd, sa, &salen);
+ else
+ return getpeername(fd, sa, &salen);
+}
+
+
+/*
+ * Retrieves the original destination address for the socket <fd>, with <dir>
+ * indicating if we're a listener (=0) or an initiator (!=0). It returns 0 in
+ * case of success, -1 in case of error. The socket's destination address is
+ * stored in <sa> for <salen> bytes.
+ */
+int uxst_get_dst(int fd, struct sockaddr *sa, socklen_t salen, int dir)
+{
+ if (dir)
+ return getpeername(fd, sa, &salen);
+ else
+ return getsockname(fd, sa, &salen);
+}
+
+
+/* Tries to destroy the UNIX stream socket <path>. The socket must not be used
+ * anymore. It operates on a best-effort basis, and no error is returned.
+ */
+static void destroy_uxst_socket(const char *path)
+{
+ struct sockaddr_un addr;
+ int sock, ret;
+
+ /* if the path was cleared, we do nothing */
+ if (!*path)
+ return;
+
+ /* We might have been chrooted, so we may not be able to access the
+ * socket. In order to avoid bothering the other end, we connect with a
+ * wrong protocol, namely SOCK_DGRAM. The return code from connect()
+ * is enough to know if the socket is still live or not. If it's live
+ * in mode SOCK_STREAM, we get EPROTOTYPE or anything else but not
+ * ECONNREFUSED. In this case, we do not touch it because it's used
+ * by some other process.
+ */
+ sock = socket(PF_UNIX, SOCK_DGRAM, 0);
+ if (sock < 0)
+ return;
+
+ addr.sun_family = AF_UNIX;
+ strncpy(addr.sun_path, path, sizeof(addr.sun_path));
+ addr.sun_path[sizeof(addr.sun_path) - 1] = 0;
+ ret = connect(sock, (struct sockaddr *)&addr, sizeof(addr));
+ if (ret < 0 && errno == ECONNREFUSED) {
+ /* Connect failed: the socket still exists but is not used
+ * anymore. Let's remove this socket now.
+ */
+ unlink(path);
+ }
+ close(sock);
+}
+
+
+/********************************
+ * 2) listener-oriented functions
+ ********************************/
+
+
+/* This function creates a UNIX socket associated to the listener. It changes
+ * the state from ASSIGNED to LISTEN. The socket is NOT enabled for polling.
+ * The return value is composed from ERR_NONE, ERR_RETRYABLE and ERR_FATAL. It
+ * may return a warning or an error message in <errmsg> if the message is at
+ * most <errlen> bytes long (including '\0'). Note that <errmsg> may be NULL if
+ * <errlen> is also zero.
+ */
+static int uxst_bind_listener(struct listener *listener, char *errmsg, int errlen)
+{
+ int fd;
+ char tempname[MAXPATHLEN];
+ char backname[MAXPATHLEN];
+ struct sockaddr_un addr;
+ const char *msg = NULL;
+ const char *path;
+ int ext, ready;
+ socklen_t ready_len;
+ int err;
+ int ret;
+
+ err = ERR_NONE;
+
+ /* ensure we never return garbage */
+ if (errlen)
+ *errmsg = 0;
+
+ if (listener->state != LI_ASSIGNED)
+ return ERR_NONE; /* already bound */
+
+ path = ((struct sockaddr_un *)&listener->addr)->sun_path;
+
+ /* if the listener already has an fd assigned, then we were offered the
+ * fd by an external process (most likely the parent), and we don't want
+ * to create a new socket. However we still want to set a few flags on
+ * the socket.
+ */
+ fd = listener->fd;
+ ext = (fd >= 0);
+ if (ext)
+ goto fd_ready;
+
+ if (path[0]) {
+ /* 1. create the socket names */
+ ret = snprintf(tempname, MAXPATHLEN, "%s.%d.tmp", path, pid);
+ if (ret < 0 || ret >= MAXPATHLEN) {
+ err |= ERR_FATAL | ERR_ALERT;
+ msg = "name too long for UNIX socket";
+ goto err_return;
+ }
+
+ ret = snprintf(backname, MAXPATHLEN, "%s.%d.bak", path, pid);
+ if (ret < 0 || ret >= MAXPATHLEN) {
+ err |= ERR_FATAL | ERR_ALERT;
+ msg = "name too long for UNIX socket";
+ goto err_return;
+ }
+
+ /* 2. clean existing orphaned entries */
+ if (unlink(tempname) < 0 && errno != ENOENT) {
+ err |= ERR_FATAL | ERR_ALERT;
+ msg = "error when trying to unlink previous UNIX socket";
+ goto err_return;
+ }
+
+ if (unlink(backname) < 0 && errno != ENOENT) {
+ err |= ERR_FATAL | ERR_ALERT;
+ msg = "error when trying to unlink previous UNIX socket";
+ goto err_return;
+ }
+
+ /* 3. backup existing socket */
+ if (link(path, backname) < 0 && errno != ENOENT) {
+ err |= ERR_FATAL | ERR_ALERT;
+ msg = "error when trying to preserve previous UNIX socket";
+ goto err_return;
+ }
+
+ strncpy(addr.sun_path, tempname, sizeof(addr.sun_path));
+ addr.sun_path[sizeof(addr.sun_path) - 1] = 0;
+ }
+ else {
+ /* first char is zero, it's an abstract socket whose address
+ * is defined by all the bytes past this zero.
+ */
+ memcpy(addr.sun_path, path, sizeof(addr.sun_path));
+ }
+ addr.sun_family = AF_UNIX;
+
+ fd = socket(PF_UNIX, SOCK_STREAM, 0);
+ if (fd < 0) {
+ err |= ERR_FATAL | ERR_ALERT;
+ msg = "cannot create UNIX socket";
+ goto err_unlink_back;
+ }
+
+ fd_ready:
+ if (fd >= global.maxsock) {
+ err |= ERR_FATAL | ERR_ALERT;
+ msg = "socket(): not enough free sockets, raise -n argument";
+ goto err_unlink_temp;
+ }
+
+ if (fcntl(fd, F_SETFL, O_NONBLOCK) == -1) {
+ err |= ERR_FATAL | ERR_ALERT;
+ msg = "cannot make UNIX socket non-blocking";
+ goto err_unlink_temp;
+ }
+
+ if (!ext && bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
+ /* note that bind() creates the socket <tempname> on the file system */
+ if (errno == EADDRINUSE) {
+ /* the old process might still own it, let's retry */
+ err |= ERR_RETRYABLE | ERR_ALERT;
+ msg = "cannot listen to socket";
+ }
+ else {
+ err |= ERR_FATAL | ERR_ALERT;
+ msg = "cannot bind UNIX socket";
+ }
+ goto err_unlink_temp;
+ }
+
+ /* <uid> and <gid> different from -1 will be used to change the socket owner.
+ * If <mode> is not 0, it will be used to restrict access to the socket.
+ * While it is known not to be portable on every OS, it's still useful
+ * where it works. We also don't change permissions on abstract sockets.
+ */
+ if (!ext && path[0] &&
+ (((listener->bind_conf->ux.uid != -1 || listener->bind_conf->ux.gid != -1) &&
+ (chown(tempname, listener->bind_conf->ux.uid, listener->bind_conf->ux.gid) == -1)) ||
+ (listener->bind_conf->ux.mode != 0 && chmod(tempname, listener->bind_conf->ux.mode) == -1))) {
+ err |= ERR_FATAL | ERR_ALERT;
+ msg = "cannot change UNIX socket ownership";
+ goto err_unlink_temp;
+ }
+
+ ready = 0;
+ ready_len = sizeof(ready);
+ if (getsockopt(fd, SOL_SOCKET, SO_ACCEPTCONN, &ready, &ready_len) == -1)
+ ready = 0;
+
+ if (!(ext && ready) && /* only listen if not already done by external process */
+ listen(fd, listener->backlog ? listener->backlog : listener->maxconn) < 0) {
+ err |= ERR_FATAL | ERR_ALERT;
+ msg = "cannot listen to UNIX socket";
+ goto err_unlink_temp;
+ }
+
+ /* Point of no return: we are ready, we'll switch the sockets. We don't
+ * fear losing the socket <path> because we have a copy of it in
+ * backname. Abstract sockets are not renamed.
+ */
+ if (!ext && path[0] && rename(tempname, path) < 0) {
+ err |= ERR_FATAL | ERR_ALERT;
+ msg = "cannot switch final and temporary UNIX sockets";
+ goto err_rename;
+ }
+
+ /* Cleanup: If we're bound to an fd inherited from the parent, we
+ * want to ensure that destroy_uxst_socket() will never remove the
+ * path, and for this we simply clear the path to the socket, which
+ * under Linux corresponds to an abstract socket.
+ */
+ if (!ext && path[0])
+ unlink(backname);
+ else
+ ((struct sockaddr_un *)&listener->addr)->sun_path[0] = 0;
+
+ /* the socket is now listening */
+ listener->fd = fd;
+ listener->state = LI_LISTEN;
+
+ /* the function for the accept() event */
+ fd_insert(fd);
+ fdtab[fd].iocb = listener->proto->accept;
+ fdtab[fd].owner = listener; /* reference the listener instead of a task */
+ return err;
+
+ err_rename:
+ ret = rename(backname, path);
+ if (ret < 0 && errno == ENOENT)
+ unlink(path);
+ err_unlink_temp:
+ if (!ext && path[0])
+ unlink(tempname);
+ close(fd);
+ err_unlink_back:
+ if (!ext && path[0])
+ unlink(backname);
+ err_return:
+ if (msg && errlen) {
+ if (!ext)
+ snprintf(errmsg, errlen, "%s [%s]", msg, path);
+ else
+ snprintf(errmsg, errlen, "%s [fd %d]", msg, fd);
+ }
+ return err;
+}
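+
+/* To illustrate the rename dance performed by uxst_bind_listener() above,
+ * here is the file-system activity for a hypothetical listener bound to
+ * "/run/hap.sock" by a process with pid 1234 (path and pid are examples):
+ *
+ *   unlink("/run/hap.sock.1234.tmp");                  // clean orphans
+ *   unlink("/run/hap.sock.1234.bak");
+ *   link("/run/hap.sock", "/run/hap.sock.1234.bak");   // back up old socket
+ *   bind(fd, "/run/hap.sock.1234.tmp");                // create new socket
+ *   listen(fd, backlog);
+ *   rename("/run/hap.sock.1234.tmp", "/run/hap.sock"); // atomic switch
+ *   unlink("/run/hap.sock.1234.bak");                  // drop the backup
+ *
+ * On any failure before the rename(), the backup still exists and is
+ * restored, so clients never observe <path> missing from the file system.
+ */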
+
+/* This function closes the UNIX sockets for the specified listener.
+ * The listener enters the LI_ASSIGNED state. It always returns ERR_NONE.
+ */
+static int uxst_unbind_listener(struct listener *listener)
+{
+ if (listener->state > LI_ASSIGNED) {
+ unbind_listener(listener);
+ destroy_uxst_socket(((struct sockaddr_un *)&listener->addr)->sun_path);
+ }
+ return ERR_NONE;
+}
+
+/* Add a listener to the list of unix stream listeners. The listener's state
+ * is automatically updated from LI_INIT to LI_ASSIGNED. The number of
+ * listeners is updated. This is the function to use to add a new listener.
+ */
+void uxst_add_listener(struct listener *listener)
+{
+ if (listener->state != LI_INIT)
+ return;
+ listener->state = LI_ASSIGNED;
+ listener->proto = &proto_unix;
+ LIST_ADDQ(&proto_unix.listeners, &listener->proto_list);
+ proto_unix.nb_listeners++;
+}
+
+/* Pause a listener. Returns < 0 in case of failure, 0 if the listener
+ * was totally stopped, or > 0 if correctly paused. Nothing is done for
+ * plain unix sockets since currently it's the new process which handles
+ * the renaming. Abstract sockets are completely unbound.
+ */
+int uxst_pause_listener(struct listener *l)
+{
+ if (((struct sockaddr_un *)&l->addr)->sun_path[0])
+ return 1;
+
+ unbind_listener(l);
+ return 0;
+}
+
+
+/*
+ * This function initiates a UNIX connection establishment to the target assigned
+ * to connection <conn> using (si->{target,addr.to}). The source address is ignored
+ * and will be selected by the system. conn->target may point either to a valid
+ * server or to a backend, depending on its type. Only OBJ_TYPE_PROXY and
+ * OBJ_TYPE_SERVER are supported. The <data> parameter is a boolean indicating
+ * whether there is data waiting to be sent, so that data write polling can be
+ * adjusted accordingly. The <delack> argument is ignored.
+ *
+ * Note that a pending send_proxy message accounts for data.
+ *
+ * It can return one of :
+ * - SF_ERR_NONE if everything's OK
+ * - SF_ERR_SRVTO if there are no more servers
+ * - SF_ERR_SRVCL if the connection was refused by the server
+ * - SF_ERR_PRXCOND if the connection has been limited by the proxy (maxconn)
+ * - SF_ERR_RESOURCE if a system resource is lacking (eg: fd limits, ports, ...)
+ * - SF_ERR_INTERNAL for any other purely internal errors
+ * Additionally, in the case of SF_ERR_RESOURCE, an emergency log will be emitted.
+ *
+ * The connection's fd is inserted only when SF_ERR_NONE is returned, otherwise
+ * it's invalid and the caller has nothing to do.
+ */
+int uxst_connect_server(struct connection *conn, int data, int delack)
+{
+ int fd;
+ struct server *srv;
+ struct proxy *be;
+
+ conn->flags = 0;
+
+ switch (obj_type(conn->target)) {
+ case OBJ_TYPE_PROXY:
+ be = objt_proxy(conn->target);
+ srv = NULL;
+ break;
+ case OBJ_TYPE_SERVER:
+ srv = objt_server(conn->target);
+ be = srv->proxy;
+ break;
+ default:
+ conn->flags |= CO_FL_ERROR;
+ return SF_ERR_INTERNAL;
+ }
+
+ if ((fd = conn->t.sock.fd = socket(PF_UNIX, SOCK_STREAM, 0)) == -1) {
+ qfprintf(stderr, "Cannot get a server socket.\n");
+
+ if (errno == ENFILE) {
+ conn->err_code = CO_ER_SYS_FDLIM;
+ send_log(be, LOG_EMERG,
+ "Proxy %s reached system FD limit at %d. Please check system tunables.\n",
+ be->id, maxfd);
+ }
+ else if (errno == EMFILE) {
+ conn->err_code = CO_ER_PROC_FDLIM;
+ send_log(be, LOG_EMERG,
+ "Proxy %s reached process FD limit at %d. Please check 'ulimit-n' and restart.\n",
+ be->id, maxfd);
+ }
+ else if (errno == ENOBUFS || errno == ENOMEM) {
+ conn->err_code = CO_ER_SYS_MEMLIM;
+ send_log(be, LOG_EMERG,
+ "Proxy %s reached system memory limit at %d sockets. Please check system tunables.\n",
+ be->id, maxfd);
+ }
+ else if (errno == EAFNOSUPPORT || errno == EPROTONOSUPPORT) {
+ conn->err_code = CO_ER_NOPROTO;
+ }
+ else
+ conn->err_code = CO_ER_SOCK_ERR;
+
+ /* this is a resource error */
+ conn->flags |= CO_FL_ERROR;
+ return SF_ERR_RESOURCE;
+ }
+
+ if (fd >= global.maxsock) {
+ /* this is a configuration limit, not a transient system error, so
+ * warn the admin that the -n argument should be raised.
+ */
+ Alert("socket(): not enough free sockets. Raise -n argument. Giving up.\n");
+ close(fd);
+ conn->err_code = CO_ER_CONF_FDLIM;
+ conn->flags |= CO_FL_ERROR;
+ return SF_ERR_PRXCOND; /* it is a configuration limit */
+ }
+
+ if (fcntl(fd, F_SETFL, O_NONBLOCK) == -1) {
+ qfprintf(stderr,"Cannot set client socket to non blocking mode.\n");
+ close(fd);
+ conn->err_code = CO_ER_SOCK_ERR;
+ conn->flags |= CO_FL_ERROR;
+ return SF_ERR_INTERNAL;
+ }
+
+ /* if a send_proxy is there, there are data */
+ data |= conn->send_proxy_ofs;
+
+ if (global.tune.server_sndbuf)
+ setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &global.tune.server_sndbuf, sizeof(global.tune.server_sndbuf));
+
+ if (global.tune.server_rcvbuf)
+ setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &global.tune.server_rcvbuf, sizeof(global.tune.server_rcvbuf));
+
+ if (connect(fd, (struct sockaddr *)&conn->addr.to, get_addr_len(&conn->addr.to)) == -1) {
+ if (errno == EALREADY || errno == EISCONN) {
+ conn->flags &= ~CO_FL_WAIT_L4_CONN;
+ }
+ else if (errno == EINPROGRESS) {
+ conn->flags |= CO_FL_WAIT_L4_CONN;
+ }
+ else if (errno == EAGAIN || errno == EADDRINUSE || errno == EADDRNOTAVAIL) {
+ const char *msg;
+ if (errno == EAGAIN || errno == EADDRNOTAVAIL) {
+ msg = "no free ports";
+ conn->err_code = CO_ER_FREE_PORTS;
+ }
+ else {
+ msg = "local address already in use";
+ conn->err_code = CO_ER_ADDR_INUSE;
+ }
+
+ qfprintf(stderr,"Connect() failed for backend %s: %s.\n", be->id, msg);
+ close(fd);
+ send_log(be, LOG_ERR, "Connect() failed for backend %s: %s.\n", be->id, msg);
+ conn->flags |= CO_FL_ERROR;
+ return SF_ERR_RESOURCE;
+ }
+ else if (errno == ETIMEDOUT) {
+ close(fd);
+ conn->err_code = CO_ER_SOCK_ERR;
+ conn->flags |= CO_FL_ERROR;
+ return SF_ERR_SRVTO;
+ }
+ else { // (errno == ECONNREFUSED || errno == ENETUNREACH || errno == EACCES || errno == EPERM)
+ close(fd);
+ conn->err_code = CO_ER_SOCK_ERR;
+ conn->flags |= CO_FL_ERROR;
+ return SF_ERR_SRVCL;
+ }
+ }
+ else {
+ /* connect() already succeeded, which is quite usual for unix
+ * sockets. Let's avoid a second connect() probe to complete it,
+ * but we need to ensure we'll wake up if there's no more handshake
+ * pending (eg: for health checks).
+ */
+ conn->flags &= ~CO_FL_WAIT_L4_CONN;
+ if (!(conn->flags & CO_FL_HANDSHAKE))
+ data = 1;
+ }
+
+ conn->flags |= CO_FL_ADDR_TO_SET;
+
+ /* Prepare to send a few handshakes related to the on-wire protocol. */
+ if (conn->send_proxy_ofs)
+ conn->flags |= CO_FL_SEND_PROXY;
+
+ conn_ctrl_init(conn); /* registers the FD */
+ fdtab[fd].linger_risk = 0; /* no need to disable lingering */
+ if (conn->flags & CO_FL_HANDSHAKE)
+ conn_sock_want_send(conn); /* for connect status or proxy protocol */
+
+ if (conn_xprt_init(conn) < 0) {
+ conn_force_close(conn);
+ conn->flags |= CO_FL_ERROR;
+ return SF_ERR_RESOURCE;
+ }
+
+ if (data)
+ conn_data_want_send(conn); /* prepare to send data if any */
+
+ return SF_ERR_NONE; /* connection is OK */
+}
+
+
+/********************************
+ * 3) protocol-oriented functions
+ ********************************/
+
+
+/* This function creates all UNIX sockets bound to the protocol entry <proto>.
+ * It is intended to be used as the protocol's bind_all() function.
+ * The sockets will be registered but not added to any fd_set, in order not to
+ * lose them across the fork(). A call to uxst_enable_listeners() is needed
+ * to complete initialization.
+ *
+ * The return value is composed from ERR_NONE, ERR_RETRYABLE and ERR_FATAL.
+ */
+static int uxst_bind_listeners(struct protocol *proto, char *errmsg, int errlen)
+{
+ struct listener *listener;
+ int err = ERR_NONE;
+
+ list_for_each_entry(listener, &proto->listeners, proto_list) {
+ err |= uxst_bind_listener(listener, errmsg, errlen);
+ if (err & ERR_ABORT)
+ break;
+ }
+ return err;
+}
+
+
+/* This function stops all listening UNIX sockets bound to the protocol
+ * <proto>. It does not detach them from the protocol.
+ * It always returns ERR_NONE.
+ */
+static int uxst_unbind_listeners(struct protocol *proto)
+{
+ struct listener *listener;
+
+ list_for_each_entry(listener, &proto->listeners, proto_list)
+ uxst_unbind_listener(listener);
+ return ERR_NONE;
+}
+
+/* parse the "mode" bind keyword */
+static int bind_parse_mode(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ if (!*args[cur_arg + 1]) {
+ memprintf(err, "'%s' : missing mode (octal integer expected)", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ conf->ux.mode = strtol(args[cur_arg + 1], NULL, 8);
+ return 0;
+}
+
+/* parse the "gid" bind keyword */
+static int bind_parse_gid(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ if (!*args[cur_arg + 1]) {
+ memprintf(err, "'%s' : missing value", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ conf->ux.gid = atol(args[cur_arg + 1]);
+ return 0;
+}
+
+/* parse the "group" bind keyword */
+static int bind_parse_group(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ struct group *group;
+
+ if (!*args[cur_arg + 1]) {
+ memprintf(err, "'%s' : missing group name", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ group = getgrnam(args[cur_arg + 1]);
+ if (!group) {
+ memprintf(err, "'%s' : unknown group name '%s'", args[cur_arg], args[cur_arg + 1]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ conf->ux.gid = group->gr_gid;
+ return 0;
+}
+
+/* parse the "uid" bind keyword */
+static int bind_parse_uid(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ if (!*args[cur_arg + 1]) {
+ memprintf(err, "'%s' : missing value", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ conf->ux.uid = atol(args[cur_arg + 1]);
+ return 0;
+}
+
+/* parse the "user" bind keyword */
+static int bind_parse_user(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ struct passwd *user;
+
+ if (!*args[cur_arg + 1]) {
+ memprintf(err, "'%s' : missing user name", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ user = getpwnam(args[cur_arg + 1]);
+ if (!user) {
+ memprintf(err, "'%s' : unknown user name '%s'", args[cur_arg], args[cur_arg + 1]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ conf->ux.uid = user->pw_uid;
+ return 0;
+}
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Please take care of keeping this list alphabetically sorted, doing so helps
+ * all code contributors.
+ * Optional keywords are also declared with a NULL ->parse() function so that
+ * the config parser can report an appropriate error when a known keyword was
+ * not enabled.
+ */
+static struct bind_kw_list bind_kws = { "UNIX", { }, {
+ { "gid", bind_parse_gid, 1 }, /* set the socket's gid */
+ { "group", bind_parse_group, 1 }, /* set the socket's gid from the group name */
+ { "mode", bind_parse_mode, 1 }, /* set the socket's mode (eg: 0644)*/
+ { "uid", bind_parse_uid, 1 }, /* set the socket's uid */
+ { "user", bind_parse_user, 1 }, /* set the socket's uid from the user name */
+ { NULL, NULL, 0 },
+}};
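+
+/* An illustrative configuration snippet using the UNIX bind keywords
+ * registered above (names and values are examples only):
+ *
+ *   frontend stats
+ *       bind /var/run/haproxy-admin.sock user haproxy group haproxy mode 660
+ */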
+
+/********************************
+ * 4) high-level functions
+ ********************************/
+
+__attribute__((constructor))
+static void __uxst_protocol_init(void)
+{
+ protocol_register(&proto_unix);
+ bind_register_keywords(&bind_kws);
+}
+
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Protocol registration functions.
+ *
+ * Copyright 2000-2012 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <sys/socket.h>
+
+#include <common/config.h>
+#include <common/errors.h>
+#include <common/mini-clist.h>
+#include <common/standard.h>
+
+#include <types/protocol.h>
+
+/* List head of all registered protocols */
+static struct list protocols = LIST_HEAD_INIT(protocols);
+struct protocol *__protocol_by_family[AF_MAX] = { };
+
+/* Registers the protocol <proto> */
+void protocol_register(struct protocol *proto)
+{
+ LIST_ADDQ(&protocols, &proto->list);
+ if (proto->sock_domain >= 0 && proto->sock_domain < AF_MAX)
+ __protocol_by_family[proto->sock_domain] = proto;
+}
+
+/* Unregisters the protocol <proto>. Note that all listeners must have
+ * previously been unbound.
+ */
+void protocol_unregister(struct protocol *proto)
+{
+ LIST_DEL(&proto->list);
+ LIST_INIT(&proto->list);
+}
+
+/* binds all listeners of all registered protocols. Returns a composition
+ * of ERR_NONE, ERR_RETRYABLE, ERR_FATAL.
+ */
+int protocol_bind_all(char *errmsg, int errlen)
+{
+ struct protocol *proto;
+ int err;
+
+ err = 0;
+ list_for_each_entry(proto, &protocols, list) {
+ if (proto->bind_all) {
+ err |= proto->bind_all(proto, errmsg, errlen);
+ if ( err & ERR_ABORT )
+ break;
+ }
+ }
+ return err;
+}
+
+/* unbinds all listeners of all registered protocols. They are also closed.
+ * This must be performed before calling exit() in order to get a chance to
+ * remove file-system based sockets and pipes.
+ * Returns a composition of ERR_NONE, ERR_RETRYABLE, ERR_FATAL, ERR_ABORT.
+ */
+int protocol_unbind_all(void)
+{
+ struct protocol *proto;
+ int err;
+
+ err = 0;
+ list_for_each_entry(proto, &protocols, list) {
+ if (proto->unbind_all) {
+ err |= proto->unbind_all(proto);
+ }
+ }
+ return err;
+}
+
+/* enables all listeners of all registered protocols. This is intended to be
+ * used after a fork() to enable reading on all file descriptors. Returns a
+ * composition of ERR_NONE, ERR_RETRYABLE, ERR_FATAL.
+ */
+int protocol_enable_all(void)
+{
+ struct protocol *proto;
+ int err;
+
+ err = 0;
+ list_for_each_entry(proto, &protocols, list) {
+ if (proto->enable_all) {
+ err |= proto->enable_all(proto);
+ }
+ }
+ return err;
+}
+
+/* disables all listeners of all registered protocols. This may be used before
+ * a fork() to avoid duplicating poll lists. Returns a composition of ERR_NONE,
+ * ERR_RETRYABLE, ERR_FATAL.
+ */
+int protocol_disable_all(void)
+{
+ struct protocol *proto;
+ int err;
+
+ err = 0;
+ list_for_each_entry(proto, &protocols, list) {
+ if (proto->disable_all) {
+ err |= proto->disable_all(proto);
+ }
+ }
+ return err;
+}
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Proxy variables and functions.
+ *
+ * Copyright 2000-2009 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <fcntl.h>
+#include <unistd.h>
+#include <string.h>
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <sys/stat.h>
+
+#include <common/defaults.h>
+#include <common/cfgparse.h>
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/errors.h>
+#include <common/memory.h>
+#include <common/time.h>
+
+#include <eb32tree.h>
+#include <ebistree.h>
+
+#include <types/capture.h>
+#include <types/global.h>
+#include <types/obj_type.h>
+#include <types/peers.h>
+
+#include <proto/backend.h>
+#include <proto/fd.h>
+#include <proto/hdr_idx.h>
+#include <proto/listener.h>
+#include <proto/log.h>
+#include <proto/proto_tcp.h>
+#include <proto/proto_http.h>
+#include <proto/proxy.h>
+#include <proto/signal.h>
+#include <proto/stream.h>
+#include <proto/task.h>
+
+
+int listeners; /* # of proxy listeners, set by cfgparse */
+struct proxy *proxy = NULL; /* list of all existing proxies */
+struct eb_root used_proxy_id = EB_ROOT; /* list of proxy IDs in use */
+struct eb_root proxy_by_name = EB_ROOT; /* tree of proxies sorted by name */
+unsigned int error_snapshot_id = 0; /* global ID assigned to each error then incremented */
+
+/*
+ * This function returns a string containing a name describing capabilities to
+ * report comprehensible error messages. Specifically, it will return the words
+ * "frontend", "backend", "ruleset" when appropriate, or "proxy" for all other
+ * cases including the proxies declared in "listen" mode.
+ */
+const char *proxy_cap_str(int cap)
+{
+ if ((cap & PR_CAP_LISTEN) != PR_CAP_LISTEN) {
+ if (cap & PR_CAP_FE)
+ return "frontend";
+ else if (cap & PR_CAP_BE)
+ return "backend";
+ else if (cap & PR_CAP_RS)
+ return "ruleset";
+ }
+ return "proxy";
+}
+
+/*
+ * This function returns a string containing the mode of the proxy in a format
+ * suitable for error messages.
+ */
+const char *proxy_mode_str(int mode) {
+
+ if (mode == PR_MODE_TCP)
+ return "tcp";
+ else if (mode == PR_MODE_HTTP)
+ return "http";
+ else if (mode == PR_MODE_HEALTH)
+ return "health";
+ else
+ return "unknown";
+}
+
+/*
+ * This function scans the list of backends and servers to retrieve the first
+ * backend and the first server with the given names, and sets them in both
+ * parameters. It returns non-zero if both are found, or zero otherwise,
+ * setting the ones it did not find to NULL. If a NULL pointer is passed for the
+ * backend, only the pointer to the server will be updated.
+ */
+int get_backend_server(const char *bk_name, const char *sv_name,
+ struct proxy **bk, struct server **sv)
+{
+ struct proxy *p;
+ struct server *s;
+ int sid;
+
+ *sv = NULL;
+
+ sid = -1;
+ if (*sv_name == '#')
+ sid = atoi(sv_name + 1);
+
+ p = proxy_be_by_name(bk_name);
+ if (bk)
+ *bk = p;
+ if (!p)
+ return 0;
+
+ for (s = p->srv; s; s = s->next)
+ if ((sid >= 0 && s->puid == sid) ||
+ (sid < 0 && strcmp(s->id, sv_name) == 0))
+ break;
+ *sv = s;
+ if (!s)
+ return 0;
+ return 1;
+}
+
+/* This function parses a "timeout" statement in a proxy section. It returns
+ * -1 if there is any error, 1 for a warning, otherwise zero. If it does not
+ * return zero, it will write an error or warning message into a preallocated
+ * buffer returned at <err>. No trailing newline is written. The function must
+ * be called with <args> pointing to the first command line word, with <proxy>
+ * pointing to the proxy being parsed, and <defpx> to the default proxy or NULL.
+ * As a special case for compatibility with older configs, it also accepts
+ * "{cli|srv|con}timeout" in args[0].
+ */
+static int proxy_parse_timeout(char **args, int section, struct proxy *proxy,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ unsigned timeout;
+ int retval, cap;
+ const char *res, *name;
+ int *tv = NULL;
+ int *td = NULL;
+ int warn = 0;
+
+ retval = 0;
+
+ /* simply skip "timeout" but remain compatible with old form */
+ if (strcmp(args[0], "timeout") == 0)
+ args++;
+
+ name = args[0];
+ if (!strcmp(args[0], "client") || (!strcmp(args[0], "clitimeout") && (warn = WARN_CLITO_DEPRECATED))) {
+ name = "client";
+ tv = &proxy->timeout.client;
+ td = &defpx->timeout.client;
+ cap = PR_CAP_FE;
+ } else if (!strcmp(args[0], "tarpit")) {
+ tv = &proxy->timeout.tarpit;
+ td = &defpx->timeout.tarpit;
+ cap = PR_CAP_FE | PR_CAP_BE;
+ } else if (!strcmp(args[0], "http-keep-alive")) {
+ tv = &proxy->timeout.httpka;
+ td = &defpx->timeout.httpka;
+ cap = PR_CAP_FE | PR_CAP_BE;
+ } else if (!strcmp(args[0], "http-request")) {
+ tv = &proxy->timeout.httpreq;
+ td = &defpx->timeout.httpreq;
+ cap = PR_CAP_FE | PR_CAP_BE;
+ } else if (!strcmp(args[0], "server") || (!strcmp(args[0], "srvtimeout") && (warn = WARN_SRVTO_DEPRECATED))) {
+ name = "server";
+ tv = &proxy->timeout.server;
+ td = &defpx->timeout.server;
+ cap = PR_CAP_BE;
+ } else if (!strcmp(args[0], "connect") || (!strcmp(args[0], "contimeout") && (warn = WARN_CONTO_DEPRECATED))) {
+ name = "connect";
+ tv = &proxy->timeout.connect;
+ td = &defpx->timeout.connect;
+ cap = PR_CAP_BE;
+ } else if (!strcmp(args[0], "check")) {
+ tv = &proxy->timeout.check;
+ td = &defpx->timeout.check;
+ cap = PR_CAP_BE;
+ } else if (!strcmp(args[0], "queue")) {
+ tv = &proxy->timeout.queue;
+ td = &defpx->timeout.queue;
+ cap = PR_CAP_BE;
+ } else if (!strcmp(args[0], "tunnel")) {
+ tv = &proxy->timeout.tunnel;
+ td = &defpx->timeout.tunnel;
+ cap = PR_CAP_BE;
+ } else if (!strcmp(args[0], "client-fin")) {
+ tv = &proxy->timeout.clientfin;
+ td = &defpx->timeout.clientfin;
+ cap = PR_CAP_FE;
+ } else if (!strcmp(args[0], "server-fin")) {
+ tv = &proxy->timeout.serverfin;
+ td = &defpx->timeout.serverfin;
+ cap = PR_CAP_BE;
+ } else {
+ memprintf(err,
+ "'timeout' supports 'client', 'server', 'connect', 'check', "
+ "'queue', 'http-keep-alive', 'http-request', 'tunnel', 'tarpit', "
+ "'client-fin' and 'server-fin' (got '%s')",
+ args[0]);
+ return -1;
+ }
+
+ if (*args[1] == 0) {
+ memprintf(err, "'timeout %s' expects an integer value (in milliseconds)", name);
+ return -1;
+ }
+
+ res = parse_time_err(args[1], &timeout, TIME_UNIT_MS);
+ if (res) {
+ memprintf(err, "unexpected character '%c' in 'timeout %s'", *res, name);
+ return -1;
+ }
+
+ if (!(proxy->cap & cap)) {
+ memprintf(err, "'timeout %s' will be ignored because %s '%s' has no %s capability",
+ name, proxy_type_str(proxy), proxy->id,
+ (cap & PR_CAP_BE) ? "backend" : "frontend");
+ retval = 1;
+ }
+ else if (defpx && *tv != *td) {
+ memprintf(err, "overwriting 'timeout %s' which was already specified", name);
+ retval = 1;
+ }
+ else if (warn) {
+ if (!already_warned(warn)) {
+ memprintf(err, "the '%s' directive is now deprecated in favor of 'timeout %s', and will not be supported in future versions.",
+ args[0], name);
+ retval = 1;
+ }
+ }
+
+ if (*args[2] != 0) {
+ memprintf(err, "'timeout %s' : unexpected extra argument '%s' after value '%s'.", name, args[2], args[1]);
+ retval = -1;
+ }
+
+ *tv = MS_TO_TICKS(timeout);
+ return retval;
+}
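+
+/* Illustrative configuration lines accepted by the parser above (values are
+ * examples only):
+ *
+ *   timeout client  30s     # requires the frontend capability
+ *   timeout connect 5s      # requires the backend capability
+ *   clitimeout 30000        # deprecated form, emits a deprecation warning
+ */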
+
+/* This function parses a "rate-limit" statement in a proxy section. It returns
+ * -1 if there is any error, 1 for a warning, otherwise zero. If it does not
+ * return zero, it will write an error or warning message into a preallocated
+ * buffer returned at <err>. The function must be called with <args> pointing
+ * to the first command line word, with <proxy> pointing to the proxy being
+ * parsed, and <defpx> to the default proxy or NULL.
+ */
+static int proxy_parse_rate_limit(char **args, int section, struct proxy *proxy,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ int retval, cap;
+ char *res;
+ unsigned int *tv = NULL;
+ unsigned int *td = NULL;
+ unsigned int val;
+
+ retval = 0;
+
+ if (strcmp(args[1], "sessions") == 0) {
+ tv = &proxy->fe_sps_lim;
+ td = &defpx->fe_sps_lim;
+ cap = PR_CAP_FE;
+ }
+ else {
+ memprintf(err, "'%s' only supports 'sessions' (got '%s')", args[0], args[1]);
+ return -1;
+ }
+
+ if (*args[2] == 0) {
+ memprintf(err, "'%s %s' expects an integer value (in sessions/second)", args[0], args[1]);
+ return -1;
+ }
+
+ val = strtoul(args[2], &res, 0);
+ if (*res) {
+ memprintf(err, "'%s %s' : unexpected character '%c' in integer value '%s'", args[0], args[1], *res, args[2]);
+ return -1;
+ }
+
+ if (!(proxy->cap & cap)) {
+ memprintf(err, "%s %s will be ignored because %s '%s' has no %s capability",
+ args[0], args[1], proxy_type_str(proxy), proxy->id,
+ (cap & PR_CAP_BE) ? "backend" : "frontend");
+ retval = 1;
+ }
+ else if (defpx && *tv != *td) {
+ memprintf(err, "overwriting %s %s which was already specified", args[0], args[1]);
+ retval = 1;
+ }
+
+ *tv = val;
+ return retval;
+}
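+
+/* Illustrative configuration line accepted by the parser above (the value is
+ * an example only):
+ *
+ *   frontend fe1
+ *       rate-limit sessions 100   # at most 100 new sessions per second
+ */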
+
+/* This function parses a "max-keep-alive-queue" statement in a proxy section.
+ * It returns -1 if there is any error, 1 for a warning, otherwise zero. If it
+ * does not return zero, it will write an error or warning message into a
+ * preallocated buffer returned at <err>. The function must be called with
+ * <args> pointing to the first command line word, with <proxy> pointing to
+ * the proxy being parsed, and <defpx> to the default proxy or NULL.
+ */
+static int proxy_parse_max_ka_queue(char **args, int section, struct proxy *proxy,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ int retval;
+ char *res;
+ unsigned int val;
+
+ retval = 0;
+
+ if (*args[1] == 0) {
+ memprintf(err, "'%s' expects an integer value (or -1 to disable)", args[0]);
+ return -1;
+ }
+
+ val = strtol(args[1], &res, 0);
+ if (*res) {
+ memprintf(err, "'%s' : unexpected character '%c' in integer value '%s'", args[0], *res, args[1]);
+ return -1;
+ }
+
+ if (!(proxy->cap & PR_CAP_BE)) {
+ memprintf(err, "%s will be ignored because %s '%s' has no backend capability",
+ args[0], proxy_type_str(proxy), proxy->id);
+ retval = 1;
+ }
+
+ /* we store <val+1> so that a user-facing value of -1 is stored as zero (default) */
+ proxy->max_ka_queue = val + 1;
+ return retval;
+}
+
+/* This function parses a "declare" statement in a proxy section. It returns -1
+ * if there is any error, 1 for warning, otherwise 0. If it does not return zero,
+ * it will write an error or warning message into a preallocated buffer returned
+ * at <err>. The function must be called with <args> pointing to the first command
+ * line word, with <proxy> pointing to the proxy being parsed, and <defpx> to the
+ * default proxy or NULL.
+ */
+static int proxy_parse_declare(char **args, int section, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ /* The capture keyword cannot be declared in a default proxy. */
+ if (curpx == defpx) {
+ memprintf(err, "'%s' not available in default section", args[0]);
+ return -1;
+ }
+
+ /* The capture keyword is only available in frontends. */
+ if (!(curpx->cap & PR_CAP_FE)) {
+ memprintf(err, "'%s' only available in frontend or listen section", args[0]);
+ return -1;
+ }
+
+ /* Check mandatory second keyword. */
+ if (!args[1] || !*args[1]) {
+ memprintf(err, "'%s' needs a second keyword that specifies the type of declaration ('capture')", args[0]);
+ return -1;
+ }
+
+ /* Currently, "declare" can only declare capture slots, but in the
+ * future it may declare maps or variables as well. So this section
+ * checks the second keyword and switches accordingly.
+ */
+ if (strcmp(args[1], "capture") == 0) {
+ char *error = NULL;
+ long len;
+ struct cap_hdr *hdr;
+
+ /* Check the next keyword. */
+ if (!args[2] || !*args[2] ||
+ (strcmp(args[2], "response") != 0 &&
+ strcmp(args[2], "request") != 0)) {
+ memprintf(err, "'%s %s' requires a direction ('request' or 'response')", args[0], args[1]);
+ return -1;
+ }
+
+ /* Check the 'len' keyword. */
+ if (!args[3] || !*args[3] || strcmp(args[3], "len") != 0) {
+ memprintf(err, "'%s %s' requires a capture length ('len')", args[0], args[1]);
+ return -1;
+ }
+
+ /* Check the length value. */
+ if (!args[4] || !*args[4]) {
+ memprintf(err, "'%s %s': 'len' requires a numeric value that represents the "
+ "capture length",
+ args[0], args[1]);
+ return -1;
+ }
+
+ /* convert the length value. */
+ len = strtol(args[4], &error, 10);
+ if (*error != '\0') {
+ memprintf(err, "'%s %s': cannot parse the length '%s'.",
+ args[0], args[1], args[4]);
+ return -1;
+ }
+
+ /* check length. */
+ if (len <= 0) {
+ memprintf(err, "length must be > 0");
+ return -1;
+ }
+
+ /* register the capture. */
+ hdr = calloc(1, sizeof(struct cap_hdr));
+ hdr->name = NULL; /* not a header capture */
+ hdr->namelen = 0;
+ hdr->len = len;
+ hdr->pool = create_pool("caphdr", hdr->len + 1, MEM_F_SHARED);
+
+ if (strcmp(args[2], "request") == 0) {
+ hdr->next = curpx->req_cap;
+ hdr->index = curpx->nb_req_cap++;
+ curpx->req_cap = hdr;
+ }
+ if (strcmp(args[2], "response") == 0) {
+ hdr->next = curpx->rsp_cap;
+ hdr->index = curpx->nb_rsp_cap++;
+ curpx->rsp_cap = hdr;
+ }
+ return 0;
+ }
+ else {
+ memprintf(err, "unknown declaration type '%s' (supports 'capture')", args[1]);
+ return -1;
+ }
+}
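As a usage sketch (not part of this patch), the parser above accepts configuration lines of the following shape; the frontend name and lengths here are illustrative, only the `declare capture <direction> len <n>` structure is what the code enforces:

```
frontend fe_example
    declare capture request len 64
    declare capture response len 32
```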
+
+/* This function inserts proxy <px> into the tree of known proxies. The proxy's
+ * name is used as the storing key so it must already have been initialized.
+ */
+void proxy_store_name(struct proxy *px)
+{
+ px->conf.by_name.key = px->id;
+ ebis_insert(&proxy_by_name, &px->conf.by_name);
+}
+
+/* Returns a pointer to the first proxy matching capabilities <cap> and id
+ * <id>. NULL is returned if no match is found. If <table> is non-zero, it
+ * only considers proxies having a table.
+ */
+struct proxy *proxy_find_by_id(int id, int cap, int table)
+{
+ struct eb32_node *n;
+
+ for (n = eb32_lookup(&used_proxy_id, id); n; n = eb32_next(n)) {
+ struct proxy *px = container_of(n, struct proxy, conf.id);
+
+ if (px->uuid != id)
+ break;
+
+ if ((px->cap & cap) != cap)
+ continue;
+
+ if (table && !px->table.size)
+ continue;
+
+ return px;
+ }
+ return NULL;
+}
+
+/* Returns a pointer to the first proxy matching either name <name>, or id
+ * <name> if <name> begins with a '#'. NULL is returned if no match is found.
+ * If <table> is non-zero, it only considers proxies having a table.
+ */
+struct proxy *proxy_find_by_name(const char *name, int cap, int table)
+{
+ struct proxy *curproxy;
+
+ if (*name == '#') {
+ curproxy = proxy_find_by_id(atoi(name + 1), cap, table);
+ if (curproxy)
+ return curproxy;
+ }
+ else {
+ struct ebpt_node *node;
+
+ for (node = ebis_lookup(&proxy_by_name, name); node; node = ebpt_next(node)) {
+ curproxy = container_of(node, struct proxy, conf.by_name);
+
+ if (strcmp(curproxy->id, name) != 0)
+ break;
+
+ if ((curproxy->cap & cap) != cap)
+ continue;
+
+ if (table && !curproxy->table.size)
+ continue;
+
+ return curproxy;
+ }
+ }
+ return NULL;
+}
+
+/* Finds the best match for a proxy with capabilities <cap>, name <name> and id
+ * <id>. At most one of <id> or <name> may be different provided that <cap> is
+ * valid. Either <id> or <name> may be left unspecified (0). The purpose is to
+ * find a proxy based on some information from a previous configuration, across
+ * reloads or during information exchange between peers.
+ *
+ * Names are looked up first if present, then IDs are compared if present. In
+ * case of an inexact match whatever is forced in the configuration has
+ * precedence in the following order :
+ * - 1) forced ID (proves a renaming / change of proxy type)
+ * - 2) proxy name+type (may indicate a move if ID differs)
+ * - 3) automatic ID+type (may indicate a renaming)
+ *
+ * Depending on what is found, we can end up in the following situations :
+ *
+ * name id cap | possible causes
+ * -------------+-----------------
+ * -- -- -- | nothing found
+ * -- -- ok | nothing found
+ * -- ok -- | proxy deleted, ID points to next one
+ * -- ok ok | proxy renamed, or deleted with ID pointing to next one
+ * ok -- -- | proxy deleted, but other half with same name still here (before)
+ * ok -- ok | proxy's ID changed (proxy moved in the config file)
+ * ok ok -- | proxy deleted, but other half with same name still here (after)
+ * ok ok ok | perfect match
+ *
+ * Upon return if <diff> is not NULL, it is zeroed then filled with up to 3 bits :
+ * - PR_FBM_MISMATCH_ID : proxy was found but ID differs
+ * (and ID was not zero)
+ * - PR_FBM_MISMATCH_NAME : proxy was found by ID but name differs
+ * (and name was not NULL)
+ * - PR_FBM_MISMATCH_PROXYTYPE : a proxy of different type was found with
+ * the same name and/or id
+ *
+ * Only a valid proxy is returned. If capabilities do not match, NULL is
+ * returned. The caller can check <diff> to report detailed warnings / errors,
+ * and decide whether or not to use what was found.
+ */
+struct proxy *proxy_find_best_match(int cap, const char *name, int id, int *diff)
+{
+ struct proxy *byname;
+ struct proxy *byid;
+
+ if (!name && !id)
+ return NULL;
+
+ if (diff)
+ *diff = 0;
+
+ byname = byid = NULL;
+
+ if (name) {
+ byname = proxy_find_by_name(name, cap, 0);
+ if (byname && (!id || byname->uuid == id))
+ return byname;
+ }
+
+ /* remaining possibilities :
+ * - name not set
+ * - name set but not found
+ * - name found, but ID doesn't match.
+ */
+ if (id) {
+ byid = proxy_find_by_id(id, cap, 0);
+ if (byid) {
+ if (byname) {
+ /* id+type found, name+type found, but not all 3.
+ * ID wins only if forced, otherwise name wins.
+ */
+ if (byid->options & PR_O_FORCED_ID) {
+ if (diff)
+ *diff |= PR_FBM_MISMATCH_NAME;
+ return byid;
+ }
+ else {
+ if (diff)
+ *diff |= PR_FBM_MISMATCH_ID;
+ return byname;
+ }
+ }
+
+ /* remaining possibilities :
+ * - name not set
+ * - name set but not found
+ */
+ if (name && diff)
+ *diff |= PR_FBM_MISMATCH_NAME;
+ return byid;
+ }
+
+ /* ID not found */
+ if (byname) {
+ if (diff)
+ *diff |= PR_FBM_MISMATCH_ID;
+ return byname;
+ }
+ }
+
+ /* All remaining possibilities will lead to NULL. If we can report more
+ * detailed information to the caller about changed types and/or name,
+ * we'll do it. For example, we could detect that "listen foo" was
+ * split into "frontend foo_ft" and "backend foo_bk" if IDs are forced.
+ * - name not set, ID not found
+ * - name not found, ID not set
+ * - name not found, ID not found
+ */
+ if (!diff)
+ return NULL;
+
+ if (name) {
+ byname = proxy_find_by_name(name, 0, 0);
+ if (byname && (!id || byname->uuid == id))
+ *diff |= PR_FBM_MISMATCH_PROXYTYPE;
+ }
+
+ if (id) {
+ byid = proxy_find_by_id(id, 0, 0);
+ if (byid) {
+ if (!name)
+ *diff |= PR_FBM_MISMATCH_PROXYTYPE; /* only type changed */
+ else if (byid->options & PR_O_FORCED_ID)
+ *diff |= PR_FBM_MISMATCH_NAME | PR_FBM_MISMATCH_PROXYTYPE; /* name and type changed */
+ /* otherwise it's a different proxy that was returned */
+ }
+ }
+ return NULL;
+}
+
+/*
+ * This function finds the server with the matching name within the selected
+ * proxy. It also checks whether there are multiple servers with the requested
+ * name, as this often leads to unexpected situations.
+ */
+
+struct server *findserver(const struct proxy *px, const char *name) {
+
+ struct server *cursrv, *target = NULL;
+
+ if (!px)
+ return NULL;
+
+ for (cursrv = px->srv; cursrv; cursrv = cursrv->next) {
+ if (strcmp(cursrv->id, name))
+ continue;
+
+ if (!target) {
+ target = cursrv;
+ continue;
+ }
+
+ Alert("Refusing to use duplicated server '%s' found in proxy: %s!\n",
+ name, px->id);
+
+ return NULL;
+ }
+
+ return target;
+}
+
+/* This function checks that the designated proxy has no HTTP directives
+ * enabled. It will output a warning for each one found, and will fix some of
+ * them. It returns the number of fatal errors encountered. This should be
+ * called at the end of the configuration parsing if the proxy is not in HTTP
+ * mode.
+ */
+int proxy_cfg_ensure_no_http(struct proxy *curproxy)
+{
+ if (curproxy->cookie_name != NULL) {
+ Warning("config : cookie will be ignored for %s '%s' (needs 'mode http').\n",
+ proxy_type_str(curproxy), curproxy->id);
+ }
+ if (curproxy->rsp_exp != NULL) {
+ Warning("config : server regular expressions will be ignored for %s '%s' (needs 'mode http').\n",
+ proxy_type_str(curproxy), curproxy->id);
+ }
+ if (curproxy->req_exp != NULL) {
+ Warning("config : client regular expressions will be ignored for %s '%s' (needs 'mode http').\n",
+ proxy_type_str(curproxy), curproxy->id);
+ }
+ if (curproxy->monitor_uri != NULL) {
+ Warning("config : monitor-uri will be ignored for %s '%s' (needs 'mode http').\n",
+ proxy_type_str(curproxy), curproxy->id);
+ }
+ if (curproxy->lbprm.algo & BE_LB_NEED_HTTP) {
+ curproxy->lbprm.algo &= ~BE_LB_ALGO;
+ curproxy->lbprm.algo |= BE_LB_ALGO_RR;
+ Warning("config : Layer 7 hash not possible for %s '%s' (needs 'mode http'). Falling back to round robin.\n",
+ proxy_type_str(curproxy), curproxy->id);
+ }
+ if (curproxy->to_log & (LW_REQ | LW_RESP)) {
+ curproxy->to_log &= ~(LW_REQ | LW_RESP);
+ Warning("parsing [%s:%d] : HTTP log/header format not usable with %s '%s' (needs 'mode http').\n",
+ curproxy->conf.lfs_file, curproxy->conf.lfs_line,
+ proxy_type_str(curproxy), curproxy->id);
+ }
+ if (curproxy->conf.logformat_string == default_http_log_format ||
+ curproxy->conf.logformat_string == clf_http_log_format) {
+ /* Note: we don't change the directive's file:line number */
+ curproxy->conf.logformat_string = default_tcp_log_format;
+ Warning("parsing [%s:%d] : 'option httplog' not usable with %s '%s' (needs 'mode http'). Falling back to 'option tcplog'.\n",
+ curproxy->conf.lfs_file, curproxy->conf.lfs_line,
+ proxy_type_str(curproxy), curproxy->id);
+ }
+
+ return 0;
+}
+
+/* Perform the most basic initialization of a proxy :
+ * memset(), list_init(*), reset_timeouts(*).
+ * Any new proxy or peer should be initialized via this function.
+ */
+void init_new_proxy(struct proxy *p)
+{
+ memset(p, 0, sizeof(struct proxy));
+ p->obj_type = OBJ_TYPE_PROXY;
+ LIST_INIT(&p->pendconns);
+ LIST_INIT(&p->acl);
+ LIST_INIT(&p->http_req_rules);
+ LIST_INIT(&p->http_res_rules);
+ LIST_INIT(&p->block_rules);
+ LIST_INIT(&p->redirect_rules);
+ LIST_INIT(&p->mon_fail_cond);
+ LIST_INIT(&p->switching_rules);
+ LIST_INIT(&p->server_rules);
+ LIST_INIT(&p->persist_rules);
+ LIST_INIT(&p->sticking_rules);
+ LIST_INIT(&p->storersp_rules);
+ LIST_INIT(&p->tcp_req.inspect_rules);
+ LIST_INIT(&p->tcp_rep.inspect_rules);
+ LIST_INIT(&p->tcp_req.l4_rules);
+ LIST_INIT(&p->req_add);
+ LIST_INIT(&p->rsp_add);
+ LIST_INIT(&p->listener_queue);
+ LIST_INIT(&p->logsrvs);
+ LIST_INIT(&p->logformat);
+ LIST_INIT(&p->logformat_sd);
+ LIST_INIT(&p->format_unique_id);
+ LIST_INIT(&p->conf.bind);
+ LIST_INIT(&p->conf.listeners);
+ LIST_INIT(&p->conf.args.list);
+ LIST_INIT(&p->tcpcheck_rules);
+
+ /* Timeouts are defined as -1 */
+ proxy_reset_timeouts(p);
+ p->tcp_rep.inspect_delay = TICK_ETERNITY;
+
+ /* initial uuid is unassigned (-1) */
+ p->uuid = -1;
+}
+
+/*
+ * This function creates all proxy sockets. It should be done very early,
+ * typically before privileges are dropped. The sockets will be registered
+ * but not added to any fd_set, in order not to lose them across the fork().
+ * The proxies also start in READY state because they all have their listeners
+ * bound.
+ *
+ * Its return value is composed from ERR_NONE, ERR_RETRYABLE and ERR_FATAL.
+ * Retryable errors will only be printed if <verbose> is not zero.
+ */
+int start_proxies(int verbose)
+{
+ struct proxy *curproxy;
+ struct listener *listener;
+ int lerr, err = ERR_NONE;
+ int pxerr;
+ char msg[100];
+
+ for (curproxy = proxy; curproxy != NULL; curproxy = curproxy->next) {
+ if (curproxy->state != PR_STNEW)
+ continue; /* already initialized */
+
+ pxerr = 0;
+ list_for_each_entry(listener, &curproxy->conf.listeners, by_fe) {
+ if (listener->state != LI_ASSIGNED)
+ continue; /* already started */
+
+ lerr = listener->proto->bind(listener, msg, sizeof(msg));
+
+ /* errors are reported if <verbose> is set or if they are fatal */
+ if (verbose || (lerr & (ERR_FATAL | ERR_ABORT))) {
+ if (lerr & ERR_ALERT)
+ Alert("Starting %s %s: %s\n",
+ proxy_type_str(curproxy), curproxy->id, msg);
+ else if (lerr & ERR_WARN)
+ Warning("Starting %s %s: %s\n",
+ proxy_type_str(curproxy), curproxy->id, msg);
+ }
+
+ err |= lerr;
+ if (lerr & (ERR_ABORT | ERR_FATAL)) {
+ pxerr |= 1;
+ break;
+ }
+ else if (lerr & ERR_CODE) {
+ pxerr |= 1;
+ continue;
+ }
+ }
+
+ if (!pxerr) {
+ curproxy->state = PR_STREADY;
+ send_log(curproxy, LOG_NOTICE, "Proxy %s started.\n", curproxy->id);
+ }
+
+ if (err & ERR_ABORT)
+ break;
+ }
+
+ return err;
+}
+
+
+/*
+ * This is the proxy management task. It enables proxies when there are enough
+ * free streams, or stops them when the table is full. It is designed to be
+ * called as a task which is woken up upon stopping or when rate limiting must
+ * be enforced.
+ */
+struct task *manage_proxy(struct task *t)
+{
+ struct proxy *p = t->context;
+ int next = TICK_ETERNITY;
+ unsigned int wait;
+
+ /* We should periodically try to enable listeners waiting for a
+ * global resource here.
+ */
+
+ /* first, let's check if we need to stop the proxy */
+ if (unlikely(stopping && p->state != PR_STSTOPPED)) {
+ int t;
+ t = tick_remain(now_ms, p->stop_time);
+ if (t == 0) {
+ Warning("Proxy %s stopped (FE: %lld conns, BE: %lld conns).\n",
+ p->id, p->fe_counters.cum_conn, p->be_counters.cum_conn);
+ send_log(p, LOG_WARNING, "Proxy %s stopped (FE: %lld conns, BE: %lld conns).\n",
+ p->id, p->fe_counters.cum_conn, p->be_counters.cum_conn);
+ stop_proxy(p);
+ /* try to free more memory */
+ pool_gc2();
+ }
+ else {
+ next = tick_first(next, p->stop_time);
+ }
+ }
+
+ /* If the proxy holds a stick table, we need to purge all unused
+ * entries. These are all the ones in the table with ref_cnt == 0
+ * and all the ones in the pool used to allocate new entries. Any
+ * entry attached to an existing stream waiting for a store will
+ * be in neither list. Any entry being dumped will have ref_cnt > 0.
+ * However we protect tables that are being synced to peers.
+ */
+ if (unlikely(stopping && p->state == PR_STSTOPPED && p->table.current)) {
+ if (!p->table.syncing) {
+ stktable_trash_oldest(&p->table, p->table.current);
+ pool_gc2();
+ }
+ if (p->table.current) {
+ /* some entries still remain, let's recheck in one second */
+ next = tick_first(next, tick_add(now_ms, 1000));
+ }
+ }
+
+ /* the rest below is just for frontends */
+ if (!(p->cap & PR_CAP_FE))
+ goto out;
+
+ /* check the various reasons we may find to block the frontend */
+ if (unlikely(p->feconn >= p->maxconn)) {
+ if (p->state == PR_STREADY)
+ p->state = PR_STFULL;
+ goto out;
+ }
+
+ /* OK we have no reason to block, so let's unblock if we were blocking */
+ if (p->state == PR_STFULL)
+ p->state = PR_STREADY;
+
+ if (p->fe_sps_lim &&
+ (wait = next_event_delay(&p->fe_sess_per_sec, p->fe_sps_lim, 0))) {
+ /* we're blocking because a limit was reached on the number of
+ * requests/s on the frontend. We want to re-check ASAP, which
+ * means in 1 ms before estimated expiration date, because the
+ * timer will have settled down.
+ */
+ next = tick_first(next, tick_add(now_ms, wait));
+ goto out;
+ }
+
+ /* The proxy is not limited so we can re-enable any waiting listener */
+ if (!LIST_ISEMPTY(&p->listener_queue))
+ dequeue_all_listeners(&p->listener_queue);
+ out:
+ t->expire = next;
+ task_queue(t);
+ return t;
+}
+
+
+/*
+ * This function disables health-check servers so that the process will quickly
+ * be ignored by load balancers. Note that if a proxy was already in the PAUSED
+ * state, its grace time will not be used, since it no longer listens on its
+ * socket anyway.
+ */
+void soft_stop(void)
+{
+ struct proxy *p;
+ struct peers *prs;
+
+ stopping = 1;
+ p = proxy;
+ tv_update_date(0,1); /* else, the old time before select will be used */
+ while (p) {
+ if (p->state != PR_STSTOPPED) {
+ Warning("Stopping %s %s in %d ms.\n", proxy_cap_str(p->cap), p->id, p->grace);
+ send_log(p, LOG_WARNING, "Stopping %s %s in %d ms.\n", proxy_cap_str(p->cap), p->id, p->grace);
+ p->stop_time = tick_add(now_ms, p->grace);
+
+ /* Note: do not wake up stopped proxies' task nor their tables'
+ * tasks as these ones might point to already released entries.
+ */
+ if (p->table.size && p->table.sync_task)
+ task_wakeup(p->table.sync_task, TASK_WOKEN_MSG);
+
+ if (p->task)
+ task_wakeup(p->task, TASK_WOKEN_MSG);
+ }
+ p = p->next;
+ }
+
+ prs = peers;
+ while (prs) {
+ if (prs->peers_fe)
+ stop_proxy(prs->peers_fe);
+ prs = prs->next;
+ }
+ /* signal zero is used to broadcast the "stopping" event */
+ signal_handler(0);
+}
+
+
+/* Temporarily disables listening on all of the proxy's listeners. Upon
+ * success, the proxy enters the PR_PAUSED state. If disabling at least one
+ * listener returns an error, then the proxy state is set to PR_STERROR
+ * because we don't know how to resume from this. The function returns 0
+ * if it fails, or non-zero on success.
+ */
+int pause_proxy(struct proxy *p)
+{
+ struct listener *l;
+
+ if (!(p->cap & PR_CAP_FE) || p->state == PR_STERROR ||
+ p->state == PR_STSTOPPED || p->state == PR_STPAUSED)
+ return 1;
+
+ Warning("Pausing %s %s.\n", proxy_cap_str(p->cap), p->id);
+ send_log(p, LOG_WARNING, "Pausing %s %s.\n", proxy_cap_str(p->cap), p->id);
+
+ list_for_each_entry(l, &p->conf.listeners, by_fe) {
+ if (!pause_listener(l))
+ p->state = PR_STERROR;
+ }
+
+ if (p->state == PR_STERROR) {
+ Warning("%s %s failed to enter pause mode.\n", proxy_cap_str(p->cap), p->id);
+ send_log(p, LOG_WARNING, "%s %s failed to enter pause mode.\n", proxy_cap_str(p->cap), p->id);
+ return 0;
+ }
+
+ p->state = PR_STPAUSED;
+ return 1;
+}
+
+
+/*
+ * This function completely stops a proxy and releases its listeners. It has
+ * to be called when going down in order to release the ports so that another
+ * process may bind to them. It must also be called on disabled proxies at the
+ * end of start-up. When all listeners are closed, the proxy is set to the
+ * PR_STSTOPPED state.
+ */
+void stop_proxy(struct proxy *p)
+{
+ struct listener *l;
+
+ list_for_each_entry(l, &p->conf.listeners, by_fe) {
+ unbind_listener(l);
+ if (l->state >= LI_ASSIGNED) {
+ delete_listener(l);
+ listeners--;
+ jobs--;
+ }
+ }
+ p->state = PR_STSTOPPED;
+}
+
+/* This function resumes listening on the specified proxy. It scans all of its
+ * listeners and tries to enable them all. If any of them fails, the proxy is
+ * put back to the paused state. It returns 1 upon success, or zero if an error
+ * is encountered.
+ */
+int resume_proxy(struct proxy *p)
+{
+ struct listener *l;
+ int fail;
+
+ if (p->state != PR_STPAUSED)
+ return 1;
+
+ Warning("Enabling %s %s.\n", proxy_cap_str(p->cap), p->id);
+ send_log(p, LOG_WARNING, "Enabling %s %s.\n", proxy_cap_str(p->cap), p->id);
+
+ fail = 0;
+ list_for_each_entry(l, &p->conf.listeners, by_fe) {
+ if (!resume_listener(l)) {
+ int port;
+
+ port = get_host_port(&l->addr);
+ if (port) {
+ Warning("Port %d busy while trying to enable %s %s.\n",
+ port, proxy_cap_str(p->cap), p->id);
+ send_log(p, LOG_WARNING, "Port %d busy while trying to enable %s %s.\n",
+ port, proxy_cap_str(p->cap), p->id);
+ }
+ else {
+ Warning("Bind on socket %d busy while trying to enable %s %s.\n",
+ l->luid, proxy_cap_str(p->cap), p->id);
+ send_log(p, LOG_WARNING, "Bind on socket %d busy while trying to enable %s %s.\n",
+ l->luid, proxy_cap_str(p->cap), p->id);
+ }
+
+ /* Another port might have been enabled. Let's stop everything. */
+ fail = 1;
+ break;
+ }
+ }
+
+ p->state = PR_STREADY;
+ if (fail) {
+ pause_proxy(p);
+ return 0;
+ }
+ return 1;
+}
+
+/*
+ * This function temporarily disables listening so that another new instance
+ * can start listening. It is designed to be called upon reception of a
+ * SIGTTOU, after which either a SIGUSR1 can be sent to completely stop
+ * the proxy, or a SIGTTIN can be sent to listen again.
+ */
+void pause_proxies(void)
+{
+ int err;
+ struct proxy *p;
+ struct peers *prs;
+
+ err = 0;
+ p = proxy;
+ tv_update_date(0,1); /* else, the old time before select will be used */
+ while (p) {
+ err |= !pause_proxy(p);
+ p = p->next;
+ }
+
+ prs = peers;
+ while (prs) {
+ if (prs->peers_fe)
+ err |= !pause_proxy(prs->peers_fe);
+ prs = prs->next;
+ }
+
+ if (err) {
+ Warning("Some proxies refused to pause, performing soft stop now.\n");
+ send_log(p, LOG_WARNING, "Some proxies refused to pause, performing soft stop now.\n");
+ soft_stop();
+ }
+}
+
+
+/*
+ * This function reactivates listening. This can be used after a call to
+ * sig_pause(), for example when a new instance has failed starting up.
+ * It is designed to be called upon reception of a SIGTTIN.
+ */
+void resume_proxies(void)
+{
+ int err;
+ struct proxy *p;
+ struct peers *prs;
+
+ err = 0;
+ p = proxy;
+ tv_update_date(0,1); /* else, the old time before select will be used */
+ while (p) {
+ err |= !resume_proxy(p);
+ p = p->next;
+ }
+
+ prs = peers;
+ while (prs) {
+ if (prs->peers_fe)
+ err |= !resume_proxy(prs->peers_fe);
+ prs = prs->next;
+ }
+
+ if (err) {
+ Warning("Some proxies refused to resume, a restart is probably needed to resume safe operations.\n");
+ send_log(p, LOG_WARNING, "Some proxies refused to resume, a restart is probably needed to resume safe operations.\n");
+ }
+}
+
+/* Set current stream's backend to <be>. Nothing is done if the
+ * stream already had a backend assigned, which is indicated by
+ * s->flags & SF_BE_ASSIGNED.
+ * All flags, stats and counters which need to be updated are updated.
+ * Returns 1 if done, 0 in case of internal error, eg: lack of resource.
+ */
+int stream_set_backend(struct stream *s, struct proxy *be)
+{
+ if (s->flags & SF_BE_ASSIGNED)
+ return 1;
+ s->be = be;
+ be->beconn++;
+ if (be->beconn > be->be_counters.conn_max)
+ be->be_counters.conn_max = be->beconn;
+ proxy_inc_be_ctr(be);
+
+ /* assign new parameters to the stream from the new backend */
+ s->si[1].flags &= ~SI_FL_INDEP_STR;
+ if (be->options2 & PR_O2_INDEPSTR)
+ s->si[1].flags |= SI_FL_INDEP_STR;
+
+ /* We want to enable the backend-specific analysers except those which
+ * were already run as part of the frontend/listener. Note that it would
+ * be more reliable to store the list of analysers that have been run,
+ * but what we do here is OK for now.
+ */
+ s->req.analysers |= be->be_req_ana;
+ if (strm_li(s))
+ s->req.analysers &= ~strm_li(s)->analysers;
+
+ /* If the target backend requires HTTP processing, we have to allocate
+ * the HTTP transaction and hdr_idx if we did not have one.
+ */
+ if (unlikely(!s->txn && be->http_needed)) {
+ if (unlikely(!http_alloc_txn(s)))
+ return 0; /* not enough memory */
+
+ /* and now initialize the HTTP transaction state */
+ http_init_txn(s);
+ }
+
+ if (s->txn) {
+ if (be->options2 & PR_O2_RSPBUG_OK)
+ s->txn->rsp.err_pos = -1; /* let buggy responses pass */
+
+ /* If we chain to an HTTP backend running a different HTTP mode, we
+ * have to re-adjust the desired keep-alive/close mode to accommodate
+ * both the frontend's and the backend's modes.
+ */
+ if (strm_fe(s)->mode == PR_MODE_HTTP && be->mode == PR_MODE_HTTP &&
+ ((strm_fe(s)->options & PR_O_HTTP_MODE) != (be->options & PR_O_HTTP_MODE)))
+ http_adjust_conn_mode(s, s->txn, &s->txn->req);
+
+ /* If an LB algorithm needs to access some pre-parsed body contents,
+ * we must not start to forward anything until the connection is
+ * confirmed otherwise we'll lose the pointer to these data and
+ * prevent the hash from being doable again after a redispatch.
+ */
+ if (be->mode == PR_MODE_HTTP &&
+ (be->lbprm.algo & (BE_LB_KIND | BE_LB_PARM)) == (BE_LB_KIND_HI | BE_LB_HASH_PRM))
+ s->txn->req.flags |= HTTP_MSGF_WAIT_CONN;
+
+ /* we may request to parse a request body */
+ if ((be->options & PR_O_WREQ_BODY) &&
+ (s->txn->req.body_len || (s->txn->req.flags & HTTP_MSGF_TE_CHNK)))
+ s->req.analysers |= AN_REQ_HTTP_BODY;
+ }
+
+ s->flags |= SF_BE_ASSIGNED;
+ if (be->options2 & PR_O2_NODELAY) {
+ s->req.flags |= CF_NEVER_WAIT;
+ s->res.flags |= CF_NEVER_WAIT;
+ }
+
+ return 1;
+}
+
+static struct cfg_kw_list cfg_kws = {ILH, {
+ { CFG_LISTEN, "timeout", proxy_parse_timeout },
+ { CFG_LISTEN, "clitimeout", proxy_parse_timeout },
+ { CFG_LISTEN, "contimeout", proxy_parse_timeout },
+ { CFG_LISTEN, "srvtimeout", proxy_parse_timeout },
+ { CFG_LISTEN, "rate-limit", proxy_parse_rate_limit },
+ { CFG_LISTEN, "max-keep-alive-queue", proxy_parse_max_ka_queue },
+ { CFG_LISTEN, "declare", proxy_parse_declare },
+ { 0, NULL, NULL },
+}};
+
+__attribute__((constructor))
+static void __proxy_module_init(void)
+{
+ cfg_register_keywords(&cfg_kws);
+}
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Queue management functions.
+ *
+ * Copyright 2000-2009 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <common/config.h>
+#include <common/memory.h>
+#include <common/time.h>
+
+#include <proto/queue.h>
+#include <proto/server.h>
+#include <proto/stream.h>
+#include <proto/stream_interface.h>
+#include <proto/task.h>
+
+
+struct pool_head *pool2_pendconn;
+
+/* Perform minimal initializations; reports 0 in case of error, 1 if OK. */
+int init_pendconn()
+{
+ pool2_pendconn = create_pool("pendconn", sizeof(struct pendconn), MEM_F_SHARED);
+ return pool2_pendconn != NULL;
+}
+
+/* returns the effective dynamic maxconn for a server, considering the minconn
+ * and the proxy's usage relative to its dynamic connections limit. It is
+ * expected that 0 < s->minconn <= s->maxconn when this is called. If the
+ * server is currently warming up, the slowstart is also applied to the
+ * resulting value, which can be lower than minconn in this case, but never
+ * less than 1.
+ */
+unsigned int srv_dynamic_maxconn(const struct server *s)
+{
+ unsigned int max;
+
+ if (s->proxy->beconn >= s->proxy->fullconn)
+ /* no fullconn or proxy is full */
+ max = s->maxconn;
+ else if (s->minconn == s->maxconn)
+ /* static limit */
+ max = s->maxconn;
+ else max = MAX(s->minconn,
+ s->proxy->beconn * s->maxconn / s->proxy->fullconn);
+
+ if ((s->state == SRV_ST_STARTING) &&
+ now.tv_sec < s->last_change + s->slowstart &&
+ now.tv_sec >= s->last_change) {
+ unsigned int ratio;
+ ratio = 100 * (now.tv_sec - s->last_change) / s->slowstart;
+ max = MAX(1, max * ratio / 100);
+ }
+ return max;
+}
+
+
+/*
+ * Manages a server's connection queue. This function will try to dequeue as
+ * many pending streams as possible, and wake them up.
+ */
+void process_srv_queue(struct server *s)
+{
+ struct proxy *p = s->proxy;
+ int maxconn;
+
+ /* First, check if we can handle some connections queued at the proxy. We
+ * will take as many as we can handle.
+ */
+
+ maxconn = srv_dynamic_maxconn(s);
+ while (s->served < maxconn) {
+ struct stream *strm = pendconn_get_next_strm(s, p);
+
+ if (strm == NULL)
+ break;
+ task_wakeup(strm->task, TASK_WOKEN_RES);
+ }
+}
+
+/* Detaches the next pending connection from either a server or a proxy, and
+ * returns its associated stream. If no pending connection is found, NULL is
+ * returned. Note that neither <srv> nor <px> may be NULL.
+ * Priority is given to the oldest request in the queue if both <srv> and <px>
+ * have pending requests. This ensures that no request will be left unserved.
+ * The <px> queue is not considered if the server (or a tracked server) is not
+ * RUNNING, is disabled, or has a null weight (server going down). The <srv>
+ * queue is still considered in this case, because if some connections remain
+ * there, it means that some requests have been forced there after it was seen
+ * down (eg: due to option persist).
+ * The stream is immediately marked as "assigned", and both its <srv> and
+ * <srv_conn> are set to <srv>.
+ */
+struct stream *pendconn_get_next_strm(struct server *srv, struct proxy *px)
+{
+ struct pendconn *ps, *pp;
+ struct stream *strm;
+ struct server *rsrv;
+
+ rsrv = srv->track;
+ if (!rsrv)
+ rsrv = srv;
+
+ ps = pendconn_from_srv(srv);
+ pp = pendconn_from_px(px);
+ /* we want to get the definitive pendconn in <ps> */
+ if (!pp || !srv_is_usable(rsrv)) {
+ if (!ps)
+ return NULL;
+ } else {
+ /* pendconn exists in the proxy queue */
+ if (!ps || tv_islt(&pp->strm->logs.tv_request, &ps->strm->logs.tv_request))
+ ps = pp;
+ }
+ strm = ps->strm;
+ pendconn_free(ps);
+
+ /* we want to note that the stream has now been assigned a server */
+ strm->flags |= SF_ASSIGNED;
+ strm->target = &srv->obj_type;
+ stream_add_srv_conn(strm, srv);
+ srv->served++;
+ if (px->lbprm.server_take_conn)
+ px->lbprm.server_take_conn(srv);
+
+ return strm;
+}
+
+/* Adds the stream <strm> to the pending connection list of server <strm>->srv
+ * or to the one of <strm>->proxy if srv is NULL. All counters and back pointers
+ * are updated accordingly. Returns NULL if no memory is available, otherwise the
+ * pendconn itself. If the stream was already marked as served, its flag is
+ * cleared. It is illegal to call this function with a non-NULL strm->srv_conn.
+ */
+struct pendconn *pendconn_add(struct stream *strm)
+{
+ struct pendconn *p;
+ struct server *srv;
+
+ p = pool_alloc2(pool2_pendconn);
+ if (!p)
+ return NULL;
+
+ strm->pend_pos = p;
+ p->strm = strm;
+ p->srv = srv = objt_server(strm->target);
+
+ if (strm->flags & SF_ASSIGNED && srv) {
+ LIST_ADDQ(&srv->pendconns, &p->list);
+ srv->nbpend++;
+ strm->logs.srv_queue_size += srv->nbpend;
+ if (srv->nbpend > srv->counters.nbpend_max)
+ srv->counters.nbpend_max = srv->nbpend;
+ } else {
+ LIST_ADDQ(&strm->be->pendconns, &p->list);
+ strm->be->nbpend++;
+ strm->logs.prx_queue_size += strm->be->nbpend;
+ if (strm->be->nbpend > strm->be->be_counters.nbpend_max)
+ strm->be->be_counters.nbpend_max = strm->be->nbpend;
+ }
+ strm->be->totpend++;
+ return p;
+}
+
+/* Redistribute pending connections when a server goes down. The number of
+ * connections redistributed is returned.
+ */
+int pendconn_redistribute(struct server *s)
+{
+ struct pendconn *pc, *pc_bck;
+ int xferred = 0;
+
+ list_for_each_entry_safe(pc, pc_bck, &s->pendconns, list) {
+ struct stream *strm = pc->strm;
+
+ if ((strm->be->options & (PR_O_REDISP|PR_O_PERSIST)) == PR_O_REDISP &&
+ !(strm->flags & SF_FORCE_PRST)) {
+ /* The REDISP option was specified. We will ignore
+ * cookie and force to balance or use the dispatcher.
+ */
+
+ /* it's left to the dispatcher to choose a server */
+ strm->flags &= ~(SF_DIRECT | SF_ASSIGNED | SF_ADDR_SET);
+
+ pendconn_free(pc);
+ task_wakeup(strm->task, TASK_WOKEN_RES);
+ xferred++;
+ }
+ }
+ return xferred;
+}
+
+/* Check for pending connections at the backend, and assign some of them to
+ * the server coming up. The server's weight is checked before being assigned
+ * connections it may not be able to handle. The total number of transferred
+ * connections is returned.
+ */
+int pendconn_grab_from_px(struct server *s)
+{
+ int xferred;
+
+ if (!srv_is_usable(s))
+ return 0;
+
+ for (xferred = 0; !s->maxconn || xferred < srv_dynamic_maxconn(s); xferred++) {
+ struct stream *strm;
+ struct pendconn *p;
+
+ p = pendconn_from_px(s->proxy);
+ if (!p)
+ break;
+ p->strm->target = &s->obj_type;
+ strm = p->strm;
+ pendconn_free(p);
+ task_wakeup(strm->task, TASK_WOKEN_RES);
+ }
+ return xferred;
+}
+
+/*
+ * Detaches pending connection <p>, decreases the pending count, and frees
+ * the pending connection. The connection might have been queued to a specific
+ * server as well as to the proxy. The stream also gets marked unqueued.
+ */
+void pendconn_free(struct pendconn *p)
+{
+ LIST_DEL(&p->list);
+ p->strm->pend_pos = NULL;
+ if (p->srv)
+ p->srv->nbpend--;
+ else
+ p->strm->be->nbpend--;
+ p->strm->be->totpend--;
+ pool_free2(pool2_pendconn, p);
+}
+
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * RAW transport layer over SOCK_STREAM sockets.
+ *
+ * Copyright 2000-2012 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#define _GNU_SOURCE
+#include <errno.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#include <sys/socket.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+
+#include <netinet/tcp.h>
+
+#include <common/buffer.h>
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/debug.h>
+#include <common/standard.h>
+#include <common/ticks.h>
+#include <common/time.h>
+
+#include <proto/connection.h>
+#include <proto/fd.h>
+#include <proto/freq_ctr.h>
+#include <proto/log.h>
+#include <proto/pipe.h>
+#include <proto/raw_sock.h>
+#include <proto/stream_interface.h>
+#include <proto/task.h>
+
+#include <types/global.h>
+
+
+#if defined(CONFIG_HAP_LINUX_SPLICE)
+#include <common/splice.h>
+
+/* A pipe contains 16 segments max, and it's common to see segments of 1448 bytes
+ * because of timestamps. Use this as a hint for not looping on splice().
+ */
+#define SPLICE_FULL_HINT 16*1448
+
+/* how much data we attempt to splice at once when the buffer is configured for
+ * infinite forwarding */
+#define MAX_SPLICE_AT_ONCE (1<<30)
+
+/* Versions of splice between 2.6.25 and 2.6.27.12 were bogus and would return EAGAIN
+ * on incoming shutdowns. On these versions, we have to call recv() after such a return
+ * in order to find whether splice is OK or not. Since 2.6.27.13 we don't need to do
+ * this anymore, and we can avoid this logic by defining ASSUME_SPLICE_WORKS.
+ */
+
+/* Returns :
+ * -1 if splice() is not supported
+ * >= 0 to report the amount of spliced bytes.
+ * connection flags are updated (error, read0, wait_room, wait_data).
+ * The caller must have previously allocated the pipe.
+ */
+int raw_sock_to_pipe(struct connection *conn, struct pipe *pipe, unsigned int count)
+{
+#ifndef ASSUME_SPLICE_WORKS
+ static int splice_detects_close;
+#endif
+ int ret;
+ int retval = 0;
+
+
+ if (!conn_ctrl_ready(conn))
+ return 0;
+
+ if (!fd_recv_ready(conn->t.sock.fd))
+ return 0;
+
+ errno = 0;
+
+ /* Under Linux, if FD_POLL_HUP is set, we have reached the end.
+ * Since older splice() implementations were buggy and returned
+ * EAGAIN on end of read, let's bypass the call to splice() now.
+ */
+ if (unlikely(!(fdtab[conn->t.sock.fd].ev & FD_POLL_IN))) {
+ /* stop here if we reached the end of data */
+ if ((fdtab[conn->t.sock.fd].ev & (FD_POLL_ERR|FD_POLL_HUP)) == FD_POLL_HUP)
+ goto out_read0;
+
+ /* report error on POLL_ERR before connection establishment */
+ if ((fdtab[conn->t.sock.fd].ev & FD_POLL_ERR) && (conn->flags & CO_FL_WAIT_L4_CONN)) {
+ conn->flags |= CO_FL_ERROR | CO_FL_SOCK_RD_SH | CO_FL_SOCK_WR_SH;
+ errno = 0; /* let the caller do a getsockopt() if it wants it */
+ return retval;
+ }
+ }
+
+ while (count) {
+ if (count > MAX_SPLICE_AT_ONCE)
+ count = MAX_SPLICE_AT_ONCE;
+
+ ret = splice(conn->t.sock.fd, NULL, pipe->prod, NULL, count,
+ SPLICE_F_MOVE|SPLICE_F_NONBLOCK);
+
+ if (ret <= 0) {
+ if (ret == 0) {
+ /* connection closed. This is only detected by
+ * recent kernels (>= 2.6.27.13). If we notice
+ * it works, we store the info for later use.
+ */
+#ifndef ASSUME_SPLICE_WORKS
+ splice_detects_close = 1;
+#endif
+ goto out_read0;
+ }
+
+ if (errno == EAGAIN) {
+ /* there are three reasons for EAGAIN :
+ * - nothing in the socket buffer (standard)
+ * - pipe is full
+ * - the connection is closed (kernel < 2.6.27.13)
+ * The last case is annoying: we may or may not be able
+ * to detect it, and when we can't, we rely on the call
+ * to recv() to get a valid verdict. Telling the first
+ * two situations apart is also problematic: since we
+ * don't know if the pipe is full, we'll stop if the
+ * pipe is not empty. Anyway, we will almost always
+ * fill/empty the pipe.
+ */
+ if (pipe->data) {
+ /* always stop reading until the pipe is flushed */
+ conn->flags |= CO_FL_WAIT_ROOM;
+ break;
+ }
+
+ /* We don't know whether the connection was
+ * closed, but if we know splice detects close,
+ * then we know for sure it wasn't.
+ * However, being called upon POLLIN with an
+ * empty pipe and getting EAGAIN is suspicious
+ * enough to fall back to the normal recv()
+ * scheme, which will be able to deal with the
+ * situation.
+ */
+#ifndef ASSUME_SPLICE_WORKS
+ if (splice_detects_close)
+#endif
+ fd_cant_recv(conn->t.sock.fd); /* we know for sure that it's EAGAIN */
+ break;
+ }
+ else if (errno == ENOSYS || errno == EINVAL || errno == EBADF) {
+ /* splice not supported on this end, disable it.
+ * We can safely return -1 since there is no
+ * chance that any data has been piped yet.
+ */
+ return -1;
+ }
+ else if (errno == EINTR) {
+ /* try again */
+ continue;
+ }
+ /* here we have another error */
+ conn->flags |= CO_FL_ERROR;
+ break;
+ } /* ret <= 0 */
+
+ retval += ret;
+ pipe->data += ret;
+ count -= ret;
+
+ if (pipe->data >= SPLICE_FULL_HINT || ret >= global.tune.recv_enough) {
+ /* We've read enough of it for this time, let's stop before
+ * being asked to poll.
+ */
+ conn->flags |= CO_FL_WAIT_ROOM;
+ fd_done_recv(conn->t.sock.fd);
+ break;
+ }
+ } /* while */
+
+ if (unlikely(conn->flags & CO_FL_WAIT_L4_CONN) && retval)
+ conn->flags &= ~CO_FL_WAIT_L4_CONN;
+ return retval;
+
+ out_read0:
+ conn_sock_read0(conn);
+ conn->flags &= ~CO_FL_WAIT_L4_CONN;
+ return retval;
+}
+
+/* Send as many bytes as possible from the pipe to the connection's socket.
+ */
+int raw_sock_from_pipe(struct connection *conn, struct pipe *pipe)
+{
+ int ret, done;
+
+ if (!conn_ctrl_ready(conn))
+ return 0;
+
+ if (!fd_send_ready(conn->t.sock.fd))
+ return 0;
+
+ done = 0;
+ while (pipe->data) {
+ ret = splice(pipe->cons, NULL, conn->t.sock.fd, NULL, pipe->data,
+ SPLICE_F_MOVE|SPLICE_F_NONBLOCK);
+
+ if (ret <= 0) {
+ if (ret == 0 || errno == EAGAIN) {
+ fd_cant_send(conn->t.sock.fd);
+ break;
+ }
+ else if (errno == EINTR)
+ continue;
+
+ /* here we have another error */
+ conn->flags |= CO_FL_ERROR;
+ break;
+ }
+
+ done += ret;
+ pipe->data -= ret;
+ }
+ if (unlikely(conn->flags & CO_FL_WAIT_L4_CONN) && done)
+ conn->flags &= ~CO_FL_WAIT_L4_CONN;
+ return done;
+}
+
+#endif /* CONFIG_HAP_LINUX_SPLICE */
+
+
+/* Receive up to <count> bytes from connection <conn>'s socket and store them
+ * into buffer <buf>. Only one call to recv() is performed, unless the
+ * buffer wraps, in which case a second call may be performed. The connection's
+ * flags are updated with whatever special event is detected (error, read0,
+ * empty). The caller is responsible for taking care of those events and
+ * avoiding the call if inappropriate. The function does not call the
+ * connection's polling update function, so the caller is responsible for this.
+ * errno is cleared before starting so that the caller knows that, if it spots
+ * an error with errno still zero, the error is pending and can be retrieved
+ * via getsockopt(SO_ERROR).
+ */
+static int raw_sock_to_buf(struct connection *conn, struct buffer *buf, int count)
+{
+ int ret, done = 0;
+ int try;
+
+ if (!conn_ctrl_ready(conn))
+ return 0;
+
+ if (!fd_recv_ready(conn->t.sock.fd))
+ return 0;
+
+ errno = 0;
+
+ if (unlikely(!(fdtab[conn->t.sock.fd].ev & FD_POLL_IN))) {
+ /* stop here if we reached the end of data */
+ if ((fdtab[conn->t.sock.fd].ev & (FD_POLL_ERR|FD_POLL_HUP)) == FD_POLL_HUP)
+ goto read0;
+
+ /* report error on POLL_ERR before connection establishment */
+ if ((fdtab[conn->t.sock.fd].ev & FD_POLL_ERR) && (conn->flags & CO_FL_WAIT_L4_CONN)) {
+ conn->flags |= CO_FL_ERROR | CO_FL_SOCK_RD_SH | CO_FL_SOCK_WR_SH;
+ return done;
+ }
+ }
+
+ /* let's realign the buffer to optimize I/O */
+ if (buffer_empty(buf))
+ buf->p = buf->data;
+
+ /* read the largest possible block. For this, we perform only one call
+ * to recv() unless the buffer wraps and we exactly fill the first hunk,
+ * in which case we accept to do it once again. A new attempt is made on
+ * EINTR too.
+ */
+ while (count > 0) {
+ /* first check if we have some room after p+i */
+ try = buf->data + buf->size - (buf->p + buf->i);
+ /* otherwise continue between data and p-o */
+ if (try <= 0) {
+ try = buf->p - (buf->data + buf->o);
+ if (try <= 0)
+ break;
+ }
+ if (try > count)
+ try = count;
+
+ ret = recv(conn->t.sock.fd, bi_end(buf), try, 0);
+
+ if (ret > 0) {
+ buf->i += ret;
+ done += ret;
+ if (ret < try) {
+ /* unfortunately, on level-triggered events, POLL_HUP
+ * is generally delivered AFTER the system buffer is
+ * empty, so this one might never match.
+ */
+ if (fdtab[conn->t.sock.fd].ev & FD_POLL_HUP)
+ goto read0;
+
+ fd_done_recv(conn->t.sock.fd);
+ break;
+ }
+ count -= ret;
+ }
+ else if (ret == 0) {
+ goto read0;
+ }
+ else if (errno == EAGAIN || errno == ENOTCONN) {
+ fd_cant_recv(conn->t.sock.fd);
+ break;
+ }
+ else if (errno != EINTR) {
+ conn->flags |= CO_FL_ERROR | CO_FL_SOCK_RD_SH | CO_FL_SOCK_WR_SH;
+ break;
+ }
+ }
+
+ if (unlikely(conn->flags & CO_FL_WAIT_L4_CONN) && done)
+ conn->flags &= ~CO_FL_WAIT_L4_CONN;
+ return done;
+
+ read0:
+ conn_sock_read0(conn);
+ conn->flags &= ~CO_FL_WAIT_L4_CONN;
+
+ /* Now a final check for a possible asynchronous low-level error
+ * report. This can happen when a connection receives a reset
+ * after a shutdown, both POLL_HUP and POLL_ERR are queued, and
+ * we might have come from there by just checking POLL_HUP instead
+ * of recv()'s return value 0, so we have no way to tell there was
+ * an error without checking.
+ */
+ if (unlikely(fdtab[conn->t.sock.fd].ev & FD_POLL_ERR))
+ conn->flags |= CO_FL_ERROR | CO_FL_SOCK_RD_SH | CO_FL_SOCK_WR_SH;
+ return done;
+}
+
+
+/* Send all pending bytes from buffer <buf> to connection <conn>'s socket.
+ * <flags> may contain some CO_SFL_* flags to hint the system about other
+ * pending data for example.
+ * Only one call to send() is performed, unless the buffer wraps, in which case
+ * a second call may be performed. The connection's flags are updated with
+ * whatever special event is detected (error, empty). The caller is responsible
+ * for taking care of those events and avoiding the call if inappropriate. The
+ * function does not call the connection's polling update function, so the caller
+ * is responsible for this.
+ */
+static int raw_sock_from_buf(struct connection *conn, struct buffer *buf, int flags)
+{
+ int ret, try, done, send_flag;
+
+ if (!conn_ctrl_ready(conn))
+ return 0;
+
+ if (!fd_send_ready(conn->t.sock.fd))
+ return 0;
+
+ done = 0;
+ /* send the largest possible block. For this we perform only one call
+ * to send() unless the buffer wraps and we exactly fill the first hunk,
+ * in which case we accept to do it once again.
+ */
+ while (buf->o) {
+ try = buf->o;
+ /* outgoing data may wrap at the end */
+ if (buf->data + try > buf->p)
+ try = buf->data + try - buf->p;
+
+ send_flag = MSG_DONTWAIT | MSG_NOSIGNAL;
+ if (try < buf->o || flags & CO_SFL_MSG_MORE)
+ send_flag |= MSG_MORE;
+
+ ret = send(conn->t.sock.fd, bo_ptr(buf), try, send_flag);
+
+ if (ret > 0) {
+ buf->o -= ret;
+ done += ret;
+
+ if (likely(buffer_empty(buf)))
+ /* optimize data alignment in the buffer */
+ buf->p = buf->data;
+
+ /* if the system buffer is full, don't insist */
+ if (ret < try)
+ break;
+ }
+ else if (ret == 0 || errno == EAGAIN || errno == ENOTCONN) {
+ /* nothing written, we need to poll for write first */
+ fd_cant_send(conn->t.sock.fd);
+ break;
+ }
+ else if (errno != EINTR) {
+ conn->flags |= CO_FL_ERROR | CO_FL_SOCK_RD_SH | CO_FL_SOCK_WR_SH;
+ break;
+ }
+ }
+ if (unlikely(conn->flags & CO_FL_WAIT_L4_CONN) && done)
+ conn->flags &= ~CO_FL_WAIT_L4_CONN;
+ return done;
+}
+
+
+/* transport-layer operations for RAW sockets */
+struct xprt_ops raw_sock = {
+ .snd_buf = raw_sock_from_buf,
+ .rcv_buf = raw_sock_to_buf,
+#if defined(CONFIG_HAP_LINUX_SPLICE)
+ .rcv_pipe = raw_sock_to_pipe,
+ .snd_pipe = raw_sock_from_pipe,
+#endif
+ .shutr = NULL,
+ .shutw = NULL,
+ .close = NULL,
+};
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ Red Black Trees
+ (C) 1999 Andrea Arcangeli <andrea@suse.de>
+ (C) 2002 David Woodhouse <dwmw2@infradead.org>
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+ linux/lib/rbtree.c
+*/
+
+/*
+#include <linux/rbtree.h>
+#include <linux/module.h>
+*/
+
+#include <stdlib.h>
+#include <common/rbtree.h>
+
+static void __rb_rotate_left(struct rb_node *node, struct rb_root *root)
+{
+ struct rb_node *right = node->rb_right;
+
+ if ((node->rb_right = right->rb_left))
+ right->rb_left->rb_parent = node;
+ right->rb_left = node;
+
+ if ((right->rb_parent = node->rb_parent))
+ {
+ if (node == node->rb_parent->rb_left)
+ node->rb_parent->rb_left = right;
+ else
+ node->rb_parent->rb_right = right;
+ }
+ else
+ root->rb_node = right;
+ node->rb_parent = right;
+}
+
+static void __rb_rotate_right(struct rb_node *node, struct rb_root *root)
+{
+ struct rb_node *left = node->rb_left;
+
+ if ((node->rb_left = left->rb_right))
+ left->rb_right->rb_parent = node;
+ left->rb_right = node;
+
+ if ((left->rb_parent = node->rb_parent))
+ {
+ if (node == node->rb_parent->rb_right)
+ node->rb_parent->rb_right = left;
+ else
+ node->rb_parent->rb_left = left;
+ }
+ else
+ root->rb_node = left;
+ node->rb_parent = left;
+}
+
+void rb_insert_color(struct rb_node *node, struct rb_root *root)
+{
+ struct rb_node *parent, *gparent;
+
+ while ((parent = node->rb_parent) && parent->rb_color == RB_RED)
+ {
+ gparent = parent->rb_parent;
+
+ if (parent == gparent->rb_left)
+ {
+ {
+ register struct rb_node *uncle = gparent->rb_right;
+ if (uncle && uncle->rb_color == RB_RED)
+ {
+ uncle->rb_color = RB_BLACK;
+ parent->rb_color = RB_BLACK;
+ gparent->rb_color = RB_RED;
+ node = gparent;
+ continue;
+ }
+ }
+
+ if (parent->rb_right == node)
+ {
+ register struct rb_node *tmp;
+ __rb_rotate_left(parent, root);
+ tmp = parent;
+ parent = node;
+ node = tmp;
+ }
+
+ parent->rb_color = RB_BLACK;
+ gparent->rb_color = RB_RED;
+ __rb_rotate_right(gparent, root);
+ } else {
+ {
+ register struct rb_node *uncle = gparent->rb_left;
+ if (uncle && uncle->rb_color == RB_RED)
+ {
+ uncle->rb_color = RB_BLACK;
+ parent->rb_color = RB_BLACK;
+ gparent->rb_color = RB_RED;
+ node = gparent;
+ continue;
+ }
+ }
+
+ if (parent->rb_left == node)
+ {
+ register struct rb_node *tmp;
+ __rb_rotate_right(parent, root);
+ tmp = parent;
+ parent = node;
+ node = tmp;
+ }
+
+ parent->rb_color = RB_BLACK;
+ gparent->rb_color = RB_RED;
+ __rb_rotate_left(gparent, root);
+ }
+ }
+
+ root->rb_node->rb_color = RB_BLACK;
+}
+// EXPORT_SYMBOL(rb_insert_color);
+
+static void __rb_erase_color(struct rb_node *node, struct rb_node *parent,
+ struct rb_root *root)
+{
+ struct rb_node *other;
+
+ while ((!node || node->rb_color == RB_BLACK) && node != root->rb_node)
+ {
+ if (parent->rb_left == node)
+ {
+ other = parent->rb_right;
+ if (other->rb_color == RB_RED)
+ {
+ other->rb_color = RB_BLACK;
+ parent->rb_color = RB_RED;
+ __rb_rotate_left(parent, root);
+ other = parent->rb_right;
+ }
+ if ((!other->rb_left ||
+ other->rb_left->rb_color == RB_BLACK)
+ && (!other->rb_right ||
+ other->rb_right->rb_color == RB_BLACK))
+ {
+ other->rb_color = RB_RED;
+ node = parent;
+ parent = node->rb_parent;
+ }
+ else
+ {
+ if (!other->rb_right ||
+ other->rb_right->rb_color == RB_BLACK)
+ {
+ register struct rb_node *o_left;
+ if ((o_left = other->rb_left))
+ o_left->rb_color = RB_BLACK;
+ other->rb_color = RB_RED;
+ __rb_rotate_right(other, root);
+ other = parent->rb_right;
+ }
+ other->rb_color = parent->rb_color;
+ parent->rb_color = RB_BLACK;
+ if (other->rb_right)
+ other->rb_right->rb_color = RB_BLACK;
+ __rb_rotate_left(parent, root);
+ node = root->rb_node;
+ break;
+ }
+ }
+ else
+ {
+ other = parent->rb_left;
+ if (other->rb_color == RB_RED)
+ {
+ other->rb_color = RB_BLACK;
+ parent->rb_color = RB_RED;
+ __rb_rotate_right(parent, root);
+ other = parent->rb_left;
+ }
+ if ((!other->rb_left ||
+ other->rb_left->rb_color == RB_BLACK)
+ && (!other->rb_right ||
+ other->rb_right->rb_color == RB_BLACK))
+ {
+ other->rb_color = RB_RED;
+ node = parent;
+ parent = node->rb_parent;
+ }
+ else
+ {
+ if (!other->rb_left ||
+ other->rb_left->rb_color == RB_BLACK)
+ {
+ register struct rb_node *o_right;
+ if ((o_right = other->rb_right))
+ o_right->rb_color = RB_BLACK;
+ other->rb_color = RB_RED;
+ __rb_rotate_left(other, root);
+ other = parent->rb_left;
+ }
+ other->rb_color = parent->rb_color;
+ parent->rb_color = RB_BLACK;
+ if (other->rb_left)
+ other->rb_left->rb_color = RB_BLACK;
+ __rb_rotate_right(parent, root);
+ node = root->rb_node;
+ break;
+ }
+ }
+ }
+ if (node)
+ node->rb_color = RB_BLACK;
+}
+
+void rb_erase(struct rb_node *node, struct rb_root *root)
+{
+ struct rb_node *child, *parent;
+ int color;
+
+ if (!node->rb_left)
+ child = node->rb_right;
+ else if (!node->rb_right)
+ child = node->rb_left;
+ else
+ {
+ struct rb_node *old = node, *left;
+
+ node = node->rb_right;
+ while ((left = node->rb_left) != NULL)
+ node = left;
+ child = node->rb_right;
+ parent = node->rb_parent;
+ color = node->rb_color;
+
+ if (child)
+ child->rb_parent = parent;
+ if (parent)
+ {
+ if (parent->rb_left == node)
+ parent->rb_left = child;
+ else
+ parent->rb_right = child;
+ }
+ else
+ root->rb_node = child;
+
+ if (node->rb_parent == old)
+ parent = node;
+ node->rb_parent = old->rb_parent;
+ node->rb_color = old->rb_color;
+ node->rb_right = old->rb_right;
+ node->rb_left = old->rb_left;
+
+ if (old->rb_parent)
+ {
+ if (old->rb_parent->rb_left == old)
+ old->rb_parent->rb_left = node;
+ else
+ old->rb_parent->rb_right = node;
+ } else
+ root->rb_node = node;
+
+ old->rb_left->rb_parent = node;
+ if (old->rb_right)
+ old->rb_right->rb_parent = node;
+ goto color;
+ }
+
+ parent = node->rb_parent;
+ color = node->rb_color;
+
+ if (child)
+ child->rb_parent = parent;
+ if (parent)
+ {
+ if (parent->rb_left == node)
+ parent->rb_left = child;
+ else
+ parent->rb_right = child;
+ }
+ else
+ root->rb_node = child;
+
+ color:
+ if (color == RB_BLACK)
+ __rb_erase_color(child, parent, root);
+}
+// EXPORT_SYMBOL(rb_erase);
+
+/*
+ * This function returns the first node (in sort order) of the tree.
+ */
+struct rb_node *rb_first(struct rb_root *root)
+{
+ struct rb_node *n;
+
+ n = root->rb_node;
+ if (!n)
+ return NULL;
+ while (n->rb_left)
+ n = n->rb_left;
+ return n;
+}
+// EXPORT_SYMBOL(rb_first);
+
+struct rb_node *rb_last(struct rb_root *root)
+{
+ struct rb_node *n;
+
+ n = root->rb_node;
+ if (!n)
+ return NULL;
+ while (n->rb_right)
+ n = n->rb_right;
+ return n;
+}
+// EXPORT_SYMBOL(rb_last);
+
+struct rb_node *rb_next(struct rb_node *node)
+{
+ /* If we have a right-hand child, go down and then left as far
+ as we can. */
+ if (node->rb_right) {
+ node = node->rb_right;
+ while (node->rb_left)
+ node=node->rb_left;
+ return node;
+ }
+
+ /* No right-hand children. Everything down and left is
+ smaller than us, so any 'next' node must be in the general
+ direction of our parent. Go up the tree; any time the
+ ancestor is a right-hand child of its parent, keep going
+ up. First time it's a left-hand child of its parent, said
+ parent is our 'next' node. */
+ while (node->rb_parent && node == node->rb_parent->rb_right)
+ node = node->rb_parent;
+
+ return node->rb_parent;
+}
+// EXPORT_SYMBOL(rb_next);
+
+struct rb_node *rb_prev(struct rb_node *node)
+{
+ /* If we have a left-hand child, go down and then right as far
+ as we can. */
+ if (node->rb_left) {
+ node = node->rb_left;
+ while (node->rb_right)
+ node=node->rb_right;
+ return node;
+ }
+
+ /* No left-hand children. Go up till we find an ancestor which
+ is a right-hand child of its parent */
+ while (node->rb_parent && node == node->rb_parent->rb_left)
+ node = node->rb_parent;
+
+ return node->rb_parent;
+}
+// EXPORT_SYMBOL(rb_prev);
+
+void rb_replace_node(struct rb_node *victim, struct rb_node *new,
+ struct rb_root *root)
+{
+ struct rb_node *parent = victim->rb_parent;
+
+ /* Set the surrounding nodes to point to the replacement */
+ if (parent) {
+ if (victim == parent->rb_left)
+ parent->rb_left = new;
+ else
+ parent->rb_right = new;
+ } else {
+ root->rb_node = new;
+ }
+ if (victim->rb_left)
+ victim->rb_left->rb_parent = new;
+ if (victim->rb_right)
+ victim->rb_right->rb_parent = new;
+
+ /* Copy the pointers/colour from the victim to the replacement */
+ *new = *victim;
+}
+// EXPORT_SYMBOL(rb_replace_node);
--- /dev/null
+/*
+ * Regex and string management functions.
+ *
+ * Copyright 2000-2010 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <ctype.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <common/config.h>
+#include <common/defaults.h>
+#include <common/regex.h>
+#include <common/standard.h>
+#include <proto/log.h>
+
+/* regex trash buffer used by various regex tests */
+regmatch_t pmatch[MAX_MATCH]; /* rm_so, rm_eo for regular expressions */
+
+int exp_replace(char *dst, unsigned int dst_size, char *src, const char *str, const regmatch_t *matches)
+{
+ char *old_dst = dst;
+ char* dst_end = dst + dst_size;
+
+ while (*str) {
+ if (*str == '\\') {
+ str++;
+ if (!*str)
+ return -1;
+
+ if (isdigit((unsigned char)*str)) {
+ int len, num;
+
+ num = *str - '0';
+ str++;
+
+ if (matches[num].rm_eo > -1 && matches[num].rm_so > -1) {
+ len = matches[num].rm_eo - matches[num].rm_so;
+
+ if (dst + len >= dst_end)
+ return -1;
+
+ memcpy(dst, src + matches[num].rm_so, len);
+ dst += len;
+ }
+
+ } else if (*str == 'x') {
+ unsigned char hex1, hex2;
+ str++;
+
+ if (!*str)
+ return -1;
+
+ hex1 = toupper(*str++) - '0';
+
+ if (!*str)
+ return -1;
+
+ hex2 = toupper(*str++) - '0';
+
+ if (hex1 > 9) hex1 -= 'A' - '9' - 1;
+ if (hex2 > 9) hex2 -= 'A' - '9' - 1;
+
+ if (dst >= dst_end)
+ return -1;
+
+ *dst++ = (hex1<<4) + hex2;
+ } else {
+ if (dst >= dst_end)
+ return -1;
+
+ *dst++ = *str++;
+ }
+ } else {
+ if (dst >= dst_end)
+ return -1;
+
+ *dst++ = *str++;
+ }
+ }
+ if (dst >= dst_end)
+ return -1;
+
+ *dst = '\0';
+ return dst - old_dst;
+}
+
+/* returns NULL if the replacement string <str> is valid, or the pointer to the first error */
+const char *check_replace_string(const char *str)
+{
+ const char *err = NULL;
+ while (*str) {
+ if (*str == '\\') {
+ err = str; /* in case of a backslash, we return the pointer to it */
+ str++;
+ if (!*str)
+ return err;
+ else if (isdigit((unsigned char)*str))
+ err = NULL;
+ else if (*str == 'x') {
+ str++;
+ if (!ishex(*str))
+ return err;
+ str++;
+ if (!ishex(*str))
+ return err;
+ err = NULL;
+ }
+ else {
+ Warning("'\\%c' : deprecated use of a backslash before something not '\\','x' or a digit.\n", *str);
+ err = NULL;
+ }
+ }
+ str++;
+ }
+ return err;
+}
+
+
+/* returns the pointer to an error in the replacement string, or NULL if OK */
+const char *chain_regex(struct hdr_exp **head, struct my_regex *preg,
+ int action, const char *replace, void *cond)
+{
+ struct hdr_exp *exp;
+
+ if (replace != NULL) {
+ const char *err;
+ err = check_replace_string(replace);
+ if (err)
+ return err;
+ }
+
+ while (*head != NULL)
+ head = &(*head)->next;
+
+ exp = calloc(1, sizeof(struct hdr_exp));
+
+ exp->preg = preg;
+ exp->replace = replace;
+ exp->action = action;
+ exp->cond = cond;
+ *head = exp;
+
+ return NULL;
+}
+
+/* This function applies a regex. It takes a const NUL-terminated string as
+ * input. If the regex doesn't match, it returns false, otherwise it returns
+ * true. When compiled with JIT, this function executes strlen() on the
+ * subject. Currently the only supported flag is REG_NOTBOL.
+ */
+int regex_exec_match(const struct my_regex *preg, const char *subject,
+ size_t nmatch, regmatch_t pmatch[], int flags) {
+#if defined(USE_PCRE) || defined(USE_PCRE_JIT)
+ int ret;
+ int matches[MAX_MATCH * 3];
+ int enmatch;
+ int i;
+ int options;
+
+ /* Silently limit the number of allowed matches:
+ * MAX_MATCH is the maximum number of matches we
+ * can report, any extra ones are simply not
+ * stored.
+ */
+ enmatch = nmatch;
+ if (enmatch > MAX_MATCH)
+ enmatch = MAX_MATCH;
+
+ options = 0;
+ if (flags & REG_NOTBOL)
+ options |= PCRE_NOTBOL;
+
+ /* The value returned by pcre_exec() is one more than the highest numbered
+ * pair that has been set. For example, if two substrings have been captured,
+ * the returned value is 3. If there are no capturing subpatterns, the return
+ * value from a successful match is 1, indicating that just the first pair of
+ * offsets has been set.
+ *
+ * It seems that this function returns 0 if it detects more matches than
+ * the available space in the matches array.
+ */
+ ret = pcre_exec(preg->reg, preg->extra, subject, strlen(subject), 0, options, matches, enmatch * 3);
+ if (ret < 0)
+ return 0;
+
+ if (ret == 0)
+ ret = enmatch;
+
+ for (i=0; i<nmatch; i++) {
+ /* Copy offset. */
+ if (i < ret) {
+ pmatch[i].rm_so = matches[(i*2)];
+ pmatch[i].rm_eo = matches[(i*2)+1];
+ continue;
+ }
+ /* Set the unmatched flag (-1). */
+ pmatch[i].rm_so = -1;
+ pmatch[i].rm_eo = -1;
+ }
+ return 1;
+#else
+ int match;
+
+ flags &= REG_NOTBOL;
+ match = regexec(&preg->regex, subject, nmatch, pmatch, flags);
+ if (match == REG_NOMATCH)
+ return 0;
+ return 1;
+#endif
+}
+
+/* This function applies a regex. It takes a "char *" and a length as input.
+ * <subject> can be modified during the processing. If the regex doesn't
+ * match, it returns false, otherwise it returns true.
+ * When compiled with standard POSIX regex or PCRE, this function writes a
+ * temporary NUL character at the end of <subject>, so <subject> must
+ * have a real length of <length> + 1. Currently the only supported flag is
+ * REG_NOTBOL.
+ */
+int regex_exec_match2(const struct my_regex *preg, char *subject, int length,
+ size_t nmatch, regmatch_t pmatch[], int flags) {
+#if defined(USE_PCRE) || defined(USE_PCRE_JIT)
+ int ret;
+ int matches[MAX_MATCH * 3];
+ int enmatch;
+ int i;
+ int options;
+
+ /* Silently limit the number of allowed matches:
+ * MAX_MATCH is the maximum number of matches we
+ * can report, any extra ones are simply not
+ * stored.
+ */
+ enmatch = nmatch;
+ if (enmatch > MAX_MATCH)
+ enmatch = MAX_MATCH;
+
+ options = 0;
+ if (flags & REG_NOTBOL)
+ options |= PCRE_NOTBOL;
+
+ /* The value returned by pcre_exec() is one more than the highest numbered
+ * pair that has been set. For example, if two substrings have been captured,
+ * the returned value is 3. If there are no capturing subpatterns, the return
+ * value from a successful match is 1, indicating that just the first pair of
+ * offsets has been set.
+ *
+ * It seems that this function returns 0 if it detects more matches than
+ * the available space in the matches array.
+ */
+ ret = pcre_exec(preg->reg, preg->extra, subject, length, 0, options, matches, enmatch * 3);
+ if (ret < 0)
+ return 0;
+
+ if (ret == 0)
+ ret = enmatch;
+
+ for (i=0; i<nmatch; i++) {
+ /* Copy offset. */
+ if (i < ret) {
+ pmatch[i].rm_so = matches[(i*2)];
+ pmatch[i].rm_eo = matches[(i*2)+1];
+ continue;
+ }
+ /* Set the unmatched flag (-1). */
+ pmatch[i].rm_so = -1;
+ pmatch[i].rm_eo = -1;
+ }
+ return 1;
+#else
+ char old_char = subject[length];
+ int match;
+
+ flags &= REG_NOTBOL;
+ subject[length] = 0;
+ match = regexec(&preg->regex, subject, nmatch, pmatch, flags);
+ subject[length] = old_char;
+ if (match == REG_NOMATCH)
+ return 0;
+ return 1;
+#endif
+}
+
+int regex_comp(const char *str, struct my_regex *regex, int cs, int cap, char **err)
+{
+#if defined(USE_PCRE) || defined(USE_PCRE_JIT)
+ int flags = 0;
+ const char *error;
+ int erroffset;
+
+ if (!cs)
+ flags |= PCRE_CASELESS;
+ if (!cap)
+ flags |= PCRE_NO_AUTO_CAPTURE;
+
+ regex->reg = pcre_compile(str, flags, &error, &erroffset, NULL);
+ if (!regex->reg) {
+ memprintf(err, "regex '%s' is invalid (error=%s, erroffset=%d)", str, error, erroffset);
+ return 0;
+ }
+
+ regex->extra = pcre_study(regex->reg, PCRE_STUDY_JIT_COMPILE, &error);
+ if (!regex->extra && error != NULL) {
+ pcre_free(regex->reg);
+ memprintf(err, "failed to compile regex '%s' (error=%s)", str, error);
+ return 0;
+ }
+#else
+ int flags = REG_EXTENDED;
+
+ if (!cs)
+ flags |= REG_ICASE;
+ if (!cap)
+ flags |= REG_NOSUB;
+
+ if (regcomp(&regex->regex, str, flags) != 0) {
+ memprintf(err, "regex '%s' is invalid", str);
+ return 0;
+ }
+#endif
+ return 1;
+}
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Sample management functions.
+ *
+ * Copyright 2009-2010 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+ * Copyright (C) 2012 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <ctype.h>
+#include <string.h>
+#include <arpa/inet.h>
+#include <stdio.h>
+
+#include <types/global.h>
+
+#include <common/chunk.h>
+#include <common/hash.h>
+#include <common/standard.h>
+#include <common/uri_auth.h>
+#include <common/base64.h>
+
+#include <proto/arg.h>
+#include <proto/auth.h>
+#include <proto/log.h>
+#include <proto/proto_http.h>
+#include <proto/proxy.h>
+#include <proto/sample.h>
+#include <proto/stick_table.h>
+#include <proto/vars.h>
+
+/* sample type names */
+const char *smp_to_type[SMP_TYPES] = {
+ [SMP_T_ANY] = "any",
+ [SMP_T_BOOL] = "bool",
+ [SMP_T_SINT] = "sint",
+ [SMP_T_ADDR] = "addr",
+ [SMP_T_IPV4] = "ipv4",
+ [SMP_T_IPV6] = "ipv6",
+ [SMP_T_STR] = "str",
+ [SMP_T_BIN] = "bin",
+ [SMP_T_METH] = "meth",
+};
+
+/* static sample used in sample_process() when <p> is NULL */
+static struct sample temp_smp;
+
+/* list head of all known sample fetch keywords */
+static struct sample_fetch_kw_list sample_fetches = {
+ .list = LIST_HEAD_INIT(sample_fetches.list)
+};
+
+/* list head of all known sample format conversion keywords */
+static struct sample_conv_kw_list sample_convs = {
+ .list = LIST_HEAD_INIT(sample_convs.list)
+};
+
+const unsigned int fetch_cap[SMP_SRC_ENTRIES] = {
+ [SMP_SRC_INTRN] = (SMP_VAL_FE_CON_ACC | SMP_VAL_FE_SES_ACC | SMP_VAL_FE_REQ_CNT |
+ SMP_VAL_FE_HRQ_HDR | SMP_VAL_FE_HRQ_BDY | SMP_VAL_FE_SET_BCK |
+ SMP_VAL_BE_REQ_CNT | SMP_VAL_BE_HRQ_HDR | SMP_VAL_BE_HRQ_BDY |
+ SMP_VAL_BE_SET_SRV | SMP_VAL_BE_SRV_CON | SMP_VAL_BE_RES_CNT |
+ SMP_VAL_BE_HRS_HDR | SMP_VAL_BE_HRS_BDY | SMP_VAL_BE_STO_RUL |
+ SMP_VAL_FE_RES_CNT | SMP_VAL_FE_HRS_HDR | SMP_VAL_FE_HRS_BDY |
+ SMP_VAL_FE_LOG_END),
+
+ [SMP_SRC_LISTN] = (SMP_VAL_FE_CON_ACC | SMP_VAL_FE_SES_ACC | SMP_VAL_FE_REQ_CNT |
+ SMP_VAL_FE_HRQ_HDR | SMP_VAL_FE_HRQ_BDY | SMP_VAL_FE_SET_BCK |
+ SMP_VAL_BE_REQ_CNT | SMP_VAL_BE_HRQ_HDR | SMP_VAL_BE_HRQ_BDY |
+ SMP_VAL_BE_SET_SRV | SMP_VAL_BE_SRV_CON | SMP_VAL_BE_RES_CNT |
+ SMP_VAL_BE_HRS_HDR | SMP_VAL_BE_HRS_BDY | SMP_VAL_BE_STO_RUL |
+ SMP_VAL_FE_RES_CNT | SMP_VAL_FE_HRS_HDR | SMP_VAL_FE_HRS_BDY |
+ SMP_VAL_FE_LOG_END),
+
+ [SMP_SRC_FTEND] = (SMP_VAL_FE_CON_ACC | SMP_VAL_FE_SES_ACC | SMP_VAL_FE_REQ_CNT |
+ SMP_VAL_FE_HRQ_HDR | SMP_VAL_FE_HRQ_BDY | SMP_VAL_FE_SET_BCK |
+ SMP_VAL_BE_REQ_CNT | SMP_VAL_BE_HRQ_HDR | SMP_VAL_BE_HRQ_BDY |
+ SMP_VAL_BE_SET_SRV | SMP_VAL_BE_SRV_CON | SMP_VAL_BE_RES_CNT |
+ SMP_VAL_BE_HRS_HDR | SMP_VAL_BE_HRS_BDY | SMP_VAL_BE_STO_RUL |
+ SMP_VAL_FE_RES_CNT | SMP_VAL_FE_HRS_HDR | SMP_VAL_FE_HRS_BDY |
+ SMP_VAL_FE_LOG_END),
+
+ [SMP_SRC_L4CLI] = (SMP_VAL_FE_CON_ACC | SMP_VAL_FE_SES_ACC | SMP_VAL_FE_REQ_CNT |
+ SMP_VAL_FE_HRQ_HDR | SMP_VAL_FE_HRQ_BDY | SMP_VAL_FE_SET_BCK |
+ SMP_VAL_BE_REQ_CNT | SMP_VAL_BE_HRQ_HDR | SMP_VAL_BE_HRQ_BDY |
+ SMP_VAL_BE_SET_SRV | SMP_VAL_BE_SRV_CON | SMP_VAL_BE_RES_CNT |
+ SMP_VAL_BE_HRS_HDR | SMP_VAL_BE_HRS_BDY | SMP_VAL_BE_STO_RUL |
+ SMP_VAL_FE_RES_CNT | SMP_VAL_FE_HRS_HDR | SMP_VAL_FE_HRS_BDY |
+ SMP_VAL_FE_LOG_END),
+
+ [SMP_SRC_L5CLI] = (SMP_VAL___________ | SMP_VAL_FE_SES_ACC | SMP_VAL_FE_REQ_CNT |
+ SMP_VAL_FE_HRQ_HDR | SMP_VAL_FE_HRQ_BDY | SMP_VAL_FE_SET_BCK |
+ SMP_VAL_BE_REQ_CNT | SMP_VAL_BE_HRQ_HDR | SMP_VAL_BE_HRQ_BDY |
+ SMP_VAL_BE_SET_SRV | SMP_VAL_BE_SRV_CON | SMP_VAL_BE_RES_CNT |
+ SMP_VAL_BE_HRS_HDR | SMP_VAL_BE_HRS_BDY | SMP_VAL_BE_STO_RUL |
+ SMP_VAL_FE_RES_CNT | SMP_VAL_FE_HRS_HDR | SMP_VAL_FE_HRS_BDY |
+ SMP_VAL_FE_LOG_END),
+
+ [SMP_SRC_TRACK] = (SMP_VAL_FE_CON_ACC | SMP_VAL_FE_SES_ACC | SMP_VAL_FE_REQ_CNT |
+ SMP_VAL_FE_HRQ_HDR | SMP_VAL_FE_HRQ_BDY | SMP_VAL_FE_SET_BCK |
+ SMP_VAL_BE_REQ_CNT | SMP_VAL_BE_HRQ_HDR | SMP_VAL_BE_HRQ_BDY |
+ SMP_VAL_BE_SET_SRV | SMP_VAL_BE_SRV_CON | SMP_VAL_BE_RES_CNT |
+ SMP_VAL_BE_HRS_HDR | SMP_VAL_BE_HRS_BDY | SMP_VAL_BE_STO_RUL |
+ SMP_VAL_FE_RES_CNT | SMP_VAL_FE_HRS_HDR | SMP_VAL_FE_HRS_BDY |
+ SMP_VAL_FE_LOG_END),
+
+ [SMP_SRC_L6REQ] = (SMP_VAL___________ | SMP_VAL___________ | SMP_VAL_FE_REQ_CNT |
+ SMP_VAL_FE_HRQ_HDR | SMP_VAL_FE_HRQ_BDY | SMP_VAL_FE_SET_BCK |
+ SMP_VAL_BE_REQ_CNT | SMP_VAL_BE_HRQ_HDR | SMP_VAL_BE_HRQ_BDY |
+ SMP_VAL_BE_SET_SRV | SMP_VAL_BE_SRV_CON | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________),
+
+ [SMP_SRC_HRQHV] = (SMP_VAL___________ | SMP_VAL___________ | SMP_VAL_FE_REQ_CNT |
+ SMP_VAL_FE_HRQ_HDR | SMP_VAL_FE_HRQ_BDY | SMP_VAL_FE_SET_BCK |
+ SMP_VAL_BE_REQ_CNT | SMP_VAL_BE_HRQ_HDR | SMP_VAL_BE_HRQ_BDY |
+ SMP_VAL_BE_SET_SRV | SMP_VAL_BE_SRV_CON | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________),
+
+ [SMP_SRC_HRQHP] = (SMP_VAL___________ | SMP_VAL___________ | SMP_VAL_FE_REQ_CNT |
+ SMP_VAL_FE_HRQ_HDR | SMP_VAL_FE_HRQ_BDY | SMP_VAL_FE_SET_BCK |
+ SMP_VAL_BE_REQ_CNT | SMP_VAL_BE_HRQ_HDR | SMP_VAL_BE_HRQ_BDY |
+ SMP_VAL_BE_SET_SRV | SMP_VAL_BE_SRV_CON | SMP_VAL_BE_RES_CNT |
+ SMP_VAL_BE_HRS_HDR | SMP_VAL_BE_HRS_BDY | SMP_VAL_BE_STO_RUL |
+ SMP_VAL_FE_RES_CNT | SMP_VAL_FE_HRS_HDR | SMP_VAL_FE_HRS_BDY |
+ SMP_VAL_FE_LOG_END),
+
+ [SMP_SRC_HRQBO] = (SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL_FE_HRQ_BDY | SMP_VAL_FE_SET_BCK |
+ SMP_VAL_BE_REQ_CNT | SMP_VAL_BE_HRQ_HDR | SMP_VAL_BE_HRQ_BDY |
+ SMP_VAL_BE_SET_SRV | SMP_VAL_BE_SRV_CON | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________),
+
+ [SMP_SRC_BKEND] = (SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL_BE_REQ_CNT | SMP_VAL_BE_HRQ_HDR | SMP_VAL_BE_HRQ_BDY |
+ SMP_VAL_BE_SET_SRV | SMP_VAL_BE_SRV_CON | SMP_VAL_BE_RES_CNT |
+ SMP_VAL_BE_HRS_HDR | SMP_VAL_BE_HRS_BDY | SMP_VAL_BE_STO_RUL |
+ SMP_VAL_FE_RES_CNT | SMP_VAL_FE_HRS_HDR | SMP_VAL_FE_HRS_BDY |
+ SMP_VAL_FE_LOG_END),
+
+ [SMP_SRC_SERVR] = (SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL_BE_SRV_CON | SMP_VAL_BE_RES_CNT |
+ SMP_VAL_BE_HRS_HDR | SMP_VAL_BE_HRS_BDY | SMP_VAL_BE_STO_RUL |
+ SMP_VAL_FE_RES_CNT | SMP_VAL_FE_HRS_HDR | SMP_VAL_FE_HRS_BDY |
+ SMP_VAL_FE_LOG_END),
+
+ [SMP_SRC_L4SRV] = (SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL_BE_RES_CNT |
+ SMP_VAL_BE_HRS_HDR | SMP_VAL_BE_HRS_BDY | SMP_VAL_BE_STO_RUL |
+ SMP_VAL_FE_RES_CNT | SMP_VAL_FE_HRS_HDR | SMP_VAL_FE_HRS_BDY |
+ SMP_VAL_FE_LOG_END),
+
+ [SMP_SRC_L5SRV] = (SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL_BE_RES_CNT |
+ SMP_VAL_BE_HRS_HDR | SMP_VAL_BE_HRS_BDY | SMP_VAL_BE_STO_RUL |
+ SMP_VAL_FE_RES_CNT | SMP_VAL_FE_HRS_HDR | SMP_VAL_FE_HRS_BDY |
+ SMP_VAL_FE_LOG_END),
+
+ [SMP_SRC_L6RES] = (SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL_BE_RES_CNT |
+ SMP_VAL_BE_HRS_HDR | SMP_VAL_BE_HRS_BDY | SMP_VAL_BE_STO_RUL |
+ SMP_VAL_FE_RES_CNT | SMP_VAL_FE_HRS_HDR | SMP_VAL_FE_HRS_BDY |
+ SMP_VAL___________),
+
+ [SMP_SRC_HRSHV] = (SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL_BE_RES_CNT |
+ SMP_VAL_BE_HRS_HDR | SMP_VAL_BE_HRS_BDY | SMP_VAL_BE_STO_RUL |
+ SMP_VAL_FE_RES_CNT | SMP_VAL_FE_HRS_HDR | SMP_VAL_FE_HRS_BDY |
+ SMP_VAL___________),
+
+ [SMP_SRC_HRSHP] = (SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL_BE_RES_CNT |
+ SMP_VAL_BE_HRS_HDR | SMP_VAL_BE_HRS_BDY | SMP_VAL_BE_STO_RUL |
+ SMP_VAL_FE_RES_CNT | SMP_VAL_FE_HRS_HDR | SMP_VAL_FE_HRS_BDY |
+ SMP_VAL_FE_LOG_END),
+
+ [SMP_SRC_HRSBO] = (SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL_BE_HRS_BDY | SMP_VAL_BE_STO_RUL |
+ SMP_VAL_FE_RES_CNT | SMP_VAL_FE_HRS_HDR | SMP_VAL_FE_HRS_BDY |
+ SMP_VAL___________),
+
+ [SMP_SRC_RQFIN] = (SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL_FE_LOG_END),
+
+ [SMP_SRC_RSFIN] = (SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL_FE_LOG_END),
+
+ [SMP_SRC_TXFIN] = (SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL_FE_LOG_END),
+
+ [SMP_SRC_SSFIN] = (SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL___________ | SMP_VAL___________ | SMP_VAL___________ |
+ SMP_VAL_FE_LOG_END),
+};
+
+static const char *fetch_src_names[SMP_SRC_ENTRIES] = {
+ [SMP_SRC_INTRN] = "internal state",
+ [SMP_SRC_LISTN] = "listener",
+ [SMP_SRC_FTEND] = "frontend",
+ [SMP_SRC_L4CLI] = "client address",
+ [SMP_SRC_L5CLI] = "client-side connection",
+ [SMP_SRC_TRACK] = "track counters",
+ [SMP_SRC_L6REQ] = "request buffer",
+ [SMP_SRC_HRQHV] = "HTTP request headers",
+ [SMP_SRC_HRQHP] = "HTTP request",
+ [SMP_SRC_HRQBO] = "HTTP request body",
+ [SMP_SRC_BKEND] = "backend",
+ [SMP_SRC_SERVR] = "server",
+ [SMP_SRC_L4SRV] = "server address",
+ [SMP_SRC_L5SRV] = "server-side connection",
+ [SMP_SRC_L6RES] = "response buffer",
+ [SMP_SRC_HRSHV] = "HTTP response headers",
+ [SMP_SRC_HRSHP] = "HTTP response",
+ [SMP_SRC_HRSBO] = "HTTP response body",
+ [SMP_SRC_RQFIN] = "request buffer statistics",
+ [SMP_SRC_RSFIN] = "response buffer statistics",
+ [SMP_SRC_TXFIN] = "transaction statistics",
+ [SMP_SRC_SSFIN] = "session statistics",
+};
+
+static const char *fetch_ckp_names[SMP_CKP_ENTRIES] = {
+ [SMP_CKP_FE_CON_ACC] = "frontend tcp-request connection rule",
+ [SMP_CKP_FE_SES_ACC] = "frontend tcp-request session rule",
+ [SMP_CKP_FE_REQ_CNT] = "frontend tcp-request content rule",
+ [SMP_CKP_FE_HRQ_HDR] = "frontend http-request header rule",
+ [SMP_CKP_FE_HRQ_BDY] = "frontend http-request body rule",
+ [SMP_CKP_FE_SET_BCK] = "frontend use-backend rule",
+ [SMP_CKP_BE_REQ_CNT] = "backend tcp-request content rule",
+ [SMP_CKP_BE_HRQ_HDR] = "backend http-request header rule",
+ [SMP_CKP_BE_HRQ_BDY] = "backend http-request body rule",
+ [SMP_CKP_BE_SET_SRV] = "backend use-server, balance or stick-match rule",
+ [SMP_CKP_BE_SRV_CON] = "server source selection",
+ [SMP_CKP_BE_RES_CNT] = "backend tcp-response content rule",
+ [SMP_CKP_BE_HRS_HDR] = "backend http-response header rule",
+ [SMP_CKP_BE_HRS_BDY] = "backend http-response body rule",
+ [SMP_CKP_BE_STO_RUL] = "backend stick-store rule",
+ [SMP_CKP_FE_RES_CNT] = "frontend tcp-response content rule",
+ [SMP_CKP_FE_HRS_HDR] = "frontend http-response header rule",
+ [SMP_CKP_FE_HRS_BDY] = "frontend http-response body rule",
+ [SMP_CKP_FE_LOG_END] = "logs",
+};
+
+/* This function returns the type of the data returned by the sample_expr.
+ * It assumes that the <expr> and all of its converters are properly
+ * initialized.
+ */
+inline int smp_expr_output_type(struct sample_expr *expr)
+{
+ struct sample_conv_expr *smp_expr;
+
+ if (!LIST_ISEMPTY(&expr->conv_exprs)) {
+ smp_expr = LIST_PREV(&expr->conv_exprs, struct sample_conv_expr *, list);
+ return smp_expr->conv->out_type;
+ }
+ return expr->fetch->out_type;
+}
+
+
+/* Fill the trash with a comma-delimited list of source names for the <use> bit
+ * field, which must be composed of a non-empty set of SMP_USE_* flags. The
+ * return value is a pointer to the string in the trash buffer.
+ */
+const char *sample_src_names(unsigned int use)
+{
+ int bit;
+
+ trash.len = 0;
+ trash.str[0] = '\0';
+ for (bit = 0; bit < SMP_SRC_ENTRIES; bit++) {
+ if (!(use & ~((1 << bit) - 1)))
+ break; /* no more bits */
+
+ if (!(use & (1 << bit)))
+ continue; /* bit not set */
+
+ trash.len += snprintf(trash.str + trash.len, trash.size - trash.len, "%s%s",
+ (use & ((1 << bit) - 1)) ? "," : "",
+ fetch_src_names[bit]);
+ }
+ return trash.str;
+}
+
+/* return a pointer to the correct sample checkpoint name, or "unknown" when
+ * the flags are invalid. Only the lowest bit is used, higher bits are ignored
+ * if set.
+ */
+const char *sample_ckp_names(unsigned int use)
+{
+ int bit;
+
+ for (bit = 0; bit < SMP_CKP_ENTRIES; bit++)
+ if (use & (1 << bit))
+ return fetch_ckp_names[bit];
+ return "unknown sample check place, please report this bug";
+}
+
+/*
+ * Registers the sample fetch keyword list <kwl> as a list of valid keywords
+ * for next parsing sessions. The fetch keywords capabilities are also computed
+ * from their ->use field.
+ */
+void sample_register_fetches(struct sample_fetch_kw_list *kwl)
+{
+ struct sample_fetch *sf;
+ int bit;
+
+ for (sf = kwl->kw; sf->kw != NULL; sf++) {
+ for (bit = 0; bit < SMP_SRC_ENTRIES; bit++)
+ if (sf->use & (1 << bit))
+ sf->val |= fetch_cap[bit];
+ }
+ LIST_ADDQ(&sample_fetches.list, &kwl->list);
+}
+
+/*
+ * Registers the sample format conversion keyword list <pckl> as a list of
+ * valid keywords for next parsing sessions.
+ */
+void sample_register_convs(struct sample_conv_kw_list *pckl)
+{
+ LIST_ADDQ(&sample_convs.list, &pckl->list);
+}
+
+/*
+ * Returns a pointer to the sample fetch keyword structure identified by the
+ * string of length <len> in buffer <kw>, or NULL if no match is found.
+ */
+struct sample_fetch *find_sample_fetch(const char *kw, int len)
+{
+ int index;
+ struct sample_fetch_kw_list *kwl;
+
+ list_for_each_entry(kwl, &sample_fetches.list, list) {
+ for (index = 0; kwl->kw[index].kw != NULL; index++) {
+ if (strncmp(kwl->kw[index].kw, kw, len) == 0 &&
+ kwl->kw[index].kw[len] == '\0')
+ return &kwl->kw[index];
+ }
+ }
+ return NULL;
+}
+
+/* This function browses the list of available sample fetches. <current> is
+ * the last sample fetch returned. On the first call, it must be set to NULL.
+ * <idx> is the index of the next sample fetch entry. It is used as a private
+ * value and does not need to be initialized.
+ *
+ * It always returns the next sample_fetch entry, or NULL when the end of
+ * the list is reached.
+ */
+struct sample_fetch *sample_fetch_getnext(struct sample_fetch *current, int *idx)
+{
+ struct sample_fetch_kw_list *kwl;
+ struct sample_fetch *base;
+
+ if (!current) {
+ /* Get first kwl entry. */
+ kwl = LIST_NEXT(&sample_fetches.list, struct sample_fetch_kw_list *, list);
+ (*idx) = 0;
+ } else {
+ /* Get kwl corresponding to the current entry. */
+ base = current + 1 - (*idx);
+ kwl = container_of(base, struct sample_fetch_kw_list, kw);
+ }
+
+ while (1) {
+
+ /* Check if kwl is the last entry. */
+ if (&kwl->list == &sample_fetches.list)
+ return NULL;
+
+ /* idx contains the index of the next keyword. If it is available, return it. */
+ if (kwl->kw[*idx].kw) {
+ (*idx)++;
+ return &kwl->kw[(*idx)-1];
+ }
+
+ /* get next entry in the main list, and return NULL if the end is reached. */
+ kwl = LIST_NEXT(&kwl->list, struct sample_fetch_kw_list *, list);
+
+ /* Reset the index to 0 and loop again. */
+ (*idx) = 0;
+ }
+}
+
+/* This function browses the list of available converters. <current> is
+ * the last converter returned. On the first call, it must be set to NULL.
+ * <idx> is the index of the next converter entry. It is used as a private
+ * value and does not need to be initialized.
+ *
+ * It always returns the next sample_conv entry, or NULL when the end of
+ * the list is reached.
+ */
+struct sample_conv *sample_conv_getnext(struct sample_conv *current, int *idx)
+{
+ struct sample_conv_kw_list *kwl;
+ struct sample_conv *base;
+
+ if (!current) {
+ /* Get first kwl entry. */
+ kwl = LIST_NEXT(&sample_convs.list, struct sample_conv_kw_list *, list);
+ (*idx) = 0;
+ } else {
+ /* Get kwl corresponding to the current entry. */
+ base = current + 1 - (*idx);
+ kwl = container_of(base, struct sample_conv_kw_list, kw);
+ }
+
+ while (1) {
+ /* Check if kwl is the last entry. */
+ if (&kwl->list == &sample_convs.list)
+ return NULL;
+
+ /* idx contains the index of the next keyword. If it is available, return it. */
+ if (kwl->kw[*idx].kw) {
+ (*idx)++;
+ return &kwl->kw[(*idx)-1];
+ }
+
+ /* get next entry in the main list, and return NULL if the end is reached. */
+ kwl = LIST_NEXT(&kwl->list, struct sample_conv_kw_list *, list);
+
+ /* Reset the index to 0 and loop again. */
+ (*idx) = 0;
+ }
+}
+
+/*
+ * Returns a pointer to the sample format conversion keyword structure
+ * identified by the string of length <len> in buffer <kw>, or NULL if no
+ * match is found.
+ */
+struct sample_conv *find_sample_conv(const char *kw, int len)
+{
+ int index;
+ struct sample_conv_kw_list *kwl;
+
+ list_for_each_entry(kwl, &sample_convs.list, list) {
+ for (index = 0; kwl->kw[index].kw != NULL; index++) {
+ if (strncmp(kwl->kw[index].kw, kw, len) == 0 &&
+ kwl->kw[index].kw[len] == '\0')
+ return &kwl->kw[index];
+ }
+ }
+ return NULL;
+}
+
+/******************************************************************/
+/* Sample casts functions */
+/* Note: these functions do *NOT* set the output type on the */
+/* sample, the caller is responsible for doing this on return. */
+/******************************************************************/
+
+static int c_ip2int(struct sample *smp)
+{
+ smp->data.u.sint = ntohl(smp->data.u.ipv4.s_addr);
+ smp->data.type = SMP_T_SINT;
+ return 1;
+}
+
+static int c_ip2str(struct sample *smp)
+{
+ struct chunk *trash = get_trash_chunk();
+
+ if (!inet_ntop(AF_INET, (void *)&smp->data.u.ipv4, trash->str, trash->size))
+ return 0;
+
+ trash->len = strlen(trash->str);
+ smp->data.u.str = *trash;
+ smp->data.type = SMP_T_STR;
+ smp->flags &= ~SMP_F_CONST;
+
+ return 1;
+}
+
+static int c_ip2ipv6(struct sample *smp)
+{
+ v4tov6(&smp->data.u.ipv6, &smp->data.u.ipv4);
+ smp->data.type = SMP_T_IPV6;
+ return 1;
+}
+
+static int c_ipv62ip(struct sample *smp)
+{
+ if (!v6tov4(&smp->data.u.ipv4, &smp->data.u.ipv6))
+ return 0;
+ smp->data.type = SMP_T_IPV4;
+ return 1;
+}
+
+static int c_ipv62str(struct sample *smp)
+{
+ struct chunk *trash = get_trash_chunk();
+
+ if (!inet_ntop(AF_INET6, (void *)&smp->data.u.ipv6, trash->str, trash->size))
+ return 0;
+
+ trash->len = strlen(trash->str);
+ smp->data.u.str = *trash;
+ smp->data.type = SMP_T_STR;
+ smp->flags &= ~SMP_F_CONST;
+ return 1;
+}
+
+static int c_int2ip(struct sample *smp)
+{
+ smp->data.u.ipv4.s_addr = htonl((unsigned int)smp->data.u.sint);
+ smp->data.type = SMP_T_IPV4;
+ return 1;
+}
+
+static int c_int2ipv6(struct sample *smp)
+{
+ smp->data.u.ipv4.s_addr = htonl((unsigned int)smp->data.u.sint);
+ v4tov6(&smp->data.u.ipv6, &smp->data.u.ipv4);
+ smp->data.type = SMP_T_IPV6;
+ return 1;
+}
+
+static int c_str2addr(struct sample *smp)
+{
+ if (!buf2ip(smp->data.u.str.str, smp->data.u.str.len, &smp->data.u.ipv4)) {
+ if (!buf2ip6(smp->data.u.str.str, smp->data.u.str.len, &smp->data.u.ipv6))
+ return 0;
+ smp->data.type = SMP_T_IPV6;
+ smp->flags &= ~SMP_F_CONST;
+ return 1;
+ }
+ smp->data.type = SMP_T_IPV4;
+ smp->flags &= ~SMP_F_CONST;
+ return 1;
+}
+
+static int c_str2ip(struct sample *smp)
+{
+ if (!buf2ip(smp->data.u.str.str, smp->data.u.str.len, &smp->data.u.ipv4))
+ return 0;
+ smp->data.type = SMP_T_IPV4;
+ smp->flags &= ~SMP_F_CONST;
+ return 1;
+}
+
+static int c_str2ipv6(struct sample *smp)
+{
+ if (!buf2ip6(smp->data.u.str.str, smp->data.u.str.len, &smp->data.u.ipv6))
+ return 0;
+ smp->data.type = SMP_T_IPV6;
+ smp->flags &= ~SMP_F_CONST;
+ return 1;
+}
+
+/*
+ * A NUL character, if present, always marks the end of the string.
+ * The data is never modified, so we can ignore the CONST case.
+ */
+static int c_bin2str(struct sample *smp)
+{
+ int i;
+
+ for (i = 0; i < smp->data.u.str.len; i++) {
+ if (!smp->data.u.str.str[i]) {
+ smp->data.u.str.len = i;
+ break;
+ }
+ }
+ return 1;
+}
+
+static int c_int2str(struct sample *smp)
+{
+ struct chunk *trash = get_trash_chunk();
+ char *pos;
+
+ pos = lltoa_r(smp->data.u.sint, trash->str, trash->size);
+ if (!pos)
+ return 0;
+
+ trash->size = trash->size - (pos - trash->str);
+ trash->str = pos;
+ trash->len = strlen(pos);
+ smp->data.u.str = *trash;
+ smp->data.type = SMP_T_STR;
+ smp->flags &= ~SMP_F_CONST;
+ return 1;
+}
+
+/* This function duplicates data and removes the flag "const". */
+int smp_dup(struct sample *smp)
+{
+ struct chunk *trash;
+
+ /* If the const flag is not set, we don't need to duplicate the
+ * pattern as it can be modified in place.
+ */
+ if (!(smp->flags & SMP_F_CONST))
+ return 1;
+
+ switch (smp->data.type) {
+ case SMP_T_BOOL:
+ case SMP_T_SINT:
+ case SMP_T_ADDR:
+ case SMP_T_IPV4:
+ case SMP_T_IPV6:
+ /* These types are not const. */
+ break;
+ case SMP_T_STR:
+ case SMP_T_BIN:
+ /* Duplicate data. */
+ trash = get_trash_chunk();
+ trash->len = smp->data.u.str.len < trash->size ? smp->data.u.str.len : trash->size;
+ memcpy(trash->str, smp->data.u.str.str, trash->len);
+ smp->data.u.str = *trash;
+ break;
+ default:
+ /* Other cases are unexpected. */
+ return 0;
+ }
+
+ /* remove const flag */
+ smp->flags &= ~SMP_F_CONST;
+ return 1;
+}
+
+int c_none(struct sample *smp)
+{
+ return 1;
+}
+
+static int c_str2int(struct sample *smp)
+{
+ const char *str;
+ const char *end;
+
+ if (smp->data.u.str.len == 0)
+ return 0;
+
+ str = smp->data.u.str.str;
+ end = smp->data.u.str.str + smp->data.u.str.len;
+
+ smp->data.u.sint = read_int64(&str, end);
+ smp->data.type = SMP_T_SINT;
+ smp->flags &= ~SMP_F_CONST;
+ return 1;
+}
+
+static int c_str2meth(struct sample *smp)
+{
+ enum http_meth_t meth;
+ int len;
+
+ meth = find_http_meth(smp->data.u.str.str, smp->data.u.str.len);
+ if (meth == HTTP_METH_OTHER) {
+ len = smp->data.u.str.len;
+ smp->data.u.meth.str.str = smp->data.u.str.str;
+ smp->data.u.meth.str.len = len;
+ }
+ else
+ smp->flags &= ~SMP_F_CONST;
+ smp->data.u.meth.meth = meth;
+ smp->data.type = SMP_T_METH;
+ return 1;
+}
+
+static int c_meth2str(struct sample *smp)
+{
+ int len;
+ enum http_meth_t meth;
+
+ if (smp->data.u.meth.meth == HTTP_METH_OTHER) {
+ /* The method is unknown. Copy the original pointer. */
+ len = smp->data.u.meth.str.len;
+ smp->data.u.str.str = smp->data.u.meth.str.str;
+ smp->data.u.str.len = len;
+ smp->data.type = SMP_T_STR;
+ }
+ else if (smp->data.u.meth.meth < HTTP_METH_OTHER) {
+ /* The method is known, copy the pointer containing the string. */
+ meth = smp->data.u.meth.meth;
+ smp->data.u.str.str = http_known_methods[meth].name;
+ smp->data.u.str.len = http_known_methods[meth].len;
+ smp->flags |= SMP_F_CONST;
+ smp->data.type = SMP_T_STR;
+ }
+ else {
+ /* Unknown method */
+ return 0;
+ }
+ return 1;
+}
+
+static int c_addr2bin(struct sample *smp)
+{
+ struct chunk *chk = get_trash_chunk();
+
+ if (smp->data.type == SMP_T_IPV4) {
+ chk->len = 4;
+ memcpy(chk->str, &smp->data.u.ipv4, chk->len);
+ }
+ else if (smp->data.type == SMP_T_IPV6) {
+ chk->len = 16;
+ memcpy(chk->str, &smp->data.u.ipv6, chk->len);
+ }
+ else
+ return 0;
+
+ smp->data.u.str = *chk;
+ smp->data.type = SMP_T_BIN;
+ return 1;
+}
+
+static int c_int2bin(struct sample *smp)
+{
+ struct chunk *chk = get_trash_chunk();
+
+ *(unsigned long long int *)chk->str = htonll(smp->data.u.sint);
+ chk->len = 8;
+
+ smp->data.u.str = *chk;
+ smp->data.type = SMP_T_BIN;
+ return 1;
+}
+
+
+/*****************************************************************/
+/* Sample casts matrix: */
+/* sample_casts[from type][to type] */
+/* NULL pointer used for impossible sample casts */
+/*****************************************************************/
+
+sample_cast_fct sample_casts[SMP_TYPES][SMP_TYPES] = {
+/* to: ANY BOOL SINT ADDR IPV4 IPV6 STR BIN METH */
+/* from: ANY */ { c_none, c_none, c_none, c_none, c_none, c_none, c_none, c_none, c_none, },
+/* BOOL */ { c_none, c_none, c_none, NULL, NULL, NULL, c_int2str, NULL, NULL, },
+/* SINT */ { c_none, c_none, c_none, c_int2ip, c_int2ip, c_int2ipv6, c_int2str, c_int2bin, NULL, },
+/* ADDR */ { c_none, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, },
+/* IPV4 */ { c_none, NULL, c_ip2int, c_none, c_none, c_ip2ipv6, c_ip2str, c_addr2bin, NULL, },
+/* IPV6 */ { c_none, NULL, NULL, c_none, c_ipv62ip,c_none, c_ipv62str, c_addr2bin, NULL, },
+/* STR */ { c_none, c_str2int, c_str2int, c_str2addr, c_str2ip, c_str2ipv6, c_none, c_none, c_str2meth, },
+/* BIN */ { c_none, NULL, NULL, NULL, NULL, NULL, c_bin2str, c_none, c_str2meth, },
+/* METH */ { c_none, NULL, NULL, NULL, NULL, NULL, c_meth2str, c_meth2str, c_none, }
+};
+
+/*
+ * Parse a sample expression configuration:
+ * fetch keyword followed by format conversion keywords.
+ * Returns a pointer to an allocated sample expression structure, or NULL in
+ * case of error. The caller must have set al->ctx.
+ */
+struct sample_expr *sample_parse_expr(char **str, int *idx, const char *file, int line, char **err_msg, struct arg_list *al)
+{
+ const char *begw; /* beginning of word */
+ const char *endw; /* end of word */
+ const char *endt; /* end of term */
+ struct sample_expr *expr;
+ struct sample_fetch *fetch;
+ struct sample_conv *conv;
+ unsigned long prev_type;
+ char *fkw = NULL;
+ char *ckw = NULL;
+ int err_arg;
+
+ begw = str[*idx];
+ for (endw = begw; *endw && *endw != '(' && *endw != ','; endw++);
+
+ if (endw == begw) {
+ memprintf(err_msg, "missing fetch method");
+ goto out_error;
+ }
+
+ /* keep a copy of the current fetch keyword for error reporting */
+ fkw = my_strndup(begw, endw - begw);
+
+ fetch = find_sample_fetch(begw, endw - begw);
+ if (!fetch) {
+ memprintf(err_msg, "unknown fetch method '%s'", fkw);
+ goto out_error;
+ }
+
+ endt = endw;
+ if (*endt == '(') {
+ /* look for the end of this term and skip the opening parenthesis */
+ endt = ++endw;
+ while (*endt && *endt != ')')
+ endt++;
+ if (*endt != ')') {
+ memprintf(err_msg, "missing closing ')' after arguments to fetch keyword '%s'", fkw);
+ goto out_error;
+ }
+ }
+
+ /* At this point, we have :
+ * - begw : beginning of the keyword
+ * - endw : end of the keyword, first character not part of keyword
+ * nor the opening parenthesis (so first character of args
+ * if present).
+ * - endt : end of the term (=endw or last parenthesis if args are present)
+ */
+
+ if (fetch->out_type >= SMP_TYPES) {
+ memprintf(err_msg, "return type of fetch method '%s' is unknown", fkw);
+ goto out_error;
+ }
+ prev_type = fetch->out_type;
+
+ expr = calloc(1, sizeof(struct sample_expr));
+ if (!expr)
+ goto out_error;
+
+ LIST_INIT(&(expr->conv_exprs));
+ expr->fetch = fetch;
+ expr->arg_p = empty_arg_list;
+
+ /* Note that we call the argument parser even with an empty string,
+ * this allows it to automatically create entries for mandatory
+ * implicit arguments (eg: local proxy name).
+ */
+ al->kw = expr->fetch->kw;
+ al->conv = NULL;
+ if (make_arg_list(endw, endt - endw, fetch->arg_mask, &expr->arg_p, err_msg, NULL, &err_arg, al) < 0) {
+ memprintf(err_msg, "fetch method '%s' : %s", fkw, *err_msg);
+ goto out_error;
+ }
+
+ if (!expr->arg_p) {
+ expr->arg_p = empty_arg_list;
+ }
+ else if (fetch->val_args && !fetch->val_args(expr->arg_p, err_msg)) {
+ memprintf(err_msg, "invalid args in fetch method '%s' : %s", fkw, *err_msg);
+ goto out_error;
+ }
+
+ /* Now process the converters if any. We have two supported syntaxes
+ * for the converters, which can be combined :
+ * - comma-delimited list of converters just after the keyword and args ;
+ * - one converter per keyword
+ * The combination allows each keyword to be a comma-delimited series
+ * of converters.
+ *
+ * We want to process the former first, then the latter. For this we start
+ * from the beginning of the supposed place in the existing conv chain,
+ * which starts at the last comma (endt).
+ */
+
+ while (1) {
+ struct sample_conv_expr *conv_expr;
+
+ if (*endt == ')') /* skip last closing parenthesis */
+ endt++;
+
+ if (*endt && *endt != ',') {
+ if (ckw)
+ memprintf(err_msg, "missing comma after conv keyword '%s'", ckw);
+ else
+ memprintf(err_msg, "missing comma after fetch keyword '%s'", fkw);
+ goto out_error;
+ }
+
+ while (*endt == ',') /* then trailing commas */
+ endt++;
+
+ begw = endt; /* start of conv keyword */
+
+ if (!*begw) {
+ /* none ? skip to next string */
+ (*idx)++;
+ begw = str[*idx];
+ if (!begw || !*begw)
+ break;
+ }
+
+ for (endw = begw; *endw && *endw != '(' && *endw != ','; endw++);
+
+ free(ckw);
+ ckw = my_strndup(begw, endw - begw);
+
+ conv = find_sample_conv(begw, endw - begw);
+ if (!conv) {
+ /* we found an isolated keyword that we don't know, it's not ours */
+ if (begw == str[*idx])
+ break;
+ memprintf(err_msg, "unknown conv method '%s'", ckw);
+ goto out_error;
+ }
+
+ endt = endw;
+ if (*endt == '(') {
+ /* look for the end of this term */
+ while (*endt && *endt != ')')
+ endt++;
+ if (*endt != ')') {
+ memprintf(err_msg, "syntax error: missing ')' after conv keyword '%s'", ckw);
+ goto out_error;
+ }
+ }
+
+ if (conv->in_type >= SMP_TYPES || conv->out_type >= SMP_TYPES) {
+ memprintf(err_msg, "return type of conv method '%s' is unknown", ckw);
+ goto out_error;
+ }
+
+ /* If impossible type conversion */
+ if (!sample_casts[prev_type][conv->in_type]) {
+ memprintf(err_msg, "conv method '%s' cannot be applied", ckw);
+ goto out_error;
+ }
+
+ prev_type = conv->out_type;
+ conv_expr = calloc(1, sizeof(struct sample_conv_expr));
+ if (!conv_expr)
+ goto out_error;
+
+ LIST_ADDQ(&(expr->conv_exprs), &(conv_expr->list));
+ conv_expr->conv = conv;
+
+ if (endt != endw) {
+ int err_arg;
+
+ if (!conv->arg_mask) {
+ memprintf(err_msg, "conv method '%s' does not support any args", ckw);
+ goto out_error;
+ }
+
+ al->kw = expr->fetch->kw;
+ al->conv = conv_expr->conv->kw;
+ if (make_arg_list(endw + 1, endt - endw - 1, conv->arg_mask, &conv_expr->arg_p, err_msg, NULL, &err_arg, al) < 0) {
+ memprintf(err_msg, "invalid arg %d in conv method '%s' : %s", err_arg+1, ckw, *err_msg);
+ goto out_error;
+ }
+
+ if (!conv_expr->arg_p)
+ conv_expr->arg_p = empty_arg_list;
+
+ if (conv->val_args && !conv->val_args(conv_expr->arg_p, conv, file, line, err_msg)) {
+ memprintf(err_msg, "invalid args in conv method '%s' : %s", ckw, *err_msg);
+ goto out_error;
+ }
+ }
+ else if (ARGM(conv->arg_mask)) {
+ memprintf(err_msg, "missing args for conv method '%s'", ckw);
+ goto out_error;
+ }
+ }
+
+ out:
+ free(fkw);
+ free(ckw);
+ return expr;
+
+out_error:
+ /* TODO: prune_sample_expr(expr); */
+ expr = NULL;
+ goto out;
+}
+
+/*
+ * Process a fetch + format conversion defined by the sample expression <expr>
+ * on the request or response, considering the <opt> parameter.
+ * Returns a pointer to a typed sample structure containing the result, or
+ * NULL if the sample is not found or when the format conversion failed.
+ * If <p> is not null, the function returns the result in the structure
+ * pointed to by <p>. If <p> is null, it returns a pointer to a static sample
+ * structure.
+ *
+ * Note: the fetch functions are required to properly set the return type. The
+ * conversion functions must do so too. However the cast functions do not need
+ * to since they're made to cast multiple types according to what is required.
+ *
+ * The caller may indicate in <opt> if it considers the result final or not.
+ * The caller needs to check the SMP_F_MAY_CHANGE flag in p->flags to verify
+ * if the result is stable or not, according to the following table :
+ *
+ * return MAY_CHANGE FINAL Meaning for the sample
+ * NULL 0 * Not present and will never be (eg: header)
+ * NULL 1 0 Not present yet, could change (eg: POST param)
+ * NULL 1 1 Not present yet, will not change anymore
+ * smp 0 * Present and will not change (eg: header)
+ * smp 1 0 Present, may change (eg: request length)
+ * smp 1 1 Present, last known value (eg: request length)
+ */
+struct sample *sample_process(struct proxy *px, struct session *sess,
+ struct stream *strm, unsigned int opt,
+ struct sample_expr *expr, struct sample *p)
+{
+ struct sample_conv_expr *conv_expr;
+
+ if (p == NULL) {
+ p = &temp_smp;
+ memset(p, 0, sizeof(*p));
+ }
+
+ p->px = px;
+ p->sess = sess;
+ p->strm = strm;
+ p->opt = opt;
+ if (!expr->fetch->process(expr->arg_p, p, expr->fetch->kw, expr->fetch->private))
+ return NULL;
+
+ list_for_each_entry(conv_expr, &expr->conv_exprs, list) {
+ /* we want to ensure that p->type can be cast to
+ * conv_expr->conv->in_type. We have 3 possibilities :
+ * - NULL => not castable.
+ * - c_none => nothing to do (let's optimize it)
+ * - other => apply cast and prepare to fail
+ */
+ if (!sample_casts[p->data.type][conv_expr->conv->in_type])
+ return NULL;
+
+ if (sample_casts[p->data.type][conv_expr->conv->in_type] != c_none &&
+ !sample_casts[p->data.type][conv_expr->conv->in_type](p))
+ return NULL;
+
+ /* OK cast succeeded */
+
+ if (!conv_expr->conv->process(conv_expr->arg_p, p, conv_expr->conv->private))
+ return NULL;
+ }
+ return p;
+}
+
+/*
+ * Resolve all remaining arguments in proxy <p>. Returns the number of
+ * errors or 0 if everything is fine.
+ */
+int smp_resolve_args(struct proxy *p)
+{
+ struct arg_list *cur, *bak;
+ const char *ctx, *where;
+ const char *conv_ctx, *conv_pre, *conv_pos;
+ struct userlist *ul;
+ struct my_regex *reg;
+ struct arg *arg;
+ int cfgerr = 0;
+ int rflags;
+
+ list_for_each_entry_safe(cur, bak, &p->conf.args.list, list) {
+ struct proxy *px;
+ struct server *srv;
+ char *pname, *sname;
+ char *err;
+
+ arg = cur->arg;
+
+ /* prepare output messages */
+ conv_pre = conv_pos = conv_ctx = "";
+ if (cur->conv) {
+ conv_ctx = cur->conv;
+ conv_pre = "conversion keyword '";
+ conv_pos = "' for ";
+ }
+
+ where = "in";
+ ctx = "sample fetch keyword";
+ switch (cur->ctx) {
+ case ARGC_STK: where = "in stick rule in"; break;
+ case ARGC_TRK: where = "in tracking rule in"; break;
+ case ARGC_LOG: where = "in log-format string in"; break;
+ case ARGC_LOGSD: where = "in log-format-sd string in"; break;
+ case ARGC_HRQ: where = "in http-request header format string in"; break;
+ case ARGC_HRS: where = "in http-response header format string in"; break;
+ case ARGC_UIF: where = "in unique-id-format string in"; break;
+ case ARGC_RDR: where = "in redirect format string in"; break;
+ case ARGC_CAP: where = "in capture rule in"; break;
+ case ARGC_ACL: ctx = "ACL keyword"; break;
+ case ARGC_SRV: where = "in server directive in"; break;
+ }
+
+ /* set a few default settings */
+ px = p;
+ pname = p->id;
+
+ switch (arg->type) {
+ case ARGT_SRV:
+ if (!arg->data.str.len) {
+ Alert("parsing [%s:%d] : missing server name in arg %d of %s%s%s%s '%s' %s proxy '%s'.\n",
+ cur->file, cur->line,
+ cur->arg_pos + 1, conv_pre, conv_ctx, conv_pos, ctx, cur->kw, where, p->id);
+ cfgerr++;
+ continue;
+ }
+
+ /* we support two formats : "bck/srv" and "srv" */
+ sname = strrchr(arg->data.str.str, '/');
+
+ if (sname) {
+ *sname++ = '\0';
+ pname = arg->data.str.str;
+
+ px = proxy_be_by_name(pname);
+ if (!px) {
+ Alert("parsing [%s:%d] : unable to find proxy '%s' referenced in arg %d of %s%s%s%s '%s' %s proxy '%s'.\n",
+ cur->file, cur->line, pname,
+ cur->arg_pos + 1, conv_pre, conv_ctx, conv_pos, ctx, cur->kw, where, p->id);
+ cfgerr++;
+ break;
+ }
+ }
+ else
+ sname = arg->data.str.str;
+
+ srv = findserver(px, sname);
+ if (!srv) {
+ Alert("parsing [%s:%d] : unable to find server '%s' in proxy '%s', referenced in arg %d of %s%s%s%s '%s' %s proxy '%s'.\n",
+ cur->file, cur->line, sname, pname,
+ cur->arg_pos + 1, conv_pre, conv_ctx, conv_pos, ctx, cur->kw, where, p->id);
+ cfgerr++;
+ break;
+ }
+
+ free(arg->data.str.str);
+ arg->data.str.str = NULL;
+ arg->unresolved = 0;
+ arg->data.srv = srv;
+ break;
+
+ case ARGT_FE:
+ if (arg->data.str.len) {
+ pname = arg->data.str.str;
+ px = proxy_fe_by_name(pname);
+ }
+
+ if (!px) {
+ Alert("parsing [%s:%d] : unable to find frontend '%s' referenced in arg %d of %s%s%s%s '%s' %s proxy '%s'.\n",
+ cur->file, cur->line, pname,
+ cur->arg_pos + 1, conv_pre, conv_ctx, conv_pos, ctx, cur->kw, where, p->id);
+ cfgerr++;
+ break;
+ }
+
+ if (!(px->cap & PR_CAP_FE)) {
+				Alert("parsing [%s:%d] : proxy '%s', referenced in arg %d of %s%s%s%s '%s' %s proxy '%s', lacks the frontend capability.\n",
+ cur->file, cur->line, pname,
+ cur->arg_pos + 1, conv_pre, conv_ctx, conv_pos, ctx, cur->kw, where, p->id);
+ cfgerr++;
+ break;
+ }
+
+ free(arg->data.str.str);
+ arg->data.str.str = NULL;
+ arg->unresolved = 0;
+ arg->data.prx = px;
+ break;
+
+ case ARGT_BE:
+ if (arg->data.str.len) {
+ pname = arg->data.str.str;
+ px = proxy_be_by_name(pname);
+ }
+
+ if (!px) {
+ Alert("parsing [%s:%d] : unable to find backend '%s' referenced in arg %d of %s%s%s%s '%s' %s proxy '%s'.\n",
+ cur->file, cur->line, pname,
+ cur->arg_pos + 1, conv_pre, conv_ctx, conv_pos, ctx, cur->kw, where, p->id);
+ cfgerr++;
+ break;
+ }
+
+ if (!(px->cap & PR_CAP_BE)) {
+				Alert("parsing [%s:%d] : proxy '%s', referenced in arg %d of %s%s%s%s '%s' %s proxy '%s', lacks the backend capability.\n",
+ cur->file, cur->line, pname,
+ cur->arg_pos + 1, conv_pre, conv_ctx, conv_pos, ctx, cur->kw, where, p->id);
+ cfgerr++;
+ break;
+ }
+
+ free(arg->data.str.str);
+ arg->data.str.str = NULL;
+ arg->unresolved = 0;
+ arg->data.prx = px;
+ break;
+
+ case ARGT_TAB:
+ if (arg->data.str.len) {
+ pname = arg->data.str.str;
+ px = proxy_tbl_by_name(pname);
+ }
+
+ if (!px) {
+ Alert("parsing [%s:%d] : unable to find table '%s' referenced in arg %d of %s%s%s%s '%s' %s proxy '%s'.\n",
+ cur->file, cur->line, pname,
+ cur->arg_pos + 1, conv_pre, conv_ctx, conv_pos, ctx, cur->kw, where, p->id);
+ cfgerr++;
+ break;
+ }
+
+ if (!px->table.size) {
+ Alert("parsing [%s:%d] : no table in proxy '%s' referenced in arg %d of %s%s%s%s '%s' %s proxy '%s'.\n",
+ cur->file, cur->line, pname,
+ cur->arg_pos + 1, conv_pre, conv_ctx, conv_pos, ctx, cur->kw, where, p->id);
+ cfgerr++;
+ break;
+ }
+
+ free(arg->data.str.str);
+ arg->data.str.str = NULL;
+ arg->unresolved = 0;
+ arg->data.prx = px;
+ break;
+
+ case ARGT_USR:
+ if (!arg->data.str.len) {
+ Alert("parsing [%s:%d] : missing userlist name in arg %d of %s%s%s%s '%s' %s proxy '%s'.\n",
+ cur->file, cur->line,
+ cur->arg_pos + 1, conv_pre, conv_ctx, conv_pos, ctx, cur->kw, where, p->id);
+ cfgerr++;
+ break;
+ }
+
+ if (p->uri_auth && p->uri_auth->userlist &&
+ !strcmp(p->uri_auth->userlist->name, arg->data.str.str))
+ ul = p->uri_auth->userlist;
+ else
+ ul = auth_find_userlist(arg->data.str.str);
+
+ if (!ul) {
+ Alert("parsing [%s:%d] : unable to find userlist '%s' referenced in arg %d of %s%s%s%s '%s' %s proxy '%s'.\n",
+ cur->file, cur->line, arg->data.str.str,
+ cur->arg_pos + 1, conv_pre, conv_ctx, conv_pos, ctx, cur->kw, where, p->id);
+ cfgerr++;
+ break;
+ }
+
+ free(arg->data.str.str);
+ arg->data.str.str = NULL;
+ arg->unresolved = 0;
+ arg->data.usr = ul;
+ break;
+
+ case ARGT_REG:
+ if (!arg->data.str.len) {
+ Alert("parsing [%s:%d] : missing regex in arg %d of %s%s%s%s '%s' %s proxy '%s'.\n",
+ cur->file, cur->line,
+ cur->arg_pos + 1, conv_pre, conv_ctx, conv_pos, ctx, cur->kw, where, p->id);
+ cfgerr++;
+ continue;
+ }
+
+ reg = calloc(1, sizeof(*reg));
+ if (!reg) {
+ Alert("parsing [%s:%d] : not enough memory to build regex in arg %d of %s%s%s%s '%s' %s proxy '%s'.\n",
+ cur->file, cur->line,
+ cur->arg_pos + 1, conv_pre, conv_ctx, conv_pos, ctx, cur->kw, where, p->id);
+ cfgerr++;
+ continue;
+ }
+
+ rflags = 0;
+ rflags |= (arg->type_flags & ARGF_REG_ICASE) ? REG_ICASE : 0;
+ err = NULL;
+
+ if (!regex_comp(arg->data.str.str, reg, !(rflags & REG_ICASE), 1 /* capture substr */, &err)) {
+ Alert("parsing [%s:%d] : error in regex '%s' in arg %d of %s%s%s%s '%s' %s proxy '%s' : %s.\n",
+ cur->file, cur->line,
+ arg->data.str.str,
+ cur->arg_pos + 1, conv_pre, conv_ctx, conv_pos, ctx, cur->kw, where, p->id, err);
+ cfgerr++;
+ continue;
+ }
+
+ free(arg->data.str.str);
+ arg->data.str.str = NULL;
+ arg->unresolved = 0;
+ arg->data.reg = reg;
+ break;
+
+ }
+
+ LIST_DEL(&cur->list);
+ free(cur);
+ } /* end of args processing */
+
+ return cfgerr;
+}
+
+/*
+ * Process a fetch + format conversion as defined by the sample expression
+ * <expr> on request or response considering the <opt> parameter. The output is
+ * not explicitly set to <smp_type>, but shall be compatible with it as
+ * specified by the 'sample_casts' table. If a stable sample can be fetched, or an
+ * unstable one when <opt> contains SMP_OPT_FINAL, the sample is converted and
+ * returned without the SMP_F_MAY_CHANGE flag. If an unstable sample is found
+ * and <opt> does not contain SMP_OPT_FINAL, then the sample is returned as-is
+ * with its SMP_F_MAY_CHANGE flag so that the caller can check it and decide to
+ * take actions (eg: wait longer). If a sample could not be found or could not
+ * be converted, NULL is returned. The caller MUST NOT use the sample if the
+ * SMP_F_MAY_CHANGE flag is present, as it is used only as a hint that there is
+ * still hope to get it after waiting longer, and is not converted to the requested type.
+ * The possible output combinations are the following :
+ *
+ * return MAY_CHANGE FINAL Meaning for the sample
+ * NULL * * Not present and will never be (eg: header)
+ * smp 0 * Final value converted (eg: header)
+ * smp 1 0 Not present yet, may appear later (eg: header)
+ * smp 1 1 never happens (either flag is cleared on output)
+ */
+struct sample *sample_fetch_as_type(struct proxy *px, struct session *sess,
+ struct stream *strm, unsigned int opt,
+ struct sample_expr *expr, int smp_type)
+{
+ struct sample *smp = &temp_smp;
+
+ memset(smp, 0, sizeof(*smp));
+
+ if (!sample_process(px, sess, strm, opt, expr, smp)) {
+ if ((smp->flags & SMP_F_MAY_CHANGE) && !(opt & SMP_OPT_FINAL))
+ return smp;
+ return NULL;
+ }
+
+ if (!sample_casts[smp->data.type][smp_type])
+ return NULL;
+
+ if (!sample_casts[smp->data.type][smp_type](smp))
+ return NULL;
+
+ smp->flags &= ~SMP_F_MAY_CHANGE;
+ return smp;
+}
+
+/*****************************************************************/
+/* Sample format convert functions */
+/* These functions set the data type on return. */
+/*****************************************************************/
+
+#ifdef DEBUG_EXPR
+static int sample_conv_debug(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ int i;
+ struct sample tmp;
+
+ if (!(global.mode & MODE_QUIET) || (global.mode & (MODE_VERBOSE | MODE_STARTING))) {
+ fprintf(stderr, "[debug converter] type: %s ", smp_to_type[smp->data.type]);
+ if (!sample_casts[smp->data.type][SMP_T_STR]) {
+ fprintf(stderr, "(undisplayable)");
+ } else {
+
+			/* Copy the sample fetch. Marking the copy as const
+			 * means the cast will copy the data if a
+			 * transformation is required.
+			 */
+ memcpy(&tmp, smp, sizeof(struct sample));
+ tmp.flags = SMP_F_CONST;
+
+ if (!sample_casts[smp->data.type][SMP_T_STR](&tmp))
+ fprintf(stderr, "(undisplayable)");
+
+ else {
+				/* Display only the printable chars. */
+ fprintf(stderr, "<");
+ for (i = 0; i < tmp.data.u.str.len; i++) {
+ if (isprint(tmp.data.u.str.str[i]))
+ fputc(tmp.data.u.str.str[i], stderr);
+ else
+ fputc('.', stderr);
+ }
+ }
+ fprintf(stderr, ">\n");
+ }
+ }
+ return 1;
+}
+#endif
+
+static int sample_conv_bin2base64(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct chunk *trash = get_trash_chunk();
+ int b64_len;
+
+ trash->len = 0;
+ b64_len = a2base64(smp->data.u.str.str, smp->data.u.str.len, trash->str, trash->size);
+ if (b64_len < 0)
+ return 0;
+
+ trash->len = b64_len;
+ smp->data.u.str = *trash;
+ smp->data.type = SMP_T_STR;
+ smp->flags &= ~SMP_F_CONST;
+ return 1;
+}
+
+static int sample_conv_bin2hex(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct chunk *trash = get_trash_chunk();
+ unsigned char c;
+ int ptr = 0;
+
+ trash->len = 0;
+ while (ptr < smp->data.u.str.len && trash->len <= trash->size - 2) {
+ c = smp->data.u.str.str[ptr++];
+ trash->str[trash->len++] = hextab[(c >> 4) & 0xF];
+ trash->str[trash->len++] = hextab[c & 0xF];
+ }
+ smp->data.u.str = *trash;
+ smp->data.type = SMP_T_STR;
+ smp->flags &= ~SMP_F_CONST;
+ return 1;
+}
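The nibble-to-hex expansion above can be exercised in isolation. Below is a minimal standalone sketch of the same idea; `hextab` and the `chunk` type are HAProxy internals, so this version uses plain buffers and a local table:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Expand binary input into lowercase hex, two output bytes per input byte.
 * Returns the number of bytes written, stopping early if the output is full.
 */
static size_t bin2hex(const unsigned char *in, size_t in_len,
                      char *out, size_t out_size)
{
    static const char hextab[] = "0123456789abcdef";
    size_t i, len = 0;

    for (i = 0; i < in_len && len + 2 <= out_size; i++) {
        out[len++] = hextab[(in[i] >> 4) & 0xF];
        out[len++] = hextab[in[i] & 0xF];
    }
    return len;
}
```

Like the converter, the sketch silently truncates when the output buffer is too small rather than failing.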
+
+/* hashes the binary input into a 32-bit unsigned int */
+static int sample_conv_djb2(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ smp->data.u.sint = hash_djb2(smp->data.u.str.str, smp->data.u.str.len);
+ if (arg_p && arg_p->data.sint)
+ smp->data.u.sint = full_hash(smp->data.u.sint);
+ smp->data.type = SMP_T_SINT;
+ return 1;
+}
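The djb2 hash itself is simple enough to sketch standalone (`hash_djb2` and `full_hash` are HAProxy helpers; this shows only the classic djb2 loop, h = h * 33 + c seeded with 5381):

```c
#include <assert.h>
#include <stddef.h>

/* Classic djb2 string hash: multiply by 33 (via shift-and-add) and add
 * each byte, starting from the magic seed 5381.
 */
static unsigned int djb2(const char *key, size_t len)
{
    unsigned int h = 5381;
    size_t i;

    for (i = 0; i < len; i++)
        h = (h << 5) + h + (unsigned char)key[i];
    return h;
}
```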
+
+static int sample_conv_str2lower(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ int i;
+
+ if (!smp_dup(smp))
+ return 0;
+
+ if (!smp->data.u.str.size)
+ return 0;
+
+ for (i = 0; i < smp->data.u.str.len; i++) {
+ if ((smp->data.u.str.str[i] >= 'A') && (smp->data.u.str.str[i] <= 'Z'))
+ smp->data.u.str.str[i] += 'a' - 'A';
+ }
+ return 1;
+}
+
+static int sample_conv_str2upper(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ int i;
+
+ if (!smp_dup(smp))
+ return 0;
+
+ if (!smp->data.u.str.size)
+ return 0;
+
+ for (i = 0; i < smp->data.u.str.len; i++) {
+ if ((smp->data.u.str.str[i] >= 'a') && (smp->data.u.str.str[i] <= 'z'))
+ smp->data.u.str.str[i] += 'A' - 'a';
+ }
+ return 1;
+}
+
+/* takes the netmask in arg_p */
+static int sample_conv_ipmask(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ smp->data.u.ipv4.s_addr &= arg_p->data.ipv4.s_addr;
+ smp->data.type = SMP_T_IPV4;
+ return 1;
+}
+
+/* Takes a UINT value on input, supposed to represent the time since EPOCH,
+ * adds an optional offset found in args[1] and emits a string representing
+ * the local time in the format specified in args[0] using strftime().
+ */
+static int sample_conv_ltime(const struct arg *args, struct sample *smp, void *private)
+{
+ struct chunk *temp;
+	/* With high numbers, the date returned can be negative; the 55-bit mask prevents this. */
+ time_t curr_date = smp->data.u.sint & 0x007fffffffffffffLL;
+ struct tm *tm;
+
+ /* add offset */
+ if (args[1].type == ARGT_SINT)
+ curr_date += args[1].data.sint;
+
+ tm = localtime(&curr_date);
+ if (!tm)
+ return 0;
+ temp = get_trash_chunk();
+ temp->len = strftime(temp->str, temp->size, args[0].data.str.str, tm);
+ smp->data.u.str = *temp;
+ smp->data.type = SMP_T_STR;
+ return 1;
+}
+
+/* hashes the binary input into a 32-bit unsigned int */
+static int sample_conv_sdbm(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ smp->data.u.sint = hash_sdbm(smp->data.u.str.str, smp->data.u.str.len);
+ if (arg_p && arg_p->data.sint)
+ smp->data.u.sint = full_hash(smp->data.u.sint);
+ smp->data.type = SMP_T_SINT;
+ return 1;
+}
+
+/* Takes a UINT value on input, supposed to represent the time since EPOCH,
+ * adds an optional offset found in args[1] and emits a string representing
+ * the UTC date in the format specified in args[0] using strftime().
+ */
+static int sample_conv_utime(const struct arg *args, struct sample *smp, void *private)
+{
+ struct chunk *temp;
+	/* With high numbers, the date returned can be negative; the 55-bit mask prevents this. */
+ time_t curr_date = smp->data.u.sint & 0x007fffffffffffffLL;
+ struct tm *tm;
+
+ /* add offset */
+ if (args[1].type == ARGT_SINT)
+ curr_date += args[1].data.sint;
+
+ tm = gmtime(&curr_date);
+ if (!tm)
+ return 0;
+ temp = get_trash_chunk();
+ temp->len = strftime(temp->str, temp->size, args[0].data.str.str, tm);
+ smp->data.u.str = *temp;
+ smp->data.type = SMP_T_STR;
+ return 1;
+}
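The epoch-plus-offset-to-string path reduces to a gmtime() + strftime() pair. A minimal standalone sketch (the 55-bit mask and the trash-chunk handling are HAProxy specifics and are omitted here):

```c
#include <assert.h>
#include <string.h>
#include <time.h>

/* Format an epoch timestamp plus an optional offset as a UTC date string
 * using strftime(). Returns the number of bytes written, or 0 on error
 * (including a format result that does not fit in the output buffer).
 */
static size_t utime_fmt(long long epoch, long long offset,
                        const char *fmt, char *out, size_t out_size)
{
    time_t t = (time_t)(epoch + offset);
    struct tm *tm = gmtime(&t);

    if (!tm)
        return 0;
    return strftime(out, out_size, fmt, tm);
}
```

Using gmtime() instead of localtime() makes the output independent of the process timezone, which is what the "utime" converter relies on.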
+
+/* hashes the binary input into a 32-bit unsigned int */
+static int sample_conv_wt6(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ smp->data.u.sint = hash_wt6(smp->data.u.str.str, smp->data.u.str.len);
+ if (arg_p && arg_p->data.sint)
+ smp->data.u.sint = full_hash(smp->data.u.sint);
+ smp->data.type = SMP_T_SINT;
+ return 1;
+}
+
+/* hashes the binary input into a 32-bit unsigned int */
+static int sample_conv_crc32(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ smp->data.u.sint = hash_crc32(smp->data.u.str.str, smp->data.u.str.len);
+ if (arg_p && arg_p->data.sint)
+ smp->data.u.sint = full_hash(smp->data.u.sint);
+ smp->data.type = SMP_T_SINT;
+ return 1;
+}
+
+/* This function escapes special JSON characters. The returned string can be
+ * safely placed between two '"' and used as a JSON string. A JSON string is
+ * defined like this:
+ *
+ *    any Unicode character except '"' or '\' or control character
+ *    \", \\, \/, \b, \f, \n, \r, \t, \u + four-hex-digits
+ *
+ * The enum input_type contains all the allowed modes for decoding the input
+ * string.
+ */
+enum input_type {
+ IT_ASCII = 0,
+ IT_UTF8,
+ IT_UTF8S,
+ IT_UTF8P,
+ IT_UTF8PS,
+};
+static int sample_conv_json_check(struct arg *arg, struct sample_conv *conv,
+ const char *file, int line, char **err)
+{
+ if (!arg) {
+ memprintf(err, "Unexpected empty arg list");
+ return 0;
+ }
+
+ if (arg->type != ARGT_STR) {
+ memprintf(err, "Unexpected arg type");
+ return 0;
+ }
+
+ if (strcmp(arg->data.str.str, "") == 0) {
+ arg->type = ARGT_SINT;
+ arg->data.sint = IT_ASCII;
+ return 1;
+ }
+
+ else if (strcmp(arg->data.str.str, "ascii") == 0) {
+ arg->type = ARGT_SINT;
+ arg->data.sint = IT_ASCII;
+ return 1;
+ }
+
+ else if (strcmp(arg->data.str.str, "utf8") == 0) {
+ arg->type = ARGT_SINT;
+ arg->data.sint = IT_UTF8;
+ return 1;
+ }
+
+ else if (strcmp(arg->data.str.str, "utf8s") == 0) {
+ arg->type = ARGT_SINT;
+ arg->data.sint = IT_UTF8S;
+ return 1;
+ }
+
+ else if (strcmp(arg->data.str.str, "utf8p") == 0) {
+ arg->type = ARGT_SINT;
+ arg->data.sint = IT_UTF8P;
+ return 1;
+ }
+
+ else if (strcmp(arg->data.str.str, "utf8ps") == 0) {
+ arg->type = ARGT_SINT;
+ arg->data.sint = IT_UTF8PS;
+ return 1;
+ }
+
+	memprintf(err, "Unexpected input code type at file '%s', line %d. "
+	                "Allowed values are 'ascii', 'utf8', 'utf8s', 'utf8p' and 'utf8ps'", file, line);
+ return 0;
+}
+
+static int sample_conv_json(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct chunk *temp;
+ char _str[7]; /* \u + 4 hex digit + null char for sprintf. */
+ const char *str;
+ int len;
+ enum input_type input_type = IT_ASCII;
+ unsigned int c;
+ unsigned int ret;
+ char *p;
+
+ if (arg_p)
+ input_type = arg_p->data.sint;
+
+ temp = get_trash_chunk();
+ temp->len = 0;
+
+ p = smp->data.u.str.str;
+ while (p < smp->data.u.str.str + smp->data.u.str.len) {
+
+ if (input_type == IT_ASCII) {
+ /* Read input as ASCII. */
+ c = *(unsigned char *)p;
+ p++;
+ }
+ else {
+ /* Read input as UTF8. */
+ ret = utf8_next(p, smp->data.u.str.len - ( p - smp->data.u.str.str ), &c);
+ p += utf8_return_length(ret);
+
+ if (input_type == IT_UTF8 && utf8_return_code(ret) != UTF8_CODE_OK)
+ return 0;
+ if (input_type == IT_UTF8S && utf8_return_code(ret) != UTF8_CODE_OK)
+ continue;
+ if (input_type == IT_UTF8P && utf8_return_code(ret) & (UTF8_CODE_INVRANGE|UTF8_CODE_BADSEQ))
+ return 0;
+ if (input_type == IT_UTF8PS && utf8_return_code(ret) & (UTF8_CODE_INVRANGE|UTF8_CODE_BADSEQ))
+ continue;
+
+ /* Check too big values. */
+ if ((unsigned int)c > 0xffff) {
+ if (input_type == IT_UTF8 || input_type == IT_UTF8P)
+ return 0;
+ continue;
+ }
+ }
+
+ /* Convert character. */
+ if (c == '"') {
+ len = 2;
+ str = "\\\"";
+ }
+ else if (c == '\\') {
+ len = 2;
+ str = "\\\\";
+ }
+ else if (c == '/') {
+ len = 2;
+ str = "\\/";
+ }
+ else if (c == '\b') {
+ len = 2;
+ str = "\\b";
+ }
+ else if (c == '\f') {
+ len = 2;
+ str = "\\f";
+ }
+ else if (c == '\r') {
+ len = 2;
+ str = "\\r";
+ }
+ else if (c == '\n') {
+ len = 2;
+ str = "\\n";
+ }
+ else if (c == '\t') {
+ len = 2;
+ str = "\\t";
+ }
+ else if (c > 0xff || !isprint(c)) {
+			/* isprint() may crash if c is too big: the man page
+			 * says c must have the value of an unsigned char or
+			 * EOF.
+			 */
+ len = 6;
+ _str[0] = '\\';
+ _str[1] = 'u';
+ snprintf(&_str[2], 5, "%04x", (unsigned short)c);
+ str = _str;
+ }
+ else {
+ len = 1;
+ str = (char *)&c;
+ }
+
+ /* Check length */
+ if (temp->len + len > temp->size)
+ return 0;
+
+ /* Copy string. */
+ memcpy(temp->str + temp->len, str, len);
+ temp->len += len;
+ }
+
+ smp->flags &= ~SMP_F_CONST;
+ smp->data.u.str = *temp;
+ smp->data.type = SMP_T_STR;
+
+ return 1;
+}
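A minimal standalone sketch of the same escaping rules for plain ASCII input (the UTF-8 decoding modes and the chunk plumbing are HAProxy specifics and are left out):

```c
#include <assert.h>
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Escape an ASCII string so it can safely sit between two '"' in a JSON
 * document. Returns 0 if the output buffer is too small, 1 otherwise.
 */
static int json_escape(const char *in, size_t in_len,
                       char *out, size_t out_size)
{
    size_t len = 0, i;

    for (i = 0; i < in_len; i++) {
        unsigned char c = (unsigned char)in[i];
        char seq[8];
        size_t n;

        switch (c) {
        case '"':  n = 2; memcpy(seq, "\\\"", 2); break;
        case '\\': n = 2; memcpy(seq, "\\\\", 2); break;
        case '/':  n = 2; memcpy(seq, "\\/", 2);  break;
        case '\b': n = 2; memcpy(seq, "\\b", 2);  break;
        case '\f': n = 2; memcpy(seq, "\\f", 2);  break;
        case '\n': n = 2; memcpy(seq, "\\n", 2);  break;
        case '\r': n = 2; memcpy(seq, "\\r", 2);  break;
        case '\t': n = 2; memcpy(seq, "\\t", 2);  break;
        default:
            if (!isprint(c)) {
                /* non-printable chars become \uXXXX escapes */
                n = 6;
                snprintf(seq, sizeof(seq), "\\u%04x", c);
            } else {
                n = 1;
                seq[0] = (char)c;
            }
            break;
        }
        if (len + n + 1 > out_size)
            return 0;
        memcpy(out + len, seq, n);
        len += n;
    }
    out[len] = '\0';
    return 1;
}
```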
+
+/* This sample function is designed to extract some bytes from an input
+ * buffer. First arg is the offset; the optional second arg is the length
+ * to truncate the output to.
+ */
+static int sample_conv_bytes(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ if (smp->data.u.str.len <= arg_p[0].data.sint) {
+ smp->data.u.str.len = 0;
+ return 1;
+ }
+
+ if (smp->data.u.str.size)
+ smp->data.u.str.size -= arg_p[0].data.sint;
+ smp->data.u.str.len -= arg_p[0].data.sint;
+ smp->data.u.str.str += arg_p[0].data.sint;
+
+ if ((arg_p[1].type == ARGT_SINT) && (arg_p[1].data.sint < smp->data.u.str.len))
+ smp->data.u.str.len = arg_p[1].data.sint;
+
+ return 1;
+}
+
+static int sample_conv_field_check(struct arg *args, struct sample_conv *conv,
+ const char *file, int line, char **err)
+{
+ struct arg *arg = args;
+
+ if (!arg) {
+ memprintf(err, "Unexpected empty arg list");
+ return 0;
+ }
+
+ if (arg->type != ARGT_SINT) {
+ memprintf(err, "Unexpected arg type");
+ return 0;
+ }
+
+ if (!arg->data.sint) {
+ memprintf(err, "Unexpected value 0 for index");
+ return 0;
+ }
+
+ arg++;
+
+ if (arg->type != ARGT_STR) {
+ memprintf(err, "Unexpected arg type");
+ return 0;
+ }
+
+ if (!arg->data.str.len) {
+ memprintf(err, "Empty separators list");
+ return 0;
+ }
+
+ return 1;
+}
+
+/* This sample function is designed to return a selected part of a string
+ * (field). First arg is the index of the field (starts at 1).
+ * Second arg is a list of field separator characters (type string).
+ */
+static int sample_conv_field(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ unsigned int field;
+ char *start, *end;
+ int i;
+
+ if (!arg_p[0].data.sint)
+ return 0;
+
+ field = 1;
+ end = start = smp->data.u.str.str;
+ while (end - smp->data.u.str.str < smp->data.u.str.len) {
+
+ for (i = 0 ; i < arg_p[1].data.str.len ; i++) {
+ if (*end == arg_p[1].data.str.str[i]) {
+ if (field == arg_p[0].data.sint)
+ goto found;
+ start = end+1;
+ field++;
+ break;
+ }
+ }
+ end++;
+ }
+
+ /* Field not found */
+ if (field != arg_p[0].data.sint) {
+ smp->data.u.str.len = 0;
+ return 1;
+ }
+found:
+ smp->data.u.str.len = end - start;
+	/* If the returned string is empty, there is no need to change
+	   the pointers or to update the size */
+	if (!smp->data.u.str.len)
+		return 1;
+
+	/* Compute the remaining size before moving the start pointer,
+	   otherwise the subtraction below is always zero.
+	   Note: smp->data.u.str.size cannot be set to 0 */
+	if (smp->data.u.str.size)
+		smp->data.u.str.size -= start - smp->data.u.str.str;
+
+	smp->data.u.str.str = start;
+
+ return 1;
+}
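The field-extraction logic above (1-based index, any byte of the separator list splits the string, empty fields count) can be sketched standalone with NUL-terminated strings:

```c
#include <assert.h>
#include <string.h>

/* Return a pointer to field <idx> (1-based) of <str>, splitting on any
 * byte found in <seps>, and store its length in <len>. Empty fields are
 * counted. Returns NULL if the field does not exist.
 */
static const char *get_field(const char *str, const char *seps,
                             unsigned int idx, size_t *len)
{
    unsigned int field = 1;
    const char *start = str, *p;

    for (p = str; *p; p++) {
        if (strchr(seps, *p)) {
            if (field == idx)
                goto found;
            start = p + 1;
            field++;
        }
    }
    if (field != idx)
        return NULL;
found:
    *len = (size_t)(p - start);
    return start;
}
```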
+
+/* This sample function is designed to return a word from a string.
+ * First arg is the index of the word (starts at 1).
+ * Second arg is a list of word separator characters (type string).
+ */
+static int sample_conv_word(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ unsigned int word;
+ char *start, *end;
+ int i, issep, inword;
+
+ if (!arg_p[0].data.sint)
+ return 0;
+
+ word = 0;
+ inword = 0;
+ end = start = smp->data.u.str.str;
+ while (end - smp->data.u.str.str < smp->data.u.str.len) {
+ issep = 0;
+ for (i = 0 ; i < arg_p[1].data.str.len ; i++) {
+ if (*end == arg_p[1].data.str.str[i]) {
+ issep = 1;
+ break;
+ }
+ }
+ if (!inword) {
+ if (!issep) {
+ word++;
+ start = end;
+ inword = 1;
+ }
+ }
+ else if (issep) {
+ if (word == arg_p[0].data.sint)
+ goto found;
+ inword = 0;
+ }
+ end++;
+ }
+
+	/* Word not found */
+ if (word != arg_p[0].data.sint) {
+ smp->data.u.str.len = 0;
+ return 1;
+ }
+found:
+ smp->data.u.str.len = end - start;
+	/* If the returned string is empty, there is no need to change
+	   the pointers or to update the size */
+	if (!smp->data.u.str.len)
+		return 1;
+
+	/* Compute the remaining size before moving the start pointer,
+	   otherwise the subtraction below is always zero.
+	   Note: smp->data.u.str.size cannot be set to 0 */
+	if (smp->data.u.str.size)
+		smp->data.u.str.size -= start - smp->data.u.str.str;
+
+	smp->data.u.str.str = start;
+
+ return 1;
+}
+
+static int sample_conv_regsub_check(struct arg *args, struct sample_conv *conv,
+ const char *file, int line, char **err)
+{
+ struct arg *arg = args;
+ char *p;
+ int len;
+
+ /* arg0 is a regex, it uses type_flag for ICASE and global match */
+ arg[0].type_flags = 0;
+
+ if (arg[2].type != ARGT_STR)
+ return 1;
+
+ p = arg[2].data.str.str;
+ len = arg[2].data.str.len;
+ while (len) {
+ if (*p == 'i') {
+ arg[0].type_flags |= ARGF_REG_ICASE;
+ }
+ else if (*p == 'g') {
+ arg[0].type_flags |= ARGF_REG_GLOB;
+ }
+ else {
+ memprintf(err, "invalid regex flag '%c', only 'i' and 'g' are supported", *p);
+ return 0;
+ }
+ p++;
+ len--;
+ }
+ return 1;
+}
+
+/* This sample function is designed to do the equivalent of s/match/replace/ on
+ * the input string. It applies a regex and restarts from the last matched
+ * location until nothing matches anymore. First arg is the regex to apply to
+ * the input string, second arg is the replacement expression.
+ */
+static int sample_conv_regsub(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ char *start, *end;
+ struct my_regex *reg = arg_p[0].data.reg;
+ regmatch_t pmatch[MAX_MATCH];
+ struct chunk *trash = get_trash_chunk();
+ int flag, max;
+ int found;
+
+ start = smp->data.u.str.str;
+ end = start + smp->data.u.str.len;
+
+ flag = 0;
+ while (1) {
+ /* check for last round which is used to copy remaining parts
+ * when not running in global replacement mode.
+ */
+ found = 0;
+ if ((arg_p[0].type_flags & ARGF_REG_GLOB) || !(flag & REG_NOTBOL)) {
+ /* Note: we can have start == end on empty strings or at the end */
+ found = regex_exec_match2(reg, start, end - start, MAX_MATCH, pmatch, flag);
+ }
+
+ if (!found)
+ pmatch[0].rm_so = end - start;
+
+ /* copy the heading non-matching part (which may also be the tail if nothing matches) */
+ max = trash->size - trash->len;
+ if (max && pmatch[0].rm_so > 0) {
+ if (max > pmatch[0].rm_so)
+ max = pmatch[0].rm_so;
+ memcpy(trash->str + trash->len, start, max);
+ trash->len += max;
+ }
+
+ if (!found)
+ break;
+
+ /* replace the matching part */
+ max = trash->size - trash->len;
+ if (max) {
+ if (max > arg_p[1].data.str.len)
+ max = arg_p[1].data.str.len;
+ memcpy(trash->str + trash->len, arg_p[1].data.str.str, max);
+ trash->len += max;
+ }
+
+ /* stop here if we're done with this string */
+ if (start >= end)
+ break;
+
+ /* We have a special case for matches of length 0 (eg: "x*y*").
+ * These ones are considered to match in front of a character,
+ * so we have to copy that character and skip to the next one.
+ */
+ if (!pmatch[0].rm_eo) {
+ if (trash->len < trash->size)
+ trash->str[trash->len++] = start[pmatch[0].rm_eo];
+ pmatch[0].rm_eo++;
+ }
+
+ start += pmatch[0].rm_eo;
+ flag |= REG_NOTBOL;
+ }
+
+ smp->data.u.str = *trash;
+ return 1;
+}
+
+/* This function checks an operator's argument. It expects a string that
+ * contains either an integer or a variable name.
+ */
+static int check_operator(struct arg *args, struct sample_conv *conv,
+ const char *file, int line, char **err)
+{
+ const char *str;
+ const char *end;
+
+ /* Try to decode a variable. */
+ if (vars_check_arg(&args[0], NULL))
+ return 1;
+
+ /* Try to convert an integer */
+ str = args[0].data.str.str;
+ end = str + strlen(str);
+ args[0].data.sint = read_int64(&str, end);
+ if (*str != '\0') {
+ memprintf(err, "expects an integer or a variable name");
+ return 0;
+ }
+ args[0].type = ARGT_SINT;
+ return 1;
+}
+
+/* This function returns a sample struct filled with an arg's content.
+ * If the arg contains an integer, the integer is returned in the
+ * sample. If the arg contains a variable descriptor, it returns the
+ * variable's value.
+ *
+ * This function returns 0 if an error occurs, otherwise it returns 1.
+ */
+static inline int sample_conv_var2smp(const struct arg *arg, struct stream *strm, struct sample *smp)
+{
+ switch (arg->type) {
+ case ARGT_SINT:
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = arg->data.sint;
+ return 1;
+ case ARGT_VAR:
+ if (!vars_get_by_desc(&arg->data.var, strm, smp))
+ return 0;
+ if (!sample_casts[smp->data.type][SMP_T_SINT])
+ return 0;
+ if (!sample_casts[smp->data.type][SMP_T_SINT](smp))
+ return 0;
+ return 1;
+ default:
+ return 0;
+ }
+}
+
+/* Takes a SINT on input, applies a bitwise (one's) complement and returns
+ * the SINT result.
+ */
+static int sample_conv_binary_cpl(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ smp->data.u.sint = ~smp->data.u.sint;
+ return 1;
+}
+
+/* Takes a SINT on input, applies a binary "and" with the SINT directly in
+ * arg_p or in the variable described in arg_p, and returns the SINT result.
+ */
+static int sample_conv_binary_and(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct sample tmp;
+
+ if (!sample_conv_var2smp(arg_p, smp->strm, &tmp))
+ return 0;
+ smp->data.u.sint &= tmp.data.u.sint;
+ return 1;
+}
+
+/* Takes a SINT on input, applies a binary "or" with the SINT directly in
+ * arg_p or in the variable described in arg_p, and returns the SINT result.
+ */
+static int sample_conv_binary_or(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct sample tmp;
+
+ if (!sample_conv_var2smp(arg_p, smp->strm, &tmp))
+ return 0;
+ smp->data.u.sint |= tmp.data.u.sint;
+ return 1;
+}
+
+/* Takes a SINT on input, applies a binary "xor" with the SINT directly in
+ * arg_p or in the variable described in arg_p, and returns the SINT result.
+ */
+static int sample_conv_binary_xor(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct sample tmp;
+
+ if (!sample_conv_var2smp(arg_p, smp->strm, &tmp))
+ return 0;
+ smp->data.u.sint ^= tmp.data.u.sint;
+ return 1;
+}
+
+static inline long long int arith_add(long long int a, long long int b)
+{
+	/* Prevent overflow and cap the result instead.
+	 * We must ensure that the overflow checks themselves
+	 * cannot exceed the signed 64-bit limits.
+ *
+ * +----------+----------+
+ * | a<0 | a>=0 |
+ * +------+----------+----------+
+ * | b<0 | MIN-a>b | no check |
+ * +------+----------+----------+
+ * | b>=0 | no check | MAX-a<b |
+ * +------+----------+----------+
+ */
+ if ((a ^ b) >= 0) {
+		/* signs are the same, so overflow is possible */
+		if (a < 0) {
+			if (LLONG_MIN - a > b)
+				return LLONG_MIN;
+		}
+		else if (LLONG_MAX - a < b)
+ return LLONG_MAX;
+ }
+ return a + b;
+}
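The capping behaviour of the helper above can be verified with a standalone copy (same sign-table logic; note that (a ^ b) >= 0 is true exactly when the sign bits of a and b are equal, which is the only case where addition can overflow):

```c
#include <assert.h>
#include <limits.h>

/* Saturating 64-bit addition: overflow is only possible when both
 * operands have the same sign, detected via the sign bit of (a ^ b).
 * The checks are arranged so that they never overflow themselves.
 */
static long long sat_add(long long a, long long b)
{
    if ((a ^ b) >= 0) {
        if (a < 0) {
            if (LLONG_MIN - a > b)
                return LLONG_MIN;
        }
        else if (LLONG_MAX - a < b)
            return LLONG_MAX;
    }
    return a + b;
}
```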
+
+/* Takes a SINT on input, applies an arithmetic "add" with the SINT directly in
+ * arg_p or in the variable described in arg_p, and returns the SINT result.
+ */
+static int sample_conv_arith_add(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct sample tmp;
+
+ if (!sample_conv_var2smp(arg_p, smp->strm, &tmp))
+ return 0;
+ smp->data.u.sint = arith_add(smp->data.u.sint, tmp.data.u.sint);
+ return 1;
+}
+
+/* Takes a SINT on input, applies an arithmetic "sub" with the SINT directly in
+ * arg_p or in the variable described in arg_p, and returns the SINT result.
+ */
+static int sample_conv_arith_sub(const struct arg *arg_p,
+ struct sample *smp, void *private)
+{
+ struct sample tmp;
+
+ if (!sample_conv_var2smp(arg_p, smp->strm, &tmp))
+ return 0;
+
+	/* We cannot represent -LLONG_MIN because abs(LLONG_MIN) is greater
+	 * than abs(LLONG_MAX). So the following code uses LLONG_MAX in place
+	 * of -LLONG_MIN and corrects the result.
+	 */
+ if (tmp.data.u.sint == LLONG_MIN) {
+ smp->data.u.sint = arith_add(smp->data.u.sint, LLONG_MAX);
+ if (smp->data.u.sint < LLONG_MAX)
+ smp->data.u.sint++;
+ return 1;
+ }
+
+	/* standard subtraction: we use the "add" function and negate
+ * the second operand.
+ */
+ smp->data.u.sint = arith_add(smp->data.u.sint, -tmp.data.u.sint);
+ return 1;
+}
+
+/* Takes a SINT on input, applies an arithmetic "mul" with the SINT directly in
+ * arg_p or in the variable described in arg_p, and returns the SINT result.
+ * If the result makes an overflow, then the largest possible quantity is
+ * returned.
+ */
+static int sample_conv_arith_mul(const struct arg *arg_p,
+ struct sample *smp, void *private)
+{
+ struct sample tmp;
+ long long int c;
+
+ if (!sample_conv_var2smp(arg_p, smp->strm, &tmp))
+ return 0;
+
+ /* prevent divide by 0 during the check */
+ if (!smp->data.u.sint || !tmp.data.u.sint) {
+ smp->data.u.sint = 0;
+ return 1;
+ }
+
+	/* Multiplying LLONG_MIN by -1 would make the division in the
+	 * overflow check below raise a "floating point exception" (SIGFPE).
+	 */
+ if (smp->data.u.sint == LLONG_MIN && tmp.data.u.sint == -1) {
+ smp->data.u.sint = LLONG_MAX;
+ return 1;
+ }
+
+ /* execute standard multiplication. */
+ c = smp->data.u.sint * tmp.data.u.sint;
+
+	/* check for overflow and cap the result. */
+ if (smp->data.u.sint != c / tmp.data.u.sint) {
+ if ((smp->data.u.sint < 0) == (tmp.data.u.sint < 0)) {
+ smp->data.u.sint = LLONG_MAX;
+ return 1;
+ }
+ smp->data.u.sint = LLONG_MIN;
+ return 1;
+ }
+ smp->data.u.sint = c;
+ return 1;
+}
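The multiply-then-divide-back overflow check above can likewise be sketched standalone. Note that the wrap on `a * b` is formally undefined behaviour in ISO C; like the converter, this sketch assumes the usual two's-complement wrap-around:

```c
#include <assert.h>
#include <limits.h>

/* Saturating 64-bit multiply: perform the multiplication, then verify it
 * by dividing back; a mismatch means the product wrapped around, so the
 * result is capped in the direction given by the operand signs.
 */
static long long sat_mul(long long a, long long b)
{
    long long c;

    if (!a || !b)
        return 0;
    if (a == LLONG_MIN && b == -1)  /* the division check would trap */
        return LLONG_MAX;
    c = a * b;
    if (a != c / b)
        return ((a < 0) == (b < 0)) ? LLONG_MAX : LLONG_MIN;
    return c;
}
```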
+
+/* Takes a SINT on input, applies an arithmetic "div" with the SINT directly in
+ * arg_p or in the variable described in arg_p, and returns the SINT result.
+ * If arg_p makes the result overflow, then the largest possible quantity is
+ * returned.
+ */
+static int sample_conv_arith_div(const struct arg *arg_p,
+ struct sample *smp, void *private)
+{
+ struct sample tmp;
+
+ if (!sample_conv_var2smp(arg_p, smp->strm, &tmp))
+ return 0;
+
+ if (tmp.data.u.sint) {
+		/* Dividing LLONG_MIN by -1 raises a "floating point
+		 * exception" (SIGFPE), so cap the result instead.
+		 */
+ if (smp->data.u.sint == LLONG_MIN && tmp.data.u.sint == -1) {
+ smp->data.u.sint = LLONG_MAX;
+ return 1;
+ }
+ smp->data.u.sint /= tmp.data.u.sint;
+ return 1;
+ }
+ smp->data.u.sint = LLONG_MAX;
+ return 1;
+}
+
+/* Takes a SINT on input, applies an arithmetic "mod" with the SINT directly in
+ * arg_p or in the variable described in arg_p, and returns the SINT result.
+ * If arg_p makes the result overflow, then 0 is returned.
+ */
+static int sample_conv_arith_mod(const struct arg *arg_p,
+ struct sample *smp, void *private)
+{
+ struct sample tmp;
+
+ if (!sample_conv_var2smp(arg_p, smp->strm, &tmp))
+ return 0;
+
+ if (tmp.data.u.sint) {
+		/* The divide between LLONG_MIN and -1 raises a
+		 * "floating point exception" (SIGFPE).
+		 */
+ if (smp->data.u.sint == LLONG_MIN && tmp.data.u.sint == -1) {
+ smp->data.u.sint = 0;
+ return 1;
+ }
+ smp->data.u.sint %= tmp.data.u.sint;
+ return 1;
+ }
+ smp->data.u.sint = 0;
+ return 1;
+}
+
+/* Takes an SINT on input, applies an arithmetic "neg" and returns the SINT
+ * result.
+ */
+static int sample_conv_arith_neg(const struct arg *arg_p,
+ struct sample *smp, void *private)
+{
+ if (smp->data.u.sint == LLONG_MIN)
+ smp->data.u.sint = LLONG_MAX;
+ else
+ smp->data.u.sint = -smp->data.u.sint;
+ return 1;
+}
+
+/* Takes a SINT on input, returns true if the value is non-zero, otherwise
+ * false. The output is a BOOL.
+ */
+static int sample_conv_arith_bool(const struct arg *arg_p,
+ struct sample *smp, void *private)
+{
+ smp->data.u.sint = !!smp->data.u.sint;
+ smp->data.type = SMP_T_BOOL;
+ return 1;
+}
+
+/* Takes a SINT on input, returns false if the value is non-zero, otherwise
+ * true. The output is a BOOL.
+ */
+static int sample_conv_arith_not(const struct arg *arg_p,
+ struct sample *smp, void *private)
+{
+ smp->data.u.sint = !smp->data.u.sint;
+ smp->data.type = SMP_T_BOOL;
+ return 1;
+}
+
+/* Takes a SINT on input, returns true if the value is odd, otherwise false.
+ * The output is a BOOL.
+ */
+static int sample_conv_arith_odd(const struct arg *arg_p,
+ struct sample *smp, void *private)
+{
+ smp->data.u.sint = smp->data.u.sint & 1;
+ smp->data.type = SMP_T_BOOL;
+ return 1;
+}
+
+/* Takes a SINT on input, returns true if the value is even, otherwise false.
+ * The output is a BOOL.
+ */
+static int sample_conv_arith_even(const struct arg *arg_p,
+ struct sample *smp, void *private)
+{
+ smp->data.u.sint = !(smp->data.u.sint & 1);
+ smp->data.type = SMP_T_BOOL;
+ return 1;
+}
+
+/************************************************************************/
+/* All supported sample fetch functions must be declared here */
+/************************************************************************/
+
+/* force TRUE to be returned at the fetch level */
+static int
+smp_fetch_true(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->data.type = SMP_T_BOOL;
+ smp->data.u.sint = 1;
+ return 1;
+}
+
+/* force FALSE to be returned at the fetch level */
+static int
+smp_fetch_false(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->data.type = SMP_T_BOOL;
+ smp->data.u.sint = 0;
+ return 1;
+}
+
+/* retrieve environment variable $1 as a string */
+static int
+smp_fetch_env(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ char *env;
+
+ if (!args || args[0].type != ARGT_STR)
+ return 0;
+
+ env = getenv(args[0].data.str.str);
+ if (!env)
+ return 0;
+
+ smp->data.type = SMP_T_STR;
+ smp->flags = SMP_F_CONST;
+ smp->data.u.str.str = env;
+ smp->data.u.str.len = strlen(env);
+ return 1;
+}
+
+/* retrieves the current local date in epoch time, and applies an optional offset
+ * of args[0] seconds.
+ */
+static int
+smp_fetch_date(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->data.u.sint = date.tv_sec;
+
+ /* add offset */
+ if (args && args[0].type == ARGT_SINT)
+ smp->data.u.sint += args[0].data.sint;
+
+ smp->data.type = SMP_T_SINT;
+ smp->flags |= SMP_F_VOL_TEST | SMP_F_MAY_CHANGE;
+ return 1;
+}
+
+/* returns the number of processes */
+static int
+smp_fetch_nbproc(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = global.nbproc;
+ return 1;
+}
+
+/* returns the number of the current process (between 1 and nbproc) */
+static int
+smp_fetch_proc(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = relative_pid;
+ return 1;
+}
+
+/* generate a random 32-bit integer for whatever purpose, with an optional
+ * range specified in argument.
+ */
+static int
+smp_fetch_rand(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->data.u.sint = random();
+
+ /* reduce if needed. Don't do a modulo, use all bits! */
+ if (args && args[0].type == ARGT_SINT)
+ smp->data.u.sint = (smp->data.u.sint * args[0].data.sint) / ((u64)RAND_MAX+1);
+
+ smp->data.type = SMP_T_SINT;
+ smp->flags |= SMP_F_VOL_TEST | SMP_F_MAY_CHANGE;
+ return 1;
+}
+
+/* returns true if the current process is stopping */
+static int
+smp_fetch_stopping(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->data.type = SMP_T_BOOL;
+ smp->data.u.sint = stopping;
+ return 1;
+}
+
+static int smp_fetch_const_str(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->flags |= SMP_F_CONST;
+ smp->data.type = SMP_T_STR;
+ smp->data.u.str.str = args[0].data.str.str;
+ smp->data.u.str.len = args[0].data.str.len;
+ return 1;
+}
+
+static int smp_check_const_bool(struct arg *args, char **err)
+{
+ if (strcasecmp(args[0].data.str.str, "true") == 0 ||
+ strcasecmp(args[0].data.str.str, "1") == 0) {
+ args[0].type = ARGT_SINT;
+ args[0].data.sint = 1;
+ return 1;
+ }
+ if (strcasecmp(args[0].data.str.str, "false") == 0 ||
+ strcasecmp(args[0].data.str.str, "0") == 0) {
+ args[0].type = ARGT_SINT;
+ args[0].data.sint = 0;
+ return 1;
+ }
+ memprintf(err, "Expects 'true', 'false', '0' or '1'");
+ return 0;
+}
+
+static int smp_fetch_const_bool(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->data.type = SMP_T_BOOL;
+ smp->data.u.sint = args[0].data.sint;
+ return 1;
+}
+
+static int smp_fetch_const_int(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = args[0].data.sint;
+ return 1;
+}
+
+static int smp_fetch_const_ipv4(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->data.type = SMP_T_IPV4;
+ smp->data.u.ipv4 = args[0].data.ipv4;
+ return 1;
+}
+
+static int smp_fetch_const_ipv6(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->data.type = SMP_T_IPV6;
+ smp->data.u.ipv6 = args[0].data.ipv6;
+ return 1;
+}
+
+static int smp_check_const_bin(struct arg *args, char **err)
+{
+ char *binstr;
+ int binstrlen;
+
+ if (!parse_binary(args[0].data.str.str, &binstr, &binstrlen, err))
+ return 0;
+ args[0].type = ARGT_STR;
+ args[0].data.str.str = binstr;
+ args[0].data.str.len = binstrlen;
+ return 1;
+}
+
+static int smp_fetch_const_bin(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->flags |= SMP_F_CONST;
+ smp->data.type = SMP_T_BIN;
+ smp->data.u.str.str = args[0].data.str.str;
+ smp->data.u.str.len = args[0].data.str.len;
+ return 1;
+}
+
+static int smp_check_const_meth(struct arg *args, char **err)
+{
+ enum http_meth_t meth;
+ int i;
+
+ meth = find_http_meth(args[0].data.str.str, args[0].data.str.len);
+ if (meth != HTTP_METH_OTHER) {
+ args[0].type = ARGT_SINT;
+ args[0].data.sint = meth;
+ } else {
+		/* Check the method validity. A method is a token defined as:
+ * tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "." /
+ * "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA
+ * token = 1*tchar
+ */
+ for (i = 0; i < args[0].data.str.len; i++) {
+ if (!http_is_token[(unsigned char)args[0].data.str.str[i]]) {
+				memprintf(err, "expects a valid method.");
+ return 0;
+ }
+ }
+ }
+ return 1;
+}
+
+static int smp_fetch_const_meth(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->data.type = SMP_T_METH;
+ if (args[0].type == ARGT_SINT) {
+ smp->flags &= ~SMP_F_CONST;
+ smp->data.u.meth.meth = args[0].data.sint;
+ smp->data.u.meth.str.str = "";
+ smp->data.u.meth.str.len = 0;
+ } else {
+ smp->flags |= SMP_F_CONST;
+ smp->data.u.meth.meth = HTTP_METH_OTHER;
+ smp->data.u.meth.str.str = args[0].data.str.str;
+ smp->data.u.meth.str.len = args[0].data.str.len;
+ }
+ return 1;
+}
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Note: fetches that may return multiple types must be declared as the lowest
+ * common denominator, the type that can be cast into all other ones. For
+ * instance IPv4/IPv6 must be declared IPv4.
+ */
+static struct sample_fetch_kw_list smp_kws = {ILH, {
+ { "always_false", smp_fetch_false, 0, NULL, SMP_T_BOOL, SMP_USE_INTRN },
+ { "always_true", smp_fetch_true, 0, NULL, SMP_T_BOOL, SMP_USE_INTRN },
+ { "env", smp_fetch_env, ARG1(1,STR), NULL, SMP_T_STR, SMP_USE_INTRN },
+ { "date", smp_fetch_date, ARG1(0,SINT), NULL, SMP_T_SINT, SMP_USE_INTRN },
+ { "nbproc", smp_fetch_nbproc,0, NULL, SMP_T_SINT, SMP_USE_INTRN },
+ { "proc", smp_fetch_proc, 0, NULL, SMP_T_SINT, SMP_USE_INTRN },
+ { "rand", smp_fetch_rand, ARG1(0,SINT), NULL, SMP_T_SINT, SMP_USE_INTRN },
+ { "stopping", smp_fetch_stopping, 0, NULL, SMP_T_BOOL, SMP_USE_INTRN },
+
+ { "str", smp_fetch_const_str, ARG1(1,STR), NULL , SMP_T_STR, SMP_USE_INTRN },
+ { "bool", smp_fetch_const_bool, ARG1(1,STR), smp_check_const_bool, SMP_T_BOOL, SMP_USE_INTRN },
+ { "int", smp_fetch_const_int, ARG1(1,SINT), NULL , SMP_T_SINT, SMP_USE_INTRN },
+ { "ipv4", smp_fetch_const_ipv4, ARG1(1,IPV4), NULL , SMP_T_IPV4, SMP_USE_INTRN },
+ { "ipv6", smp_fetch_const_ipv6, ARG1(1,IPV6), NULL , SMP_T_IPV6, SMP_USE_INTRN },
+ { "bin", smp_fetch_const_bin, ARG1(1,STR), smp_check_const_bin , SMP_T_BIN, SMP_USE_INTRN },
+ { "meth", smp_fetch_const_meth, ARG1(1,STR), smp_check_const_meth, SMP_T_METH, SMP_USE_INTRN },
+
+ { /* END */ },
+}};
+
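+/* Illustrative sketch (not part of the build): the internal fetches above are
+ * meant to be used in configuration sample expressions. A hypothetical
+ * haproxy.cfg fragment could look like:
+ *
+ *   http-request set-header X-Proc %[proc]        # current process number
+ *   http-request set-header X-Date %[date(3600)]  # epoch time plus 1h offset
+ *   http-request set-header X-Home %[env(HOME)]   # environment variable
+ *   http-request redirect location /busy if { rand(100) lt 10 }
+ */
+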
+/* Note: must not be declared <const> as its list will be overwritten */
+static struct sample_conv_kw_list sample_conv_kws = {ILH, {
+#ifdef DEBUG_EXPR
+ { "debug", sample_conv_debug, 0, NULL, SMP_T_ANY, SMP_T_ANY },
+#endif
+
+ { "base64", sample_conv_bin2base64,0, NULL, SMP_T_BIN, SMP_T_STR },
+ { "upper", sample_conv_str2upper, 0, NULL, SMP_T_STR, SMP_T_STR },
+ { "lower", sample_conv_str2lower, 0, NULL, SMP_T_STR, SMP_T_STR },
+ { "hex", sample_conv_bin2hex, 0, NULL, SMP_T_BIN, SMP_T_STR },
+ { "ipmask", sample_conv_ipmask, ARG1(1,MSK4), NULL, SMP_T_IPV4, SMP_T_IPV4 },
+ { "ltime", sample_conv_ltime, ARG2(1,STR,SINT), NULL, SMP_T_SINT, SMP_T_STR },
+ { "utime", sample_conv_utime, ARG2(1,STR,SINT), NULL, SMP_T_SINT, SMP_T_STR },
+ { "crc32", sample_conv_crc32, ARG1(0,SINT), NULL, SMP_T_BIN, SMP_T_SINT },
+ { "djb2", sample_conv_djb2, ARG1(0,SINT), NULL, SMP_T_BIN, SMP_T_SINT },
+ { "sdbm", sample_conv_sdbm, ARG1(0,SINT), NULL, SMP_T_BIN, SMP_T_SINT },
+ { "wt6", sample_conv_wt6, ARG1(0,SINT), NULL, SMP_T_BIN, SMP_T_SINT },
+ { "json", sample_conv_json, ARG1(1,STR), sample_conv_json_check, SMP_T_STR, SMP_T_STR },
+ { "bytes", sample_conv_bytes, ARG2(1,SINT,SINT), NULL, SMP_T_BIN, SMP_T_BIN },
+ { "field", sample_conv_field, ARG2(2,SINT,STR), sample_conv_field_check, SMP_T_STR, SMP_T_STR },
+ { "word", sample_conv_word, ARG2(2,SINT,STR), sample_conv_field_check, SMP_T_STR, SMP_T_STR },
+ { "regsub", sample_conv_regsub, ARG3(2,REG,STR,STR), sample_conv_regsub_check, SMP_T_STR, SMP_T_STR },
+
+ { "and", sample_conv_binary_and, ARG1(1,STR), check_operator, SMP_T_SINT, SMP_T_SINT },
+ { "or", sample_conv_binary_or, ARG1(1,STR), check_operator, SMP_T_SINT, SMP_T_SINT },
+ { "xor", sample_conv_binary_xor, ARG1(1,STR), check_operator, SMP_T_SINT, SMP_T_SINT },
+ { "cpl", sample_conv_binary_cpl, 0, NULL, SMP_T_SINT, SMP_T_SINT },
+ { "bool", sample_conv_arith_bool, 0, NULL, SMP_T_SINT, SMP_T_BOOL },
+ { "not", sample_conv_arith_not, 0, NULL, SMP_T_SINT, SMP_T_BOOL },
+ { "odd", sample_conv_arith_odd, 0, NULL, SMP_T_SINT, SMP_T_BOOL },
+ { "even", sample_conv_arith_even, 0, NULL, SMP_T_SINT, SMP_T_BOOL },
+ { "add", sample_conv_arith_add, ARG1(1,STR), check_operator, SMP_T_SINT, SMP_T_SINT },
+ { "sub", sample_conv_arith_sub, ARG1(1,STR), check_operator, SMP_T_SINT, SMP_T_SINT },
+ { "mul", sample_conv_arith_mul, ARG1(1,STR), check_operator, SMP_T_SINT, SMP_T_SINT },
+ { "div", sample_conv_arith_div, ARG1(1,STR), check_operator, SMP_T_SINT, SMP_T_SINT },
+ { "mod", sample_conv_arith_mod, ARG1(1,STR), check_operator, SMP_T_SINT, SMP_T_SINT },
+ { "neg", sample_conv_arith_neg, 0, NULL, SMP_T_SINT, SMP_T_SINT },
+
+ { NULL, NULL, 0, 0, 0 },
+}};
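+
+/* Illustrative sketch (not part of the build): converters chain left to right
+ * in a sample expression, so combining the const fetches and arithmetic
+ * converters registered in this file, a hypothetical configuration line is:
+ *
+ *   http-request set-header X-Answer %[int(6),mul(7)]
+ *
+ * On overflow the arithmetic converters cap the result at LLONG_MAX/LLONG_MIN
+ * rather than wrapping, as implemented above.
+ */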
+
+__attribute__((constructor))
+static void __sample_init(void)
+{
+ /* register sample fetch and format conversion keywords */
+ sample_register_fetches(&smp_kws);
+ sample_register_convs(&sample_conv_kws);
+}
--- /dev/null
+/*
+ * Server management functions.
+ *
+ * Copyright 2000-2012 Willy Tarreau <w@1wt.eu>
+ * Copyright 2007-2008 Krzysztof Piotr Oledzki <ole@ans.pl>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <ctype.h>
+#include <errno.h>
+
+#include <common/cfgparse.h>
+#include <common/config.h>
+#include <common/errors.h>
+#include <common/namespace.h>
+#include <common/time.h>
+
+#include <types/global.h>
+#include <types/dns.h>
+
+#include <proto/checks.h>
+#include <proto/port_range.h>
+#include <proto/protocol.h>
+#include <proto/queue.h>
+#include <proto/raw_sock.h>
+#include <proto/server.h>
+#include <proto/stream.h>
+#include <proto/task.h>
+#include <proto/dns.h>
+
+static void srv_update_state(struct server *srv, int version, char **params);
+
+/* List head of all known server keywords */
+static struct srv_kw_list srv_keywords = {
+ .list = LIST_HEAD_INIT(srv_keywords.list)
+};
+
+int srv_downtime(const struct server *s)
+{
+ if ((s->state != SRV_ST_STOPPED) && s->last_change < now.tv_sec) // ignore negative time
+ return s->down_time;
+
+ return now.tv_sec - s->last_change + s->down_time;
+}
+
+int srv_lastsession(const struct server *s)
+{
+ if (s->counters.last_sess)
+ return now.tv_sec - s->counters.last_sess;
+
+ return -1;
+}
+
+int srv_getinter(const struct check *check)
+{
+ const struct server *s = check->server;
+
+ if ((check->state & CHK_ST_CONFIGURED) && (check->health == check->rise + check->fall - 1))
+ return check->inter;
+
+ if ((s->state == SRV_ST_STOPPED) && check->health == 0)
+ return (check->downinter)?(check->downinter):(check->inter);
+
+ return (check->fastinter)?(check->fastinter):(check->inter);
+}
+
+/*
+ * Registers the server keyword list <kwl> as a list of valid keywords for next
+ * parsing sessions.
+ */
+void srv_register_keywords(struct srv_kw_list *kwl)
+{
+ LIST_ADDQ(&srv_keywords.list, &kwl->list);
+}
+
+/* Return a pointer to the server keyword <kw>, or NULL if not found. If the
+ * keyword is found with a NULL ->parse() function, then an attempt is made to
+ * find one with a valid ->parse() function. This way it is possible to declare
+ * platform-dependent, known keywords as NULL, then only declare them as valid
+ * if some options are met. Note that if the requested keyword contains an
+ * opening parenthesis, everything from this point is ignored.
+ */
+struct srv_kw *srv_find_kw(const char *kw)
+{
+ int index;
+ const char *kwend;
+ struct srv_kw_list *kwl;
+ struct srv_kw *ret = NULL;
+
+ kwend = strchr(kw, '(');
+ if (!kwend)
+ kwend = kw + strlen(kw);
+
+ list_for_each_entry(kwl, &srv_keywords.list, list) {
+ for (index = 0; kwl->kw[index].kw != NULL; index++) {
+ if ((strncmp(kwl->kw[index].kw, kw, kwend - kw) == 0) &&
+ kwl->kw[index].kw[kwend-kw] == 0) {
+ if (kwl->kw[index].parse)
+ return &kwl->kw[index]; /* found it !*/
+ else
+ ret = &kwl->kw[index]; /* may be OK */
+ }
+ }
+ }
+ return ret;
+}
+
+/* Dumps all registered "server" keywords to the <out> string pointer. The
+ * unsupported keywords are only dumped if their supported form was not
+ * found.
+ */
+void srv_dump_kws(char **out)
+{
+ struct srv_kw_list *kwl;
+ int index;
+
+ *out = NULL;
+ list_for_each_entry(kwl, &srv_keywords.list, list) {
+ for (index = 0; kwl->kw[index].kw != NULL; index++) {
+ if (kwl->kw[index].parse ||
+ srv_find_kw(kwl->kw[index].kw) == &kwl->kw[index]) {
+ memprintf(out, "%s[%4s] %s%s%s%s\n", *out ? *out : "",
+ kwl->scope,
+ kwl->kw[index].kw,
+ kwl->kw[index].skip ? " <arg>" : "",
+ kwl->kw[index].default_ok ? " [dflt_ok]" : "",
+ kwl->kw[index].parse ? "" : " (not supported)");
+ }
+ }
+ }
+}
+
+/* parse the "id" server keyword */
+static int srv_parse_id(char **args, int *cur_arg, struct proxy *curproxy, struct server *newsrv, char **err)
+{
+ struct eb32_node *node;
+
+ if (!*args[*cur_arg + 1]) {
+ memprintf(err, "'%s' : expects an integer argument", args[*cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ newsrv->puid = atol(args[*cur_arg + 1]);
+ newsrv->conf.id.key = newsrv->puid;
+
+ if (newsrv->puid <= 0) {
+ memprintf(err, "'%s' : custom id has to be > 0", args[*cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ node = eb32_lookup(&curproxy->conf.used_server_id, newsrv->puid);
+ if (node) {
+ struct server *target = container_of(node, struct server, conf.id);
+ memprintf(err, "'%s' : custom id %d already used at %s:%d ('server %s')",
+ args[*cur_arg], newsrv->puid, target->conf.file, target->conf.line,
+ target->id);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ eb32_insert(&curproxy->conf.used_server_id, &newsrv->conf.id);
+ newsrv->flags |= SRV_F_FORCED_ID;
+ return 0;
+}
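+
+/* Illustrative sketch (not part of the build): the "id" keyword parsed above
+ * forces a server's unique numeric ID from the configuration, e.g.:
+ *
+ *   server web1 192.0.2.10:80 id 42
+ */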
+
+/* Shutdown all connections of a server. The caller must pass a termination
+ * code in <why>, which must be one of SF_ERR_* indicating the reason for the
+ * shutdown.
+ */
+void srv_shutdown_streams(struct server *srv, int why)
+{
+ struct stream *stream, *stream_bck;
+
+ list_for_each_entry_safe(stream, stream_bck, &srv->actconns, by_srv)
+ if (stream->srv_conn == srv)
+ stream_shutdown(stream, why);
+}
+
+/* Shutdown all connections of all backup servers of a proxy. The caller must
+ * pass a termination code in <why>, which must be one of SF_ERR_* indicating
+ * the reason for the shutdown.
+ */
+void srv_shutdown_backup_streams(struct proxy *px, int why)
+{
+ struct server *srv;
+
+ for (srv = px->srv; srv != NULL; srv = srv->next)
+ if (srv->flags & SRV_F_BACKUP)
+ srv_shutdown_streams(srv, why);
+}
+
+/* Appends some information to a message string related to a server going UP or
+ * DOWN. If both <forced> and <reason> are null and the server tracks another
+ * one, a "via" information will be provided to know where the status came from.
+ * If <reason> is non-null, the entire string will be appended after a comma and
+ * a space (eg: to report some information from the check that changed the state).
+ * If <xferred> is non-negative, some information about requeued streams is
+ * provided.
+ */
+void srv_append_status(struct chunk *msg, struct server *s, const char *reason, int xferred, int forced)
+{
+ if (reason)
+ chunk_appendf(msg, ", %s", reason);
+ else if (!forced && s->track)
+ chunk_appendf(msg, " via %s/%s", s->track->proxy->id, s->track->id);
+
+ if (xferred >= 0) {
+ if (s->state == SRV_ST_STOPPED)
+ chunk_appendf(msg, ". %d active and %d backup servers left.%s"
+ " %d sessions active, %d requeued, %d remaining in queue",
+ s->proxy->srv_act, s->proxy->srv_bck,
+ (s->proxy->srv_bck && !s->proxy->srv_act) ? " Running on backup." : "",
+ s->cur_sess, xferred, s->nbpend);
+ else
+ chunk_appendf(msg, ". %d active and %d backup servers online.%s"
+ " %d sessions requeued, %d total in queue",
+ s->proxy->srv_act, s->proxy->srv_bck,
+ (s->proxy->srv_bck && !s->proxy->srv_act) ? " Running on backup." : "",
+ xferred, s->nbpend);
+ }
+}
+
+/* Marks server <s> down, regardless of its checks' statuses, notifies by all
+ * available means, recounts the remaining servers on the proxy and transfers
+ * queued streams whenever possible to other servers. It automatically
+ * recomputes the number of servers, but not the map. Maintenance servers are
+ * ignored. It reports <reason> if non-null as the reason for going down. Note
+ * that it makes use of the trash to build the log strings, so <reason> must
+ * not be placed there.
+ */
+void srv_set_stopped(struct server *s, const char *reason)
+{
+ struct server *srv;
+ int prev_srv_count = s->proxy->srv_bck + s->proxy->srv_act;
+ int srv_was_stopping = (s->state == SRV_ST_STOPPING);
+ int log_level;
+ int xferred;
+
+ if ((s->admin & SRV_ADMF_MAINT) || s->state == SRV_ST_STOPPED)
+ return;
+
+ s->last_change = now.tv_sec;
+ s->state = SRV_ST_STOPPED;
+ if (s->proxy->lbprm.set_server_status_down)
+ s->proxy->lbprm.set_server_status_down(s);
+
+ if (s->onmarkeddown & HANA_ONMARKEDDOWN_SHUTDOWNSESSIONS)
+ srv_shutdown_streams(s, SF_ERR_DOWN);
+
+ /* we might have streams queued on this server and waiting for
+ * a connection. Those which are redispatchable will be queued
+ * to another server or to the proxy itself.
+ */
+ xferred = pendconn_redistribute(s);
+
+ chunk_printf(&trash,
+ "%sServer %s/%s is DOWN", s->flags & SRV_F_BACKUP ? "Backup " : "",
+ s->proxy->id, s->id);
+
+ srv_append_status(&trash, s, reason, xferred, 0);
+ Warning("%s.\n", trash.str);
+
+ /* we don't send an alert if the server was previously paused */
+ log_level = srv_was_stopping ? LOG_NOTICE : LOG_ALERT;
+ send_log(s->proxy, log_level, "%s.\n", trash.str);
+ send_email_alert(s, log_level, "%s", trash.str);
+
+ if (prev_srv_count && s->proxy->srv_bck == 0 && s->proxy->srv_act == 0)
+ set_backend_down(s->proxy);
+
+ s->counters.down_trans++;
+
+ for (srv = s->trackers; srv; srv = srv->tracknext)
+ srv_set_stopped(srv, NULL);
+}
+
+/* Marks server <s> up regardless of its checks' statuses and provided it isn't
+ * in maintenance. Notifies by all available means, recounts the remaining
+ * servers on the proxy and tries to grab requests from the proxy. It
+ * automatically recomputes the number of servers, but not the map. Maintenance
+ * servers are ignored. It reports <reason> if non-null as the reason for going
+ * up. Note that it makes use of the trash to build the log strings, so <reason>
+ * must not be placed there.
+ */
+void srv_set_running(struct server *s, const char *reason)
+{
+ struct server *srv;
+ int xferred;
+
+ if (s->admin & SRV_ADMF_MAINT)
+ return;
+
+ if (s->state == SRV_ST_STARTING || s->state == SRV_ST_RUNNING)
+ return;
+
+ if (s->proxy->srv_bck == 0 && s->proxy->srv_act == 0) {
+ if (s->proxy->last_change < now.tv_sec) // ignore negative times
+ s->proxy->down_time += now.tv_sec - s->proxy->last_change;
+ s->proxy->last_change = now.tv_sec;
+ }
+
+ if (s->state == SRV_ST_STOPPED && s->last_change < now.tv_sec) // ignore negative times
+ s->down_time += now.tv_sec - s->last_change;
+
+ s->last_change = now.tv_sec;
+
+ s->state = SRV_ST_STARTING;
+ if (s->slowstart > 0)
+ task_schedule(s->warmup, tick_add(now_ms, MS_TO_TICKS(MAX(1000, s->slowstart / 20))));
+ else
+ s->state = SRV_ST_RUNNING;
+
+ server_recalc_eweight(s);
+
+ /* If the server is set with "on-marked-up shutdown-backup-sessions",
+ * and it's not a backup server and its effective weight is > 0,
+ * then it can accept new connections, so we shut down all streams
+ * on all backup servers.
+ */
+ if ((s->onmarkedup & HANA_ONMARKEDUP_SHUTDOWNBACKUPSESSIONS) &&
+ !(s->flags & SRV_F_BACKUP) && s->eweight)
+ srv_shutdown_backup_streams(s->proxy, SF_ERR_UP);
+
+ /* check if we can handle some connections queued at the proxy. We
+ * will take as many as we can handle.
+ */
+ xferred = pendconn_grab_from_px(s);
+
+ chunk_printf(&trash,
+ "%sServer %s/%s is UP", s->flags & SRV_F_BACKUP ? "Backup " : "",
+ s->proxy->id, s->id);
+
+ srv_append_status(&trash, s, reason, xferred, 0);
+ Warning("%s.\n", trash.str);
+ send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
+ send_email_alert(s, LOG_NOTICE, "%s", trash.str);
+
+ for (srv = s->trackers; srv; srv = srv->tracknext)
+ srv_set_running(srv, NULL);
+}
+
+/* Marks server <s> stopping regardless of its checks' statuses and provided it
+ * isn't in maintenance. Notifies by all available means, recounts the remaining
+ * servers on the proxy and redistributes queued streams to other servers. It
+ * automatically recomputes the number of servers, but not the map. Maintenance
+ * servers are ignored. It reports <reason> if non-null as the reason for
+ * stopping. Note that it makes use of the trash to build the log strings, so <reason>
+ * must not be placed there.
+ */
+void srv_set_stopping(struct server *s, const char *reason)
+{
+ struct server *srv;
+ int xferred;
+
+ if (s->admin & SRV_ADMF_MAINT)
+ return;
+
+ if (s->state == SRV_ST_STOPPING)
+ return;
+
+ s->last_change = now.tv_sec;
+ s->state = SRV_ST_STOPPING;
+ if (s->proxy->lbprm.set_server_status_down)
+ s->proxy->lbprm.set_server_status_down(s);
+
+ /* we might have streams queued on this server and waiting for
+ * a connection. Those which are redispatchable will be queued
+ * to another server or to the proxy itself.
+ */
+ xferred = pendconn_redistribute(s);
+
+ chunk_printf(&trash,
+ "%sServer %s/%s is stopping", s->flags & SRV_F_BACKUP ? "Backup " : "",
+ s->proxy->id, s->id);
+
+ srv_append_status(&trash, s, reason, xferred, 0);
+
+ Warning("%s.\n", trash.str);
+ send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
+
+ if (!s->proxy->srv_bck && !s->proxy->srv_act)
+ set_backend_down(s->proxy);
+
+ for (srv = s->trackers; srv; srv = srv->tracknext)
+ srv_set_stopping(srv, NULL);
+}
+
+/* Enables admin flag <mode> (among SRV_ADMF_*) on server <s>. This is used to
+ * enforce either maint mode or drain mode. It is not allowed to set more than
+ * one flag at once. The equivalent "inherited" flag is propagated to all
+ * tracking servers. Maintenance mode disables health checks (but not agent
+ * checks). When either the flag is already set or no flag is passed, nothing
+ * is done.
+ */
+void srv_set_admin_flag(struct server *s, enum srv_admin mode)
+{
+ struct check *check = &s->check;
+ struct server *srv;
+ int xferred;
+
+ if (!mode)
+ return;
+
+ /* stop going down as soon as we meet a server already in the same state */
+ if (s->admin & mode)
+ return;
+
+ s->admin |= mode;
+
+ /* stop going down if the equivalent flag was already present (forced or inherited) */
+ if (((mode & SRV_ADMF_MAINT) && (s->admin & ~mode & SRV_ADMF_MAINT)) ||
+ ((mode & SRV_ADMF_DRAIN) && (s->admin & ~mode & SRV_ADMF_DRAIN)))
+ return;
+
+ /* Maintenance must also disable health checks */
+ if (mode & SRV_ADMF_MAINT) {
+ if (s->check.state & CHK_ST_ENABLED) {
+ s->check.state |= CHK_ST_PAUSED;
+ check->health = 0;
+ }
+
+ if (s->state == SRV_ST_STOPPED) { /* server was already down */
+ chunk_printf(&trash,
+ "%sServer %s/%s was DOWN and now enters maintenance",
+ s->flags & SRV_F_BACKUP ? "Backup " : "", s->proxy->id, s->id);
+
+ srv_append_status(&trash, s, NULL, -1, (mode & SRV_ADMF_FMAINT));
+
+ Warning("%s.\n", trash.str);
+ send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
+ }
+ else { /* server was still running */
+ int srv_was_stopping = (s->state == SRV_ST_STOPPING) || (s->admin & SRV_ADMF_DRAIN);
+ int prev_srv_count = s->proxy->srv_bck + s->proxy->srv_act;
+
+ check->health = 0; /* failure */
+ s->last_change = now.tv_sec;
+ s->state = SRV_ST_STOPPED;
+ if (s->proxy->lbprm.set_server_status_down)
+ s->proxy->lbprm.set_server_status_down(s);
+
+ if (s->onmarkeddown & HANA_ONMARKEDDOWN_SHUTDOWNSESSIONS)
+ srv_shutdown_streams(s, SF_ERR_DOWN);
+
+ /* we might have streams queued on this server and waiting for
+ * a connection. Those which are redispatchable will be queued
+ * to another server or to the proxy itself.
+ */
+ xferred = pendconn_redistribute(s);
+
+ chunk_printf(&trash,
+ "%sServer %s/%s is going DOWN for maintenance",
+ s->flags & SRV_F_BACKUP ? "Backup " : "",
+ s->proxy->id, s->id);
+
+ srv_append_status(&trash, s, NULL, xferred, (mode & SRV_ADMF_FMAINT));
+
+ Warning("%s.\n", trash.str);
+ send_log(s->proxy, srv_was_stopping ? LOG_NOTICE : LOG_ALERT, "%s.\n", trash.str);
+
+ if (prev_srv_count && s->proxy->srv_bck == 0 && s->proxy->srv_act == 0)
+ set_backend_down(s->proxy);
+
+ s->counters.down_trans++;
+ }
+ }
+
+ /* drain state is applied only if not yet in maint */
+ if ((mode & SRV_ADMF_DRAIN) && !(s->admin & SRV_ADMF_MAINT)) {
+ int prev_srv_count = s->proxy->srv_bck + s->proxy->srv_act;
+
+ s->last_change = now.tv_sec;
+ if (s->proxy->lbprm.set_server_status_down)
+ s->proxy->lbprm.set_server_status_down(s);
+
+ /* we might have streams queued on this server and waiting for
+ * a connection. Those which are redispatchable will be queued
+ * to another server or to the proxy itself.
+ */
+ xferred = pendconn_redistribute(s);
+
+ chunk_printf(&trash, "%sServer %s/%s enters drain state",
+ s->flags & SRV_F_BACKUP ? "Backup " : "", s->proxy->id, s->id);
+
+ srv_append_status(&trash, s, NULL, xferred, (mode & SRV_ADMF_FDRAIN));
+
+ Warning("%s.\n", trash.str);
+ send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
+ send_email_alert(s, LOG_NOTICE, "%s", trash.str);
+
+ if (prev_srv_count && s->proxy->srv_bck == 0 && s->proxy->srv_act == 0)
+ set_backend_down(s->proxy);
+ }
+
+ /* compute the inherited flag to propagate */
+ if (mode & SRV_ADMF_MAINT)
+ mode = SRV_ADMF_IMAINT;
+ else if (mode & SRV_ADMF_DRAIN)
+ mode = SRV_ADMF_IDRAIN;
+
+ for (srv = s->trackers; srv; srv = srv->tracknext)
+ srv_set_admin_flag(srv, mode);
+}
+
+/* Disables admin flag <mode> (among SRV_ADMF_*) on server <s>. This is used to
+ * stop enforcing either maint mode or drain mode. It is not allowed to set more
+ * than one flag at once. The equivalent "inherited" flag is propagated to all
+ * tracking servers. Leaving maintenance mode re-enables health checks. When
+ * either the flag is already cleared or no flag is passed, nothing is done.
+ */
+void srv_clr_admin_flag(struct server *s, enum srv_admin mode)
+{
+ struct check *check = &s->check;
+ struct server *srv;
+ int xferred = -1;
+
+ if (!mode)
+ return;
+
+ /* stop going down as soon as we see the flag is not there anymore */
+ if (!(s->admin & mode))
+ return;
+
+ s->admin &= ~mode;
+
+ if (s->admin & SRV_ADMF_MAINT) {
+ /* remaining in maintenance mode, let's inform precisely about the
+ * situation.
+ */
+ if (mode & SRV_ADMF_FMAINT) {
+ chunk_printf(&trash,
+ "%sServer %s/%s is leaving forced maintenance but remains in maintenance",
+ s->flags & SRV_F_BACKUP ? "Backup " : "",
+ s->proxy->id, s->id);
+
+ if (s->track) /* normally it's mandatory here */
+ chunk_appendf(&trash, " via %s/%s",
+ s->track->proxy->id, s->track->id);
+ Warning("%s.\n", trash.str);
+ send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
+ }
+ else if (mode & SRV_ADMF_IMAINT) {
+ chunk_printf(&trash,
+ "%sServer %s/%s remains in forced maintenance",
+ s->flags & SRV_F_BACKUP ? "Backup " : "",
+ s->proxy->id, s->id);
+ Warning("%s.\n", trash.str);
+ send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
+ }
+ /* don't report anything when leaving drain mode and remaining in maintenance */
+ }
+ else if (mode & SRV_ADMF_MAINT) {
+ /* OK here we're leaving maintenance, we have many things to check,
+ * because the server might possibly be coming back up depending on
+ * its state. In practice, leaving maintenance means that we should
+ * immediately turn to UP (more or less the slowstart) under the
+ * following conditions :
+ * - server is neither checked nor tracked
+ * - server tracks another server which is not checked
+ * - server tracks another server which is already up
+ * Which sums up as something simpler :
+ * "either the tracking server is up or the server's checks are disabled
+ * or up". Otherwise we only re-enable health checks. There's a special
+ * case associated with the stopping state which can be inherited. Note
+ * that the server might still be in drain mode, which is naturally dealt
+ * with by the lower level functions.
+ */
+
+ if (s->check.state & CHK_ST_ENABLED) {
+ s->check.state &= ~CHK_ST_PAUSED;
+ check->health = check->rise; /* start OK but check immediately */
+ }
+
+ if ((!s->track || s->track->state != SRV_ST_STOPPED) &&
+ (!(s->agent.state & CHK_ST_ENABLED) || (s->agent.health >= s->agent.rise)) &&
+ (!(s->check.state & CHK_ST_ENABLED) || (s->check.health >= s->check.rise))) {
+ if (s->proxy->srv_bck == 0 && s->proxy->srv_act == 0) {
+ if (s->proxy->last_change < now.tv_sec) // ignore negative times
+ s->proxy->down_time += now.tv_sec - s->proxy->last_change;
+ s->proxy->last_change = now.tv_sec;
+ }
+
+ if (s->last_change < now.tv_sec) // ignore negative times
+ s->down_time += now.tv_sec - s->last_change;
+ s->last_change = now.tv_sec;
+
+ if (s->track && s->track->state == SRV_ST_STOPPING)
+ s->state = SRV_ST_STOPPING;
+ else {
+ s->state = SRV_ST_STARTING;
+ if (s->slowstart > 0)
+ task_schedule(s->warmup, tick_add(now_ms, MS_TO_TICKS(MAX(1000, s->slowstart / 20))));
+ else
+ s->state = SRV_ST_RUNNING;
+ }
+
+ server_recalc_eweight(s);
+
+ /* If the server is set with "on-marked-up shutdown-backup-sessions",
+ * and it's not a backup server and its effective weight is > 0,
+ * then it can accept new connections, so we shut down all streams
+ * on all backup servers.
+ */
+ if ((s->onmarkedup & HANA_ONMARKEDUP_SHUTDOWNBACKUPSESSIONS) &&
+ !(s->flags & SRV_F_BACKUP) && s->eweight)
+ srv_shutdown_backup_streams(s->proxy, SF_ERR_UP);
+
+ /* check if we can handle some connections queued at the proxy. We
+ * will take as many as we can handle.
+ */
+ xferred = pendconn_grab_from_px(s);
+ }
+
+ if (mode & SRV_ADMF_FMAINT) {
+ chunk_printf(&trash,
+ "%sServer %s/%s is %s/%s (leaving forced maintenance)",
+ s->flags & SRV_F_BACKUP ? "Backup " : "",
+ s->proxy->id, s->id,
+ (s->state == SRV_ST_STOPPED) ? "DOWN" : "UP",
+ (s->admin & SRV_ADMF_DRAIN) ? "DRAIN" : "READY");
+ }
+ else {
+ chunk_printf(&trash,
+ "%sServer %s/%s is %s/%s (leaving maintenance)",
+ s->flags & SRV_F_BACKUP ? "Backup " : "",
+ s->proxy->id, s->id,
+ (s->state == SRV_ST_STOPPED) ? "DOWN" : "UP",
+ (s->admin & SRV_ADMF_DRAIN) ? "DRAIN" : "READY");
+ srv_append_status(&trash, s, NULL, xferred, 0);
+ }
+ Warning("%s.\n", trash.str);
+ send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
+ }
+ else if ((mode & SRV_ADMF_DRAIN) && (s->admin & SRV_ADMF_DRAIN)) {
+ /* remaining in drain mode after removing one of its flags */
+
+ if (mode & SRV_ADMF_FDRAIN) {
+ chunk_printf(&trash,
+ "%sServer %s/%s is leaving forced drain but remains in drain mode",
+ s->flags & SRV_F_BACKUP ? "Backup " : "",
+ s->proxy->id, s->id);
+
+ if (s->track) /* normally it's mandatory here */
+ chunk_appendf(&trash, " via %s/%s",
+ s->track->proxy->id, s->track->id);
+ }
+ else {
+ chunk_printf(&trash,
+ "%sServer %s/%s remains in forced drain mode",
+ s->flags & SRV_F_BACKUP ? "Backup " : "",
+ s->proxy->id, s->id);
+ }
+ Warning("%s.\n", trash.str);
+ send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
+ }
+ else if (mode & SRV_ADMF_DRAIN) {
+ /* OK completely leaving drain mode */
+ if (s->proxy->srv_bck == 0 && s->proxy->srv_act == 0) {
+ if (s->proxy->last_change < now.tv_sec) // ignore negative times
+ s->proxy->down_time += now.tv_sec - s->proxy->last_change;
+ s->proxy->last_change = now.tv_sec;
+ }
+
+ if (s->last_change < now.tv_sec) // ignore negative times
+ s->down_time += now.tv_sec - s->last_change;
+ s->last_change = now.tv_sec;
+ server_recalc_eweight(s);
+
+ if (mode & SRV_ADMF_FDRAIN) {
+ chunk_printf(&trash,
+ "%sServer %s/%s is %s (leaving forced drain)",
+ s->flags & SRV_F_BACKUP ? "Backup " : "",
+ s->proxy->id, s->id,
+ (s->state == SRV_ST_STOPPED) ? "DOWN" : "UP");
+ }
+ else {
+ chunk_printf(&trash,
+ "%sServer %s/%s is %s (leaving drain)",
+ s->flags & SRV_F_BACKUP ? "Backup " : "",
+ s->proxy->id, s->id,
+ (s->state == SRV_ST_STOPPED) ? "DOWN" : "UP");
+ if (s->track) /* normally it's mandatory here */
+ chunk_appendf(&trash, " via %s/%s",
+ s->track->proxy->id, s->track->id);
+ }
+ Warning("%s.\n", trash.str);
+ send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
+ }
+
+ /* stop going down if the equivalent flag is still present (forced or inherited) */
+ if (((mode & SRV_ADMF_MAINT) && (s->admin & SRV_ADMF_MAINT)) ||
+ ((mode & SRV_ADMF_DRAIN) && (s->admin & SRV_ADMF_DRAIN)))
+ return;
+
+ if (mode & SRV_ADMF_MAINT)
+ mode = SRV_ADMF_IMAINT;
+ else if (mode & SRV_ADMF_DRAIN)
+ mode = SRV_ADMF_IDRAIN;
+
+ for (srv = s->trackers; srv; srv = srv->tracknext)
+ srv_clr_admin_flag(srv, mode);
+}
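+
+/* Example of the behaviour above (illustrative, values assumed): a server
+ * may hold both a forced and an inherited maintenance flag at once, e.g.
+ * SRV_ADMF_FMAINT set via the CLI and SRV_ADMF_IMAINT inherited from a
+ * tracked server. Clearing only FMAINT keeps the server in maintenance and
+ * merely logs that it left *forced* maintenance; only once the last MAINT
+ * flag is gone does the function re-enable checks, recompute the state, and
+ * recursively clear the inherited flag on all tracking servers.
+ */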
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Please take care of keeping this list alphabetically sorted, doing so helps
+ * all code contributors.
+ * Optional keywords are also declared with a NULL ->parse() function so that
+ * the config parser can report an appropriate error when a known keyword was
+ * not enabled.
+ */
+static struct srv_kw_list srv_kws = { "ALL", { }, {
+ { "id", srv_parse_id, 1, 0 }, /* set id# of server */
+ { NULL, NULL, 0 },
+}};
+
+__attribute__((constructor))
+static void __listener_init(void)
+{
+ srv_register_keywords(&srv_kws);
+}
+
+/* Recomputes the server's eweight based on its state, uweight, the current time,
+ * and the proxy's algorithm. To be used after updating sv->uweight. The warmup
+ * state is automatically disabled if the time is elapsed.
+ */
+void server_recalc_eweight(struct server *sv)
+{
+ struct proxy *px = sv->proxy;
+ unsigned w;
+
+ if (now.tv_sec < sv->last_change || now.tv_sec >= sv->last_change + sv->slowstart) {
+ /* go to full throttle if the slowstart interval is reached */
+ if (sv->state == SRV_ST_STARTING)
+ sv->state = SRV_ST_RUNNING;
+ }
+
+ /* We must take care of not pushing the server to full throttle during slow starts.
+ * It must also start immediately, at least at the minimal step when leaving maintenance.
+ */
+ if ((sv->state == SRV_ST_STARTING) && (px->lbprm.algo & BE_LB_PROP_DYN))
+ w = (px->lbprm.wdiv * (now.tv_sec - sv->last_change) + sv->slowstart) / sv->slowstart;
+ else
+ w = px->lbprm.wdiv;
+
+ sv->eweight = (sv->uweight * w + px->lbprm.wmult - 1) / px->lbprm.wmult;
+
+ /* now propagate the status change to any LB algorithms */
+ if (px->lbprm.update_server_eweight)
+ px->lbprm.update_server_eweight(sv);
+ else if (srv_is_usable(sv)) {
+ if (px->lbprm.set_server_status_up)
+ px->lbprm.set_server_status_up(sv);
+ }
+ else {
+ if (px->lbprm.set_server_status_down)
+ px->lbprm.set_server_status_down(sv);
+ }
+}
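+
+/* Worked example of the eweight ramp (illustrative values, not from the
+ * source): assume uweight = 100, slowstart = 60s and a dynamic algorithm
+ * with wdiv = wmult = 256. Fifteen seconds after leaving maintenance:
+ *
+ *   w       = (256 * 15 + 60) / 60       = 65
+ *   eweight = (100 * 65 + 256 - 1) / 256 = 26
+ *
+ * versus eweight = (100 * 256 + 255) / 256 = 100 once warmed up, i.e. the
+ * server carries roughly a quarter of its target weight a quarter of the
+ * way through slowstart, and never starts at zero thanks to the
+ * "+ sv->slowstart" term in the numerator.
+ */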
+
+/*
+ * Parses weight_str and configures sv accordingly.
+ * Returns NULL on success, error message string otherwise.
+ */
+const char *server_parse_weight_change_request(struct server *sv,
+ const char *weight_str)
+{
+ struct proxy *px;
+ long int w;
+ char *end;
+
+ px = sv->proxy;
+
+ /* if the weight is terminated with '%', it is set relative to
+ * the initial weight, otherwise it is absolute.
+ */
+ if (!*weight_str)
+ return "Require <weight> or <weight%>.\n";
+
+ w = strtol(weight_str, &end, 10);
+ if (end == weight_str)
+ return "Weight string empty or preceded by garbage";
+ else if (end[0] == '%' && end[1] == '\0') {
+ if (w < 0)
+ return "Relative weight must be positive.\n";
+ /* Avoid integer overflow */
+ if (w > 25600)
+ w = 25600;
+ w = sv->iweight * w / 100;
+ if (w > 256)
+ w = 256;
+ }
+ else if (w < 0 || w > 256)
+ return "Absolute weight can only be between 0 and 256 inclusive.\n";
+ else if (end[0] != '\0')
+ return "Trailing garbage in weight string";
+
+ if (w && w != sv->iweight && !(px->lbprm.algo & BE_LB_PROP_DYN))
+ return "Backend is using a static LB algorithm and only accepts weights '0%' and '100%'.\n";
+
+ sv->uweight = w;
+ server_recalc_eweight(sv);
+
+ return NULL;
+}
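+
+/* Examples of weight strings handled above (illustrative):
+ *   "50"   -> absolute weight 50
+ *   "150%" -> relative: with iweight = 100 this yields uweight 150 (capped at 256)
+ *   "300"  -> rejected, absolute weights must lie in 0..256
+ *   "50x"  -> rejected as trailing garbage
+ * Note that with a static LB algorithm, only "0%" and "100%" are accepted.
+ */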
+
+/*
+ * Parses <addr_str> and configures <sv> accordingly.
+ * Returns:
+ * - error string on error
+ * - NULL on success
+ */
+const char *server_parse_addr_change_request(struct server *sv,
+ const char *addr_str)
+{
+ unsigned char ip[INET6_ADDRSTRLEN];
+
+ if (inet_pton(AF_INET6, addr_str, ip)) {
+ update_server_addr(sv, ip, AF_INET6, "stats command");
+ return NULL;
+ }
+ if (inet_pton(AF_INET, addr_str, ip)) {
+ update_server_addr(sv, ip, AF_INET, "stats command");
+ return NULL;
+ }
+
+ return "Could not understand IP address format.\n";
+}
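+
+/* Typical inputs for the parser above (illustrative): "2001:db8::1" takes
+ * the AF_INET6 branch and "192.0.2.10" the AF_INET one; anything else,
+ * including "host:port" forms which are not handled here, gets the error
+ * string back.
+ */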
+
+int parse_server(const char *file, int linenum, char **args, struct proxy *curproxy, struct proxy *defproxy)
+{
+ struct server *newsrv = NULL;
+ const char *err;
+ char *errmsg = NULL;
+ int err_code = 0;
+ unsigned val;
+ char *fqdn = NULL;
+
+ if (!strcmp(args[0], "server") || !strcmp(args[0], "default-server")) { /* server address */
+ int cur_arg;
+ short realport = 0;
+ int do_agent = 0, do_check = 0, defsrv = (*args[0] == 'd');
+
+ if (!defsrv && curproxy == defproxy) {
+ Alert("parsing [%s:%d] : '%s' not allowed in 'defaults' section.\n", file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ else if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[0], NULL))
+ err_code |= ERR_ALERT | ERR_FATAL;
+
+ if (!*args[2]) {
+ Alert("parsing [%s:%d] : '%s' expects <name> and <addr>[:<port>] as arguments.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ err = invalid_char(args[1]);
+ if (err && !defsrv) {
+ Alert("parsing [%s:%d] : character '%c' is not permitted in server name '%s'.\n",
+ file, linenum, *err, args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (!defsrv) {
+ struct sockaddr_storage *sk;
+ int port1, port2;
+ struct protocol *proto;
+ struct dns_resolution *curr_resolution;
+
+ if ((newsrv = (struct server *)calloc(1, sizeof(struct server))) == NULL) {
+ Alert("parsing [%s:%d] : out of memory.\n", file, linenum);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ /* the servers are linked backwards first */
+ newsrv->next = curproxy->srv;
+ curproxy->srv = newsrv;
+ newsrv->proxy = curproxy;
+ newsrv->conf.file = strdup(file);
+ newsrv->conf.line = linenum;
+
+ newsrv->obj_type = OBJ_TYPE_SERVER;
+ LIST_INIT(&newsrv->actconns);
+ LIST_INIT(&newsrv->pendconns);
+ LIST_INIT(&newsrv->priv_conns);
+ LIST_INIT(&newsrv->idle_conns);
+ LIST_INIT(&newsrv->safe_conns);
+ do_check = 0;
+ do_agent = 0;
+ newsrv->flags = 0;
+ newsrv->admin = 0;
+ newsrv->state = SRV_ST_RUNNING; /* early server setup */
+ newsrv->last_change = now.tv_sec;
+ newsrv->id = strdup(args[1]);
+
+ /* several ways to check the port component :
+ * - IP => port=+0, relative (IPv4 only)
+ * - IP: => port=+0, relative
+ * - IP:N => port=N, absolute
+ * - IP:+N => port=+N, relative
+ * - IP:-N => port=-N, relative
+ */
+ sk = str2sa_range(args[2], &port1, &port2, &errmsg, NULL, &fqdn, 1);
+ if (!sk) {
+ Alert("parsing [%s:%d] : '%s %s' : %s\n", file, linenum, args[0], args[1], errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ proto = protocol_by_family(sk->ss_family);
+ if (!proto || !proto->connect) {
+ Alert("parsing [%s:%d] : '%s %s' : connect() not supported for this address family.\n",
+ file, linenum, args[0], args[1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (!port1 || !port2) {
+ /* no port specified, +offset, -offset */
+ newsrv->flags |= SRV_F_MAPPORTS;
+ }
+ else if (port1 != port2) {
+ /* port range */
+ Alert("parsing [%s:%d] : '%s %s' : port ranges are not allowed in '%s'\n",
+ file, linenum, args[0], args[1], args[2]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ else {
+ /* used by checks */
+ realport = port1;
+ }
+
+ /* save hostname and create associated name resolution */
+ newsrv->hostname = fqdn;
+ if (!fqdn)
+ goto skip_name_resolution;
+
+ fqdn = NULL;
+ if ((curr_resolution = calloc(1, sizeof(struct dns_resolution))) == NULL)
+ goto skip_name_resolution;
+
+ curr_resolution->hostname_dn_len = dns_str_to_dn_label_len(newsrv->hostname);
+ if ((curr_resolution->hostname_dn = calloc(curr_resolution->hostname_dn_len + 1, sizeof(char))) == NULL)
+ goto skip_name_resolution;
+ if ((dns_str_to_dn_label(newsrv->hostname, curr_resolution->hostname_dn, curr_resolution->hostname_dn_len + 1)) == NULL) {
+ Alert("parsing [%s:%d] : Invalid hostname '%s'\n",
+ file, linenum, args[2]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ curr_resolution->requester = newsrv;
+ curr_resolution->requester_cb = snr_resolution_cb;
+ curr_resolution->requester_error_cb = snr_resolution_error_cb;
+ curr_resolution->status = RSLV_STATUS_NONE;
+ curr_resolution->step = RSLV_STEP_NONE;
+ /* a first resolution has been done by the configuration parser */
+ curr_resolution->last_resolution = 0;
+ newsrv->resolution = curr_resolution;
+
+ skip_name_resolution:
+ newsrv->addr = *sk;
+ newsrv->xprt = newsrv->check.xprt = newsrv->agent.xprt = &raw_sock;
+
+ if (!protocol_by_family(newsrv->addr.ss_family)) {
+ Alert("parsing [%s:%d] : Unknown protocol family %d '%s'\n",
+ file, linenum, newsrv->addr.ss_family, args[2]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ newsrv->check.use_ssl = curproxy->defsrv.check.use_ssl;
+ newsrv->check.port = curproxy->defsrv.check.port;
+ newsrv->check.inter = curproxy->defsrv.check.inter;
+ newsrv->check.fastinter = curproxy->defsrv.check.fastinter;
+ newsrv->check.downinter = curproxy->defsrv.check.downinter;
+ newsrv->agent.use_ssl = curproxy->defsrv.agent.use_ssl;
+ newsrv->agent.port = curproxy->defsrv.agent.port;
+ newsrv->agent.inter = curproxy->defsrv.agent.inter;
+ newsrv->agent.fastinter = curproxy->defsrv.agent.fastinter;
+ newsrv->agent.downinter = curproxy->defsrv.agent.downinter;
+ newsrv->maxqueue = curproxy->defsrv.maxqueue;
+ newsrv->minconn = curproxy->defsrv.minconn;
+ newsrv->maxconn = curproxy->defsrv.maxconn;
+ newsrv->slowstart = curproxy->defsrv.slowstart;
+ newsrv->onerror = curproxy->defsrv.onerror;
+ newsrv->onmarkeddown = curproxy->defsrv.onmarkeddown;
+ newsrv->onmarkedup = curproxy->defsrv.onmarkedup;
+ newsrv->consecutive_errors_limit
+ = curproxy->defsrv.consecutive_errors_limit;
+#ifdef OPENSSL
+ newsrv->use_ssl = curproxy->defsrv.use_ssl;
+#endif
+ newsrv->uweight = newsrv->iweight
+ = curproxy->defsrv.iweight;
+
+ newsrv->check.status = HCHK_STATUS_INI;
+ newsrv->check.rise = curproxy->defsrv.check.rise;
+ newsrv->check.fall = curproxy->defsrv.check.fall;
+ newsrv->check.health = newsrv->check.rise; /* up, but will fall down at first failure */
+ newsrv->check.server = newsrv;
+ newsrv->check.tcpcheck_rules = &curproxy->tcpcheck_rules;
+
+ newsrv->agent.status = HCHK_STATUS_INI;
+ newsrv->agent.rise = curproxy->defsrv.agent.rise;
+ newsrv->agent.fall = curproxy->defsrv.agent.fall;
+ newsrv->agent.health = newsrv->agent.rise; /* up, but will fall down at first failure */
+ newsrv->agent.server = newsrv;
+ newsrv->resolver_family_priority = curproxy->defsrv.resolver_family_priority;
+ if (newsrv->resolver_family_priority == AF_UNSPEC)
+ newsrv->resolver_family_priority = AF_INET6;
+
+ cur_arg = 3;
+ } else {
+ newsrv = &curproxy->defsrv;
+ cur_arg = 1;
+ newsrv->resolver_family_priority = AF_INET6;
+ }
+
+ while (*args[cur_arg]) {
+ if (!strcmp(args[cur_arg], "agent-check")) {
+ global.maxsock++;
+ do_agent = 1;
+ cur_arg += 1;
+ } else if (!strcmp(args[cur_arg], "agent-inter")) {
+ const char *err = parse_time_err(args[cur_arg + 1], &val, TIME_UNIT_MS);
+ if (err) {
+ Alert("parsing [%s:%d] : unexpected character '%c' in 'agent-inter' argument of server %s.\n",
+ file, linenum, *err, newsrv->id);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ if (val <= 0) {
+ Alert("parsing [%s:%d]: invalid value %d for argument '%s' of server %s.\n",
+ file, linenum, val, args[cur_arg], newsrv->id);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ newsrv->agent.inter = val;
+ cur_arg += 2;
+ }
+ else if (!strcmp(args[cur_arg], "agent-port")) {
+ global.maxsock++;
+ newsrv->agent.port = atol(args[cur_arg + 1]);
+ cur_arg += 2;
+ }
+ else if (!defsrv && !strcmp(args[cur_arg], "cookie")) {
+ newsrv->cookie = strdup(args[cur_arg + 1]);
+ newsrv->cklen = strlen(args[cur_arg + 1]);
+ cur_arg += 2;
+ }
+ else if (!defsrv && !strcmp(args[cur_arg], "redir")) {
+ newsrv->rdr_pfx = strdup(args[cur_arg + 1]);
+ newsrv->rdr_len = strlen(args[cur_arg + 1]);
+ cur_arg += 2;
+ }
+ else if (!strcmp(args[cur_arg], "resolvers")) {
+ newsrv->resolvers_id = strdup(args[cur_arg + 1]);
+ cur_arg += 2;
+ }
+ else if (!strcmp(args[cur_arg], "resolve-prefer")) {
+ if (!strcmp(args[cur_arg + 1], "ipv4"))
+ newsrv->resolver_family_priority = AF_INET;
+ else if (!strcmp(args[cur_arg + 1], "ipv6"))
+ newsrv->resolver_family_priority = AF_INET6;
+ else {
+ Alert("parsing [%s:%d]: '%s' expects either ipv4 or ipv6 as argument.\n",
+ file, linenum, args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ cur_arg += 2;
+ }
+ else if (!strcmp(args[cur_arg], "rise")) {
+ if (!*args[cur_arg + 1]) {
+ Alert("parsing [%s:%d]: '%s' expects an integer argument.\n",
+ file, linenum, args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ newsrv->check.rise = atol(args[cur_arg + 1]);
+ if (newsrv->check.rise <= 0) {
+ Alert("parsing [%s:%d]: '%s' has to be > 0.\n",
+ file, linenum, args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (newsrv->check.health)
+ newsrv->check.health = newsrv->check.rise;
+ cur_arg += 2;
+ }
+ else if (!strcmp(args[cur_arg], "fall")) {
+ if (!*args[cur_arg + 1]) {
+ Alert("parsing [%s:%d]: '%s' expects an integer argument.\n",
+ file, linenum, args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ newsrv->check.fall = atol(args[cur_arg + 1]);
+
+ if (newsrv->check.fall <= 0) {
+ Alert("parsing [%s:%d]: '%s' has to be > 0.\n",
+ file, linenum, args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ cur_arg += 2;
+ }
+ else if (!strcmp(args[cur_arg], "inter")) {
+ const char *err = parse_time_err(args[cur_arg + 1], &val, TIME_UNIT_MS);
+ if (err) {
+ Alert("parsing [%s:%d] : unexpected character '%c' in 'inter' argument of server %s.\n",
+ file, linenum, *err, newsrv->id);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ if (val <= 0) {
+ Alert("parsing [%s:%d]: invalid value %d for argument '%s' of server %s.\n",
+ file, linenum, val, args[cur_arg], newsrv->id);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ newsrv->check.inter = val;
+ cur_arg += 2;
+ }
+ else if (!strcmp(args[cur_arg], "fastinter")) {
+ const char *err = parse_time_err(args[cur_arg + 1], &val, TIME_UNIT_MS);
+ if (err) {
+ Alert("parsing [%s:%d]: unexpected character '%c' in 'fastinter' argument of server %s.\n",
+ file, linenum, *err, newsrv->id);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ if (val <= 0) {
+ Alert("parsing [%s:%d]: invalid value %d for argument '%s' of server %s.\n",
+ file, linenum, val, args[cur_arg], newsrv->id);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ newsrv->check.fastinter = val;
+ cur_arg += 2;
+ }
+ else if (!strcmp(args[cur_arg], "downinter")) {
+ const char *err = parse_time_err(args[cur_arg + 1], &val, TIME_UNIT_MS);
+ if (err) {
+ Alert("parsing [%s:%d]: unexpected character '%c' in 'downinter' argument of server %s.\n",
+ file, linenum, *err, newsrv->id);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ if (val <= 0) {
+ Alert("parsing [%s:%d]: invalid value %d for argument '%s' of server %s.\n",
+ file, linenum, val, args[cur_arg], newsrv->id);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ newsrv->check.downinter = val;
+ cur_arg += 2;
+ }
+ else if (!defsrv && !strcmp(args[cur_arg], "addr")) {
+ struct sockaddr_storage *sk;
+ int port1, port2;
+ struct protocol *proto;
+
+ sk = str2sa_range(args[cur_arg + 1], &port1, &port2, &errmsg, NULL, NULL, 1);
+ if (!sk) {
+ Alert("parsing [%s:%d] : '%s' : %s\n",
+ file, linenum, args[cur_arg], errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ proto = protocol_by_family(sk->ss_family);
+ if (!proto || !proto->connect) {
+ Alert("parsing [%s:%d] : '%s %s' : connect() not supported for this address family.\n",
+ file, linenum, args[cur_arg], args[cur_arg + 1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (port1 != port2) {
+ Alert("parsing [%s:%d] : '%s' : port ranges and offsets are not allowed in '%s'\n",
+ file, linenum, args[cur_arg], args[cur_arg + 1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ newsrv->check.addr = newsrv->agent.addr = *sk;
+ cur_arg += 2;
+ }
+ else if (!strcmp(args[cur_arg], "port")) {
+ newsrv->check.port = atol(args[cur_arg + 1]);
+ cur_arg += 2;
+ }
+ else if (!defsrv && !strcmp(args[cur_arg], "backup")) {
+ newsrv->flags |= SRV_F_BACKUP;
+ cur_arg ++;
+ }
+ else if (!defsrv && !strcmp(args[cur_arg], "non-stick")) {
+ newsrv->flags |= SRV_F_NON_STICK;
+ cur_arg ++;
+ }
+ else if (!defsrv && !strcmp(args[cur_arg], "send-proxy")) {
+ newsrv->pp_opts |= SRV_PP_V1;
+ cur_arg ++;
+ }
+ else if (!defsrv && !strcmp(args[cur_arg], "send-proxy-v2")) {
+ newsrv->pp_opts |= SRV_PP_V2;
+ cur_arg ++;
+ }
+ else if (!defsrv && !strcmp(args[cur_arg], "check-send-proxy")) {
+ newsrv->check.send_proxy = 1;
+ cur_arg ++;
+ }
+ else if (!strcmp(args[cur_arg], "weight")) {
+ int w;
+ w = atol(args[cur_arg + 1]);
+ if (w < 0 || w > SRV_UWGHT_MAX) {
+ Alert("parsing [%s:%d] : weight of server %s is not within 0 and %d (%d).\n",
+ file, linenum, newsrv->id, SRV_UWGHT_MAX, w);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ newsrv->uweight = newsrv->iweight = w;
+ cur_arg += 2;
+ }
+ else if (!strcmp(args[cur_arg], "minconn")) {
+ newsrv->minconn = atol(args[cur_arg + 1]);
+ cur_arg += 2;
+ }
+ else if (!strcmp(args[cur_arg], "maxconn")) {
+ newsrv->maxconn = atol(args[cur_arg + 1]);
+ cur_arg += 2;
+ }
+ else if (!strcmp(args[cur_arg], "maxqueue")) {
+ newsrv->maxqueue = atol(args[cur_arg + 1]);
+ cur_arg += 2;
+ }
+ else if (!strcmp(args[cur_arg], "slowstart")) {
+ /* slowstart is stored in seconds */
+ const char *err = parse_time_err(args[cur_arg + 1], &val, TIME_UNIT_MS);
+ if (err) {
+ Alert("parsing [%s:%d] : unexpected character '%c' in 'slowstart' argument of server %s.\n",
+ file, linenum, *err, newsrv->id);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ newsrv->slowstart = (val + 999) / 1000;
+ cur_arg += 2;
+ }
+ else if (!defsrv && !strcmp(args[cur_arg], "track")) {
+
+ if (!*args[cur_arg + 1]) {
+ Alert("parsing [%s:%d]: 'track' expects [<proxy>/]<server> as argument.\n",
+ file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ newsrv->trackit = strdup(args[cur_arg + 1]);
+
+ cur_arg += 2;
+ }
+ else if (!defsrv && !strcmp(args[cur_arg], "check")) {
+ global.maxsock++;
+ do_check = 1;
+ cur_arg += 1;
+ }
+ else if (!defsrv && !strcmp(args[cur_arg], "disabled")) {
+ newsrv->admin |= SRV_ADMF_CMAINT;
+ newsrv->admin |= SRV_ADMF_FMAINT;
+ newsrv->state = SRV_ST_STOPPED;
+ newsrv->check.state |= CHK_ST_PAUSED;
+ newsrv->check.health = 0;
+ cur_arg += 1;
+ }
+ else if (!defsrv && !strcmp(args[cur_arg], "observe")) {
+ if (!strcmp(args[cur_arg + 1], "none"))
+ newsrv->observe = HANA_OBS_NONE;
+ else if (!strcmp(args[cur_arg + 1], "layer4"))
+ newsrv->observe = HANA_OBS_LAYER4;
+ else if (!strcmp(args[cur_arg + 1], "layer7")) {
+ if (curproxy->mode != PR_MODE_HTTP) {
+ Alert("parsing [%s:%d]: '%s' can only be used in http proxies.\n",
+ file, linenum, args[cur_arg + 1]);
+ err_code |= ERR_ALERT;
+ }
+ newsrv->observe = HANA_OBS_LAYER7;
+ }
+ else {
+ Alert("parsing [%s:%d]: '%s' expects one of 'none', "
+ "'layer4', 'layer7' but got '%s'\n",
+ file, linenum, args[cur_arg], args[cur_arg + 1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ cur_arg += 2;
+ }
+ else if (!strcmp(args[cur_arg], "on-error")) {
+ if (!strcmp(args[cur_arg + 1], "fastinter"))
+ newsrv->onerror = HANA_ONERR_FASTINTER;
+ else if (!strcmp(args[cur_arg + 1], "fail-check"))
+ newsrv->onerror = HANA_ONERR_FAILCHK;
+ else if (!strcmp(args[cur_arg + 1], "sudden-death"))
+ newsrv->onerror = HANA_ONERR_SUDDTH;
+ else if (!strcmp(args[cur_arg + 1], "mark-down"))
+ newsrv->onerror = HANA_ONERR_MARKDWN;
+ else {
+ Alert("parsing [%s:%d]: '%s' expects one of 'fastinter', "
+ "'fail-check', 'sudden-death' or 'mark-down' but got '%s'\n",
+ file, linenum, args[cur_arg], args[cur_arg + 1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ cur_arg += 2;
+ }
+ else if (!strcmp(args[cur_arg], "on-marked-down")) {
+ if (!strcmp(args[cur_arg + 1], "shutdown-sessions"))
+ newsrv->onmarkeddown = HANA_ONMARKEDDOWN_SHUTDOWNSESSIONS;
+ else {
+ Alert("parsing [%s:%d]: '%s' expects 'shutdown-sessions' but got '%s'\n",
+ file, linenum, args[cur_arg], args[cur_arg + 1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ cur_arg += 2;
+ }
+ else if (!strcmp(args[cur_arg], "on-marked-up")) {
+ if (!strcmp(args[cur_arg + 1], "shutdown-backup-sessions"))
+ newsrv->onmarkedup = HANA_ONMARKEDUP_SHUTDOWNBACKUPSESSIONS;
+ else {
+ Alert("parsing [%s:%d]: '%s' expects 'shutdown-backup-sessions' but got '%s'\n",
+ file, linenum, args[cur_arg], args[cur_arg + 1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ cur_arg += 2;
+ }
+ else if (!strcmp(args[cur_arg], "error-limit")) {
+ if (!*args[cur_arg + 1]) {
+ Alert("parsing [%s:%d]: '%s' expects an integer argument.\n",
+ file, linenum, args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ newsrv->consecutive_errors_limit = atoi(args[cur_arg + 1]);
+
+ if (newsrv->consecutive_errors_limit <= 0) {
+ Alert("parsing [%s:%d]: %s has to be > 0.\n",
+ file, linenum, args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ cur_arg += 2;
+ }
+ else if (!defsrv && !strcmp(args[cur_arg], "source")) { /* address to which we bind when connecting */
+ int port_low, port_high;
+ struct sockaddr_storage *sk;
+ struct protocol *proto;
+
+ if (!*args[cur_arg + 1]) {
+ Alert("parsing [%s:%d] : '%s' expects <addr>[:<port>[-<port>]], and optionally '%s' <addr>, and '%s' <name> as argument.\n",
+ file, linenum, "source", "usesrc", "interface");
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ newsrv->conn_src.opts |= CO_SRC_BIND;
+ sk = str2sa_range(args[cur_arg + 1], &port_low, &port_high, &errmsg, NULL, NULL, 1);
+ if (!sk) {
+ Alert("parsing [%s:%d] : '%s %s' : %s\n",
+ file, linenum, args[cur_arg], args[cur_arg+1], errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ proto = protocol_by_family(sk->ss_family);
+ if (!proto || !proto->connect) {
+ Alert("parsing [%s:%d] : '%s %s' : connect() not supported for this address family.\n",
+ file, linenum, args[cur_arg], args[cur_arg+1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ newsrv->conn_src.source_addr = *sk;
+
+ if (port_low != port_high) {
+ int i;
+
+ if (!port_low || !port_high) {
+ Alert("parsing [%s:%d] : %s does not support port offsets (found '%s').\n",
+ file, linenum, args[cur_arg], args[cur_arg + 1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (port_low <= 0 || port_low > 65535 ||
+ port_high <= 0 || port_high > 65535 ||
+ port_low > port_high) {
+ Alert("parsing [%s:%d] : invalid source port range %d-%d.\n",
+ file, linenum, port_low, port_high);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ newsrv->conn_src.sport_range = port_range_alloc_range(port_high - port_low + 1);
+ for (i = 0; i < newsrv->conn_src.sport_range->size; i++)
+ newsrv->conn_src.sport_range->ports[i] = port_low + i;
+ }
+
+ cur_arg += 2;
+ while (*(args[cur_arg])) {
+ if (!strcmp(args[cur_arg], "usesrc")) { /* address to use outside */
+#if defined(CONFIG_HAP_TRANSPARENT)
+ if (!*args[cur_arg + 1]) {
+ Alert("parsing [%s:%d] : '%s' expects <addr>[:<port>], 'client', 'clientip', or 'hdr_ip(name,#)' as argument.\n",
+ file, linenum, "usesrc");
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ if (!strcmp(args[cur_arg + 1], "client")) {
+ newsrv->conn_src.opts &= ~CO_SRC_TPROXY_MASK;
+ newsrv->conn_src.opts |= CO_SRC_TPROXY_CLI;
+ } else if (!strcmp(args[cur_arg + 1], "clientip")) {
+ newsrv->conn_src.opts &= ~CO_SRC_TPROXY_MASK;
+ newsrv->conn_src.opts |= CO_SRC_TPROXY_CIP;
+ } else if (!strncmp(args[cur_arg + 1], "hdr_ip(", 7)) {
+ char *name, *end;
+
+ name = args[cur_arg+1] + 7;
+ while (isspace(*name))
+ name++;
+
+ end = name;
+ while (*end && !isspace(*end) && *end != ',' && *end != ')')
+ end++;
+
+ newsrv->conn_src.opts &= ~CO_SRC_TPROXY_MASK;
+ newsrv->conn_src.opts |= CO_SRC_TPROXY_DYN;
+ newsrv->conn_src.bind_hdr_name = calloc(1, end - name + 1);
+ newsrv->conn_src.bind_hdr_len = end - name;
+ memcpy(newsrv->conn_src.bind_hdr_name, name, end - name);
+ newsrv->conn_src.bind_hdr_name[end-name] = '\0';
+ newsrv->conn_src.bind_hdr_occ = -1;
+
+ /* now look for an occurrence number */
+ while (isspace(*end))
+ end++;
+ if (*end == ',') {
+ end++;
+ name = end;
+ if (*end == '-')
+ end++;
+ while (isdigit((int)*end))
+ end++;
+ newsrv->conn_src.bind_hdr_occ = strl2ic(name, end-name);
+ }
+
+ if (newsrv->conn_src.bind_hdr_occ < -MAX_HDR_HISTORY) {
+ Alert("parsing [%s:%d] : usesrc hdr_ip(name,num) does not support negative"
+ " occurrence values smaller than %d.\n",
+ file, linenum, -MAX_HDR_HISTORY);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ } else {
+ struct sockaddr_storage *sk;
+ int port1, port2;
+
+ sk = str2sa_range(args[cur_arg + 1], &port1, &port2, &errmsg, NULL, NULL, 1);
+ if (!sk) {
+ Alert("parsing [%s:%d] : '%s %s' : %s\n",
+ file, linenum, args[cur_arg], args[cur_arg+1], errmsg);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ proto = protocol_by_family(sk->ss_family);
+ if (!proto || !proto->connect) {
+ Alert("parsing [%s:%d] : '%s %s' : connect() not supported for this address family.\n",
+ file, linenum, args[cur_arg], args[cur_arg+1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (port1 != port2) {
+ Alert("parsing [%s:%d] : '%s' : port ranges and offsets are not allowed in '%s'\n",
+ file, linenum, args[cur_arg], args[cur_arg + 1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ newsrv->conn_src.tproxy_addr = *sk;
+ newsrv->conn_src.opts |= CO_SRC_TPROXY_ADDR;
+ }
+ global.last_checks |= LSTCHK_NETADM;
+ cur_arg += 2;
+ continue;
+#else /* no TPROXY support */
+ Alert("parsing [%s:%d] : '%s' not allowed here because support for TPROXY was not compiled in.\n",
+ file, linenum, "usesrc");
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+#endif /* defined(CONFIG_HAP_TRANSPARENT) */
+ } /* "usesrc" */
+
+ if (!strcmp(args[cur_arg], "interface")) { /* specifically bind to this interface */
+#ifdef SO_BINDTODEVICE
+ if (!*args[cur_arg + 1]) {
+ Alert("parsing [%s:%d] : '%s' : missing interface name.\n",
+ file, linenum, args[0]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ free(newsrv->conn_src.iface_name);
+ newsrv->conn_src.iface_name = strdup(args[cur_arg + 1]);
+ newsrv->conn_src.iface_len = strlen(newsrv->conn_src.iface_name);
+ global.last_checks |= LSTCHK_NETADM;
+#else
+ Alert("parsing [%s:%d] : '%s' : '%s' option not implemented.\n",
+ file, linenum, args[0], args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+#endif
+ cur_arg += 2;
+ continue;
+ }
+ /* this keyword is not an option of "source" */
+ break;
+ } /* while */
+ }
+ else if (!defsrv && !strcmp(args[cur_arg], "usesrc")) { /* address to use outside: needs "source" first */
+ Alert("parsing [%s:%d] : '%s' only allowed after a '%s' statement.\n",
+ file, linenum, "usesrc", "source");
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ else if (!defsrv && !strcmp(args[cur_arg], "namespace")) {
+#ifdef CONFIG_HAP_NS
+ char *arg = args[cur_arg + 1];
+ if (!strcmp(arg, "*")) {
+ newsrv->flags |= SRV_F_USE_NS_FROM_PP;
+ } else {
+ newsrv->netns = netns_store_lookup(arg, strlen(arg));
+
+ if (newsrv->netns == NULL)
+ newsrv->netns = netns_store_insert(arg);
+
+ if (newsrv->netns == NULL) {
+ Alert("Cannot open namespace '%s'.\n", args[cur_arg + 1]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+#else
+ Alert("parsing [%s:%d] : '%s' : '%s' option not implemented.\n",
+ file, linenum, args[0], args[cur_arg]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+#endif
+ cur_arg += 2;
+ }
+ else {
+ static int srv_dumped;
+ struct srv_kw *kw;
+ char *err;
+
+ kw = srv_find_kw(args[cur_arg]);
+ if (kw) {
+ char *err = NULL;
+ int code;
+
+ if (!kw->parse) {
+ Alert("parsing [%s:%d] : '%s %s' : '%s' option is not implemented in this version (check build options).\n",
+ file, linenum, args[0], args[1], args[cur_arg]);
+ cur_arg += 1 + kw->skip;
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (defsrv && !kw->default_ok) {
+ Alert("parsing [%s:%d] : '%s %s' : '%s' option is not accepted in default-server sections.\n",
+ file, linenum, args[0], args[1], args[cur_arg]);
+ cur_arg += 1 + kw->skip;
+ err_code |= ERR_ALERT;
+ continue;
+ }
+
+ code = kw->parse(args, &cur_arg, curproxy, newsrv, &err);
+ err_code |= code;
+
+ if (code) {
+ if (err && *err) {
+ indent_msg(&err, 2);
+ Alert("parsing [%s:%d] : '%s %s' : %s\n", file, linenum, args[0], args[1], err);
+ }
+ else
+ Alert("parsing [%s:%d] : '%s %s' : error encountered while processing '%s'.\n",
+ file, linenum, args[0], args[1], args[cur_arg]);
+ if (code & ERR_FATAL) {
+ free(err);
+ cur_arg += 1 + kw->skip;
+ goto out;
+ }
+ }
+ free(err);
+ cur_arg += 1 + kw->skip;
+ continue;
+ }
+
+ err = NULL;
+ if (!srv_dumped) {
+ srv_dump_kws(&err);
+ indent_msg(&err, 4);
+ srv_dumped = 1;
+ }
+
+ Alert("parsing [%s:%d] : '%s %s' unknown keyword '%s'.%s%s\n",
+ file, linenum, args[0], args[1], args[cur_arg],
+ err ? " Registered keywords :" : "", err ? err : "");
+ free(err);
+
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+
+ if (do_check) {
+ const char *ret;
+
+ if (newsrv->trackit) {
+ Alert("parsing [%s:%d]: unable to enable checks and tracking at the same time!\n",
+ file, linenum);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ /* If neither a port nor an addr was specified and no check transport
+ * layer is forced, then the transport layer used by the checks is the
+ * same as for the production traffic. Otherwise we use raw_sock by
+ * default, unless one is specified.
+ */
+ if (!newsrv->check.port && !is_addr(&newsrv->check.addr)) {
+#ifdef USE_OPENSSL
+ newsrv->check.use_ssl |= (newsrv->use_ssl || (newsrv->proxy->options & PR_O_TCPCHK_SSL));
+#endif
+ newsrv->check.send_proxy |= (newsrv->pp_opts);
+ }
+ /* try to get the port from check.addr if check.port not set */
+ if (!newsrv->check.port)
+ newsrv->check.port = get_host_port(&newsrv->check.addr);
+
+ if (!newsrv->check.port)
+ newsrv->check.port = realport; /* by default */
+
+ if (!newsrv->check.port) {
+ /* not yet valid, because no port was set on
+ * the server either. We'll check if we have
+ * a known port on the first listener.
+ */
+ struct listener *l;
+
+ list_for_each_entry(l, &curproxy->conf.listeners, by_fe) {
+ newsrv->check.port = get_host_port(&l->addr);
+ if (newsrv->check.port)
+ break;
+ }
+ }
+ /*
+ * When checking an IPv4/IPv6 server, we need at least a service port or a
+ * check port, or the first tcp-check rule must be a 'connect' one with a port.
+ */
+ if (!newsrv->check.port &&
+ (is_inet_addr(&newsrv->check.addr) ||
+ (!is_addr(&newsrv->check.addr) && is_inet_addr(&newsrv->addr)))) {
+ struct tcpcheck_rule *n = NULL, *r = NULL;
+ struct list *l;
+
+ r = (struct tcpcheck_rule *)newsrv->proxy->tcpcheck_rules.n;
+ if (!r) {
+ Alert("parsing [%s:%d] : server %s has neither service port nor check port. Check has been disabled.\n",
+ file, linenum, newsrv->id);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ /* search the first action (connect / send / expect) in the list */
+ l = &newsrv->proxy->tcpcheck_rules;
+ list_for_each_entry(n, l, list) {
+ r = (struct tcpcheck_rule *)n->list.n;
+ if (r->action != TCPCHK_ACT_COMMENT)
+ break;
+ }
+ if ((r->action != TCPCHK_ACT_CONNECT) || !r->port) {
+ Alert("parsing [%s:%d] : server %s has neither service port nor check port nor tcp_check rule 'connect' with port information. Check has been disabled.\n",
+ file, linenum, newsrv->id);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ else {
+ /* scan the tcp-check ruleset to ensure a port has been configured */
+ l = &newsrv->proxy->tcpcheck_rules;
+ list_for_each_entry(n, l, list) {
+ r = (struct tcpcheck_rule *)n->list.n;
+ if ((r->action == TCPCHK_ACT_CONNECT) && (!r->port)) {
+ Alert("parsing [%s:%d] : server %s has neither service port nor check port, and a tcp_check rule 'connect' with no port information. Check has been disabled.\n",
+ file, linenum, newsrv->id);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+ }
+ }
+ }
+
+ /* note: check type will be set during the config review phase */
+ ret = init_check(&newsrv->check, 0);
+ if (ret) {
+ Alert("parsing [%s:%d] : %s.\n", file, linenum, ret);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ if (newsrv->resolution)
+ newsrv->resolution->resolver_family_priority = newsrv->resolver_family_priority;
+
+ newsrv->check.state |= CHK_ST_CONFIGURED | CHK_ST_ENABLED;
+ }
+
+ if (do_agent) {
+ const char *ret;
+
+ if (!newsrv->agent.port) {
+ Alert("parsing [%s:%d] : server %s does not have agent port. Agent check has been disabled.\n",
+ file, linenum, newsrv->id);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
+
+ if (!newsrv->agent.inter)
+ newsrv->agent.inter = newsrv->check.inter;
+
+ ret = init_check(&newsrv->agent, PR_O2_LB_AGENT_CHK);
+ if (ret) {
+ Alert("parsing [%s:%d] : %s.\n", file, linenum, ret);
+ err_code |= ERR_ALERT | ERR_ABORT;
+ goto out;
+ }
+
+ newsrv->agent.state |= CHK_ST_CONFIGURED | CHK_ST_ENABLED | CHK_ST_AGENT;
+ }
+
+ if (!defsrv) {
+ if (newsrv->flags & SRV_F_BACKUP)
+ curproxy->srv_bck++;
+ else
+ curproxy->srv_act++;
+
+ srv_lb_commit_status(newsrv);
+ }
+ }
+ free(fqdn);
+ return 0;
+
+ out:
+ free(fqdn);
+ free(errmsg);
+ return err_code;
+}
+
+/* Returns a pointer to the first server matching id <id>.
+ * NULL is returned if no match is found.
+ * The lookup is performed in the backend <bk>.
+ */
+struct server *server_find_by_id(struct proxy *bk, int id)
+{
+ struct eb32_node *eb32;
+ struct server *curserver;
+
+ if (!bk || (id == 0))
+ return NULL;
+
+ /* <bk> has no backend capabilities, so it can't have a server */
+ if (!(bk->cap & PR_CAP_BE))
+ return NULL;
+
+ curserver = NULL;
+
+ eb32 = eb32_lookup(&bk->conf.used_server_id, id);
+ if (eb32)
+ curserver = container_of(eb32, struct server, conf.id);
+
+ return curserver;
+}
+
+/* Returns a pointer to the first server matching either name <name>, or id
+ * if <name> starts with a '#'. NULL is returned if no match is found.
+ * The lookup is performed in the backend <bk>.
+ */
+struct server *server_find_by_name(struct proxy *bk, const char *name)
+{
+ struct server *curserver;
+
+ if (!bk || !name)
+ return NULL;
+
+ /* <bk> has no backend capabilities, so it can't have a server */
+ if (!(bk->cap & PR_CAP_BE))
+ return NULL;
+
+ curserver = NULL;
+ if (*name == '#') {
+ curserver = server_find_by_id(bk, atoi(name + 1));
+ if (curserver)
+ return curserver;
+ }
+ else {
+ curserver = bk->srv;
+
+ while (curserver && (strcmp(curserver->id, name) != 0))
+ curserver = curserver->next;
+
+ if (curserver)
+ return curserver;
+ }
+
+ return NULL;
+}
+
+struct server *server_find_best_match(struct proxy *bk, char *name, int id, int *diff)
+{
+ struct server *byname;
+ struct server *byid;
+
+ if (!name && !id)
+ return NULL;
+
+ if (diff)
+ *diff = 0;
+
+ byname = byid = NULL;
+
+ if (name) {
+ byname = server_find_by_name(bk, name);
+ if (byname && (!id || byname->puid == id))
+ return byname;
+ }
+
+ /* remaining possibilities :
+ * - name not set
+ * - name set but not found
+ * - name found but ID doesn't match
+ */
+ if (id) {
+ byid = server_find_by_id(bk, id);
+ if (byid) {
+ if (byname) {
+ /* use id only if forced by configuration */
+ if (byid->flags & SRV_F_FORCED_ID) {
+ if (diff)
+ *diff |= 2;
+ return byid;
+ }
+ else {
+ if (diff)
+ *diff |= 1;
+ return byname;
+ }
+ }
+
+ /* remaining possibilities:
+ * - name not set
+ * - name set but not found
+ */
+ if (name && diff)
+ *diff |= 2;
+ return byid;
+ }
+
+ /* id not found */
+ if (byname) {
+ if (diff)
+ *diff |= 1;
+ return byname;
+ }
+ }
+
+ return NULL;
+}
+
+/* Update a server state using the parameters available in the params list */
+static void srv_update_state(struct server *srv, int version, char **params)
+{
+ char *p;
+ struct chunk *msg;
+
+ /* fields since version 1
+ * and common to all other upcoming versions
+ */
+ struct sockaddr_storage addr;
+ enum srv_state srv_op_state;
+ enum srv_admin srv_admin_state;
+ unsigned srv_uweight, srv_iweight;
+ unsigned long srv_last_time_change;
+ short srv_check_status;
+ enum chk_result srv_check_result;
+ int srv_check_health;
+ int srv_check_state, srv_agent_state;
+ int bk_f_forced_id;
+ int srv_f_forced_id;
+
+ msg = get_trash_chunk();
+ switch (version) {
+ case 1:
+ /*
+ * now we can proceed with server's state update:
+ * srv_addr: params[0]
+ * srv_op_state: params[1]
+ * srv_admin_state: params[2]
+ * srv_uweight: params[3]
+ * srv_iweight: params[4]
+ * srv_last_time_change: params[5]
+ * srv_check_status: params[6]
+ * srv_check_result: params[7]
+ * srv_check_health: params[8]
+ * srv_check_state: params[9]
+ * srv_agent_state: params[10]
+ * bk_f_forced_id: params[11]
+ * srv_f_forced_id: params[12]
+ */
+
+ /* validating srv_op_state */
+ p = NULL;
+ errno = 0;
+ srv_op_state = strtol(params[1], &p, 10);
+ if ((p == params[1]) || errno == EINVAL || errno == ERANGE ||
+ (srv_op_state != SRV_ST_STOPPED &&
+ srv_op_state != SRV_ST_STARTING &&
+ srv_op_state != SRV_ST_RUNNING &&
+ srv_op_state != SRV_ST_STOPPING)) {
+ chunk_appendf(msg, ", invalid srv_op_state value '%s'", params[1]);
+ }
+
+ /* validating srv_admin_state */
+ p = NULL;
+ errno = 0;
+ srv_admin_state = strtol(params[2], &p, 10);
+ if ((p == params[2]) || errno == EINVAL || errno == ERANGE ||
+ (srv_admin_state != 0 &&
+ srv_admin_state != SRV_ADMF_FMAINT &&
+ srv_admin_state != SRV_ADMF_IMAINT &&
+ srv_admin_state != SRV_ADMF_CMAINT &&
+ srv_admin_state != (SRV_ADMF_CMAINT | SRV_ADMF_FMAINT) &&
+ srv_admin_state != SRV_ADMF_FDRAIN &&
+ srv_admin_state != SRV_ADMF_IDRAIN)) {
+ chunk_appendf(msg, ", invalid srv_admin_state value '%s'", params[2]);
+ }
+
+ /* validating srv_uweight */
+ p = NULL;
+ errno = 0;
+ srv_uweight = strtol(params[3], &p, 10);
+ if ((p == params[3]) || errno == EINVAL || errno == ERANGE || (srv_uweight > SRV_UWGHT_MAX))
+ chunk_appendf(msg, ", invalid srv_uweight value '%s'", params[3]);
+
+ /* validating srv_iweight */
+ p = NULL;
+ errno = 0;
+ srv_iweight = strtol(params[4], &p, 10);
+ if ((p == params[4]) || errno == EINVAL || errno == ERANGE || (srv_iweight > SRV_UWGHT_MAX))
+ chunk_appendf(msg, ", invalid srv_iweight value '%s'", params[4]);
+
+ /* validating srv_last_time_change */
+ p = NULL;
+ errno = 0;
+ srv_last_time_change = strtol(params[5], &p, 10);
+ if ((p == params[5]) || errno == EINVAL || errno == ERANGE)
+ chunk_appendf(msg, ", invalid srv_last_time_change value '%s'", params[5]);
+
+ /* validating srv_check_status */
+ p = NULL;
+ errno = 0;
+ srv_check_status = strtol(params[6], &p, 10);
+ if (p == params[6] || errno == EINVAL || errno == ERANGE ||
+ (srv_check_status >= HCHK_STATUS_SIZE))
+ chunk_appendf(msg, ", invalid srv_check_status value '%s'", params[6]);
+
+ /* validating srv_check_result */
+ p = NULL;
+ errno = 0;
+ srv_check_result = strtol(params[7], &p, 10);
+ if ((p == params[7]) || errno == EINVAL || errno == ERANGE ||
+ (srv_check_result != CHK_RES_UNKNOWN &&
+ srv_check_result != CHK_RES_NEUTRAL &&
+ srv_check_result != CHK_RES_FAILED &&
+ srv_check_result != CHK_RES_PASSED &&
+ srv_check_result != CHK_RES_CONDPASS)) {
+ chunk_appendf(msg, ", invalid srv_check_result value '%s'", params[7]);
+ }
+
+ /* validating srv_check_health */
+ p = NULL;
+ errno = 0;
+ srv_check_health = strtol(params[8], &p, 10);
+ if (p == params[8] || errno == EINVAL || errno == ERANGE)
+ chunk_appendf(msg, ", invalid srv_check_health value '%s'", params[8]);
+
+ /* validating srv_check_state */
+ p = NULL;
+ errno = 0;
+ srv_check_state = strtol(params[9], &p, 10);
+ if (p == params[9] || errno == EINVAL || errno == ERANGE ||
+ (srv_check_state & ~(CHK_ST_INPROGRESS | CHK_ST_CONFIGURED | CHK_ST_ENABLED | CHK_ST_PAUSED | CHK_ST_AGENT)))
+ chunk_appendf(msg, ", invalid srv_check_state value '%s'", params[9]);
+
+ /* validating srv_agent_state */
+ p = NULL;
+ errno = 0;
+ srv_agent_state = strtol(params[10], &p, 10);
+ if (p == params[10] || errno == EINVAL || errno == ERANGE ||
+ (srv_agent_state & ~(CHK_ST_INPROGRESS | CHK_ST_CONFIGURED | CHK_ST_ENABLED | CHK_ST_PAUSED | CHK_ST_AGENT)))
+ chunk_appendf(msg, ", invalid srv_agent_state value '%s'", params[10]);
+
+ /* validating bk_f_forced_id */
+ p = NULL;
+ errno = 0;
+ bk_f_forced_id = strtol(params[11], &p, 10);
+ if (p == params[11] || errno == EINVAL || errno == ERANGE || !((bk_f_forced_id == 0) || (bk_f_forced_id == 1)))
+ chunk_appendf(msg, ", invalid bk_f_forced_id value '%s'", params[11]);
+
+ /* validating srv_f_forced_id */
+ p = NULL;
+ errno = 0;
+ srv_f_forced_id = strtol(params[12], &p, 10);
+ if (p == params[12] || errno == EINVAL || errno == ERANGE || !((srv_f_forced_id == 0) || (srv_f_forced_id == 1)))
+ chunk_appendf(msg, ", invalid srv_f_forced_id value '%s'", params[12]);
+
+
+ /* don't apply anything if one error has been detected */
+ if (msg->len)
+ goto out;
+
+ /* recover operational state and apply it to this server
+ * and all servers tracking this one */
+ switch (srv_op_state) {
+ case SRV_ST_STOPPED:
+ srv->check.health = 0;
+ srv_set_stopped(srv, "changed from server-state after a reload");
+ break;
+ case SRV_ST_STARTING:
+ srv->state = srv_op_state;
+ break;
+ case SRV_ST_STOPPING:
+ srv->check.health = srv->check.rise + srv->check.fall - 1;
+ srv_set_stopping(srv, "changed from server-state after a reload");
+ break;
+ case SRV_ST_RUNNING:
+ srv->check.health = srv->check.rise + srv->check.fall - 1;
+ srv_set_running(srv, "");
+ break;
+ }
+
+ /* When applying server state, the following rules apply:
+ * - in case of a configuration change, we apply the setting from the new
+ * configuration, regardless of old running state
+ * - if no configuration change, we apply old running state only if old running
+ * state is different from new configuration state
+ */
+ /* configuration has changed */
+ if ((srv_admin_state & SRV_ADMF_CMAINT) != (srv->admin & SRV_ADMF_CMAINT)) {
+ if (srv->admin & SRV_ADMF_CMAINT)
+ srv_adm_set_maint(srv);
+ else
+ srv_adm_set_ready(srv);
+ }
+ /* configuration is the same, let's compare the old running state and the new conf state */
+ else {
+ if (srv_admin_state & SRV_ADMF_FMAINT && !(srv->admin & SRV_ADMF_CMAINT))
+ srv_adm_set_maint(srv);
+ else if (!(srv_admin_state & SRV_ADMF_FMAINT) && (srv->admin & SRV_ADMF_CMAINT))
+ srv_adm_set_ready(srv);
+ }
+ /* apply drain mode if server is currently enabled */
+ if (!(srv->admin & SRV_ADMF_FMAINT) && (srv_admin_state & SRV_ADMF_FDRAIN)) {
+ /* The SRV_ADMF_FDRAIN flag is inherited when srv->iweight is 0
+ * (srv->iweight is the weight set up in configuration)
+ * so we don't want to apply it when srv_iweight is 0 and
+ * srv->iweight is greater than 0. The purpose is to give the
+ * admin a chance to re-enable this server from the configuration
+ * file by setting a new weight > 0.
+ */
+ if ((srv_iweight == 0) && (srv->iweight > 0)) {
+ srv_adm_set_drain(srv);
+ }
+ }
+
+ srv->last_change = date.tv_sec - srv_last_time_change;
+ srv->check.status = srv_check_status;
+ srv->check.result = srv_check_result;
+ srv->check.health = srv_check_health;
+
+ /* The only case we want to apply is the removal of the ENABLED flag, which may
+ * have been done by the "disable health" command over the stats socket
+ */
+ if ((srv->check.state & CHK_ST_CONFIGURED) &&
+ (srv_check_state & CHK_ST_CONFIGURED) &&
+ !(srv_check_state & CHK_ST_ENABLED))
+ srv->check.state &= ~CHK_ST_ENABLED;
+
+ /* The only case we want to apply is the removal of the ENABLED flag, which may
+ * have been done by the "disable agent" command over the stats socket
+ */
+ if ((srv->agent.state & CHK_ST_CONFIGURED) &&
+ (srv_agent_state & CHK_ST_CONFIGURED) &&
+ !(srv_agent_state & CHK_ST_ENABLED))
+ srv->agent.state &= ~CHK_ST_ENABLED;
+
+ /* We want to apply the previous 'running' weight (srv_uweight) only if there
+ * was no change in the configuration: both the previous and the new iweight
+ * are equal.
+ *
+ * It means that a configuration file change takes precedence over a unix
+ * socket change for a server's weight.
+ *
+ * By default, HAProxy applies the following weight when parsing the
+ * configuration: srv->uweight = srv->iweight
+ */
+ if (srv_iweight == srv->iweight) {
+ srv->uweight = srv_uweight;
+ }
+ server_recalc_eweight(srv);
+
+ /* update server IP only if DNS resolution is used on the server */
+ if (srv->resolution) {
+ memset(&addr, 0, sizeof(struct sockaddr_storage));
+ if (str2ip2(params[0], &addr, 0))
+ memcpy(&srv->addr, &addr, sizeof(struct sockaddr_storage));
+ else
+ chunk_appendf(msg, ", can't parse IP: %s", params[0]);
+ }
+ break;
+ default:
+ chunk_appendf(msg, ", version '%d' not supported", version);
+ }
+
+ out:
+ if (msg->len)
+ Warning("server-state application failed for server '%s/%s'%s",
+ srv->proxy->id, srv->id, msg->str);
+}
+
+/* This function parses all the proxies and only takes care of the backends (since we're looking for servers).
+ * For each proxy, it does the following:
+ *  - opens its server state file (either the global one or its local one)
+ *  - reads the whole file, line by line
+ *  - analyses each line to check if it matches the current backend:
+ *    - backend name matches
+ *    - backend id matches if id is forced and name doesn't match
+ *  - if the server pointed to by the line is found, then its state is applied
+ *
+ * If the running backend uuid or id differs from the state file, then HAProxy reports
+ * a warning.
+ */
+void apply_server_state(void)
+{
+ char *cur, *end;
+ char mybuf[SRV_STATE_LINE_MAXLEN];
+ int mybuflen;
+ char *params[SRV_STATE_FILE_MAX_FIELDS];
+ char *srv_params[SRV_STATE_FILE_MAX_FIELDS];
+ int arg, srv_arg, version, diff;
+ FILE *f;
+ char *filepath;
+ char globalfilepath[MAXPATHLEN + 1];
+ char localfilepath[MAXPATHLEN + 1];
+ int len, fileopenerr, globalfilepathlen, localfilepathlen;
+ extern struct proxy *proxy;
+ struct proxy *curproxy, *bk;
+ struct server *srv;
+
+ globalfilepathlen = 0;
+ /* create the globalfilepath variable */
+ if (global.server_state_file) {
+ /* absolute path or no base directory provided */
+ if ((global.server_state_file[0] == '/') || (!global.server_state_base)) {
+ len = strlen(global.server_state_file);
+ if (len > MAXPATHLEN) {
+ globalfilepathlen = 0;
+ goto globalfileerror;
+ }
+ memcpy(globalfilepath, global.server_state_file, len);
+ globalfilepath[len] = '\0';
+ globalfilepathlen = len;
+ }
+ else if (global.server_state_base) {
+ len = strlen(global.server_state_base);
+ globalfilepathlen += len;
+
+ if (globalfilepathlen > MAXPATHLEN) {
+ globalfilepathlen = 0;
+ goto globalfileerror;
+ }
+ strncpy(globalfilepath, global.server_state_base, len);
+ globalfilepath[globalfilepathlen] = 0;
+
+ /* append a slash if needed */
+ if (!globalfilepathlen || globalfilepath[globalfilepathlen - 1] != '/') {
+ if (globalfilepathlen + 1 > MAXPATHLEN) {
+ globalfilepathlen = 0;
+ goto globalfileerror;
+ }
+ globalfilepath[globalfilepathlen++] = '/';
+ }
+
+ len = strlen(global.server_state_file);
+ if (globalfilepathlen + len > MAXPATHLEN) {
+ globalfilepathlen = 0;
+ goto globalfileerror;
+ }
+ memcpy(globalfilepath + globalfilepathlen, global.server_state_file, len);
+ globalfilepathlen += len;
+ globalfilepath[globalfilepathlen++] = 0;
+ }
+ }
+ globalfileerror:
+ if (globalfilepathlen == 0)
+ globalfilepath[0] = '\0';
+
+ /* read servers state from local file */
+ for (curproxy = proxy; curproxy != NULL; curproxy = curproxy->next) {
+ /* servers are only in backends */
+ if (!(curproxy->cap & PR_CAP_BE))
+ continue;
+ fileopenerr = 0;
+ filepath = NULL;
+
+ /* search server state file path and name */
+ switch (curproxy->load_server_state_from_file) {
+ /* read servers state from global file */
+ case PR_SRV_STATE_FILE_GLOBAL:
+ /* there was an error while generating global server state file path */
+ if (globalfilepathlen == 0)
+ continue;
+ filepath = globalfilepath;
+ fileopenerr = 1;
+ break;
+ /* this backend has its own file */
+ case PR_SRV_STATE_FILE_LOCAL:
+ localfilepathlen = 0;
+ localfilepath[0] = '\0';
+ len = 0;
+ /* create the localfilepath variable */
+ /* absolute path or no base directory provided */
+ if ((curproxy->server_state_file_name[0] == '/') || (!global.server_state_base)) {
+ len = strlen(curproxy->server_state_file_name);
+ if (len > MAXPATHLEN) {
+ localfilepathlen = 0;
+ goto localfileerror;
+ }
+ memcpy(localfilepath, curproxy->server_state_file_name, len);
+ localfilepath[len] = '\0';
+ localfilepathlen = len;
+ }
+ else if (global.server_state_base) {
+ len = strlen(global.server_state_base);
+ localfilepathlen += len;
+
+ if (localfilepathlen > MAXPATHLEN) {
+ localfilepathlen = 0;
+ goto localfileerror;
+ }
+ strncpy(localfilepath, global.server_state_base, len);
+ localfilepath[localfilepathlen] = 0;
+
+ /* append a slash if needed */
+ if (!localfilepathlen || localfilepath[localfilepathlen - 1] != '/') {
+ if (localfilepathlen + 1 > MAXPATHLEN) {
+ localfilepathlen = 0;
+ goto localfileerror;
+ }
+ localfilepath[localfilepathlen++] = '/';
+ }
+
+ len = strlen(curproxy->server_state_file_name);
+ if (localfilepathlen + len > MAXPATHLEN) {
+ localfilepathlen = 0;
+ goto localfileerror;
+ }
+ memcpy(localfilepath + localfilepathlen, curproxy->server_state_file_name, len);
+ localfilepathlen += len;
+ localfilepath[localfilepathlen++] = 0;
+ }
+ filepath = localfilepath;
+ localfileerror:
+ if (localfilepathlen == 0)
+ localfilepath[0] = '\0';
+
+ break;
+ case PR_SRV_STATE_FILE_NONE:
+ default:
+ continue;
+ }
+
+ /* preload global state file */
+ errno = 0;
+ f = fopen(filepath, "r");
+ if (errno && fileopenerr)
+ Warning("Can't open server state file '%s': %s\n", filepath, strerror(errno));
+ if (!f)
+ continue;
+
+ mybuf[0] = '\0';
+ mybuflen = 0;
+ version = 0;
+
+ /* the first line of the file must contain the version of the export */
+ if (fgets(mybuf, SRV_STATE_LINE_MAXLEN, f) == NULL) {
+ Warning("Can't read first line of the server state file '%s'\n", filepath);
+ goto fileclose;
+ }
+
+ cur = mybuf;
+ version = atoi(cur);
+ if ((version < SRV_STATE_FILE_VERSION_MIN) ||
+ (version > SRV_STATE_FILE_VERSION_MAX))
+ goto fileclose;
+
+ while (fgets(mybuf, SRV_STATE_LINE_MAXLEN, f)) {
+ int bk_f_forced_id = 0;
+ int check_id = 0;
+ int check_name = 0;
+
+ mybuflen = strlen(mybuf);
+ cur = mybuf;
+ end = cur + mybuflen;
+
+ bk = NULL;
+ srv = NULL;
+
+ /* we need at least one character */
+ if (mybuflen == 0)
+ continue;
+
+ /* ignore blank characters at the beginning of the line */
+ while (isspace(*cur))
+ ++cur;
+
+ if (cur == end)
+ continue;
+
+ /* ignore comment line */
+ if (*cur == '#')
+ continue;
+
+ /* we're now ready to move the line into *srv_params[] */
+ params[0] = cur;
+ arg = 1;
+ srv_arg = 0;
+ while (*cur && arg < SRV_STATE_FILE_MAX_FIELDS) {
+ if (isspace(*cur)) {
+ *cur = '\0';
+ ++cur;
+ while (isspace(*cur))
+ ++cur;
+ switch (version) {
+ case 1:
+ /*
+ * srv_addr: params[4] => srv_params[0]
+ * srv_op_state: params[5] => srv_params[1]
+ * srv_admin_state: params[6] => srv_params[2]
+ * srv_uweight: params[7] => srv_params[3]
+ * srv_iweight: params[8] => srv_params[4]
+ * srv_last_time_change: params[9] => srv_params[5]
+ * srv_check_status: params[10] => srv_params[6]
+ * srv_check_result: params[11] => srv_params[7]
+ * srv_check_health: params[12] => srv_params[8]
+ * srv_check_state: params[13] => srv_params[9]
+ * srv_agent_state: params[14] => srv_params[10]
+ * bk_f_forced_id: params[15] => srv_params[11]
+ * srv_f_forced_id: params[16] => srv_params[12]
+ */
+ if (arg >= 4) {
+ srv_params[srv_arg] = cur;
+ ++srv_arg;
+ }
+ break;
+ }
+
+ params[arg] = cur;
+ ++arg;
+ }
+ else {
+ ++cur;
+ }
+ }
+
+ /* if the line is incomplete, ignore it;
+ * otherwise, update the useful flags */
+ switch (version) {
+ case 1:
+ if (arg < SRV_STATE_FILE_NB_FIELDS_VERSION_1)
+ continue;
+ bk_f_forced_id = (atoi(params[15]) & PR_O_FORCED_ID);
+ check_id = (atoi(params[0]) == curproxy->uuid);
+ check_name = (strcmp(curproxy->id, params[1]) == 0);
+ break;
+ }
+
+ diff = 0;
+ bk = curproxy;
+
+ /* if backend can't be found, let's continue */
+ if (!check_id && !check_name)
+ continue;
+ else if (!check_id && check_name) {
+ Warning("backend ID mismatch: from server state file: '%s', from running config '%d'\n", params[0], bk->uuid);
+ send_log(bk, LOG_NOTICE, "backend ID mismatch: from server state file: '%s', from running config '%d'\n", params[0], bk->uuid);
+ }
+ else if (check_id && !check_name) {
+ Warning("backend name mismatch: from server state file: '%s', from running config '%s'\n", params[1], bk->id);
+ send_log(bk, LOG_NOTICE, "backend name mismatch: from server state file: '%s', from running config '%s'\n", params[1], bk->id);
+ /* if the name doesn't match, we still want to update curproxy if the backend id
+ * was forced in the previous configuration */
+ if (!bk_f_forced_id)
+ continue;
+ }
+
+ /* look for the server by its id: param[2] */
+ /* else look for the server by its name: param[3] */
+ diff = 0;
+ srv = server_find_best_match(bk, params[3], atoi(params[2]), &diff);
+
+ if (!srv) {
+ /* if no server found, then warning and continue with next line */
+ Warning("can't find server '%s' with id '%s' in backend with id '%s' or name '%s'\n",
+ params[3], params[2], params[0], params[1]);
+ send_log(bk, LOG_NOTICE, "can't find server '%s' with id '%s' in backend with id '%s' or name '%s'\n",
+ params[3], params[2], params[0], params[1]);
+ continue;
+ }
+ else if (diff & PR_FBM_MISMATCH_ID) {
+ Warning("In backend '%s' (id: '%d'): server ID mismatch: from server state file: '%s', from running config %d\n", bk->id, bk->uuid, params[2], srv->puid);
+ send_log(bk, LOG_NOTICE, "In backend '%s' (id: %d): server ID mismatch: from server state file: '%s', from running config %d\n", bk->id, bk->uuid, params[2], srv->puid);
+ }
+ else if (diff & PR_FBM_MISMATCH_NAME) {
+ Warning("In backend '%s' (id: %d): server name mismatch: from server state file: '%s', from running config '%s'\n", bk->id, bk->uuid, params[3], srv->id);
+ send_log(bk, LOG_NOTICE, "In backend '%s' (id: %d): server name mismatch: from server state file: '%s', from running config '%s'\n", bk->id, bk->uuid, params[3], srv->id);
+ }
+
+ /* now we can proceed with server's state update */
+ srv_update_state(srv, version, srv_params);
+ }
+fileclose:
+ fclose(f);
+ }
+}
+
+/*
+ * Update a server's current IP address.
+ * ip is a pointer to the new IP address, whose address family is ip_sin_family.
+ * ip is in network format.
+ * updater is a string containing information about the requester of the update.
+ * updater is used if not NULL.
+ *
+ * A log line and a stderr warning message are generated based on the server's backend options.
+ */
+int update_server_addr(struct server *s, void *ip, int ip_sin_family, char *updater)
+{
+ /* generates a log line and a warning on stderr */
+ if (1) {
+ /* book enough space for both IPv4 and IPv6 */
+ char oldip[INET6_ADDRSTRLEN];
+ char newip[INET6_ADDRSTRLEN];
+
+ memset(oldip, '\0', INET6_ADDRSTRLEN);
+ memset(newip, '\0', INET6_ADDRSTRLEN);
+
+ /* copy old IP address in a string */
+ switch (s->addr.ss_family) {
+ case AF_INET:
+ inet_ntop(s->addr.ss_family, &((struct sockaddr_in *)&s->addr)->sin_addr, oldip, INET_ADDRSTRLEN);
+ break;
+ case AF_INET6:
+ inet_ntop(s->addr.ss_family, &((struct sockaddr_in6 *)&s->addr)->sin6_addr, oldip, INET6_ADDRSTRLEN);
+ break;
+ };
+
+ /* copy new IP address in a string */
+ switch (ip_sin_family) {
+ case AF_INET:
+ inet_ntop(ip_sin_family, ip, newip, INET_ADDRSTRLEN);
+ break;
+ case AF_INET6:
+ inet_ntop(ip_sin_family, ip, newip, INET6_ADDRSTRLEN);
+ break;
+ };
+
+ /* save log line into a buffer */
+ chunk_printf(&trash, "%s/%s changed its IP from %s to %s by %s",
+ s->proxy->id, s->id, oldip, newip, updater);
+
+ /* write the buffer on stderr */
+ Warning("%s.\n", trash.str);
+
+ /* send a log */
+ send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
+ }
+
+ /* save the new IP family */
+ s->addr.ss_family = ip_sin_family;
+ /* save the new IP address */
+ switch (ip_sin_family) {
+ case AF_INET:
+ ((struct sockaddr_in *)&s->addr)->sin_addr.s_addr = *(uint32_t *)ip;
+ break;
+ case AF_INET6:
+ memcpy(((struct sockaddr_in6 *)&s->addr)->sin6_addr.s6_addr, ip, 16);
+ break;
+ };
+
+ return 0;
+}
+
+/*
+ * update server status based on result of name resolution
+ * returns:
+ * 0 if server status is updated
+ * 1 if server status has not changed
+ */
+int snr_update_srv_status(struct server *s)
+{
+ struct dns_resolution *resolution = s->resolution;
+
+ switch (resolution->status) {
+ case RSLV_STATUS_NONE:
+ /* status when HAProxy has just (re)started */
+ trigger_resolution(s);
+ break;
+
+ default:
+ break;
+ }
+
+ return 1;
+}
+
+/*
+ * Server Name Resolution valid response callback
+ * It expects:
+ * - <nameserver>: the name server which answered the valid response
+ * - <response>: buffer containing a valid DNS response
+ * - <response_len>: size of <response>
+ * It performs the following actions:
+ * - ignore response if current ip found and server family not met
+ * - update with first new ip found if family is met and current IP is not found
+ * returns:
+ * 0 on error
+ * 1 when no error or safe ignore
+ */
+int snr_resolution_cb(struct dns_resolution *resolution, struct dns_nameserver *nameserver, unsigned char *response, int response_len)
+{
+ struct server *s;
+ void *serverip, *firstip;
+ short server_sin_family, firstip_sin_family;
+ unsigned char *response_end;
+ int ret;
+ struct chunk *chk = get_trash_chunk();
+
+ /* initializing variables */
+ response_end = response + response_len; /* pointer to mark the end of the response */
+ firstip = NULL; /* pointer to the first valid response found */
+ /* it will be used as the new IP if a change is required */
+ firstip_sin_family = AF_UNSPEC;
+ serverip = NULL; /* current server IP address */
+
+ /* shortcut to the server whose name is being resolved */
+ s = (struct server *)resolution->requester;
+
+ /* initializing server IP pointer */
+ server_sin_family = s->addr.ss_family;
+ switch (server_sin_family) {
+ case AF_INET:
+ serverip = &((struct sockaddr_in *)&s->addr)->sin_addr.s_addr;
+ break;
+
+ case AF_INET6:
+ serverip = &((struct sockaddr_in6 *)&s->addr)->sin6_addr.s6_addr;
+ break;
+
+ default:
+ goto invalid;
+ }
+
+ ret = dns_get_ip_from_response(response, response_end, resolution->hostname_dn, resolution->hostname_dn_len,
+ serverip, server_sin_family, resolution->resolver_family_priority, &firstip,
+ &firstip_sin_family);
+
+ switch (ret) {
+ case DNS_UPD_NO:
+ if (resolution->status != RSLV_STATUS_VALID) {
+ resolution->status = RSLV_STATUS_VALID;
+ resolution->last_status_change = now_ms;
+ }
+ goto stop_resolution;
+
+ case DNS_UPD_SRVIP_NOT_FOUND:
+ goto save_ip;
+
+ case DNS_UPD_CNAME:
+ if (resolution->status != RSLV_STATUS_VALID) {
+ resolution->status = RSLV_STATUS_VALID;
+ resolution->last_status_change = now_ms;
+ }
+ goto invalid;
+
+ case DNS_UPD_NO_IP_FOUND:
+ if (resolution->status != RSLV_STATUS_OTHER) {
+ resolution->status = RSLV_STATUS_OTHER;
+ resolution->last_status_change = now_ms;
+ }
+ goto stop_resolution;
+
+ case DNS_UPD_NAME_ERROR:
+ /* if this is not the last expected response, we ignore it */
+ if (resolution->nb_responses < nameserver->resolvers->count_nameservers)
+ return 0;
+ /* update resolution status to OTHER error type */
+ if (resolution->status != RSLV_STATUS_OTHER) {
+ resolution->status = RSLV_STATUS_OTHER;
+ resolution->last_status_change = now_ms;
+ }
+ goto stop_resolution;
+
+ default:
+ goto invalid;
+
+ }
+
+ save_ip:
+ nameserver->counters.update += 1;
+ if (resolution->status != RSLV_STATUS_VALID) {
+ resolution->status = RSLV_STATUS_VALID;
+ resolution->last_status_change = now_ms;
+ }
+
+ /* save the first ip we found */
+ chunk_printf(chk, "%s/%s", nameserver->resolvers->id, nameserver->id);
+ update_server_addr(s, firstip, firstip_sin_family, (char *)chk->str);
+
+ stop_resolution:
+ /* update last resolution date and time */
+ resolution->last_resolution = now_ms;
+ /* reset current status flag */
+ resolution->step = RSLV_STEP_NONE;
+ /* reset values */
+ dns_reset_resolution(resolution);
+
+ dns_update_resolvers_timeout(nameserver->resolvers);
+
+ snr_update_srv_status(s);
+ return 0;
+
+ invalid:
+ nameserver->counters.invalid += 1;
+ if (resolution->nb_responses >= nameserver->resolvers->count_nameservers)
+ goto stop_resolution;
+
+ snr_update_srv_status(s);
+ return 0;
+}
+
+/*
+ * Server Name Resolution error management callback
+ * returns:
+ * 0 on error
+ * 1 when there is no error, or the error can safely be ignored
+ */
+int snr_resolution_error_cb(struct dns_resolution *resolution, int error_code)
+{
+ struct server *s;
+ struct dns_resolvers *resolvers;
+ int res_preferred_afinet, res_preferred_afinet6;
+
+ /* shortcut to the server whose name is being resolved */
+ s = (struct server *)resolution->requester;
+ resolvers = resolution->resolvers;
+
+ /* can be ignored if this is not the last response */
+ if ((error_code != DNS_RESP_TIMEOUT) && (resolution->nb_responses < resolvers->count_nameservers)) {
+ return 1;
+ }
+
+ switch (error_code) {
+ case DNS_RESP_INVALID:
+ case DNS_RESP_WRONG_NAME:
+ if (resolution->status != RSLV_STATUS_INVALID) {
+ resolution->status = RSLV_STATUS_INVALID;
+ resolution->last_status_change = now_ms;
+ }
+ break;
+
+ case DNS_RESP_ANCOUNT_ZERO:
+ case DNS_RESP_TRUNCATED:
+ case DNS_RESP_ERROR:
+ case DNS_RESP_NO_EXPECTED_RECORD:
+ res_preferred_afinet = resolution->resolver_family_priority == AF_INET && resolution->query_type == DNS_RTYPE_A;
+ res_preferred_afinet6 = resolution->resolver_family_priority == AF_INET6 && resolution->query_type == DNS_RTYPE_AAAA;
+
+ if ((res_preferred_afinet || res_preferred_afinet6)
+ || (resolution->try > 0)) {
+ /* let's change the query type */
+ if (res_preferred_afinet6) {
+ /* fallback from AAAA to A */
+ resolution->query_type = DNS_RTYPE_A;
+ }
+ else if (res_preferred_afinet) {
+ /* fallback from A to AAAA */
+ resolution->query_type = DNS_RTYPE_AAAA;
+ }
+ else {
+ resolution->try -= 1;
+ if (resolution->resolver_family_priority == AF_INET) {
+ resolution->query_type = DNS_RTYPE_A;
+ } else {
+ resolution->query_type = DNS_RTYPE_AAAA;
+ }
+ }
+
+ dns_send_query(resolution);
+
+ /*
+ * move the resolution to the last element of the FIFO queue
+ * and update timeout wakeup based on the new first entry
+ */
+ if (dns_check_resolution_queue(resolvers) > 1) {
+ /* second resolution becomes first one */
+ LIST_DEL(&resolution->list);
+ /* ex first resolution goes to the end of the queue */
+ LIST_ADDQ(&resolvers->curr_resolution, &resolution->list);
+ }
+ dns_update_resolvers_timeout(resolvers);
+ goto leave;
+ }
+ else {
+ if (resolution->status != RSLV_STATUS_OTHER) {
+ resolution->status = RSLV_STATUS_OTHER;
+ resolution->last_status_change = now_ms;
+ }
+ }
+ break;
+
+ case DNS_RESP_NX_DOMAIN:
+ if (resolution->status != RSLV_STATUS_NX) {
+ resolution->status = RSLV_STATUS_NX;
+ resolution->last_status_change = now_ms;
+ }
+ break;
+
+ case DNS_RESP_REFUSED:
+ if (resolution->status != RSLV_STATUS_REFUSED) {
+ resolution->status = RSLV_STATUS_REFUSED;
+ resolution->last_status_change = now_ms;
+ }
+ break;
+
+ case DNS_RESP_CNAME_ERROR:
+ break;
+
+ case DNS_RESP_TIMEOUT:
+ if (resolution->status != RSLV_STATUS_TIMEOUT) {
+ resolution->status = RSLV_STATUS_TIMEOUT;
+ resolution->last_status_change = now_ms;
+ }
+ break;
+ }
+
+ /* update last resolution date and time */
+ resolution->last_resolution = now_ms;
+ /* reset current status flag */
+ resolution->step = RSLV_STEP_NONE;
+ /* reset values */
+ dns_reset_resolution(resolution);
+
+ LIST_DEL(&resolution->list);
+ dns_update_resolvers_timeout(resolvers);
+
+ leave:
+ snr_update_srv_status(s);
+ return 1;
+}
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Session management functions.
+ *
+ * Copyright 2000-2015 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <common/config.h>
+#include <common/buffer.h>
+#include <common/debug.h>
+#include <common/memory.h>
+
+#include <types/global.h>
+#include <types/session.h>
+
+#include <proto/connection.h>
+#include <proto/listener.h>
+#include <proto/log.h>
+#include <proto/proto_http.h>
+#include <proto/proto_tcp.h>
+#include <proto/proxy.h>
+#include <proto/raw_sock.h>
+#include <proto/session.h>
+#include <proto/stream.h>
+#include <proto/vars.h>
+
+struct pool_head *pool2_session;
+
+static int conn_complete_session(struct connection *conn);
+static int conn_update_session(struct connection *conn);
+static struct task *session_expire_embryonic(struct task *t);
+
+/* data layer callbacks for an embryonic stream */
+struct data_cb sess_conn_cb = {
+ .recv = NULL,
+ .send = NULL,
+ .wake = conn_update_session,
+ .init = conn_complete_session,
+};
+
+/* Create a new session and assign it to frontend <fe>, listener <li>,
+ * origin <origin>, set the current date and clear the stick counters pointers.
+ * Returns the session upon success or NULL. The session may be released using
+ * session_free().
+ */
+struct session *session_new(struct proxy *fe, struct listener *li, enum obj_type *origin)
+{
+ struct session *sess;
+
+ sess = pool_alloc2(pool2_session);
+ if (sess) {
+ sess->listener = li;
+ sess->fe = fe;
+ sess->origin = origin;
+ sess->accept_date = date; /* user-visible date for logging */
+ sess->tv_accept = now; /* corrected date for internal use */
+ memset(sess->stkctr, 0, sizeof(sess->stkctr));
+ vars_init(&sess->vars, SCOPE_SESS);
+ }
+ return sess;
+}
+
+void session_free(struct session *sess)
+{
+ session_store_counters(sess);
+ vars_prune_per_sess(&sess->vars);
+ pool_free2(pool2_session, sess);
+}
+
+/* perform minimal initializations, report 0 in case of error, 1 if OK. */
+int init_session()
+{
+ pool2_session = create_pool("session", sizeof(struct session), MEM_F_SHARED);
+ return pool2_session != NULL;
+}
+
+/* count a new session to keep frontend, listener and track stats up to date */
+static void session_count_new(struct session *sess)
+{
+ struct stkctr *stkctr;
+ void *ptr;
+ int i;
+
+ proxy_inc_fe_sess_ctr(sess->listener, sess->fe);
+
+ for (i = 0; i < MAX_SESS_STKCTR; i++) {
+ stkctr = &sess->stkctr[i];
+ if (!stkctr_entry(stkctr))
+ continue;
+
+ ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_SESS_CNT);
+ if (ptr)
+ stktable_data_cast(ptr, sess_cnt)++;
+
+ ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_SESS_RATE);
+ if (ptr)
+ update_freq_ctr_period(&stktable_data_cast(ptr, sess_rate),
+ stkctr->table->data_arg[STKTABLE_DT_SESS_RATE].u, 1);
+ }
+}
+
+/* This function is called from the protocol layer accept() in order to
+ * instantiate a new session on behalf of a given listener and frontend. It
+ * returns a positive value upon success, 0 if the connection can be ignored,
+ * or a negative value upon critical failure. The accepted file descriptor is
+ * closed if we return <= 0. If no handshake is needed, it immediately tries
+ * to instantiate a new stream.
+ */
+int session_accept_fd(struct listener *l, int cfd, struct sockaddr_storage *addr)
+{
+ struct connection *cli_conn;
+ struct proxy *p = l->frontend;
+ struct session *sess;
+ struct stream *strm;
+ struct task *t;
+ int ret;
+
+ ret = -1; /* assume unrecoverable error by default */
+
+ if (unlikely((cli_conn = conn_new()) == NULL))
+ goto out_close;
+
+ conn_prepare(cli_conn, l->proto, l->xprt);
+
+ cli_conn->t.sock.fd = cfd;
+ cli_conn->addr.from = *addr;
+ cli_conn->flags |= CO_FL_ADDR_FROM_SET;
+ cli_conn->target = &l->obj_type;
+ cli_conn->proxy_netns = l->netns;
+
+ conn_ctrl_init(cli_conn);
+
+ /* wait for a PROXY protocol header */
+ if (l->options & LI_O_ACC_PROXY) {
+ cli_conn->flags |= CO_FL_ACCEPT_PROXY;
+ conn_sock_want_recv(cli_conn);
+ }
+
+ conn_data_want_recv(cli_conn);
+ if (conn_xprt_init(cli_conn) < 0)
+ goto out_free_conn;
+
+ sess = session_new(p, l, &cli_conn->obj_type);
+ if (!sess)
+ goto out_free_conn;
+
+ p->feconn++;
+ /* This session was accepted, count it now */
+ if (p->feconn > p->fe_counters.conn_max)
+ p->fe_counters.conn_max = p->feconn;
+
+ proxy_inc_fe_conn_ctr(l, p);
+
+ /* now evaluate the tcp-request layer4 rules. We only need a session
+ * and no stream for these rules.
+ */
+ if ((l->options & LI_O_TCP_RULES) && !tcp_exec_req_rules(sess)) {
+ /* let's do a no-linger now to close with a single RST. */
+ setsockopt(cfd, SOL_SOCKET, SO_LINGER, (struct linger *) &nolinger, sizeof(struct linger));
+ ret = 0; /* successful termination */
+ goto out_free_sess;
+ }
+
+ /* monitor-net and health mode are processed immediately after TCP
+ * connection rules. This way it's possible to block them, but they
+ * never use the lower data layers, they send directly over the socket,
+ * as they were designed for. We first flush the socket receive buffer
+ * in order to avoid emission of an RST by the system. We ignore any
+ * error.
+ */
+ if (unlikely((p->mode == PR_MODE_HEALTH) ||
+ ((l->options & LI_O_CHK_MONNET) &&
+ addr->ss_family == AF_INET &&
+ (((struct sockaddr_in *)addr)->sin_addr.s_addr & p->mon_mask.s_addr) == p->mon_net.s_addr))) {
+ /* we have 4 possibilities here :
+ * - HTTP mode, from monitoring address => send "HTTP/1.0 200 OK"
+ * - HEALTH mode with HTTP check => send "HTTP/1.0 200 OK"
+ * - HEALTH mode without HTTP check => just send "OK"
+ * - TCP mode from monitoring address => just close
+ */
+ if (l->proto->drain)
+ l->proto->drain(cfd);
+ if (p->mode == PR_MODE_HTTP ||
+ (p->mode == PR_MODE_HEALTH && (p->options2 & PR_O2_CHK_ANY) == PR_O2_HTTP_CHK))
+ send(cfd, "HTTP/1.0 200 OK\r\n\r\n", 19, MSG_DONTWAIT|MSG_NOSIGNAL|MSG_MORE);
+ else if (p->mode == PR_MODE_HEALTH)
+ send(cfd, "OK\n", 3, MSG_DONTWAIT|MSG_NOSIGNAL|MSG_MORE);
+ ret = 0;
+ goto out_free_sess;
+ }
+
+ /* Adjust some socket options */
+ if (l->addr.ss_family == AF_INET || l->addr.ss_family == AF_INET6) {
+ setsockopt(cfd, IPPROTO_TCP, TCP_NODELAY, (char *) &one, sizeof(one));
+
+ if (p->options & PR_O_TCP_CLI_KA)
+ setsockopt(cfd, SOL_SOCKET, SO_KEEPALIVE, (char *) &one, sizeof(one));
+
+ if (p->options & PR_O_TCP_NOLING)
+ fdtab[cfd].linger_risk = 1;
+
+#if defined(TCP_MAXSEG)
+ if (l->maxseg < 0) {
+ /* we just want to reduce the current MSS by that value */
+ int mss;
+ socklen_t mss_len = sizeof(mss);
+ if (getsockopt(cfd, IPPROTO_TCP, TCP_MAXSEG, &mss, &mss_len) == 0) {
+ mss += l->maxseg; /* remember, it's < 0 */
+ setsockopt(cfd, IPPROTO_TCP, TCP_MAXSEG, &mss, sizeof(mss));
+ }
+ }
+#endif
+ }
+
+ if (global.tune.client_sndbuf)
+ setsockopt(cfd, SOL_SOCKET, SO_SNDBUF, &global.tune.client_sndbuf, sizeof(global.tune.client_sndbuf));
+
+ if (global.tune.client_rcvbuf)
+ setsockopt(cfd, SOL_SOCKET, SO_RCVBUF, &global.tune.client_rcvbuf, sizeof(global.tune.client_rcvbuf));
+
+ if (unlikely((t = task_new()) == NULL))
+ goto out_free_sess;
+
+ t->context = sess;
+ t->nice = l->nice;
+
+ /* OK, now either we have a pending handshake to execute, in which
+ * case we must return to the I/O layer, or we can proceed with the
+ * end of the stream initialization. In case of handshake, we also
+ * set the I/O timeout to the frontend's client timeout.
+ *
+ * At this point we set the relation between sess/task/conn this way :
+ *
+ * orig -- sess <-- context
+ * | |
+ * v |
+ * conn -- owner ---> task
+ */
+ if (cli_conn->flags & CO_FL_HANDSHAKE) {
+ conn_attach(cli_conn, t, &sess_conn_cb);
+ t->process = session_expire_embryonic;
+ t->expire = tick_add_ifset(now_ms, p->timeout.client);
+ task_queue(t);
+ cli_conn->flags |= CO_FL_INIT_DATA | CO_FL_WAKE_DATA;
+ return 1;
+ }
+
+ /* OK let's complete stream initialization since there is no handshake */
+ cli_conn->flags |= CO_FL_CONNECTED;
+
+ /* we want the connection handler to notify the stream interface about updates. */
+ cli_conn->flags |= CO_FL_WAKE_DATA;
+
+ /* if logs require transport layer information, note it on the connection */
+ if (sess->fe->to_log & LW_XPRT)
+ cli_conn->flags |= CO_FL_XPRT_TRACKED;
+
+ session_count_new(sess);
+ strm = stream_new(sess, t, &cli_conn->obj_type);
+ if (!strm)
+ goto out_free_task;
+
+ strm->target = sess->listener->default_target;
+ strm->req.analysers = sess->listener->analysers;
+ return 1;
+
+ out_free_task:
+ task_free(t);
+ out_free_sess:
+ p->feconn--;
+ session_free(sess);
+ out_free_conn:
+ cli_conn->flags &= ~CO_FL_XPRT_TRACKED;
+ conn_xprt_close(cli_conn);
+ conn_free(cli_conn);
+ out_close:
+ if (ret < 0 && l->xprt == &raw_sock && p->mode == PR_MODE_HTTP) {
+ /* critical error, no more memory, try to emit a 500 response */
+ struct chunk *err_msg = &p->errmsg[HTTP_ERR_500];
+ if (!err_msg->str)
+ err_msg = &http_err_chunks[HTTP_ERR_500];
+ send(cfd, err_msg->str, err_msg->len, MSG_DONTWAIT|MSG_NOSIGNAL);
+ }
+
+ if (fdtab[cfd].owner)
+ fd_delete(cfd);
+ else
+ close(cfd);
+ return ret;
+}
+
+
+/* prepare the trash with a log prefix for session <sess>. It only works with
+ * embryonic sessions based on a real connection. This function requires that
+ * sess->origin points to the incoming connection.
+ */
+static void session_prepare_log_prefix(struct session *sess)
+{
+ struct tm tm;
+ char pn[INET6_ADDRSTRLEN];
+ int ret;
+ char *end;
+ struct connection *cli_conn = __objt_conn(sess->origin);
+
+ ret = addr_to_str(&cli_conn->addr.from, pn, sizeof(pn));
+ if (ret <= 0)
+ chunk_printf(&trash, "unknown [");
+ else if (ret == AF_UNIX)
+ chunk_printf(&trash, "%s:%d [", pn, sess->listener->luid);
+ else
+ chunk_printf(&trash, "%s:%d [", pn, get_host_port(&cli_conn->addr.from));
+
+ get_localtime(sess->accept_date.tv_sec, &tm);
+ end = date2str_log(trash.str + trash.len, &tm, &(sess->accept_date), trash.size - trash.len);
+ trash.len = end - trash.str;
+ if (sess->listener->name)
+ chunk_appendf(&trash, "] %s/%s", sess->fe->id, sess->listener->name);
+ else
+ chunk_appendf(&trash, "] %s/%d", sess->fe->id, sess->listener->luid);
+}
+
+/* This function kills an existing embryonic session. It stops the connection's
+ * transport layer, releases assigned resources, resumes the listener if it was
+ * disabled and finally kills the file descriptor. This function requires that
+ * sess->origin points to the incoming connection.
+ */
+static void session_kill_embryonic(struct session *sess)
+{
+ int level = LOG_INFO;
+ struct connection *conn = __objt_conn(sess->origin);
+ struct task *task = conn->owner;
+ unsigned int log = sess->fe->to_log;
+ const char *err_msg;
+
+ if (sess->fe->options2 & PR_O2_LOGERRORS)
+ level = LOG_ERR;
+
+ if (log && (sess->fe->options & PR_O_NULLNOLOG)) {
+ /* with "option dontlognull", we don't log connections with no transfer */
+ if (!conn->err_code ||
+ conn->err_code == CO_ER_PRX_EMPTY || conn->err_code == CO_ER_PRX_ABORT ||
+ conn->err_code == CO_ER_SSL_EMPTY || conn->err_code == CO_ER_SSL_ABORT)
+ log = 0;
+ }
+
+ if (log) {
+ if (!conn->err_code && (task->state & TASK_WOKEN_TIMER)) {
+ if (conn->flags & CO_FL_ACCEPT_PROXY)
+ conn->err_code = CO_ER_PRX_TIMEOUT;
+ else if (conn->flags & CO_FL_SSL_WAIT_HS)
+ conn->err_code = CO_ER_SSL_TIMEOUT;
+ }
+
+ session_prepare_log_prefix(sess);
+ err_msg = conn_err_code_str(conn);
+ if (err_msg)
+ send_log(sess->fe, level, "%s: %s\n", trash.str, err_msg);
+ else
+ send_log(sess->fe, level, "%s: unknown connection error (code=%d flags=%08x)\n",
+ trash.str, conn->err_code, conn->flags);
+ }
+
+ /* kill the connection now */
+ conn_force_close(conn);
+ conn_free(conn);
+
+ sess->fe->feconn--;
+
+ if (!(sess->listener->options & LI_O_UNLIMITED))
+ actconn--;
+ jobs--;
+ sess->listener->nbconn--;
+ if (sess->listener->state == LI_FULL)
+ resume_listener(sess->listener);
+
+ /* Dequeues all of the listeners waiting for a resource */
+ if (!LIST_ISEMPTY(&global_listener_queue))
+ dequeue_all_listeners(&global_listener_queue);
+
+ if (!LIST_ISEMPTY(&sess->fe->listener_queue) &&
+ (!sess->fe->fe_sps_lim || freq_ctr_remain(&sess->fe->fe_sess_per_sec, sess->fe->fe_sps_lim, 0) > 0))
+ dequeue_all_listeners(&sess->fe->listener_queue);
+
+ task_delete(task);
+ task_free(task);
+ session_free(sess);
+}
+
+/* Manages the embryonic session timeout. It is only called when the timeout
+ * strikes and performs the required cleanup.
+ */
+static struct task *session_expire_embryonic(struct task *t)
+{
+ struct session *sess = t->context;
+
+ if (!(t->state & TASK_WOKEN_TIMER))
+ return t;
+
+ session_kill_embryonic(sess);
+ return NULL;
+}
+
+/* Finish initializing a session from a connection, or kills it if the
+ * connection shows an error. Returns <0 if the connection was killed.
+ */
+static int conn_complete_session(struct connection *conn)
+{
+ struct task *task = conn->owner;
+ struct session *sess = task->context;
+ struct stream *strm;
+
+ if (conn->flags & CO_FL_ERROR)
+ goto fail;
+
+ /* we want the connection handler to notify the stream interface about updates. */
+ conn->flags |= CO_FL_WAKE_DATA;
+
+ /* if logs require transport layer information, note it on the connection */
+ if (sess->fe->to_log & LW_XPRT)
+ conn->flags |= CO_FL_XPRT_TRACKED;
+
+ session_count_new(sess);
+ task->process = sess->listener->handler;
+ strm = stream_new(sess, task, &conn->obj_type);
+ if (!strm)
+ goto fail;
+
+ strm->target = sess->listener->default_target;
+ strm->req.analysers = sess->listener->analysers;
+ conn->flags &= ~CO_FL_INIT_DATA;
+
+ return 0;
+
+ fail:
+ session_kill_embryonic(sess);
+ return -1;
+}
+
+/* Update a session status. The connection is killed in case of
+ * error, and <0 will be returned. Otherwise it does nothing.
+ */
+static int conn_update_session(struct connection *conn)
+{
+ struct task *task = conn->owner;
+ struct session *sess = task->context;
+
+ if (conn->flags & CO_FL_ERROR) {
+ session_kill_embryonic(sess);
+ return -1;
+ }
+ return 0;
+}
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * shctx.c - shared context management functions for SSL
+ *
+ * Copyright (C) 2011-2012 EXCELIANCE
+ *
+ * Author: Emeric Brun - emeric@exceliance.fr
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <sys/mman.h>
+#ifndef USE_PRIVATE_CACHE
+#ifdef USE_PTHREAD_PSHARED
+#include <pthread.h>
+#else
+#ifdef USE_SYSCALL_FUTEX
+#include <unistd.h>
+#include <linux/futex.h>
+#include <sys/syscall.h>
+#endif
+#endif
+#endif
+#include <arpa/inet.h>
+#include <ebmbtree.h>
+#include <types/global.h>
+#include "proto/shctx.h"
+
+struct shsess_packet_hdr {
+ unsigned int eol;
+ unsigned char final:1;
+ unsigned char seq:7;
+ unsigned char id[SSL_MAX_SSL_SESSION_ID_LENGTH];
+};
+
+struct shsess_packet {
+ unsigned char version;
+ unsigned char sig[SHA_DIGEST_LENGTH];
+ struct shsess_packet_hdr hdr;
+ unsigned char data[0];
+};
+
+struct shared_session {
+ struct ebmb_node key;
+ unsigned char key_data[SSL_MAX_SSL_SESSION_ID_LENGTH];
+ unsigned char data[SHSESS_BLOCK_MIN_SIZE];
+};
+
+struct shared_block {
+ union {
+ struct shared_session session;
+ unsigned char data[sizeof(struct shared_session)];
+ } data;
+ short int data_len;
+ struct shared_block *p;
+ struct shared_block *n;
+};
+
+struct shared_context {
+#ifndef USE_PRIVATE_CACHE
+#ifdef USE_PTHREAD_PSHARED
+ pthread_mutex_t mutex;
+#else
+ unsigned int waiters;
+#endif
+#endif
+ struct shsess_packet_hdr upd;
+ unsigned char data[SHSESS_MAX_DATA_LEN];
+ short int data_len;
+ struct shared_block active;
+ struct shared_block free;
+};
+
+/* Static shared context */
+static struct shared_context *shctx = NULL;
+
+/* Lock functions */
+
+#if defined (USE_PRIVATE_CACHE)
+
+#define shared_context_lock()
+#define shared_context_unlock()
+
+#elif defined (USE_PTHREAD_PSHARED)
+static int use_shared_mem = 0;
+
+#define shared_context_lock() if (use_shared_mem) pthread_mutex_lock(&shctx->mutex)
+#define shared_context_unlock() if (use_shared_mem) pthread_mutex_unlock(&shctx->mutex)
+
+#else
+static int use_shared_mem = 0;
+
+#ifdef USE_SYSCALL_FUTEX
+static inline void _shared_context_wait4lock(unsigned int *count, unsigned int *uaddr, int value)
+{
+ syscall(SYS_futex, uaddr, FUTEX_WAIT, value, NULL, 0, 0);
+}
+
+static inline void _shared_context_awakelocker(unsigned int *uaddr)
+{
+ syscall(SYS_futex, uaddr, FUTEX_WAKE, 1, NULL, 0, 0);
+}
+
+#else /* internal spin lock */
+
+#if defined (__i486__) || defined (__i586__) || defined (__i686__) || defined (__x86_64__)
+static inline void relax()
+{
+ __asm volatile("rep;nop\n" ::: "memory");
+}
+#else /* if no x86_64 or i586 arch: use less optimized but generic asm */
+static inline void relax()
+{
+ __asm volatile("" ::: "memory");
+}
+#endif
+
+static inline void _shared_context_wait4lock(unsigned int *count, unsigned int *uaddr, int value)
+{
+ int i;
+
+ for (i = 0; i < *count; i++) {
+ relax();
+ relax();
+ }
+ *count = *count << 1;
+}
+
+#define _shared_context_awakelocker(a)
+
+#endif
+
+#if defined (__i486__) || defined (__i586__) || defined (__i686__) || defined (__x86_64__)
+static inline unsigned int xchg(unsigned int *ptr, unsigned int x)
+{
+ __asm volatile("lock xchgl %0,%1"
+ : "=r" (x), "+m" (*ptr)
+ : "0" (x)
+ : "memory");
+ return x;
+}
+
+static inline unsigned int cmpxchg(unsigned int *ptr, unsigned int old, unsigned int new)
+{
+ unsigned int ret;
+
+ __asm volatile("lock cmpxchgl %2,%1"
+ : "=a" (ret), "+m" (*ptr)
+ : "r" (new), "0" (old)
+ : "memory");
+ return ret;
+}
+
+static inline unsigned char atomic_dec(unsigned int *ptr)
+{
+ unsigned char ret;
+ __asm volatile("lock decl %0\n"
+ "setne %1\n"
+ : "+m" (*ptr), "=qm" (ret)
+ :
+ : "memory");
+ return ret;
+}
+
+#else /* if no x86_64 or i586 arch: use less optimized gcc >= 4.1 built-ins */
+static inline unsigned int xchg(unsigned int *ptr, unsigned int x)
+{
+ return __sync_lock_test_and_set(ptr, x);
+}
+
+static inline unsigned int cmpxchg(unsigned int *ptr, unsigned int old, unsigned int new)
+{
+ return __sync_val_compare_and_swap(ptr, old, new);
+}
+
+static inline unsigned char atomic_dec(unsigned int *ptr)
+{
+ return __sync_sub_and_fetch(ptr, 1) ? 1 : 0;
+}
+
+#endif
+
+static inline void _shared_context_lock(void)
+{
+ unsigned int x;
+ unsigned int count = 4;
+
+ x = cmpxchg(&shctx->waiters, 0, 1);
+ if (x) {
+ if (x != 2)
+ x = xchg(&shctx->waiters, 2);
+
+ while (x) {
+ _shared_context_wait4lock(&count, &shctx->waiters, 2);
+ x = xchg(&shctx->waiters, 2);
+ }
+ }
+}
+
+static inline void _shared_context_unlock(void)
+{
+ if (atomic_dec(&shctx->waiters)) {
+ shctx->waiters = 0;
+ _shared_context_awakelocker(&shctx->waiters);
+ }
+}
+
+#define shared_context_lock() if (use_shared_mem) _shared_context_lock()
+
+#define shared_context_unlock() if (use_shared_mem) _shared_context_unlock()
+
+#endif
+
+/* List Macros */
+
+#define shblock_unset(s) (s)->n->p = (s)->p; \
+ (s)->p->n = (s)->n;
+
+#define shblock_set_free(s) shblock_unset(s) \
+ (s)->n = &shctx->free; \
+ (s)->p = shctx->free.p; \
+ shctx->free.p->n = s; \
+ shctx->free.p = s;
+
+
+#define shblock_set_active(s) shblock_unset(s) \
+ (s)->n = &shctx->active; \
+ (s)->p = shctx->active.p; \
+ shctx->active.p->n = s; \
+ shctx->active.p = s;
+
+
+/* Tree Macros */
+
+#define shsess_tree_delete(s) ebmb_delete(&(s)->key);
+
+#define shsess_tree_insert(s) (struct shared_session *)ebmb_insert(&shctx->active.data.session.key.node.branches, \
+ &(s)->key, SSL_MAX_SSL_SESSION_ID_LENGTH);
+
+#define shsess_tree_lookup(k) (struct shared_session *)ebmb_lookup(&shctx->active.data.session.key.node.branches, \
+ (k), SSL_MAX_SSL_SESSION_ID_LENGTH);
+
+/* shared session functions */
+
+/* Free session blocks, returns number of freed blocks */
+static int shsess_free(struct shared_session *shsess)
+{
+ struct shared_block *block;
+ int ret = 1;
+
+ if (((struct shared_block *)shsess)->data_len <= sizeof(shsess->data)) {
+ shblock_set_free((struct shared_block *)shsess);
+ return ret;
+ }
+ block = ((struct shared_block *)shsess)->n;
+ shblock_set_free((struct shared_block *)shsess);
+ while (1) {
+ struct shared_block *next;
+
+ if (block->data_len <= sizeof(block->data)) {
+ /* last block */
+ shblock_set_free(block);
+ ret++;
+ break;
+ }
+ next = block->n;
+ shblock_set_free(block);
+ ret++;
+ block = next;
+ }
+ return ret;
+}
+
+/* This function frees enough blocks to store a new session of <data_len> bytes.
+ * Returns a pointer to a free block if it succeeds, or NULL if there are not
+ * enough blocks to store that session.
+ */
+static struct shared_session *shsess_get_next(int data_len)
+{
+ int head = 0;
+ struct shared_block *b;
+
+ b = shctx->free.n;
+ while (b != &shctx->free) {
+ if (!head) {
+ data_len -= sizeof(b->data.session.data);
+ head = 1;
+ }
+ else
+ data_len -= sizeof(b->data.data);
+ if (data_len <= 0)
+ return &shctx->free.n->data.session;
+ b = b->n;
+ }
+ b = shctx->active.n;
+ while (b != &shctx->active) {
+ int freed;
+
+ shsess_tree_delete(&b->data.session);
+ freed = shsess_free(&b->data.session);
+ if (!head)
+ data_len -= sizeof(b->data.session.data) + (freed-1)*sizeof(b->data.data);
+ else
+ data_len -= freed*sizeof(b->data.data);
+ if (data_len <= 0)
+ return &shctx->free.n->data.session;
+ b = shctx->active.n;
+ }
+ return NULL;
+}
+
+/* store a session into the cache
+ * s_id : session id padded with zeroes to SSL_MAX_SSL_SESSION_ID_LENGTH
+ * data: ASN1-encoded session
+ * data_len: ASN1-encoded session length
+ * Returns 1 if the session was stored, 0 otherwise
+ */
+static int shsess_store(unsigned char *s_id, unsigned char *data, int data_len)
+{
+ struct shared_session *shsess, *oldshsess;
+
+ shsess = shsess_get_next(data_len);
+ if (!shsess) {
+ /* Could not retrieve enough free blocks to store that session */
+ return 0;
+ }
+
+ /* prepare key */
+ memcpy(shsess->key_data, s_id, SSL_MAX_SSL_SESSION_ID_LENGTH);
+
+ /* the insert returns the already existing node, or the current
+ node if none existed; it never returns NULL */
+ oldshsess = shsess_tree_insert(shsess);
+ if (oldshsess != shsess) {
+ /* free all blocks used by old node */
+ shsess_free(oldshsess);
+ shsess = oldshsess;
+ }
+
+ ((struct shared_block *)shsess)->data_len = data_len;
+ if (data_len <= sizeof(shsess->data)) {
+ /* Store on a single block */
+ memcpy(shsess->data, data, data_len);
+ shblock_set_active((struct shared_block *)shsess);
+ }
+ else {
+ unsigned char *p;
+ /* Store on multiple blocks */
+ int cur_len;
+
+ memcpy(shsess->data, data, sizeof(shsess->data));
+ p = data + sizeof(shsess->data);
+ cur_len = data_len - sizeof(shsess->data);
+ shblock_set_active((struct shared_block *)shsess);
+ while (1) {
+ /* Store next data on free block.
+ * shsess_get_next guarantees that there are enough
+ * free blocks in queue.
+ */
+ struct shared_block *block;
+
+ block = shctx->free.n;
+ if (cur_len <= sizeof(block->data)) {
+ /* This is the last block */
+ block->data_len = cur_len;
+ memcpy(block->data.data, p, cur_len);
+ shblock_set_active(block);
+ break;
+ }
+ /* Intermediate block */
+ block->data_len = cur_len;
+ memcpy(block->data.data, p, sizeof(block->data));
+ p += sizeof(block->data.data);
+ cur_len -= sizeof(block->data.data);
+ shblock_set_active(block);
+ }
+ }
+
+ return 1;
+}
+
+
+/* SSL context callbacks */
+
+/* SSL callback used on new session creation */
+int shctx_new_cb(SSL *ssl, SSL_SESSION *sess)
+{
+ unsigned char encsess[sizeof(struct shsess_packet)+SHSESS_MAX_DATA_LEN];
+ struct shsess_packet *packet = (struct shsess_packet *)encsess;
+ unsigned char *p;
+ int data_len, sid_length, sid_ctx_length;
+
+
+ /* The session id is already stored in the key and is known to the
+ * caller, so we do not store it again in order to save space.
+ */
+ sid_length = sess->session_id_length;
+ sess->session_id_length = 0;
+ sid_ctx_length = sess->sid_ctx_length;
+ sess->sid_ctx_length = 0;
+
+ /* check if buffer is large enough for the ASN1 encoded session */
+ data_len = i2d_SSL_SESSION(sess, NULL);
+ if (data_len > SHSESS_MAX_DATA_LEN)
+ goto err;
+
+ /* process ASN1 session encoding before the lock */
+ p = packet->data;
+ i2d_SSL_SESSION(sess, &p);
+
+ memcpy(packet->hdr.id, sess->session_id, sid_length);
+ if (sid_length < SSL_MAX_SSL_SESSION_ID_LENGTH)
+ memset(&packet->hdr.id[sid_length], 0, SSL_MAX_SSL_SESSION_ID_LENGTH-sid_length);
+
+ shared_context_lock();
+
+ /* store to cache */
+ shsess_store(packet->hdr.id, packet->data, data_len);
+
+ shared_context_unlock();
+
+err:
+ /* reset original length values */
+ sess->session_id_length = sid_length;
+ sess->sid_ctx_length = sid_ctx_length;
+
+ return 0; /* do not increment session reference count */
+}
+
+/* SSL callback used to look up an existing session because none was found in the internal cache */
+SSL_SESSION *shctx_get_cb(SSL *ssl, unsigned char *key, int key_len, int *do_copy)
+{
+ struct shared_session *shsess;
+ unsigned char data[SHSESS_MAX_DATA_LEN], *p;
+ unsigned char tmpkey[SSL_MAX_SSL_SESSION_ID_LENGTH];
+ int data_len;
+ SSL_SESSION *sess;
+
+ global.shctx_lookups++;
+
+ /* allow the session to be freed automatically by openssl */
+ *do_copy = 0;
+
+ /* the tree key is the zero-padded session id */
+ if (key_len < SSL_MAX_SSL_SESSION_ID_LENGTH) {
+ memcpy(tmpkey, key, key_len);
+ memset(tmpkey + key_len, 0, SSL_MAX_SSL_SESSION_ID_LENGTH - key_len);
+ key = tmpkey;
+ }
+
+ /* lock cache */
+ shared_context_lock();
+
+ /* lookup for session */
+ shsess = shsess_tree_lookup(key);
+ if (!shsess) {
+ /* no session found: unlock cache and exit */
+ shared_context_unlock();
+ global.shctx_misses++;
+ return NULL;
+ }
+
+ data_len = ((struct shared_block *)shsess)->data_len;
+ if (data_len <= sizeof(shsess->data)) {
+ /* Session stored on single block */
+ memcpy(data, shsess->data, data_len);
+ shblock_set_active((struct shared_block *)shsess);
+ }
+ else {
+ /* Session stored on multiple blocks */
+ struct shared_block *block;
+
+ memcpy(data, shsess->data, sizeof(shsess->data));
+ p = data + sizeof(shsess->data);
+ block = ((struct shared_block *)shsess)->n;
+ shblock_set_active((struct shared_block *)shsess);
+ while (1) {
+ /* Retrieve data from next block */
+ struct shared_block *next;
+
+ if (block->data_len <= sizeof(block->data.data)) {
+ /* This is the last block */
+ memcpy(p, block->data.data, block->data_len);
+ p += block->data_len;
+ shblock_set_active(block);
+ break;
+ }
+ /* Intermediate block */
+ memcpy(p, block->data.data, sizeof(block->data.data));
+ p += sizeof(block->data.data);
+ next = block->n;
+ shblock_set_active(block);
+ block = next;
+ }
+ }
+
+ shared_context_unlock();
+
+ /* decode ASN1 session */
+ p = data;
+ sess = d2i_SSL_SESSION(NULL, (const unsigned char **)&p, data_len);
+ /* Reset session id and session id context */
+ if (sess) {
+ memcpy(sess->session_id, key, key_len);
+ sess->session_id_length = key_len;
+ memcpy(sess->sid_ctx, (const unsigned char *)SHCTX_APPNAME, strlen(SHCTX_APPNAME));
+ sess->sid_ctx_length = ssl->sid_ctx_length;
+ }
+
+ return sess;
+}
+
+/* SSL callback used to signal that a session is no longer used in the internal cache */
+void shctx_remove_cb(SSL_CTX *ctx, SSL_SESSION *sess)
+{
+ struct shared_session *shsess;
+ unsigned char tmpkey[SSL_MAX_SSL_SESSION_ID_LENGTH];
+ unsigned char *key = sess->session_id;
+ (void)ctx;
+
+ /* the tree key is the zero-padded session id */
+ if (sess->session_id_length < SSL_MAX_SSL_SESSION_ID_LENGTH) {
+ memcpy(tmpkey, sess->session_id, sess->session_id_length);
+ memset(tmpkey+sess->session_id_length, 0, SSL_MAX_SSL_SESSION_ID_LENGTH - sess->session_id_length);
+ key = tmpkey;
+ }
+
+ shared_context_lock();
+
+ /* look up the session */
+ shsess = shsess_tree_lookup(key);
+ if (shsess) {
+ /* free session */
+ shsess_tree_delete(shsess);
+ shsess_free(shsess);
+ }
+
+ /* unlock cache */
+ shared_context_unlock();
+}
+
+/* Allocate the shared memory context.
+ * <size> is the maximum number of cached sessions.
+ * If <size> is less than or equal to 0, the SSL cache is disabled.
+ * Returns: a negative SHCTX_E_* value on allocation or lock init failure,
+ * <size> if it performs the context allocation,
+ * and 0 if the cache is already allocated or disabled.
+ */
+int shared_context_init(int size, int shared)
+{
+ int i;
+#ifndef USE_PRIVATE_CACHE
+#ifdef USE_PTHREAD_PSHARED
+ pthread_mutexattr_t attr;
+#endif
+#endif
+ struct shared_block *prev,*cur;
+ int maptype = MAP_PRIVATE;
+
+ if (shctx)
+ return 0;
+
+ if (size <= 0)
+ return 0;
+
+ /* Increase size by one to reserve one node for lookups */
+ size++;
+#ifndef USE_PRIVATE_CACHE
+ if (shared)
+ maptype = MAP_SHARED;
+#endif
+
+ shctx = (struct shared_context *)mmap(NULL, sizeof(struct shared_context)+(size*sizeof(struct shared_block)),
+ PROT_READ | PROT_WRITE, maptype | MAP_ANON, -1, 0);
+ if (!shctx || shctx == MAP_FAILED) {
+ shctx = NULL;
+ return SHCTX_E_ALLOC_CACHE;
+ }
+
+#ifndef USE_PRIVATE_CACHE
+ if (maptype == MAP_SHARED) {
+#ifdef USE_PTHREAD_PSHARED
+ if (pthread_mutexattr_init(&attr)) {
+ munmap(shctx, sizeof(struct shared_context)+(size*sizeof(struct shared_block)));
+ shctx = NULL;
+ return SHCTX_E_INIT_LOCK;
+ }
+
+ if (pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED)) {
+ pthread_mutexattr_destroy(&attr);
+ munmap(shctx, sizeof(struct shared_context)+(size*sizeof(struct shared_block)));
+ shctx = NULL;
+ return SHCTX_E_INIT_LOCK;
+ }
+
+ if (pthread_mutex_init(&shctx->mutex, &attr)) {
+ pthread_mutexattr_destroy(&attr);
+ munmap(shctx, sizeof(struct shared_context)+(size*sizeof(struct shared_block)));
+ shctx = NULL;
+ return SHCTX_E_INIT_LOCK;
+ }
+#else
+ shctx->waiters = 0;
+#endif
+ use_shared_mem = 1;
+ }
+#endif
+
+ memset(&shctx->active.data.session.key, 0, sizeof(struct ebmb_node));
+ memset(&shctx->free.data.session.key, 0, sizeof(struct ebmb_node));
+
+ /* No duplicates allowed in the tree: */
+ shctx->active.data.session.key.node.branches = EB_ROOT_UNIQUE;
+
+ /* Init remote update cache */
+ shctx->upd.eol = 0;
+ shctx->upd.seq = 0;
+ shctx->data_len = 0;
+
+ cur = &shctx->active;
+ cur->n = cur->p = cur;
+
+ cur = &shctx->free;
+ for (i = 0 ; i < size ; i++) {
+ prev = cur;
+ cur = (struct shared_block *)((char *)prev + sizeof(struct shared_block));
+ prev->n = cur;
+ cur->p = prev;
+ }
+ cur->n = &shctx->free;
+ shctx->free.p = cur;
+
+ return size;
+}
+
+
+/* Set the session cache mode to server and disable the OpenSSL internal
+ * cache. Set the shared cache callbacks on an SSL context.
+ * The shared context MUST be initialized first. */
+void shared_context_set_cache(SSL_CTX *ctx)
+{
+ SSL_CTX_set_session_id_context(ctx, (const unsigned char *)SHCTX_APPNAME, strlen(SHCTX_APPNAME));
+
+ if (!shctx) {
+ SSL_CTX_set_session_cache_mode(ctx, SSL_SESS_CACHE_OFF);
+ return;
+ }
+
+ SSL_CTX_set_session_cache_mode(ctx, SSL_SESS_CACHE_SERVER |
+ SSL_SESS_CACHE_NO_INTERNAL |
+ SSL_SESS_CACHE_NO_AUTO_CLEAR);
+
+ /* Set callbacks */
+ SSL_CTX_sess_set_new_cb(ctx, shctx_new_cb);
+ SSL_CTX_sess_set_get_cb(ctx, shctx_get_cb);
+ SSL_CTX_sess_set_remove_cb(ctx, shctx_remove_cb);
+}
--- /dev/null
+/*
+ * Asynchronous signal delivery functions.
+ *
+ * Copyright 2000-2010 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <signal.h>
+#include <string.h>
+
+#include <proto/signal.h>
+#include <proto/log.h>
+#include <proto/task.h>
+
+/* Principle: we keep an in-order list of the first occurrence of each received
+ * signal. All occurrences of the same signal are grouped though. The signal
+ * queue does not need to be deeper than the number of signals we can handle.
+ * The handlers are called asynchronously with the signal number. They can
+ * check the number of calls themselves by reading the signal's descriptor.
+ */
+
+int signal_queue_len; /* length of signal queue, <= MAX_SIGNAL (1 entry per signal max) */
+int signal_queue[MAX_SIGNAL]; /* in-order queue of received signals */
+struct signal_descriptor signal_state[MAX_SIGNAL];
+struct pool_head *pool2_sig_handlers = NULL;
+sigset_t blocked_sig;
+int signal_pending = 0; /* non-zero if at least one signal remains unprocessed */
+
+/* Common signal handler, used by all signals. Received signals are queued.
+ * Signal number zero has a specific status: since it cannot be delivered by
+ * the system, any function may call it to perform asynchronous signal
+ * delivery.
+ */
+void signal_handler(int sig)
+{
+ if (sig < 0 || sig >= MAX_SIGNAL) {
+ /* unhandled signal */
+ signal(sig, SIG_IGN);
+ qfprintf(stderr, "Received unhandled signal %d. Signal has been disabled.\n", sig);
+ return;
+ }
+
+ if (!signal_state[sig].count) {
+ /* signal was not queued yet */
+ if (signal_queue_len < MAX_SIGNAL)
+ signal_queue[signal_queue_len++] = sig;
+ else
+ qfprintf(stderr, "Signal %d : signal queue is unexpectedly full.\n", sig);
+ }
+
+ signal_state[sig].count++;
+ if (sig)
+ signal(sig, signal_handler); /* re-arm signal */
+}
+
+/* Call handlers of all pending signals and clear counts and queue length. The
+ * handlers may unregister themselves by calling signal_register() while they
+ * are called, just like it is done with normal signal handlers.
+ * Note that it is more efficient to call the inline version which checks the
+ * queue length before getting here.
+ */
+void __signal_process_queue()
+{
+ int sig, cur_pos = 0;
+ struct signal_descriptor *desc;
+ sigset_t old_sig;
+
+ /* block signal delivery during processing */
+ sigprocmask(SIG_SETMASK, &blocked_sig, &old_sig);
+
+ /* It is important that we scan the queue forwards so that we can
+ * catch any signal that would have been queued by another signal
+ * handler. That allows real signal handlers to redistribute signals
+ * to tasks subscribed to signal zero.
+ */
+ for (cur_pos = 0; cur_pos < signal_queue_len; cur_pos++) {
+ sig = signal_queue[cur_pos];
+ desc = &signal_state[sig];
+ if (desc->count) {
+ struct sig_handler *sh, *shb;
+ list_for_each_entry_safe(sh, shb, &desc->handlers, list) {
+ if ((sh->flags & SIG_F_TYPE_FCT) && sh->handler)
+ ((void (*)(struct sig_handler *))sh->handler)(sh);
+ else if ((sh->flags & SIG_F_TYPE_TASK) && sh->handler)
+ task_wakeup(sh->handler, sh->arg | TASK_WOKEN_SIGNAL);
+ }
+ desc->count = 0;
+ }
+ }
+ signal_queue_len = 0;
+
+ /* restore signal delivery */
+ sigprocmask(SIG_SETMASK, &old_sig, NULL);
+}
+
+/* Perform minimal initializations. Returns 0 in case of error, 1 if OK. */
+int signal_init()
+{
+ int sig;
+
+ signal_queue_len = 0;
+ memset(signal_queue, 0, sizeof(signal_queue));
+ memset(signal_state, 0, sizeof(signal_state));
+ sigfillset(&blocked_sig);
+ sigdelset(&blocked_sig, SIGPROF);
+ for (sig = 0; sig < MAX_SIGNAL; sig++)
+ LIST_INIT(&signal_state[sig].handlers);
+
+ pool2_sig_handlers = create_pool("sig_handlers", sizeof(struct sig_handler), MEM_F_SHARED);
+ return pool2_sig_handlers != NULL;
+}
+
+/* releases all registered signal handlers */
+void deinit_signals()
+{
+ int sig;
+ struct sig_handler *sh, *shb;
+
+ for (sig = 0; sig < MAX_SIGNAL; sig++) {
+ if (sig != SIGPROF)
+ signal(sig, SIG_DFL);
+ list_for_each_entry_safe(sh, shb, &signal_state[sig].handlers, list) {
+ LIST_DEL(&sh->list);
+ pool_free2(pool2_sig_handlers, sh);
+ }
+ }
+}
+
+/* Register a function and an integer argument on a signal. A pointer to the
+ * newly allocated sig_handler is returned, or NULL in case of any error. The
+ * caller is responsible for unregistering the function when not used anymore.
+ * Note that passing a NULL as the function pointer enables interception of the
+ * signal without processing, which is identical to SIG_IGN. If the signal is
+ * zero (which the system cannot deliver), only internal functions will be able
+ * to notify the registered functions.
+ */
+struct sig_handler *signal_register_fct(int sig, void (*fct)(struct sig_handler *), int arg)
+{
+ struct sig_handler *sh;
+
+ if (sig < 0 || sig >= MAX_SIGNAL)
+ return NULL;
+
+ if (sig)
+ signal(sig, fct ? signal_handler : SIG_IGN);
+
+ if (!fct)
+ return NULL;
+
+ sh = pool_alloc2(pool2_sig_handlers);
+ if (!sh)
+ return NULL;
+
+ sh->handler = fct;
+ sh->arg = arg;
+ sh->flags = SIG_F_TYPE_FCT;
+ LIST_ADDQ(&signal_state[sig].handlers, &sh->list);
+ return sh;
+}
+
+/* Register a task and a wake-up reason on a signal. A pointer to the newly
+ * allocated sig_handler is returned, or NULL in case of any error. The caller
+ * is responsible for unregistering the task when not used anymore. Note that
+ * passing a NULL as the task pointer enables interception of the signal
+ * without processing, which is identical to SIG_IGN. If the signal is zero
+ * (which the system cannot deliver), only internal functions will be able to
+ * notify the registered functions.
+ */
+struct sig_handler *signal_register_task(int sig, struct task *task, int reason)
+{
+ struct sig_handler *sh;
+
+ if (sig < 0 || sig >= MAX_SIGNAL)
+ return NULL;
+
+ if (sig)
+ signal(sig, signal_handler);
+
+ if (!task)
+ return NULL;
+
+ sh = pool_alloc2(pool2_sig_handlers);
+ if (!sh)
+ return NULL;
+
+ sh->handler = task;
+ sh->arg = reason & ~TASK_WOKEN_ANY;
+ sh->flags = SIG_F_TYPE_TASK;
+ LIST_ADDQ(&signal_state[sig].handlers, &sh->list);
+ return sh;
+}
+
+/* Immediately unregister a handler so that no further signals may be delivered
+ * to it. The struct is released so the caller may not reference it anymore.
+ */
+void signal_unregister_handler(struct sig_handler *handler)
+{
+ LIST_DEL(&handler->list);
+ pool_free2(pool2_sig_handlers, handler);
+}
+
+/* Immediately unregister a handler so that no further signals may be delivered
+ * to it. The handler struct does not need to be known, only the function or
+ * task pointer. This method is expensive because it scans all the list, so it
+ * should only be used for rare cases (eg: exit). The struct is released so the
+ * caller may not reference it anymore.
+ */
+void signal_unregister_target(int sig, void *target)
+{
+ struct sig_handler *sh, *shb;
+
+ if (sig < 0 || sig >= MAX_SIGNAL)
+ return;
+
+ if (!target)
+ return;
+
+ list_for_each_entry_safe(sh, shb, &signal_state[sig].handlers, list) {
+ if (sh->handler == target) {
+ LIST_DEL(&sh->list);
+ pool_free2(pool2_sig_handlers, sh);
+ break;
+ }
+ }
+}
--- /dev/null
+/*
+ * SSL/TLS transport layer over SOCK_STREAM sockets
+ *
+ * Copyright (C) 2012 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * Acknowledgement:
+ * We'd like to specially thank the Stud project authors for a very clean
+ * and well documented code which helped us understand how the OpenSSL API
+ * ought to be used in non-blocking mode. This is one difficult part which
+ * is not easy to get from the OpenSSL doc, and reading the Stud code made
+ * it much more obvious than the examples in the OpenSSL package. Keep up
+ * the good works, guys !
+ *
+ * Stud is an extremely efficient and scalable SSL/TLS proxy which combines
+ * particularly well with haproxy. For more info about this project, visit :
+ * https://github.com/bumptech/stud
+ *
+ */
+
+#define _GNU_SOURCE
+#include <ctype.h>
+#include <dirent.h>
+#include <errno.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+
+#include <sys/socket.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+#include <netdb.h>
+#include <netinet/tcp.h>
+
+#include <openssl/ssl.h>
+#include <openssl/x509.h>
+#include <openssl/x509v3.h>
+#include <openssl/err.h>
+#include <openssl/rand.h>
+#if (defined SSL_CTRL_SET_TLSEXT_STATUS_REQ_CB && !defined OPENSSL_NO_OCSP)
+#include <openssl/ocsp.h>
+#endif
+#ifndef OPENSSL_NO_DH
+#include <openssl/dh.h>
+#endif
+
+#include <import/lru.h>
+#include <import/xxhash.h>
+
+#include <common/buffer.h>
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/debug.h>
+#include <common/errors.h>
+#include <common/standard.h>
+#include <common/ticks.h>
+#include <common/time.h>
+#include <common/cfgparse.h>
+#include <common/base64.h>
+
+#include <ebsttree.h>
+
+#include <types/global.h>
+#include <types/ssl_sock.h>
+
+#include <proto/acl.h>
+#include <proto/arg.h>
+#include <proto/connection.h>
+#include <proto/fd.h>
+#include <proto/freq_ctr.h>
+#include <proto/frontend.h>
+#include <proto/listener.h>
+#include <proto/pattern.h>
+#include <proto/proto_tcp.h>
+#include <proto/server.h>
+#include <proto/log.h>
+#include <proto/proxy.h>
+#include <proto/shctx.h>
+#include <proto/ssl_sock.h>
+#include <proto/stream.h>
+#include <proto/task.h>
+
+/* Warning, these are bits, not integers! */
+#define SSL_SOCK_ST_FL_VERIFY_DONE 0x00000001
+#define SSL_SOCK_ST_FL_16K_WBFSIZE 0x00000002
+#define SSL_SOCK_SEND_UNLIMITED 0x00000004
+#define SSL_SOCK_RECV_HEARTBEAT 0x00000008
+
+/* bits 0xFFFF0000 are reserved to store verify errors */
+
+/* Verify errors macros */
+#define SSL_SOCK_CA_ERROR_TO_ST(e) (((e > 63) ? 63 : e) << (16))
+#define SSL_SOCK_CAEDEPTH_TO_ST(d) (((d > 15) ? 15 : d) << (6+16))
+#define SSL_SOCK_CRTERROR_TO_ST(e) (((e > 63) ? 63 : e) << (4+6+16))
+
+#define SSL_SOCK_ST_TO_CA_ERROR(s) ((s >> (16)) & 63)
+#define SSL_SOCK_ST_TO_CAEDEPTH(s) ((s >> (6+16)) & 15)
+#define SSL_SOCK_ST_TO_CRTERROR(s) ((s >> (4+6+16)) & 63)
+
+/* Supported hash function for TLS tickets */
+#ifdef OPENSSL_NO_SHA256
+#define HASH_FUNCT EVP_sha1
+#else
+#define HASH_FUNCT EVP_sha256
+#endif /* OPENSSL_NO_SHA256 */
+
+/* server and bind verify method, it uses a global value as default */
+enum {
+ SSL_SOCK_VERIFY_DEFAULT = 0,
+ SSL_SOCK_VERIFY_REQUIRED = 1,
+ SSL_SOCK_VERIFY_OPTIONAL = 2,
+ SSL_SOCK_VERIFY_NONE = 3,
+};
+
+int sslconns = 0;
+int totalsslconns = 0;
+
+#if (defined SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB && TLS_TICKETS_NO > 0)
+struct list tlskeys_reference = LIST_HEAD_INIT(tlskeys_reference);
+#endif
+
+#ifndef OPENSSL_NO_DH
+static int ssl_dh_ptr_index = -1;
+static DH *global_dh = NULL;
+static DH *local_dh_1024 = NULL;
+static DH *local_dh_2048 = NULL;
+static DH *local_dh_4096 = NULL;
+#endif /* OPENSSL_NO_DH */
+
+#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
+/* X509V3 Extensions that will be added on generated certificates */
+#define X509V3_EXT_SIZE 5
+static char *x509v3_ext_names[X509V3_EXT_SIZE] = {
+ "basicConstraints",
+ "nsComment",
+ "subjectKeyIdentifier",
+ "authorityKeyIdentifier",
+ "keyUsage",
+};
+static char *x509v3_ext_values[X509V3_EXT_SIZE] = {
+ "CA:FALSE",
+ "\"OpenSSL Generated Certificate\"",
+ "hash",
+ "keyid,issuer:always",
+ "nonRepudiation,digitalSignature,keyEncipherment"
+};
+
+/* LRU cache to store generated certificate */
+static struct lru64_head *ssl_ctx_lru_tree = NULL;
+static unsigned int ssl_ctx_lru_seed = 0;
+#endif // SSL_CTRL_SET_TLSEXT_HOSTNAME
+
+#if (defined SSL_CTRL_SET_TLSEXT_STATUS_REQ_CB && !defined OPENSSL_NO_OCSP)
+struct certificate_ocsp {
+ struct ebmb_node key;
+ unsigned char key_data[OCSP_MAX_CERTID_ASN1_LENGTH];
+ struct chunk response;
+ long expire;
+};
+
+/*
+ * This function returns the number of seconds elapsed between the Epoch,
+ * 1970-01-01 00:00:00 +0000 (UTC), and the date presented in
+ * ASN1_GENERALIZEDTIME.
+ *
+ * On parsing error, it returns -1.
+ */
+static long asn1_generalizedtime_to_epoch(ASN1_GENERALIZEDTIME *d)
+{
+ long epoch;
+ char *p, *end;
+ const unsigned short month_offset[12] = {
+ 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334
+ };
+ int year, month;
+
+ if (!d || (d->type != V_ASN1_GENERALIZEDTIME)) return -1;
+
+ p = (char *)d->data;
+ end = p + d->length;
+
+ if (end - p < 4) return -1;
+ year = 1000 * (p[0] - '0') + 100 * (p[1] - '0') + 10 * (p[2] - '0') + p[3] - '0';
+ p += 4;
+ if (end - p < 2) return -1;
+ month = 10 * (p[0] - '0') + p[1] - '0';
+ if (month < 1 || month > 12) return -1;
+ /* Compute the number of seconds since Jan 1st 1970 and the beginning of
+ * the current month. We consider leap years and whether the current month
+ * falls before March or not. */
+ epoch = ( ((year - 1970) * 365)
+ + ((year - (month < 3)) / 4 - (year - (month < 3)) / 100 + (year - (month < 3)) / 400)
+ - ((1970 - 1) / 4 - (1970 - 1) / 100 + (1970 - 1) / 400)
+ + month_offset[month-1]
+ ) * 24 * 60 * 60;
+ p += 2;
+ if (end - p < 2) return -1;
+ /* Add the number of seconds of completed days of current month */
+ epoch += (10 * (p[0] - '0') + p[1] - '0' - 1) * 24 * 60 * 60;
+ p += 2;
+ if (end - p < 2) return -1;
+ /* Add the completed hours of the current day */
+ epoch += (10 * (p[0] - '0') + p[1] - '0') * 60 * 60;
+ p += 2;
+ if (end - p < 2) return -1;
+ /* Add the completed minutes of the current hour */
+ epoch += (10 * (p[0] - '0') + p[1] - '0') * 60;
+ p += 2;
+ if (p == end) return -1;
+ /* Test whether optional seconds are present */
+ if (p[0] < '0' || p[0] > '9')
+ goto nosec;
+ if (end - p < 2) return -1;
+ /* Add the seconds of the current minute */
+ epoch += 10 * (p[0] - '0') + p[1] - '0';
+ p += 2;
+ if (p == end) return -1;
+ /* Ignore seconds float part if present */
+ if (p[0] == '.') {
+ do {
+ if (++p == end) return -1;
+ } while (p[0] >= '0' && p[0] <= '9');
+ }
+
+nosec:
+ if (p[0] == 'Z') {
+ if (end - p != 1) return -1;
+ return epoch;
+ }
+ else if (p[0] == '+') {
+ if (end - p != 5) return -1;
+ /* Apply timezone offset */
+ return epoch - ((10 * (p[1] - '0') + p[2] - '0') * 60 + (10 * (p[3] - '0') + p[4] - '0')) * 60;
+ }
+ else if (p[0] == '-') {
+ if (end - p != 5) return -1;
+ /* Apply timezone offset */
+ return epoch + ((10 * (p[1] - '0') + p[2] - '0') * 60 + (10 * (p[3] - '0') + p[4] - '0')) * 60;
+ }
+
+ return -1;
+}
+
+static struct eb_root cert_ocsp_tree = EB_ROOT_UNIQUE;
+
+/* This function checks that the OCSP response (in DER format) contained in
+ * chunk 'ocsp_response' is valid (else it exits on error).
+ * If 'cid' is not NULL, it is compared to the OCSP certificate ID contained
+ * in the OCSP response, and the function exits on error if they do not match.
+ * If the OCSP response is valid:
+ * If 'ocsp' is not NULL, the chunk is copied into the OCSP response
+ * container pointed to by 'ocsp'.
+ * If 'ocsp' is NULL, the function looks up the OCSP response containers
+ * tree (using the ASN1 form of the OCSP certificate ID extracted from the
+ * response as index) and exits on error if no container is found. If an
+ * OCSP response is already present in the container, it is overwritten.
+ *
+ * Note: an OCSP response containing more than one OCSP single response is
+ * not considered valid.
+ *
+ * Returns 0 on success, 1 in error case.
+ */
+static int ssl_sock_load_ocsp_response(struct chunk *ocsp_response, struct certificate_ocsp *ocsp, OCSP_CERTID *cid, char **err)
+{
+ OCSP_RESPONSE *resp;
+ OCSP_BASICRESP *bs = NULL;
+ OCSP_SINGLERESP *sr;
+ unsigned char *p = (unsigned char *)ocsp_response->str;
+ int rc , count_sr;
+ ASN1_GENERALIZEDTIME *revtime, *thisupd, *nextupd = NULL;
+ int reason;
+ int ret = 1;
+
+ resp = d2i_OCSP_RESPONSE(NULL, (const unsigned char **)&p, ocsp_response->len);
+ if (!resp) {
+ memprintf(err, "Unable to parse OCSP response");
+ goto out;
+ }
+
+ rc = OCSP_response_status(resp);
+ if (rc != OCSP_RESPONSE_STATUS_SUCCESSFUL) {
+ memprintf(err, "OCSP response status not successful");
+ goto out;
+ }
+
+ bs = OCSP_response_get1_basic(resp);
+ if (!bs) {
+ memprintf(err, "Failed to get basic response from OCSP Response");
+ goto out;
+ }
+
+ count_sr = OCSP_resp_count(bs);
+ if (count_sr > 1) {
+ memprintf(err, "OCSP response ignored because it contains multiple single responses (%d)", count_sr);
+ goto out;
+ }
+
+ sr = OCSP_resp_get0(bs, 0);
+ if (!sr) {
+ memprintf(err, "Failed to get OCSP single response");
+ goto out;
+ }
+
+ rc = OCSP_single_get0_status(sr, &reason, &revtime, &thisupd, &nextupd);
+ if (rc != V_OCSP_CERTSTATUS_GOOD) {
+ memprintf(err, "OCSP single response: certificate status not good");
+ goto out;
+ }
+
+ if (!nextupd) {
+ memprintf(err, "OCSP single response: missing nextupdate");
+ goto out;
+ }
+
+ rc = OCSP_check_validity(thisupd, nextupd, OCSP_MAX_RESPONSE_TIME_SKEW, -1);
+ if (!rc) {
+ memprintf(err, "OCSP single response: no longer valid.");
+ goto out;
+ }
+
+ if (cid) {
+ if (OCSP_id_cmp(sr->certId, cid)) {
+ memprintf(err, "OCSP single response: Certificate ID does not match certificate and issuer");
+ goto out;
+ }
+ }
+
+ if (!ocsp) {
+ unsigned char key[OCSP_MAX_CERTID_ASN1_LENGTH];
+ unsigned char *p;
+
+ rc = i2d_OCSP_CERTID(sr->certId, NULL);
+ if (!rc) {
+ memprintf(err, "OCSP single response: Unable to encode Certificate ID");
+ goto out;
+ }
+
+ if (rc > OCSP_MAX_CERTID_ASN1_LENGTH) {
+ memprintf(err, "OCSP single response: Certificate ID too long");
+ goto out;
+ }
+
+ p = key;
+ memset(key, 0, OCSP_MAX_CERTID_ASN1_LENGTH);
+ i2d_OCSP_CERTID(sr->certId, &p);
+ ocsp = (struct certificate_ocsp *)ebmb_lookup(&cert_ocsp_tree, key, OCSP_MAX_CERTID_ASN1_LENGTH);
+ if (!ocsp) {
+ memprintf(err, "OCSP single response: Certificate ID does not match any certificate or issuer");
+ goto out;
+ }
+ }
+
+ /* According to the comments on "chunk_dup", the previous chunk
+ * buffer will be freed */
+ if (!chunk_dup(&ocsp->response, ocsp_response)) {
+ memprintf(err, "OCSP response: Memory allocation error");
+ goto out;
+ }
+
+ ocsp->expire = asn1_generalizedtime_to_epoch(nextupd) - OCSP_MAX_RESPONSE_TIME_SKEW;
+
+ ret = 0;
+out:
+ if (bs)
+ OCSP_BASICRESP_free(bs);
+
+ if (resp)
+ OCSP_RESPONSE_free(resp);
+
+ return ret;
+}
+
+/*
+ * External function used to update the OCSP response in the OCSP response
+ * containers tree. The chunk 'ocsp_response' must contain the OCSP response
+ * to update, in DER format.
+ *
+ * Returns 0 on success, 1 in error case.
+ */
+int ssl_sock_update_ocsp_response(struct chunk *ocsp_response, char **err)
+{
+ return ssl_sock_load_ocsp_response(ocsp_response, NULL, NULL, err);
+}
+
+/*
+ * This function loads the OCSP response in DER format contained in the file
+ * at path 'ocsp_path' and calls 'ssl_sock_load_ocsp_response'.
+ *
+ * Returns 0 on success, 1 in error case.
+ */
+static int ssl_sock_load_ocsp_response_from_file(const char *ocsp_path, struct certificate_ocsp *ocsp, OCSP_CERTID *cid, char **err)
+{
+ int fd = -1;
+ int r = 0;
+ int ret = 1;
+
+ fd = open(ocsp_path, O_RDONLY);
+ if (fd == -1) {
+ memprintf(err, "Error opening OCSP response file");
+ goto end;
+ }
+
+ trash.len = 0;
+ while (trash.len < trash.size) {
+ r = read(fd, trash.str + trash.len, trash.size - trash.len);
+ if (r < 0) {
+ if (errno == EINTR)
+ continue;
+
+ memprintf(err, "Error reading OCSP response from file");
+ goto end;
+ }
+ else if (r == 0) {
+ break;
+ }
+ trash.len += r;
+ }
+
+ close(fd);
+ fd = -1;
+
+ ret = ssl_sock_load_ocsp_response(&trash, ocsp, cid, err);
+end:
+ if (fd != -1)
+ close(fd);
+
+ return ret;
+}
+
+#if (defined SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB && TLS_TICKETS_NO > 0)
+static int ssl_tlsext_ticket_key_cb(SSL *s, unsigned char key_name[16], unsigned char *iv, EVP_CIPHER_CTX *ectx, HMAC_CTX *hctx, int enc)
+{
+ struct tls_sess_key *keys;
+ struct connection *conn;
+ int head;
+ int i;
+
+ conn = (struct connection *)SSL_get_app_data(s);
+ keys = objt_listener(conn->target)->bind_conf->keys_ref->tlskeys;
+ head = objt_listener(conn->target)->bind_conf->keys_ref->tls_ticket_enc_index;
+
+ if (enc) {
+ memcpy(key_name, keys[head].name, 16);
+
+ if (!RAND_pseudo_bytes(iv, EVP_MAX_IV_LENGTH))
+ return -1;
+
+ if (!EVP_EncryptInit_ex(ectx, EVP_aes_128_cbc(), NULL, keys[head].aes_key, iv))
+ return -1;
+
+ HMAC_Init_ex(hctx, keys[head].hmac_key, 16, HASH_FUNCT(), NULL);
+
+ return 1;
+ } else {
+ for (i = 0; i < TLS_TICKETS_NO; i++) {
+ if (!memcmp(key_name, keys[(head + i) % TLS_TICKETS_NO].name, 16))
+ goto found;
+ }
+ return 0;
+
+ found:
+ HMAC_Init_ex(hctx, keys[(head + i) % TLS_TICKETS_NO].hmac_key, 16, HASH_FUNCT(), NULL);
+ if (!EVP_DecryptInit_ex(ectx, EVP_aes_128_cbc(), NULL, keys[(head + i) % TLS_TICKETS_NO].aes_key, iv))
+ return -1;
+ /* 2 for key renewal, 1 if current key is still valid */
+ return i ? 2 : 1;
+ }
+}
+
+struct tls_keys_ref *tlskeys_ref_lookup(const char *filename)
+{
+ struct tls_keys_ref *ref;
+
+ list_for_each_entry(ref, &tlskeys_reference, list)
+ if (ref->filename && strcmp(filename, ref->filename) == 0)
+ return ref;
+ return NULL;
+}
+
+struct tls_keys_ref *tlskeys_ref_lookupid(int unique_id)
+{
+ struct tls_keys_ref *ref;
+
+ list_for_each_entry(ref, &tlskeys_reference, list)
+ if (ref->unique_id == unique_id)
+ return ref;
+ return NULL;
+}
+
+int ssl_sock_update_tlskey(char *filename, struct chunk *tlskey, char **err) {
+ struct tls_keys_ref *ref = tlskeys_ref_lookup(filename);
+
+ if (!ref) {
+ memprintf(err, "Unable to locate the referenced filename: %s", filename);
+ return 1;
+ }
+
+ memcpy((char *) (ref->tlskeys + ((ref->tls_ticket_enc_index + 2) % TLS_TICKETS_NO)), tlskey->str, tlskey->len);
+ ref->tls_ticket_enc_index = (ref->tls_ticket_enc_index + 1) % TLS_TICKETS_NO;
+
+ return 0;
+}
+
+/* This function finalizes the configuration parsing. It sets all the
+ * automatic ids.
+ */
+void tlskeys_finalize_config(void)
+{
+ int i = 0;
+ struct tls_keys_ref *ref, *ref2, *ref3;
+ struct list tkr = LIST_HEAD_INIT(tkr);
+
+ list_for_each_entry(ref, &tlskeys_reference, list) {
+ if (ref->unique_id == -1) {
+ /* Look for the first free id. */
+ while (1) {
+ list_for_each_entry(ref2, &tlskeys_reference, list) {
+ if (ref2->unique_id == i) {
+ i++;
+ break;
+ }
+ }
+ if (&ref2->list == &tlskeys_reference)
+ break;
+ }
+
+ /* Use the unique id and increment it for the next entry. */
+ ref->unique_id = i;
+ i++;
+ }
+ }
+
+ /* Sort the reference list by id. */
+ list_for_each_entry_safe(ref, ref2, &tlskeys_reference, list) {
+ LIST_DEL(&ref->list);
+ list_for_each_entry(ref3, &tkr, list) {
+ if (ref->unique_id < ref3->unique_id) {
+ LIST_ADDQ(&ref3->list, &ref->list);
+ break;
+ }
+ }
+ if (&ref3->list == &tkr)
+ LIST_ADDQ(&tkr, &ref->list);
+ }
+
+ /* swap root */
+ LIST_ADD(&tkr, &tlskeys_reference);
+ LIST_DEL(&tkr);
+}
+
+#endif /* SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB */
+
+/*
+ * Callback used to set OCSP status extension content in server hello.
+ */
+int ssl_sock_ocsp_stapling_cbk(SSL *ssl, void *arg)
+{
+ struct certificate_ocsp *ocsp = (struct certificate_ocsp *)arg;
+ char* ssl_buf;
+
+ if (!ocsp ||
+ !ocsp->response.str ||
+ !ocsp->response.len ||
+ (ocsp->expire < now.tv_sec))
+ return SSL_TLSEXT_ERR_NOACK;
+
+ ssl_buf = OPENSSL_malloc(ocsp->response.len);
+ if (!ssl_buf)
+ return SSL_TLSEXT_ERR_NOACK;
+
+ memcpy(ssl_buf, ocsp->response.str, ocsp->response.len);
+ SSL_set_tlsext_status_ocsp_resp(ssl, ssl_buf, ocsp->response.len);
+
+ return SSL_TLSEXT_ERR_OK;
+}
+
+/*
+ * This function enables the handling of the OCSP status extension on 'ctx'
+ * if a file named 'cert_path' suffixed with ".ocsp" is present.
+ * To enable the OCSP status extension, the issuer's certificate is mandatory.
+ * It should be present in the certificate's extra chain built from the file
+ * 'cert_path'. If not found there, the issuer certificate is loaded from a
+ * file named 'cert_path' suffixed with '.issuer'.
+ *
+ * In addition, the ".ocsp" file content is loaded as the DER form of an OCSP
+ * response. If the file is empty or its content is not a valid OCSP
+ * response, the OCSP status extension is enabled but the OCSP response is
+ * ignored (a warning is displayed).
+ *
+ * Returns 1 if no ".ocsp" file was found, 0 if the OCSP status extension was
+ * successfully enabled, or -1 in any other error case.
+ */
+static int ssl_sock_load_ocsp(SSL_CTX *ctx, const char *cert_path)
+{
+
+ BIO *in = NULL;
+ X509 *x, *xi = NULL, *issuer = NULL;
+ STACK_OF(X509) *chain = NULL;
+ OCSP_CERTID *cid = NULL;
+ SSL *ssl;
+ char ocsp_path[MAXPATHLEN+1];
+ int i, ret = -1;
+ struct stat st;
+ struct certificate_ocsp *ocsp = NULL, *iocsp;
+ char *warn = NULL;
+ unsigned char *p;
+
+ snprintf(ocsp_path, MAXPATHLEN+1, "%s.ocsp", cert_path);
+
+ if (stat(ocsp_path, &st))
+ return 1;
+
+ ssl = SSL_new(ctx);
+ if (!ssl)
+ goto out;
+
+ x = SSL_get_certificate(ssl);
+ if (!x)
+ goto out;
+
+ /* Try to look up the issuer in the certificate's extra chain */
+#ifdef SSL_CTRL_GET_EXTRA_CHAIN_CERTS
+ SSL_CTX_get_extra_chain_certs(ctx, &chain);
+#else
+ chain = ctx->extra_certs;
+#endif
+ for (i = 0; i < sk_X509_num(chain); i++) {
+ issuer = sk_X509_value(chain, i);
+ if (X509_check_issued(issuer, x) == X509_V_OK)
+ break;
+ else
+ issuer = NULL;
+ }
+
+ /* If not found, try to load the issuer from a suffixed file */
+ if (!issuer) {
+ char issuer_path[MAXPATHLEN+1];
+
+ in = BIO_new(BIO_s_file());
+ if (!in)
+ goto out;
+
+ snprintf(issuer_path, MAXPATHLEN+1, "%s.issuer", cert_path);
+ if (BIO_read_filename(in, issuer_path) <= 0)
+ goto out;
+
+ xi = PEM_read_bio_X509_AUX(in, NULL, ctx->default_passwd_callback, ctx->default_passwd_callback_userdata);
+ if (!xi)
+ goto out;
+
+ if (X509_check_issued(xi, x) != X509_V_OK)
+ goto out;
+
+ issuer = xi;
+ }
+
+ cid = OCSP_cert_to_id(0, x, issuer);
+ if (!cid)
+ goto out;
+
+ i = i2d_OCSP_CERTID(cid, NULL);
+ if (!i || (i > OCSP_MAX_CERTID_ASN1_LENGTH))
+ goto out;
+
+ ocsp = calloc(1, sizeof(struct certificate_ocsp));
+ if (!ocsp)
+ goto out;
+
+ p = ocsp->key_data;
+ i2d_OCSP_CERTID(cid, &p);
+
+ iocsp = (struct certificate_ocsp *)ebmb_insert(&cert_ocsp_tree, &ocsp->key, OCSP_MAX_CERTID_ASN1_LENGTH);
+ if (iocsp == ocsp)
+ ocsp = NULL;
+
+ SSL_CTX_set_tlsext_status_cb(ctx, ssl_sock_ocsp_stapling_cbk);
+ SSL_CTX_set_tlsext_status_arg(ctx, iocsp);
+
+ ret = 0;
+
+ warn = NULL;
+ if (ssl_sock_load_ocsp_response_from_file(ocsp_path, iocsp, cid, &warn)) {
+ memprintf(&warn, "Loading '%s': %s. Content will be ignored", ocsp_path, warn ? warn : "failure");
+ Warning("%s.\n", warn);
+ }
+
+out:
+ if (ssl)
+ SSL_free(ssl);
+
+ if (in)
+ BIO_free(in);
+
+ if (xi)
+ X509_free(xi);
+
+ if (cid)
+ OCSP_CERTID_free(cid);
+
+ if (ocsp)
+ free(ocsp);
+
+ if (warn)
+ free(warn);
+
+
+ return ret;
+}
+
+#endif
+
+#if (OPENSSL_VERSION_NUMBER >= 0x1000200fL && !defined OPENSSL_NO_TLSEXT && !defined OPENSSL_IS_BORINGSSL && !defined LIBRESSL_VERSION_NUMBER)
+
+#define CT_EXTENSION_TYPE 18
+
+static int sctl_ex_index = -1;
+
+/*
+ * Try to parse a Signed Certificate Timestamp List structure. This function
+ * only performs basic checks that the data looks like an SCTL. No signature
+ * validation is performed.
+ */
+static int ssl_sock_parse_sctl(struct chunk *sctl)
+{
+ int ret = 1;
+ int len, pos, sct_len;
+ unsigned char *data;
+
+ if (sctl->len < 2)
+ goto out;
+
+ data = (unsigned char *)sctl->str;
+ len = (data[0] << 8) | data[1];
+
+ if (len + 2 != sctl->len)
+ goto out;
+
+ data = data + 2;
+ pos = 0;
+ while (pos < len) {
+ if (len - pos < 2)
+ goto out;
+
+ sct_len = (data[pos] << 8) | data[pos + 1];
+ if (pos + sct_len + 2 > len)
+ goto out;
+
+ pos += sct_len + 2;
+ }
+
+ ret = 0;
+
+out:
+ return ret;
+}
+
+static int ssl_sock_load_sctl_from_file(const char *sctl_path, struct chunk **sctl)
+{
+ int fd = -1;
+ int r = 0;
+ int ret = 1;
+
+ *sctl = NULL;
+
+ fd = open(sctl_path, O_RDONLY);
+ if (fd == -1)
+ goto end;
+
+ trash.len = 0;
+ while (trash.len < trash.size) {
+ r = read(fd, trash.str + trash.len, trash.size - trash.len);
+ if (r < 0) {
+ if (errno == EINTR)
+ continue;
+
+ goto end;
+ }
+ else if (r == 0) {
+ break;
+ }
+ trash.len += r;
+ }
+
+ ret = ssl_sock_parse_sctl(&trash);
+ if (ret)
+ goto end;
+
+ *sctl = calloc(1, sizeof(struct chunk));
+ if (!chunk_dup(*sctl, &trash)) {
+ free(*sctl);
+ *sctl = NULL;
+ goto end;
+ }
+
+end:
+ if (fd != -1)
+ close(fd);
+
+ return ret;
+}
+
+int ssl_sock_sctl_add_cbk(SSL *ssl, unsigned ext_type, const unsigned char **out, size_t *outlen, int *al, void *add_arg)
+{
+ struct chunk *sctl = (struct chunk *)add_arg;
+
+ *out = (unsigned char *)sctl->str;
+ *outlen = sctl->len;
+
+ return 1;
+}
+
+int ssl_sock_sctl_parse_cbk(SSL *s, unsigned int ext_type, const unsigned char *in, size_t inlen, int *al, void *parse_arg)
+{
+ return 1;
+}
+
+static int ssl_sock_load_sctl(SSL_CTX *ctx, const char *cert_path)
+{
+ char sctl_path[MAXPATHLEN+1];
+ int ret = -1;
+ struct stat st;
+ struct chunk *sctl = NULL;
+
+ snprintf(sctl_path, MAXPATHLEN+1, "%s.sctl", cert_path);
+
+ if (stat(sctl_path, &st))
+ return 1;
+
+ if (ssl_sock_load_sctl_from_file(sctl_path, &sctl))
+ goto out;
+
+ if (!SSL_CTX_add_server_custom_ext(ctx, CT_EXTENSION_TYPE, ssl_sock_sctl_add_cbk, NULL, sctl, ssl_sock_sctl_parse_cbk, NULL)) {
+ free(sctl);
+ goto out;
+ }
+
+ SSL_CTX_set_ex_data(ctx, sctl_ex_index, sctl);
+
+ ret = 0;
+
+out:
+ return ret;
+}
+
+#endif
+
+void ssl_sock_infocbk(const SSL *ssl, int where, int ret)
+{
+ struct connection *conn = (struct connection *)SSL_get_app_data(ssl);
+ BIO *write_bio;
+ (void)ret; /* silence gcc's unused-parameter warning */
+
+ if (where & SSL_CB_HANDSHAKE_START) {
+ /* Disable renegotiation (CVE-2009-3555) */
+ if (conn->flags & CO_FL_CONNECTED) {
+ conn->flags |= CO_FL_ERROR;
+ conn->err_code = CO_ER_SSL_RENEG;
+ }
+ }
+
+ if ((where & SSL_CB_ACCEPT_LOOP) == SSL_CB_ACCEPT_LOOP) {
+ if (!(conn->xprt_st & SSL_SOCK_ST_FL_16K_WBFSIZE)) {
+ /* Long certificate chain optimization:
+ if the write and read BIOs are different, we
+ consider that buffering was activated, so we
+ raise the output buffer size from 4kB to
+ 16kB */
+ write_bio = SSL_get_wbio(ssl);
+ if (write_bio != SSL_get_rbio(ssl)) {
+ BIO_set_write_buffer_size(write_bio, 16384);
+ conn->xprt_st |= SSL_SOCK_ST_FL_16K_WBFSIZE;
+ }
+ }
+ }
+}
+
+/* Callback called for each certificate of the chain during verification.
+ ok is set to 1 if pre-verification detected no error on the current certificate.
+ Returns 0 to break the handshake, 1 otherwise. */
+int ssl_sock_bind_verifycbk(int ok, X509_STORE_CTX *x_store)
+{
+ SSL *ssl;
+ struct connection *conn;
+ int err, depth;
+
+ ssl = X509_STORE_CTX_get_ex_data(x_store, SSL_get_ex_data_X509_STORE_CTX_idx());
+ conn = (struct connection *)SSL_get_app_data(ssl);
+
+ conn->xprt_st |= SSL_SOCK_ST_FL_VERIFY_DONE;
+
+ if (ok) /* no errors */
+ return ok;
+
+ depth = X509_STORE_CTX_get_error_depth(x_store);
+ err = X509_STORE_CTX_get_error(x_store);
+
+ /* check if CA error needs to be ignored */
+ if (depth > 0) {
+ if (!SSL_SOCK_ST_TO_CA_ERROR(conn->xprt_st)) {
+ conn->xprt_st |= SSL_SOCK_CA_ERROR_TO_ST(err);
+ conn->xprt_st |= SSL_SOCK_CAEDEPTH_TO_ST(depth);
+ }
+
+ if (objt_listener(conn->target)->bind_conf->ca_ignerr & (1ULL << err)) {
+ ERR_clear_error();
+ return 1;
+ }
+
+ conn->err_code = CO_ER_SSL_CA_FAIL;
+ return 0;
+ }
+
+ if (!SSL_SOCK_ST_TO_CRTERROR(conn->xprt_st))
+ conn->xprt_st |= SSL_SOCK_CRTERROR_TO_ST(err);
+
+ /* check if certificate error needs to be ignored */
+ if (objt_listener(conn->target)->bind_conf->crt_ignerr & (1ULL << err)) {
+ ERR_clear_error();
+ return 1;
+ }
+
+ conn->err_code = CO_ER_SSL_CRT_FAIL;
+ return 0;
+}
+
+/* Callback called for SSL protocol analysis */
+void ssl_sock_msgcbk(int write_p, int version, int content_type, const void *buf, size_t len, SSL *ssl, void *arg)
+{
+#ifdef TLS1_RT_HEARTBEAT
+ /* test heartbeat received (write_p is set to 0
+ for a received record) */
+ if ((content_type == TLS1_RT_HEARTBEAT) && (write_p == 0)) {
+ struct connection *conn = (struct connection *)SSL_get_app_data(ssl);
+ const unsigned char *p = buf;
+ unsigned int payload;
+
+ conn->xprt_st |= SSL_SOCK_RECV_HEARTBEAT;
+
+ /* Check if this is a CVE-2014-0160 exploitation attempt. */
+ if (*p != TLS1_HB_REQUEST)
+ return;
+
+ if (len < 1 + 2 + 16) /* 1 type + 2 size + 0 payload + 16 padding */
+ goto kill_it;
+
+ payload = (p[1] * 256) + p[2];
+ if (3 + payload + 16 <= len)
+ return; /* OK no problem */
+ kill_it:
+ /* We have a clear heartbleed attack (CVE-2014-0160), the
+ * advertised payload is larger than the advertised packet
+ * length, so we have garbage in the buffer between the
+ * payload and the end of the buffer (p+len). We can't know
+ * if the SSL stack is patched, and we don't know if we can
+ * safely wipe out the area between p+3+len and payload.
+ * So instead, we prevent the response from being sent by
+ * setting the max_send_fragment to 0 and we report an SSL
+ * error, which will kill this connection. It will be reported
+ * above as SSL_ERROR_SSL while another handshake failure with
+ * a heartbeat message will be reported as SSL_ERROR_SYSCALL.
+ */
+ ssl->max_send_fragment = 0;
+ SSLerr(SSL_F_TLS1_HEARTBEAT, SSL_R_SSL_HANDSHAKE_FAILURE);
+ return;
+ }
+#endif
+}
+
+#ifdef OPENSSL_NPN_NEGOTIATED
+/* This callback is used so that the server advertises the list of
+ * negotiable protocols for NPN.
+ */
+static int ssl_sock_advertise_npn_protos(SSL *s, const unsigned char **data,
+ unsigned int *len, void *arg)
+{
+ struct bind_conf *conf = arg;
+
+ *data = (const unsigned char *)conf->npn_str;
+ *len = conf->npn_len;
+ return SSL_TLSEXT_ERR_OK;
+}
+#endif
+
+#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation
+/* This callback is used so that the server advertises the list of
+ * negotiable protocols for ALPN.
+ */
+static int ssl_sock_advertise_alpn_protos(SSL *s, const unsigned char **out,
+ unsigned char *outlen,
+ const unsigned char *server,
+ unsigned int server_len, void *arg)
+{
+ struct bind_conf *conf = arg;
+
+ if (SSL_select_next_proto((unsigned char**) out, outlen, (const unsigned char *)conf->alpn_str,
+ conf->alpn_len, server, server_len) != OPENSSL_NPN_NEGOTIATED) {
+ return SSL_TLSEXT_ERR_NOACK;
+ }
+ return SSL_TLSEXT_ERR_OK;
+}
+#endif
+
+#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
+static DH *ssl_get_tmp_dh(SSL *ssl, int export, int keylen);
+
+/* Create an X509 certificate with the specified servername and serial. This
+ * function returns an SSL_CTX object or NULL if an error occurs. */
+static SSL_CTX *
+ssl_sock_do_create_cert(const char *servername, unsigned int serial,
+ struct bind_conf *bind_conf, SSL *ssl)
+{
+ X509 *cacert = bind_conf->ca_sign_cert;
+ EVP_PKEY *capkey = bind_conf->ca_sign_pkey;
+ SSL_CTX *ssl_ctx = NULL;
+ X509 *newcrt = NULL;
+ EVP_PKEY *pkey = NULL;
+ X509_NAME *name;
+ const EVP_MD *digest;
+ X509V3_CTX ctx;
+ unsigned int i;
+
+ /* Get the private key of the default certificate and use it */
+ if (!(pkey = SSL_get_privatekey(ssl)))
+ goto mkcert_error;
+
+ /* Create the certificate */
+ if (!(newcrt = X509_new()))
+ goto mkcert_error;
+
+ /* Set version number for the certificate (X509v3) and the serial
+ * number */
+ if (X509_set_version(newcrt, 2L) != 1)
+ goto mkcert_error;
+ ASN1_INTEGER_set(X509_get_serialNumber(newcrt), serial);
+
+ /* Set duration for the certificate */
+ if (!X509_gmtime_adj(X509_get_notBefore(newcrt), (long)-60*60*24) ||
+ !X509_gmtime_adj(X509_get_notAfter(newcrt),(long)60*60*24*365))
+ goto mkcert_error;
+
+ /* set public key in the certificate */
+ if (X509_set_pubkey(newcrt, pkey) != 1)
+ goto mkcert_error;
+
+ /* Set issuer name from the CA */
+ if (!(name = X509_get_subject_name(cacert)))
+ goto mkcert_error;
+ if (X509_set_issuer_name(newcrt, name) != 1)
+ goto mkcert_error;
+
+ /* Set the subject name using the issuer's name, replacing only the CN */
+ name = X509_NAME_dup(name);
+ if (X509_NAME_add_entry_by_txt(name, "CN", MBSTRING_ASC,
+ (const unsigned char *)servername,
+ -1, -1, 0) != 1) {
+ X509_NAME_free(name);
+ goto mkcert_error;
+ }
+ if (X509_set_subject_name(newcrt, name) != 1) {
+ X509_NAME_free(name);
+ goto mkcert_error;
+ }
+ X509_NAME_free(name);
+
+ /* Add x509v3 extensions as specified */
+ X509V3_set_ctx(&ctx, cacert, newcrt, NULL, NULL, 0);
+ for (i = 0; i < X509V3_EXT_SIZE; i++) {
+ X509_EXTENSION *ext;
+
+ if (!(ext = X509V3_EXT_conf(NULL, &ctx, x509v3_ext_names[i], x509v3_ext_values[i])))
+ goto mkcert_error;
+ if (!X509_add_ext(newcrt, ext, -1)) {
+ X509_EXTENSION_free(ext);
+ goto mkcert_error;
+ }
+ X509_EXTENSION_free(ext);
+ }
+
+ /* Sign the certificate with the CA private key */
+ if (EVP_PKEY_type(capkey->type) == EVP_PKEY_DSA)
+ digest = EVP_dss1();
+ else if (EVP_PKEY_type(capkey->type) == EVP_PKEY_RSA)
+ digest = EVP_sha256();
+ else if (EVP_PKEY_type(capkey->type) == EVP_PKEY_EC)
+ digest = EVP_sha256();
+ else {
+#if (OPENSSL_VERSION_NUMBER >= 0x1000000fL)
+ int nid;
+
+ if (EVP_PKEY_get_default_digest_nid(capkey, &nid) <= 0)
+ goto mkcert_error;
+ if (!(digest = EVP_get_digestbynid(nid)))
+ goto mkcert_error;
+#else
+ goto mkcert_error;
+#endif
+ }
+
+ if (!(X509_sign(newcrt, capkey, digest)))
+ goto mkcert_error;
+
+ /* Create and set the new SSL_CTX */
+ if (!(ssl_ctx = SSL_CTX_new(SSLv23_server_method())))
+ goto mkcert_error;
+ if (!SSL_CTX_use_PrivateKey(ssl_ctx, pkey))
+ goto mkcert_error;
+ if (!SSL_CTX_use_certificate(ssl_ctx, newcrt))
+ goto mkcert_error;
+ if (!SSL_CTX_check_private_key(ssl_ctx))
+ goto mkcert_error;
+
+ if (newcrt) X509_free(newcrt);
+
+ SSL_CTX_set_tmp_dh_callback(ssl_ctx, ssl_get_tmp_dh);
+#if defined(SSL_CTX_set_tmp_ecdh) && !defined(OPENSSL_NO_ECDH)
+ {
+ const char *ecdhe = (bind_conf->ecdhe ? bind_conf->ecdhe : ECDHE_DEFAULT_CURVE);
+ EC_KEY *ecc;
+ int nid;
+
+ if ((nid = OBJ_sn2nid(ecdhe)) == NID_undef)
+ goto end;
+ if (!(ecc = EC_KEY_new_by_curve_name(nid)))
+ goto end;
+ SSL_CTX_set_tmp_ecdh(ssl_ctx, ecc);
+ EC_KEY_free(ecc);
+ }
+#endif
+ end:
+ return ssl_ctx;
+
+ mkcert_error:
+ if (ssl_ctx) SSL_CTX_free(ssl_ctx);
+ if (newcrt) X509_free(newcrt);
+ return NULL;
+}
+
+SSL_CTX *
+ssl_sock_create_cert(struct connection *conn, const char *servername, unsigned int serial)
+{
+ struct bind_conf *bind_conf = objt_listener(conn->target)->bind_conf;
+ return ssl_sock_do_create_cert(servername, serial, bind_conf, conn->xprt_ctx);
+}
+
+/* Do a lookup for a certificate in the LRU cache used to store generated
+ * certificates. */
+SSL_CTX *
+ssl_sock_get_generated_cert(unsigned int serial, struct bind_conf *bind_conf)
+{
+ struct lru64 *lru = NULL;
+
+ if (ssl_ctx_lru_tree) {
+ lru = lru64_lookup(serial, ssl_ctx_lru_tree, bind_conf->ca_sign_cert, 0);
+ if (lru && lru->domain)
+ return (SSL_CTX *)lru->data;
+ }
+ return NULL;
+}
+
+/* Set a certificate in the LRU cache used to store generated
+ * certificates. Returns 0 on success, -1 otherwise. */
+int
+ssl_sock_set_generated_cert(SSL_CTX *ssl_ctx, unsigned int serial, struct bind_conf *bind_conf)
+{
+ struct lru64 *lru = NULL;
+
+ if (ssl_ctx_lru_tree) {
+ lru = lru64_get(serial, ssl_ctx_lru_tree, bind_conf->ca_sign_cert, 0);
+ if (!lru)
+ return -1;
+ if (lru->domain && lru->data)
+ lru->free((SSL_CTX *)lru->data);
+ lru64_commit(lru, ssl_ctx, bind_conf->ca_sign_cert, 0, (void (*)(void *))SSL_CTX_free);
+ return 0;
+ }
+ return -1;
+}
+
+/* Compute the serial that will be used to create/set/get a certificate. */
+unsigned int
+ssl_sock_generated_cert_serial(const void *data, size_t len)
+{
+ return XXH32(data, len, ssl_ctx_lru_seed);
+}
+
+/* Generate a cert and immediately assign it to the SSL session so that the cert's
+ * refcount is maintained regardless of the cert's presence in the LRU cache.
+ */
+static SSL_CTX *
+ssl_sock_generate_certificate(const char *servername, struct bind_conf *bind_conf, SSL *ssl)
+{
+ X509 *cacert = bind_conf->ca_sign_cert;
+ SSL_CTX *ssl_ctx = NULL;
+ struct lru64 *lru = NULL;
+ unsigned int serial;
+
+ serial = ssl_sock_generated_cert_serial(servername, strlen(servername));
+ if (ssl_ctx_lru_tree) {
+ lru = lru64_get(serial, ssl_ctx_lru_tree, cacert, 0);
+ if (lru && lru->domain)
+ ssl_ctx = (SSL_CTX *)lru->data;
+ if (!ssl_ctx && lru) {
+ ssl_ctx = ssl_sock_do_create_cert(servername, serial, bind_conf, ssl);
+ lru64_commit(lru, ssl_ctx, cacert, 0, (void (*)(void *))SSL_CTX_free);
+ }
+ SSL_set_SSL_CTX(ssl, ssl_ctx);
+ }
+ else {
+ ssl_ctx = ssl_sock_do_create_cert(servername, serial, bind_conf, ssl);
+ SSL_set_SSL_CTX(ssl, ssl_ctx);
+ /* No LRU cache, this CTX will be released as soon as the session dies */
+ SSL_CTX_free(ssl_ctx);
+ }
+ return ssl_ctx;
+}
+
+/* Sets the SSL ctx of <ssl> to match the advertised server name. Returns a
+ * warning alert when no match is found, in which case the default (first)
+ * cert keeps being used.
+ */
+static int ssl_sock_switchctx_cbk(SSL *ssl, int *al, struct bind_conf *s)
+{
+ const char *servername;
+ const char *wildp = NULL;
+ struct ebmb_node *node, *n;
+ int i;
+ (void)al; /* silence gcc's unused-parameter warning */
+
+ servername = SSL_get_servername(ssl, TLSEXT_NAMETYPE_host_name);
+ if (!servername) {
+ if (s->generate_certs) {
+ struct connection *conn = (struct connection *)SSL_get_app_data(ssl);
+ unsigned int serial;
+ SSL_CTX *ctx;
+
+ conn_get_to_addr(conn);
+ if (conn->flags & CO_FL_ADDR_TO_SET) {
+ serial = ssl_sock_generated_cert_serial(&conn->addr.to, get_addr_len(&conn->addr.to));
+ ctx = ssl_sock_get_generated_cert(serial, s);
+ if (ctx) {
+ /* switch ctx */
+ SSL_set_SSL_CTX(ssl, ctx);
+ return SSL_TLSEXT_ERR_OK;
+ }
+ }
+ }
+
+ return (s->strict_sni ?
+ SSL_TLSEXT_ERR_ALERT_FATAL :
+ SSL_TLSEXT_ERR_NOACK);
+ }
+
+ for (i = 0; i < trash.size; i++) {
+ if (!servername[i])
+ break;
+ trash.str[i] = tolower(servername[i]);
+ if (!wildp && (trash.str[i] == '.'))
+ wildp = &trash.str[i];
+ }
+ trash.str[i] = 0;
+
+ /* lookup in fully qualified names */
+ node = ebst_lookup(&s->sni_ctx, trash.str);
+
+ /* lookup a non-negated filter */
+ for (n = node; n; n = ebmb_next_dup(n)) {
+ if (!container_of(n, struct sni_ctx, name)->neg) {
+ node = n;
+ break;
+ }
+ }
+ if (!node && wildp) {
+ /* lookup in wildcard names */
+ node = ebst_lookup(&s->sni_w_ctx, wildp);
+ }
+ if (!node || container_of(node, struct sni_ctx, name)->neg) {
+ SSL_CTX *ctx;
+ if (s->generate_certs &&
+ (ctx = ssl_sock_generate_certificate(servername, s, ssl))) {
+ /* switch ctx */
+ return SSL_TLSEXT_ERR_OK;
+ }
+ return (s->strict_sni ?
+ SSL_TLSEXT_ERR_ALERT_FATAL :
+ SSL_TLSEXT_ERR_ALERT_WARNING);
+ }
+
+ /* switch ctx */
+ SSL_set_SSL_CTX(ssl, container_of(node, struct sni_ctx, name)->ctx);
+ return SSL_TLSEXT_ERR_OK;
+}
+#endif /* SSL_CTRL_SET_TLSEXT_HOSTNAME */
+
+#ifndef OPENSSL_NO_DH
+
+static DH * ssl_get_dh_1024(void)
+{
+ static unsigned char dh1024_p[]={
+ 0xFA,0xF9,0x2A,0x22,0x2A,0xA7,0x7F,0xE1,0x67,0x4E,0x53,0xF7,
+ 0x56,0x13,0xC3,0xB1,0xE3,0x29,0x6B,0x66,0x31,0x6A,0x7F,0xB3,
+ 0xC2,0x68,0x6B,0xCB,0x1D,0x57,0x39,0x1D,0x1F,0xFF,0x1C,0xC9,
+ 0xA6,0xA4,0x98,0x82,0x31,0x5D,0x25,0xFF,0x8A,0xE0,0x73,0x96,
+ 0x81,0xC8,0x83,0x79,0xC1,0x5A,0x04,0xF8,0x37,0x0D,0xA8,0x3D,
+ 0xAE,0x74,0xBC,0xDB,0xB6,0xA4,0x75,0xD9,0x71,0x8A,0xA0,0x17,
+ 0x9E,0x2D,0xC8,0xA8,0xDF,0x2C,0x5F,0x82,0x95,0xF8,0x92,0x9B,
+ 0xA7,0x33,0x5F,0x89,0x71,0xC8,0x2D,0x6B,0x18,0x86,0xC4,0x94,
+ 0x22,0xA5,0x52,0x8D,0xF6,0xF6,0xD2,0x37,0x92,0x0F,0xA5,0xCC,
+ 0xDB,0x7B,0x1D,0x3D,0xA1,0x31,0xB7,0x80,0x8F,0x0B,0x67,0x5E,
+ 0x36,0xA5,0x60,0x0C,0xF1,0x95,0x33,0x8B,
+ };
+ static unsigned char dh1024_g[]={
+ 0x02,
+ };
+
+ DH *dh = DH_new();
+ if (dh) {
+ dh->p = BN_bin2bn(dh1024_p, sizeof dh1024_p, NULL);
+ dh->g = BN_bin2bn(dh1024_g, sizeof dh1024_g, NULL);
+
+ if (!dh->p || !dh->g) {
+ DH_free(dh);
+ dh = NULL;
+ }
+ }
+ return dh;
+}
+
+static DH *ssl_get_dh_2048(void)
+{
+ static unsigned char dh2048_p[]={
+ 0xEC,0x86,0xF8,0x70,0xA0,0x33,0x16,0xEC,0x05,0x1A,0x73,0x59,
+ 0xCD,0x1F,0x8B,0xF8,0x29,0xE4,0xD2,0xCF,0x52,0xDD,0xC2,0x24,
+ 0x8D,0xB5,0x38,0x9A,0xFB,0x5C,0xA4,0xE4,0xB2,0xDA,0xCE,0x66,
+ 0x50,0x74,0xA6,0x85,0x4D,0x4B,0x1D,0x30,0xB8,0x2B,0xF3,0x10,
+ 0xE9,0xA7,0x2D,0x05,0x71,0xE7,0x81,0xDF,0x8B,0x59,0x52,0x3B,
+ 0x5F,0x43,0x0B,0x68,0xF1,0xDB,0x07,0xBE,0x08,0x6B,0x1B,0x23,
+ 0xEE,0x4D,0xCC,0x9E,0x0E,0x43,0xA0,0x1E,0xDF,0x43,0x8C,0xEC,
+ 0xBE,0xBE,0x90,0xB4,0x51,0x54,0xB9,0x2F,0x7B,0x64,0x76,0x4E,
+ 0x5D,0xD4,0x2E,0xAE,0xC2,0x9E,0xAE,0x51,0x43,0x59,0xC7,0x77,
+ 0x9C,0x50,0x3C,0x0E,0xED,0x73,0x04,0x5F,0xF1,0x4C,0x76,0x2A,
+ 0xD8,0xF8,0xCF,0xFC,0x34,0x40,0xD1,0xB4,0x42,0x61,0x84,0x66,
+ 0x42,0x39,0x04,0xF8,0x68,0xB2,0x62,0xD7,0x55,0xED,0x1B,0x74,
+ 0x75,0x91,0xE0,0xC5,0x69,0xC1,0x31,0x5C,0xDB,0x7B,0x44,0x2E,
+ 0xCE,0x84,0x58,0x0D,0x1E,0x66,0x0C,0xC8,0x44,0x9E,0xFD,0x40,
+ 0x08,0x67,0x5D,0xFB,0xA7,0x76,0x8F,0x00,0x11,0x87,0xE9,0x93,
+ 0xF9,0x7D,0xC4,0xBC,0x74,0x55,0x20,0xD4,0x4A,0x41,0x2F,0x43,
+ 0x42,0x1A,0xC1,0xF2,0x97,0x17,0x49,0x27,0x37,0x6B,0x2F,0x88,
+ 0x7E,0x1C,0xA0,0xA1,0x89,0x92,0x27,0xD9,0x56,0x5A,0x71,0xC1,
+ 0x56,0x37,0x7E,0x3A,0x9D,0x05,0xE7,0xEE,0x5D,0x8F,0x82,0x17,
+ 0xBC,0xE9,0xC2,0x93,0x30,0x82,0xF9,0xF4,0xC9,0xAE,0x49,0xDB,
+ 0xD0,0x54,0xB4,0xD9,0x75,0x4D,0xFA,0x06,0xB8,0xD6,0x38,0x41,
+ 0xB7,0x1F,0x77,0xF3,
+ };
+ static unsigned char dh2048_g[]={
+ 0x02,
+ };
+
+ DH *dh = DH_new();
+ if (dh) {
+ dh->p = BN_bin2bn(dh2048_p, sizeof dh2048_p, NULL);
+ dh->g = BN_bin2bn(dh2048_g, sizeof dh2048_g, NULL);
+
+ if (!dh->p || !dh->g) {
+ DH_free(dh);
+ dh = NULL;
+ }
+ }
+ return dh;
+}
+
+static DH *ssl_get_dh_4096(void)
+{
+ static unsigned char dh4096_p[]={
+ 0xDE,0x16,0x94,0xCD,0x99,0x58,0x07,0xF1,0xF7,0x32,0x96,0x11,
+ 0x04,0x82,0xD4,0x84,0x72,0x80,0x99,0x06,0xCA,0xF0,0xA3,0x68,
+ 0x07,0xCE,0x64,0x50,0xE7,0x74,0x45,0x20,0x80,0x5E,0x4D,0xAD,
+ 0xA5,0xB6,0xED,0xFA,0x80,0x6C,0x3B,0x35,0xC4,0x9A,0x14,0x6B,
+ 0x32,0xBB,0xFD,0x1F,0x17,0x8E,0xB7,0x1F,0xD6,0xFA,0x3F,0x7B,
+ 0xEE,0x16,0xA5,0x62,0x33,0x0D,0xED,0xBC,0x4E,0x58,0xE5,0x47,
+ 0x4D,0xE9,0xAB,0x8E,0x38,0xD3,0x6E,0x90,0x57,0xE3,0x22,0x15,
+ 0x33,0xBD,0xF6,0x43,0x45,0xB5,0x10,0x0A,0xBE,0x2C,0xB4,0x35,
+ 0xB8,0x53,0x8D,0xAD,0xFB,0xA7,0x1F,0x85,0x58,0x41,0x7A,0x79,
+ 0x20,0x68,0xB3,0xE1,0x3D,0x08,0x76,0xBF,0x86,0x0D,0x49,0xE3,
+ 0x82,0x71,0x8C,0xB4,0x8D,0x81,0x84,0xD4,0xE7,0xBE,0x91,0xDC,
+ 0x26,0x39,0x48,0x0F,0x35,0xC4,0xCA,0x65,0xE3,0x40,0x93,0x52,
+ 0x76,0x58,0x7D,0xDD,0x51,0x75,0xDC,0x69,0x61,0xBF,0x47,0x2C,
+ 0x16,0x68,0x2D,0xC9,0x29,0xD3,0xE6,0xC0,0x99,0x48,0xA0,0x9A,
+ 0xC8,0x78,0xC0,0x6D,0x81,0x67,0x12,0x61,0x3F,0x71,0xBA,0x41,
+ 0x1F,0x6C,0x89,0x44,0x03,0xBA,0x3B,0x39,0x60,0xAA,0x28,0x55,
+ 0x59,0xAE,0xB8,0xFA,0xCB,0x6F,0xA5,0x1A,0xF7,0x2B,0xDD,0x52,
+ 0x8A,0x8B,0xE2,0x71,0xA6,0x5E,0x7E,0xD8,0x2E,0x18,0xE0,0x66,
+ 0xDF,0xDD,0x22,0x21,0x99,0x52,0x73,0xA6,0x33,0x20,0x65,0x0E,
+ 0x53,0xE7,0x6B,0x9B,0xC5,0xA3,0x2F,0x97,0x65,0x76,0xD3,0x47,
+ 0x23,0x77,0x12,0xB6,0x11,0x7B,0x24,0xED,0xF1,0xEF,0xC0,0xE2,
+ 0xA3,0x7E,0x67,0x05,0x3E,0x96,0x4D,0x45,0xC2,0x18,0xD1,0x73,
+ 0x9E,0x07,0xF3,0x81,0x6E,0x52,0x63,0xF6,0x20,0x76,0xB9,0x13,
+ 0xD2,0x65,0x30,0x18,0x16,0x09,0x16,0x9E,0x8F,0xF1,0xD2,0x10,
+ 0x5A,0xD3,0xD4,0xAF,0x16,0x61,0xDA,0x55,0x2E,0x18,0x5E,0x14,
+ 0x08,0x54,0x2E,0x2A,0x25,0xA2,0x1A,0x9B,0x8B,0x32,0xA9,0xFD,
+ 0xC2,0x48,0x96,0xE1,0x80,0xCA,0xE9,0x22,0x17,0xBB,0xCE,0x3E,
+ 0x9E,0xED,0xC7,0xF1,0x1F,0xEC,0x17,0x21,0xDC,0x7B,0x82,0x48,
+ 0x8E,0xBB,0x4B,0x9D,0x5B,0x04,0x04,0xDA,0xDB,0x39,0xDF,0x01,
+ 0x40,0xC3,0xAA,0x26,0x23,0x89,0x75,0xC6,0x0B,0xD0,0xA2,0x60,
+ 0x6A,0xF1,0xCC,0x65,0x18,0x98,0x1B,0x52,0xD2,0x74,0x61,0xCC,
+ 0xBD,0x60,0xAE,0xA3,0xA0,0x66,0x6A,0x16,0x34,0x92,0x3F,0x41,
+ 0x40,0x31,0x29,0xC0,0x2C,0x63,0xB2,0x07,0x8D,0xEB,0x94,0xB8,
+ 0xE8,0x47,0x92,0x52,0x93,0x6A,0x1B,0x7E,0x1A,0x61,0xB3,0x1B,
+ 0xF0,0xD6,0x72,0x9B,0xF1,0xB0,0xAF,0xBF,0x3E,0x65,0xEF,0x23,
+ 0x1D,0x6F,0xFF,0x70,0xCD,0x8A,0x4C,0x8A,0xA0,0x72,0x9D,0xBE,
+ 0xD4,0xBB,0x24,0x47,0x4A,0x68,0xB5,0xF5,0xC6,0xD5,0x7A,0xCD,
+ 0xCA,0x06,0x41,0x07,0xAD,0xC2,0x1E,0xE6,0x54,0xA7,0xAD,0x03,
+ 0xD9,0x12,0xC1,0x9C,0x13,0xB1,0xC9,0x0A,0x43,0x8E,0x1E,0x08,
+ 0xCE,0x50,0x82,0x73,0x5F,0xA7,0x55,0x1D,0xD9,0x59,0xAC,0xB5,
+ 0xEA,0x02,0x7F,0x6C,0x5B,0x74,0x96,0x98,0x67,0x24,0xA3,0x0F,
+ 0x15,0xFC,0xA9,0x7D,0x3E,0x67,0xD1,0x70,0xF8,0x97,0xF3,0x67,
+ 0xC5,0x8C,0x88,0x44,0x08,0x02,0xC7,0x2B,
+ };
+ static unsigned char dh4096_g[]={
+ 0x02,
+ };
+
+ DH *dh = DH_new();
+ if (dh) {
+ dh->p = BN_bin2bn(dh4096_p, sizeof dh4096_p, NULL);
+ dh->g = BN_bin2bn(dh4096_g, sizeof dh4096_g, NULL);
+
+ if (!dh->p || !dh->g) {
+ DH_free(dh);
+ dh = NULL;
+ }
+ }
+ return dh;
+}
+
+/* Returns Diffie-Hellman parameters matching the private key length
+ but not exceeding global.tune.ssl_default_dh_param */
+static DH *ssl_get_tmp_dh(SSL *ssl, int export, int keylen)
+{
+ DH *dh = NULL;
+ EVP_PKEY *pkey = SSL_get_privatekey(ssl);
+ int type = pkey ? EVP_PKEY_type(pkey->type) : EVP_PKEY_NONE;
+
+ /* The keylen supplied by OpenSSL can only be 512 or 1024.
+ See ssl3_send_server_key_exchange() in ssl/s3_srvr.c
+ */
+ if (type == EVP_PKEY_RSA || type == EVP_PKEY_DSA) {
+ keylen = EVP_PKEY_bits(pkey);
+ }
+
+ if (keylen > global.tune.ssl_default_dh_param) {
+ keylen = global.tune.ssl_default_dh_param;
+ }
+
+ if (keylen >= 4096) {
+ dh = local_dh_4096;
+ }
+ else if (keylen >= 2048) {
+ dh = local_dh_2048;
+ }
+ else {
+ dh = local_dh_1024;
+ }
+
+ return dh;
+}
+
+static DH * ssl_sock_get_dh_from_file(const char *filename)
+{
+ DH *dh = NULL;
+ BIO *in = BIO_new(BIO_s_file());
+
+ if (in == NULL)
+ goto end;
+
+ if (BIO_read_filename(in, filename) <= 0)
+ goto end;
+
+ dh = PEM_read_bio_DHparams(in, NULL, NULL, NULL);
+
+end:
+ if (in)
+ BIO_free(in);
+
+ return dh;
+}
+
+int ssl_sock_load_global_dh_param_from_file(const char *filename)
+{
+ global_dh = ssl_sock_get_dh_from_file(filename);
+
+ if (global_dh) {
+ return 0;
+ }
+
+ return -1;
+}
+
+/* Loads Diffie-Hellman parameters from a file. Returns 1 if loaded, -1
+ if an error occurred, and 0 if the parameters were not found. */
+int ssl_sock_load_dh_params(SSL_CTX *ctx, const char *file)
+{
+ int ret = -1;
+ DH *dh = ssl_sock_get_dh_from_file(file);
+
+ if (dh) {
+ ret = 1;
+ SSL_CTX_set_tmp_dh(ctx, dh);
+
+ if (ssl_dh_ptr_index >= 0) {
+ /* store a pointer to the DH params to avoid complaining about
+ ssl-default-dh-param not being set for this SSL_CTX */
+ SSL_CTX_set_ex_data(ctx, ssl_dh_ptr_index, dh);
+ }
+ }
+ else if (global_dh) {
+ SSL_CTX_set_tmp_dh(ctx, global_dh);
+ ret = 0; /* DH params not found */
+ }
+ else {
+ /* Clear openssl global errors stack */
+ ERR_clear_error();
+
+ if (global.tune.ssl_default_dh_param <= 1024) {
+ /* we are limited to DH parameter of 1024 bits anyway */
+ local_dh_1024 = ssl_get_dh_1024();
+ if (local_dh_1024 == NULL)
+ goto end;
+
+ SSL_CTX_set_tmp_dh(ctx, local_dh_1024);
+ }
+ else {
+ SSL_CTX_set_tmp_dh_callback(ctx, ssl_get_tmp_dh);
+ }
+
+ ret = 0; /* DH params not found */
+ }
+
+end:
+ if (dh)
+ DH_free(dh);
+
+ return ret;
+}
+#endif
+
+static int ssl_sock_add_cert_sni(SSL_CTX *ctx, struct bind_conf *s, char *name, int order)
+{
+ struct sni_ctx *sc;
+ int wild = 0, neg = 0;
+
+ if (*name == '!') {
+ neg = 1;
+ name++;
+ }
+ if (*name == '*') {
+ wild = 1;
+ name++;
+ }
+ /* !* filter is a nop */
+ if (neg && wild)
+ return order;
+ if (*name) {
+ int j, len;
+ len = strlen(name);
+ sc = malloc(sizeof(struct sni_ctx) + len + 1);
+ if (!sc)
+ return order;
+ for (j = 0; j < len; j++)
+ sc->name.key[j] = tolower(name[j]);
+ sc->name.key[len] = 0;
+ sc->ctx = ctx;
+ sc->order = order++;
+ sc->neg = neg;
+ if (wild)
+ ebst_insert(&s->sni_w_ctx, &sc->name);
+ else
+ ebst_insert(&s->sni_ctx, &sc->name);
+ }
+ return order;
+}
+
+/* Loads a certificate key and CA chain from a file. Returns 1 on success,
+ * 0 on error, or -1 if an early error happens and the caller must call
+ * SSL_CTX_free() by itself.
+ */
+static int ssl_sock_load_cert_chain_file(SSL_CTX *ctx, const char *file, struct bind_conf *s, char **sni_filter, int fcount)
+{
+ BIO *in;
+ X509 *x = NULL, *ca;
+ int i, err;
+ int ret = -1;
+ int order = 0;
+ X509_NAME *xname;
+ char *str;
+#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
+ STACK_OF(GENERAL_NAME) *names;
+#endif
+
+ in = BIO_new(BIO_s_file());
+ if (in == NULL)
+ goto end;
+
+ if (BIO_read_filename(in, file) <= 0)
+ goto end;
+
+ x = PEM_read_bio_X509_AUX(in, NULL, ctx->default_passwd_callback, ctx->default_passwd_callback_userdata);
+ if (x == NULL)
+ goto end;
+
+ if (fcount) {
+ while (fcount--)
+ order = ssl_sock_add_cert_sni(ctx, s, sni_filter[fcount], order);
+ }
+ else {
+#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
+ names = X509_get_ext_d2i(x, NID_subject_alt_name, NULL, NULL);
+ if (names) {
+ for (i = 0; i < sk_GENERAL_NAME_num(names); i++) {
+ GENERAL_NAME *name = sk_GENERAL_NAME_value(names, i);
+ if (name->type == GEN_DNS) {
+ if (ASN1_STRING_to_UTF8((unsigned char **)&str, name->d.dNSName) >= 0) {
+ order = ssl_sock_add_cert_sni(ctx, s, str, order);
+ OPENSSL_free(str);
+ }
+ }
+ }
+ sk_GENERAL_NAME_pop_free(names, GENERAL_NAME_free);
+ }
+#endif /* SSL_CTRL_SET_TLSEXT_HOSTNAME */
+ xname = X509_get_subject_name(x);
+ i = -1;
+ while ((i = X509_NAME_get_index_by_NID(xname, NID_commonName, i)) != -1) {
+ X509_NAME_ENTRY *entry = X509_NAME_get_entry(xname, i);
+ if (ASN1_STRING_to_UTF8((unsigned char **)&str, entry->value) >= 0) {
+ order = ssl_sock_add_cert_sni(ctx, s, str, order);
+ OPENSSL_free(str);
+ }
+ }
+ }
+
+ ret = 0; /* the caller must not free the SSL_CTX argument anymore */
+ if (!SSL_CTX_use_certificate(ctx, x))
+ goto end;
+
+ if (ctx->extra_certs != NULL) {
+ sk_X509_pop_free(ctx->extra_certs, X509_free);
+ ctx->extra_certs = NULL;
+ }
+
+ while ((ca = PEM_read_bio_X509(in, NULL, ctx->default_passwd_callback, ctx->default_passwd_callback_userdata))) {
+ if (!SSL_CTX_add_extra_chain_cert(ctx, ca)) {
+ X509_free(ca);
+ goto end;
+ }
+ }
+
+ err = ERR_get_error();
+ if (!err || (ERR_GET_LIB(err) == ERR_LIB_PEM && ERR_GET_REASON(err) == PEM_R_NO_START_LINE)) {
+ /* we successfully reached the last cert in the file */
+ ret = 1;
+ }
+ ERR_clear_error();
+
+end:
+ if (x)
+ X509_free(x);
+
+ if (in)
+ BIO_free(in);
+
+ return ret;
+}
+
+static int ssl_sock_load_cert_file(const char *path, struct bind_conf *bind_conf, struct proxy *curproxy, char **sni_filter, int fcount, char **err)
+{
+ int ret;
+ SSL_CTX *ctx;
+
+ ctx = SSL_CTX_new(SSLv23_server_method());
+ if (!ctx) {
+ memprintf(err, "%sunable to allocate SSL context for cert '%s'.\n",
+ err && *err ? *err : "", path);
+ return 1;
+ }
+
+ if (SSL_CTX_use_PrivateKey_file(ctx, path, SSL_FILETYPE_PEM) <= 0) {
+ memprintf(err, "%sunable to load SSL private key from PEM file '%s'.\n",
+ err && *err ? *err : "", path);
+ SSL_CTX_free(ctx);
+ return 1;
+ }
+
+ ret = ssl_sock_load_cert_chain_file(ctx, path, bind_conf, sni_filter, fcount);
+ if (ret <= 0) {
+ memprintf(err, "%sunable to load SSL certificate from PEM file '%s'.\n",
+ err && *err ? *err : "", path);
+ if (ret < 0) /* serious error, must do that ourselves */
+ SSL_CTX_free(ctx);
+ return 1;
+ }
+
+ if (SSL_CTX_check_private_key(ctx) <= 0) {
+ memprintf(err, "%sinconsistencies between private key and certificate loaded from PEM file '%s'.\n",
+ err && *err ? *err : "", path);
+ return 1;
+ }
+
+ /* we must not free the SSL_CTX anymore below, since it's already in
+ * the tree, so it will be discovered and cleaned in time.
+ */
+#ifndef OPENSSL_NO_DH
+ /* store a NULL pointer to indicate we have not yet loaded
+ a custom DH param file */
+ if (ssl_dh_ptr_index >= 0) {
+ SSL_CTX_set_ex_data(ctx, ssl_dh_ptr_index, NULL);
+ }
+
+ ret = ssl_sock_load_dh_params(ctx, path);
+ if (ret < 0) {
+ if (err)
+ memprintf(err, "%sunable to load DH parameters from file '%s'.\n",
+ *err ? *err : "", path);
+ return 1;
+ }
+#endif
+
+#if (defined SSL_CTRL_SET_TLSEXT_STATUS_REQ_CB && !defined OPENSSL_NO_OCSP)
+ ret = ssl_sock_load_ocsp(ctx, path);
+ if (ret < 0) {
+ if (err)
+ memprintf(err, "%s '%s.ocsp' is present and activates OCSP but it is impossible to compute the OCSP certificate ID (maybe the issuer could not be found).\n",
+ *err ? *err : "", path);
+ return 1;
+ }
+#endif
+
+#if (OPENSSL_VERSION_NUMBER >= 0x1000200fL && !defined OPENSSL_NO_TLSEXT && !defined OPENSSL_IS_BORINGSSL && !defined LIBRESSL_VERSION_NUMBER)
+ if (sctl_ex_index >= 0) {
+ ret = ssl_sock_load_sctl(ctx, path);
+ if (ret < 0) {
+ if (err)
+ memprintf(err, "%s '%s.sctl' is present but cannot be read or parsed.\n",
+ *err ? *err : "", path);
+ return 1;
+ }
+ }
+#endif
+
+#ifndef SSL_CTRL_SET_TLSEXT_HOSTNAME
+ if (bind_conf->default_ctx) {
+ memprintf(err, "%sthis version of openssl cannot load multiple SSL certificates.\n",
+ err && *err ? *err : "");
+ return 1;
+ }
+#endif
+ if (!bind_conf->default_ctx)
+ bind_conf->default_ctx = ctx;
+
+ return 0;
+}
+
+int ssl_sock_load_cert(char *path, struct bind_conf *bind_conf, struct proxy *curproxy, char **err)
+{
+ struct dirent **de_list;
+ int i, n;
+ DIR *dir;
+ struct stat buf;
+ char *end;
+ char fp[MAXPATHLEN+1];
+ int cfgerr = 0;
+
+ if (!(dir = opendir(path)))
+ return ssl_sock_load_cert_file(path, bind_conf, curproxy, NULL, 0, err);
+
+ /* strip trailing slashes, including first one */
+ for (end = path + strlen(path) - 1; end >= path && *end == '/'; end--)
+ *end = 0;
+
+ n = scandir(path, &de_list, 0, alphasort);
+ if (n < 0) {
+ memprintf(err, "%sunable to scan directory '%s' : %s.\n",
+ err && *err ? *err : "", path, strerror(errno));
+ cfgerr++;
+ }
+ else {
+ for (i = 0; i < n; i++) {
+ struct dirent *de = de_list[i];
+
+ end = strrchr(de->d_name, '.');
+ if (end && (!strcmp(end, ".issuer") || !strcmp(end, ".ocsp") || !strcmp(end, ".sctl")))
+ goto ignore_entry;
+
+ snprintf(fp, sizeof(fp), "%s/%s", path, de->d_name);
+ if (stat(fp, &buf) != 0) {
+ memprintf(err, "%sunable to stat SSL certificate from file '%s' : %s.\n",
+ err && *err ? *err : "", fp, strerror(errno));
+ cfgerr++;
+ goto ignore_entry;
+ }
+ if (!S_ISREG(buf.st_mode))
+ goto ignore_entry;
+ cfgerr += ssl_sock_load_cert_file(fp, bind_conf, curproxy, NULL, 0, err);
+ ignore_entry:
+ free(de);
+ }
+ free(de_list);
+ }
+ closedir(dir);
+ return cfgerr;
+}
+
+/* Make sure openssl opens /dev/urandom before the chroot. The work is only
+ * done once. Zero is returned if the operation fails. No error is reported
+ * if the random generator is said to be unimplemented, because we expect
+ * that openssl will fall back to another method once needed.
+ */
+static int ssl_initialize_random(void)
+{
+ unsigned char random;
+ static int random_initialized = 0;
+
+ if (!random_initialized && RAND_bytes(&random, 1) != 0)
+ random_initialized = 1;
+
+ return random_initialized;
+}
+
+int ssl_sock_load_cert_list_file(char *file, struct bind_conf *bind_conf, struct proxy *curproxy, char **err)
+{
+ char thisline[LINESIZE];
+ FILE *f;
+ int linenum = 0;
+ int cfgerr = 0;
+
+ if ((f = fopen(file, "r")) == NULL) {
+ memprintf(err, "cannot open file '%s' : %s", file, strerror(errno));
+ return 1;
+ }
+
+ while (fgets(thisline, sizeof(thisline), f) != NULL) {
+ int arg;
+ int newarg;
+ char *end;
+ char *args[MAX_LINE_ARGS + 1];
+ char *line = thisline;
+
+ linenum++;
+ end = line + strlen(line);
+ if (end-line == sizeof(thisline)-1 && *(end-1) != '\n') {
+ /* Check if we reached the limit and the last char is not \n.
+ * Watch out for the last line without the terminating '\n'!
+ */
+ memprintf(err, "line %d too long in file '%s', limit is %d characters",
+ linenum, file, (int)sizeof(thisline)-1);
+ cfgerr = 1;
+ break;
+ }
+
+ arg = 0;
+ newarg = 1;
+ while (*line) {
+ if (*line == '#' || *line == '\n' || *line == '\r') {
+ /* end of string, end of loop */
+ *line = 0;
+ break;
+ }
+ else if (isspace(*line)) {
+ newarg = 1;
+ *line = 0;
+ }
+ else if (newarg) {
+ if (arg == MAX_LINE_ARGS) {
+ memprintf(err, "too many args on line %d in file '%s'.",
+ linenum, file);
+ cfgerr = 1;
+ break;
+ }
+ newarg = 0;
+ args[arg++] = line;
+ }
+ line++;
+ }
+ if (cfgerr)
+ break;
+
+ /* empty line */
+ if (!arg)
+ continue;
+
+ cfgerr = ssl_sock_load_cert_file(args[0], bind_conf, curproxy, &args[1], arg-1, err);
+ if (cfgerr) {
+ memprintf(err, "error processing line %d in file '%s' : %s", linenum, file, *err);
+ break;
+ }
+ }
+ fclose(f);
+ return cfgerr;
+}
+
+#ifndef SSL_OP_CIPHER_SERVER_PREFERENCE /* needs OpenSSL >= 0.9.7 */
+#define SSL_OP_CIPHER_SERVER_PREFERENCE 0
+#endif
+
+#ifndef SSL_OP_NO_SESSION_RESUMPTION_ON_RENEGOTIATION /* needs OpenSSL >= 0.9.7 */
+#define SSL_OP_NO_SESSION_RESUMPTION_ON_RENEGOTIATION 0
+#define SSL_renegotiate_pending(arg) 0
+#endif
+#ifndef SSL_OP_SINGLE_ECDH_USE /* needs OpenSSL >= 0.9.8 */
+#define SSL_OP_SINGLE_ECDH_USE 0
+#endif
+#ifndef SSL_OP_NO_TICKET /* needs OpenSSL >= 0.9.8 */
+#define SSL_OP_NO_TICKET 0
+#endif
+#ifndef SSL_OP_NO_COMPRESSION /* needs OpenSSL >= 0.9.9 */
+#define SSL_OP_NO_COMPRESSION 0
+#endif
+#ifndef SSL_OP_NO_TLSv1_1 /* needs OpenSSL >= 1.0.1 */
+#define SSL_OP_NO_TLSv1_1 0
+#endif
+#ifndef SSL_OP_NO_TLSv1_2 /* needs OpenSSL >= 1.0.1 */
+#define SSL_OP_NO_TLSv1_2 0
+#endif
+#ifndef SSL_OP_SINGLE_DH_USE /* needs OpenSSL >= 0.9.6 */
+#define SSL_OP_SINGLE_DH_USE 0
+#endif
+#ifndef SSL_MODE_RELEASE_BUFFERS /* needs OpenSSL >= 1.0.0 */
+#define SSL_MODE_RELEASE_BUFFERS 0
+#endif
+#ifndef SSL_MODE_SMALL_BUFFERS /* needs small_records.patch */
+#define SSL_MODE_SMALL_BUFFERS 0
+#endif
+
+int ssl_sock_prepare_ctx(struct bind_conf *bind_conf, SSL_CTX *ctx, struct proxy *curproxy)
+{
+ int cfgerr = 0;
+ int verify = SSL_VERIFY_NONE;
+ long ssloptions =
+ SSL_OP_ALL | /* all known workarounds for bugs */
+ SSL_OP_NO_SSLv2 |
+ SSL_OP_NO_COMPRESSION |
+ SSL_OP_SINGLE_DH_USE |
+ SSL_OP_SINGLE_ECDH_USE |
+ SSL_OP_NO_SESSION_RESUMPTION_ON_RENEGOTIATION |
+ SSL_OP_CIPHER_SERVER_PREFERENCE;
+ long sslmode =
+ SSL_MODE_ENABLE_PARTIAL_WRITE |
+ SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER |
+ SSL_MODE_RELEASE_BUFFERS |
+ SSL_MODE_SMALL_BUFFERS;
+ STACK_OF(SSL_CIPHER) * ciphers = NULL;
+ SSL_CIPHER * cipher = NULL;
+ char cipher_description[128];
+ /* The description of ciphers using an Ephemeral Diffie Hellman key exchange
+ contains " Kx=DH " or " Kx=DH(". Beware of " Kx=DH/",
+ which is not ephemeral DH. */
+ const char dhe_description[] = " Kx=DH ";
+ const char dhe_export_description[] = " Kx=DH(";
+ int idx = 0;
+ int dhe_found = 0;
+ SSL *ssl = NULL;
+
+ /* Make sure openssl opens /dev/urandom before the chroot */
+ if (!ssl_initialize_random()) {
+ Alert("OpenSSL random data generator initialization failed.\n");
+ cfgerr++;
+ }
+
+ if (bind_conf->ssl_options & BC_SSL_O_NO_SSLV3)
+ ssloptions |= SSL_OP_NO_SSLv3;
+ if (bind_conf->ssl_options & BC_SSL_O_NO_TLSV10)
+ ssloptions |= SSL_OP_NO_TLSv1;
+ if (bind_conf->ssl_options & BC_SSL_O_NO_TLSV11)
+ ssloptions |= SSL_OP_NO_TLSv1_1;
+ if (bind_conf->ssl_options & BC_SSL_O_NO_TLSV12)
+ ssloptions |= SSL_OP_NO_TLSv1_2;
+ if (bind_conf->ssl_options & BC_SSL_O_NO_TLS_TICKETS)
+ ssloptions |= SSL_OP_NO_TICKET;
+ if (bind_conf->ssl_options & BC_SSL_O_USE_SSLV3) {
+#ifndef OPENSSL_NO_SSL3
+ SSL_CTX_set_ssl_version(ctx, SSLv3_server_method());
+#else
+ Alert("SSLv3 support requested but unavailable.\n");
+ cfgerr++;
+#endif
+ }
+ if (bind_conf->ssl_options & BC_SSL_O_USE_TLSV10)
+ SSL_CTX_set_ssl_version(ctx, TLSv1_server_method());
+#if SSL_OP_NO_TLSv1_1
+ if (bind_conf->ssl_options & BC_SSL_O_USE_TLSV11)
+ SSL_CTX_set_ssl_version(ctx, TLSv1_1_server_method());
+#endif
+#if SSL_OP_NO_TLSv1_2
+ if (bind_conf->ssl_options & BC_SSL_O_USE_TLSV12)
+ SSL_CTX_set_ssl_version(ctx, TLSv1_2_server_method());
+#endif
+
+ SSL_CTX_set_options(ctx, ssloptions);
+ SSL_CTX_set_mode(ctx, sslmode);
+ switch (bind_conf->verify) {
+ case SSL_SOCK_VERIFY_NONE:
+ verify = SSL_VERIFY_NONE;
+ break;
+ case SSL_SOCK_VERIFY_OPTIONAL:
+ verify = SSL_VERIFY_PEER;
+ break;
+ case SSL_SOCK_VERIFY_REQUIRED:
+ verify = SSL_VERIFY_PEER|SSL_VERIFY_FAIL_IF_NO_PEER_CERT;
+ break;
+ }
+ SSL_CTX_set_verify(ctx, verify, ssl_sock_bind_verifycbk);
+ if (verify & SSL_VERIFY_PEER) {
+ if (bind_conf->ca_file) {
+ /* load CAfile to verify */
+ if (!SSL_CTX_load_verify_locations(ctx, bind_conf->ca_file, NULL)) {
+ Alert("Proxy '%s': unable to load CA file '%s' for bind '%s' at [%s:%d].\n",
+ curproxy->id, bind_conf->ca_file, bind_conf->arg, bind_conf->file, bind_conf->line);
+ cfgerr++;
+ }
+ /* set CA names for client cert request, function returns void */
+ SSL_CTX_set_client_CA_list(ctx, SSL_load_client_CA_file(bind_conf->ca_file));
+ }
+ else {
+ Alert("Proxy '%s': verify is enabled but no CA file specified for bind '%s' at [%s:%d].\n",
+ curproxy->id, bind_conf->arg, bind_conf->file, bind_conf->line);
+ cfgerr++;
+ }
+#ifdef X509_V_FLAG_CRL_CHECK
+ if (bind_conf->crl_file) {
+ X509_STORE *store = SSL_CTX_get_cert_store(ctx);
+
+ if (!store || !X509_STORE_load_locations(store, bind_conf->crl_file, NULL)) {
+ Alert("Proxy '%s': unable to configure CRL file '%s' for bind '%s' at [%s:%d].\n",
+ curproxy->id, bind_conf->crl_file, bind_conf->arg, bind_conf->file, bind_conf->line);
+ cfgerr++;
+ }
+ else {
+ X509_STORE_set_flags(store, X509_V_FLAG_CRL_CHECK|X509_V_FLAG_CRL_CHECK_ALL);
+ }
+ }
+#endif
+ ERR_clear_error();
+ }
+
+#if (defined SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB && TLS_TICKETS_NO > 0)
+ if (bind_conf->keys_ref) {
+ if (!SSL_CTX_set_tlsext_ticket_key_cb(ctx, ssl_tlsext_ticket_key_cb)) {
+ Alert("Proxy '%s': unable to set callback for TLS ticket validation for bind '%s' at [%s:%d].\n",
+ curproxy->id, bind_conf->arg, bind_conf->file, bind_conf->line);
+ cfgerr++;
+ }
+ }
+#endif
+
+ if (global.tune.ssllifetime)
+ SSL_CTX_set_timeout(ctx, global.tune.ssllifetime);
+
+ shared_context_set_cache(ctx);
+ if (bind_conf->ciphers &&
+ !SSL_CTX_set_cipher_list(ctx, bind_conf->ciphers)) {
+ Alert("Proxy '%s': unable to set SSL cipher list to '%s' for bind '%s' at [%s:%d].\n",
+ curproxy->id, bind_conf->ciphers, bind_conf->arg, bind_conf->file, bind_conf->line);
+ cfgerr++;
+ }
+
+ /* If tune.ssl.default-dh-param has not been set,
+ neither has ssl-default-dh-file and no static DH
+ params were in the certificate file. */
+ if (global.tune.ssl_default_dh_param == 0 &&
+ global_dh == NULL &&
+ (ssl_dh_ptr_index == -1 ||
+ SSL_CTX_get_ex_data(ctx, ssl_dh_ptr_index) == NULL)) {
+
+ ssl = SSL_new(ctx);
+
+ if (ssl) {
+ ciphers = SSL_get_ciphers(ssl);
+
+ if (ciphers) {
+ for (idx = 0; idx < sk_SSL_CIPHER_num(ciphers); idx++) {
+ cipher = sk_SSL_CIPHER_value(ciphers, idx);
+ if (SSL_CIPHER_description(cipher, cipher_description, sizeof (cipher_description)) == cipher_description) {
+ if (strstr(cipher_description, dhe_description) != NULL ||
+ strstr(cipher_description, dhe_export_description) != NULL) {
+ dhe_found = 1;
+ break;
+ }
+ }
+ }
+ }
+ SSL_free(ssl);
+ ssl = NULL;
+ }
+
+ if (dhe_found) {
+ Warning("Setting tune.ssl.default-dh-param to 1024 by default. If your workload permits it, you should set it to at least 2048. Setting any value >= 1024 makes this warning disappear.\n");
+ }
+
+ global.tune.ssl_default_dh_param = 1024;
+ }
+
+#ifndef OPENSSL_NO_DH
+ if (global.tune.ssl_default_dh_param >= 1024) {
+ if (local_dh_1024 == NULL) {
+ local_dh_1024 = ssl_get_dh_1024();
+ }
+ if (global.tune.ssl_default_dh_param >= 2048) {
+ if (local_dh_2048 == NULL) {
+ local_dh_2048 = ssl_get_dh_2048();
+ }
+ if (global.tune.ssl_default_dh_param >= 4096) {
+ if (local_dh_4096 == NULL) {
+ local_dh_4096 = ssl_get_dh_4096();
+ }
+ }
+ }
+ }
+#endif /* OPENSSL_NO_DH */
+
+ SSL_CTX_set_info_callback(ctx, ssl_sock_infocbk);
+#if OPENSSL_VERSION_NUMBER >= 0x00907000L
+ SSL_CTX_set_msg_callback(ctx, ssl_sock_msgcbk);
+#endif
+
+#ifdef OPENSSL_NPN_NEGOTIATED
+ if (bind_conf->npn_str)
+ SSL_CTX_set_next_protos_advertised_cb(ctx, ssl_sock_advertise_npn_protos, bind_conf);
+#endif
+#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation
+ if (bind_conf->alpn_str)
+ SSL_CTX_set_alpn_select_cb(ctx, ssl_sock_advertise_alpn_protos, bind_conf);
+#endif
+
+#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
+ SSL_CTX_set_tlsext_servername_callback(ctx, ssl_sock_switchctx_cbk);
+ SSL_CTX_set_tlsext_servername_arg(ctx, bind_conf);
+#endif
+#if defined(SSL_CTX_set_tmp_ecdh) && !defined(OPENSSL_NO_ECDH)
+ {
+ int i;
+ EC_KEY *ecdh;
+
+ i = OBJ_sn2nid(bind_conf->ecdhe ? bind_conf->ecdhe : ECDHE_DEFAULT_CURVE);
+ if (!i || ((ecdh = EC_KEY_new_by_curve_name(i)) == NULL)) {
+ Alert("Proxy '%s': unable to set elliptic named curve to '%s' for bind '%s' at [%s:%d].\n",
+ curproxy->id, bind_conf->ecdhe ? bind_conf->ecdhe : ECDHE_DEFAULT_CURVE,
+ bind_conf->arg, bind_conf->file, bind_conf->line);
+ cfgerr++;
+ }
+ else {
+ SSL_CTX_set_tmp_ecdh(ctx, ecdh);
+ EC_KEY_free(ecdh);
+ }
+ }
+#endif
+
+ return cfgerr;
+}
+
+static int ssl_sock_srv_hostcheck(const char *pattern, const char *hostname)
+{
+ const char *pattern_wildcard, *pattern_left_label_end, *hostname_left_label_end;
+ size_t prefixlen, suffixlen;
+
+ /* Trivial case */
+ if (strcmp(pattern, hostname) == 0)
+ return 1;
+
+ /* The rest of this logic is based on RFC 6125, section 6.4.3
+ * (http://tools.ietf.org/html/rfc6125#section-6.4.3) */
+
+ pattern_wildcard = NULL;
+ pattern_left_label_end = pattern;
+ while (*pattern_left_label_end != '.') {
+ switch (*pattern_left_label_end) {
+ case 0:
+ /* End of label not found */
+ return 0;
+ case '*':
+ /* Reject patterns containing more than one wildcard */
+ if (pattern_wildcard)
+ return 0;
+ pattern_wildcard = pattern_left_label_end;
+ break;
+ }
+ pattern_left_label_end++;
+ }
+
+ /* If it's not trivial and there is no wildcard, it can't
+ * match */
+ if (!pattern_wildcard)
+ return 0;
+
+ /* Make sure all labels match except the leftmost */
+ hostname_left_label_end = strchr(hostname, '.');
+ if (!hostname_left_label_end
+ || strcmp(pattern_left_label_end, hostname_left_label_end) != 0)
+ return 0;
+
+ /* Make sure the leftmost label of the hostname is long enough
+ * that the wildcard can match */
+ if (hostname_left_label_end - hostname < (pattern_left_label_end - pattern) - 1)
+ return 0;
+
+ /* Finally compare the string on either side of the
+ * wildcard */
+ prefixlen = pattern_wildcard - pattern;
+ suffixlen = pattern_left_label_end - (pattern_wildcard + 1);
+ if ((prefixlen && (memcmp(pattern, hostname, prefixlen) != 0))
+ || (suffixlen && (memcmp(pattern_wildcard + 1, hostname_left_label_end - suffixlen, suffixlen) != 0)))
+ return 0;
+
+ return 1;
+}
+
+static int ssl_sock_srv_verifycbk(int ok, X509_STORE_CTX *ctx)
+{
+ SSL *ssl;
+ struct connection *conn;
+ char *servername;
+
+ int depth;
+ X509 *cert;
+ STACK_OF(GENERAL_NAME) *alt_names;
+ int i;
+ X509_NAME *cert_subject;
+ char *str;
+
+ if (ok == 0)
+ return ok;
+
+ ssl = X509_STORE_CTX_get_ex_data(ctx, SSL_get_ex_data_X509_STORE_CTX_idx());
+ conn = (struct connection *)SSL_get_app_data(ssl);
+
+ servername = objt_server(conn->target)->ssl_ctx.verify_host;
+
+ /* We only need to verify the CN on the actual server cert,
+ * not the indirect CAs */
+ depth = X509_STORE_CTX_get_error_depth(ctx);
+ if (depth != 0)
+ return ok;
+
+ /* At this point, the cert is *not* OK unless we can find a
+ * hostname match */
+ ok = 0;
+
+ cert = X509_STORE_CTX_get_current_cert(ctx);
+ /* It seems like this might happen if verify peer isn't set */
+ if (!cert)
+ return ok;
+
+ alt_names = X509_get_ext_d2i(cert, NID_subject_alt_name, NULL, NULL);
+ if (alt_names) {
+ for (i = 0; !ok && i < sk_GENERAL_NAME_num(alt_names); i++) {
+ GENERAL_NAME *name = sk_GENERAL_NAME_value(alt_names, i);
+ if (name->type == GEN_DNS) {
+#if OPENSSL_VERSION_NUMBER < 0x00907000L
+ if (ASN1_STRING_to_UTF8((unsigned char **)&str, name->d.ia5) >= 0) {
+#else
+ if (ASN1_STRING_to_UTF8((unsigned char **)&str, name->d.dNSName) >= 0) {
+#endif
+ ok = ssl_sock_srv_hostcheck(str, servername);
+ OPENSSL_free(str);
+ }
+ }
+ }
+ sk_GENERAL_NAME_pop_free(alt_names, GENERAL_NAME_free);
+ }
+
+ cert_subject = X509_get_subject_name(cert);
+ i = -1;
+ while (!ok && (i = X509_NAME_get_index_by_NID(cert_subject, NID_commonName, i)) != -1) {
+ X509_NAME_ENTRY *entry = X509_NAME_get_entry(cert_subject, i);
+ if (ASN1_STRING_to_UTF8((unsigned char **)&str, entry->value) >= 0) {
+ ok = ssl_sock_srv_hostcheck(str, servername);
+ OPENSSL_free(str);
+ }
+ }
+
+ return ok;
+}
+
+/* Prepare the SSL context from the server's options. Returns an error count. */
+int ssl_sock_prepare_srv_ctx(struct server *srv, struct proxy *curproxy)
+{
+ int cfgerr = 0;
+ long options =
+ SSL_OP_ALL | /* all known workarounds for bugs */
+ SSL_OP_NO_SSLv2 |
+ SSL_OP_NO_COMPRESSION;
+ long mode =
+ SSL_MODE_ENABLE_PARTIAL_WRITE |
+ SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER |
+ SSL_MODE_RELEASE_BUFFERS |
+ SSL_MODE_SMALL_BUFFERS;
+ int verify = SSL_VERIFY_NONE;
+
+ /* Make sure openssl opens /dev/urandom before the chroot */
+ if (!ssl_initialize_random()) {
+ Alert("OpenSSL random data generator initialization failed.\n");
+ cfgerr++;
+ }
+
+ /* Automatic memory computations need to know we use SSL there */
+ global.ssl_used_backend = 1;
+
+ /* Initiate SSL context for current server */
+ srv->ssl_ctx.reused_sess = NULL;
+ if (srv->use_ssl)
+ srv->xprt = &ssl_sock;
+ if (srv->check.use_ssl)
+ srv->check.xprt = &ssl_sock;
+
+ srv->ssl_ctx.ctx = SSL_CTX_new(SSLv23_client_method());
+ if (!srv->ssl_ctx.ctx) {
+ Alert("config : %s '%s', server '%s': unable to allocate ssl context.\n",
+ proxy_type_str(curproxy), curproxy->id,
+ srv->id);
+ cfgerr++;
+ return cfgerr;
+ }
+ if (srv->ssl_ctx.client_crt) {
+ if (SSL_CTX_use_PrivateKey_file(srv->ssl_ctx.ctx, srv->ssl_ctx.client_crt, SSL_FILETYPE_PEM) <= 0) {
+ Alert("config : %s '%s', server '%s': unable to load SSL private key from PEM file '%s'.\n",
+ proxy_type_str(curproxy), curproxy->id,
+ srv->id, srv->ssl_ctx.client_crt);
+ cfgerr++;
+ }
+ else if (SSL_CTX_use_certificate_chain_file(srv->ssl_ctx.ctx, srv->ssl_ctx.client_crt) <= 0) {
+ Alert("config : %s '%s', server '%s': unable to load ssl certificate from PEM file '%s'.\n",
+ proxy_type_str(curproxy), curproxy->id,
+ srv->id, srv->ssl_ctx.client_crt);
+ cfgerr++;
+ }
+ else if (SSL_CTX_check_private_key(srv->ssl_ctx.ctx) <= 0) {
+ Alert("config : %s '%s', server '%s': inconsistencies between private key and certificate loaded from PEM file '%s'.\n",
+ proxy_type_str(curproxy), curproxy->id,
+ srv->id, srv->ssl_ctx.client_crt);
+ cfgerr++;
+ }
+ }
+
+ if (srv->ssl_ctx.options & SRV_SSL_O_NO_SSLV3)
+ options |= SSL_OP_NO_SSLv3;
+ if (srv->ssl_ctx.options & SRV_SSL_O_NO_TLSV10)
+ options |= SSL_OP_NO_TLSv1;
+ if (srv->ssl_ctx.options & SRV_SSL_O_NO_TLSV11)
+ options |= SSL_OP_NO_TLSv1_1;
+ if (srv->ssl_ctx.options & SRV_SSL_O_NO_TLSV12)
+ options |= SSL_OP_NO_TLSv1_2;
+ if (srv->ssl_ctx.options & SRV_SSL_O_NO_TLS_TICKETS)
+ options |= SSL_OP_NO_TICKET;
+ if (srv->ssl_ctx.options & SRV_SSL_O_USE_SSLV3) {
+#ifndef OPENSSL_NO_SSL3
+ SSL_CTX_set_ssl_version(srv->ssl_ctx.ctx, SSLv3_client_method());
+#else
+ Alert("SSLv3 support requested but unavailable.\n");
+ cfgerr++;
+#endif
+ }
+ if (srv->ssl_ctx.options & SRV_SSL_O_USE_TLSV10)
+ SSL_CTX_set_ssl_version(srv->ssl_ctx.ctx, TLSv1_client_method());
+#if SSL_OP_NO_TLSv1_1
+ if (srv->ssl_ctx.options & SRV_SSL_O_USE_TLSV11)
+ SSL_CTX_set_ssl_version(srv->ssl_ctx.ctx, TLSv1_1_client_method());
+#endif
+#if SSL_OP_NO_TLSv1_2
+ if (srv->ssl_ctx.options & SRV_SSL_O_USE_TLSV12)
+ SSL_CTX_set_ssl_version(srv->ssl_ctx.ctx, TLSv1_2_client_method());
+#endif
+
+ SSL_CTX_set_options(srv->ssl_ctx.ctx, options);
+ SSL_CTX_set_mode(srv->ssl_ctx.ctx, mode);
+
+ if (global.ssl_server_verify == SSL_SERVER_VERIFY_REQUIRED)
+ verify = SSL_VERIFY_PEER;
+
+ switch (srv->ssl_ctx.verify) {
+ case SSL_SOCK_VERIFY_NONE:
+ verify = SSL_VERIFY_NONE;
+ break;
+ case SSL_SOCK_VERIFY_REQUIRED:
+ verify = SSL_VERIFY_PEER;
+ break;
+ }
+ SSL_CTX_set_verify(srv->ssl_ctx.ctx,
+ verify,
+ srv->ssl_ctx.verify_host ? ssl_sock_srv_verifycbk : NULL);
+ if (verify & SSL_VERIFY_PEER) {
+ if (srv->ssl_ctx.ca_file) {
+ /* load CAfile to verify */
+ if (!SSL_CTX_load_verify_locations(srv->ssl_ctx.ctx, srv->ssl_ctx.ca_file, NULL)) {
+ Alert("Proxy '%s', server '%s' [%s:%d] unable to load CA file '%s'.\n",
+ curproxy->id, srv->id,
+ srv->conf.file, srv->conf.line, srv->ssl_ctx.ca_file);
+ cfgerr++;
+ }
+ }
+ else {
+ if (global.ssl_server_verify == SSL_SERVER_VERIFY_REQUIRED)
+ Alert("Proxy '%s', server '%s' [%s:%d] verify is enabled by default but no CA file specified. If you're running on a LAN where you're certain to trust the server's certificate, please set an explicit 'verify none' statement on the 'server' line, or use 'ssl-server-verify none' in the global section to disable server-side verifications by default.\n",
+ curproxy->id, srv->id,
+ srv->conf.file, srv->conf.line);
+ else
+ Alert("Proxy '%s', server '%s' [%s:%d] verify is enabled but no CA file specified.\n",
+ curproxy->id, srv->id,
+ srv->conf.file, srv->conf.line);
+ cfgerr++;
+ }
+#ifdef X509_V_FLAG_CRL_CHECK
+ if (srv->ssl_ctx.crl_file) {
+ X509_STORE *store = SSL_CTX_get_cert_store(srv->ssl_ctx.ctx);
+
+ if (!store || !X509_STORE_load_locations(store, srv->ssl_ctx.crl_file, NULL)) {
+ Alert("Proxy '%s', server '%s' [%s:%d] unable to configure CRL file '%s'.\n",
+ curproxy->id, srv->id,
+ srv->conf.file, srv->conf.line, srv->ssl_ctx.crl_file);
+ cfgerr++;
+ }
+ else {
+ X509_STORE_set_flags(store, X509_V_FLAG_CRL_CHECK|X509_V_FLAG_CRL_CHECK_ALL);
+ }
+ }
+#endif
+ }
+
+ if (global.tune.ssllifetime)
+ SSL_CTX_set_timeout(srv->ssl_ctx.ctx, global.tune.ssllifetime);
+
+ SSL_CTX_set_session_cache_mode(srv->ssl_ctx.ctx, SSL_SESS_CACHE_OFF);
+ if (srv->ssl_ctx.ciphers &&
+ !SSL_CTX_set_cipher_list(srv->ssl_ctx.ctx, srv->ssl_ctx.ciphers)) {
+ Alert("Proxy '%s', server '%s' [%s:%d] : unable to set SSL cipher list to '%s'.\n",
+ curproxy->id, srv->id,
+ srv->conf.file, srv->conf.line, srv->ssl_ctx.ciphers);
+ cfgerr++;
+ }
+
+ return cfgerr;
+}
+
+/* Walks down the two trees in bind_conf and prepares all certs. The pointer may
+ * be NULL, in which case nothing is done. Returns the number of errors
+ * encountered.
+ */
+int ssl_sock_prepare_all_ctx(struct bind_conf *bind_conf, struct proxy *px)
+{
+ struct ebmb_node *node;
+ struct sni_ctx *sni;
+ int err = 0;
+
+ if (!bind_conf || !bind_conf->is_ssl)
+ return 0;
+
+ /* Automatic memory computations need to know we use SSL there */
+ global.ssl_used_frontend = 1;
+
+ if (bind_conf->default_ctx)
+ err += ssl_sock_prepare_ctx(bind_conf, bind_conf->default_ctx, px);
+
+ node = ebmb_first(&bind_conf->sni_ctx);
+ while (node) {
+ sni = ebmb_entry(node, struct sni_ctx, name);
+ if (!sni->order && sni->ctx != bind_conf->default_ctx)
+ /* only initialize the CTX on its first occurrence and
+ if it is not the default_ctx */
+ err += ssl_sock_prepare_ctx(bind_conf, sni->ctx, px);
+ node = ebmb_next(node);
+ }
+
+ node = ebmb_first(&bind_conf->sni_w_ctx);
+ while (node) {
+ sni = ebmb_entry(node, struct sni_ctx, name);
+ if (!sni->order && sni->ctx != bind_conf->default_ctx)
+ /* only initialize the CTX on its first occurrence and
+ if it is not the default_ctx */
+ err += ssl_sock_prepare_ctx(bind_conf, sni->ctx, px);
+ node = ebmb_next(node);
+ }
+ return err;
+}
+
+
+/* release ssl context allocated for servers. */
+void ssl_sock_free_srv_ctx(struct server *srv)
+{
+ if (srv->ssl_ctx.ctx)
+ SSL_CTX_free(srv->ssl_ctx.ctx);
+}
+
+/* Walks down the two trees in bind_conf and frees all the certs. The pointer may
+ * be NULL, in which case nothing is done. The default_ctx is nullified too.
+ */
+void ssl_sock_free_all_ctx(struct bind_conf *bind_conf)
+{
+ struct ebmb_node *node, *back;
+ struct sni_ctx *sni;
+
+ if (!bind_conf || !bind_conf->is_ssl)
+ return;
+
+ node = ebmb_first(&bind_conf->sni_ctx);
+ while (node) {
+ sni = ebmb_entry(node, struct sni_ctx, name);
+ back = ebmb_next(node);
+ ebmb_delete(node);
+ if (!sni->order) /* only free the CTX on its first occurrence */
+ SSL_CTX_free(sni->ctx);
+ free(sni);
+ node = back;
+ }
+
+ node = ebmb_first(&bind_conf->sni_w_ctx);
+ while (node) {
+ sni = ebmb_entry(node, struct sni_ctx, name);
+ back = ebmb_next(node);
+ ebmb_delete(node);
+ if (!sni->order) /* only free the CTX on its first occurrence */
+ SSL_CTX_free(sni->ctx);
+ free(sni);
+ node = back;
+ }
+
+ bind_conf->default_ctx = NULL;
+}
+
+/* Load CA cert file and private key used to generate certificates */
+int
+ssl_sock_load_ca(struct bind_conf *bind_conf, struct proxy *px)
+{
+ FILE *fp;
+ X509 *cacert = NULL;
+ EVP_PKEY *capkey = NULL;
+ int err = 0;
+
+ if (!bind_conf || !bind_conf->generate_certs)
+ return err;
+
+#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
+ if (global.tune.ssl_ctx_cache)
+ ssl_ctx_lru_tree = lru64_new(global.tune.ssl_ctx_cache);
+ ssl_ctx_lru_seed = (unsigned int)time(NULL);
+#endif
+
+ if (!bind_conf->ca_sign_file) {
+ Alert("Proxy '%s': cannot enable certificate generation, "
+ "no CA certificate file configured at [%s:%d].\n",
+ px->id, bind_conf->file, bind_conf->line);
+ goto load_error;
+ }
+
+ /* read in the CA certificate */
+ if (!(fp = fopen(bind_conf->ca_sign_file, "r"))) {
+ Alert("Proxy '%s': Failed to read CA certificate file '%s' at [%s:%d].\n",
+ px->id, bind_conf->ca_sign_file, bind_conf->file, bind_conf->line);
+ goto load_error;
+ }
+ if (!(cacert = PEM_read_X509(fp, NULL, NULL, NULL))) {
+ Alert("Proxy '%s': Failed to read CA certificate file '%s' at [%s:%d].\n",
+ px->id, bind_conf->ca_sign_file, bind_conf->file, bind_conf->line);
+ goto read_error;
+ }
+ rewind(fp);
+ if (!(capkey = PEM_read_PrivateKey(fp, NULL, NULL, bind_conf->ca_sign_pass))) {
+ Alert("Proxy '%s': Failed to read CA private key file '%s' at [%s:%d].\n",
+ px->id, bind_conf->ca_sign_file, bind_conf->file, bind_conf->line);
+ goto read_error;
+ }
+
+ fclose(fp);
+ bind_conf->ca_sign_cert = cacert;
+ bind_conf->ca_sign_pkey = capkey;
+ return err;
+
+ read_error:
+ fclose(fp);
+ if (capkey) EVP_PKEY_free(capkey);
+ if (cacert) X509_free(cacert);
+ load_error:
+ bind_conf->generate_certs = 0;
+ err++;
+ return err;
+}
+
+/* Release the CA cert and private key used to generate certificates */
+void
+ssl_sock_free_ca(struct bind_conf *bind_conf)
+{
+ if (!bind_conf)
+ return;
+
+ if (bind_conf->ca_sign_pkey)
+ EVP_PKEY_free(bind_conf->ca_sign_pkey);
+ if (bind_conf->ca_sign_cert)
+ X509_free(bind_conf->ca_sign_cert);
+}
+
+/*
+ * This function is called if the SSL context is not yet allocated. The function
+ * is designed to be called before any other data-layer operation and sets the
+ * handshake flag on the connection. It is safe to call it multiple times.
+ * It returns 0 on success and -1 on error.
+ */
+static int ssl_sock_init(struct connection *conn)
+{
+ /* already initialized */
+ if (conn->xprt_ctx)
+ return 0;
+
+ if (!conn_ctrl_ready(conn))
+ return 0;
+
+ if (global.maxsslconn && sslconns >= global.maxsslconn) {
+ conn->err_code = CO_ER_SSL_TOO_MANY;
+ return -1;
+ }
+
+ /* If in client mode, initiate the SSL session in connect state,
+ otherwise in accept state */
+ if (objt_server(conn->target)) {
+ int may_retry = 1;
+
+ retry_connect:
+ /* Alloc a new SSL session ctx */
+ conn->xprt_ctx = SSL_new(objt_server(conn->target)->ssl_ctx.ctx);
+ if (!conn->xprt_ctx) {
+ if (may_retry--) {
+ pool_gc2();
+ goto retry_connect;
+ }
+ conn->err_code = CO_ER_SSL_NO_MEM;
+ return -1;
+ }
+
+ /* set fd on SSL session context */
+ if (!SSL_set_fd(conn->xprt_ctx, conn->t.sock.fd)) {
+ SSL_free(conn->xprt_ctx);
+ conn->xprt_ctx = NULL;
+ if (may_retry--) {
+ pool_gc2();
+ goto retry_connect;
+ }
+ conn->err_code = CO_ER_SSL_NO_MEM;
+ return -1;
+ }
+
+ /* set connection pointer */
+ if (!SSL_set_app_data(conn->xprt_ctx, conn)) {
+ SSL_free(conn->xprt_ctx);
+ conn->xprt_ctx = NULL;
+ if (may_retry--) {
+ pool_gc2();
+ goto retry_connect;
+ }
+ conn->err_code = CO_ER_SSL_NO_MEM;
+ return -1;
+ }
+
+ SSL_set_connect_state(conn->xprt_ctx);
+ if (objt_server(conn->target)->ssl_ctx.reused_sess) {
+ if (!SSL_set_session(conn->xprt_ctx, objt_server(conn->target)->ssl_ctx.reused_sess)) {
+ SSL_SESSION_free(objt_server(conn->target)->ssl_ctx.reused_sess);
+ objt_server(conn->target)->ssl_ctx.reused_sess = NULL;
+ }
+ }
+
+ /* leave init state and start handshake */
+ conn->flags |= CO_FL_SSL_WAIT_HS | CO_FL_WAIT_L6_CONN;
+
+ sslconns++;
+ totalsslconns++;
+ return 0;
+ }
+ else if (objt_listener(conn->target)) {
+ int may_retry = 1;
+
+ retry_accept:
+ /* Alloc a new SSL session ctx */
+ conn->xprt_ctx = SSL_new(objt_listener(conn->target)->bind_conf->default_ctx);
+ if (!conn->xprt_ctx) {
+ if (may_retry--) {
+ pool_gc2();
+ goto retry_accept;
+ }
+ conn->err_code = CO_ER_SSL_NO_MEM;
+ return -1;
+ }
+
+ /* set fd on SSL session context */
+ if (!SSL_set_fd(conn->xprt_ctx, conn->t.sock.fd)) {
+ SSL_free(conn->xprt_ctx);
+ conn->xprt_ctx = NULL;
+ if (may_retry--) {
+ pool_gc2();
+ goto retry_accept;
+ }
+ conn->err_code = CO_ER_SSL_NO_MEM;
+ return -1;
+ }
+
+ /* set connection pointer */
+ if (!SSL_set_app_data(conn->xprt_ctx, conn)) {
+ SSL_free(conn->xprt_ctx);
+ conn->xprt_ctx = NULL;
+ if (may_retry--) {
+ pool_gc2();
+ goto retry_accept;
+ }
+ conn->err_code = CO_ER_SSL_NO_MEM;
+ return -1;
+ }
+
+ SSL_set_accept_state(conn->xprt_ctx);
+
+ /* leave init state and start handshake */
+ conn->flags |= CO_FL_SSL_WAIT_HS | CO_FL_WAIT_L6_CONN;
+
+ sslconns++;
+ totalsslconns++;
+ return 0;
+ }
+ /* don't know how to handle such a target */
+ conn->err_code = CO_ER_SSL_NO_TARGET;
+ return -1;
+}
+
+
+/* This is the callback which is used when an SSL handshake is pending. It
+ * updates the FD status if it wants some polling before being called again.
+ * It returns 0 if it fails in a fatal way or needs to poll to go further,
+ * otherwise it returns non-zero and removes itself from the connection's
+ * flags (the bit is provided in <flag> by the caller).
+ */
+int ssl_sock_handshake(struct connection *conn, unsigned int flag)
+{
+ int ret;
+
+ if (!conn_ctrl_ready(conn))
+ return 0;
+
+ if (!conn->xprt_ctx)
+ goto out_error;
+
+ /* If we use SSL_do_handshake to process a reneg initiated by
+ * the remote peer, it sometimes returns SSL_ERROR_SSL.
+ * Usually SSL_write and SSL_read are used and process implicitly
+ * the reneg handshake.
+ * Here we use SSL_peek as a workaround for reneg.
+ */
+ if ((conn->flags & CO_FL_CONNECTED) && SSL_renegotiate_pending(conn->xprt_ctx)) {
+ char c;
+
+ ret = SSL_peek(conn->xprt_ctx, &c, 1);
+ if (ret <= 0) {
+ /* handshake may have not been completed, let's find why */
+ ret = SSL_get_error(conn->xprt_ctx, ret);
+ if (ret == SSL_ERROR_WANT_WRITE) {
+ /* SSL handshake needs to write, L4 connection may not be ready */
+ __conn_sock_stop_recv(conn);
+ __conn_sock_want_send(conn);
+ fd_cant_send(conn->t.sock.fd);
+ return 0;
+ }
+ else if (ret == SSL_ERROR_WANT_READ) {
+ /* handshake may have been completed but we have
+ * no more data to read.
+ */
+ if (!SSL_renegotiate_pending(conn->xprt_ctx)) {
+ ret = 1;
+ goto reneg_ok;
+ }
+ /* SSL handshake needs to read, L4 connection is ready */
+ if (conn->flags & CO_FL_WAIT_L4_CONN)
+ conn->flags &= ~CO_FL_WAIT_L4_CONN;
+ __conn_sock_stop_send(conn);
+ __conn_sock_want_recv(conn);
+ fd_cant_recv(conn->t.sock.fd);
+ return 0;
+ }
+ else if (ret == SSL_ERROR_SYSCALL) {
+ /* if errno is zero, the connection was established successfully */
+ if (!errno && conn->flags & CO_FL_WAIT_L4_CONN)
+ conn->flags &= ~CO_FL_WAIT_L4_CONN;
+ if (!conn->err_code) {
+ if (!((SSL *)conn->xprt_ctx)->packet_length) {
+ if (!errno) {
+ if (conn->xprt_st & SSL_SOCK_RECV_HEARTBEAT)
+ conn->err_code = CO_ER_SSL_HANDSHAKE_HB;
+ else
+ conn->err_code = CO_ER_SSL_EMPTY;
+ }
+ else {
+ if (conn->xprt_st & SSL_SOCK_RECV_HEARTBEAT)
+ conn->err_code = CO_ER_SSL_HANDSHAKE_HB;
+ else
+ conn->err_code = CO_ER_SSL_ABORT;
+ }
+ }
+ else {
+ if (conn->xprt_st & SSL_SOCK_RECV_HEARTBEAT)
+ conn->err_code = CO_ER_SSL_HANDSHAKE_HB;
+ else
+ conn->err_code = CO_ER_SSL_HANDSHAKE;
+ }
+ }
+ goto out_error;
+ }
+ else {
+ /* Fail on all other handshake errors */
+ /* Note: OpenSSL may leave unread bytes in the socket's
+ * buffer, causing an RST to be emitted upon close() on
+ * TCP sockets. We first try to drain possibly pending
+ * data to avoid this as much as possible.
+ */
+ conn_sock_drain(conn);
+ if (!conn->err_code)
+ conn->err_code = (conn->xprt_st & SSL_SOCK_RECV_HEARTBEAT) ?
+ CO_ER_SSL_KILLED_HB : CO_ER_SSL_HANDSHAKE;
+ goto out_error;
+ }
+ }
+ /* read some data: consider handshake completed */
+ goto reneg_ok;
+ }
+
+ ret = SSL_do_handshake(conn->xprt_ctx);
+ if (ret != 1) {
+ /* handshake did not complete, let's find why */
+ ret = SSL_get_error(conn->xprt_ctx, ret);
+
+ if (ret == SSL_ERROR_WANT_WRITE) {
+ /* SSL handshake needs to write, L4 connection may not be ready */
+ __conn_sock_stop_recv(conn);
+ __conn_sock_want_send(conn);
+ fd_cant_send(conn->t.sock.fd);
+ return 0;
+ }
+ else if (ret == SSL_ERROR_WANT_READ) {
+ /* SSL handshake needs to read, L4 connection is ready */
+ if (conn->flags & CO_FL_WAIT_L4_CONN)
+ conn->flags &= ~CO_FL_WAIT_L4_CONN;
+ __conn_sock_stop_send(conn);
+ __conn_sock_want_recv(conn);
+ fd_cant_recv(conn->t.sock.fd);
+ return 0;
+ }
+ else if (ret == SSL_ERROR_SYSCALL) {
+ /* if errno is zero, the connection was established successfully */
+ if (!errno && conn->flags & CO_FL_WAIT_L4_CONN)
+ conn->flags &= ~CO_FL_WAIT_L4_CONN;
+
+ if (!((SSL *)conn->xprt_ctx)->packet_length) {
+ if (!errno) {
+ if (conn->xprt_st & SSL_SOCK_RECV_HEARTBEAT)
+ conn->err_code = CO_ER_SSL_HANDSHAKE_HB;
+ else
+ conn->err_code = CO_ER_SSL_EMPTY;
+ }
+ else {
+ if (conn->xprt_st & SSL_SOCK_RECV_HEARTBEAT)
+ conn->err_code = CO_ER_SSL_HANDSHAKE_HB;
+ else
+ conn->err_code = CO_ER_SSL_ABORT;
+ }
+ }
+ else {
+ if (conn->xprt_st & SSL_SOCK_RECV_HEARTBEAT)
+ conn->err_code = CO_ER_SSL_HANDSHAKE_HB;
+ else
+ conn->err_code = CO_ER_SSL_HANDSHAKE;
+ }
+ goto out_error;
+ }
+ else {
+ /* Fail on all other handshake errors */
+ /* Note: OpenSSL may leave unread bytes in the socket's
+ * buffer, causing an RST to be emitted upon close() on
+ * TCP sockets. We first try to drain possibly pending
+ * data to avoid this as much as possible.
+ */
+ conn_sock_drain(conn);
+ if (!conn->err_code)
+ conn->err_code = (conn->xprt_st & SSL_SOCK_RECV_HEARTBEAT) ?
+ CO_ER_SSL_KILLED_HB : CO_ER_SSL_HANDSHAKE;
+ goto out_error;
+ }
+ }
+
+reneg_ok:
+ /* Handshake succeeded */
+ if (!SSL_session_reused(conn->xprt_ctx)) {
+ if (objt_server(conn->target)) {
+ update_freq_ctr(&global.ssl_be_keys_per_sec, 1);
+ if (global.ssl_be_keys_per_sec.curr_ctr > global.ssl_be_keys_max)
+ global.ssl_be_keys_max = global.ssl_be_keys_per_sec.curr_ctr;
+
+ /* check if session was reused, if not store current session on server for reuse */
+ if (objt_server(conn->target)->ssl_ctx.reused_sess)
+ SSL_SESSION_free(objt_server(conn->target)->ssl_ctx.reused_sess);
+
+ if (!(objt_server(conn->target)->ssl_ctx.options & SRV_SSL_O_NO_REUSE))
+ objt_server(conn->target)->ssl_ctx.reused_sess = SSL_get1_session(conn->xprt_ctx);
+ }
+ else {
+ update_freq_ctr(&global.ssl_fe_keys_per_sec, 1);
+ if (global.ssl_fe_keys_per_sec.curr_ctr > global.ssl_fe_keys_max)
+ global.ssl_fe_keys_max = global.ssl_fe_keys_per_sec.curr_ctr;
+ }
+ }
+
+ /* The connection is now established at both layers, it's time to leave */
+ conn->flags &= ~(flag | CO_FL_WAIT_L4_CONN | CO_FL_WAIT_L6_CONN);
+ return 1;
+
+ out_error:
+ /* Clear openssl global errors stack */
+ ERR_clear_error();
+
+ /* free resumed session if exists */
+ if (objt_server(conn->target) && objt_server(conn->target)->ssl_ctx.reused_sess) {
+ SSL_SESSION_free(objt_server(conn->target)->ssl_ctx.reused_sess);
+ objt_server(conn->target)->ssl_ctx.reused_sess = NULL;
+ }
+
+ /* Fail on all other handshake errors */
+ conn->flags |= CO_FL_ERROR;
+ if (!conn->err_code)
+ conn->err_code = CO_ER_SSL_HANDSHAKE;
+ return 0;
+}
+
+/* Receive up to <count> bytes from connection <conn>'s socket and store them
+ * into buffer <buf>. Only one call to SSL_read() is performed, unless the
+ * buffer wraps, in which case a second call may be performed. The connection's
+ * flags are updated with whatever special event is detected (error, read0,
+ * empty). The caller is responsible for taking care of those events and
+ * avoiding the call if inappropriate. The function does not call the
+ * connection's polling update function, so the caller is responsible for this.
+ */
+static int ssl_sock_to_buf(struct connection *conn, struct buffer *buf, int count)
+{
+ int ret, done = 0;
+ int try;
+
+ if (!conn->xprt_ctx)
+ goto out_error;
+
+ if (conn->flags & CO_FL_HANDSHAKE)
+ /* a handshake was requested */
+ return 0;
+
+ /* let's realign the buffer to optimize I/O */
+ if (buffer_empty(buf))
+ buf->p = buf->data;
+
+ /* read the largest possible block. For this, we perform only one call
+ * to recv() unless the buffer wraps and we exactly fill the first hunk,
+ * in which case we accept to do it once again. A new attempt is made on
+ * EINTR too.
+ */
+ while (count > 0) {
+ /* first check if we have some room after p+i */
+ try = buf->data + buf->size - (buf->p + buf->i);
+ /* otherwise continue between data and p-o */
+ if (try <= 0) {
+ try = buf->p - (buf->data + buf->o);
+ if (try <= 0)
+ break;
+ }
+ if (try > count)
+ try = count;
+
+ ret = SSL_read(conn->xprt_ctx, bi_end(buf), try);
+ if (conn->flags & CO_FL_ERROR) {
+ /* CO_FL_ERROR may be set by ssl_sock_infocbk */
+ goto out_error;
+ }
+ if (ret > 0) {
+ buf->i += ret;
+ done += ret;
+ if (ret < try)
+ break;
+ count -= ret;
+ }
+ else if (ret == 0) {
+ ret = SSL_get_error(conn->xprt_ctx, ret);
+ if (ret != SSL_ERROR_ZERO_RETURN) {
+ /* error on protocol or underlying transport */
+ if ((ret != SSL_ERROR_SYSCALL)
+ || (errno && (errno != EAGAIN)))
+ conn->flags |= CO_FL_ERROR;
+
+ /* Clear openssl global errors stack */
+ ERR_clear_error();
+ }
+ goto read0;
+ }
+ else {
+ ret = SSL_get_error(conn->xprt_ctx, ret);
+ if (ret == SSL_ERROR_WANT_WRITE) {
+ /* handshake is running, and it needs to enable write */
+ conn->flags |= CO_FL_SSL_WAIT_HS;
+ __conn_sock_want_send(conn);
+ break;
+ }
+ else if (ret == SSL_ERROR_WANT_READ) {
+ if (SSL_renegotiate_pending(conn->xprt_ctx)) {
+ /* handshake is running, and it may need to re-enable read */
+ conn->flags |= CO_FL_SSL_WAIT_HS;
+ __conn_sock_want_recv(conn);
+ break;
+ }
+ /* we need to poll to retry a read later */
+ fd_cant_recv(conn->t.sock.fd);
+ break;
+ }
+ /* otherwise it's a real error */
+ goto out_error;
+ }
+ }
+ return done;
+
+ read0:
+ conn_sock_read0(conn);
+ return done;
+ out_error:
+ /* Clear openssl global errors stack */
+ ERR_clear_error();
+
+ conn->flags |= CO_FL_ERROR;
+ return done;
+}
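The two-region sizing step at the top of the read loop above can be sketched in isolation. This is a minimal illustration, not HAProxy code: `contig_room()` is a hypothetical helper mirroring the computation, with the pointer arithmetic replaced by offsets into the storage area.

```c
#include <assert.h>

/* Hypothetical mirror of the sizing step in ssl_sock_to_buf(): <size> is
 * the buffer capacity, <p_off> the offset of ->p inside ->data, <i> the
 * input byte count and <o> the output byte count. First try the room
 * after p+i; if there is none, fall back to the wrapped region between
 * data+o and p. Returns 0 when the buffer is full. */
static int contig_room(int size, int p_off, int i, int o)
{
	int try = size - (p_off + i);   /* room after the input area */

	if (try <= 0) {
		try = p_off - o;        /* wrapped: room between data+o and p */
		if (try <= 0)
			return 0;       /* buffer full */
	}
	return try;
}
```

Each read is then capped to min(try, count), exactly as in the loop above.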
+
+
+/* Send all pending bytes from buffer <buf> to connection <conn>'s socket.
+ * <flags> may contain some CO_SFL_* flags to hint the system about other
+ * pending data for example, but these flags are ignored at the moment.
+ * Only one call to send() is performed, unless the buffer wraps, in which case
+ * a second call may be performed. The connection's flags are updated with
+ * whatever special event is detected (error, empty). The caller is responsible
+ * for taking care of those events and avoiding the call if inappropriate. The
+ * function does not call the connection's polling update function, so the caller
+ * is responsible for this.
+ */
+static int ssl_sock_from_buf(struct connection *conn, struct buffer *buf, int flags)
+{
+ int ret, try, done;
+
+ done = 0;
+
+ if (!conn->xprt_ctx)
+ goto out_error;
+
+ if (conn->flags & CO_FL_HANDSHAKE)
+ /* a handshake was requested */
+ return 0;
+
+ /* send the largest possible block. For this we perform only one call
+ * to send() unless the buffer wraps and we exactly fill the first hunk,
+ * in which case we accept to do it once again.
+ */
+ while (buf->o) {
+ try = bo_contig_data(buf);
+
+ if (!(flags & CO_SFL_STREAMER) &&
+ !(conn->xprt_st & SSL_SOCK_SEND_UNLIMITED) &&
+ global.tune.ssl_max_record && try > global.tune.ssl_max_record) {
+ try = global.tune.ssl_max_record;
+ }
+ else {
+ /* we need to keep the information about the fact that
+ * we're not limiting the upcoming send(), because if it
+ * fails, we'll have to retry with at least as much data.
+ */
+ conn->xprt_st |= SSL_SOCK_SEND_UNLIMITED;
+ }
+
+ ret = SSL_write(conn->xprt_ctx, bo_ptr(buf), try);
+
+ if (conn->flags & CO_FL_ERROR) {
+ /* CO_FL_ERROR may be set by ssl_sock_infocbk */
+ goto out_error;
+ }
+ if (ret > 0) {
+ conn->xprt_st &= ~SSL_SOCK_SEND_UNLIMITED;
+
+ buf->o -= ret;
+ done += ret;
+
+ if (likely(buffer_empty(buf)))
+ /* optimize data alignment in the buffer */
+ buf->p = buf->data;
+
+ /* if the system buffer is full, don't insist */
+ if (ret < try)
+ break;
+ }
+ else {
+ ret = SSL_get_error(conn->xprt_ctx, ret);
+ if (ret == SSL_ERROR_WANT_WRITE) {
+ if (SSL_renegotiate_pending(conn->xprt_ctx)) {
+ /* handshake is running, and it may need to re-enable write */
+ conn->flags |= CO_FL_SSL_WAIT_HS;
+ __conn_sock_want_send(conn);
+ break;
+ }
+ /* we need to poll to retry a write later */
+ fd_cant_send(conn->t.sock.fd);
+ break;
+ }
+ else if (ret == SSL_ERROR_WANT_READ) {
+ /* handshake is running, and it needs to enable read */
+ conn->flags |= CO_FL_SSL_WAIT_HS;
+ __conn_sock_want_recv(conn);
+ break;
+ }
+ goto out_error;
+ }
+ }
+ return done;
+
+ out_error:
+ /* Clear openssl global errors stack */
+ ERR_clear_error();
+
+ conn->flags |= CO_FL_ERROR;
+ return done;
+}
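The record-size clamp near the top of the send loop can be shown standalone. A minimal sketch, with an assumed flag value (not the real bit definition): small sends are capped to the configured maximum record size, but once an uncapped SSL_write() has been attempted it must be retried with at least as much data, so the flag sticks until a capped-size situation no longer applies.

```c
#include <assert.h>

#define SSL_SOCK_SEND_UNLIMITED 0x1  /* hypothetical flag value for illustration */

/* Sketch of the clamp in ssl_sock_from_buf(): returns how many bytes the
 * next SSL_write() should attempt, and records in <st> whether the send
 * was left unlimited (in which case later retries must not shrink it). */
static int clamp_send(int pending, int max_record, int streamer, unsigned *st)
{
	if (!streamer && !(*st & SSL_SOCK_SEND_UNLIMITED) &&
	    max_record && pending > max_record)
		return max_record;

	*st |= SSL_SOCK_SEND_UNLIMITED;  /* remember: this send was not limited */
	return pending;
}
```

In the real code the flag is cleared again once a write succeeds, as done after the `ret > 0` branch above.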
+
+static void ssl_sock_close(struct connection *conn) {
+
+ if (conn->xprt_ctx) {
+ SSL_free(conn->xprt_ctx);
+ conn->xprt_ctx = NULL;
+ sslconns--;
+ }
+}
+
+/* This function tries to perform a clean shutdown on an SSL connection, and in
+ * any case, flags the connection as reusable if no handshake was in progress.
+ */
+static void ssl_sock_shutw(struct connection *conn, int clean)
+{
+ if (conn->flags & CO_FL_HANDSHAKE)
+ return;
+ /* no handshake was in progress, try a clean ssl shutdown */
+ if (clean && (SSL_shutdown(conn->xprt_ctx) <= 0)) {
+ /* Clear openssl global errors stack */
+ ERR_clear_error();
+ }
+
+ /* force flag on ssl to keep session in cache regardless of the shutdown result */
+ SSL_set_shutdown(conn->xprt_ctx, SSL_SENT_SHUTDOWN);
+}
+
+/* used for logging, may be changed for a sample fetch later */
+const char *ssl_sock_get_cipher_name(struct connection *conn)
+{
+ if (!conn->xprt && !conn->xprt_ctx)
+ return NULL;
+ return SSL_get_cipher_name(conn->xprt_ctx);
+}
+
+/* used for logging, may be changed for a sample fetch later */
+const char *ssl_sock_get_proto_version(struct connection *conn)
+{
+ if (!conn->xprt && !conn->xprt_ctx)
+ return NULL;
+ return SSL_get_version(conn->xprt_ctx);
+}
+
+/* Extract a serial from a cert, and copy it to a chunk.
+ * Returns 1 if serial is found and copied, 0 if no serial found and
+ * -1 if output is not large enough.
+ */
+static int
+ssl_sock_get_serial(X509 *crt, struct chunk *out)
+{
+ ASN1_INTEGER *serial;
+
+ serial = X509_get_serialNumber(crt);
+ if (!serial)
+ return 0;
+
+ if (out->size < serial->length)
+ return -1;
+
+ memcpy(out->str, serial->data, serial->length);
+ out->len = serial->length;
+ return 1;
+}
+
+/* Extract a cert to DER, and copy it to a chunk.
+ * Returns 1 if the cert is found and copied, 0 on DER conversion failure
+ * and -1 if the output is not large enough.
+ */
+static int
+ssl_sock_crt2der(X509 *crt, struct chunk *out)
+{
+ int len;
+ unsigned char *p = (unsigned char *)out->str;
+
+ len = i2d_X509(crt, NULL);
+ if (len <= 0)
+ return 0;
+
+ if (out->size < len)
+ return -1;
+
+ i2d_X509(crt, &p);
+ out->len = len;
+ return 1;
+}
+
+
+/* Copy a date in ASN1_UTCTIME format into struct chunk <out>.
+ * Returns 1 if the date is found and copied, 0 if no valid time found
+ * and -1 if output is not large enough.
+ */
+static int
+ssl_sock_get_time(ASN1_TIME *tm, struct chunk *out)
+{
+ if (tm->type == V_ASN1_GENERALIZEDTIME) {
+ ASN1_GENERALIZEDTIME *gentm = (ASN1_GENERALIZEDTIME *)tm;
+
+ if (gentm->length < 12)
+ return 0;
+ if (gentm->data[0] != 0x32 || gentm->data[1] != 0x30)
+ return 0;
+ if (out->size < gentm->length-2)
+ return -1;
+
+ memcpy(out->str, gentm->data+2, gentm->length-2);
+ out->len = gentm->length-2;
+ return 1;
+ }
+ else if (tm->type == V_ASN1_UTCTIME) {
+ ASN1_UTCTIME *utctm = (ASN1_UTCTIME *)tm;
+
+ if (utctm->length < 10)
+ return 0;
+ if (utctm->data[0] >= 0x35)
+ return 0;
+ if (out->size < utctm->length)
+ return -1;
+
+ memcpy(out->str, utctm->data, utctm->length);
+ out->len = utctm->length;
+ return 1;
+ }
+
+ return 0;
+}
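The century-stripping step for GENERALIZEDTIME values can be illustrated with plain strings. A sketch under the same assumption as the `data[0]`/`data[1]` check above, namely that only 20xx dates are representable; the helper name is hypothetical.

```c
#include <assert.h>
#include <string.h>

/* Reduce a GENERALIZEDTIME string "YYYYMMDDHHMMSSZ" from the 21st century
 * to the UTCTIME form "YYMMDDHHMMSSZ" by dropping the leading "20", so
 * both ASN.1 encodings yield the same output format. Returns 1 on
 * success, 0 if the date is not representable (not 20xx) or the output
 * buffer is too small. */
static int gentime_to_utctime(const char *gen, char *out, size_t outsz)
{
	size_t len = strlen(gen);

	if (len < 12 || gen[0] != '2' || gen[1] != '0')
		return 0;               /* same check as data[0]/data[1] above */
	if (outsz < len - 2 + 1)
		return 0;
	memcpy(out, gen + 2, len - 2);
	out[len - 2] = '\0';
	return 1;
}
```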
+
+/* Extract an entry from a X509_NAME and copy its value to an output chunk.
+ * Returns 1 if entry found, 0 if entry not found, or -1 if output not large enough.
+ */
+static int
+ssl_sock_get_dn_entry(X509_NAME *a, const struct chunk *entry, int pos, struct chunk *out)
+{
+ X509_NAME_ENTRY *ne;
+ int i, j, n;
+ int cur = 0;
+ const char *s;
+ char tmp[128];
+
+ out->len = 0;
+ for (i = 0; i < sk_X509_NAME_ENTRY_num(a->entries); i++) {
+ if (pos < 0)
+ j = (sk_X509_NAME_ENTRY_num(a->entries)-1) - i;
+ else
+ j = i;
+
+ ne = sk_X509_NAME_ENTRY_value(a->entries, j);
+ n = OBJ_obj2nid(ne->object);
+ if ((n == NID_undef) || ((s = OBJ_nid2sn(n)) == NULL)) {
+ i2t_ASN1_OBJECT(tmp, sizeof(tmp), ne->object);
+ s = tmp;
+ }
+
+ if (chunk_strcasecmp(entry, s) != 0)
+ continue;
+
+ if (pos < 0)
+ cur--;
+ else
+ cur++;
+
+ if (cur != pos)
+ continue;
+
+ if (ne->value->length > out->size)
+ return -1;
+
+ memcpy(out->str, ne->value->data, ne->value->length);
+ out->len = ne->value->length;
+ return 1;
+ }
+
+ return 0;
+}
+
+/* Extract and format the full DN from a X509_NAME and copy the result into a chunk.
+ * Returns 1 if dn entries exist, 0 if no dn entry found or -1 if output is not large enough.
+ */
+static int
+ssl_sock_get_dn_oneline(X509_NAME *a, struct chunk *out)
+{
+ X509_NAME_ENTRY *ne;
+ int i, n, ln;
+ int l = 0;
+ const char *s;
+ char *p;
+ char tmp[128];
+
+ out->len = 0;
+ p = out->str;
+ for (i = 0; i < sk_X509_NAME_ENTRY_num(a->entries); i++) {
+ ne = sk_X509_NAME_ENTRY_value(a->entries, i);
+ n = OBJ_obj2nid(ne->object);
+ if ((n == NID_undef) || ((s = OBJ_nid2sn(n)) == NULL)) {
+ i2t_ASN1_OBJECT(tmp, sizeof(tmp), ne->object);
+ s = tmp;
+ }
+ ln = strlen(s);
+
+ l += 1 + ln + 1 + ne->value->length;
+ if (l > out->size)
+ return -1;
+ out->len = l;
+
+ *(p++)='/';
+ memcpy(p, s, ln);
+ p += ln;
+ *(p++)='=';
+ memcpy(p, ne->value->data, ne->value->length);
+ p += ne->value->length;
+ }
+
+ if (!out->len)
+ return 0;
+
+ return 1;
+}
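The resulting format is a one-line DN such as `/C=FR/CN=www.example.com`. A minimal sketch of the same concatenation with a stand-in `struct rdn` array instead of the X509_NAME entry stack (hypothetical types, not OpenSSL's):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for one X509_NAME entry: short name and value. */
struct rdn { const char *sn; const char *val; };

/* Emit each entry as "/<short-name>=<value>", as ssl_sock_get_dn_oneline()
 * does. Returns 1 on success, 0 if there are no entries, -1 if the output
 * buffer is not large enough. */
static int dn_oneline(const struct rdn *e, int n, char *out, size_t outsz)
{
	size_t l = 0;
	int i;

	for (i = 0; i < n; i++) {
		size_t need = 1 + strlen(e[i].sn) + 1 + strlen(e[i].val);

		if (l + need + 1 > outsz)
			return -1;      /* output not large enough */
		l += sprintf(out + l, "/%s=%s", e[i].sn, e[i].val);
	}
	return l ? 1 : 0;
}
```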
+
+char *ssl_sock_get_version(struct connection *conn)
+{
+ if (!ssl_sock_is_ssl(conn))
+ return NULL;
+
+ return (char *)SSL_get_version(conn->xprt_ctx);
+}
+
+void ssl_sock_set_servername(struct connection *conn, const char *hostname)
+{
+#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
+ if (!ssl_sock_is_ssl(conn))
+ return;
+
+ SSL_set_tlsext_host_name(conn->xprt_ctx, hostname);
+#endif
+}
+
+/* Extract peer certificate's common name into the chunk dest
+ * Returns
+ * the len of the extracted common name
+ * or 0 if no CN found in DN
+ * or -1 on error case (i.e. no peer certificate)
+ */
+int ssl_sock_get_remote_common_name(struct connection *conn, struct chunk *dest)
+{
+ X509 *crt = NULL;
+ X509_NAME *name;
+ const char find_cn[] = "CN";
+ const struct chunk find_cn_chunk = {
+ .str = (char *)&find_cn,
+ .len = sizeof(find_cn)-1
+ };
+ int result = -1;
+
+ if (!ssl_sock_is_ssl(conn))
+ goto out;
+
+ /* SSL_get_peer_certificate increases the X509 refcount */
+ crt = SSL_get_peer_certificate(conn->xprt_ctx);
+ if (!crt)
+ goto out;
+
+ name = X509_get_subject_name(crt);
+ if (!name)
+ goto out;
+
+ result = ssl_sock_get_dn_entry(name, &find_cn_chunk, 1, dest);
+out:
+ if (crt)
+ X509_free(crt);
+
+ return result;
+}
+
+/* returns 1 if client passed a certificate for this session, 0 if not */
+int ssl_sock_get_cert_used_sess(struct connection *conn)
+{
+ X509 *crt = NULL;
+
+ if (!ssl_sock_is_ssl(conn))
+ return 0;
+
+ /* SSL_get_peer_certificate increases the X509 refcount */
+ crt = SSL_get_peer_certificate(conn->xprt_ctx);
+ if (!crt)
+ return 0;
+
+ X509_free(crt);
+ return 1;
+}
+
+/* returns 1 if client passed a certificate for this connection, 0 if not */
+int ssl_sock_get_cert_used_conn(struct connection *conn)
+{
+ if (!ssl_sock_is_ssl(conn))
+ return 0;
+
+ return SSL_SOCK_ST_FL_VERIFY_DONE & conn->xprt_st ? 1 : 0;
+}
+
+/* returns result from SSL verify */
+unsigned int ssl_sock_get_verify_result(struct connection *conn)
+{
+ if (!ssl_sock_is_ssl(conn))
+ return (unsigned int)X509_V_ERR_APPLICATION_VERIFICATION;
+
+ return (unsigned int)SSL_get_verify_result(conn->xprt_ctx);
+}
+
+/***** Below are some sample fetching functions for ACL/patterns *****/
+
+/* boolean, returns true if client cert was present */
+static int
+smp_fetch_ssl_fc_has_crt(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct connection *conn;
+
+ conn = objt_conn(smp->sess->origin);
+ if (!conn || conn->xprt != &ssl_sock)
+ return 0;
+
+ if (!(conn->flags & CO_FL_CONNECTED)) {
+ smp->flags |= SMP_F_MAY_CHANGE;
+ return 0;
+ }
+
+ smp->flags = 0;
+ smp->data.type = SMP_T_BOOL;
+ smp->data.u.sint = SSL_SOCK_ST_FL_VERIFY_DONE & conn->xprt_st ? 1 : 0;
+
+ return 1;
+}
+
+/* binary, returns a certificate in a binary chunk (der/raw).
+ * The 5th keyword char is used to know if SSL_get_certificate or SSL_get_peer_certificate
+ * should be used.
+ */
+static int
+smp_fetch_ssl_x_der(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int cert_peer = (kw[4] == 'c') ? 1 : 0;
+ X509 *crt = NULL;
+ int ret = 0;
+ struct chunk *smp_trash;
+ struct connection *conn;
+
+ conn = objt_conn(smp->sess->origin);
+ if (!conn || conn->xprt != &ssl_sock)
+ return 0;
+
+ if (!(conn->flags & CO_FL_CONNECTED)) {
+ smp->flags |= SMP_F_MAY_CHANGE;
+ return 0;
+ }
+
+ if (cert_peer)
+ crt = SSL_get_peer_certificate(conn->xprt_ctx);
+ else
+ crt = SSL_get_certificate(conn->xprt_ctx);
+
+ if (!crt)
+ goto out;
+
+ smp_trash = get_trash_chunk();
+ if (ssl_sock_crt2der(crt, smp_trash) <= 0)
+ goto out;
+
+ smp->data.u.str = *smp_trash;
+ smp->data.type = SMP_T_BIN;
+ ret = 1;
+out:
+ /* SSL_get_peer_certificate increases the X509 refcount */
+ if (cert_peer && crt)
+ X509_free(crt);
+ return ret;
+}
+
+/* binary, returns serial of certificate in a binary chunk.
+ * The 5th keyword char is used to know if SSL_get_certificate or SSL_get_peer_certificate
+ * should be used.
+ */
+static int
+smp_fetch_ssl_x_serial(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int cert_peer = (kw[4] == 'c') ? 1 : 0;
+ X509 *crt = NULL;
+ int ret = 0;
+ struct chunk *smp_trash;
+ struct connection *conn;
+
+ conn = objt_conn(smp->sess->origin);
+ if (!conn || conn->xprt != &ssl_sock)
+ return 0;
+
+ if (!(conn->flags & CO_FL_CONNECTED)) {
+ smp->flags |= SMP_F_MAY_CHANGE;
+ return 0;
+ }
+
+ if (cert_peer)
+ crt = SSL_get_peer_certificate(conn->xprt_ctx);
+ else
+ crt = SSL_get_certificate(conn->xprt_ctx);
+
+ if (!crt)
+ goto out;
+
+ smp_trash = get_trash_chunk();
+ if (ssl_sock_get_serial(crt, smp_trash) <= 0)
+ goto out;
+
+ smp->data.u.str = *smp_trash;
+ smp->data.type = SMP_T_BIN;
+ ret = 1;
+out:
+ /* SSL_get_peer_certificate increases the X509 refcount */
+ if (cert_peer && crt)
+ X509_free(crt);
+ return ret;
+}
+
+/* binary, returns the certificate's SHA-1 fingerprint (SHA-1 hash of its DER encoding) in a binary chunk.
+ * The 5th keyword char is used to know if SSL_get_certificate or SSL_get_peer_certificate
+ * should be used.
+ */
+static int
+smp_fetch_ssl_x_sha1(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int cert_peer = (kw[4] == 'c') ? 1 : 0;
+ X509 *crt = NULL;
+ const EVP_MD *digest;
+ int ret = 0;
+ struct chunk *smp_trash;
+ struct connection *conn;
+
+ conn = objt_conn(smp->sess->origin);
+ if (!conn || conn->xprt != &ssl_sock)
+ return 0;
+
+ if (!(conn->flags & CO_FL_CONNECTED)) {
+ smp->flags |= SMP_F_MAY_CHANGE;
+ return 0;
+ }
+
+ if (cert_peer)
+ crt = SSL_get_peer_certificate(conn->xprt_ctx);
+ else
+ crt = SSL_get_certificate(conn->xprt_ctx);
+ if (!crt)
+ goto out;
+
+ smp_trash = get_trash_chunk();
+ digest = EVP_sha1();
+ X509_digest(crt, digest, (unsigned char *)smp_trash->str, (unsigned int *)&smp_trash->len);
+
+ smp->data.u.str = *smp_trash;
+ smp->data.type = SMP_T_BIN;
+ ret = 1;
+out:
+ /* SSL_get_peer_certificate increases the X509 refcount */
+ if (cert_peer && crt)
+ X509_free(crt);
+ return ret;
+}
+
+/* string, returns certificate's notafter date in ASN1_UTCTIME format.
+ * The 5th keyword char is used to know if SSL_get_certificate or SSL_get_peer_certificate
+ * should be used.
+ */
+static int
+smp_fetch_ssl_x_notafter(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int cert_peer = (kw[4] == 'c') ? 1 : 0;
+ X509 *crt = NULL;
+ int ret = 0;
+ struct chunk *smp_trash;
+ struct connection *conn;
+
+ conn = objt_conn(smp->sess->origin);
+ if (!conn || conn->xprt != &ssl_sock)
+ return 0;
+
+ if (!(conn->flags & CO_FL_CONNECTED)) {
+ smp->flags |= SMP_F_MAY_CHANGE;
+ return 0;
+ }
+
+ if (cert_peer)
+ crt = SSL_get_peer_certificate(conn->xprt_ctx);
+ else
+ crt = SSL_get_certificate(conn->xprt_ctx);
+ if (!crt)
+ goto out;
+
+ smp_trash = get_trash_chunk();
+ if (ssl_sock_get_time(X509_get_notAfter(crt), smp_trash) <= 0)
+ goto out;
+
+ smp->data.u.str = *smp_trash;
+ smp->data.type = SMP_T_STR;
+ ret = 1;
+out:
+ /* SSL_get_peer_certificate increases the X509 refcount */
+ if (cert_peer && crt)
+ X509_free(crt);
+ return ret;
+}
+
+/* string, returns the formatted full DN /C=../O=../OU=../CN=.. of the certificate's issuer.
+ * The 5th keyword char is used to know if SSL_get_certificate or SSL_get_peer_certificate
+ * should be used.
+ */
+static int
+smp_fetch_ssl_x_i_dn(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int cert_peer = (kw[4] == 'c') ? 1 : 0;
+ X509 *crt = NULL;
+ X509_NAME *name;
+ int ret = 0;
+ struct chunk *smp_trash;
+ struct connection *conn;
+
+ conn = objt_conn(smp->sess->origin);
+ if (!conn || conn->xprt != &ssl_sock)
+ return 0;
+
+ if (!(conn->flags & CO_FL_CONNECTED)) {
+ smp->flags |= SMP_F_MAY_CHANGE;
+ return 0;
+ }
+
+ if (cert_peer)
+ crt = SSL_get_peer_certificate(conn->xprt_ctx);
+ else
+ crt = SSL_get_certificate(conn->xprt_ctx);
+ if (!crt)
+ goto out;
+
+ name = X509_get_issuer_name(crt);
+ if (!name)
+ goto out;
+
+ smp_trash = get_trash_chunk();
+ if (args && args[0].type == ARGT_STR) {
+ int pos = 1;
+
+ if (args[1].type == ARGT_SINT)
+ pos = args[1].data.sint;
+
+ if (ssl_sock_get_dn_entry(name, &args[0].data.str, pos, smp_trash) <= 0)
+ goto out;
+ }
+ else if (ssl_sock_get_dn_oneline(name, smp_trash) <= 0)
+ goto out;
+
+ smp->data.type = SMP_T_STR;
+ smp->data.u.str = *smp_trash;
+ ret = 1;
+out:
+ /* SSL_get_peer_certificate increases the X509 refcount */
+ if (cert_peer && crt)
+ X509_free(crt);
+ return ret;
+}
+
+/* string, returns notbefore date in ASN1_UTCTIME format.
+ * The 5th keyword char is used to know if SSL_get_certificate or SSL_get_peer_certificate
+ * should be used.
+ */
+static int
+smp_fetch_ssl_x_notbefore(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int cert_peer = (kw[4] == 'c') ? 1 : 0;
+ X509 *crt = NULL;
+ int ret = 0;
+ struct chunk *smp_trash;
+ struct connection *conn;
+
+ conn = objt_conn(smp->sess->origin);
+ if (!conn || conn->xprt != &ssl_sock)
+ return 0;
+
+ if (!(conn->flags & CO_FL_CONNECTED)) {
+ smp->flags |= SMP_F_MAY_CHANGE;
+ return 0;
+ }
+
+ if (cert_peer)
+ crt = SSL_get_peer_certificate(conn->xprt_ctx);
+ else
+ crt = SSL_get_certificate(conn->xprt_ctx);
+ if (!crt)
+ goto out;
+
+ smp_trash = get_trash_chunk();
+ if (ssl_sock_get_time(X509_get_notBefore(crt), smp_trash) <= 0)
+ goto out;
+
+ smp->data.u.str = *smp_trash;
+ smp->data.type = SMP_T_STR;
+ ret = 1;
+out:
+ /* SSL_get_peer_certificate increases the X509 refcount */
+ if (cert_peer && crt)
+ X509_free(crt);
+ return ret;
+}
+
+/* string, returns the formatted full DN /C=../O=../OU=../CN=.. of the certificate's subject.
+ * The 5th keyword char is used to know if SSL_get_certificate or SSL_get_peer_certificate
+ * should be used.
+ */
+static int
+smp_fetch_ssl_x_s_dn(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int cert_peer = (kw[4] == 'c') ? 1 : 0;
+ X509 *crt = NULL;
+ X509_NAME *name;
+ int ret = 0;
+ struct chunk *smp_trash;
+ struct connection *conn;
+
+ conn = objt_conn(smp->sess->origin);
+ if (!conn || conn->xprt != &ssl_sock)
+ return 0;
+
+ if (!(conn->flags & CO_FL_CONNECTED)) {
+ smp->flags |= SMP_F_MAY_CHANGE;
+ return 0;
+ }
+
+ if (cert_peer)
+ crt = SSL_get_peer_certificate(conn->xprt_ctx);
+ else
+ crt = SSL_get_certificate(conn->xprt_ctx);
+ if (!crt)
+ goto out;
+
+ name = X509_get_subject_name(crt);
+ if (!name)
+ goto out;
+
+ smp_trash = get_trash_chunk();
+ if (args && args[0].type == ARGT_STR) {
+ int pos = 1;
+
+ if (args[1].type == ARGT_SINT)
+ pos = args[1].data.sint;
+
+ if (ssl_sock_get_dn_entry(name, &args[0].data.str, pos, smp_trash) <= 0)
+ goto out;
+ }
+ else if (ssl_sock_get_dn_oneline(name, smp_trash) <= 0)
+ goto out;
+
+ smp->data.type = SMP_T_STR;
+ smp->data.u.str = *smp_trash;
+ ret = 1;
+out:
+ /* SSL_get_peer_certificate increases the X509 refcount */
+ if (cert_peer && crt)
+ X509_free(crt);
+ return ret;
+}
+
+/* boolean, returns true if the current session uses a client certificate */
+static int
+smp_fetch_ssl_c_used(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ X509 *crt;
+ struct connection *conn;
+
+ conn = objt_conn(smp->sess->origin);
+ if (!conn || conn->xprt != &ssl_sock)
+ return 0;
+
+ if (!(conn->flags & CO_FL_CONNECTED)) {
+ smp->flags |= SMP_F_MAY_CHANGE;
+ return 0;
+ }
+
+ /* SSL_get_peer_certificate returns a pointer to an allocated X509 struct */
+ crt = SSL_get_peer_certificate(conn->xprt_ctx);
+ if (crt) {
+ X509_free(crt);
+ }
+
+ smp->data.type = SMP_T_BOOL;
+ smp->data.u.sint = (crt != NULL);
+ return 1;
+}
+
+/* integer, returns the certificate version
+ * The 5th keyword char is used to know if SSL_get_certificate or SSL_get_peer_certificate
+ * should be used.
+ */
+static int
+smp_fetch_ssl_x_version(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int cert_peer = (kw[4] == 'c') ? 1 : 0;
+ X509 *crt;
+ struct connection *conn;
+
+ conn = objt_conn(smp->sess->origin);
+ if (!conn || conn->xprt != &ssl_sock)
+ return 0;
+
+ if (!(conn->flags & CO_FL_CONNECTED)) {
+ smp->flags |= SMP_F_MAY_CHANGE;
+ return 0;
+ }
+
+ if (cert_peer)
+ crt = SSL_get_peer_certificate(conn->xprt_ctx);
+ else
+ crt = SSL_get_certificate(conn->xprt_ctx);
+ if (!crt)
+ return 0;
+
+ smp->data.u.sint = (unsigned int)(1 + X509_get_version(crt));
+ /* SSL_get_peer_certificate increases the X509 refcount */
+ if (cert_peer)
+ X509_free(crt);
+ smp->data.type = SMP_T_SINT;
+
+ return 1;
+}
+
+/* string, returns the certificate's signature algorithm.
+ * The 5th keyword char is used to know if SSL_get_certificate or SSL_get_peer_certificate
+ * should be used.
+ */
+static int
+smp_fetch_ssl_x_sig_alg(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int cert_peer = (kw[4] == 'c') ? 1 : 0;
+ X509 *crt;
+ int nid;
+ struct connection *conn;
+
+ conn = objt_conn(smp->sess->origin);
+ if (!conn || conn->xprt != &ssl_sock)
+ return 0;
+
+ if (!(conn->flags & CO_FL_CONNECTED)) {
+ smp->flags |= SMP_F_MAY_CHANGE;
+ return 0;
+ }
+
+ if (cert_peer)
+ crt = SSL_get_peer_certificate(conn->xprt_ctx);
+ else
+ crt = SSL_get_certificate(conn->xprt_ctx);
+ if (!crt)
+ return 0;
+
+ nid = OBJ_obj2nid((ASN1_OBJECT *)(crt->cert_info->signature->algorithm));
+
+ smp->data.u.str.str = (char *)OBJ_nid2sn(nid);
+ if (!smp->data.u.str.str) {
+ /* SSL_get_peer_certificate increases the X509 refcount */
+ if (cert_peer)
+ X509_free(crt);
+ return 0;
+ }
+
+ smp->data.type = SMP_T_STR;
+ smp->flags |= SMP_F_CONST;
+ smp->data.u.str.len = strlen(smp->data.u.str.str);
+ /* SSL_get_peer_certificate increases the X509 refcount */
+ if (cert_peer)
+ X509_free(crt);
+
+ return 1;
+}
+
+/* string, returns the certificate's key algorithm.
+ * The 5th keyword char is used to know if SSL_get_certificate or SSL_get_peer_certificate
+ * should be used.
+ */
+static int
+smp_fetch_ssl_x_key_alg(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int cert_peer = (kw[4] == 'c') ? 1 : 0;
+ X509 *crt;
+ int nid;
+ struct connection *conn;
+
+ conn = objt_conn(smp->sess->origin);
+ if (!conn || conn->xprt != &ssl_sock)
+ return 0;
+
+ if (!(conn->flags & CO_FL_CONNECTED)) {
+ smp->flags |= SMP_F_MAY_CHANGE;
+ return 0;
+ }
+
+ if (cert_peer)
+ crt = SSL_get_peer_certificate(conn->xprt_ctx);
+ else
+ crt = SSL_get_certificate(conn->xprt_ctx);
+ if (!crt)
+ return 0;
+
+ nid = OBJ_obj2nid((ASN1_OBJECT *)(crt->cert_info->key->algor->algorithm));
+
+ smp->data.u.str.str = (char *)OBJ_nid2sn(nid);
+ if (!smp->data.u.str.str) {
+ /* SSL_get_peer_certificate increases the X509 refcount */
+ if (cert_peer)
+ X509_free(crt);
+ return 0;
+ }
+
+ smp->data.type = SMP_T_STR;
+ smp->flags |= SMP_F_CONST;
+ smp->data.u.str.len = strlen(smp->data.u.str.str);
+ if (cert_peer)
+ X509_free(crt);
+
+ return 1;
+}
+
+/* boolean, returns true if front conn. transport layer is SSL.
+ * This function is also usable on backend conn if the fetch keyword 5th
+ * char is 'b'.
+ */
+static int
+smp_fetch_ssl_fc(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int back_conn = (kw[4] == 'b') ? 1 : 0;
+ struct connection *conn = objt_conn(smp->strm->si[back_conn].end);
+
+ smp->data.type = SMP_T_BOOL;
+ smp->data.u.sint = (conn && conn->xprt == &ssl_sock);
+ return 1;
+}
+
+/* boolean, returns true if the client presented an SNI */
+static int
+smp_fetch_ssl_fc_has_sni(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
+ struct connection *conn = objt_conn(smp->sess->origin);
+
+ smp->data.type = SMP_T_BOOL;
+ smp->data.u.sint = (conn && conn->xprt == &ssl_sock) &&
+ conn->xprt_ctx &&
+ SSL_get_servername(conn->xprt_ctx, TLSEXT_NAMETYPE_host_name) != NULL;
+ return 1;
+#else
+ return 0;
+#endif
+}
+
+/* boolean, returns true if client session has been resumed */
+static int
+smp_fetch_ssl_fc_is_resumed(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct connection *conn = objt_conn(smp->sess->origin);
+
+ smp->data.type = SMP_T_BOOL;
+ smp->data.u.sint = (conn && conn->xprt == &ssl_sock) &&
+ conn->xprt_ctx &&
+ SSL_session_reused(conn->xprt_ctx);
+ return 1;
+}
+
+/* string, returns the used cipher if front conn. transport layer is SSL.
+ * This function is also usable on backend conn if the fetch keyword 5th
+ * char is 'b'.
+ */
+static int
+smp_fetch_ssl_fc_cipher(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int back_conn = (kw[4] == 'b') ? 1 : 0;
+ struct connection *conn;
+
+ smp->flags = 0;
+
+ conn = objt_conn(smp->strm->si[back_conn].end);
+ if (!conn || !conn->xprt_ctx || conn->xprt != &ssl_sock)
+ return 0;
+
+ smp->data.u.str.str = (char *)SSL_get_cipher_name(conn->xprt_ctx);
+ if (!smp->data.u.str.str)
+ return 0;
+
+ smp->data.type = SMP_T_STR;
+ smp->flags |= SMP_F_CONST;
+ smp->data.u.str.len = strlen(smp->data.u.str.str);
+
+ return 1;
+}
+
+/* integer, returns the algorithm's keysize if front conn. transport layer
+ * is SSL.
+ * This function is also usable on backend conn if the fetch keyword 5th
+ * char is 'b'.
+ */
+static int
+smp_fetch_ssl_fc_alg_keysize(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int back_conn = (kw[4] == 'b') ? 1 : 0;
+ struct connection *conn;
+ int sint;
+
+ smp->flags = 0;
+
+ conn = objt_conn(smp->strm->si[back_conn].end);
+ if (!conn || !conn->xprt_ctx || conn->xprt != &ssl_sock)
+ return 0;
+
+ if (!SSL_get_cipher_bits(conn->xprt_ctx, &sint))
+ return 0;
+
+ smp->data.u.sint = sint;
+ smp->data.type = SMP_T_SINT;
+
+ return 1;
+}
+
+/* integer, returns the used keysize if front conn. transport layer is SSL.
+ * This function is also usable on backend conn if the fetch keyword 5th
+ * char is 'b'.
+ */
+static int
+smp_fetch_ssl_fc_use_keysize(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int back_conn = (kw[4] == 'b') ? 1 : 0;
+ struct connection *conn;
+
+ smp->flags = 0;
+
+ conn = objt_conn(smp->strm->si[back_conn].end);
+ if (!conn || !conn->xprt_ctx || conn->xprt != &ssl_sock)
+ return 0;
+
+ smp->data.u.sint = (unsigned int)SSL_get_cipher_bits(conn->xprt_ctx, NULL);
+ if (!smp->data.u.sint)
+ return 0;
+
+ smp->data.type = SMP_T_SINT;
+
+ return 1;
+}
+
+#ifdef OPENSSL_NPN_NEGOTIATED
+static int
+smp_fetch_ssl_fc_npn(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct connection *conn;
+
+ smp->flags = SMP_F_CONST;
+ smp->data.type = SMP_T_STR;
+
+ conn = objt_conn(smp->sess->origin);
+ if (!conn || !conn->xprt_ctx || conn->xprt != &ssl_sock)
+ return 0;
+
+ smp->data.u.str.str = NULL;
+ SSL_get0_next_proto_negotiated(conn->xprt_ctx,
+ (const unsigned char **)&smp->data.u.str.str, (unsigned *)&smp->data.u.str.len);
+
+ if (!smp->data.u.str.str)
+ return 0;
+
+ return 1;
+}
+#endif
+
+#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation
+static int
+smp_fetch_ssl_fc_alpn(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct connection *conn;
+
+ smp->flags = SMP_F_CONST;
+ smp->data.type = SMP_T_STR;
+
+ conn = objt_conn(smp->sess->origin);
+ if (!conn || !conn->xprt_ctx || conn->xprt != &ssl_sock)
+ return 0;
+
+ smp->data.u.str.str = NULL;
+ SSL_get0_alpn_selected(conn->xprt_ctx,
+ (const unsigned char **)&smp->data.u.str.str, (unsigned *)&smp->data.u.str.len);
+
+ if (!smp->data.u.str.str)
+ return 0;
+
+ return 1;
+}
+#endif
+
+/* string, returns the used protocol if front conn. transport layer is SSL.
+ * This function is also usable on backend conn if the fetch keyword 5th
+ * char is 'b'.
+ */
+static int
+smp_fetch_ssl_fc_protocol(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ int back_conn = (kw[4] == 'b') ? 1 : 0;
+ struct connection *conn;
+
+ smp->flags = 0;
+
+ conn = objt_conn(smp->strm->si[back_conn].end);
+ if (!conn || !conn->xprt_ctx || conn->xprt != &ssl_sock)
+ return 0;
+
+ smp->data.u.str.str = (char *)SSL_get_version(conn->xprt_ctx);
+ if (!smp->data.u.str.str)
+ return 0;
+
+ smp->data.type = SMP_T_STR;
+ smp->flags = SMP_F_CONST;
+ smp->data.u.str.len = strlen(smp->data.u.str.str);
+
+ return 1;
+}
+
+/* binary, returns the SSL session id if front conn. transport layer is SSL.
+ * This function is also usable on backend conn if the fetch keyword 5th
+ * char is 'b'.
+ */
+static int
+smp_fetch_ssl_fc_session_id(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+#if OPENSSL_VERSION_NUMBER > 0x0090800fL
+ int back_conn = (kw[4] == 'b') ? 1 : 0;
+ SSL_SESSION *ssl_sess;
+ struct connection *conn;
+
+ smp->flags = SMP_F_CONST;
+ smp->data.type = SMP_T_BIN;
+
+ conn = objt_conn(smp->strm->si[back_conn].end);
+ if (!conn || !conn->xprt_ctx || conn->xprt != &ssl_sock)
+ return 0;
+
+ ssl_sess = SSL_get_session(conn->xprt_ctx);
+ if (!ssl_sess)
+ return 0;
+
+ smp->data.u.str.str = (char *)SSL_SESSION_get_id(ssl_sess, (unsigned int *)&smp->data.u.str.len);
+ if (!smp->data.u.str.str || !smp->data.u.str.len)
+ return 0;
+
+ return 1;
+#else
+ return 0;
+#endif
+}
+
+static int
+smp_fetch_ssl_fc_sni(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
+ struct connection *conn;
+
+ smp->flags = SMP_F_CONST;
+ smp->data.type = SMP_T_STR;
+
+ conn = objt_conn(smp->sess->origin);
+ if (!conn || !conn->xprt_ctx || conn->xprt != &ssl_sock)
+ return 0;
+
+ smp->data.u.str.str = (char *)SSL_get_servername(conn->xprt_ctx, TLSEXT_NAMETYPE_host_name);
+ if (!smp->data.u.str.str)
+ return 0;
+
+ smp->data.u.str.len = strlen(smp->data.u.str.str);
+ return 1;
+#else
+ return 0;
+#endif
+}
+
+static int
+smp_fetch_ssl_fc_unique_id(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+#if OPENSSL_VERSION_NUMBER > 0x0090800fL
+ int back_conn = (kw[4] == 'b') ? 1 : 0;
+ struct connection *conn;
+ int finished_len;
+ struct chunk *finished_trash;
+
+ smp->flags = 0;
+
+ conn = objt_conn(smp->strm->si[back_conn].end);
+ if (!conn || !conn->xprt_ctx || conn->xprt != &ssl_sock)
+ return 0;
+
+ if (!(conn->flags & CO_FL_CONNECTED)) {
+ smp->flags |= SMP_F_MAY_CHANGE;
+ return 0;
+ }
+
+ finished_trash = get_trash_chunk();
+ if (!SSL_session_reused(conn->xprt_ctx))
+ finished_len = SSL_get_peer_finished(conn->xprt_ctx, finished_trash->str, finished_trash->size);
+ else
+ finished_len = SSL_get_finished(conn->xprt_ctx, finished_trash->str, finished_trash->size);
+
+ if (!finished_len)
+ return 0;
+
+ finished_trash->len = finished_len;
+ smp->data.u.str = *finished_trash;
+ smp->data.type = SMP_T_BIN;
+
+ return 1;
+#else
+ return 0;
+#endif
+}
+
+/* integer, returns the first verify error in the CA chain of the client certificate. */
+static int
+smp_fetch_ssl_c_ca_err(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct connection *conn;
+
+ conn = objt_conn(smp->sess->origin);
+ if (!conn || conn->xprt != &ssl_sock)
+ return 0;
+
+ if (!(conn->flags & CO_FL_CONNECTED)) {
+ smp->flags = SMP_F_MAY_CHANGE;
+ return 0;
+ }
+
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = (unsigned long long int)SSL_SOCK_ST_TO_CA_ERROR(conn->xprt_st);
+ smp->flags = 0;
+
+ return 1;
+}
+
+/* integer, returns the depth of the first verify error in the CA chain of the client certificate. */
+static int
+smp_fetch_ssl_c_ca_err_depth(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct connection *conn;
+
+ conn = objt_conn(smp->sess->origin);
+ if (!conn || conn->xprt != &ssl_sock)
+ return 0;
+
+ if (!(conn->flags & CO_FL_CONNECTED)) {
+ smp->flags = SMP_F_MAY_CHANGE;
+ return 0;
+ }
+
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = (long long int)SSL_SOCK_ST_TO_CAEDEPTH(conn->xprt_st);
+ smp->flags = 0;
+
+ return 1;
+}
+
+/* integer, returns the first verify error on client certificate */
+static int
+smp_fetch_ssl_c_err(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct connection *conn;
+
+ conn = objt_conn(smp->sess->origin);
+ if (!conn || conn->xprt != &ssl_sock)
+ return 0;
+
+ if (!(conn->flags & CO_FL_CONNECTED)) {
+ smp->flags = SMP_F_MAY_CHANGE;
+ return 0;
+ }
+
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = (long long int)SSL_SOCK_ST_TO_CRTERROR(conn->xprt_st);
+ smp->flags = 0;
+
+ return 1;
+}
+
+/* integer, returns the verify result on client cert */
+static int
+smp_fetch_ssl_c_verify(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct connection *conn;
+
+ conn = objt_conn(smp->sess->origin);
+ if (!conn || conn->xprt != &ssl_sock)
+ return 0;
+
+ if (!(conn->flags & CO_FL_CONNECTED)) {
+ smp->flags = SMP_F_MAY_CHANGE;
+ return 0;
+ }
+
+ if (!conn->xprt_ctx)
+ return 0;
+
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = (long long int)SSL_get_verify_result(conn->xprt_ctx);
+ smp->flags = 0;
+
+ return 1;
+}
+
+/* parse the "ca-file" bind keyword */
+static int bind_parse_ca_file(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ if (!*args[cur_arg + 1]) {
+ if (err)
+ memprintf(err, "'%s' : missing CAfile path", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ if ((*args[cur_arg + 1] != '/') && global.ca_base)
+ memprintf(&conf->ca_file, "%s/%s", global.ca_base, args[cur_arg + 1]);
+ else
+ memprintf(&conf->ca_file, "%s", args[cur_arg + 1]);
+
+ return 0;
+}
+
+/* parse the "ca-sign-file" bind keyword */
+static int bind_parse_ca_sign_file(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ if (!*args[cur_arg + 1]) {
+ if (err)
+ memprintf(err, "'%s' : missing CAfile path", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ if ((*args[cur_arg + 1] != '/') && global.ca_base)
+ memprintf(&conf->ca_sign_file, "%s/%s", global.ca_base, args[cur_arg + 1]);
+ else
+ memprintf(&conf->ca_sign_file, "%s", args[cur_arg + 1]);
+
+ return 0;
+}
+
+/* parse the "ca-sign-pass" bind keyword */
+static int bind_parse_ca_sign_pass(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ if (!*args[cur_arg + 1]) {
+ if (err)
+ memprintf(err, "'%s' : missing CAkey password", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+ memprintf(&conf->ca_sign_pass, "%s", args[cur_arg + 1]);
+ return 0;
+}
+
+/* parse the "ciphers" bind keyword */
+static int bind_parse_ciphers(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ if (!*args[cur_arg + 1]) {
+ memprintf(err, "'%s' : missing cipher suite", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ free(conf->ciphers);
+ conf->ciphers = strdup(args[cur_arg + 1]);
+ return 0;
+}
+
+/* parse the "crt" bind keyword */
+static int bind_parse_crt(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ char path[MAXPATHLEN];
+
+ if (!*args[cur_arg + 1]) {
+ memprintf(err, "'%s' : missing certificate location", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ if ((*args[cur_arg + 1] != '/' ) && global.crt_base) {
+ if ((strlen(global.crt_base) + 1 + strlen(args[cur_arg + 1]) + 1) > MAXPATHLEN) {
+ memprintf(err, "'%s' : path too long", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+ snprintf(path, sizeof(path), "%s/%s", global.crt_base, args[cur_arg + 1]);
+ if (ssl_sock_load_cert(path, conf, px, err) > 0)
+ return ERR_ALERT | ERR_FATAL;
+
+ return 0;
+ }
+
+ if (ssl_sock_load_cert(args[cur_arg + 1], conf, px, err) > 0)
+ return ERR_ALERT | ERR_FATAL;
+
+ return 0;
+}
+
+/* parse the "crt-list" bind keyword */
+static int bind_parse_crt_list(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ if (!*args[cur_arg + 1]) {
+ memprintf(err, "'%s' : missing certificate location", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ if (ssl_sock_load_cert_list_file(args[cur_arg + 1], conf, px, err) > 0) {
+ memprintf(err, "'%s' : %s", args[cur_arg], *err);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ return 0;
+}
+
+/* parse the "crl-file" bind keyword */
+static int bind_parse_crl_file(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+#ifndef X509_V_FLAG_CRL_CHECK
+ if (err)
+ memprintf(err, "'%s' : library does not support CRL verify", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+#else
+ if (!*args[cur_arg + 1]) {
+ if (err)
+ memprintf(err, "'%s' : missing CRLfile path", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ if ((*args[cur_arg + 1] != '/') && global.ca_base)
+ memprintf(&conf->crl_file, "%s/%s", global.ca_base, args[cur_arg + 1]);
+ else
+ memprintf(&conf->crl_file, "%s", args[cur_arg + 1]);
+
+ return 0;
+#endif
+}
+
+/* parse the "ecdhe" bind keyword */
+static int bind_parse_ecdhe(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+#if OPENSSL_VERSION_NUMBER < 0x0090800fL
+ if (err)
+ memprintf(err, "'%s' : library does not support elliptic curve Diffie-Hellman (too old)", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+#elif defined(OPENSSL_NO_ECDH)
+ if (err)
+ memprintf(err, "'%s' : library does not support elliptic curve Diffie-Hellman (disabled via OPENSSL_NO_ECDH)", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+#else
+ if (!*args[cur_arg + 1]) {
+ if (err)
+ memprintf(err, "'%s' : missing named curve", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ conf->ecdhe = strdup(args[cur_arg + 1]);
+
+ return 0;
+#endif
+}
+
+/* parse the "crt-ignore-err" and "ca-ignore-err" bind keywords */
+static int bind_parse_ignore_err(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ int code;
+ char *p = args[cur_arg + 1];
+ unsigned long long *ignerr = &conf->crt_ignerr;
+
+ if (!*p) {
+ if (err)
+ memprintf(err, "'%s' : missing error IDs list", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ if (strcmp(args[cur_arg], "ca-ignore-err") == 0)
+ ignerr = &conf->ca_ignerr;
+
+ if (strcmp(p, "all") == 0) {
+ *ignerr = ~0ULL;
+ return 0;
+ }
+
+ while (p) {
+ code = atoi(p);
+ if ((code <= 0) || (code > 63)) {
+ if (err)
+ memprintf(err, "'%s' : ID '%d' out of range (1..63) in error IDs list '%s'",
+ args[cur_arg], code, args[cur_arg + 1]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+ *ignerr |= 1ULL << code;
+ p = strchr(p, ',');
+ if (p)
+ p++;
+ }
+
+ return 0;
+}
+
+/* parse the "force-sslv3" bind keyword */
+static int bind_parse_force_sslv3(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ conf->ssl_options |= BC_SSL_O_USE_SSLV3;
+ return 0;
+}
+
+/* parse the "force-tlsv10" bind keyword */
+static int bind_parse_force_tlsv10(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ conf->ssl_options |= BC_SSL_O_USE_TLSV10;
+ return 0;
+}
+
+/* parse the "force-tlsv11" bind keyword */
+static int bind_parse_force_tlsv11(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+#if SSL_OP_NO_TLSv1_1
+ conf->ssl_options |= BC_SSL_O_USE_TLSV11;
+ return 0;
+#else
+ if (err)
+ memprintf(err, "'%s' : library does not support protocol TLSv1.1", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+#endif
+}
+
+/* parse the "force-tlsv12" bind keyword */
+static int bind_parse_force_tlsv12(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+#if SSL_OP_NO_TLSv1_2
+ conf->ssl_options |= BC_SSL_O_USE_TLSV12;
+ return 0;
+#else
+ if (err)
+ memprintf(err, "'%s' : library does not support protocol TLSv1.2", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+#endif
+}
+
+/* parse the "no-tls-tickets" bind keyword */
+static int bind_parse_no_tls_tickets(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ conf->ssl_options |= BC_SSL_O_NO_TLS_TICKETS;
+ return 0;
+}
+
+/* parse the "no-sslv3" bind keyword */
+static int bind_parse_no_sslv3(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ conf->ssl_options |= BC_SSL_O_NO_SSLV3;
+ return 0;
+}
+
+/* parse the "no-tlsv10" bind keyword */
+static int bind_parse_no_tlsv10(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ conf->ssl_options |= BC_SSL_O_NO_TLSV10;
+ return 0;
+}
+
+/* parse the "no-tlsv11" bind keyword */
+static int bind_parse_no_tlsv11(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ conf->ssl_options |= BC_SSL_O_NO_TLSV11;
+ return 0;
+}
+
+/* parse the "no-tlsv12" bind keyword */
+static int bind_parse_no_tlsv12(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ conf->ssl_options |= BC_SSL_O_NO_TLSV12;
+ return 0;
+}
+
+/* parse the "npn" bind keyword */
+static int bind_parse_npn(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+#ifdef OPENSSL_NPN_NEGOTIATED
+ char *p1, *p2;
+
+ if (!*args[cur_arg + 1]) {
+ memprintf(err, "'%s' : missing the comma-delimited NPN protocol suite", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ free(conf->npn_str);
+
+	/* the NPN string is built as a suite of (<len> <name>)*, so we need one
+	 * extra char at the beginning to hold the first <len>; each comma in the
+	 * copy is later reused to store the following <len>. The buffer therefore
+	 * needs npn_len + 1 bytes, including the trailing zero used by the loop.
+	 */
+	conf->npn_len = strlen(args[cur_arg + 1]) + 1;
+	conf->npn_str = calloc(1, conf->npn_len + 1);
+	memcpy(conf->npn_str + 1, args[cur_arg + 1], conf->npn_len);
+
+ /* replace commas with the name length */
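+	/* Illustration (not part of the original source): assuming the input
+	 * string "http/1.1,spdy/2", the loop below rewrites the copy held in
+	 * npn_str into the length-prefixed wire form "\x08http/1.1\x06spdy/2",
+	 * each comma slot being overwritten by the length of the next name.
+	 */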
+ p1 = conf->npn_str;
+ p2 = p1 + 1;
+ while (1) {
+ p2 = memchr(p1 + 1, ',', conf->npn_str + conf->npn_len - (p1 + 1));
+ if (!p2)
+ p2 = p1 + 1 + strlen(p1 + 1);
+
+ if (p2 - (p1 + 1) > 255) {
+ *p2 = '\0';
+ memprintf(err, "'%s' : NPN protocol name too long : '%s'", args[cur_arg], p1 + 1);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ *p1 = p2 - (p1 + 1);
+ p1 = p2;
+
+ if (!*p2)
+ break;
+
+ *(p2++) = '\0';
+ }
+ return 0;
+#else
+ if (err)
+ memprintf(err, "'%s' : library does not support TLS NPN extension", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+#endif
+}
+
+/* parse the "alpn" bind keyword */
+static int bind_parse_alpn(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation
+ char *p1, *p2;
+
+ if (!*args[cur_arg + 1]) {
+ memprintf(err, "'%s' : missing the comma-delimited ALPN protocol suite", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ free(conf->alpn_str);
+
+	/* the ALPN string is built as a suite of (<len> <name>)*, so we need one
+	 * extra char at the beginning to hold the first <len>; each comma in the
+	 * copy is later reused to store the following <len>. The buffer therefore
+	 * needs alpn_len + 1 bytes, including the trailing zero used by the loop.
+	 */
+	conf->alpn_len = strlen(args[cur_arg + 1]) + 1;
+	conf->alpn_str = calloc(1, conf->alpn_len + 1);
+	memcpy(conf->alpn_str + 1, args[cur_arg + 1], conf->alpn_len);
+
+ /* replace commas with the name length */
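+	/* Illustration (not part of the original source): assuming the input
+	 * string "h2,http/1.1", the loop below rewrites the copy held in
+	 * alpn_str into the length-prefixed wire form "\x02h2\x08http/1.1".
+	 */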
+ p1 = conf->alpn_str;
+ p2 = p1 + 1;
+ while (1) {
+ p2 = memchr(p1 + 1, ',', conf->alpn_str + conf->alpn_len - (p1 + 1));
+ if (!p2)
+ p2 = p1 + 1 + strlen(p1 + 1);
+
+ if (p2 - (p1 + 1) > 255) {
+ *p2 = '\0';
+ memprintf(err, "'%s' : ALPN protocol name too long : '%s'", args[cur_arg], p1 + 1);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ *p1 = p2 - (p1 + 1);
+ p1 = p2;
+
+ if (!*p2)
+ break;
+
+ *(p2++) = '\0';
+ }
+ return 0;
+#else
+ if (err)
+ memprintf(err, "'%s' : library does not support TLS ALPN extension", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+#endif
+}
+
+/* parse the "ssl" bind keyword */
+static int bind_parse_ssl(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ struct listener *l;
+
+ conf->is_ssl = 1;
+
+ if (global.listen_default_ciphers && !conf->ciphers)
+ conf->ciphers = strdup(global.listen_default_ciphers);
+ conf->ssl_options |= global.listen_default_ssloptions;
+
+ list_for_each_entry(l, &conf->listeners, by_bind)
+ l->xprt = &ssl_sock;
+
+ return 0;
+}
+
+/* parse the "generate-certificates" bind keyword */
+static int bind_parse_generate_certs(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
+ conf->generate_certs = 1;
+#else
+ memprintf(err, "%sthis version of openssl cannot generate SSL certificates.\n",
+ err && *err ? *err : "");
+#endif
+ return 0;
+}
+
+/* parse the "strict-sni" bind keyword */
+static int bind_parse_strict_sni(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ conf->strict_sni = 1;
+ return 0;
+}
+
+/* parse the "tls-ticket-keys" bind keyword */
+static int bind_parse_tls_ticket_keys(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+#if (defined SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB && TLS_TICKETS_NO > 0)
+ FILE *f;
+ int i = 0;
+ char thisline[LINESIZE];
+ struct tls_keys_ref *keys_ref;
+
+ if (!*args[cur_arg + 1]) {
+ if (err)
+ memprintf(err, "'%s' : missing TLS ticket keys file path", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ keys_ref = tlskeys_ref_lookup(args[cur_arg + 1]);
+	if (keys_ref) {
+ conf->keys_ref = keys_ref;
+ return 0;
+ }
+
+ keys_ref = malloc(sizeof(struct tls_keys_ref));
+ keys_ref->tlskeys = malloc(TLS_TICKETS_NO * sizeof(struct tls_sess_key));
+
+	if ((f = fopen(args[cur_arg + 1], "r")) == NULL) {
+		if (err)
+			memprintf(err, "'%s' : unable to open the TLS ticket keys file", args[cur_arg+1]);
+		return ERR_ALERT | ERR_FATAL;
+	}
+
+ keys_ref->filename = strdup(args[cur_arg + 1]);
+
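+	/* Each line of the file is expected to carry one base64-encoded key of
+	 * sizeof(struct tls_sess_key) bytes; with a 16-byte name + 16-byte AES
+	 * key + 16-byte HMAC key layout that is 48 raw bytes, i.e. a line such
+	 * as one produced by "openssl rand -base64 48" (illustrative example).
+	 */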
+	while (fgets(thisline, sizeof(thisline), f) != NULL) {
+		int len = strlen(thisline);
+		/* Strip newline characters from the end */
+		if (len && thisline[len - 1] == '\n')
+			thisline[--len] = 0;
+
+		if (len && thisline[len - 1] == '\r')
+			thisline[--len] = 0;
+
+		if (base64dec(thisline, len, (char *) (keys_ref->tlskeys + i % TLS_TICKETS_NO), sizeof(struct tls_sess_key)) != sizeof(struct tls_sess_key)) {
+			if (err)
+				memprintf(err, "'%s' : unable to decode base64 key on line %d", args[cur_arg+1], i + 1);
+			fclose(f);
+			return ERR_ALERT | ERR_FATAL;
+		}
+		i++;
+	}
+
+	if (i < TLS_TICKETS_NO) {
+		if (err)
+			memprintf(err, "'%s' : please supply at least %d keys in the tls-ticket-keys file", args[cur_arg+1], TLS_TICKETS_NO);
+		fclose(f);
+		return ERR_ALERT | ERR_FATAL;
+	}
+
+ fclose(f);
+
+	/* Use the penultimate key for encryption; the clamp to 0 also handles
+	 * the case where TLS_TICKETS_NO == 1. */
+	i -= 2;
+	keys_ref->tls_ticket_enc_index = i < 0 ? 0 : i;
+ keys_ref->unique_id = -1;
+ conf->keys_ref = keys_ref;
+
+ LIST_ADD(&tlskeys_reference, &keys_ref->list);
+
+ return 0;
+#else
+ if (err)
+ memprintf(err, "'%s' : TLS ticket callback extension not supported", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+#endif /* SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB */
+}
+
+/* parse the "verify" bind keyword */
+static int bind_parse_verify(char **args, int cur_arg, struct proxy *px, struct bind_conf *conf, char **err)
+{
+ if (!*args[cur_arg + 1]) {
+ if (err)
+ memprintf(err, "'%s' : missing verify method", args[cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ if (strcmp(args[cur_arg + 1], "none") == 0)
+ conf->verify = SSL_SOCK_VERIFY_NONE;
+ else if (strcmp(args[cur_arg + 1], "optional") == 0)
+ conf->verify = SSL_SOCK_VERIFY_OPTIONAL;
+ else if (strcmp(args[cur_arg + 1], "required") == 0)
+ conf->verify = SSL_SOCK_VERIFY_REQUIRED;
+ else {
+ if (err)
+ memprintf(err, "'%s' : unknown verify method '%s', only 'none', 'optional', and 'required' are supported\n",
+ args[cur_arg], args[cur_arg + 1]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ return 0;
+}
+
+/************** "server" keywords ****************/
+
+/* parse the "ca-file" server keyword */
+static int srv_parse_ca_file(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+ if (!*args[*cur_arg + 1]) {
+ if (err)
+ memprintf(err, "'%s' : missing CAfile path", args[*cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ if ((*args[*cur_arg + 1] != '/') && global.ca_base)
+ memprintf(&newsrv->ssl_ctx.ca_file, "%s/%s", global.ca_base, args[*cur_arg + 1]);
+ else
+ memprintf(&newsrv->ssl_ctx.ca_file, "%s", args[*cur_arg + 1]);
+
+ return 0;
+}
+
+/* parse the "check-ssl" server keyword */
+static int srv_parse_check_ssl(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+ newsrv->check.use_ssl = 1;
+ if (global.connect_default_ciphers && !newsrv->ssl_ctx.ciphers)
+ newsrv->ssl_ctx.ciphers = strdup(global.connect_default_ciphers);
+ newsrv->ssl_ctx.options |= global.connect_default_ssloptions;
+ return 0;
+}
+
+/* parse the "ciphers" server keyword */
+static int srv_parse_ciphers(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+ if (!*args[*cur_arg + 1]) {
+ memprintf(err, "'%s' : missing cipher suite", args[*cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ free(newsrv->ssl_ctx.ciphers);
+ newsrv->ssl_ctx.ciphers = strdup(args[*cur_arg + 1]);
+ return 0;
+}
+
+/* parse the "crl-file" server keyword */
+static int srv_parse_crl_file(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+#ifndef X509_V_FLAG_CRL_CHECK
+ if (err)
+ memprintf(err, "'%s' : library does not support CRL verify", args[*cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+#else
+ if (!*args[*cur_arg + 1]) {
+ if (err)
+ memprintf(err, "'%s' : missing CRLfile path", args[*cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ if ((*args[*cur_arg + 1] != '/') && global.ca_base)
+ memprintf(&newsrv->ssl_ctx.crl_file, "%s/%s", global.ca_base, args[*cur_arg + 1]);
+ else
+ memprintf(&newsrv->ssl_ctx.crl_file, "%s", args[*cur_arg + 1]);
+
+ return 0;
+#endif
+}
+
+/* parse the "crt" server keyword */
+static int srv_parse_crt(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+ if (!*args[*cur_arg + 1]) {
+ if (err)
+ memprintf(err, "'%s' : missing certificate file path", args[*cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ if ((*args[*cur_arg + 1] != '/') && global.crt_base)
+		memprintf(&newsrv->ssl_ctx.client_crt, "%s/%s", global.crt_base, args[*cur_arg + 1]);
+ else
+ memprintf(&newsrv->ssl_ctx.client_crt, "%s", args[*cur_arg + 1]);
+
+ return 0;
+}
+
+/* parse the "force-sslv3" server keyword */
+static int srv_parse_force_sslv3(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+ newsrv->ssl_ctx.options |= SRV_SSL_O_USE_SSLV3;
+ return 0;
+}
+
+/* parse the "force-tlsv10" server keyword */
+static int srv_parse_force_tlsv10(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+ newsrv->ssl_ctx.options |= SRV_SSL_O_USE_TLSV10;
+ return 0;
+}
+
+/* parse the "force-tlsv11" server keyword */
+static int srv_parse_force_tlsv11(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+#if SSL_OP_NO_TLSv1_1
+ newsrv->ssl_ctx.options |= SRV_SSL_O_USE_TLSV11;
+ return 0;
+#else
+ if (err)
+ memprintf(err, "'%s' : library does not support protocol TLSv1.1", args[*cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+#endif
+}
+
+/* parse the "force-tlsv12" server keyword */
+static int srv_parse_force_tlsv12(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+#if SSL_OP_NO_TLSv1_2
+ newsrv->ssl_ctx.options |= SRV_SSL_O_USE_TLSV12;
+ return 0;
+#else
+ if (err)
+ memprintf(err, "'%s' : library does not support protocol TLSv1.2", args[*cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+#endif
+}
+
+/* parse the "no-ssl-reuse" server keyword */
+static int srv_parse_no_ssl_reuse(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+ newsrv->ssl_ctx.options |= SRV_SSL_O_NO_REUSE;
+ return 0;
+}
+
+/* parse the "no-sslv3" server keyword */
+static int srv_parse_no_sslv3(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+ newsrv->ssl_ctx.options |= SRV_SSL_O_NO_SSLV3;
+ return 0;
+}
+
+/* parse the "no-tlsv10" server keyword */
+static int srv_parse_no_tlsv10(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+ newsrv->ssl_ctx.options |= SRV_SSL_O_NO_TLSV10;
+ return 0;
+}
+
+/* parse the "no-tlsv11" server keyword */
+static int srv_parse_no_tlsv11(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+ newsrv->ssl_ctx.options |= SRV_SSL_O_NO_TLSV11;
+ return 0;
+}
+
+/* parse the "no-tlsv12" server keyword */
+static int srv_parse_no_tlsv12(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+ newsrv->ssl_ctx.options |= SRV_SSL_O_NO_TLSV12;
+ return 0;
+}
+
+/* parse the "no-tls-tickets" server keyword */
+static int srv_parse_no_tls_tickets(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+ newsrv->ssl_ctx.options |= SRV_SSL_O_NO_TLS_TICKETS;
+ return 0;
+}
+/* parse the "send-proxy-v2-ssl" server keyword */
+static int srv_parse_send_proxy_ssl(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+ newsrv->pp_opts |= SRV_PP_V2;
+ newsrv->pp_opts |= SRV_PP_V2_SSL;
+ return 0;
+}
+
+/* parse the "send-proxy-v2-ssl-cn" server keyword */
+static int srv_parse_send_proxy_cn(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+ newsrv->pp_opts |= SRV_PP_V2;
+ newsrv->pp_opts |= SRV_PP_V2_SSL;
+ newsrv->pp_opts |= SRV_PP_V2_SSL_CN;
+ return 0;
+}
+
+/* parse the "sni" server keyword */
+static int srv_parse_sni(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+#ifndef SSL_CTRL_SET_TLSEXT_HOSTNAME
+ memprintf(err, "'%s' : the current SSL library doesn't support the SNI TLS extension", args[*cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+#else
+ struct sample_expr *expr;
+
+ if (!*args[*cur_arg + 1]) {
+ memprintf(err, "'%s' : missing sni expression", args[*cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+	(*cur_arg)++;
+	px->conf.args.ctx = ARGC_SRV;
+
+	expr = sample_parse_expr((char **)args, cur_arg, px->conf.file, px->conf.line, err, &px->conf.args);
+ if (!expr) {
+ memprintf(err, "error detected while parsing sni expression : %s", *err);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ if (!(expr->fetch->val & SMP_VAL_BE_SRV_CON)) {
+		memprintf(err, "error detected while parsing sni expression : "
+		          "fetch method '%s' extracts information from '%s', none of which is available here",
+		          args[*cur_arg-1], sample_src_names(expr->fetch->use));
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ px->http_needed |= !!(expr->fetch->use & SMP_USE_HTTP_ANY);
+ newsrv->ssl_ctx.sni = expr;
+ return 0;
+#endif
+}
+
+/* parse the "ssl" server keyword */
+static int srv_parse_ssl(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+ newsrv->use_ssl = 1;
+ if (global.connect_default_ciphers && !newsrv->ssl_ctx.ciphers)
+ newsrv->ssl_ctx.ciphers = strdup(global.connect_default_ciphers);
+ return 0;
+}
+
+/* parse the "verify" server keyword */
+static int srv_parse_verify(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+ if (!*args[*cur_arg + 1]) {
+ if (err)
+ memprintf(err, "'%s' : missing verify method", args[*cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ if (strcmp(args[*cur_arg + 1], "none") == 0)
+ newsrv->ssl_ctx.verify = SSL_SOCK_VERIFY_NONE;
+ else if (strcmp(args[*cur_arg + 1], "required") == 0)
+ newsrv->ssl_ctx.verify = SSL_SOCK_VERIFY_REQUIRED;
+ else {
+ if (err)
+ memprintf(err, "'%s' : unknown verify method '%s', only 'none' and 'required' are supported\n",
+ args[*cur_arg], args[*cur_arg + 1]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ return 0;
+}
+
+/* parse the "verifyhost" server keyword */
+static int srv_parse_verifyhost(char **args, int *cur_arg, struct proxy *px, struct server *newsrv, char **err)
+{
+ if (!*args[*cur_arg + 1]) {
+ if (err)
+ memprintf(err, "'%s' : missing hostname to verify against", args[*cur_arg]);
+ return ERR_ALERT | ERR_FATAL;
+ }
+
+ newsrv->ssl_ctx.verify_host = strdup(args[*cur_arg + 1]);
+
+ return 0;
+}
+
+/* parse the "ssl-default-bind-options" keyword in global section */
+static int ssl_parse_default_bind_options(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err) {
+ int i = 1;
+
+ if (*(args[i]) == 0) {
+ memprintf(err, "global statement '%s' expects an option as an argument.", args[0]);
+ return -1;
+ }
+ while (*(args[i])) {
+ if (!strcmp(args[i], "no-sslv3"))
+ global.listen_default_ssloptions |= BC_SSL_O_NO_SSLV3;
+ else if (!strcmp(args[i], "no-tlsv10"))
+ global.listen_default_ssloptions |= BC_SSL_O_NO_TLSV10;
+ else if (!strcmp(args[i], "no-tlsv11"))
+ global.listen_default_ssloptions |= BC_SSL_O_NO_TLSV11;
+ else if (!strcmp(args[i], "no-tlsv12"))
+ global.listen_default_ssloptions |= BC_SSL_O_NO_TLSV12;
+ else if (!strcmp(args[i], "force-sslv3"))
+ global.listen_default_ssloptions |= BC_SSL_O_USE_SSLV3;
+ else if (!strcmp(args[i], "force-tlsv10"))
+ global.listen_default_ssloptions |= BC_SSL_O_USE_TLSV10;
+ else if (!strcmp(args[i], "force-tlsv11")) {
+#if SSL_OP_NO_TLSv1_1
+ global.listen_default_ssloptions |= BC_SSL_O_USE_TLSV11;
+#else
+ memprintf(err, "'%s' '%s': library does not support protocol TLSv1.1", args[0], args[i]);
+ return -1;
+#endif
+ }
+ else if (!strcmp(args[i], "force-tlsv12")) {
+#if SSL_OP_NO_TLSv1_2
+ global.listen_default_ssloptions |= BC_SSL_O_USE_TLSV12;
+#else
+ memprintf(err, "'%s' '%s': library does not support protocol TLSv1.2", args[0], args[i]);
+ return -1;
+#endif
+ }
+ else if (!strcmp(args[i], "no-tls-tickets"))
+ global.listen_default_ssloptions |= BC_SSL_O_NO_TLS_TICKETS;
+ else {
+ memprintf(err, "unknown option '%s' on global statement '%s'.", args[i], args[0]);
+ return -1;
+ }
+ i++;
+ }
+ return 0;
+}
+
+/* parse the "ssl-default-server-options" keyword in global section */
+static int ssl_parse_default_server_options(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err) {
+ int i = 1;
+
+ if (*(args[i]) == 0) {
+ memprintf(err, "global statement '%s' expects an option as an argument.", args[0]);
+ return -1;
+ }
+ while (*(args[i])) {
+ if (!strcmp(args[i], "no-sslv3"))
+ global.connect_default_ssloptions |= SRV_SSL_O_NO_SSLV3;
+ else if (!strcmp(args[i], "no-tlsv10"))
+ global.connect_default_ssloptions |= SRV_SSL_O_NO_TLSV10;
+ else if (!strcmp(args[i], "no-tlsv11"))
+ global.connect_default_ssloptions |= SRV_SSL_O_NO_TLSV11;
+ else if (!strcmp(args[i], "no-tlsv12"))
+ global.connect_default_ssloptions |= SRV_SSL_O_NO_TLSV12;
+ else if (!strcmp(args[i], "force-sslv3"))
+ global.connect_default_ssloptions |= SRV_SSL_O_USE_SSLV3;
+ else if (!strcmp(args[i], "force-tlsv10"))
+ global.connect_default_ssloptions |= SRV_SSL_O_USE_TLSV10;
+ else if (!strcmp(args[i], "force-tlsv11")) {
+#if SSL_OP_NO_TLSv1_1
+ global.connect_default_ssloptions |= SRV_SSL_O_USE_TLSV11;
+#else
+ memprintf(err, "'%s' '%s': library does not support protocol TLSv1.1", args[0], args[i]);
+ return -1;
+#endif
+ }
+ else if (!strcmp(args[i], "force-tlsv12")) {
+#if SSL_OP_NO_TLSv1_2
+ global.connect_default_ssloptions |= SRV_SSL_O_USE_TLSV12;
+#else
+ memprintf(err, "'%s' '%s': library does not support protocol TLSv1.2", args[0], args[i]);
+ return -1;
+#endif
+ }
+ else if (!strcmp(args[i], "no-tls-tickets"))
+ global.connect_default_ssloptions |= SRV_SSL_O_NO_TLS_TICKETS;
+ else {
+ memprintf(err, "unknown option '%s' on global statement '%s'.", args[i], args[0]);
+ return -1;
+ }
+ i++;
+ }
+ return 0;
+}
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Please take care of keeping this list alphabetically sorted.
+ */
+static struct sample_fetch_kw_list sample_fetch_keywords = {ILH, {
+ { "ssl_bc", smp_fetch_ssl_fc, 0, NULL, SMP_T_BOOL, SMP_USE_L5SRV },
+ { "ssl_bc_alg_keysize", smp_fetch_ssl_fc_alg_keysize, 0, NULL, SMP_T_SINT, SMP_USE_L5SRV },
+ { "ssl_bc_cipher", smp_fetch_ssl_fc_cipher, 0, NULL, SMP_T_STR, SMP_USE_L5SRV },
+ { "ssl_bc_protocol", smp_fetch_ssl_fc_protocol, 0, NULL, SMP_T_STR, SMP_USE_L5SRV },
+	{ "ssl_bc_session_id",      smp_fetch_ssl_fc_session_id,  0,                   NULL,    SMP_T_BIN,  SMP_USE_L5SRV },
+	{ "ssl_bc_unique_id",       smp_fetch_ssl_fc_unique_id,   0,                   NULL,    SMP_T_BIN,  SMP_USE_L5SRV },
+	{ "ssl_bc_use_keysize",     smp_fetch_ssl_fc_use_keysize, 0,                   NULL,    SMP_T_SINT, SMP_USE_L5SRV },
+ { "ssl_c_ca_err", smp_fetch_ssl_c_ca_err, 0, NULL, SMP_T_SINT, SMP_USE_L5CLI },
+ { "ssl_c_ca_err_depth", smp_fetch_ssl_c_ca_err_depth, 0, NULL, SMP_T_SINT, SMP_USE_L5CLI },
+ { "ssl_c_der", smp_fetch_ssl_x_der, 0, NULL, SMP_T_BIN, SMP_USE_L5CLI },
+ { "ssl_c_err", smp_fetch_ssl_c_err, 0, NULL, SMP_T_SINT, SMP_USE_L5CLI },
+ { "ssl_c_i_dn", smp_fetch_ssl_x_i_dn, ARG2(0,STR,SINT), NULL, SMP_T_STR, SMP_USE_L5CLI },
+ { "ssl_c_key_alg", smp_fetch_ssl_x_key_alg, 0, NULL, SMP_T_STR, SMP_USE_L5CLI },
+ { "ssl_c_notafter", smp_fetch_ssl_x_notafter, 0, NULL, SMP_T_STR, SMP_USE_L5CLI },
+ { "ssl_c_notbefore", smp_fetch_ssl_x_notbefore, 0, NULL, SMP_T_STR, SMP_USE_L5CLI },
+ { "ssl_c_sig_alg", smp_fetch_ssl_x_sig_alg, 0, NULL, SMP_T_STR, SMP_USE_L5CLI },
+ { "ssl_c_s_dn", smp_fetch_ssl_x_s_dn, ARG2(0,STR,SINT), NULL, SMP_T_STR, SMP_USE_L5CLI },
+ { "ssl_c_serial", smp_fetch_ssl_x_serial, 0, NULL, SMP_T_BIN, SMP_USE_L5CLI },
+ { "ssl_c_sha1", smp_fetch_ssl_x_sha1, 0, NULL, SMP_T_BIN, SMP_USE_L5CLI },
+ { "ssl_c_used", smp_fetch_ssl_c_used, 0, NULL, SMP_T_BOOL, SMP_USE_L5CLI },
+ { "ssl_c_verify", smp_fetch_ssl_c_verify, 0, NULL, SMP_T_SINT, SMP_USE_L5CLI },
+ { "ssl_c_version", smp_fetch_ssl_x_version, 0, NULL, SMP_T_SINT, SMP_USE_L5CLI },
+ { "ssl_f_der", smp_fetch_ssl_x_der, 0, NULL, SMP_T_BIN, SMP_USE_L5CLI },
+ { "ssl_f_i_dn", smp_fetch_ssl_x_i_dn, ARG2(0,STR,SINT), NULL, SMP_T_STR, SMP_USE_L5CLI },
+ { "ssl_f_key_alg", smp_fetch_ssl_x_key_alg, 0, NULL, SMP_T_STR, SMP_USE_L5CLI },
+ { "ssl_f_notafter", smp_fetch_ssl_x_notafter, 0, NULL, SMP_T_STR, SMP_USE_L5CLI },
+ { "ssl_f_notbefore", smp_fetch_ssl_x_notbefore, 0, NULL, SMP_T_STR, SMP_USE_L5CLI },
+ { "ssl_f_sig_alg", smp_fetch_ssl_x_sig_alg, 0, NULL, SMP_T_STR, SMP_USE_L5CLI },
+ { "ssl_f_s_dn", smp_fetch_ssl_x_s_dn, ARG2(0,STR,SINT), NULL, SMP_T_STR, SMP_USE_L5CLI },
+ { "ssl_f_serial", smp_fetch_ssl_x_serial, 0, NULL, SMP_T_BIN, SMP_USE_L5CLI },
+ { "ssl_f_sha1", smp_fetch_ssl_x_sha1, 0, NULL, SMP_T_BIN, SMP_USE_L5CLI },
+ { "ssl_f_version", smp_fetch_ssl_x_version, 0, NULL, SMP_T_SINT, SMP_USE_L5CLI },
+ { "ssl_fc", smp_fetch_ssl_fc, 0, NULL, SMP_T_BOOL, SMP_USE_L5CLI },
+ { "ssl_fc_alg_keysize", smp_fetch_ssl_fc_alg_keysize, 0, NULL, SMP_T_SINT, SMP_USE_L5CLI },
+ { "ssl_fc_cipher", smp_fetch_ssl_fc_cipher, 0, NULL, SMP_T_STR, SMP_USE_L5CLI },
+ { "ssl_fc_has_crt", smp_fetch_ssl_fc_has_crt, 0, NULL, SMP_T_BOOL, SMP_USE_L5CLI },
+ { "ssl_fc_has_sni", smp_fetch_ssl_fc_has_sni, 0, NULL, SMP_T_BOOL, SMP_USE_L5CLI },
+ { "ssl_fc_is_resumed", smp_fetch_ssl_fc_is_resumed, 0, NULL, SMP_T_BOOL, SMP_USE_L5CLI },
+#ifdef OPENSSL_NPN_NEGOTIATED
+ { "ssl_fc_npn", smp_fetch_ssl_fc_npn, 0, NULL, SMP_T_STR, SMP_USE_L5CLI },
+#endif
+#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation
+ { "ssl_fc_alpn", smp_fetch_ssl_fc_alpn, 0, NULL, SMP_T_STR, SMP_USE_L5CLI },
+#endif
+ { "ssl_fc_protocol", smp_fetch_ssl_fc_protocol, 0, NULL, SMP_T_STR, SMP_USE_L5CLI },
+ { "ssl_fc_unique_id", smp_fetch_ssl_fc_unique_id, 0, NULL, SMP_T_BIN, SMP_USE_L5CLI },
+ { "ssl_fc_use_keysize", smp_fetch_ssl_fc_use_keysize, 0, NULL, SMP_T_SINT, SMP_USE_L5CLI },
+ { "ssl_fc_session_id", smp_fetch_ssl_fc_session_id, 0, NULL, SMP_T_BIN, SMP_USE_L5CLI },
+ { "ssl_fc_sni", smp_fetch_ssl_fc_sni, 0, NULL, SMP_T_STR, SMP_USE_L5CLI },
+ { NULL, NULL, 0, 0, 0 },
+}};
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Please take care of keeping this list alphabetically sorted.
+ */
+static struct acl_kw_list acl_kws = {ILH, {
+ { "ssl_fc_sni_end", "ssl_fc_sni", PAT_MATCH_END },
+ { "ssl_fc_sni_reg", "ssl_fc_sni", PAT_MATCH_REG },
+ { /* END */ },
+}};
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Please take care of keeping this list alphabetically sorted; doing so helps
+ * all code contributors.
+ * Optional keywords are also declared with a NULL ->parse() function so that
+ * the config parser can report an appropriate error when a known keyword was
+ * not enabled.
+ */
+static struct bind_kw_list bind_kws = { "SSL", { }, {
+ { "alpn", bind_parse_alpn, 1 }, /* set ALPN supported protocols */
+ { "ca-file", bind_parse_ca_file, 1 }, /* set CAfile to process verify on client cert */
+ { "ca-ignore-err", bind_parse_ignore_err, 1 }, /* set error IDs to ignore on verify depth > 0 */
+ { "ca-sign-file", bind_parse_ca_sign_file, 1 }, /* set CAFile used to generate and sign server certs */
+ { "ca-sign-pass", bind_parse_ca_sign_pass, 1 }, /* set CAKey passphrase */
+ { "ciphers", bind_parse_ciphers, 1 }, /* set SSL cipher suite */
+ { "crl-file", bind_parse_crl_file, 1 }, /* set certificate revocation list file used on client cert verify */
+ { "crt", bind_parse_crt, 1 }, /* load SSL certificates from this location */
+ { "crt-ignore-err", bind_parse_ignore_err, 1 }, /* set error IDs to ignore on verify depth == 0 */
+ { "crt-list", bind_parse_crt_list, 1 }, /* load a list of crt from this location */
+ { "ecdhe", bind_parse_ecdhe, 1 }, /* defines named curve for elliptic curve Diffie-Hellman */
+ { "force-sslv3", bind_parse_force_sslv3, 0 }, /* force SSLv3 */
+ { "force-tlsv10", bind_parse_force_tlsv10, 0 }, /* force TLSv10 */
+ { "force-tlsv11", bind_parse_force_tlsv11, 0 }, /* force TLSv11 */
+ { "force-tlsv12", bind_parse_force_tlsv12, 0 }, /* force TLSv12 */
+ { "generate-certificates", bind_parse_generate_certs, 0 }, /* enable the server certificates generation */
+ { "no-sslv3", bind_parse_no_sslv3, 0 }, /* disable SSLv3 */
+ { "no-tlsv10", bind_parse_no_tlsv10, 0 }, /* disable TLSv10 */
+ { "no-tlsv11", bind_parse_no_tlsv11, 0 }, /* disable TLSv11 */
+ { "no-tlsv12", bind_parse_no_tlsv12, 0 }, /* disable TLSv12 */
+ { "no-tls-tickets", bind_parse_no_tls_tickets, 0 }, /* disable session resumption tickets */
+ { "npn", bind_parse_npn, 1 }, /* set NPN supported protocols */
+ { "ssl", bind_parse_ssl, 0 }, /* enable SSL processing */
+ { "strict-sni", bind_parse_strict_sni, 0 }, /* refuse negotiation if sni doesn't match a certificate */
+ { "tls-ticket-keys", bind_parse_tls_ticket_keys, 1 }, /* set file to load TLS ticket keys from */
+ { "verify", bind_parse_verify, 1 }, /* set SSL verify method */
+ { NULL, NULL, 0 },
+}};
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Please take care of keeping this list alphabetically sorted; doing so helps
+ * all code contributors.
+ * Optional keywords are also declared with a NULL ->parse() function so that
+ * the config parser can report an appropriate error when a known keyword was
+ * not enabled.
+ */
+static struct srv_kw_list srv_kws = { "SSL", { }, {
+ { "ca-file", srv_parse_ca_file, 1, 0 }, /* set CAfile to process verify server cert */
+ { "check-ssl", srv_parse_check_ssl, 0, 0 }, /* enable SSL for health checks */
+ { "ciphers", srv_parse_ciphers, 1, 0 }, /* select the cipher suite */
+ { "crl-file", srv_parse_crl_file, 1, 0 }, /* set certificate revocation list file use on server cert verify */
+ { "crt", srv_parse_crt, 1, 0 }, /* set client certificate */
+ { "force-sslv3", srv_parse_force_sslv3, 0, 0 }, /* force SSLv3 */
+ { "force-tlsv10", srv_parse_force_tlsv10, 0, 0 }, /* force TLSv10 */
+ { "force-tlsv11", srv_parse_force_tlsv11, 0, 0 }, /* force TLSv11 */
+ { "force-tlsv12", srv_parse_force_tlsv12, 0, 0 }, /* force TLSv12 */
+ { "no-ssl-reuse", srv_parse_no_ssl_reuse, 0, 0 }, /* disable session reuse */
+ { "no-sslv3", srv_parse_no_sslv3, 0, 0 }, /* disable SSLv3 */
+ { "no-tlsv10", srv_parse_no_tlsv10, 0, 0 }, /* disable TLSv10 */
+ { "no-tlsv11", srv_parse_no_tlsv11, 0, 0 }, /* disable TLSv11 */
+ { "no-tlsv12", srv_parse_no_tlsv12, 0, 0 }, /* disable TLSv12 */
+ { "no-tls-tickets", srv_parse_no_tls_tickets, 0, 0 }, /* disable session resumption tickets */
+ { "send-proxy-v2-ssl", srv_parse_send_proxy_ssl, 0, 0 }, /* send PROXY protocol header v2 with SSL info */
+ { "send-proxy-v2-ssl-cn", srv_parse_send_proxy_cn, 0, 0 }, /* send PROXY protocol header v2 with CN */
+ { "sni", srv_parse_sni, 1, 0 }, /* send SNI extension */
+ { "ssl", srv_parse_ssl, 0, 0 }, /* enable SSL processing */
+ { "verify", srv_parse_verify, 1, 0 }, /* set SSL verify method */
+ { "verifyhost", srv_parse_verifyhost, 1, 0 }, /* require that SSL cert verifies for hostname */
+ { NULL, NULL, 0, 0 },
+}};
+
+static struct cfg_kw_list cfg_kws = {ILH, {
+ { CFG_GLOBAL, "ssl-default-bind-options", ssl_parse_default_bind_options },
+ { CFG_GLOBAL, "ssl-default-server-options", ssl_parse_default_server_options },
+ { 0, NULL, NULL },
+}};
+
+/* transport-layer operations for SSL sockets */
+struct xprt_ops ssl_sock = {
+ .snd_buf = ssl_sock_from_buf,
+ .rcv_buf = ssl_sock_to_buf,
+ .rcv_pipe = NULL,
+ .snd_pipe = NULL,
+ .shutr = NULL,
+ .shutw = ssl_sock_shutw,
+ .close = ssl_sock_close,
+ .init = ssl_sock_init,
+};
+
+#if (OPENSSL_VERSION_NUMBER >= 0x1000200fL && !defined OPENSSL_NO_TLSEXT && !defined OPENSSL_IS_BORINGSSL && !defined LIBRESSL_VERSION_NUMBER)
+
+static void ssl_sock_sctl_free_func(void *parent, void *ptr, CRYPTO_EX_DATA *ad, int idx, long argl, void *argp)
+{
+ if (ptr) {
+ chunk_destroy(ptr);
+ free(ptr);
+ }
+}
+
+#endif
+
+__attribute__((constructor))
+static void __ssl_sock_init(void)
+{
+ STACK_OF(SSL_COMP)* cm;
+
+#ifdef LISTEN_DEFAULT_CIPHERS
+ global.listen_default_ciphers = LISTEN_DEFAULT_CIPHERS;
+#endif
+#ifdef CONNECT_DEFAULT_CIPHERS
+ global.connect_default_ciphers = CONNECT_DEFAULT_CIPHERS;
+#endif
+ if (global.listen_default_ciphers)
+ global.listen_default_ciphers = strdup(global.listen_default_ciphers);
+ if (global.connect_default_ciphers)
+ global.connect_default_ciphers = strdup(global.connect_default_ciphers);
+ global.listen_default_ssloptions = BC_SSL_O_NONE;
+ global.connect_default_ssloptions = SRV_SSL_O_NONE;
+
+ SSL_library_init();
+ cm = SSL_COMP_get_compression_methods();
+ sk_SSL_COMP_zero(cm);
+#if (OPENSSL_VERSION_NUMBER >= 0x1000200fL && !defined OPENSSL_NO_TLSEXT && !defined OPENSSL_IS_BORINGSSL && !defined LIBRESSL_VERSION_NUMBER)
+ sctl_ex_index = SSL_CTX_get_ex_new_index(0, NULL, NULL, NULL, ssl_sock_sctl_free_func);
+#endif
+ sample_register_fetches(&sample_fetch_keywords);
+ acl_register_keywords(&acl_kws);
+ bind_register_keywords(&bind_kws);
+ srv_register_keywords(&srv_kws);
+ cfg_register_keywords(&cfg_kws);
+
+ global.ssl_session_max_cost = SSL_SESSION_MAX_COST;
+ global.ssl_handshake_max_cost = SSL_HANDSHAKE_MAX_COST;
+
+#ifndef OPENSSL_NO_DH
+ ssl_dh_ptr_index = SSL_CTX_get_ex_new_index(0, NULL, NULL, NULL, NULL);
+#endif
+}
+
+__attribute__((destructor))
+static void __ssl_sock_deinit(void)
+{
+#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
+ lru64_destroy(ssl_ctx_lru_tree);
+#endif
+
+#ifndef OPENSSL_NO_DH
+ if (local_dh_1024) {
+ DH_free(local_dh_1024);
+ local_dh_1024 = NULL;
+ }
+
+ if (local_dh_2048) {
+ DH_free(local_dh_2048);
+ local_dh_2048 = NULL;
+ }
+
+ if (local_dh_4096) {
+ DH_free(local_dh_4096);
+ local_dh_4096 = NULL;
+ }
+
+ if (global_dh) {
+ DH_free(global_dh);
+ global_dh = NULL;
+ }
+#endif
+
+ ERR_remove_state(0);
+ ERR_free_strings();
+
+ EVP_cleanup();
+
+#if OPENSSL_VERSION_NUMBER >= 0x00907000L
+ CRYPTO_cleanup_all_ex_data();
+#endif
+}
+
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * General purpose functions.
+ *
+ * Copyright 2000-2010 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <ctype.h>
+#include <netdb.h>
+#include <stdarg.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/un.h>
+#include <netinet/in.h>
+#include <arpa/inet.h>
+
+#include <common/chunk.h>
+#include <common/config.h>
+#include <common/standard.h>
+#include <types/global.h>
+#include <proto/dns.h>
+#include <eb32tree.h>
+
+/* enough to store NB_ITOA_STR integers of :
+ * 2^64-1 = 18446744073709551615 or
+ * -2^63 = -9223372036854775808
+ *
+ * The HTML version needs room for adding the 25 characters
+ * '<span class="rls"></span>' around digits at positions 3N+1 in order
+ * to add spacing at up to 6 positions : 18 446 744 073 709 551 615
+ */
+char itoa_str[NB_ITOA_STR][171];
+int itoa_idx = 0; /* index of next itoa_str to use */
+
+/* sometimes we'll need to quote strings (eg: in stats), and we don't expect
+ * to quote strings larger than a max configuration line.
+ */
+char quoted_str[NB_QSTR][QSTR_SIZE + 1];
+int quoted_idx = 0;
+
+/*
+ * unsigned long long ASCII representation
+ *
+ * returns a pointer to the last char ('\0'), or NULL if there is
+ * not enough space in <dst>
+ */
+char *ulltoa(unsigned long long n, char *dst, size_t size)
+{
+ int i = 0;
+ char *res;
+
+ switch(n) {
+ case 1ULL ... 9ULL:
+ i = 0;
+ break;
+
+ case 10ULL ... 99ULL:
+ i = 1;
+ break;
+
+ case 100ULL ... 999ULL:
+ i = 2;
+ break;
+
+ case 1000ULL ... 9999ULL:
+ i = 3;
+ break;
+
+ case 10000ULL ... 99999ULL:
+ i = 4;
+ break;
+
+ case 100000ULL ... 999999ULL:
+ i = 5;
+ break;
+
+ case 1000000ULL ... 9999999ULL:
+ i = 6;
+ break;
+
+ case 10000000ULL ... 99999999ULL:
+ i = 7;
+ break;
+
+ case 100000000ULL ... 999999999ULL:
+ i = 8;
+ break;
+
+ case 1000000000ULL ... 9999999999ULL:
+ i = 9;
+ break;
+
+ case 10000000000ULL ... 99999999999ULL:
+ i = 10;
+ break;
+
+ case 100000000000ULL ... 999999999999ULL:
+ i = 11;
+ break;
+
+ case 1000000000000ULL ... 9999999999999ULL:
+ i = 12;
+ break;
+
+ case 10000000000000ULL ... 99999999999999ULL:
+ i = 13;
+ break;
+
+ case 100000000000000ULL ... 999999999999999ULL:
+ i = 14;
+ break;
+
+ case 1000000000000000ULL ... 9999999999999999ULL:
+ i = 15;
+ break;
+
+ case 10000000000000000ULL ... 99999999999999999ULL:
+ i = 16;
+ break;
+
+ case 100000000000000000ULL ... 999999999999999999ULL:
+ i = 17;
+ break;
+
+ case 1000000000000000000ULL ... 9999999999999999999ULL:
+ i = 18;
+ break;
+
+ case 10000000000000000000ULL ... ULLONG_MAX:
+ i = 19;
+ break;
+ }
+ if (i + 2 > size) // (i + 1) + '\0'
+ return NULL; // too long
+ res = dst + i + 1;
+ *res = '\0';
+ for (; i >= 0; i--) {
+ dst[i] = n % 10ULL + '0';
+ n /= 10ULL;
+ }
+ return res;
+}
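The contract above (a pointer to the trailing '\0' on success, NULL when the buffer is too small) can be exercised with a minimal standalone sketch. `ulltoa_sketch` is a hypothetical name; it uses snprintf() instead of the digit-counting switch, but honours the same return convention:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of the ulltoa() contract: returns a pointer to
 * the trailing '\0', or NULL when <dst> cannot hold the full number. */
static char *ulltoa_sketch(unsigned long long n, char *dst, size_t size)
{
	int len = snprintf(dst, size, "%llu", n);

	if (len < 0 || (size_t)len >= size)
		return NULL;      /* too long, like ulltoa() */
	return dst + len;         /* points to the terminating '\0' */
}
```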
+
+/*
+ * unsigned long ASCII representation
+ *
+ * returns a pointer to the last char ('\0'), or NULL if there is
+ * not enough space in <dst>
+ */
+char *ultoa_o(unsigned long n, char *dst, size_t size)
+{
+ int i = 0;
+ char *res;
+
+ switch (n) {
+ case 0U ... 9UL:
+ i = 0;
+ break;
+
+ case 10U ... 99UL:
+ i = 1;
+ break;
+
+ case 100U ... 999UL:
+ i = 2;
+ break;
+
+ case 1000U ... 9999UL:
+ i = 3;
+ break;
+
+ case 10000U ... 99999UL:
+ i = 4;
+ break;
+
+ case 100000U ... 999999UL:
+ i = 5;
+ break;
+
+ case 1000000U ... 9999999UL:
+ i = 6;
+ break;
+
+ case 10000000U ... 99999999UL:
+ i = 7;
+ break;
+
+ case 100000000U ... 999999999UL:
+ i = 8;
+ break;
+#if __WORDSIZE == 32
+
+ case 1000000000ULL ... ULONG_MAX:
+ i = 9;
+ break;
+
+#elif __WORDSIZE == 64
+
+ case 1000000000ULL ... 9999999999UL:
+ i = 9;
+ break;
+
+ case 10000000000ULL ... 99999999999UL:
+ i = 10;
+ break;
+
+ case 100000000000ULL ... 999999999999UL:
+ i = 11;
+ break;
+
+ case 1000000000000ULL ... 9999999999999UL:
+ i = 12;
+ break;
+
+ case 10000000000000ULL ... 99999999999999UL:
+ i = 13;
+ break;
+
+ case 100000000000000ULL ... 999999999999999UL:
+ i = 14;
+ break;
+
+ case 1000000000000000ULL ... 9999999999999999UL:
+ i = 15;
+ break;
+
+ case 10000000000000000ULL ... 99999999999999999UL:
+ i = 16;
+ break;
+
+ case 100000000000000000ULL ... 999999999999999999UL:
+ i = 17;
+ break;
+
+ case 1000000000000000000ULL ... 9999999999999999999UL:
+ i = 18;
+ break;
+
+ case 10000000000000000000ULL ... ULONG_MAX:
+ i = 19;
+ break;
+
+#endif
+ }
+ if (i + 2 > size) // (i + 1) + '\0'
+ return NULL; // too long
+ res = dst + i + 1;
+ *res = '\0';
+ for (; i >= 0; i--) {
+ dst[i] = n % 10U + '0';
+ n /= 10U;
+ }
+ return res;
+}
+
+/*
+ * signed long ASCII representation
+ *
+ * returns a pointer to the last char ('\0'), or NULL if there is
+ * not enough space in <dst>
+ */
+char *ltoa_o(long int n, char *dst, size_t size)
+{
+ char *pos = dst;
+
+ if (n < 0) {
+ if (size < 3)
+ return NULL; // min size is '-' + digit + '\0'; the rest is checked in ultoa_o()
+ *pos = '-';
+ pos++;
+ dst = ultoa_o(-n, pos, size - 1);
+ } else {
+ dst = ultoa_o(n, dst, size);
+ }
+ return dst;
+}
+
+/*
+ * signed long long ASCII representation
+ *
+ * returns a pointer to the last char ('\0'), or NULL if there is
+ * not enough space in <dst>
+ */
+char *lltoa(long long n, char *dst, size_t size)
+{
+ char *pos = dst;
+
+ if (n < 0) {
+ if (size < 3)
+ return NULL; // min size is '-' + digit + '\0'; the rest is checked in ulltoa()
+ *pos = '-';
+ pos++;
+ dst = ulltoa(-n, pos, size - 1);
+ } else {
+ dst = ulltoa(n, dst, size);
+ }
+ return dst;
+}
+
+/*
+ * writes an ascii representation of an unsigned int into <dst> and
+ * returns a pointer to the last character.
+ * The representation is left-padded with '0' up to <size>.
+ */
+char *utoa_pad(unsigned int n, char *dst, size_t size)
+{
+ int i = 0;
+ char *ret;
+
+ switch(n) {
+ case 0U ... 9U:
+ i = 0;
+ break;
+
+ case 10U ... 99U:
+ i = 1;
+ break;
+
+ case 100U ... 999U:
+ i = 2;
+ break;
+
+ case 1000U ... 9999U:
+ i = 3;
+ break;
+
+ case 10000U ... 99999U:
+ i = 4;
+ break;
+
+ case 100000U ... 999999U:
+ i = 5;
+ break;
+
+ case 1000000U ... 9999999U:
+ i = 6;
+ break;
+
+ case 10000000U ... 99999999U:
+ i = 7;
+ break;
+
+ case 100000000U ... 999999999U:
+ i = 8;
+ break;
+
+ case 1000000000U ... 4294967295U:
+ i = 9;
+ break;
+ }
+ if (i + 2 > size) // (i + 1) + '\0'
+ return NULL; // too long
+ if (i < size)
+ i = size - 2; // padding - '\0'
+
+ ret = dst + i + 1;
+ *ret = '\0';
+ for (; i >= 0; i--) {
+ dst[i] = n % 10U + '0';
+ n /= 10U;
+ }
+ return ret;
+}
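The zero-padding behaviour can be mimicked with snprintf()'s dynamic field width. `utoa_pad_sketch` is a hypothetical helper, not part of haproxy; like utoa_pad() it pads the number to <size>-1 digits and fails when the value is wider than the padded field:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of utoa_pad(): zero-pads <n> to <size>-1 digits
 * and returns a pointer past the last digit, or NULL if <n> does not fit. */
static char *utoa_pad_sketch(unsigned int n, char *dst, size_t size)
{
	int len = snprintf(dst, size, "%0*u", (int)size - 1, n);

	if (len < 0 || (size_t)len >= size)
		return NULL;      /* number wider than the padded field */
	return dst + len;
}
```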
+
+/*
+ * copies at most <size-1> chars from <src> to <dst>. Last char is always
+ * set to 0, unless <size> is 0. The number of chars copied is returned
+ * (excluding the terminating zero).
+ * This code has been optimized for size and speed : on x86, it's 45 bytes
+ * long, uses only registers, and consumes only 4 cycles per char.
+ */
+int strlcpy2(char *dst, const char *src, int size)
+{
+ char *orig = dst;
+ if (size) {
+ while (--size && (*dst = *src)) {
+ src++; dst++;
+ }
+ *dst = 0;
+ }
+ return dst - orig;
+}
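The truncation behaviour documented above is easy to check standalone; the function is short enough to reproduce verbatim:

```c
#include <assert.h>
#include <string.h>

/* strlcpy2() reproduced from above: copies at most <size-1> chars,
 * always terminates (unless <size> is 0), returns the chars copied. */
static int strlcpy2(char *dst, const char *src, int size)
{
	char *orig = dst;
	if (size) {
		while (--size && (*dst = *src)) {
			src++; dst++;
		}
		*dst = 0;
	}
	return dst - orig;
}
```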
+
+/*
+ * This function writes into the caller-provided <buffer> the ascii
+ * representation for number 'n' in decimal, and returns a pointer to it.
+ */
+char *ultoa_r(unsigned long n, char *buffer, int size)
+{
+ char *pos;
+
+ pos = buffer + size - 1;
+ *pos-- = '\0';
+
+ do {
+ *pos-- = '0' + n % 10;
+ n /= 10;
+ } while (n && pos >= buffer);
+ return pos + 1;
+}
+
+/*
+ * This function writes into the caller-provided <buffer> the ascii
+ * representation for number 'n' in decimal, and returns a pointer to it.
+ */
+char *lltoa_r(long long int in, char *buffer, int size)
+{
+ char *pos;
+ int neg = 0;
+ unsigned long long int n;
+
+ pos = buffer + size - 1;
+ *pos-- = '\0';
+
+ if (in < 0) {
+ neg = 1;
+ n = -in;
+ }
+ else
+ n = in;
+
+ do {
+ *pos-- = '0' + n % 10;
+ n /= 10;
+ } while (n && pos >= buffer);
+ if (neg && pos > buffer)
+ *pos-- = '-';
+ return pos + 1;
+}
+
+/*
+ * This function writes into the caller-provided <buffer> the ascii
+ * representation for signed number 'n' in decimal, and returns a pointer to it.
+ */
+char *sltoa_r(long n, char *buffer, int size)
+{
+ char *pos;
+
+ if (n >= 0)
+ return ultoa_r(n, buffer, size);
+
+ pos = ultoa_r(-n, buffer + 1, size - 1) - 1;
+ *pos = '-';
+ return pos;
+}
+
+/*
+ * This function writes into the caller-provided <buffer> the ascii
+ * representation for number 'n' in decimal, formatted for HTML output
+ * with tags to create visual grouping by 3 digits. The output buffer
+ * must be at least 171 characters long.
+ */
+const char *ulltoh_r(unsigned long long n, char *buffer, int size)
+{
+ char *start;
+ int digit = 0;
+
+ start = buffer + size;
+ *--start = '\0';
+
+ do {
+ if (digit == 3 && start >= buffer + 7)
+ memcpy(start -= 7, "</span>", 7);
+
+ if (start >= buffer + 1) {
+ *--start = '0' + n % 10;
+ n /= 10;
+ }
+
+ if (digit == 3 && start >= buffer + 18)
+ memcpy(start -= 18, "<span class=\"rls\">", 18);
+
+ if (digit++ == 3)
+ digit = 1;
+ } while (n && start > buffer);
+ return start;
+}
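To see the grouping tags in action, the function can be reproduced verbatim and fed a sample value; the digits at positions 3N+1 (counting from the right) end up wrapped in the spacing span:

```c
#include <assert.h>
#include <string.h>

/* ulltoh_r() reproduced from above for a standalone demonstration. */
static const char *ulltoh_r(unsigned long long n, char *buffer, int size)
{
	char *start;
	int digit = 0;

	start = buffer + size;
	*--start = '\0';

	do {
		if (digit == 3 && start >= buffer + 7)
			memcpy(start -= 7, "</span>", 7);

		if (start >= buffer + 1) {
			*--start = '0' + n % 10;
			n /= 10;
		}

		if (digit == 3 && start >= buffer + 18)
			memcpy(start -= 18, "<span class=\"rls\">", 18);

		if (digit++ == 3)
			digit = 1;
	} while (n && start > buffer);
	return start;
}
```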
+
+/*
+ * This function simply returns a locally allocated string containing the ascii
+ * representation for number 'n' in decimal, unless n is 0 in which case it
+ * returns the alternate string (or an empty string if the alternate string is
+ * NULL). Its use is intended for limits shown in reports, where it's
+ * desirable not to display anything if there is no limit. Warning! it shares
+ * the same vector as ultoa_r().
+ */
+const char *limit_r(unsigned long n, char *buffer, int size, const char *alt)
+{
+ return (n) ? ultoa_r(n, buffer, size) : (alt ? alt : "");
+}
+
+/* returns a locally allocated string containing the quoted encoding of the
+ * input string. The output may be truncated to QSTR_SIZE chars, but it is
+ * guaranteed that the string will always be properly terminated. Quotes are
+ * encoded by doubling them as is commonly done in CSV files. QSTR_SIZE must
+ * always be at least 4 chars.
+ */
+const char *qstr(const char *str)
+{
+ char *ret = quoted_str[quoted_idx];
+ char *p, *end;
+
+ if (++quoted_idx >= NB_QSTR)
+ quoted_idx = 0;
+
+ p = ret;
+ end = ret + QSTR_SIZE;
+
+ *p++ = '"';
+
+ /* always keep 3 chars to support passing "" and the ending " */
+ while (*str && p < end - 3) {
+ if (*str == '"') {
+ *p++ = '"';
+ *p++ = '"';
+ }
+ else
+ *p++ = *str;
+ str++;
+ }
+ *p++ = '"';
+ return ret;
+}
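The quote-doubling scheme is plain CSV escaping. A hypothetical standalone variant writing into a caller-provided buffer (`csv_quote` is not a haproxy function, and its size handling is this sketch's own assumption) looks like this:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of qstr()'s CSV escaping: wrap the input in
 * double quotes and double any embedded '"'. */
static void csv_quote(const char *str, char *out, size_t size)
{
	size_t p = 0;

	if (size < 4)
		return;           /* need room for "" plus '\0' at least */
	out[p++] = '"';
	/* keep 3 chars to support a doubled quote plus the closing quote */
	while (*str && p < size - 3) {
		if (*str == '"')
			out[p++] = '"';
		out[p++] = *str++;
	}
	out[p++] = '"';
	out[p] = '\0';
}
```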
+
+/*
+ * Returns non-zero if character <s> is a hex digit (0-9, a-f, A-F), else zero.
+ *
+ * It looks like this one would be a good candidate for inlining, but this is
+ * not interesting because it is around 35 bytes long and often called multiple
+ * times within the same function.
+ */
+int ishex(char s)
+{
+ s -= '0';
+ if ((unsigned char)s <= 9)
+ return 1;
+ s -= 'A' - '0';
+ if ((unsigned char)s <= 5)
+ return 1;
+ s -= 'a' - 'A';
+ if ((unsigned char)s <= 5)
+ return 1;
+ return 0;
+}
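The branchless range checks above rely on unsigned wrap-around of the subtraction; reproduced verbatim, the behaviour is easy to verify:

```c
#include <assert.h>

/* ishex() reproduced from above: non-zero for [0-9a-fA-F], else zero.
 * Out-of-range chars wrap to large unsigned values and fail each test. */
static int ishex(char s)
{
	s -= '0';
	if ((unsigned char)s <= 9)
		return 1;
	s -= 'A' - '0';
	if ((unsigned char)s <= 5)
		return 1;
	s -= 'a' - 'A';
	if ((unsigned char)s <= 5)
		return 1;
	return 0;
}
```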
+
+/* rounds <i> down to the closest value having max 2 digits */
+unsigned int round_2dig(unsigned int i)
+{
+ unsigned int mul = 1;
+
+ while (i >= 100) {
+ i /= 10;
+ mul *= 10;
+ }
+ return i * mul;
+}
+
+/*
+ * Checks <name> for invalid characters. Valid chars are [A-Za-z0-9_:.-]. If an
+ * invalid character is found, a pointer to it is returned. If everything is
+ * fine, NULL is returned.
+ */
+const char *invalid_char(const char *name)
+{
+ if (!*name)
+ return name;
+
+ while (*name) {
+ if (!isalnum((int)(unsigned char)*name) && *name != '.' && *name != ':' &&
+ *name != '_' && *name != '-')
+ return name;
+ name++;
+ }
+ return NULL;
+}
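The contract above (a pointer to the offending char, NULL when the name is clean, and an empty string rejected by returning its own start) can be checked standalone by reproducing the function verbatim:

```c
#include <assert.h>
#include <ctype.h>
#include <stddef.h>

/* invalid_char() reproduced from above: valid chars are [A-Za-z0-9_:.-]. */
static const char *invalid_char(const char *name)
{
	if (!*name)
		return name;

	while (*name) {
		if (!isalnum((int)(unsigned char)*name) && *name != '.' && *name != ':' &&
		    *name != '_' && *name != '-')
			return name;
		name++;
	}
	return NULL;
}
```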
+
+/*
+ * Checks <domainname> for invalid characters. Valid chars are [A-Za-z0-9_.-].
+ * If an invalid character is found, a pointer to it is returned.
+ * If everything is fine, NULL is returned.
+ */
+const char *invalid_domainchar(const char *name) {
+
+ if (!*name)
+ return name;
+
+ while (*name) {
+ if (!isalnum((int)(unsigned char)*name) && *name != '.' &&
+ *name != '_' && *name != '-')
+ return name;
+
+ name++;
+ }
+
+ return NULL;
+}
+
+/*
+ * converts <str> to a struct sockaddr_storage* provided by the caller. The
+ * caller must have zeroed <sa> first, and may have set sa->ss_family to force
+ * parsing of a specific address format. If the ss_family is 0 or AF_UNSPEC, then
+ * the function tries to guess the address family from the syntax. If the
+ * family is forced and the format doesn't match, an error is returned. The
+ * string is assumed to contain only an address, no port. The address can be a
+ * dotted IPv4 address, an IPv6 address, a host name, or empty or "*" to
+ * indicate INADDR_ANY. NULL is returned if the host part cannot be resolved.
+ * The returned address will only have the address family and the address set,
+ * all other fields remain zero. The string is not supposed to be modified.
+ * The IPv6 '::' address is IN6ADDR_ANY. If <resolve> is non-zero, the hostname
+ * is resolved, otherwise only IP addresses are resolved, and anything else
+ * returns NULL.
+ */
+struct sockaddr_storage *str2ip2(const char *str, struct sockaddr_storage *sa, int resolve)
+{
+ struct hostent *he;
+
+ /* Any IPv6 address */
+ if (str[0] == ':' && str[1] == ':' && !str[2]) {
+ if (!sa->ss_family || sa->ss_family == AF_UNSPEC)
+ sa->ss_family = AF_INET6;
+ else if (sa->ss_family != AF_INET6)
+ goto fail;
+ return sa;
+ }
+
+ /* Any address for the family, defaults to IPv4 */
+ if (!str[0] || (str[0] == '*' && !str[1])) {
+ if (!sa->ss_family || sa->ss_family == AF_UNSPEC)
+ sa->ss_family = AF_INET;
+ return sa;
+ }
+
+ /* check for IPv6 first */
+ if ((!sa->ss_family || sa->ss_family == AF_UNSPEC || sa->ss_family == AF_INET6) &&
+ inet_pton(AF_INET6, str, &((struct sockaddr_in6 *)sa)->sin6_addr)) {
+ sa->ss_family = AF_INET6;
+ return sa;
+ }
+
+ /* then check for IPv4 */
+ if ((!sa->ss_family || sa->ss_family == AF_UNSPEC || sa->ss_family == AF_INET) &&
+ inet_pton(AF_INET, str, &((struct sockaddr_in *)sa)->sin_addr)) {
+ sa->ss_family = AF_INET;
+ return sa;
+ }
+
+ if (!resolve)
+ return NULL;
+
+ if (!dns_hostname_validation(str, NULL))
+ return NULL;
+
+#ifdef USE_GETADDRINFO
+ if (global.tune.options & GTUNE_USE_GAI) {
+ struct addrinfo hints, *result;
+
+ memset(&result, 0, sizeof(result));
+ memset(&hints, 0, sizeof(hints));
+ hints.ai_family = sa->ss_family ? sa->ss_family : AF_UNSPEC;
+ hints.ai_socktype = SOCK_DGRAM;
+ hints.ai_flags = 0;
+ hints.ai_protocol = 0;
+
+ if (getaddrinfo(str, NULL, &hints, &result) == 0) {
+ if (!sa->ss_family || sa->ss_family == AF_UNSPEC)
+ sa->ss_family = result->ai_family;
+ else if (sa->ss_family != result->ai_family)
+ goto fail;
+
+ switch (result->ai_family) {
+ case AF_INET:
+ memcpy((struct sockaddr_in *)sa, result->ai_addr, result->ai_addrlen);
+ return sa;
+ case AF_INET6:
+ memcpy((struct sockaddr_in6 *)sa, result->ai_addr, result->ai_addrlen);
+ return sa;
+ }
+ }
+
+ if (result)
+ freeaddrinfo(result);
+ }
+#endif
+ /* try to resolve an IPv4/IPv6 hostname */
+ he = gethostbyname(str);
+ if (he) {
+ if (!sa->ss_family || sa->ss_family == AF_UNSPEC)
+ sa->ss_family = he->h_addrtype;
+ else if (sa->ss_family != he->h_addrtype)
+ goto fail;
+
+ switch (sa->ss_family) {
+ case AF_INET:
+ ((struct sockaddr_in *)sa)->sin_addr = *(struct in_addr *) *(he->h_addr_list);
+ return sa;
+ case AF_INET6:
+ ((struct sockaddr_in6 *)sa)->sin6_addr = *(struct in6_addr *) *(he->h_addr_list);
+ return sa;
+ }
+ }
+
+ /* unsupported address family */
+ fail:
+ return NULL;
+}
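The detection order above (IPv6 literal first, then IPv4, then hostname resolution) can be sketched for literal addresses only. `guess_family` is a hypothetical helper, not part of haproxy:

```c
#include <assert.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Hypothetical sketch of str2ip2()'s literal-address detection:
 * try an IPv6 literal first, then IPv4, else report AF_UNSPEC
 * (the caller would then fall back to hostname resolution). */
static int guess_family(const char *str)
{
	struct in6_addr a6;
	struct in_addr a4;

	if (inet_pton(AF_INET6, str, &a6) == 1)
		return AF_INET6;
	if (inet_pton(AF_INET, str, &a4) == 1)
		return AF_INET;
	return AF_UNSPEC;
}
```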
+
+/*
+ * Converts <str> to a locally allocated struct sockaddr_storage *, and a port
+ * range or offset consisting of two integers that the caller will have to
+ * check to find the relevant input format. The following formats are supported :
+ *
+ * String format | address | port | low | high
+ * addr | <addr> | 0 | 0 | 0
+ * addr: | <addr> | 0 | 0 | 0
+ * addr:port | <addr> | <port> | <port> | <port>
+ * addr:pl-ph | <addr> | <pl> | <pl> | <ph>
+ * addr:+port | <addr> | <port> | 0 | <port>
+ * addr:-port | <addr> |-<port> | <port> | 0
+ *
+ * The detection of a port range or increment by the caller is made by
+ * comparing <low> and <high>. If both are equal, then port 0 means no port
+ * was specified. The caller may pass NULL for <low> and <high> if it is not
+ * interested in retrieving port ranges.
+ *
+ * Note that <addr> above may also be :
+ * - empty ("") => family will be AF_INET and address will be INADDR_ANY
+ * - "*" => family will be AF_INET and address will be INADDR_ANY
+ * - "::" => family will be AF_INET6 and address will be IN6ADDR_ANY
+ * - a host name => family and address will depend on host name resolving.
+ *
+ * A prefix may be passed in before the address above to force the family :
+ * - "ipv4@" => force address to resolve as IPv4 and fail if not possible.
+ * - "ipv6@" => force address to resolve as IPv6 and fail if not possible.
+ * - "unix@" => force address to be a path to a UNIX socket even if the
+ * path does not start with a '/'
+ * - "abns@" => force address to belong to the abstract namespace (Linux
+ * only). These sockets are just like Unix sockets but without
+ * the need for an underlying file system. The address is a
+ * string. Technically it's like a Unix socket with a zero in
+ * the first byte of the address.
+ * - "fd@" => an integer must follow, and is a file descriptor number.
+ *
+ * Also note that in order to avoid any ambiguity with IPv6 addresses, the ':'
+ * is mandatory after the IP address even when no port is specified. NULL is
+ * returned if the address cannot be parsed. The <low> and <high> ports are
+ * always initialized if non-null, even for non-IP families.
+ *
+ * If <pfx> is non-null, it is used as a string prefix before any path-based
+ * address (typically the path to a unix socket).
+ *
+ * If <fqdn> is non-null, it will be filled with :
+ * - a pointer to the FQDN of the server name to resolve if there's one, and
+ * that the caller will have to free(),
+ * - NULL if there was an explicit address that doesn't require resolution.
+ *
+ * Hostnames are only resolved if <resolve> is non-zero.
+ *
+ * When a file descriptor is passed, its value is put into the s_addr part of
+ * the address when cast to sockaddr_in and the address family is AF_UNSPEC.
+ */
+struct sockaddr_storage *str2sa_range(const char *str, int *low, int *high, char **err, const char *pfx, char **fqdn, int resolve)
+{
+ static struct sockaddr_storage ss;
+ struct sockaddr_storage *ret = NULL;
+ char *back, *str2;
+ char *port1, *port2;
+ int portl, porth, porta;
+ int abstract = 0;
+
+ portl = porth = porta = 0;
+ if (fqdn)
+ *fqdn = NULL;
+
+ str2 = back = env_expand(strdup(str));
+ if (str2 == NULL) {
+ memprintf(err, "out of memory in '%s'\n", __FUNCTION__);
+ goto out;
+ }
+
+ if (!*str2) {
+ memprintf(err, "'%s' resolves to an empty address (environment variable missing?)\n", str);
+ goto out;
+ }
+
+ memset(&ss, 0, sizeof(ss));
+
+ if (strncmp(str2, "unix@", 5) == 0) {
+ str2 += 5;
+ abstract = 0;
+ ss.ss_family = AF_UNIX;
+ }
+ else if (strncmp(str2, "abns@", 5) == 0) {
+ str2 += 5;
+ abstract = 1;
+ ss.ss_family = AF_UNIX;
+ }
+ else if (strncmp(str2, "ipv4@", 5) == 0) {
+ str2 += 5;
+ ss.ss_family = AF_INET;
+ }
+ else if (strncmp(str2, "ipv6@", 5) == 0) {
+ str2 += 5;
+ ss.ss_family = AF_INET6;
+ }
+ else if (*str2 == '/') {
+ ss.ss_family = AF_UNIX;
+ }
+ else
+ ss.ss_family = AF_UNSPEC;
+
+ if (ss.ss_family == AF_UNSPEC && strncmp(str2, "fd@", 3) == 0) {
+ char *endptr;
+
+ str2 += 3;
+ ((struct sockaddr_in *)&ss)->sin_addr.s_addr = strtol(str2, &endptr, 10);
+
+ if (!*str2 || *endptr) {
+ memprintf(err, "file descriptor '%s' is not a valid integer in '%s'\n", str2, str);
+ goto out;
+ }
+
+ /* we return AF_UNSPEC if we use a file descriptor number */
+ ss.ss_family = AF_UNSPEC;
+ }
+ else if (ss.ss_family == AF_UNIX) {
+ int prefix_path_len;
+ int max_path_len;
+ int adr_len;
+
+ /* complete unix socket path name during startup or soft-restart is
+ * <unix_bind_prefix><path>.<pid>.<bak|tmp>
+ */
+ prefix_path_len = (pfx && !abstract) ? strlen(pfx) : 0;
+ max_path_len = (sizeof(((struct sockaddr_un *)&ss)->sun_path) - 1) -
+ (prefix_path_len ? prefix_path_len + 1 + 5 + 1 + 3 : 0);
+
+ adr_len = strlen(str2);
+ if (adr_len > max_path_len) {
+ memprintf(err, "socket path '%s' too long (max %d)\n", str, max_path_len);
+ goto out;
+ }
+
+ /* when abstract==1, we skip the first zero and copy all bytes except the trailing zero */
+ memset(((struct sockaddr_un *)&ss)->sun_path, 0, sizeof(((struct sockaddr_un *)&ss)->sun_path));
+ if (prefix_path_len)
+ memcpy(((struct sockaddr_un *)&ss)->sun_path, pfx, prefix_path_len);
+ memcpy(((struct sockaddr_un *)&ss)->sun_path + prefix_path_len + abstract, str2, adr_len + 1 - abstract);
+ }
+ else { /* IPv4 and IPv6 */
+ int use_fqdn = 0;
+
+ port1 = strrchr(str2, ':');
+ if (port1)
+ *port1++ = '\0';
+ else
+ port1 = "";
+
+ if (str2ip2(str2, &ss, 0) == NULL) {
+ use_fqdn = 1;
+ if (!resolve || str2ip2(str2, &ss, 1) == NULL) {
+ memprintf(err, "invalid address: '%s' in '%s'\n", str2, str);
+ goto out;
+ }
+ }
+
+ if (isdigit((int)(unsigned char)*port1)) { /* single port or range */
+ port2 = strchr(port1, '-');
+ if (port2)
+ *port2++ = '\0';
+ else
+ port2 = port1;
+ portl = atoi(port1);
+ porth = atoi(port2);
+ porta = portl;
+ }
+ else if (*port1 == '-') { /* negative offset */
+ portl = atoi(port1 + 1);
+ porta = -portl;
+ }
+ else if (*port1 == '+') { /* positive offset */
+ porth = atoi(port1 + 1);
+ porta = porth;
+ }
+ else if (*port1) { /* any other unexpected char */
+ memprintf(err, "invalid character '%c' in port number '%s' in '%s'\n", *port1, port1, str);
+ goto out;
+ }
+ set_host_port(&ss, porta);
+
+ if (use_fqdn && fqdn) {
+ if (str2 != back)
+ memmove(back, str2, strlen(str2) + 1);
+ *fqdn = back;
+ back = NULL;
+ }
+ }
+
+ ret = &ss;
+ out:
+ if (low)
+ *low = portl;
+ if (high)
+ *high = porth;
+ free(back);
+ return ret;
+}
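The port table at the top of str2sa_range() maps onto a small parsing step. A hypothetical standalone extract of just that step (`parse_port_suffix` is not a haproxy function; it takes the text after the last ':'):

```c
#include <assert.h>
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical extract of str2sa_range()'s port parsing: fills <porta>
 * (port to set) and <portl>/<porth> (range bounds) per the table above.
 * Modifies <port1> in place; returns 0 on an unexpected character. */
static int parse_port_suffix(char *port1, int *porta, int *portl, int *porth)
{
	*porta = *portl = *porth = 0;

	if (isdigit((unsigned char)*port1)) { /* single port or range */
		char *port2 = strchr(port1, '-');

		if (port2)
			*port2++ = '\0';
		else
			port2 = port1;
		*portl = atoi(port1);
		*porth = atoi(port2);
		*porta = *portl;
	}
	else if (*port1 == '-') {             /* negative offset */
		*portl = atoi(port1 + 1);
		*porta = -*portl;
	}
	else if (*port1 == '+') {             /* positive offset */
		*porth = atoi(port1 + 1);
		*porta = *porth;
	}
	else if (*port1)                      /* any other unexpected char */
		return 0;
	return 1;                             /* empty string: no port given */
}
```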
+
+/* converts <str> to a struct in_addr containing a network mask. It can be
+ * passed in dotted form (255.255.255.0) or in CIDR form (24). It returns 1
+ * if the conversion succeeds, otherwise zero.
+ */
+int str2mask(const char *str, struct in_addr *mask)
+{
+ if (strchr(str, '.') != NULL) { /* dotted notation */
+ if (!inet_pton(AF_INET, str, mask))
+ return 0;
+ }
+ else { /* mask length */
+ char *err;
+ unsigned long len = strtol(str, &err, 10);
+
+ if (!*str || (err && *err) || (unsigned)len > 32)
+ return 0;
+ if (len)
+ mask->s_addr = htonl(~0UL << (32 - len));
+ else
+ mask->s_addr = 0;
+ }
+ return 1;
+}
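The length-to-mask computation in the CIDR branch above can be sketched standalone; the helper name below is hypothetical and not part of HAProxy:

```c
#include <stdint.h>
#include <arpa/inet.h>

/* Convert a prefix length (0-32) to a network-order IPv4 mask, mirroring
 * the CIDR branch of str2mask(). Returns 1 on success, 0 if out of range.
 * len == 0 is special-cased so the shift count stays below the word size. */
static int cidr_len_to_mask(unsigned int len, uint32_t *mask_be)
{
	if (len > 32)
		return 0;
	*mask_be = len ? htonl((uint32_t)(~0UL << (32 - len))) : 0;
	return 1;
}
```

For example, a length of 24 yields the mask 255.255.255.0 in network byte order.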
+
+/* convert <cidr> to struct in_addr <mask>. It returns 1 if the conversion
+ * succeeds otherwise zero.
+ */
+int cidr2dotted(int cidr, struct in_addr *mask) {
+
+ if (cidr < 0 || cidr > 32)
+ return 0;
+
+ mask->s_addr = cidr ? htonl(~0UL << (32 - cidr)) : 0;
+ return 1;
+}
+
+/*
+ * converts <str> to two struct in_addr* which must be pre-allocated.
+ * The format is "addr[/mask]", where "addr" cannot be empty, and mask
+ * is optional and either in the dotted or CIDR notation.
+ * Note: "addr" can also be a hostname. Returns 1 if OK, 0 if error.
+ */
+int str2net(const char *str, int resolve, struct in_addr *addr, struct in_addr *mask)
+{
+ __label__ out_free, out_err;
+ char *c, *s;
+ int ret_val;
+
+ s = strdup(str);
+ if (!s)
+ return 0;
+
+ memset(mask, 0, sizeof(*mask));
+ memset(addr, 0, sizeof(*addr));
+
+ if ((c = strrchr(s, '/')) != NULL) {
+ *c++ = '\0';
+ /* c points to the mask */
+ if (!str2mask(c, mask))
+ goto out_err;
+ }
+ else {
+ mask->s_addr = ~0U;
+ }
+ if (!inet_pton(AF_INET, s, addr)) {
+ struct hostent *he;
+
+ if (!resolve)
+ goto out_err;
+
+ if ((he = gethostbyname(s)) == NULL) {
+ goto out_err;
+ }
+ else
+ *addr = *(struct in_addr *) *(he->h_addr_list);
+ }
+
+ ret_val = 1;
+ out_free:
+ free(s);
+ return ret_val;
+ out_err:
+ ret_val = 0;
+ goto out_free;
+}
+
+
+/*
+ * converts <str> to a struct in6_addr* and a mask length, both of which must
+ * be pre-allocated. The format is "addr[/mask]", where "addr" cannot be empty,
+ * and mask is an optional number of bits (128 being the default).
+ * Returns 1 if OK, 0 if error.
+ */
+int str62net(const char *str, struct in6_addr *addr, unsigned char *mask)
+{
+ char *c, *s;
+ int ret_val = 0;
+ char *err;
+ unsigned long len = 128;
+
+ s = strdup(str);
+ if (!s)
+ return 0;
+
+ memset(mask, 0, sizeof(*mask));
+ memset(addr, 0, sizeof(*addr));
+
+ if ((c = strrchr(s, '/')) != NULL) {
+ *c++ = '\0'; /* c points to the mask */
+ if (!*c)
+ goto out_free;
+
+ len = strtoul(c, &err, 10);
+ if ((err && *err) || (unsigned)len > 128)
+ goto out_free;
+ }
+ *mask = len; /* OK we have a valid mask in <len> */
+
+ if (!inet_pton(AF_INET6, s, addr))
+ goto out_free;
+
+ ret_val = 1;
+ out_free:
+ free(s);
+ return ret_val;
+}
+
+
+/*
+ * Parse an IPv4 address found in a url. Returns the number of characters
+ * consumed, assuming the address is followed by a delimiter (':', '/', ...),
+ * or zero on error.
+ */
+int url2ipv4(const char *addr, struct in_addr *dst)
+{
+ int saw_digit, octets, ch;
+ u_char tmp[4], *tp;
+ const char *cp = addr;
+
+ saw_digit = 0;
+ octets = 0;
+ *(tp = tmp) = 0;
+
+ while (*addr) {
+ unsigned char digit = (ch = *addr++) - '0';
+ if (digit > 9 && ch != '.')
+ break;
+ if (digit <= 9) {
+ u_int new = *tp * 10 + digit;
+ if (new > 255)
+ return 0;
+ *tp = new;
+ if (!saw_digit) {
+ if (++octets > 4)
+ return 0;
+ saw_digit = 1;
+ }
+ } else if (ch == '.' && saw_digit) {
+ if (octets == 4)
+ return 0;
+ *++tp = 0;
+ saw_digit = 0;
+ } else
+ return 0;
+ }
+
+ if (octets < 4)
+ return 0;
+
+ memcpy(&dst->s_addr, tmp, 4);
+ return addr-cp-1;
+}
+
+/*
+ * Resolve destination server from URL. Convert <str> to a sockaddr_storage.
+ * <out> contains the code of the detected scheme, the start and length of
+ * the hostname. Currently only http and https are supported. <out> can be NULL.
+ * This function returns the consumed length. It is useful if you parse complete
+ * url like http://host:port/path, because the consumed length corresponds to
+ * the first character of the path. If the conversion fails, it returns -1.
+ *
+ * This function tries to resolve the DNS name if haproxy is in starting mode.
+ * So, this function may be used during the configuration parsing.
+ */
+int url2sa(const char *url, int ulen, struct sockaddr_storage *addr, struct split_url *out)
+{
+ const char *curr = url, *cp = url;
+ const char *end;
+ int ret, url_code = 0;
+ unsigned long long int http_code = 0;
+ int default_port;
+ struct hostent *he;
+ char *p;
+
+ /* Firstly, try to find :// pattern */
+ while (curr < url+ulen && url_code != 0x3a2f2f) {
+ url_code = ((url_code & 0xffff) << 8);
+ url_code += (unsigned char)*curr++;
+ }
+
+ /* Secondly, if the :// pattern is found, verify that the scheme
+ * preceding it matches our http pattern.
+ * If so, parse the ip address and port in the uri.
+ *
+ * WARNING: Current code doesn't support dynamic async dns resolver.
+ */
+ if (url_code != 0x3a2f2f)
+ return -1;
+
+ /* Copy the scheme, and turn it to lower case. */
+ while (cp < curr - 3)
+ http_code = (http_code << 8) + *cp++;
+ http_code |= 0x2020202020202020ULL; /* Turn everything to lower case */
+
+ /* HTTP or HTTPS url matching */
+ if (http_code == 0x2020202068747470ULL) {
+ default_port = 80;
+ if (out)
+ out->scheme = SCH_HTTP;
+ }
+ else if (http_code == 0x2020206874747073ULL) {
+ default_port = 443;
+ if (out)
+ out->scheme = SCH_HTTPS;
+ }
+ else
+ return -1;
+
+ /* If the next char is '[', the host address is IPv6. */
+ if (*curr == '[') {
+ curr++;
+
+ /* Check trash size */
+ if (trash.size < ulen)
+ return -1;
+
+ /* Look for ']' and copy the address in a trash buffer. */
+ p = trash.str;
+ for (end = curr;
+ end < url + ulen && *end != ']';
+ end++, p++)
+ *p = *end;
+ if (*end != ']')
+ return -1;
+ *p = '\0';
+
+ /* Update out. */
+ if (out) {
+ out->host = curr;
+ out->host_len = end - curr;
+ }
+
+ /* Try IPv6 decoding. */
+ if (!inet_pton(AF_INET6, trash.str, &((struct sockaddr_in6 *)addr)->sin6_addr))
+ return -1;
+ end++;
+
+ /* Decode port. */
+ if (*end == ':') {
+ end++;
+ default_port = read_uint(&end, url + ulen);
+ }
+ ((struct sockaddr_in6 *)addr)->sin6_port = htons(default_port);
+ ((struct sockaddr_in6 *)addr)->sin6_family = AF_INET6;
+ return end - url;
+ }
+ else {
+ /* We are looking for an IP address. If you want to parse and
+ * resolve a hostname found in a url, you can use str2sa_range(), but
+ * be warned this can slow down global daemon performance
+ * while handling lagging dns responses.
+ */
+ ret = url2ipv4(curr, &((struct sockaddr_in *)addr)->sin_addr);
+ if (ret) {
+ /* Update out. */
+ if (out) {
+ out->host = curr;
+ out->host_len = ret;
+ }
+
+ curr += ret;
+
+ /* Decode port. */
+ if (*curr == ':') {
+ curr++;
+ default_port = read_uint(&curr, url + ulen);
+ }
+ ((struct sockaddr_in *)addr)->sin_port = htons(default_port);
+
+ /* Set family. */
+ ((struct sockaddr_in *)addr)->sin_family = AF_INET;
+ return curr - url;
+ }
+ else if (global.mode & MODE_STARTING) {
+ /* The IPv4 and IPv6 decoding failed; maybe the url contains a name.
+ * Try to execute a synchronous DNS request only if HAProxy is in the
+ * start state.
+ */
+
+ /* look for : or / or end */
+ for (end = curr;
+ end < url + ulen && *end != '/' && *end != ':';
+ end++);
+ memcpy(trash.str, curr, end - curr);
+ trash.str[end - curr] = '\0';
+
+ /* try to resolve an IPv4/IPv6 hostname */
+ he = gethostbyname(trash.str);
+ if (!he)
+ return -1;
+
+ /* Update out. */
+ if (out) {
+ out->host = curr;
+ out->host_len = end - curr;
+ }
+
+ /* Decode port. */
+ if (*end == ':') {
+ end++;
+ default_port = read_uint(&end, url + ulen);
+ }
+
+ /* Copy IP address, set port and family. */
+ switch (he->h_addrtype) {
+ case AF_INET:
+ ((struct sockaddr_in *)addr)->sin_addr = *(struct in_addr *) *(he->h_addr_list);
+ ((struct sockaddr_in *)addr)->sin_port = htons(default_port);
+ ((struct sockaddr_in *)addr)->sin_family = AF_INET;
+ return end - url;
+
+ case AF_INET6:
+ ((struct sockaddr_in6 *)addr)->sin6_addr = *(struct in6_addr *) *(he->h_addr_list);
+ ((struct sockaddr_in6 *)addr)->sin6_port = htons(default_port);
+ ((struct sockaddr_in6 *)addr)->sin6_family = AF_INET6;
+ return end - url;
+ }
+ }
+ }
+ return -1;
+}
+
+/* Tries to convert a sockaddr_storage address to text form. Upon success, the
+ * address family is returned so that it's easy for the caller to adapt to the
+ * output format. Zero is returned if the address family is not supported. -1
+ * is returned upon error, with errno set. AF_INET, AF_INET6 and AF_UNIX are
+ * supported.
+ */
+int addr_to_str(struct sockaddr_storage *addr, char *str, int size)
+{
+
+ void *ptr;
+
+ if (size < 5)
+ return 0;
+ *str = '\0';
+
+ switch (addr->ss_family) {
+ case AF_INET:
+ ptr = &((struct sockaddr_in *)addr)->sin_addr;
+ break;
+ case AF_INET6:
+ ptr = &((struct sockaddr_in6 *)addr)->sin6_addr;
+ break;
+ case AF_UNIX:
+ memcpy(str, "unix", 5);
+ return addr->ss_family;
+ default:
+ return 0;
+ }
+
+ if (inet_ntop(addr->ss_family, ptr, str, size))
+ return addr->ss_family;
+
+ /* failed */
+ return -1;
+}
+
+/* Tries to convert a sockaddr_storage port to text form. Upon success, the
+ * address family is returned so that it's easy for the caller to adapt to the
+ * output format. Zero is returned if the address family is not supported. -1
+ * is returned upon error, with errno set. AF_INET, AF_INET6 and AF_UNIX are
+ * supported.
+ */
+int port_to_str(struct sockaddr_storage *addr, char *str, int size)
+{
+
+ uint16_t port;
+
+
+ if (size < 5)
+ return 0;
+ *str = '\0';
+
+ switch (addr->ss_family) {
+ case AF_INET:
+ port = ((struct sockaddr_in *)addr)->sin_port;
+ break;
+ case AF_INET6:
+ port = ((struct sockaddr_in6 *)addr)->sin6_port;
+ break;
+ case AF_UNIX:
+ memcpy(str, "unix", 5);
+ return addr->ss_family;
+ default:
+ return 0;
+ }
+
+ snprintf(str, size, "%u", ntohs(port));
+ return addr->ss_family;
+}
+
+/* will try to encode the string <string> replacing all characters tagged in
+ * <map> with the hexadecimal representation of their ASCII-code (2 digits)
+ * prefixed by <escape>, and will store the result between <start> (included)
+ * and <stop> (excluded), and will always terminate the string with a '\0'
+ * before <stop>. The position of the '\0' is returned if the conversion
+ * completes. If bytes are missing between <start> and <stop>, then the
+ * conversion will be incomplete and truncated. If <stop> <= <start>, the '\0'
+ * cannot even be stored so we return <start> without writing the 0.
+ * The input string must also be zero-terminated.
+ */
+const char hextab[16] = "0123456789ABCDEF";
+char *encode_string(char *start, char *stop,
+ const char escape, const fd_set *map,
+ const char *string)
+{
+ if (start < stop) {
+ stop--; /* reserve one byte for the final '\0' */
+ while (start < stop && *string != '\0') {
+ if (!FD_ISSET((unsigned char)(*string), map))
+ *start++ = *string;
+ else {
+ if (start + 3 >= stop)
+ break;
+ *start++ = escape;
+ *start++ = hextab[(*string >> 4) & 15];
+ *start++ = hextab[*string & 15];
+ }
+ string++;
+ }
+ *start = '\0';
+ }
+ return start;
+}
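The escaping loop above can be sketched with a plain 256-byte lookup map in place of the fd_set; the names below are hypothetical, not HAProxy API:

```c
#include <stddef.h>

static const char hex_digits[16] = "0123456789ABCDEF";

/* Sketch of encode_string()'s loop: bytes flagged in <map> are written
 * as <escape> followed by two hex digits; others are copied verbatim.
 * Always '\0'-terminates when start < stop, and stops early when an
 * escape sequence would not fit. Returns the position of the '\0'. */
static char *hex_escape(char *start, char *stop, char escape,
                        const unsigned char map[256], const char *string)
{
	if (start < stop) {
		stop--;                         /* room for the final '\0' */
		while (start < stop && *string) {
			unsigned char c = (unsigned char)*string;
			if (!map[c])
				*start++ = *string;
			else {
				if (start + 3 >= stop)
					break;          /* escape would not fit */
				*start++ = escape;
				*start++ = hex_digits[c >> 4];
				*start++ = hex_digits[c & 15];
			}
			string++;
		}
		*start = '\0';
	}
	return start;
}
```

With a map flagging only the space character, "a b" encodes to "a%20b".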
+
+/*
+ * Same behavior as encode_string() above, except that it encodes chunk
+ * <chunk> instead of a string.
+ */
+char *encode_chunk(char *start, char *stop,
+ const char escape, const fd_set *map,
+ const struct chunk *chunk)
+{
+ char *str = chunk->str;
+ char *end = chunk->str + chunk->len;
+
+ if (start < stop) {
+ stop--; /* reserve one byte for the final '\0' */
+ while (start < stop && str < end) {
+ if (!FD_ISSET((unsigned char)(*str), map))
+ *start++ = *str;
+ else {
+ if (start + 3 >= stop)
+ break;
+ *start++ = escape;
+ *start++ = hextab[(*str >> 4) & 15];
+ *start++ = hextab[*str & 15];
+ }
+ str++;
+ }
+ *start = '\0';
+ }
+ return start;
+}
+
+/* Check a string for using it in a CSV output format. If the string contains
+ * one of the following four char <">, <,>, CR or LF, the string is
+ * encapsulated between <"> and the <"> are escaped by a <""> sequence.
+ * <str> is the input string to be escaped. The function assumes that
+ * the input string is null-terminated.
+ *
+ * If <quote> is 0, the result is returned escaped but without double quotes.
+ * It is useful if the escaped string is used between double quotes in the
+ * format.
+ *
+ * printf("..., \"%s\", ...\r\n", csv_enc(str, 0));
+ *
+ * If <quote> is 1, the converter puts quotes only if any character needs
+ * escaping. If <quote> is 2, the converter always puts quotes.
+ *
+ * <output> is a struct chunk used for storing the output string when any
+ * change is made.
+ *
+ * The function returns the converted string in this output chunk. If an error
+ * occurs, the function returns an empty string. This type of output is useful
+ * for using the function directly as printf() argument.
+ *
+ * If the output buffer is too short to contain the input string, the result
+ * is truncated.
+ */
+const char *csv_enc(const char *str, int quote, struct chunk *output)
+{
+ char *end = output->str + output->size;
+ char *out = output->str + 1; /* +1 for reserving space for a first <"> */
+
+ while (*str && out < end - 2) { /* -2 for reserving space for <"> and \0. */
+ *out = *str;
+ if (*str == '"') {
+ if (quote == 1)
+ quote = 2;
+ out++;
+ if (out >= end - 2) {
+ out--;
+ break;
+ }
+ *out = '"';
+ }
+ if (quote == 1 && ( *str == '\r' || *str == '\n' || *str == ',') )
+ quote = 2;
+ out++;
+ str++;
+ }
+
+ if (quote == 1)
+ quote = 0;
+
+ if (!quote) {
+ *out = '\0';
+ return output->str + 1;
+ }
+
+ /* else quote == 2 */
+ *output->str = '"';
+ *out = '"';
+ out++;
+ *out = '\0';
+ return output->str;
+}
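The quoting rule implemented by csv_enc() can be sketched standalone, without the chunk machinery; the helper name is hypothetical and the caller provides a large enough buffer (worst case 2*len + 3 bytes):

```c
#include <string.h>

/* Sketch of csv_enc()'s rule: quote the field when it contains '"',
 * ',', CR or LF, doubling any '"' inside it. Returns <out>. */
static char *csv_escape(char *out, const char *in)
{
	int need_quotes = strpbrk(in, "\",\r\n") != NULL;
	char *p = out;

	if (need_quotes)
		*p++ = '"';
	for (; *in; in++) {
		if (*in == '"')
			*p++ = '"';          /* escape by doubling */
		*p++ = *in;
	}
	if (need_quotes)
		*p++ = '"';
	*p = '\0';
	return out;
}
```

A field like `a,b` becomes `"a,b"`, while a plain field is returned unchanged.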
+
+/* Decode a URL-encoded string in-place. The resulting string might
+ * be shorter. If some forbidden characters are found, the conversion is
+ * aborted, the string is truncated before the issue and a negative value is
+ * returned, otherwise the operation returns the length of the decoded string.
+ */
+int url_decode(char *string)
+{
+ char *in, *out;
+ int ret = -1;
+
+ in = string;
+ out = string;
+ while (*in) {
+ switch (*in) {
+ case '+' :
+ *out++ = ' ';
+ break;
+ case '%' :
+ if (!ishex(in[1]) || !ishex(in[2]))
+ goto end;
+ *out++ = (hex2i(in[1]) << 4) + hex2i(in[2]);
+ in += 2;
+ break;
+ default:
+ *out++ = *in;
+ break;
+ }
+ in++;
+ }
+ ret = out - string; /* success */
+ end:
+ *out = 0;
+ return ret;
+}
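The in-place decoding loop above can be sketched standalone, using an explicit hex-digit helper instead of HAProxy's ishex()/hex2i(); the names below are hypothetical:

```c
/* Return the value of hex digit <c>, or -1 if it is not one. */
static int hexval(int c)
{
	if (c >= '0' && c <= '9') return c - '0';
	if (c >= 'A' && c <= 'F') return c - 'A' + 10;
	if (c >= 'a' && c <= 'f') return c - 'a' + 10;
	return -1;
}

/* Sketch of url_decode(): '+' becomes a space, "%XX" becomes the byte
 * it encodes. Returns the decoded length, or -1 if a '%' is not
 * followed by two hex digits (output is truncated at the error). */
static int urldec(char *s)
{
	char *in = s, *out = s;
	int hi, lo;

	while (*in) {
		if (*in == '+')
			*out++ = ' ';
		else if (*in == '%') {
			if ((hi = hexval(in[1])) < 0 || (lo = hexval(in[2])) < 0) {
				*out = '\0';
				return -1;
			}
			*out++ = (char)((hi << 4) | lo);
			in += 2;
		}
		else
			*out++ = *in;
		in++;
	}
	*out = '\0';
	return (int)(out - s);
}
```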
+
+unsigned int str2ui(const char *s)
+{
+ return __str2ui(s);
+}
+
+unsigned int str2uic(const char *s)
+{
+ return __str2uic(s);
+}
+
+unsigned int strl2ui(const char *s, int len)
+{
+ return __strl2ui(s, len);
+}
+
+unsigned int strl2uic(const char *s, int len)
+{
+ return __strl2uic(s, len);
+}
+
+unsigned int read_uint(const char **s, const char *end)
+{
+ return __read_uint(s, end);
+}
+
+/* This function reads an unsigned integer from the string pointed to by <s> and
+ * returns it. The <s> pointer is adjusted to point to the first unread char. The
+ * function automatically stops at <end>. If the number overflows, the 2^64-1
+ * value is returned.
+ */
+unsigned long long int read_uint64(const char **s, const char *end)
+{
+ const char *ptr = *s;
+ unsigned long long int i = 0, tmp;
+ unsigned int j;
+
+ while (ptr < end) {
+
+ /* read next char */
+ j = *ptr - '0';
+ if (j > 9)
+ goto read_uint64_end;
+
+ /* add char to the number and check overflow. */
+ tmp = i * 10;
+ if (tmp / 10 != i) {
+ i = ULLONG_MAX;
+ goto read_uint64_eat;
+ }
+ if (ULLONG_MAX - tmp < j) {
+ i = ULLONG_MAX;
+ goto read_uint64_eat;
+ }
+ i = tmp + j;
+ ptr++;
+ }
+read_uint64_eat:
+ /* eat each numeric char */
+ while (ptr < end) {
+ if ((unsigned int)(*ptr - '0') > 9)
+ break;
+ ptr++;
+ }
+read_uint64_end:
+ *s = ptr;
+ return i;
+}
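The saturation behaviour documented above can be sketched in a compact standalone form; the function name is hypothetical:

```c
#include <limits.h>

/* Sketch of read_uint64(): parse digits from <*s> up to <end>, advance
 * *s past all consecutive digits, and clamp to ULLONG_MAX on overflow. */
static unsigned long long parse_u64_sat(const char **s, const char *end)
{
	unsigned long long v = 0, t;
	const char *p = *s;

	while (p < end && (unsigned)(*p - '0') <= 9) {
		t = v * 10;
		if (t / 10 != v || ULLONG_MAX - t < (unsigned)(*p - '0')) {
			v = ULLONG_MAX;         /* overflow: saturate ... */
			while (p < end && (unsigned)(*p - '0') <= 9)
				p++;            /* ... and eat remaining digits */
			break;
		}
		v = t + (*p - '0');
		p++;
	}
	*s = p;
	return v;
}
```

Note that on overflow the pointer is still advanced past the whole digit run, so the caller resumes at the first non-digit.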
+
+/* This function reads an integer from the string pointed to by <s> and returns
+ * it. The <s> pointer is adjusted to point to the first unread char. The function
+ * automatically stops at <end>. If the number is bigger than 2^63-1, the 2^63-1
+ * value is returned. If the number is lower than -2^63, the -2^63 value is
+ * returned.
+ */
+long long int read_int64(const char **s, const char *end)
+{
+ unsigned long long int i = 0;
+ int neg = 0;
+
+ /* Look for minus char. */
+ if (**s == '-') {
+ neg = 1;
+ (*s)++;
+ }
+ else if (**s == '+')
+ (*s)++;
+
+ /* convert as positive number. */
+ i = read_uint64(s, end);
+
+ if (neg) {
+ if (i > 0x8000000000000000ULL)
+ return LLONG_MIN;
+ return -i;
+ }
+ if (i > 0x7fffffffffffffffULL)
+ return LLONG_MAX;
+ return i;
+}
+
+/* This one is 7 times faster than strtol() on athlon with checks.
+ * It returns the value of the number composed of all valid digits read,
+ * and can process negative numbers too.
+ */
+int strl2ic(const char *s, int len)
+{
+ int i = 0;
+ int j, k;
+
+ if (len > 0) {
+ if (*s != '-') {
+ /* positive number */
+ while (len-- > 0) {
+ j = (*s++) - '0';
+ k = i * 10;
+ if (j > 9)
+ break;
+ i = k + j;
+ }
+ } else {
+ /* negative number */
+ s++;
+ while (--len > 0) {
+ j = (*s++) - '0';
+ k = i * 10;
+ if (j > 9)
+ break;
+ i = k - j;
+ }
+ }
+ }
+ return i;
+}
+
+
+/* This function reads exactly <len> chars from <s> and converts them to a
+ * signed integer which it stores into <ret>. It accurately detects any error
+ * (truncated string, invalid chars, overflows). It is meant to be used in
+ * applications designed for hostile environments. It returns zero when the
+ * number has successfully been converted, non-zero otherwise. When an error
+ * is returned, the <ret> value is left untouched. It is yet 5 to 40 times
+ * faster than strtol().
+ */
+int strl2irc(const char *s, int len, int *ret)
+{
+ int i = 0;
+ int j;
+
+ if (!len)
+ return 1;
+
+ if (*s != '-') {
+ /* positive number */
+ while (len-- > 0) {
+ j = (*s++) - '0';
+ if (j > 9) return 1; /* invalid char */
+ if (i > INT_MAX / 10) return 1; /* check for multiply overflow */
+ i = i * 10;
+ if (i + j < i) return 1; /* check for addition overflow */
+ i = i + j;
+ }
+ } else {
+ /* negative number */
+ s++;
+ while (--len > 0) {
+ j = (*s++) - '0';
+ if (j > 9) return 1; /* invalid char */
+ if (i < INT_MIN / 10) return 1; /* check for multiply overflow */
+ i = i * 10;
+ if (i - j > i) return 1; /* check for subtract overflow */
+ i = i - j;
+ }
+ }
+ *ret = i;
+ return 0;
+}
+
+
+/* This function reads exactly <len> chars from <s> and converts them to a
+ * signed integer which it stores into <ret>. It accurately detects any error
+ * (truncated string, invalid chars, overflows). It is meant to be used in
+ * applications designed for hostile environments. It returns zero when the
+ * number has successfully been converted, non-zero otherwise. When an error
+ * is returned, the <ret> value is left untouched. It is about 3 times slower
+ * than strl2irc().
+ */
+
+int strl2llrc(const char *s, int len, long long *ret)
+{
+ long long i = 0;
+ int j;
+
+ if (!len)
+ return 1;
+
+ if (*s != '-') {
+ /* positive number */
+ while (len-- > 0) {
+ j = (*s++) - '0';
+ if (j > 9) return 1; /* invalid char */
+ if (i > LLONG_MAX / 10LL) return 1; /* check for multiply overflow */
+ i = i * 10LL;
+ if (i + j < i) return 1; /* check for addition overflow */
+ i = i + j;
+ }
+ } else {
+ /* negative number */
+ s++;
+ while (--len > 0) {
+ j = (*s++) - '0';
+ if (j > 9) return 1; /* invalid char */
+ if (i < LLONG_MIN / 10LL) return 1; /* check for multiply overflow */
+ i = i * 10LL;
+ if (i - j > i) return 1; /* check for subtract overflow */
+ i = i - j;
+ }
+ }
+ *ret = i;
+ return 0;
+}
+
+/* This function is used with pat_parse_dotted_ver(). It converts a string
+ * composed of two numbers separated by a dot. Each part must fit in 16 bits
+ * because internally the pair will be represented as a 32-bit quantity stored
+ * a 64-bit integer. It returns zero when the number has successfully been
+ * converted, non-zero otherwise. When an error is returned, the <ret> value
+ * is left untouched.
+ *
+ * "1.3" -> 0x0000000000010003
+ * "65535.65535" -> 0x00000000ffffffff
+ */
+int strl2llrc_dotted(const char *text, int len, long long *ret)
+{
+ const char *end = &text[len];
+ const char *p;
+ long long major, minor;
+
+ /* Look for dot. */
+ for (p = text; p < end; p++)
+ if (*p == '.')
+ break;
+
+ /* Convert major. */
+ if (strl2llrc(text, p - text, &major) != 0)
+ return 1;
+
+ /* Check major. */
+ if (major >= 65536)
+ return 1;
+
+ /* Convert minor. */
+ minor = 0;
+ if (p < end)
+ if (strl2llrc(p + 1, end - (p + 1), &minor) != 0)
+ return 1;
+
+ /* Check minor. */
+ if (minor >= 65536)
+ return 1;
+
+ /* Compose value. */
+ *ret = (major << 16) | (minor & 0xffff);
+ return 0;
+}
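The packing step documented above ("1.3" -> 0x0000000000010003) can be isolated as a small helper; the name is hypothetical:

```c
/* Sketch of the value composition in strl2llrc_dotted(): "maj.min" with
 * each part below 65536 becomes (maj << 16) | min. Returns 0 on success,
 * 1 when a part is out of range. */
static int pack_dotted(unsigned int maj, unsigned int min, long long *ret)
{
	if (maj >= 65536 || min >= 65536)
		return 1;
	*ret = ((long long)maj << 16) | min;
	return 0;
}
```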
+
+/* This function parses a time value optionally followed by a unit suffix among
+ * "d", "h", "m", "s", "ms" or "us". It converts the value into the unit
+ * expected by the caller. The computation does its best to avoid overflows.
+ * The value is returned in <ret> if everything is fine, and NULL is returned
+ * by the function. In case of error, a pointer to the error is returned and
+ * <ret> is left untouched. Values are automatically rounded up when needed.
+ */
+const char *parse_time_err(const char *text, unsigned *ret, unsigned unit_flags)
+{
+ unsigned imult, idiv;
+ unsigned omult, odiv;
+ unsigned value;
+
+ omult = odiv = 1;
+
+ switch (unit_flags & TIME_UNIT_MASK) {
+ case TIME_UNIT_US: omult = 1000000; break;
+ case TIME_UNIT_MS: omult = 1000; break;
+ case TIME_UNIT_S: break;
+ case TIME_UNIT_MIN: odiv = 60; break;
+ case TIME_UNIT_HOUR: odiv = 3600; break;
+ case TIME_UNIT_DAY: odiv = 86400; break;
+ default: break;
+ }
+
+ value = 0;
+
+ while (1) {
+ unsigned int j;
+
+ j = *text - '0';
+ if (j > 9)
+ break;
+ text++;
+ value *= 10;
+ value += j;
+ }
+
+ imult = idiv = 1;
+ switch (*text) {
+ case '\0': /* no unit = default unit */
+ imult = omult = idiv = odiv = 1;
+ break;
+ case 's': /* second = unscaled unit */
+ break;
+ case 'u': /* microsecond : "us" */
+ if (text[1] == 's') {
+ idiv = 1000000;
+ text++;
+ }
+ break;
+ case 'm': /* millisecond : "ms" or minute: "m" */
+ if (text[1] == 's') {
+ idiv = 1000;
+ text++;
+ } else
+ imult = 60;
+ break;
+ case 'h': /* hour : "h" */
+ imult = 3600;
+ break;
+ case 'd': /* day : "d" */
+ imult = 86400;
+ break;
+ default:
+ return text;
+ break;
+ }
+
+ if (omult % idiv == 0) { omult /= idiv; idiv = 1; }
+ if (idiv % omult == 0) { idiv /= omult; omult = 1; }
+ if (imult % odiv == 0) { imult /= odiv; odiv = 1; }
+ if (odiv % imult == 0) { odiv /= imult; imult = 1; }
+
+ value = (value * (imult * omult) + (idiv * odiv - 1)) / (idiv * odiv);
+ *ret = value;
+ return NULL;
+}
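The final scaling step of parse_time_err() (cancel common factors, then round up) can be sketched standalone; the function name is hypothetical:

```c
/* Sketch of parse_time_err()'s scaling: convert <value> from an input
 * unit (imult/idiv relative to seconds) into the caller's output unit
 * (omult/odiv), rounding up as the function above does. */
static unsigned int scale_time(unsigned int value,
                               unsigned int imult, unsigned int idiv,
                               unsigned int omult, unsigned int odiv)
{
	/* cancel common factors first to limit overflow, as above */
	if (omult % idiv == 0) { omult /= idiv; idiv = 1; }
	if (idiv % omult == 0) { idiv /= omult; omult = 1; }
	if (imult % odiv == 0) { imult /= odiv; odiv = 1; }
	if (odiv % imult == 0) { odiv /= imult; imult = 1; }

	return (value * (imult * omult) + (idiv * odiv - 1)) / (idiv * odiv);
}
```

For example, 1500 ms expressed in seconds (imult=1, idiv=1000, omult=odiv=1) rounds up to 2.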
+
+/* this function converts the string starting at <text> to an unsigned int
+ * stored in <ret>. If an error is detected, the pointer to the unexpected
+ * character is returned. If the conversion is successful, NULL is returned.
+ */
+const char *parse_size_err(const char *text, unsigned *ret) {
+ unsigned value = 0;
+
+ while (1) {
+ unsigned int j;
+
+ j = *text - '0';
+ if (j > 9)
+ break;
+ if (value > ~0U / 10)
+ return text;
+ value *= 10;
+ if (value > (value + j))
+ return text;
+ value += j;
+ text++;
+ }
+
+ switch (*text) {
+ case '\0':
+ break;
+ case 'K':
+ case 'k':
+ if (value > ~0U >> 10)
+ return text;
+ value = value << 10;
+ break;
+ case 'M':
+ case 'm':
+ if (value > ~0U >> 20)
+ return text;
+ value = value << 20;
+ break;
+ case 'G':
+ case 'g':
+ if (value > ~0U >> 30)
+ return text;
+ value = value << 30;
+ break;
+ default:
+ return text;
+ }
+
+ if (*text != '\0' && *++text != '\0')
+ return text;
+
+ *ret = value;
+ return NULL;
+}
+
+/*
+ * Parse binary string written in hexadecimal (source) and store the decoded
+ * result into binstr and sets binstrlen to the length of binstr. Memory for
+ * binstr is allocated by the function. In case of error, returns 0 with an
+ * error message in err. On success, it returns the consumed length.
+ */
+int parse_binary(const char *source, char **binstr, int *binstrlen, char **err)
+{
+ int len;
+ const char *p = source;
+ int i,j;
+ int alloc;
+
+ len = strlen(source);
+ if (len % 2) {
+ memprintf(err, "an even number of hex digits is expected");
+ return 0;
+ }
+
+ len = len >> 1;
+
+ if (!*binstr) {
+ *binstr = calloc(len, sizeof(char));
+ if (!*binstr) {
+ memprintf(err, "out of memory while loading string pattern");
+ return 0;
+ }
+ alloc = 1;
+ }
+ else {
+ if (*binstrlen < len) {
+ memprintf(err, "no space available in the buffer. expected %d, got %d",
+ len, *binstrlen);
+ return 0;
+ }
+ alloc = 0;
+ }
+ *binstrlen = len;
+
+ i = j = 0;
+ while (j < len) {
+ if (!ishex(p[i++]))
+ goto bad_input;
+ if (!ishex(p[i++]))
+ goto bad_input;
+ (*binstr)[j++] = (hex2i(p[i-2]) << 4) + hex2i(p[i-1]);
+ }
+ return len << 1;
+
+bad_input:
+ memprintf(err, "a hex digit is expected (found '%c')", p[i-1]);
+ if (alloc) {
+ free(*binstr);
+ *binstr = NULL;
+ }
+ return 0;
+}
+
+/* copies at most <n> characters from <src> and always terminates with '\0' */
+char *my_strndup(const char *src, int n)
+{
+ int len = 0;
+ char *ret;
+
+ while (len < n && src[len])
+ len++;
+
+ ret = (char *)malloc(len + 1);
+ if (!ret)
+ return ret;
+ memcpy(ret, src, len);
+ ret[len] = '\0';
+ return ret;
+}
+
+/*
+ * search needle in haystack
+ * returns the pointer if found, returns NULL otherwise
+ */
+const void *my_memmem(const void *haystack, size_t haystacklen, const void *needle, size_t needlelen)
+{
+ const void *c = NULL;
+ unsigned char f;
+
+ if ((haystack == NULL) || (needle == NULL) || (haystacklen < needlelen))
+ return NULL;
+
+ f = *(char *)needle;
+ c = haystack;
+ while ((c = memchr(c, f, haystacklen - (c - haystack))) != NULL) {
+ if ((haystacklen - (c - haystack)) < needlelen)
+ return NULL;
+
+ if (memcmp(c, needle, needlelen) == 0)
+ return c;
+ ++c;
+ }
+ return NULL;
+}
+
+/* This function returns the first unused key greater than or equal to <key> in
+ * ID tree <root>. Zero is returned if no place is found.
+ */
+unsigned int get_next_id(struct eb_root *root, unsigned int key)
+{
+ struct eb32_node *used;
+
+ do {
+ used = eb32_lookup_ge(root, key);
+ if (!used || used->key > key)
+ return key; /* key is available */
+ key++;
+ } while (key);
+ return key;
+}
+
+/* This function compares a sample word possibly followed by blanks to another
+ * clean word. The compare is case-insensitive. 1 is returned if both are equal,
+ * otherwise zero. This intends to be used when checking HTTP headers for some
+ * values. Note that it validates a word followed only by blanks but does not
+ * validate a word followed by blanks then other chars.
+ */
+int word_match(const char *sample, int slen, const char *word, int wlen)
+{
+ if (slen < wlen)
+ return 0;
+
+ while (wlen) {
+ char c = *sample ^ *word;
+ if (c && c != ('A' ^ 'a'))
+ return 0;
+ sample++;
+ word++;
+ slen--;
+ wlen--;
+ }
+
+ while (slen) {
+ if (*sample != ' ' && *sample != '\t')
+ return 0;
+ sample++;
+ slen--;
+ }
+ return 1;
+}
+
+/* Converts any text-formatted IPv4 address to a host-order IPv4 address. It
+ * is particularly fast because it avoids expensive operations such as
+ * multiplies, which are optimized away at the end. It requires a properly
+ * formatted address though (3 dots).
+ */
+unsigned int inetaddr_host(const char *text)
+{
+ const unsigned int ascii_zero = ('0' << 24) | ('0' << 16) | ('0' << 8) | '0';
+ register unsigned int dig100, dig10, dig1;
+ int s;
+ const char *p, *d;
+
+ dig1 = dig10 = dig100 = ascii_zero;
+ s = 24;
+
+ p = text;
+ while (1) {
+ if (((unsigned)(*p - '0')) <= 9) {
+ p++;
+ continue;
+ }
+
+ /* here, we have a complete byte between <text> and <p> (exclusive) */
+ if (p == text)
+ goto end;
+
+ d = p - 1;
+ dig1 |= (unsigned int)(*d << s);
+ if (d == text)
+ goto end;
+
+ d--;
+ dig10 |= (unsigned int)(*d << s);
+ if (d == text)
+ goto end;
+
+ d--;
+ dig100 |= (unsigned int)(*d << s);
+ end:
+ if (!s || *p != '.')
+ break;
+
+ s -= 8;
+ text = ++p;
+ }
+
+ dig100 -= ascii_zero;
+ dig10 -= ascii_zero;
+ dig1 -= ascii_zero;
+ return ((dig100 * 10) + dig10) * 10 + dig1;
+}
+
+/*
+ * Idem except the first unparsed character has to be passed in <stop>.
+ */
+unsigned int inetaddr_host_lim(const char *text, const char *stop)
+{
+ const unsigned int ascii_zero = ('0' << 24) | ('0' << 16) | ('0' << 8) | '0';
+ register unsigned int dig100, dig10, dig1;
+ int s;
+ const char *p, *d;
+
+ dig1 = dig10 = dig100 = ascii_zero;
+ s = 24;
+
+ p = text;
+ while (1) {
+ if (((unsigned)(*p - '0')) <= 9 && p < stop) {
+ p++;
+ continue;
+ }
+
+ /* here, we have a complete byte between <text> and <p> (exclusive) */
+ if (p == text)
+ goto end;
+
+ d = p - 1;
+ dig1 |= (unsigned int)(*d << s);
+ if (d == text)
+ goto end;
+
+ d--;
+ dig10 |= (unsigned int)(*d << s);
+ if (d == text)
+ goto end;
+
+ d--;
+ dig100 |= (unsigned int)(*d << s);
+ end:
+ if (!s || p == stop || *p != '.')
+ break;
+
+ s -= 8;
+ text = ++p;
+ }
+
+ dig100 -= ascii_zero;
+ dig10 -= ascii_zero;
+ dig1 -= ascii_zero;
+ return ((dig100 * 10) + dig10) * 10 + dig1;
+}
+
+/*
+ * Idem except the pointer to first unparsed byte is returned into <ret> which
+ * must not be NULL.
+ */
+unsigned int inetaddr_host_lim_ret(char *text, char *stop, char **ret)
+{
+ const unsigned int ascii_zero = ('0' << 24) | ('0' << 16) | ('0' << 8) | '0';
+ register unsigned int dig100, dig10, dig1;
+ int s;
+ char *p, *d;
+
+ dig1 = dig10 = dig100 = ascii_zero;
+ s = 24;
+
+ p = text;
+ while (1) {
+ if (((unsigned)(*p - '0')) <= 9 && p < stop) {
+ p++;
+ continue;
+ }
+
+ /* here, we have a complete byte between <text> and <p> (exclusive) */
+ if (p == text)
+ goto end;
+
+ d = p - 1;
+ dig1 |= (unsigned int)(*d << s);
+ if (d == text)
+ goto end;
+
+ d--;
+ dig10 |= (unsigned int)(*d << s);
+ if (d == text)
+ goto end;
+
+ d--;
+ dig100 |= (unsigned int)(*d << s);
+ end:
+ if (!s || p == stop || *p != '.')
+ break;
+
+ s -= 8;
+ text = ++p;
+ }
+
+ *ret = p;
+ dig100 -= ascii_zero;
+ dig10 -= ascii_zero;
+ dig1 -= ascii_zero;
+ return ((dig100 * 10) + dig10) * 10 + dig1;
+}
+
+/* Convert a fixed-length string to an IP address. Returns 0 in case of error,
+ * or the number of chars read in case of success. Maybe this could be replaced
+ * by one of the functions above. Also, apparently this function does not support
+ * octet values above 255 and requires exactly 4 octets.
+ * The destination is only modified on success.
+ */
+int buf2ip(const char *buf, size_t len, struct in_addr *dst)
+{
+ const char *addr;
+ int saw_digit, octets, ch;
+ u_char tmp[4], *tp;
+ const char *cp = buf;
+
+ saw_digit = 0;
+ octets = 0;
+ *(tp = tmp) = 0;
+
+ for (addr = buf; addr - buf < len; addr++) {
+ unsigned char digit = (ch = *addr) - '0';
+
+ if (digit > 9 && ch != '.')
+ break;
+
+ if (digit <= 9) {
+ u_int new = *tp * 10 + digit;
+
+ if (new > 255)
+ return 0;
+
+ *tp = new;
+
+ if (!saw_digit) {
+ if (++octets > 4)
+ return 0;
+ saw_digit = 1;
+ }
+ } else if (ch == '.' && saw_digit) {
+ if (octets == 4)
+ return 0;
+
+ *++tp = 0;
+ saw_digit = 0;
+ } else
+ return 0;
+ }
+
+ if (octets < 4)
+ return 0;
+
+ memcpy(&dst->s_addr, tmp, 4);
+ return addr - cp;
+}
+
+/* This function converts the string in <buf> of the len <len> to
+ * struct in6_addr <dst> which must be allocated by the caller.
+ * This function returns 1 in success case, otherwise zero.
+ * The destination is only modified on success.
+ */
+int buf2ip6(const char *buf, size_t len, struct in6_addr *dst)
+{
+ char null_term_ip6[INET6_ADDRSTRLEN + 1];
+ struct in6_addr out;
+
+ if (len > INET6_ADDRSTRLEN)
+ return 0;
+
+ memcpy(null_term_ip6, buf, len);
+ null_term_ip6[len] = '\0';
+
+ if (!inet_pton(AF_INET6, null_term_ip6, &out))
+ return 0;
+
+ *dst = out;
+ return 1;
+}
+
+/* To be used to quote config arg positions. Returns the short string at <ptr>
+ * surrounded by simple quotes if <ptr> is valid and non-empty, or "end of line"
+ * if ptr is NULL or empty. The string is locally allocated.
+ */
+const char *quote_arg(const char *ptr)
+{
+ static char val[32];
+ int i;
+
+ if (!ptr || !*ptr)
+ return "end of line";
+ val[0] = '\'';
+ for (i = 1; i < sizeof(val) - 2 && *ptr; i++)
+ val[i] = *ptr++;
+ val[i++] = '\'';
+ val[i] = '\0';
+ return val;
+}
+
+/* returns an operator among STD_OP_* for string <str> or < 0 if unknown */
+int get_std_op(const char *str)
+{
+ int ret = -1;
+
+ if (*str == 'e' && str[1] == 'q')
+ ret = STD_OP_EQ;
+ else if (*str == 'n' && str[1] == 'e')
+ ret = STD_OP_NE;
+ else if (*str == 'l') {
+ if (str[1] == 'e') ret = STD_OP_LE;
+ else if (str[1] == 't') ret = STD_OP_LT;
+ }
+ else if (*str == 'g') {
+ if (str[1] == 'e') ret = STD_OP_GE;
+ else if (str[1] == 't') ret = STD_OP_GT;
+ }
+
+ if (ret == -1 || str[2] != '\0')
+ return -1;
+ return ret;
+}
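The hand-rolled character tests above are equivalent to a table lookup over the six two-letter mnemonics. A sketch under that assumption (the helper `std_op_index` and its index values are illustrative only, not HAProxy's STD_OP_* constants):

```c
#include <assert.h>
#include <string.h>

/* Return an index for "eq", "ne", "le", "lt", "ge", "gt", or -1 for
 * anything else, including longer strings such as "eqx". */
static int std_op_index(const char *str)
{
	static const char *ops[] = { "eq", "ne", "le", "lt", "ge", "gt" };
	size_t i;

	for (i = 0; i < sizeof(ops) / sizeof(ops[0]); i++)
		if (strcmp(str, ops[i]) == 0)
			return (int)i;
	return -1;
}
```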
+
+/* hash a 32-bit integer to another 32-bit integer */
+unsigned int full_hash(unsigned int a)
+{
+ return __full_hash(a);
+}
+
+/* Return non-zero if IPv4 address is part of the network,
+ * otherwise zero.
+ */
+int in_net_ipv4(struct in_addr *addr, struct in_addr *mask, struct in_addr *net)
+{
+ return((addr->s_addr & mask->s_addr) == (net->s_addr & mask->s_addr));
+}
+
+/* Return non-zero if IPv6 address is part of the network,
+ * otherwise zero.
+ */
+int in_net_ipv6(struct in6_addr *addr, struct in6_addr *mask, struct in6_addr *net)
+{
+ int i;
+
+ for (i = 0; i < sizeof(struct in6_addr) / sizeof(int); i++)
+ if (((((int *)addr)[i] & ((int *)mask)[i])) !=
+ (((int *)net)[i] & ((int *)mask)[i]))
+ return 0;
+ return 1;
+}
+
+/* RFC 4291 prefix */
+const char rfc4291_pfx[] = { 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xFF, 0xFF };
+
+/* Map an IPv4 address to an IPv6 address, as specified in RFC 3513.
+ * Input and output may overlap.
+ */
+void v4tov6(struct in6_addr *sin6_addr, struct in_addr *sin_addr)
+{
+ struct in_addr tmp_addr;
+
+ tmp_addr.s_addr = sin_addr->s_addr;
+ memcpy(sin6_addr->s6_addr, rfc4291_pfx, sizeof(rfc4291_pfx));
+ memcpy(sin6_addr->s6_addr+12, &tmp_addr.s_addr, 4);
+}
+
+/* Map an IPv6 address to an IPv4 address, as specified in RFC 3513.
+ * Return true if conversion is possible and false otherwise.
+ */
+int v6tov4(struct in_addr *sin_addr, struct in6_addr *sin6_addr)
+{
+ if (memcmp(sin6_addr->s6_addr, rfc4291_pfx, sizeof(rfc4291_pfx)) == 0) {
+ memcpy(&(sin_addr->s_addr), &(sin6_addr->s6_addr[12]),
+ sizeof(struct in_addr));
+ return 1;
+ }
+
+ return 0;
+}
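v6tov4() above succeeds only for IPv4-mapped addresses, i.e. those carrying the 12-byte `::ffff:0:0/96` prefix in their first bytes. A self-contained sketch of the same classification on a textual address (the helper name is invented for this example):

```c
#include <arpa/inet.h>
#include <assert.h>
#include <netinet/in.h>
#include <string.h>

/* Return 1 if <s> parses as an IPv4-mapped IPv6 address, 0 otherwise. */
static int str_is_v4_mapped(const char *s)
{
	static const unsigned char pfx[12] =
		{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0xFF, 0xFF };
	struct in6_addr a6;

	if (inet_pton(AF_INET6, s, &a6) != 1)
		return 0;                 /* not a valid IPv6 address */
	return memcmp(a6.s6_addr, pfx, sizeof(pfx)) == 0;
}
```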
+
+char *human_time(int t, short hz_div) {
+ static char rv[sizeof("24855d23h")+1]; // longest of "23h59m" and "59m59s"
+ char *p = rv;
+ char *end = rv + sizeof(rv);
+ int cnt=2; // print two numbers
+
+ if (unlikely(t < 0 || hz_div <= 0)) {
+ snprintf(p, end - p, "?");
+ return rv;
+ }
+
+ if (unlikely(hz_div > 1))
+ t /= hz_div;
+
+ if (t >= DAY) {
+ p += snprintf(p, end - p, "%dd", t / DAY);
+ cnt--;
+ }
+
+ if (cnt && t % DAY / HOUR) {
+ p += snprintf(p, end - p, "%dh", t % DAY / HOUR);
+ cnt--;
+ }
+
+ if (cnt && t % HOUR / MINUTE) {
+ p += snprintf(p, end - p, "%dm", t % HOUR / MINUTE);
+ cnt--;
+ }
+
+ if ((cnt && t % MINUTE) || !t) // also display '0s'
+ p += snprintf(p, end - p, "%ds", t % MINUTE / SEC);
+
+ return rv;
+}
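The formatting logic of human_time() can be exercised in isolation: split seconds into days/hours/minutes/seconds and print at most the two most significant non-zero units. This sketch assumes the usual SEC/MINUTE/HOUR/DAY values and omits the hz_div and error handling of the real function:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define SEC    1
#define MINUTE 60
#define HOUR   3600
#define DAY    86400

/* Format <t> seconds into <buf>, printing at most two units. */
static void fmt_human(int t, char *buf, size_t len)
{
	char *p = buf, *end = buf + len;
	int cnt = 2; /* print at most two numbers */

	if (t >= DAY) {
		p += snprintf(p, end - p, "%dd", t / DAY);
		cnt--;
	}
	if (cnt && t % DAY / HOUR) {
		p += snprintf(p, end - p, "%dh", t % DAY / HOUR);
		cnt--;
	}
	if (cnt && t % HOUR / MINUTE) {
		p += snprintf(p, end - p, "%dm", t % HOUR / MINUTE);
		cnt--;
	}
	if ((cnt && t % MINUTE) || !t) /* also display '0s' */
		snprintf(p, end - p, "%ds", t % MINUTE / SEC);
}
```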
+
+const char *monthname[12] = {
+ "Jan", "Feb", "Mar", "Apr", "May", "Jun",
+ "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"
+};
+
+/* date2str_log: write a date in the format :
+ * sprintf(str, "%02d/%s/%04d:%02d:%02d:%02d.%03d",
+ * tm.tm_mday, monthname[tm.tm_mon], tm.tm_year+1900,
+ * tm.tm_hour, tm.tm_min, tm.tm_sec, (int)date.tv_usec/1000);
+ *
+ * without using sprintf. return a pointer to the last char written (\0) or
+ * NULL if there isn't enough space.
+ */
+char *date2str_log(char *dst, struct tm *tm, struct timeval *date, size_t size)
+{
+
+ if (size < 25) /* the size is fixed: 24 chars + \0 */
+ return NULL;
+
+ dst = utoa_pad((unsigned int)tm->tm_mday, dst, 3); // day
+ *dst++ = '/';
+ memcpy(dst, monthname[tm->tm_mon], 3); // month
+ dst += 3;
+ *dst++ = '/';
+ dst = utoa_pad((unsigned int)tm->tm_year+1900, dst, 5); // year
+ *dst++ = ':';
+ dst = utoa_pad((unsigned int)tm->tm_hour, dst, 3); // hour
+ *dst++ = ':';
+ dst = utoa_pad((unsigned int)tm->tm_min, dst, 3); // minutes
+ *dst++ = ':';
+ dst = utoa_pad((unsigned int)tm->tm_sec, dst, 3); // seconds
+ *dst++ = '.';
+ utoa_pad((unsigned int)(date->tv_usec/1000), dst, 4); // milliseconds
+ dst += 3; // keep only the first 3 digits
+ *dst = '\0';
+
+ return dst;
+}
+
+/* gmt2str_log: write a date in the format :
+ * "%02d/%s/%04d:%02d:%02d:%02d +0000" without using snprintf
+ * return a pointer to the last char written (\0) or
+ * NULL if there isn't enough space.
+ */
+char *gmt2str_log(char *dst, struct tm *tm, size_t size)
+{
+ if (size < 27) /* the size is fixed: 26 chars + \0 */
+ return NULL;
+
+ dst = utoa_pad((unsigned int)tm->tm_mday, dst, 3); // day
+ *dst++ = '/';
+ memcpy(dst, monthname[tm->tm_mon], 3); // month
+ dst += 3;
+ *dst++ = '/';
+ dst = utoa_pad((unsigned int)tm->tm_year+1900, dst, 5); // year
+ *dst++ = ':';
+ dst = utoa_pad((unsigned int)tm->tm_hour, dst, 3); // hour
+ *dst++ = ':';
+ dst = utoa_pad((unsigned int)tm->tm_min, dst, 3); // minutes
+ *dst++ = ':';
+ dst = utoa_pad((unsigned int)tm->tm_sec, dst, 3); // seconds
+ *dst++ = ' ';
+ *dst++ = '+';
+ *dst++ = '0';
+ *dst++ = '0';
+ *dst++ = '0';
+ *dst++ = '0';
+ *dst = '\0';
+
+ return dst;
+}
+
+/* localdate2str_log: write a date in the format :
+ * "%02d/%s/%04d:%02d:%02d:%02d +0000(local timezone)" without using snprintf
+ * return a pointer to the last char written (\0) or
+ * NULL if there isn't enough space.
+ */
+char *localdate2str_log(char *dst, struct tm *tm, size_t size)
+{
+ if (size < 27) /* the size is fixed: 26 chars + \0 */
+ return NULL;
+
+ dst = utoa_pad((unsigned int)tm->tm_mday, dst, 3); // day
+ *dst++ = '/';
+ memcpy(dst, monthname[tm->tm_mon], 3); // month
+ dst += 3;
+ *dst++ = '/';
+ dst = utoa_pad((unsigned int)tm->tm_year+1900, dst, 5); // year
+ *dst++ = ':';
+ dst = utoa_pad((unsigned int)tm->tm_hour, dst, 3); // hour
+ *dst++ = ':';
+ dst = utoa_pad((unsigned int)tm->tm_min, dst, 3); // minutes
+ *dst++ = ':';
+ dst = utoa_pad((unsigned int)tm->tm_sec, dst, 3); // seconds
+ *dst++ = ' ';
+ memcpy(dst, localtimezone, 5); // timezone
+ dst += 5;
+ *dst = '\0';
+
+ return dst;
+}
+
+/* Dynamically allocates a string of the proper length to hold the formatted
+ * output. NULL is returned on error. The caller is responsible for freeing the
+ * memory area using free(). The resulting string is returned in <out> if the
+ * pointer is not NULL. A previous version of <out> might be used to build the
+ * new string, and it will be freed before returning if it is not NULL, which
+ * makes it possible to build complex strings from iterative calls without
+ * having to care about freeing intermediate values, as in the example below :
+ *
+ * memprintf(&err, "invalid argument: '%s'", arg);
+ * ...
+ * memprintf(&err, "parser said : <%s>\n", err);
+ * ...
+ * free(err);
+ *
+ * This means that <err> must be initialized to NULL before first invocation.
+ * The return value also holds the allocated string, which eases error checking
+ * and immediate consumption. If the output pointer is not used, NULL must be
+ * passed instead and it will be ignored. The returned message will then also
+ * be NULL so that the caller does not have to bother with freeing anything.
+ *
+ * It is also convenient to use it without any free except the last one :
+ * err = NULL;
+ * if (!fct1(&err)) report(err);
+ * if (!fct2(&err)) report(err);
+ * if (!fct3(&err)) report(err);
+ * free(err);
+ */
+char *memprintf(char **out, const char *format, ...)
+{
+ va_list args;
+ char *ret = NULL;
+ int allocated = 0;
+ int needed = 0;
+
+ if (!out)
+ return NULL;
+
+ do {
+ /* vsnprintf() will return the required length even when the
+ * target buffer is NULL. We do this in a loop just in case
+ * an intermediate evaluation goes wrong.
+ */
+ va_start(args, format);
+ needed = vsnprintf(ret, allocated, format, args);
+ va_end(args);
+
+ if (needed < allocated) {
+ /* Note: on Solaris 8, the first iteration always
+ * returns -1 if allocated is zero, so we force a
+ * retry.
+ */
+ if (!allocated)
+ needed = 0;
+ else
+ break;
+ }
+
+ allocated = needed + 1;
+ ret = realloc(ret, allocated);
+ } while (ret);
+
+ if (needed < 0) {
+ /* an error was encountered */
+ free(ret);
+ ret = NULL;
+ }
+
+ if (out) {
+ free(*out);
+ *out = ret;
+ }
+
+ return ret;
+}
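The two-pass vsnprintf() sizing that memprintf() builds on can be shown in a minimal standalone form: measure first with a NULL target, allocate, then format for real. The helper `xprintf` is invented for this sketch; the real function additionally retries in a loop (for the Solaris 8 quirk noted above) and frees/replaces a previous `*out`:

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Allocate and return a formatted string, or NULL on error.
 * The caller must free() the result. */
static char *xprintf(const char *fmt, ...)
{
	va_list args;
	int needed;
	char *ret;

	va_start(args, fmt);
	needed = vsnprintf(NULL, 0, fmt, args); /* measuring pass */
	va_end(args);
	if (needed < 0)
		return NULL;

	ret = malloc(needed + 1);
	if (!ret)
		return NULL;

	va_start(args, fmt);
	vsnprintf(ret, needed + 1, fmt, args);  /* formatting pass */
	va_end(args);
	return ret;
}
```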
+
+/* Used to add <level> spaces before each line of <out>, unless there is only one line.
+ * The input argument is automatically freed and reassigned. The result will have to be
+ * freed by the caller. It also supports being passed a NULL which results in the same
+ * output.
+ * Example of use :
+ * parse(cmd, &err); (callee: memprintf(&err, ...))
+ * fprintf(stderr, "Parser said: %s\n", indent_msg(&err, 4));
+ * free(err);
+ */
+char *indent_msg(char **out, int level)
+{
+ char *ret, *in, *p;
+ int needed = 0;
+ int lf = 0;
+ int lastlf = 0;
+ int len;
+
+ if (!out || !*out)
+ return NULL;
+
+ in = *out - 1;
+ while ((in = strchr(in + 1, '\n')) != NULL) {
+ lastlf = in - *out;
+ lf++;
+ }
+
+ if (!lf) /* single line, no LF, return it as-is */
+ return *out;
+
+ len = strlen(*out);
+
+ if (lf == 1 && lastlf == len - 1) {
+ /* single line, LF at end, strip it and return as-is */
+ (*out)[lastlf] = 0;
+ return *out;
+ }
+
+ /* OK now we have at least one LF, we need to process the whole string
+ * as a multi-line string. What we'll do :
+ * - prefix with an LF if there is none
+ * - add <level> spaces before each line
+ * This means at most ( 1 + level + (len-lf) + lf*<1+level) ) =
+ * 1 + level + len + lf * level = 1 + level * (lf + 1) + len.
+ */
+
+ needed = 1 + level * (lf + 1) + len + 1;
+ p = ret = malloc(needed);
+ in = *out;
+
+ /* skip initial LFs */
+ while (*in == '\n')
+ in++;
+
+ /* copy each line, prefixed with LF and <level> spaces, and without the trailing LF */
+ while (*in) {
+ *p++ = '\n';
+ memset(p, ' ', level);
+ p += level;
+ do {
+ *p++ = *in++;
+ } while (*in && *in != '\n');
+ if (*in)
+ in++;
+ }
+ *p = 0;
+
+ free(*out);
+ *out = ret;
+
+ return ret;
+}
+
+/* Convert occurrences of environment variables in the input string to their
+ * corresponding value. A variable is identified as a series of alphanumeric
+ * characters or underscores following a '$' sign. The <in> string must be
+ * free()able. NULL returns NULL. The resulting string might be reallocated if
+ * some expansion is made. Variable names may also be enclosed into braces if
+ * needed (eg: to concatenate alphanum characters).
+ */
+char *env_expand(char *in)
+{
+ char *txt_beg;
+ char *out;
+ char *txt_end;
+ char *var_beg;
+ char *var_end;
+ char *value;
+ char *next;
+ int out_len;
+ int val_len;
+
+ if (!in)
+ return in;
+
+ value = out = NULL;
+ out_len = 0;
+
+ txt_beg = in;
+ do {
+ /* look for next '$' sign in <in> */
+ for (txt_end = txt_beg; *txt_end && *txt_end != '$'; txt_end++);
+
+ if (!*txt_end && !out) /* end and no expansion performed */
+ return in;
+
+ val_len = 0;
+ next = txt_end;
+ if (*txt_end == '$') {
+ char save;
+
+ var_beg = txt_end + 1;
+ if (*var_beg == '{')
+ var_beg++;
+
+ var_end = var_beg;
+ while (isalnum((int)(unsigned char)*var_end) || *var_end == '_') {
+ var_end++;
+ }
+
+ next = var_end;
+ if (*var_end == '}' && (var_beg > txt_end + 1))
+ next++;
+
+ /* get value of the variable name at this location */
+ save = *var_end;
+ *var_end = '\0';
+ value = getenv(var_beg);
+ *var_end = save;
+ val_len = value ? strlen(value) : 0;
+ }
+
+ out = realloc(out, out_len + (txt_end - txt_beg) + val_len + 1);
+ if (txt_end > txt_beg) {
+ memcpy(out + out_len, txt_beg, txt_end - txt_beg);
+ out_len += txt_end - txt_beg;
+ }
+ if (val_len) {
+ memcpy(out + out_len, value, val_len);
+ out_len += val_len;
+ }
+ out[out_len] = 0;
+ txt_beg = next;
+ } while (*txt_beg);
+
+ /* here we know that <out> was allocated and that we don't need <in> anymore */
+ free(in);
+ return out;
+}
+
+
+/* same as strstr() but case-insensitive and with limit length */
+const char *strnistr(const char *str1, int len_str1, const char *str2, int len_str2)
+{
+ char *pptr, *sptr, *start;
+ unsigned int slen, plen;
+ unsigned int tmp1, tmp2;
+
+ if (str1 == NULL || len_str1 == 0) // searching in an empty string => not found
+ return NULL;
+
+ if (str2 == NULL || len_str2 == 0) // an empty pattern matches the start of str1
+ return str1;
+
+ if (len_str1 < len_str2) // pattern longer than string => not found
+ return NULL;
+
+ for (tmp1 = 0, start = (char *)str1, pptr = (char *)str2, slen = len_str1, plen = len_str2; slen >= plen; start++, slen--) {
+ while (toupper(*start) != toupper(*str2)) {
+ start++;
+ slen--;
+ tmp1++;
+
+ if (tmp1 >= len_str1)
+ return NULL;
+
+ /* if pattern longer than string */
+ if (slen < plen)
+ return NULL;
+ }
+
+ sptr = start;
+ pptr = (char *)str2;
+
+ tmp2 = 0;
+ while (toupper(*sptr) == toupper(*pptr)) {
+ sptr++;
+ pptr++;
+ tmp2++;
+
+ if (*pptr == '\0' || tmp2 == len_str2) /* end of pattern found */
+ return start;
+ if (*sptr == '\0' || tmp2 == len_str1) /* end of string found and the pattern is not fully found */
+ return NULL;
+ }
+ }
+ return NULL;
+}
+
+/* This function reads the next valid UTF-8 character.
+ * <s> is the byte array to be decoded, <len> is its length.
+ * The function returns decoded char encoded like this:
+ * The 4 msb are the return code (UTF8_CODE_*), the 4 lsb
+ * are the length read. The decoded character is stored in <c>.
+ */
+unsigned char utf8_next(const char *s, int len, unsigned int *c)
+{
+ const unsigned char *p = (unsigned char *)s;
+ int dec;
+ unsigned char code = UTF8_CODE_OK;
+
+ if (len < 1)
+ return UTF8_CODE_OK;
+
+ /* Check the type of UTF8 sequence
+ *
+ * 0... .... 0x00 <= x <= 0x7f : 1 byte: ascii char
+ * 10.. .... 0x80 <= x <= 0xbf : invalid sequence
+ * 110. .... 0xc0 <= x <= 0xdf : 2 bytes
+ * 1110 .... 0xe0 <= x <= 0xef : 3 bytes
+ * 1111 0... 0xf0 <= x <= 0xf7 : 4 bytes
+ * 1111 10.. 0xf8 <= x <= 0xfb : 5 bytes
+ * 1111 110. 0xfc <= x <= 0xfd : 6 bytes
+ * 1111 111. 0xfe <= x <= 0xff : invalid sequence
+ */
+ switch (*p) {
+ case 0x00 ... 0x7f:
+ *c = *p;
+ return UTF8_CODE_OK | 1;
+
+ case 0x80 ... 0xbf:
+ *c = *p;
+ return UTF8_CODE_BADSEQ | 1;
+
+ case 0xc0 ... 0xdf:
+ if (len < 2) {
+ *c = *p;
+ return UTF8_CODE_BADSEQ | 1;
+ }
+ *c = *p & 0x1f;
+ dec = 1;
+ break;
+
+ case 0xe0 ... 0xef:
+ if (len < 3) {
+ *c = *p;
+ return UTF8_CODE_BADSEQ | 1;
+ }
+ *c = *p & 0x0f;
+ dec = 2;
+ break;
+
+ case 0xf0 ... 0xf7:
+ if (len < 4) {
+ *c = *p;
+ return UTF8_CODE_BADSEQ | 1;
+ }
+ *c = *p & 0x07;
+ dec = 3;
+ break;
+
+ case 0xf8 ... 0xfb:
+ if (len < 5) {
+ *c = *p;
+ return UTF8_CODE_BADSEQ | 1;
+ }
+ *c = *p & 0x03;
+ dec = 4;
+ break;
+
+ case 0xfc ... 0xfd:
+ if (len < 6) {
+ *c = *p;
+ return UTF8_CODE_BADSEQ | 1;
+ }
+ *c = *p & 0x01;
+ dec = 5;
+ break;
+
+ case 0xfe ... 0xff:
+ default:
+ *c = *p;
+ return UTF8_CODE_BADSEQ | 1;
+ }
+
+ p++;
+
+ while (dec > 0) {
+
+ /* continuation bytes must have '10' as their two most significant bits */
+ if ( ( *p & 0xc0 ) != 0x80 )
+ return UTF8_CODE_BADSEQ | ((p-(unsigned char *)s)&0xffff);
+
+ /* append the 6 payload bits to the character */
+ *c = ( *c << 6 ) | ( *p & 0x3f );
+
+ dec--;
+ p++;
+ }
+
+ /* Check for overlong encoding.
+ * 1 byte : 5 + 6 : 11 : 0x80 ... 0x7ff
+ * 2 bytes : 4 + 6 + 6 : 16 : 0x800 ... 0xffff
+ * 3 bytes : 3 + 6 + 6 + 6 : 21 : 0x10000 ... 0x1fffff
+ */
+ if (( *c <= 0x7f && (p-(unsigned char *)s) > 1) ||
+ (*c >= 0x80 && *c <= 0x7ff && (p-(unsigned char *)s) > 2) ||
+ (*c >= 0x800 && *c <= 0xffff && (p-(unsigned char *)s) > 3) ||
+ (*c >= 0x10000 && *c <= 0x1fffff && (p-(unsigned char *)s) > 4))
+ code |= UTF8_CODE_OVERLONG;
+
+ /* Check invalid UTF8 range. */
+ if ((*c >= 0xd800 && *c <= 0xdfff) ||
+ (*c >= 0xfffe && *c <= 0xffff))
+ code |= UTF8_CODE_INVRANGE;
+
+ return code | ((p-(unsigned char *)s)&0x0f);
+}
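The lead-byte classification in the switch above is the core of the decoder: the first byte announces the total sequence length. A standalone sketch of just that table (the helper name is invented; the real function also validates continuation bytes, overlong forms, and invalid ranges):

```c
#include <assert.h>

/* Return the sequence length announced by lead byte <b>, or 0 for bytes
 * that can never start a sequence (continuation bytes, 0xfe, 0xff). */
static int utf8_seq_len(unsigned char b)
{
	if (b <= 0x7f) return 1; /* 0... ....  : ASCII */
	if (b <= 0xbf) return 0; /* 10.. ....  : continuation byte */
	if (b <= 0xdf) return 2; /* 110. ....  */
	if (b <= 0xef) return 3; /* 1110 ....  */
	if (b <= 0xf7) return 4; /* 1111 0...  */
	if (b <= 0xfb) return 5; /* 1111 10..  */
	if (b <= 0xfd) return 6; /* 1111 110.  */
	return 0;                /* 0xfe/0xff : invalid */
}
```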
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Stick tables management functions.
+ *
+ * Copyright 2009-2010 EXCELIANCE, Emeric Brun <ebrun@exceliance.fr>
+ * Copyright (C) 2010 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <string.h>
+
+#include <common/config.h>
+#include <common/memory.h>
+#include <common/mini-clist.h>
+#include <common/standard.h>
+#include <common/time.h>
+
+#include <ebmbtree.h>
+#include <ebsttree.h>
+
+#include <proto/arg.h>
+#include <proto/proto_http.h>
+#include <proto/proto_tcp.h>
+#include <proto/proxy.h>
+#include <proto/sample.h>
+#include <proto/stream.h>
+#include <proto/stick_table.h>
+#include <proto/task.h>
+#include <proto/peers.h>
+#include <types/global.h>
+
+/* structure used to return a table key built from a sample */
+struct stktable_key *static_table_key;
+
+/*
+ * Free an allocated sticky session <ts>, and decrease sticky sessions counter
+ * in table <t>.
+ */
+void stksess_free(struct stktable *t, struct stksess *ts)
+{
+ t->current--;
+ pool_free2(t->pool, (void *)ts - t->data_size);
+}
+
+/*
+ * Kill an stksess (only if its ref_cnt is zero).
+ */
+void stksess_kill(struct stktable *t, struct stksess *ts)
+{
+ if (ts->ref_cnt)
+ return;
+
+ eb32_delete(&ts->exp);
+ eb32_delete(&ts->upd);
+ ebmb_delete(&ts->key);
+ stksess_free(t, ts);
+}
+
+/*
+ * Initialize or update the key in the sticky session <ts> present in table <t>
+ * from the value present in <key>.
+ */
+void stksess_setkey(struct stktable *t, struct stksess *ts, struct stktable_key *key)
+{
+ if (t->type != SMP_T_STR)
+ memcpy(ts->key.key, key->key, t->key_size);
+ else {
+ memcpy(ts->key.key, key->key, MIN(t->key_size - 1, key->key_len));
+ ts->key.key[MIN(t->key_size - 1, key->key_len)] = 0;
+ }
+}
+
+
+/*
+ * Init sticky session <ts> of table <t>. The data parts are cleared and <ts>
+ * is returned.
+ */
+static struct stksess *stksess_init(struct stktable *t, struct stksess * ts)
+{
+ memset((void *)ts - t->data_size, 0, t->data_size);
+ ts->ref_cnt = 0;
+ ts->key.node.leaf_p = NULL;
+ ts->exp.node.leaf_p = NULL;
+ ts->upd.node.leaf_p = NULL;
+ return ts;
+}
+
+/*
+ * Trash oldest <to_batch> sticky sessions from table <t>
+ * Returns number of trashed sticky sessions.
+ */
+int stktable_trash_oldest(struct stktable *t, int to_batch)
+{
+ struct stksess *ts;
+ struct eb32_node *eb;
+ int batched = 0;
+ int looped = 0;
+
+ eb = eb32_lookup_ge(&t->exps, now_ms - TIMER_LOOK_BACK);
+
+ while (batched < to_batch) {
+
+ if (unlikely(!eb)) {
+ /* we might have reached the end of the tree, typically because
+ * <now_ms> is in the first half and we're first scanning the last
+ * half. Let's loop back to the beginning of the tree now if we
+ * have not yet visited it.
+ */
+ if (looped)
+ break;
+ looped = 1;
+ eb = eb32_first(&t->exps);
+ if (likely(!eb))
+ break;
+ }
+
+ /* timer looks expired, detach it from the queue */
+ ts = eb32_entry(eb, struct stksess, exp);
+ eb = eb32_next(eb);
+
+ /* don't delete an entry which is currently referenced */
+ if (ts->ref_cnt)
+ continue;
+
+ eb32_delete(&ts->exp);
+
+ if (ts->expire != ts->exp.key) {
+ if (!tick_isset(ts->expire))
+ continue;
+
+ ts->exp.key = ts->expire;
+ eb32_insert(&t->exps, &ts->exp);
+
+ if (!eb || eb->key > ts->exp.key)
+ eb = &ts->exp;
+
+ continue;
+ }
+
+ /* session expired, trash it */
+ ebmb_delete(&ts->key);
+ eb32_delete(&ts->upd);
+ stksess_free(t, ts);
+ batched++;
+ }
+
+ return batched;
+}
+
+/*
+ * Allocate and initialise a new sticky session.
+ * The new sticky session is returned or NULL in case of lack of memory.
+ * Sticky sessions should only be allocated this way, and must be freed using
+ * stksess_free(). Table <t>'s sticky session counter is increased. If <key>
+ * is not NULL, it is assigned to the new session.
+ */
+struct stksess *stksess_new(struct stktable *t, struct stktable_key *key)
+{
+ struct stksess *ts;
+
+ if (unlikely(t->current == t->size)) {
+ if ( t->nopurge )
+ return NULL;
+
+ if (!stktable_trash_oldest(t, (t->size >> 8) + 1))
+ return NULL;
+ }
+
+ ts = pool_alloc2(t->pool) + t->data_size;
+ if (ts) {
+ t->current++;
+ stksess_init(t, ts);
+ if (key)
+ stksess_setkey(t, ts, key);
+ }
+
+ return ts;
+}
+
+/*
+ * Looks in table <t> for a sticky session matching key <key>.
+ * Returns a pointer to the requested sticky session, or NULL if none was found.
+ */
+struct stksess *stktable_lookup_key(struct stktable *t, struct stktable_key *key)
+{
+ struct ebmb_node *eb;
+
+ if (t->type == SMP_T_STR)
+ eb = ebst_lookup_len(&t->keys, key->key, key->key_len+1 < t->key_size ? key->key_len : t->key_size-1);
+ else
+ eb = ebmb_lookup(&t->keys, key->key, t->key_size);
+
+ if (unlikely(!eb)) {
+ /* no session found */
+ return NULL;
+ }
+
+ return ebmb_entry(eb, struct stksess, key);
+}
+
+/* Lookup and touch <key> in <table>, or create the entry if it does not exist.
+ * This is mainly used for situations where we want to refresh a key's usage so
+ * that it does not expire, and we want to have it created if it was not there.
+ * The stksess is returned, or NULL if it could not be created.
+ */
+struct stksess *stktable_update_key(struct stktable *table, struct stktable_key *key)
+{
+ struct stksess *ts;
+
+ ts = stktable_lookup_key(table, key);
+ if (likely(ts))
+ return stktable_touch(table, ts, 1);
+
+ /* entry does not exist, initialize a new one */
+ ts = stksess_new(table, key);
+ if (likely(ts))
+ stktable_store(table, ts, 1);
+ return ts;
+}
+
+/*
+ * Looks in table <t> for a sticky session with same key as <ts>.
+ * Returns a pointer to the requested sticky session, or NULL if none was found.
+ */
+struct stksess *stktable_lookup(struct stktable *t, struct stksess *ts)
+{
+ struct ebmb_node *eb;
+
+ if (t->type == SMP_T_STR)
+ eb = ebst_lookup(&(t->keys), (char *)ts->key.key);
+ else
+ eb = ebmb_lookup(&(t->keys), ts->key.key, t->key_size);
+
+ if (unlikely(!eb))
+ return NULL;
+
+ return ebmb_entry(eb, struct stksess, key);
+}
+
+/* Update the expiration timer for <ts> but do not touch its expiration node.
+ * The table's expiration timer is updated if set.
+ */
+struct stksess *stktable_touch(struct stktable *t, struct stksess *ts, int local)
+{
+ struct eb32_node * eb;
+ ts->expire = tick_add(now_ms, MS_TO_TICKS(t->expire));
+ if (t->expire) {
+ t->exp_task->expire = t->exp_next = tick_first(ts->expire, t->exp_next);
+ task_queue(t->exp_task);
+ }
+
+ /* If sync is enabled and update is local */
+ if (t->sync_task && local) {
+ /* If this entry is not in the tree
+ or not scheduled for at least one peer */
+ if (!ts->upd.node.leaf_p
+ || (int)(t->commitupdate - ts->upd.key) >= 0
+ || (int)(ts->upd.key - t->localupdate) >= 0) {
+ ts->upd.key = ++t->update;
+ t->localupdate = t->update;
+ eb32_delete(&ts->upd);
+ eb = eb32_insert(&t->updates, &ts->upd);
+ if (eb != &ts->upd) {
+ eb32_delete(eb);
+ eb32_insert(&t->updates, &ts->upd);
+ }
+ }
+ task_wakeup(t->sync_task, TASK_WOKEN_MSG);
+ }
+ return ts;
+}
+
+/* Insert new sticky session <ts> in the table. It is assumed that it does not
+ * yet exist (the caller must check this). The table's timeout is updated if it
+ * is set. <ts> is returned.
+ */
+struct stksess *stktable_store(struct stktable *t, struct stksess *ts, int local)
+{
+ ebmb_insert(&t->keys, &ts->key, t->key_size);
+ stktable_touch(t, ts, local);
+ ts->exp.key = ts->expire;
+ eb32_insert(&t->exps, &ts->exp);
+ return ts;
+}
+
+/* Returns a valid or initialized stksess for the specified stktable_key in the
+ * specified table, or NULL if the key was NULL, or if no entry was found nor
+ * could be created. The entry's expiration is updated.
+ */
+struct stksess *stktable_get_entry(struct stktable *table, struct stktable_key *key)
+{
+ struct stksess *ts;
+
+ if (!key)
+ return NULL;
+
+ ts = stktable_lookup_key(table, key);
+ if (ts == NULL) {
+ /* entry does not exist, initialize a new one */
+ ts = stksess_new(table, key);
+ if (!ts)
+ return NULL;
+ stktable_store(table, ts, 1);
+ }
+ else
+ stktable_touch(table, ts, 1);
+ return ts;
+}
+
+/*
+ * Trash expired sticky sessions from table <t>. The next expiration date is
+ * returned.
+ */
+static int stktable_trash_expired(struct stktable *t)
+{
+ struct stksess *ts;
+ struct eb32_node *eb;
+ int looped = 0;
+
+ eb = eb32_lookup_ge(&t->exps, now_ms - TIMER_LOOK_BACK);
+
+ while (1) {
+ if (unlikely(!eb)) {
+ /* we might have reached the end of the tree, typically because
+ * <now_ms> is in the first half and we're first scanning the last
+ * half. Let's loop back to the beginning of the tree now if we
+ * have not yet visited it.
+ */
+ if (looped)
+ break;
+ looped = 1;
+ eb = eb32_first(&t->exps);
+ if (likely(!eb))
+ break;
+ }
+
+ if (likely(tick_is_lt(now_ms, eb->key))) {
+ /* timer not expired yet, revisit it later */
+ t->exp_next = eb->key;
+ return t->exp_next;
+ }
+
+ /* timer looks expired, detach it from the queue */
+ ts = eb32_entry(eb, struct stksess, exp);
+ eb = eb32_next(eb);
+
+ /* don't delete an entry which is currently referenced */
+ if (ts->ref_cnt)
+ continue;
+
+ eb32_delete(&ts->exp);
+
+ if (!tick_is_expired(ts->expire, now_ms)) {
+ if (!tick_isset(ts->expire))
+ continue;
+
+ ts->exp.key = ts->expire;
+ eb32_insert(&t->exps, &ts->exp);
+
+ if (!eb || eb->key > ts->exp.key)
+ eb = &ts->exp;
+ continue;
+ }
+
+ /* session expired, trash it */
+ ebmb_delete(&ts->key);
+ eb32_delete(&ts->upd);
+ stksess_free(t, ts);
+ }
+
+ /* We have found no task to expire in any tree */
+ t->exp_next = TICK_ETERNITY;
+ return t->exp_next;
+}
+
+/*
+ * Task processing function to trash expired sticky sessions. A pointer to the
+ * task itself is returned since it never dies.
+ */
+static struct task *process_table_expire(struct task *task)
+{
+ struct stktable *t = (struct stktable *)task->context;
+
+ task->expire = stktable_trash_expired(t);
+ return task;
+}
+
+/* Perform minimal stick table initializations, report 0 in case of error, 1 if OK. */
+int stktable_init(struct stktable *t)
+{
+ if (t->size) {
+ memset(&t->keys, 0, sizeof(t->keys));
+ memset(&t->exps, 0, sizeof(t->exps));
+ t->updates = EB_ROOT_UNIQUE;
+
+ t->pool = create_pool("sticktables", sizeof(struct stksess) + t->data_size + t->key_size, MEM_F_SHARED);
+
+ t->exp_next = TICK_ETERNITY;
+ if ( t->expire ) {
+ t->exp_task = task_new();
+ t->exp_task->process = process_table_expire;
+ t->exp_task->expire = TICK_ETERNITY;
+ t->exp_task->context = (void *)t;
+ }
+ if (t->peers.p && t->peers.p->peers_fe && t->peers.p->peers_fe->state != PR_STSTOPPED) {
+ peers_register_table(t->peers.p, t);
+ }
+
+ return t->pool != NULL;
+ }
+ return 1;
+}
+
+/*
+ * Configuration keywords of known table types
+ */
+struct stktable_type stktable_types[SMP_TYPES] = {
+ [SMP_T_SINT] = { "integer", 0, 4 },
+ [SMP_T_IPV4] = { "ip", 0, 4 },
+ [SMP_T_IPV6] = { "ipv6", 0, 16 },
+ [SMP_T_STR] = { "string", STK_F_CUSTOM_KEYSIZE, 32 },
+ [SMP_T_BIN] = { "binary", STK_F_CUSTOM_KEYSIZE, 32 }
+};
+
+/*
+ * Parse table type configuration.
+ * Returns 0 on successful parsing, else 1.
+ * <myidx> is advanced to the next <args> index to be parsed.
+ */
+int stktable_parse_type(char **args, int *myidx, unsigned long *type, size_t *key_size)
+{
+ for (*type = 0; *type < SMP_TYPES; (*type)++) {
+ if (!stktable_types[*type].kw)
+ continue;
+ if (strcmp(args[*myidx], stktable_types[*type].kw) != 0)
+ continue;
+
+ *key_size = stktable_types[*type].default_size;
+ (*myidx)++;
+
+ if (stktable_types[*type].flags & STK_F_CUSTOM_KEYSIZE) {
+ if (strcmp("len", args[*myidx]) == 0) {
+ (*myidx)++;
+ *key_size = atol(args[*myidx]);
+ if (!*key_size)
+ break;
+ if (*type == SMP_T_STR) {
+ /* null terminated string needs +1 for '\0'. */
+ (*key_size)++;
+ }
+ (*myidx)++;
+ }
+ }
+ return 0;
+ }
+ return 1;
+}
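The "type [len <n>]" grammar consumed above can be exercised standalone. This sketch mirrors the flow of stktable_parse_type() with a local table standing in for stktable_types[]; the `ex_` names, flags, and sizes are assumptions for the example only:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

static const struct {
	const char *kw;
	int custom_keysize;
	size_t default_size;
} ex_types[] = {
	{ "integer", 0, 4 }, { "ip", 0, 4 }, { "ipv6", 0, 16 },
	{ "string", 1, 32 }, { "binary", 1, 32 },
};

/* Parse args[*myidx]... as "type [len <n>]". Returns 0 on success, 1 on
 * error. <myidx> is advanced past the consumed words; <args> must be
 * NULL-terminated. */
static int ex_parse_type(char **args, int *myidx, int *type, size_t *key_size)
{
	for (*type = 0; *type < (int)(sizeof(ex_types) / sizeof(ex_types[0])); (*type)++) {
		if (strcmp(args[*myidx], ex_types[*type].kw) != 0)
			continue;

		*key_size = ex_types[*type].default_size;
		(*myidx)++;

		if (ex_types[*type].custom_keysize && args[*myidx] &&
		    strcmp(args[*myidx], "len") == 0) {
			(*myidx)++;
			*key_size = (size_t)atol(args[*myidx]);
			if (!*key_size)
				return 1;            /* invalid length */
			if (strcmp(ex_types[*type].kw, "string") == 0)
				(*key_size)++;       /* room for the trailing '\0' */
			(*myidx)++;
		}
		return 0;
	}
	return 1;                                    /* unknown type keyword */
}
```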
+
+/* Prepares a stktable_key from a sample <smp> to search into table <t>.
+ * Returns NULL if the sample could not be converted (eg: no matching type),
+ * otherwise a pointer to the static stktable_key filled with what is needed
+ * for the lookup.
+ */
+struct stktable_key *smp_to_stkey(struct sample *smp, struct stktable *t)
+{
+ /* Convert sample. */
+ if (!sample_convert(smp, t->type))
+ return NULL;
+
+ /* Fill static_table_key. */
+ switch (t->type) {
+
+ case SMP_T_IPV4:
+ static_table_key->key = &smp->data.u.ipv4;
+ static_table_key->key_len = 4;
+ break;
+
+ case SMP_T_IPV6:
+ static_table_key->key = &smp->data.u.ipv6;
+ static_table_key->key_len = 16;
+ break;
+
+ case SMP_T_SINT:
+ /* The stick table requires a 32-bit unsigned int, while "sint"
+ * is a signed 64-bit integer, so we can convert it in place.
+ */
+ *(unsigned int *)&smp->data.u.sint = (unsigned int)smp->data.u.sint;
+ static_table_key->key = &smp->data.u.sint;
+ static_table_key->key_len = 4;
+ break;
+
+ case SMP_T_STR:
+ /* Must be NULL terminated. */
+ if (smp->data.u.str.len >= smp->data.u.str.size ||
+ smp->data.u.str.str[smp->data.u.str.len] != '\0') {
+ if (!smp_dup(smp))
+ return NULL;
+ if (smp->data.u.str.len >= smp->data.u.str.size)
+ return NULL;
+ smp->data.u.str.str[smp->data.u.str.len] = '\0';
+ }
+ static_table_key->key = smp->data.u.str.str;
+ static_table_key->key_len = smp->data.u.str.len;
+ break;
+
+ case SMP_T_BIN:
+ if (smp->data.u.str.len < t->key_size) {
+ /* This type needs padding with 0. */
+ if (smp->data.u.str.size < t->key_size)
+ if (!smp_dup(smp))
+ return NULL;
+ if (smp->data.u.str.size < t->key_size)
+ return NULL;
+ memset(smp->data.u.str.str + smp->data.u.str.len, 0,
+ t->key_size - smp->data.u.str.len);
+ smp->data.u.str.len = t->key_size;
+ }
+ static_table_key->key = smp->data.u.str.str;
+ static_table_key->key_len = smp->data.u.str.len;
+ break;
+
+ default: /* impossible case. */
+ return NULL;
+ }
+
+ return static_table_key;
+}
+
+/*
+ * Process a fetch + format conversion as defined by the sample expression <expr>
+ * on request or response considering the <opt> parameter. Returns either NULL if
+ * no key could be extracted, or a pointer to the converted result stored in
+ * static_table_key in format <table_type>. If <smp> is not NULL, it will be reset
+ * and its flags will be initialized so that the caller gets a copy of the input
+ * sample, and knows why it was not accepted (eg: SMP_F_MAY_CHANGE is present
+ * without SMP_OPT_FINAL). The output will be usable like this :
+ *
+ * return MAY_CHANGE FINAL Meaning for the sample
+ * NULL 0 * Not present and will never be (eg: header)
+ * NULL 1 0 Not present or unstable, could change (eg: req_len)
+ * NULL 1 1 Not present, will not change anymore
+ * smp 0 * Present and will not change (eg: header)
+ * smp 1 0 not possible
+ * smp 1 1 Present, last known value (eg: request length)
+ */
+struct stktable_key *stktable_fetch_key(struct stktable *t, struct proxy *px, struct session *sess, struct stream *strm,
+ unsigned int opt, struct sample_expr *expr, struct sample *smp)
+{
+ if (smp)
+ memset(smp, 0, sizeof(*smp));
+
+ smp = sample_process(px, sess, strm, opt, expr, smp);
+ if (!smp)
+ return NULL;
+
+ if ((smp->flags & SMP_F_MAY_CHANGE) && !(opt & SMP_OPT_FINAL))
+ return NULL; /* we can only use stable samples */
+
+ return smp_to_stkey(smp, t);
+}
+
+/*
+ * Returns 1 if sample expression <expr> result can be converted to table key of
+ * type <table_type>, otherwise zero. Used in configuration check.
+ */
+int stktable_compatible_sample(struct sample_expr *expr, unsigned long table_type)
+{
+ int out_type;
+
+ if (table_type >= SMP_TYPES || !stktable_types[table_type].kw)
+ return 0;
+
+ out_type = smp_expr_output_type(expr);
+
+ /* Convert sample. */
+ if (!sample_casts[out_type][table_type])
+ return 0;
+
+ return 1;
+}
+
+/* Extra data types processing : after the last one, some room may remain
+ * before STKTABLE_DATA_TYPES that may be used to register extra data types
+ * at run time.
+ */
+struct stktable_data_type stktable_data_types[STKTABLE_DATA_TYPES] = {
+ [STKTABLE_DT_SERVER_ID] = { .name = "server_id", .std_type = STD_T_SINT },
+ [STKTABLE_DT_GPT0] = { .name = "gpt0", .std_type = STD_T_UINT },
+ [STKTABLE_DT_GPC0] = { .name = "gpc0", .std_type = STD_T_UINT },
+ [STKTABLE_DT_GPC0_RATE] = { .name = "gpc0_rate", .std_type = STD_T_FRQP, .arg_type = ARG_T_DELAY },
+ [STKTABLE_DT_CONN_CNT] = { .name = "conn_cnt", .std_type = STD_T_UINT },
+ [STKTABLE_DT_CONN_RATE] = { .name = "conn_rate", .std_type = STD_T_FRQP, .arg_type = ARG_T_DELAY },
+ [STKTABLE_DT_CONN_CUR] = { .name = "conn_cur", .std_type = STD_T_UINT },
+ [STKTABLE_DT_SESS_CNT] = { .name = "sess_cnt", .std_type = STD_T_UINT },
+ [STKTABLE_DT_SESS_RATE] = { .name = "sess_rate", .std_type = STD_T_FRQP, .arg_type = ARG_T_DELAY },
+ [STKTABLE_DT_HTTP_REQ_CNT] = { .name = "http_req_cnt", .std_type = STD_T_UINT },
+ [STKTABLE_DT_HTTP_REQ_RATE] = { .name = "http_req_rate", .std_type = STD_T_FRQP, .arg_type = ARG_T_DELAY },
+ [STKTABLE_DT_HTTP_ERR_CNT] = { .name = "http_err_cnt", .std_type = STD_T_UINT },
+ [STKTABLE_DT_HTTP_ERR_RATE] = { .name = "http_err_rate", .std_type = STD_T_FRQP, .arg_type = ARG_T_DELAY },
+ [STKTABLE_DT_BYTES_IN_CNT] = { .name = "bytes_in_cnt", .std_type = STD_T_ULL },
+ [STKTABLE_DT_BYTES_IN_RATE] = { .name = "bytes_in_rate", .std_type = STD_T_FRQP, .arg_type = ARG_T_DELAY },
+ [STKTABLE_DT_BYTES_OUT_CNT] = { .name = "bytes_out_cnt", .std_type = STD_T_ULL },
+ [STKTABLE_DT_BYTES_OUT_RATE]= { .name = "bytes_out_rate", .std_type = STD_T_FRQP, .arg_type = ARG_T_DELAY },
+};
+
+/* Registers stick-table extra data type with index <idx>, name <name>, type
+ * <std_type> and arg type <arg_type>. If the index is negative, the next free
+ * index is automatically allocated. The allocated index is returned, or -1 if
+ * no free index was found or <name> was already registered. The <name> is used
+ * directly as a pointer, so if it's not stable, the caller must allocate it.
+ */
+int stktable_register_data_store(int idx, const char *name, int std_type, int arg_type)
+{
+ if (idx < 0) {
+ for (idx = 0; idx < STKTABLE_DATA_TYPES; idx++) {
+ if (!stktable_data_types[idx].name)
+ break;
+
+ if (strcmp(stktable_data_types[idx].name, name) == 0)
+ return -1;
+ }
+ }
+
+ if (idx >= STKTABLE_DATA_TYPES)
+ return -1;
+
+ if (stktable_data_types[idx].name != NULL)
+ return -1;
+
+ stktable_data_types[idx].name = name;
+ stktable_data_types[idx].std_type = std_type;
+ stktable_data_types[idx].arg_type = arg_type;
+ return idx;
+}
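+
+/* Usage sketch for run-time registration (the index, name and helper shown
+ * here are purely illustrative, not part of this file):
+ *
+ *	static int my_dt;
+ *
+ *	my_dt = stktable_register_data_store(-1, "my_counter", STD_T_UINT, ARG_T_STOP);
+ *	if (my_dt < 0)
+ *		// no free slot left, or "my_counter" already registered
+ *
+ * The returned index can later be passed to stktable_data_ptr() to access the
+ * stored value for an entry.
+ */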
+
+/*
+ * Returns the data type number for the stktable_data_type whose name is <name>,
+ * or <0 if not found.
+ */
+int stktable_get_data_type(char *name)
+{
+ int type;
+
+ for (type = 0; type < STKTABLE_DATA_TYPES; type++) {
+ if (!stktable_data_types[type].name)
+ continue;
+ if (strcmp(name, stktable_data_types[type].name) == 0)
+ return type;
+ }
+ return -1;
+}
+
+/* Casts sample <smp> to the type of the table specified in arg(0), and looks
+ * it up into this table. Returns true if found, false otherwise. The input
+ * type is STR so that input samples are converted to string (since all types
+ * can be converted to strings), then the function casts the string again into
+ * the table's type. This is a double conversion, but in the future we might
+ * support automatic input types to perform the cast on the fly.
+ */
+static int sample_conv_in_table(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct stktable *t;
+ struct stktable_key *key;
+ struct stksess *ts;
+
+ t = &arg_p[0].data.prx->table;
+
+ key = smp_to_stkey(smp, t);
+ if (!key)
+ return 0;
+
+ ts = stktable_lookup_key(t, key);
+
+ smp->data.type = SMP_T_BOOL;
+ smp->data.u.sint = !!ts;
+ smp->flags = SMP_F_VOL_TEST;
+ return 1;
+}
+
+/* Casts sample <smp> to the type of the table specified in arg(0), and looks
+ * it up into this table. Returns the data rate received from clients in bytes/s
+ * if the key is present in the table, otherwise zero, so that comparisons can
+ * be easily performed. If the inspected parameter is not stored in the table,
+ * <not found> is returned.
+ */
+static int sample_conv_table_bytes_in_rate(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct stktable *t;
+ struct stktable_key *key;
+ struct stksess *ts;
+ void *ptr;
+
+ t = &arg_p[0].data.prx->table;
+
+ key = smp_to_stkey(smp, t);
+ if (!key)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ ts = stktable_lookup_key(t, key);
+ if (!ts) /* key not present */
+ return 1;
+
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_BYTES_IN_RATE);
+ if (!ptr)
+ return 0; /* parameter not stored */
+
+ smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, bytes_in_rate),
+ t->data_arg[STKTABLE_DT_BYTES_IN_RATE].u);
+ return 1;
+}
+
+/* Casts sample <smp> to the type of the table specified in arg(0), and looks
+ * it up into this table. Returns the cumulated number of connections for the key
+ * if the key is present in the table, otherwise zero, so that comparisons can
+ * be easily performed. If the inspected parameter is not stored in the table,
+ * <not found> is returned.
+ */
+static int sample_conv_table_conn_cnt(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct stktable *t;
+ struct stktable_key *key;
+ struct stksess *ts;
+ void *ptr;
+
+ t = &arg_p[0].data.prx->table;
+
+ key = smp_to_stkey(smp, t);
+ if (!key)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ ts = stktable_lookup_key(t, key);
+ if (!ts) /* key not present */
+ return 1;
+
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_CONN_CNT);
+ if (!ptr)
+ return 0; /* parameter not stored */
+
+ smp->data.u.sint = stktable_data_cast(ptr, conn_cnt);
+ return 1;
+}
+
+/* Casts sample <smp> to the type of the table specified in arg(0), and looks
+ * it up into this table. Returns the number of concurrent connections for the
+ * key if the key is present in the table, otherwise zero, so that comparisons
+ * can be easily performed. If the inspected parameter is not stored in the
+ * table, <not found> is returned.
+ */
+static int sample_conv_table_conn_cur(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct stktable *t;
+ struct stktable_key *key;
+ struct stksess *ts;
+ void *ptr;
+
+ t = &arg_p[0].data.prx->table;
+
+ key = smp_to_stkey(smp, t);
+ if (!key)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ ts = stktable_lookup_key(t, key);
+ if (!ts) /* key not present */
+ return 1;
+
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_CONN_CUR);
+ if (!ptr)
+ return 0; /* parameter not stored */
+
+ smp->data.u.sint = stktable_data_cast(ptr, conn_cur);
+ return 1;
+}
+
+/* Casts sample <smp> to the type of the table specified in arg(0), and looks
+ * it up into this table. Returns the rate of incoming connections from the key
+ * if the key is present in the table, otherwise zero, so that comparisons can
+ * be easily performed. If the inspected parameter is not stored in the table,
+ * <not found> is returned.
+ */
+static int sample_conv_table_conn_rate(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct stktable *t;
+ struct stktable_key *key;
+ struct stksess *ts;
+ void *ptr;
+
+ t = &arg_p[0].data.prx->table;
+
+ key = smp_to_stkey(smp, t);
+ if (!key)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ ts = stktable_lookup_key(t, key);
+ if (!ts) /* key not present */
+ return 1;
+
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_CONN_RATE);
+ if (!ptr)
+ return 0; /* parameter not stored */
+
+ smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, conn_rate),
+ t->data_arg[STKTABLE_DT_CONN_RATE].u);
+ return 1;
+}
+
+/* Casts sample <smp> to the type of the table specified in arg(0), and looks
+ * it up into this table. Returns the data rate sent to clients in bytes/s
+ * if the key is present in the table, otherwise zero, so that comparisons can
+ * be easily performed. If the inspected parameter is not stored in the table,
+ * <not found> is returned.
+ */
+static int sample_conv_table_bytes_out_rate(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct stktable *t;
+ struct stktable_key *key;
+ struct stksess *ts;
+ void *ptr;
+
+ t = &arg_p[0].data.prx->table;
+
+ key = smp_to_stkey(smp, t);
+ if (!key)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ ts = stktable_lookup_key(t, key);
+ if (!ts) /* key not present */
+ return 1;
+
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_BYTES_OUT_RATE);
+ if (!ptr)
+ return 0; /* parameter not stored */
+
+ smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, bytes_out_rate),
+ t->data_arg[STKTABLE_DT_BYTES_OUT_RATE].u);
+ return 1;
+}
+
+/* Casts sample <smp> to the type of the table specified in arg(0), and looks
+ * it up into this table. Returns the value of the GPT0 tag for the key
+ * if the key is present in the table, otherwise zero, so that comparisons can
+ * be easily performed. If the inspected parameter is not stored in the table,
+ * <not found> is returned.
+ */
+static int sample_conv_table_gpt0(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct stktable *t;
+ struct stktable_key *key;
+ struct stksess *ts;
+ void *ptr;
+
+ t = &arg_p[0].data.prx->table;
+
+ key = smp_to_stkey(smp, t);
+ if (!key)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ ts = stktable_lookup_key(t, key);
+ if (!ts) /* key not present */
+ return 1;
+
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_GPT0);
+ if (!ptr)
+ return 0; /* parameter not stored */
+
+ smp->data.u.sint = stktable_data_cast(ptr, gpt0);
+ return 1;
+}
+
+/* Casts sample <smp> to the type of the table specified in arg(0), and looks
+ * it up into this table. Returns the value of the GPC0 counter for the key
+ * if the key is present in the table, otherwise zero, so that comparisons can
+ * be easily performed. If the inspected parameter is not stored in the table,
+ * <not found> is returned.
+ */
+static int sample_conv_table_gpc0(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct stktable *t;
+ struct stktable_key *key;
+ struct stksess *ts;
+ void *ptr;
+
+ t = &arg_p[0].data.prx->table;
+
+ key = smp_to_stkey(smp, t);
+ if (!key)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ ts = stktable_lookup_key(t, key);
+ if (!ts) /* key not present */
+ return 1;
+
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_GPC0);
+ if (!ptr)
+ return 0; /* parameter not stored */
+
+ smp->data.u.sint = stktable_data_cast(ptr, gpc0);
+ return 1;
+}
+
+/* Casts sample <smp> to the type of the table specified in arg(0), and looks
+ * it up into this table. Returns the event rate of the GPC0 counter for the key
+ * if the key is present in the table, otherwise zero, so that comparisons can
+ * be easily performed. If the inspected parameter is not stored in the table,
+ * <not found> is returned.
+ */
+static int sample_conv_table_gpc0_rate(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct stktable *t;
+ struct stktable_key *key;
+ struct stksess *ts;
+ void *ptr;
+
+ t = &arg_p[0].data.prx->table;
+
+ key = smp_to_stkey(smp, t);
+ if (!key)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ ts = stktable_lookup_key(t, key);
+ if (!ts) /* key not present */
+ return 1;
+
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_GPC0_RATE);
+ if (!ptr)
+ return 0; /* parameter not stored */
+
+ smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, gpc0_rate),
+ t->data_arg[STKTABLE_DT_GPC0_RATE].u);
+ return 1;
+}
+
+/* Casts sample <smp> to the type of the table specified in arg(0), and looks
+ * it up into this table. Returns the cumulated number of HTTP request errors
+ * for the key if the key is present in the table, otherwise zero, so that
+ * comparisons can be easily performed. If the inspected parameter is not stored
+ * in the table, <not found> is returned.
+ */
+static int sample_conv_table_http_err_cnt(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct stktable *t;
+ struct stktable_key *key;
+ struct stksess *ts;
+ void *ptr;
+
+ t = &arg_p[0].data.prx->table;
+
+ key = smp_to_stkey(smp, t);
+ if (!key)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ ts = stktable_lookup_key(t, key);
+ if (!ts) /* key not present */
+ return 1;
+
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_HTTP_ERR_CNT);
+ if (!ptr)
+ return 0; /* parameter not stored */
+
+ smp->data.u.sint = stktable_data_cast(ptr, http_err_cnt);
+ return 1;
+}
+
+/* Casts sample <smp> to the type of the table specified in arg(0), and looks
+ * it up into this table. Returns the HTTP request error rate for the key
+ * if the key is present in the table, otherwise zero, so that comparisons can
+ * be easily performed. If the inspected parameter is not stored in the table,
+ * <not found> is returned.
+ */
+static int sample_conv_table_http_err_rate(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct stktable *t;
+ struct stktable_key *key;
+ struct stksess *ts;
+ void *ptr;
+
+ t = &arg_p[0].data.prx->table;
+
+ key = smp_to_stkey(smp, t);
+ if (!key)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ ts = stktable_lookup_key(t, key);
+ if (!ts) /* key not present */
+ return 1;
+
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_HTTP_ERR_RATE);
+ if (!ptr)
+ return 0; /* parameter not stored */
+
+ smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, http_err_rate),
+ t->data_arg[STKTABLE_DT_HTTP_ERR_RATE].u);
+ return 1;
+}
+
+/* Casts sample <smp> to the type of the table specified in arg(0), and looks
+ * it up into this table. Returns the cumulated number of HTTP request for the
+ * key if the key is present in the table, otherwise zero, so that comparisons
+ * can be easily performed. If the inspected parameter is not stored in the
+ * table, <not found> is returned.
+ */
+static int sample_conv_table_http_req_cnt(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct stktable *t;
+ struct stktable_key *key;
+ struct stksess *ts;
+ void *ptr;
+
+ t = &arg_p[0].data.prx->table;
+
+ key = smp_to_stkey(smp, t);
+ if (!key)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ ts = stktable_lookup_key(t, key);
+ if (!ts) /* key not present */
+ return 1;
+
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_HTTP_REQ_CNT);
+ if (!ptr)
+ return 0; /* parameter not stored */
+
+ smp->data.u.sint = stktable_data_cast(ptr, http_req_cnt);
+ return 1;
+}
+
+/* Casts sample <smp> to the type of the table specified in arg(0), and looks
+ * it up into this table. Returns the HTTP request rate for the key if the key is
+ * present in the table, otherwise zero, so that comparisons can be easily
+ * performed. If the inspected parameter is not stored in the table, <not found>
+ * is returned.
+ */
+static int sample_conv_table_http_req_rate(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct stktable *t;
+ struct stktable_key *key;
+ struct stksess *ts;
+ void *ptr;
+
+ t = &arg_p[0].data.prx->table;
+
+ key = smp_to_stkey(smp, t);
+ if (!key)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ ts = stktable_lookup_key(t, key);
+ if (!ts) /* key not present */
+ return 1;
+
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_HTTP_REQ_RATE);
+ if (!ptr)
+ return 0; /* parameter not stored */
+
+ smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, http_req_rate),
+ t->data_arg[STKTABLE_DT_HTTP_REQ_RATE].u);
+ return 1;
+}
+
+/* Casts sample <smp> to the type of the table specified in arg(0), and looks
+ * it up into this table. Returns the volume of data received from clients in kbytes
+ * if the key is present in the table, otherwise zero, so that comparisons can
+ * be easily performed. If the inspected parameter is not stored in the table,
+ * <not found> is returned.
+ */
+static int sample_conv_table_kbytes_in(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct stktable *t;
+ struct stktable_key *key;
+ struct stksess *ts;
+ void *ptr;
+
+ t = &arg_p[0].data.prx->table;
+
+ key = smp_to_stkey(smp, t);
+ if (!key)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ ts = stktable_lookup_key(t, key);
+ if (!ts) /* key not present */
+ return 1;
+
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_BYTES_IN_CNT);
+ if (!ptr)
+ return 0; /* parameter not stored */
+
+ smp->data.u.sint = stktable_data_cast(ptr, bytes_in_cnt) >> 10;
+ return 1;
+}
+
+/* Casts sample <smp> to the type of the table specified in arg(0), and looks
+ * it up into this table. Returns the volume of data sent to clients in kbytes
+ * if the key is present in the table, otherwise zero, so that comparisons can
+ * be easily performed. If the inspected parameter is not stored in the table,
+ * <not found> is returned.
+ */
+static int sample_conv_table_kbytes_out(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct stktable *t;
+ struct stktable_key *key;
+ struct stksess *ts;
+ void *ptr;
+
+ t = &arg_p[0].data.prx->table;
+
+ key = smp_to_stkey(smp, t);
+ if (!key)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ ts = stktable_lookup_key(t, key);
+ if (!ts) /* key not present */
+ return 1;
+
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_BYTES_OUT_CNT);
+ if (!ptr)
+ return 0; /* parameter not stored */
+
+ smp->data.u.sint = stktable_data_cast(ptr, bytes_out_cnt) >> 10;
+ return 1;
+}
+
+/* Casts sample <smp> to the type of the table specified in arg(0), and looks
+ * it up into this table. Returns the server ID associated with the key if the
+ * key is present in the table, otherwise zero, so that comparisons can be
+ * easily performed. If the inspected parameter is not stored in the table,
+ * <not found> is returned.
+ */
+static int sample_conv_table_server_id(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct stktable *t;
+ struct stktable_key *key;
+ struct stksess *ts;
+ void *ptr;
+
+ t = &arg_p[0].data.prx->table;
+
+ key = smp_to_stkey(smp, t);
+ if (!key)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ ts = stktable_lookup_key(t, key);
+ if (!ts) /* key not present */
+ return 1;
+
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_SERVER_ID);
+ if (!ptr)
+ return 0; /* parameter not stored */
+
+ smp->data.u.sint = stktable_data_cast(ptr, server_id);
+ return 1;
+}
+
+/* Casts sample <smp> to the type of the table specified in arg(0), and looks
+ * it up into this table. Returns the cumulated number of sessions for the
+ * key if the key is present in the table, otherwise zero, so that comparisons
+ * can be easily performed. If the inspected parameter is not stored in the
+ * table, <not found> is returned.
+ */
+static int sample_conv_table_sess_cnt(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct stktable *t;
+ struct stktable_key *key;
+ struct stksess *ts;
+ void *ptr;
+
+ t = &arg_p[0].data.prx->table;
+
+ key = smp_to_stkey(smp, t);
+ if (!key)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ ts = stktable_lookup_key(t, key);
+ if (!ts) /* key not present */
+ return 1;
+
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_SESS_CNT);
+ if (!ptr)
+ return 0; /* parameter not stored */
+
+ smp->data.u.sint = stktable_data_cast(ptr, sess_cnt);
+ return 1;
+}
+
+/* Casts sample <smp> to the type of the table specified in arg(0), and looks
+ * it up into this table. Returns the session rate for the key if the key is
+ * present in the table, otherwise zero, so that comparisons can be easily
+ * performed. If the inspected parameter is not stored in the table, <not found>
+ * is returned.
+ */
+static int sample_conv_table_sess_rate(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct stktable *t;
+ struct stktable_key *key;
+ struct stksess *ts;
+ void *ptr;
+
+ t = &arg_p[0].data.prx->table;
+
+ key = smp_to_stkey(smp, t);
+ if (!key)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ ts = stktable_lookup_key(t, key);
+ if (!ts) /* key not present */
+ return 1;
+
+ ptr = stktable_data_ptr(t, ts, STKTABLE_DT_SESS_RATE);
+ if (!ptr)
+ return 0; /* parameter not stored */
+
+ smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, sess_rate),
+ t->data_arg[STKTABLE_DT_SESS_RATE].u);
+ return 1;
+}
+
+/* Casts sample <smp> to the type of the table specified in arg(0), and looks
+ * it up into this table. Returns the amount of concurrent connections tracking
+ * the same key if the key is present in the table, otherwise zero, so that
+ * comparisons can be easily performed. If the inspected parameter is not
+ * stored in the table, <not found> is returned.
+ */
+static int sample_conv_table_trackers(const struct arg *arg_p, struct sample *smp, void *private)
+{
+ struct stktable *t;
+ struct stktable_key *key;
+ struct stksess *ts;
+
+ t = &arg_p[0].data.prx->table;
+
+ key = smp_to_stkey(smp, t);
+ if (!key)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ ts = stktable_lookup_key(t, key);
+ if (ts)
+ smp->data.u.sint = ts->ref_cnt;
+
+ return 1;
+}
+
+/* Increments the gpc0 value in the tracked entry. Always returns ACT_RET_CONT. */
+static enum act_return action_inc_gpc0(struct act_rule *rule, struct proxy *px,
+ struct session *sess, struct stream *s, int flags)
+{
+ void *ptr;
+ struct stksess *ts;
+ struct stkctr *stkctr;
+
+ /* Extract the stksess, return OK if no stksess available. */
+ if (s)
+ stkctr = &s->stkctr[rule->arg.gpc.sc];
+ else
+ stkctr = &sess->stkctr[rule->arg.gpc.sc];
+ ts = stkctr_entry(stkctr);
+ if (!ts)
+ return ACT_RET_CONT;
+
+ /* Store the sample in the required sc, and ignore errors. */
+ ptr = stktable_data_ptr(stkctr->table, ts, STKTABLE_DT_GPC0);
+ if (!ptr)
+ return ACT_RET_CONT;
+
+ stktable_data_cast(ptr, gpc0)++;
+ return ACT_RET_CONT;
+}
+
+/* This function parses the "sc-inc-gpc0" action. It understands the format:
+ *
+ *   sc-inc-gpc0(<stick-table track ID>)
+ *
+ * It returns ACT_RET_PRS_ERR if it fails, and <err> is filled with an error
+ * message. Otherwise it returns ACT_RET_PRS_OK and sets the rule up to run
+ * action_inc_gpc0().
+ */
+static enum act_parse_ret parse_inc_gpc0(const char **args, int *arg, struct proxy *px,
+ struct act_rule *rule, char **err)
+{
+ const char *cmd_name = args[*arg-1];
+ char *error;
+
+ cmd_name += strlen("sc-inc-gpc0");
+ if (*cmd_name == '\0') {
+ /* default stick table id. */
+ rule->arg.gpc.sc = 0;
+ } else {
+ /* parse the stick table id. */
+ if (*cmd_name != '(') {
+ memprintf(err, "invalid stick table track ID. Expects %s(<Track ID>)", args[*arg-1]);
+ return ACT_RET_PRS_ERR;
+ }
+ cmd_name++; /* jump the '(' */
+ rule->arg.gpc.sc = strtol(cmd_name, &error, 10); /* Convert stick table id. */
+ if (*error != ')') {
+ memprintf(err, "invalid stick table track ID. Expects %s(<Track ID>)", args[*arg-1]);
+ return ACT_RET_PRS_ERR;
+ }
+
+ if (rule->arg.gpc.sc >= ACT_ACTION_TRK_SCMAX) {
+ memprintf(err, "invalid stick table track ID. The max allowed ID is %d",
+ ACT_ACTION_TRK_SCMAX-1);
+ return ACT_RET_PRS_ERR;
+ }
+ }
+ rule->action = ACT_CUSTOM;
+ rule->action_ptr = action_inc_gpc0;
+ return ACT_RET_PRS_OK;
+}
+
+/* Sets the gpt0 tag in the tracked entry to the configured value. Always
+ * returns ACT_RET_CONT. */
+static enum act_return action_set_gpt0(struct act_rule *rule, struct proxy *px,
+ struct session *sess, struct stream *s, int flags)
+{
+ void *ptr;
+ struct stksess *ts;
+ struct stkctr *stkctr;
+
+ /* Extract the stksess, return OK if no stksess available. */
+ if (s)
+ stkctr = &s->stkctr[rule->arg.gpt.sc];
+ else
+ stkctr = &sess->stkctr[rule->arg.gpt.sc];
+ ts = stkctr_entry(stkctr);
+ if (!ts)
+ return ACT_RET_CONT;
+
+ /* Store the sample in the required sc, and ignore errors. */
+ ptr = stktable_data_ptr(stkctr->table, ts, STKTABLE_DT_GPT0);
+ if (ptr)
+ stktable_data_cast(ptr, gpt0) = rule->arg.gpt.value;
+ return ACT_RET_CONT;
+}
+
+/* This function parses the "sc-set-gpt0" action. It understands the format:
+ *
+ *   sc-set-gpt0(<stick-table track ID>) <integer value>
+ *
+ * It returns ACT_RET_PRS_ERR if it fails, and <err> is filled with an error
+ * message. Otherwise it returns ACT_RET_PRS_OK and sets the rule up to run
+ * action_set_gpt0().
+ */
+static enum act_parse_ret parse_set_gpt0(const char **args, int *arg, struct proxy *px,
+ struct act_rule *rule, char **err)
+{
+ const char *cmd_name = args[*arg-1];
+ char *error;
+
+ cmd_name += strlen("sc-set-gpt0");
+ if (*cmd_name == '\0') {
+ /* default stick table id. */
+ rule->arg.gpt.sc = 0;
+ } else {
+ /* parse the stick table id. */
+ if (*cmd_name != '(') {
+ memprintf(err, "invalid stick table track ID '%s'. Expects sc-set-gpt0(<Track ID>)", args[*arg-1]);
+ return ACT_RET_PRS_ERR;
+ }
+ cmd_name++; /* jump the '(' */
+ rule->arg.gpt.sc = strtol(cmd_name, &error, 10); /* Convert stick table id. */
+ if (*error != ')') {
+ memprintf(err, "invalid stick table track ID '%s'. Expects sc-set-gpt0(<Track ID>)", args[*arg-1]);
+ return ACT_RET_PRS_ERR;
+ }
+
+ if (rule->arg.gpt.sc >= ACT_ACTION_TRK_SCMAX) {
+ memprintf(err, "invalid stick table track ID '%s'. The max allowed ID is %d",
+ args[*arg-1], ACT_ACTION_TRK_SCMAX-1);
+ return ACT_RET_PRS_ERR;
+ }
+ }
+
+ rule->arg.gpt.value = strtol(args[*arg], &error, 10);
+ if (*error != '\0') {
+ memprintf(err, "invalid integer value '%s'", args[*arg]);
+ return ACT_RET_PRS_ERR;
+ }
+ (*arg)++;
+
+ rule->action = ACT_CUSTOM;
+ rule->action_ptr = action_set_gpt0;
+
+ return ACT_RET_PRS_OK;
+}
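+
+/* Configuration sketch (illustrative, not taken from this file): once a key
+ * is tracked in a stick counter, these actions can be applied to it, e.g.:
+ *
+ *	tcp-request content track-sc0 src
+ *	tcp-request content sc-inc-gpc0(0)
+ *	http-request sc-set-gpt0(0) 1 if some_acl
+ */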
+
+static struct action_kw_list tcp_conn_kws = { { }, {
+ { "sc-inc-gpc0", parse_inc_gpc0, 1 },
+ { "sc-set-gpt0", parse_set_gpt0, 1 },
+ { /* END */ }
+}};
+
+static struct action_kw_list tcp_req_kws = { { }, {
+ { "sc-inc-gpc0", parse_inc_gpc0, 1 },
+ { "sc-set-gpt0", parse_set_gpt0, 1 },
+ { /* END */ }
+}};
+
+static struct action_kw_list tcp_res_kws = { { }, {
+ { "sc-inc-gpc0", parse_inc_gpc0, 1 },
+ { "sc-set-gpt0", parse_set_gpt0, 1 },
+ { /* END */ }
+}};
+
+static struct action_kw_list http_req_kws = { { }, {
+ { "sc-inc-gpc0", parse_inc_gpc0, 1 },
+ { "sc-set-gpt0", parse_set_gpt0, 1 },
+ { /* END */ }
+}};
+
+static struct action_kw_list http_res_kws = { { }, {
+ { "sc-inc-gpc0", parse_inc_gpc0, 1 },
+ { "sc-set-gpt0", parse_set_gpt0, 1 },
+ { /* END */ }
+}};
+
+/* Note: must not be declared <const> as its list will be overwritten */
+static struct sample_conv_kw_list sample_conv_kws = {ILH, {
+ { "in_table", sample_conv_in_table, ARG1(1,TAB), NULL, SMP_T_STR, SMP_T_BOOL },
+ { "table_bytes_in_rate", sample_conv_table_bytes_in_rate, ARG1(1,TAB), NULL, SMP_T_STR, SMP_T_SINT },
+ { "table_bytes_out_rate", sample_conv_table_bytes_out_rate, ARG1(1,TAB), NULL, SMP_T_STR, SMP_T_SINT },
+ { "table_conn_cnt", sample_conv_table_conn_cnt, ARG1(1,TAB), NULL, SMP_T_STR, SMP_T_SINT },
+ { "table_conn_cur", sample_conv_table_conn_cur, ARG1(1,TAB), NULL, SMP_T_STR, SMP_T_SINT },
+ { "table_conn_rate", sample_conv_table_conn_rate, ARG1(1,TAB), NULL, SMP_T_STR, SMP_T_SINT },
+ { "table_gpt0", sample_conv_table_gpt0, ARG1(1,TAB), NULL, SMP_T_STR, SMP_T_SINT },
+ { "table_gpc0", sample_conv_table_gpc0, ARG1(1,TAB), NULL, SMP_T_STR, SMP_T_SINT },
+ { "table_gpc0_rate", sample_conv_table_gpc0_rate, ARG1(1,TAB), NULL, SMP_T_STR, SMP_T_SINT },
+ { "table_http_err_cnt", sample_conv_table_http_err_cnt, ARG1(1,TAB), NULL, SMP_T_STR, SMP_T_SINT },
+ { "table_http_err_rate", sample_conv_table_http_err_rate, ARG1(1,TAB), NULL, SMP_T_STR, SMP_T_SINT },
+ { "table_http_req_cnt", sample_conv_table_http_req_cnt, ARG1(1,TAB), NULL, SMP_T_STR, SMP_T_SINT },
+ { "table_http_req_rate", sample_conv_table_http_req_rate, ARG1(1,TAB), NULL, SMP_T_STR, SMP_T_SINT },
+ { "table_kbytes_in", sample_conv_table_kbytes_in, ARG1(1,TAB), NULL, SMP_T_STR, SMP_T_SINT },
+ { "table_kbytes_out", sample_conv_table_kbytes_out, ARG1(1,TAB), NULL, SMP_T_STR, SMP_T_SINT },
+ { "table_server_id", sample_conv_table_server_id, ARG1(1,TAB), NULL, SMP_T_STR, SMP_T_SINT },
+ { "table_sess_cnt", sample_conv_table_sess_cnt, ARG1(1,TAB), NULL, SMP_T_STR, SMP_T_SINT },
+ { "table_sess_rate", sample_conv_table_sess_rate, ARG1(1,TAB), NULL, SMP_T_STR, SMP_T_SINT },
+ { "table_trackers", sample_conv_table_trackers, ARG1(1,TAB), NULL, SMP_T_STR, SMP_T_SINT },
+ { /* END */ },
+}};
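+
+/* Configuration sketch using these converters (the table and ACL names are
+ * illustrative): a sample is cast to the table's key type, looked up, and the
+ * requested stored data is returned, so it can be matched directly:
+ *
+ *	backend st_src
+ *		stick-table type ip size 1m expire 10m store conn_rate(10s)
+ *
+ *	frontend fe
+ *		acl abuser src,table_conn_rate(st_src) gt 100
+ *		tcp-request connection reject if abuser
+ */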
+
+__attribute__((constructor))
+static void __stick_table_init(void)
+{
+	/* register some action keywords. */
+ tcp_req_conn_keywords_register(&tcp_conn_kws);
+ tcp_req_cont_keywords_register(&tcp_req_kws);
+ tcp_res_cont_keywords_register(&tcp_res_kws);
+ http_req_keywords_register(&http_req_kws);
+ http_res_keywords_register(&http_res_kws);
+
+ /* register sample fetch and format conversion keywords */
+ sample_register_convs(&sample_conv_kws);
+}
--- /dev/null
+/*
+ * Stream management functions.
+ *
+ * Copyright 2000-2012 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <stdlib.h>
+#include <unistd.h>
+#include <fcntl.h>
+
+#include <common/cfgparse.h>
+#include <common/config.h>
+#include <common/buffer.h>
+#include <common/debug.h>
+#include <common/memory.h>
+
+#include <types/applet.h>
+#include <types/capture.h>
+#include <types/global.h>
+
+#include <proto/acl.h>
+#include <proto/action.h>
+#include <proto/arg.h>
+#include <proto/backend.h>
+#include <proto/channel.h>
+#include <proto/checks.h>
+#include <proto/connection.h>
+#include <proto/dumpstats.h>
+#include <proto/fd.h>
+#include <proto/freq_ctr.h>
+#include <proto/frontend.h>
+#include <proto/hdr_idx.h>
+#include <proto/hlua.h>
+#include <proto/listener.h>
+#include <proto/log.h>
+#include <proto/raw_sock.h>
+#include <proto/session.h>
+#include <proto/stream.h>
+#include <proto/pipe.h>
+#include <proto/proto_http.h>
+#include <proto/proto_tcp.h>
+#include <proto/proxy.h>
+#include <proto/queue.h>
+#include <proto/server.h>
+#include <proto/sample.h>
+#include <proto/stick_table.h>
+#include <proto/stream_interface.h>
+#include <proto/task.h>
+#include <proto/vars.h>
+
+struct pool_head *pool2_stream;
+struct list streams;
+
+/* list of streams waiting for at least one buffer */
+struct list buffer_wq = LIST_HEAD_INIT(buffer_wq);
+
+/* List of all use-service keywords. */
+static struct list service_keywords = LIST_HEAD_INIT(service_keywords);
+
+/* This function is called from the session handler which detects the end of
+ * handshake, in order to complete initialization of a valid stream. It must be
+ * called with a session (which may be embryonic). It returns the pointer to
+ * the newly created stream, or NULL in case of fatal error. The client-facing
+ * end point is assigned to <origin>, which must be valid. The task's context
+ * is set to the new stream, and its function is set to process_stream().
+ * Target and analysers are null.
+ */
+struct stream *stream_new(struct session *sess, struct task *t, enum obj_type *origin)
+{
+ struct stream *s;
+ struct connection *conn = objt_conn(origin);
+ struct appctx *appctx = objt_appctx(origin);
+
+ if (unlikely((s = pool_alloc2(pool2_stream)) == NULL))
+ return s;
+
+ /* minimum stream initialization required for an embryonic stream is
+ * fairly low. We need very little to execute L4 ACLs, then we need a
+ * task to make the client-side connection live on its own.
+ * - flags
+ * - stick-entry tracking
+ */
+ s->flags = 0;
+ s->logs.logwait = sess->fe->to_log;
+ s->logs.level = 0;
+ s->logs.accept_date = sess->accept_date; /* user-visible date for logging */
+ s->logs.tv_accept = sess->tv_accept; /* corrected date for internal use */
+ tv_zero(&s->logs.tv_request);
+ s->logs.t_queue = -1;
+ s->logs.t_connect = -1;
+ s->logs.t_data = -1;
+ s->logs.t_close = 0;
+ s->logs.bytes_in = s->logs.bytes_out = 0;
+ s->logs.prx_queue_size = 0; /* we get the number of pending conns before us */
+ s->logs.srv_queue_size = 0; /* we will get this number soon */
+
+ /* default logging function */
+ s->do_log = strm_log;
+
+ /* default error reporting function, may be changed by analysers */
+ s->srv_error = default_srv_error;
+
+ /* Initialise the current rule list pointer to NULL. We know that
+ * no rule list can ever match the NULL pointer.
+ */
+ s->current_rule_list = NULL;
+ s->current_rule = NULL;
+
+ /* Copy SC counters for the stream. We don't touch refcounts because
+ * any reference we have is inherited from the session. Since the stream
+ * doesn't exist without the session, the session's existence guarantees
+ * we don't lose the entry. During the store operation, the stream won't
+ * touch these ones.
+ */
+ memcpy(s->stkctr, sess->stkctr, sizeof(s->stkctr));
+
+ s->sess = sess;
+ s->si[0].flags = SI_FL_NONE;
+ s->si[1].flags = SI_FL_ISBACK;
+
+ s->uniq_id = global.req_count++;
+
+ /* OK, we're keeping the stream, so let's properly initialize it */
+ LIST_ADDQ(&streams, &s->list);
+ LIST_INIT(&s->back_refs);
+ LIST_INIT(&s->buffer_wait);
+
+ s->flags |= SF_INITIALIZED;
+ s->unique_id = NULL;
+
+ s->task = t;
+ t->process = process_stream;
+ t->context = s;
+ t->expire = TICK_ETERNITY;
+
+ /* Note: initially, the stream's backend points to the frontend.
+ * This changes later when switching rules are executed or
+ * when the default backend is assigned.
+ */
+ s->be = sess->fe;
+ s->comp_algo = NULL;
+ s->req.buf = s->res.buf = NULL;
+ s->req_cap = NULL;
+ s->res_cap = NULL;
+
+ /* Initialise all the variable contexts even if they are not used.
+ * This allows these contexts to be pruned later without errors.
+ */
+ vars_init(&s->vars_txn, SCOPE_TXN);
+ vars_init(&s->vars_reqres, SCOPE_REQ);
+
+ /* this part should be common with other protocols */
+ si_reset(&s->si[0]);
+ si_set_state(&s->si[0], SI_ST_EST);
+
+ /* attach the incoming connection to the stream interface now. */
+ if (conn)
+ si_attach_conn(&s->si[0], conn);
+ else if (appctx)
+ si_attach_appctx(&s->si[0], appctx);
+
+ if (likely(sess->fe->options2 & PR_O2_INDEPSTR))
+ s->si[0].flags |= SI_FL_INDEP_STR;
+
+ /* pre-initialize the other side's stream interface to an INIT state. The
+ * callbacks will be initialized before attempting to connect.
+ */
+ si_reset(&s->si[1]);
+
+ if (likely(sess->fe->options2 & PR_O2_INDEPSTR))
+ s->si[1].flags |= SI_FL_INDEP_STR;
+
+ stream_init_srv_conn(s);
+ s->target = NULL;
+ s->pend_pos = NULL;
+
+ /* init store persistence */
+ s->store_count = 0;
+
+ channel_init(&s->req);
+ s->req.flags |= CF_READ_ATTACHED; /* the producer is already connected */
+ s->req.analysers = 0;
+ channel_auto_connect(&s->req); /* don't wait to establish connection */
+ channel_auto_close(&s->req); /* let the producer forward close requests */
+
+ s->req.rto = sess->fe->timeout.client;
+ s->req.wto = TICK_ETERNITY;
+ s->req.rex = TICK_ETERNITY;
+ s->req.wex = TICK_ETERNITY;
+ s->req.analyse_exp = TICK_ETERNITY;
+
+ channel_init(&s->res);
+ s->res.flags |= CF_ISRESP;
+ s->res.analysers = 0;
+
+ if (sess->fe->options2 & PR_O2_NODELAY) {
+ s->req.flags |= CF_NEVER_WAIT;
+ s->res.flags |= CF_NEVER_WAIT;
+ }
+
+ s->res.wto = sess->fe->timeout.client;
+ s->res.rto = TICK_ETERNITY;
+ s->res.rex = TICK_ETERNITY;
+ s->res.wex = TICK_ETERNITY;
+ s->res.analyse_exp = TICK_ETERNITY;
+
+ s->txn = NULL;
+
+ HLUA_INIT(&s->hlua);
+
+ /* finish initialization of the accepted file descriptor */
+ if (conn)
+ conn_data_want_recv(conn);
+ else if (appctx)
+ si_applet_want_get(&s->si[0]);
+
+ if (sess->fe->accept && sess->fe->accept(s) < 0)
+ goto out_fail_accept;
+
+ /* it is important not to call the wakeup function directly but to
+ * pass through task_wakeup(), because this one knows how to apply
+ * priorities to tasks.
+ */
+ task_wakeup(t, TASK_WOKEN_INIT);
+ return s;
+
+ /* Error unrolling */
+ out_fail_accept:
+ LIST_DEL(&s->list);
+ pool_free2(pool2_stream, s);
+ return NULL;
+}
+
+/*
+ * frees the context associated with a stream. It must have been removed first.
+ */
+static void stream_free(struct stream *s)
+{
+ struct session *sess = strm_sess(s);
+ struct proxy *fe = sess->fe;
+ struct bref *bref, *back;
+ struct connection *cli_conn = objt_conn(sess->origin);
+ int i;
+
+ if (s->pend_pos)
+ pendconn_free(s->pend_pos);
+
+ if (objt_server(s->target)) { /* there may be requests left pending in queue */
+ if (s->flags & SF_CURR_SESS) {
+ s->flags &= ~SF_CURR_SESS;
+ objt_server(s->target)->cur_sess--;
+ }
+ if (may_dequeue_tasks(objt_server(s->target), s->be))
+ process_srv_queue(objt_server(s->target));
+ }
+
+ if (unlikely(s->srv_conn)) {
+ /* the stream still has a reserved slot on a server, but
+ * it should normally be only the same as the one above,
+ * so this should not happen in fact.
+ */
+ sess_change_server(s, NULL);
+ }
+
+ if (s->req.pipe)
+ put_pipe(s->req.pipe);
+
+ if (s->res.pipe)
+ put_pipe(s->res.pipe);
+
+ /* We may still be present in the buffer wait queue */
+ if (!LIST_ISEMPTY(&s->buffer_wait)) {
+ LIST_DEL(&s->buffer_wait);
+ LIST_INIT(&s->buffer_wait);
+ }
+
+ b_drop(&s->req.buf);
+ b_drop(&s->res.buf);
+ if (!LIST_ISEMPTY(&buffer_wq))
+ stream_offer_buffers();
+
+ hlua_ctx_destroy(&s->hlua);
+ if (s->txn)
+ http_end_txn(s);
+
+ /* ensure the client-side transport layer is destroyed */
+ if (cli_conn)
+ conn_force_close(cli_conn);
+
+ for (i = 0; i < s->store_count; i++) {
+ if (!s->store[i].ts)
+ continue;
+ stksess_free(s->store[i].table, s->store[i].ts);
+ s->store[i].ts = NULL;
+ }
+
+ if (s->txn) {
+ pool_free2(pool2_hdr_idx, s->txn->hdr_idx.v);
+ pool_free2(pool2_http_txn, s->txn);
+ s->txn = NULL;
+ }
+
+ if (fe) {
+ pool_free2(fe->rsp_cap_pool, s->res_cap);
+ pool_free2(fe->req_cap_pool, s->req_cap);
+ }
+
+ /* Cleanup all variable contexts. */
+ vars_prune(&s->vars_txn, s);
+ vars_prune(&s->vars_reqres, s);
+
+ stream_store_counters(s);
+
+ list_for_each_entry_safe(bref, back, &s->back_refs, users) {
+ /* we have to unlink all watchers. We must not relink them if
+ * this stream was the last one in the list.
+ */
+ LIST_DEL(&bref->users);
+ LIST_INIT(&bref->users);
+ if (s->list.n != &streams)
+ LIST_ADDQ(&LIST_ELEM(s->list.n, struct stream *, list)->back_refs, &bref->users);
+ bref->ref = s->list.n;
+ }
+ LIST_DEL(&s->list);
+ si_release_endpoint(&s->si[1]);
+ si_release_endpoint(&s->si[0]);
+
+ /* FIXME: for now we have a 1:1 relation between stream and session so
+ * the stream must free the session.
+ */
+ pool_free2(pool2_stream, s);
+ session_free(sess);
+
+ /* We may want to free the maximum amount of pools if the proxy is stopping */
+ if (fe && unlikely(fe->state == PR_STSTOPPED)) {
+ pool_flush2(pool2_buffer);
+ pool_flush2(pool2_http_txn);
+ pool_flush2(pool2_hdr_idx);
+ pool_flush2(pool2_requri);
+ pool_flush2(pool2_capture);
+ pool_flush2(pool2_stream);
+ pool_flush2(pool2_session);
+ pool_flush2(pool2_connection);
+ pool_flush2(pool2_pendconn);
+ pool_flush2(fe->req_cap_pool);
+ pool_flush2(fe->rsp_cap_pool);
+ }
+}
+
+/* Allocates a receive buffer for channel <chn>, but only if it's guaranteed
+ * that it's not the last available buffer or it's the response buffer. Unless
+ * the buffer is the response buffer, an extra check is made so that we always
+ * keep <tune.buffers.reserved> buffers available after this allocation. To be
+ * called at the beginning of recv() callbacks to ensure that the required
+ * buffers are properly allocated. Returns 0 in case of failure, non-zero
+ * otherwise.
+ */
+int stream_alloc_recv_buffer(struct channel *chn)
+{
+ struct stream *s;
+ struct buffer *b;
+ int margin = 0;
+
+ if (!(chn->flags & CF_ISRESP))
+ margin = global.tune.reserved_bufs;
+
+ s = chn_strm(chn);
+
+ b = b_alloc_margin(&chn->buf, margin);
+ if (b)
+ return 1;
+
+ if (LIST_ISEMPTY(&s->buffer_wait))
+ LIST_ADDQ(&buffer_wq, &s->buffer_wait);
+ return 0;
+}
+
+/* Allocates a work buffer for stream <s>. It is meant to be called inside
+ * process_stream(). It will only allocate the side needed for the function
+ * to work fine, which is the response buffer so that an error message may be
+ * built and returned. Response buffers may be allocated from the reserve, this
+ * is critical to ensure that a response may always flow and will never block a
+ * server from releasing a connection. Returns 0 in case of failure, non-zero
+ * otherwise.
+ */
+int stream_alloc_work_buffer(struct stream *s)
+{
+ if (!LIST_ISEMPTY(&s->buffer_wait)) {
+ LIST_DEL(&s->buffer_wait);
+ LIST_INIT(&s->buffer_wait);
+ }
+
+ if (b_alloc_margin(&s->res.buf, 0))
+ return 1;
+
+ LIST_ADDQ(&buffer_wq, &s->buffer_wait);
+ return 0;
+}
+
+/* releases unused buffers after processing. Typically used at the end of the
+ * update() functions. It will try to wake up as many tasks as the number of
+ * buffers that it releases. In practice, most often streams are blocked on
+ * a single buffer, so it makes sense to try to wake two up when two buffers
+ * are released at once.
+ */
+void stream_release_buffers(struct stream *s)
+{
+ if (s->req.buf->size && buffer_empty(s->req.buf))
+ b_free(&s->req.buf);
+
+ if (s->res.buf->size && buffer_empty(s->res.buf))
+ b_free(&s->res.buf);
+
+ /* if we're certain to have at least 1 buffer available, and there is
+ * someone waiting, we can wake up a waiter and offer it a buffer.
+ */
+ if (!LIST_ISEMPTY(&buffer_wq))
+ stream_offer_buffers();
+}
+
+/* Runs across the list of pending streams waiting for a buffer and wakes one
+ * up if buffers are available. Will stop when the run queue reaches <rqlimit>.
+ * Should not be called directly, use stream_offer_buffers() instead.
+ */
+void __stream_offer_buffers(int rqlimit)
+{
+ struct stream *sess, *bak;
+
+ list_for_each_entry_safe(sess, bak, &buffer_wq, buffer_wait) {
+ if (rqlimit <= run_queue)
+ break;
+
+ if (sess->task->state & TASK_RUNNING)
+ continue;
+
+ LIST_DEL(&sess->buffer_wait);
+ LIST_INIT(&sess->buffer_wait);
+ task_wakeup(sess->task, TASK_WOKEN_RES);
+ }
+}
+
+/* Performs minimal initializations; returns 0 in case of error, 1 if OK. */
+int init_stream()
+{
+ LIST_INIT(&streams);
+ pool2_stream = create_pool("stream", sizeof(struct stream), MEM_F_SHARED);
+ return pool2_stream != NULL;
+}
+
+void stream_process_counters(struct stream *s)
+{
+ struct session *sess = s->sess;
+ unsigned long long bytes;
+ void *ptr1,*ptr2;
+ int i;
+
+ bytes = s->req.total - s->logs.bytes_in;
+ s->logs.bytes_in = s->req.total;
+ if (bytes) {
+ sess->fe->fe_counters.bytes_in += bytes;
+
+ s->be->be_counters.bytes_in += bytes;
+
+ if (objt_server(s->target))
+ objt_server(s->target)->counters.bytes_in += bytes;
+
+ if (sess->listener && sess->listener->counters)
+ sess->listener->counters->bytes_in += bytes;
+
+ for (i = 0; i < MAX_SESS_STKCTR; i++) {
+ struct stkctr *stkctr = &s->stkctr[i];
+
+ if (!stkctr_entry(stkctr)) {
+ stkctr = &sess->stkctr[i];
+ if (!stkctr_entry(stkctr))
+ continue;
+ }
+
+ ptr1 = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_BYTES_IN_CNT);
+ if (ptr1)
+ stktable_data_cast(ptr1, bytes_in_cnt) += bytes;
+
+ ptr2 = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_BYTES_IN_RATE);
+ if (ptr2)
+ update_freq_ctr_period(&stktable_data_cast(ptr2, bytes_in_rate),
+ stkctr->table->data_arg[STKTABLE_DT_BYTES_IN_RATE].u, bytes);
+
+ /* If data was modified, touch the entry to re-schedule a sync */
+ if (ptr1 || ptr2)
+ stktable_touch(stkctr->table, stkctr_entry(stkctr), 1);
+ }
+ }
+
+ bytes = s->res.total - s->logs.bytes_out;
+ s->logs.bytes_out = s->res.total;
+ if (bytes) {
+ sess->fe->fe_counters.bytes_out += bytes;
+
+ s->be->be_counters.bytes_out += bytes;
+
+ if (objt_server(s->target))
+ objt_server(s->target)->counters.bytes_out += bytes;
+
+ if (sess->listener && sess->listener->counters)
+ sess->listener->counters->bytes_out += bytes;
+
+ for (i = 0; i < MAX_SESS_STKCTR; i++) {
+ struct stkctr *stkctr = &s->stkctr[i];
+
+ if (!stkctr_entry(stkctr)) {
+ stkctr = &sess->stkctr[i];
+ if (!stkctr_entry(stkctr))
+ continue;
+ }
+
+ ptr1 = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_BYTES_OUT_CNT);
+ if (ptr1)
+ stktable_data_cast(ptr1, bytes_out_cnt) += bytes;
+
+ ptr2 = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_BYTES_OUT_RATE);
+ if (ptr2)
+ update_freq_ctr_period(&stktable_data_cast(ptr2, bytes_out_rate),
+ stkctr->table->data_arg[STKTABLE_DT_BYTES_OUT_RATE].u, bytes);
+
+ /* If data was modified, touch the entry to re-schedule a sync */
+ if (ptr1 || ptr2)
+ stktable_touch(stkctr->table, stkctr_entry(stkctr), 1);
+ }
+ }
+}
+
+/* This function is called with (si->state == SI_ST_CON) meaning that a
+ * connection was attempted and that the file descriptor is already allocated.
+ * We must check for establishment, error and abort. Possible output states
+ * are SI_ST_EST (established), SI_ST_CER (error), SI_ST_DIS (abort), and
+ * SI_ST_CON (no change). The function returns 0 if it switches to SI_ST_CER,
+ * otherwise 1. This only works with connection-based streams.
+ */
+static int sess_update_st_con_tcp(struct stream *s)
+{
+ struct stream_interface *si = &s->si[1];
+ struct channel *req = &s->req;
+ struct channel *rep = &s->res;
+ struct connection *srv_conn = __objt_conn(si->end);
+
+ /* If we got an error, or if nothing happened and the connection timed
+ * out, we must give up. The CER state handler will take care of retry
+ * attempts and error reports.
+ */
+ if (unlikely(si->flags & (SI_FL_EXP|SI_FL_ERR))) {
+ if (unlikely(req->flags & CF_WRITE_PARTIAL)) {
+ /* Some data were sent past the connection establishment,
+ * so we need to pretend we're established to log correctly
+ * and let later states handle the failure.
+ */
+ si->state = SI_ST_EST;
+ si->err_type = SI_ET_DATA_ERR;
+ rep->flags |= CF_READ_ERROR | CF_WRITE_ERROR;
+ return 1;
+ }
+ si->exp = TICK_ETERNITY;
+ si->state = SI_ST_CER;
+
+ conn_force_close(srv_conn);
+
+ if (si->err_type)
+ return 0;
+
+ if (si->flags & SI_FL_ERR)
+ si->err_type = SI_ET_CONN_ERR;
+ else
+ si->err_type = SI_ET_CONN_TO;
+ return 0;
+ }
+
+ /* OK, maybe we want to abort */
+ if (!(req->flags & CF_WRITE_PARTIAL) &&
+ unlikely((rep->flags & CF_SHUTW) ||
+ ((req->flags & CF_SHUTW_NOW) && /* FIXME: this should not prevent a connection from establishing */
+ ((!(req->flags & CF_WRITE_ACTIVITY) && channel_is_empty(req)) ||
+ s->be->options & PR_O_ABRT_CLOSE)))) {
+ /* give up */
+ si_shutw(si);
+ si->err_type |= SI_ET_CONN_ABRT;
+ if (s->srv_error)
+ s->srv_error(s, si);
+ return 1;
+ }
+
+ /* we need to wait a bit more if there was no activity either */
+ if (!(req->flags & CF_WRITE_ACTIVITY))
+ return 1;
+
+ /* OK, this means that a connection succeeded. The caller will be
+ * responsible for handling the transition from CON to EST.
+ */
+ si->state = SI_ST_EST;
+ si->err_type = SI_ET_NONE;
+ return 1;
+}
+
+/* This function is called with (si->state == SI_ST_CER) meaning that a
+ * previous connection attempt has failed and that the file descriptor
+ * has already been released. Possible causes include asynchronous error
+ * notification and time out. Possible output states are SI_ST_CLO when
+ * retries are exhausted, SI_ST_TAR when a delay is wanted before a new
+ * connection attempt, SI_ST_ASS when it's wise to retry on the same server,
+ * and SI_ST_REQ when an immediate redispatch is wanted. The buffers are
+ * marked as in error state. It returns 0.
+ */
+static int sess_update_st_cer(struct stream *s)
+{
+ struct stream_interface *si = &s->si[1];
+
+ /* we probably have to release last stream from the server */
+ if (objt_server(s->target)) {
+ health_adjust(objt_server(s->target), HANA_STATUS_L4_ERR);
+
+ if (s->flags & SF_CURR_SESS) {
+ s->flags &= ~SF_CURR_SESS;
+ objt_server(s->target)->cur_sess--;
+ }
+ }
+
+ /* ensure that we have enough retries left */
+ si->conn_retries--;
+ if (si->conn_retries < 0) {
+ if (!si->err_type) {
+ si->err_type = SI_ET_CONN_ERR;
+ }
+
+ if (objt_server(s->target))
+ objt_server(s->target)->counters.failed_conns++;
+ s->be->be_counters.failed_conns++;
+ sess_change_server(s, NULL);
+ if (may_dequeue_tasks(objt_server(s->target), s->be))
+ process_srv_queue(objt_server(s->target));
+
+ /* shutw is enough so stop a connecting socket */
+ si_shutw(si);
+ s->req.flags |= CF_WRITE_ERROR;
+ s->res.flags |= CF_READ_ERROR;
+
+ si->state = SI_ST_CLO;
+ if (s->srv_error)
+ s->srv_error(s, si);
+ return 0;
+ }
+
+ /* If the "redispatch" option is set on the backend, we are allowed to
+ * retry on another server. By default this redispatch occurs on the
+ * last retry, but if configured we allow redispatches to occur on
+ * configurable intervals, e.g. on every retry. In order to achieve this,
+ * we must mark the stream unassigned, and eventually clear the DIRECT
+ * bit to ignore any persistence cookie. We won't count a retry nor a
+ * redispatch yet, because this will depend on what server is selected.
+ * If the connection is not persistent, the balancing algorithm is not
+ * deterministic (round robin) and there is more than one active server,
+ * we accept to perform an immediate redispatch without waiting since
+ * we don't care about this particular server.
+ */
+ if (objt_server(s->target) &&
+ (s->be->options & PR_O_REDISP) && !(s->flags & SF_FORCE_PRST) &&
+ ((((s->be->redispatch_after > 0) &&
+ ((s->be->conn_retries - si->conn_retries) %
+ s->be->redispatch_after == 0)) ||
+ ((s->be->redispatch_after < 0) &&
+ ((s->be->conn_retries - si->conn_retries) %
+ (s->be->conn_retries + 1 + s->be->redispatch_after) == 0))) ||
+ (!(s->flags & SF_DIRECT) && s->be->srv_act > 1 &&
+ ((s->be->lbprm.algo & BE_LB_KIND) == BE_LB_KIND_RR)))) {
+ sess_change_server(s, NULL);
+ if (may_dequeue_tasks(objt_server(s->target), s->be))
+ process_srv_queue(objt_server(s->target));
+
+ s->flags &= ~(SF_DIRECT | SF_ASSIGNED | SF_ADDR_SET);
+ si->state = SI_ST_REQ;
+ } else {
+ if (objt_server(s->target))
+ objt_server(s->target)->counters.retries++;
+ s->be->be_counters.retries++;
+ si->state = SI_ST_ASS;
+ }
+
+ if (si->flags & SI_FL_ERR) {
+ /* The error was an asynchronous connection error, and we will
+ * likely have to retry connecting to the same server, most
+ * likely leading to the same result. To avoid this, we wait
+ * MIN(one second, connect timeout) before retrying.
+ */
+
+ int delay = 1000;
+
+ if (s->be->timeout.connect && s->be->timeout.connect < delay)
+ delay = s->be->timeout.connect;
+
+ if (!si->err_type)
+ si->err_type = SI_ET_CONN_ERR;
+
+ /* only wait when we're retrying on the same server */
+ if (si->state == SI_ST_ASS ||
+ (s->be->lbprm.algo & BE_LB_KIND) != BE_LB_KIND_RR ||
+ (s->be->srv_act <= 1)) {
+ si->state = SI_ST_TAR;
+ si->exp = tick_add(now_ms, MS_TO_TICKS(delay));
+ }
+ return 0;
+ }
+ return 0;
+}
+
+/*
+ * This function handles the transition between the SI_ST_CON state and the
+ * SI_ST_EST state. It must only be called after switching from SI_ST_CON (or
+ * SI_ST_INI) to SI_ST_EST, but only when a ->proto is defined.
+ */
+static void sess_establish(struct stream *s)
+{
+ struct stream_interface *si = &s->si[1];
+ struct channel *req = &s->req;
+ struct channel *rep = &s->res;
+
+ /* First, centralize the timers information */
+ s->logs.t_connect = tv_ms_elapsed(&s->logs.tv_accept, &now);
+ si->exp = TICK_ETERNITY;
+
+ if (objt_server(s->target))
+ health_adjust(objt_server(s->target), HANA_STATUS_L4_OK);
+
+ if (s->be->mode == PR_MODE_TCP) { /* let's allow immediate data connection in this case */
+ /* if the user wants to log as soon as possible, without counting
+ * bytes from the server, then this is the right moment. */
+ if (!LIST_ISEMPTY(&strm_fe(s)->logformat) && !(s->logs.logwait & LW_BYTES)) {
+ s->logs.t_close = s->logs.t_connect; /* to get a valid end date */
+ s->do_log(s);
+ }
+ }
+ else {
+ rep->flags |= CF_READ_DONTWAIT; /* a single read is enough to get response headers */
+ }
+
+ rep->analysers |= strm_fe(s)->fe_rsp_ana | s->be->be_rsp_ana;
+ rep->flags |= CF_READ_ATTACHED; /* producer is now attached */
+ if (req->flags & CF_WAKE_CONNECT) {
+ req->flags |= CF_WAKE_ONCE;
+ req->flags &= ~CF_WAKE_CONNECT;
+ }
+ if (objt_conn(si->end)) {
+ /* real connections have timeouts */
+ req->wto = s->be->timeout.server;
+ rep->rto = s->be->timeout.server;
+ }
+ req->wex = TICK_ETERNITY;
+}
+
+/* Update back stream interface status for input states SI_ST_ASS, SI_ST_QUE,
+ * SI_ST_TAR. Other input states are simply ignored.
+ * Possible output states are SI_ST_CLO, SI_ST_TAR, SI_ST_ASS, SI_ST_REQ, SI_ST_CON
+ * and SI_ST_EST. Flags must have previously been updated for timeouts and other
+ * conditions.
+ */
+static void sess_update_stream_int(struct stream *s)
+{
+ struct server *srv = objt_server(s->target);
+ struct stream_interface *si = &s->si[1];
+ struct channel *req = &s->req;
+
+ DPRINTF(stderr,"[%u] %s: sess=%p rq=%p, rp=%p, exp(r,w)=%u,%u rqf=%08x rpf=%08x rqh=%d rqt=%d rph=%d rpt=%d cs=%d ss=%d\n",
+ now_ms, __FUNCTION__,
+ s,
+ req, &s->res,
+ req->rex, s->res.wex,
+ req->flags, s->res.flags,
+ req->buf->i, req->buf->o, s->res.buf->i, s->res.buf->o, s->si[0].state, s->si[1].state);
+
+ if (si->state == SI_ST_ASS) {
+ /* Server assigned to connection request, we have to try to connect now */
+ int conn_err;
+
+ conn_err = connect_server(s);
+ srv = objt_server(s->target);
+
+ if (conn_err == SF_ERR_NONE) {
+ /* state = SI_ST_CON or SI_ST_EST now */
+ if (srv)
+ srv_inc_sess_ctr(srv);
+ if (srv)
+ srv_set_sess_last(srv);
+ return;
+ }
+
+ /* We have received a synchronous error. We might have to
+ * abort, retry immediately or redispatch.
+ */
+ if (conn_err == SF_ERR_INTERNAL) {
+ if (!si->err_type) {
+ si->err_type = SI_ET_CONN_OTHER;
+ }
+
+ if (srv)
+ srv_inc_sess_ctr(srv);
+ if (srv)
+ srv_set_sess_last(srv);
+ if (srv)
+ srv->counters.failed_conns++;
+ s->be->be_counters.failed_conns++;
+
+ /* release other streams waiting for this server */
+ sess_change_server(s, NULL);
+ if (may_dequeue_tasks(srv, s->be))
+ process_srv_queue(srv);
+
+ /* Failed and not retryable. */
+ si_shutr(si);
+ si_shutw(si);
+ req->flags |= CF_WRITE_ERROR;
+
+ s->logs.t_queue = tv_ms_elapsed(&s->logs.tv_accept, &now);
+
+ /* no stream was ever accounted for this server */
+ si->state = SI_ST_CLO;
+ if (s->srv_error)
+ s->srv_error(s, si);
+ return;
+ }
+
+ /* We are facing a retryable error, but we don't want to impose a
+ * turn-around delay, as the problem is likely a temporary source
+ * port allocation issue, so we retry immediately.
+ */
+ si->state = SI_ST_CER;
+ si->flags &= ~SI_FL_ERR;
+ sess_update_st_cer(s);
+ /* now si->state is one of SI_ST_CLO, SI_ST_TAR, SI_ST_ASS, SI_ST_REQ */
+ return;
+ }
+ else if (si->state == SI_ST_QUE) {
+ /* connection request was queued, check for any update */
+ if (!s->pend_pos) {
+ /* The connection is not in the queue anymore. Either
+ * we have a server connection slot available and we
+ * go directly to the assigned state, or we need to
+ * load-balance first and go to the INI state.
+ */
+ si->exp = TICK_ETERNITY;
+ if (unlikely(!(s->flags & SF_ASSIGNED)))
+ si->state = SI_ST_REQ;
+ else {
+ s->logs.t_queue = tv_ms_elapsed(&s->logs.tv_accept, &now);
+ si->state = SI_ST_ASS;
+ }
+ return;
+ }
+
+ /* Connection request still in queue... */
+ if (si->flags & SI_FL_EXP) {
+ /* ... and timeout expired */
+ si->exp = TICK_ETERNITY;
+ s->logs.t_queue = tv_ms_elapsed(&s->logs.tv_accept, &now);
+ if (srv)
+ srv->counters.failed_conns++;
+ s->be->be_counters.failed_conns++;
+ si_shutr(si);
+ si_shutw(si);
+ req->flags |= CF_WRITE_TIMEOUT;
+ if (!si->err_type)
+ si->err_type = SI_ET_QUEUE_TO;
+ si->state = SI_ST_CLO;
+ if (s->srv_error)
+ s->srv_error(s, si);
+ return;
+ }
+
+ /* Connection remains in queue, check if we have to abort it */
+ if ((req->flags & (CF_READ_ERROR)) ||
+ ((req->flags & CF_SHUTW_NOW) && /* empty and client aborted */
+ (channel_is_empty(req) || s->be->options & PR_O_ABRT_CLOSE))) {
+ /* give up */
+ si->exp = TICK_ETERNITY;
+ s->logs.t_queue = tv_ms_elapsed(&s->logs.tv_accept, &now);
+ si_shutr(si);
+ si_shutw(si);
+ si->err_type |= SI_ET_QUEUE_ABRT;
+ si->state = SI_ST_CLO;
+ if (s->srv_error)
+ s->srv_error(s, si);
+ return;
+ }
+
+ /* Nothing changed */
+ return;
+ }
+ else if (si->state == SI_ST_TAR) {
+ /* Connection request might be aborted */
+ if ((req->flags & (CF_READ_ERROR)) ||
+ ((req->flags & CF_SHUTW_NOW) && /* empty and client aborted */
+ (channel_is_empty(req) || s->be->options & PR_O_ABRT_CLOSE))) {
+ /* give up */
+ si->exp = TICK_ETERNITY;
+ si_shutr(si);
+ si_shutw(si);
+ si->err_type |= SI_ET_CONN_ABRT;
+ si->state = SI_ST_CLO;
+ if (s->srv_error)
+ s->srv_error(s, si);
+ return;
+ }
+
+ if (!(si->flags & SI_FL_EXP))
+ return; /* still in turn-around */
+
+ si->exp = TICK_ETERNITY;
+
+ /* we keep trying on the same server as long as the stream is
+ * marked "assigned".
+ * FIXME: Should we force a redispatch attempt when the server is down ?
+ */
+ if (s->flags & SF_ASSIGNED)
+ si->state = SI_ST_ASS;
+ else
+ si->state = SI_ST_REQ;
+ return;
+ }
+}
+
+/* Set correct stream termination flags in case no analyser has done it. It
+ * also counts a failed request if the server state has not reached the request
+ * stage.
+ */
+static void sess_set_term_flags(struct stream *s)
+{
+ if (!(s->flags & SF_FINST_MASK)) {
+ if (s->si[1].state < SI_ST_REQ) {
+
+ strm_fe(s)->fe_counters.failed_req++;
+ if (strm_li(s) && strm_li(s)->counters)
+ strm_li(s)->counters->failed_req++;
+
+ s->flags |= SF_FINST_R;
+ }
+ else if (s->si[1].state == SI_ST_QUE)
+ s->flags |= SF_FINST_Q;
+ else if (s->si[1].state < SI_ST_EST)
+ s->flags |= SF_FINST_C;
+ else if (s->si[1].state == SI_ST_EST || s->si[1].prev_state == SI_ST_EST)
+ s->flags |= SF_FINST_D;
+ else
+ s->flags |= SF_FINST_L;
+ }
+}
+
+/* This function initiates a server connection request on a stream interface
+ * already in SI_ST_REQ state. Upon success, the state goes to SI_ST_ASS for
+ * a real connection to a server, indicating that a server has been assigned,
+ * or SI_ST_EST for a successful connection to an applet. It may also return
+ * SI_ST_QUE, or SI_ST_CLO upon error.
+ */
+static void sess_prepare_conn_req(struct stream *s)
+{
+ struct stream_interface *si = &s->si[1];
+
+ DPRINTF(stderr,"[%u] %s: sess=%p rq=%p, rp=%p, exp(r,w)=%u,%u rqf=%08x rpf=%08x rqh=%d rqt=%d rph=%d rpt=%d cs=%d ss=%d\n",
+ now_ms, __FUNCTION__,
+ s,
+ &s->req, &s->res,
+ s->req.rex, s->res.wex,
+ s->req.flags, s->res.flags,
+ s->req.buf->i, s->req.buf->o, s->res.buf->i, s->res.buf->o, s->si[0].state, s->si[1].state);
+
+ if (si->state != SI_ST_REQ)
+ return;
+
+ if (unlikely(obj_type(s->target) == OBJ_TYPE_APPLET)) {
+ /* the applet directly goes to the EST state */
+ struct appctx *appctx = objt_appctx(si->end);
+
+ if (!appctx || appctx->applet != __objt_applet(s->target))
+ appctx = stream_int_register_handler(si, objt_applet(s->target));
+
+ if (!appctx) {
+ /* No more memory, let's immediately abort. Force the
+ * error code to ignore the ERR_LOCAL which is not a
+ * real error.
+ */
+ s->flags &= ~(SF_ERR_MASK | SF_FINST_MASK);
+
+ si_shutr(si);
+ si_shutw(si);
+ s->req.flags |= CF_WRITE_ERROR;
+ si->err_type = SI_ET_CONN_RES;
+ si->state = SI_ST_CLO;
+ if (s->srv_error)
+ s->srv_error(s, si);
+ return;
+ }
+
+ s->logs.t_queue = tv_ms_elapsed(&s->logs.tv_accept, &now);
+ si->state = SI_ST_EST;
+ si->err_type = SI_ET_NONE;
+ be_set_sess_last(s->be);
+ /* let sess_establish() finish the job */
+ return;
+ }
+
+ /* Try to assign a server */
+ if (srv_redispatch_connect(s) != 0) {
+ /* We did not get a server. Either we queued the
+ * connection request, or we encountered an error.
+ */
+ if (si->state == SI_ST_QUE)
+ return;
+
+ /* we did not get any server, let's check the cause */
+ si_shutr(si);
+ si_shutw(si);
+ s->req.flags |= CF_WRITE_ERROR;
+ if (!si->err_type)
+ si->err_type = SI_ET_CONN_OTHER;
+ si->state = SI_ST_CLO;
+ if (s->srv_error)
+ s->srv_error(s, si);
+ return;
+ }
+
+ /* The server is assigned */
+ s->logs.t_queue = tv_ms_elapsed(&s->logs.tv_accept, &now);
+ si->state = SI_ST_ASS;
+ be_set_sess_last(s->be);
+}
+
+/* This function processes the use-service action ruleset. It executes
+ * the associated ACL and sets an applet as the stream's or txn's final node.
+ * It returns ACT_RET_ERR if an error occurs, leaving the proxy in a
+ * consistent state. It returns ACT_RET_STOP on success because
+ * use-service must be a terminal action, and ACT_RET_YIELD
+ * if the initialisation function requires more data.
+ */
+enum act_return process_use_service(struct act_rule *rule, struct proxy *px,
+ struct session *sess, struct stream *s, int flags)
+
+{
+ struct appctx *appctx;
+
+ /* Initialises the applet if it is required. */
+ if (flags & ACT_FLAG_FIRST) {
+ /* Register the applet; this function schedules it. */
+ s->target = &rule->applet.obj_type;
+ if (unlikely(!stream_int_register_handler(&s->si[1], objt_applet(s->target))))
+ return ACT_RET_ERR;
+
+ /* Initialise the context. */
+ appctx = si_appctx(&s->si[1]);
+ memset(&appctx->ctx, 0, sizeof(appctx->ctx));
+ appctx->rule = rule;
+ }
+ else
+ appctx = si_appctx(&s->si[1]);
+
+ /* Stop the applet scheduling in case the init function is
+ * missing some data.
+ */
+ appctx_pause(appctx);
+ si_applet_stop_get(&s->si[1]);
+
+ /* Call initialisation. */
+ if (rule->applet.init)
+ switch (rule->applet.init(appctx, px, s)) {
+ case 0: return ACT_RET_ERR;
+ case 1: break;
+ default: return ACT_RET_YIELD;
+ }
+
+ /* Now we can schedule the applet. */
+ si_applet_cant_get(&s->si[1]);
+ appctx_wakeup(appctx);
+
+ if (sess->fe == s->be) /* report it if the request was intercepted by the frontend */
+ sess->fe->fe_counters.intercepted_req++;
+
+ /* The SF_ASSIGNED flag prevents any further server assignment. */
+ s->flags |= SF_ASSIGNED;
+
+ return ACT_RET_STOP;
+}
+
+/* This stream analyser checks the switching rules and changes the backend
+ * if appropriate. The default_backend rule is also considered, then the
+ * target backend's forced persistence rules are also evaluated last if any.
+ * It returns 1 if the processing can continue on next analysers, or zero if it
+ * either needs more data or wants to immediately abort the request.
+ */
+static int process_switching_rules(struct stream *s, struct channel *req, int an_bit)
+{
+ struct persist_rule *prst_rule;
+ struct session *sess = s->sess;
+ struct proxy *fe = sess->fe;
+
+ req->analysers &= ~an_bit;
+ req->analyse_exp = TICK_ETERNITY;
+
+ DPRINTF(stderr,"[%u] %s: stream=%p b=%p, exp(r,w)=%u,%u bf=%08x bh=%d analysers=%02x\n",
+ now_ms, __FUNCTION__,
+ s,
+ req,
+ req->rex, req->wex,
+ req->flags,
+ req->buf->i,
+ req->analysers);
+
+ /* now check whether we have some switching rules for this request */
+ if (!(s->flags & SF_BE_ASSIGNED)) {
+ struct switching_rule *rule;
+
+ list_for_each_entry(rule, &fe->switching_rules, list) {
+ int ret = 1;
+
+ if (rule->cond) {
+ ret = acl_exec_cond(rule->cond, fe, sess, s, SMP_OPT_DIR_REQ|SMP_OPT_FINAL);
+ ret = acl_pass(ret);
+ if (rule->cond->pol == ACL_COND_UNLESS)
+ ret = !ret;
+ }
+
+ if (ret) {
+ /* If the backend name is dynamic, try to resolve the name.
+ * If we can't resolve the name, or if any error occurs, break
+ * the loop and fall back to the default backend.
+ */
+ struct proxy *backend;
+
+ if (rule->dynamic) {
+ struct chunk *tmp = get_trash_chunk();
+ if (!build_logline(s, tmp->str, tmp->size, &rule->be.expr))
+ break;
+ backend = proxy_be_by_name(tmp->str);
+ if (!backend)
+ break;
+ }
+ else
+ backend = rule->be.backend;
+
+ if (!stream_set_backend(s, backend))
+ goto sw_failed;
+ break;
+ }
+ }
+
+ /* To ensure correct connection accounting on the backend, we
+ * have to assign one if it was not set (eg: a listen). This
+ * measure also takes care of correctly setting the default
+ * backend if any.
+ */
+ if (!(s->flags & SF_BE_ASSIGNED))
+ if (!stream_set_backend(s, fe->defbe.be ? fe->defbe.be : s->be))
+ goto sw_failed;
+ }
+
+ /* we don't want to run the TCP or HTTP filters again if the backend has not changed */
+ if (fe == s->be) {
+ s->req.analysers &= ~AN_REQ_INSPECT_BE;
+ s->req.analysers &= ~AN_REQ_HTTP_PROCESS_BE;
+ }
+
+ /* as soon as we know the backend, we must check if we have a matching forced or ignored
+ * persistence rule, and report that in the stream.
+ */
+ list_for_each_entry(prst_rule, &s->be->persist_rules, list) {
+ int ret = 1;
+
+ if (prst_rule->cond) {
+ ret = acl_exec_cond(prst_rule->cond, s->be, sess, s, SMP_OPT_DIR_REQ|SMP_OPT_FINAL);
+ ret = acl_pass(ret);
+ if (prst_rule->cond->pol == ACL_COND_UNLESS)
+ ret = !ret;
+ }
+
+ if (ret) {
+ /* no rule, or the rule matches */
+ if (prst_rule->type == PERSIST_TYPE_FORCE) {
+ s->flags |= SF_FORCE_PRST;
+ } else {
+ s->flags |= SF_IGNORE_PRST;
+ }
+ break;
+ }
+ }
+
+ return 1;
+
+ sw_failed:
+ /* immediately abort this request in case of allocation failure */
+ channel_abort(&s->req);
+ channel_abort(&s->res);
+
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_RESOURCE;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_R;
+
+ if (s->txn)
+ s->txn->status = 500;
+ s->req.analysers = 0;
+ s->req.analyse_exp = TICK_ETERNITY;
+ return 0;
+}
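+
+/* Sketch of a configuration exercised by the analyser above (backend names
+ * are illustrative):
+ *
+ *     frontend fe1
+ *         use_backend be_static if { path_beg /static }
+ *         use_backend %[req.hdr(host),lower] if { req.hdr(host) -m found }
+ *         default_backend be_app
+ *
+ * The first rule is static (rule->be.backend was resolved at config parse
+ * time), the second is dynamic (rule->dynamic is set): its log-format
+ * expression is evaluated by build_logline() and the result looked up with
+ * proxy_be_by_name(); on failure we fall back to the default backend.
+ */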
+
+/* This stream analyser works on a request. It applies all use-server rules to
+ * it. The data must already be present in the buffer, otherwise they won't
+ * match. It always returns 1.
+ */
+static int process_server_rules(struct stream *s, struct channel *req, int an_bit)
+{
+ struct proxy *px = s->be;
+ struct session *sess = s->sess;
+ struct server_rule *rule;
+
+ DPRINTF(stderr,"[%u] %s: stream=%p b=%p, exp(r,w)=%u,%u bf=%08x bl=%d analysers=%02x\n",
+ now_ms, __FUNCTION__,
+ s,
+ req,
+ req->rex, req->wex,
+ req->flags,
+ req->buf->i + req->buf->o,
+ req->analysers);
+
+ if (!(s->flags & SF_ASSIGNED)) {
+ list_for_each_entry(rule, &px->server_rules, list) {
+ int ret;
+
+ ret = acl_exec_cond(rule->cond, s->be, sess, s, SMP_OPT_DIR_REQ|SMP_OPT_FINAL);
+ ret = acl_pass(ret);
+ if (rule->cond->pol == ACL_COND_UNLESS)
+ ret = !ret;
+
+ if (ret) {
+ struct server *srv = rule->srv.ptr;
+
+ if ((srv->state != SRV_ST_STOPPED) ||
+ (px->options & PR_O_PERSIST) ||
+ (s->flags & SF_FORCE_PRST)) {
+ s->flags |= SF_DIRECT | SF_ASSIGNED;
+ s->target = &srv->obj_type;
+ break;
+ }
+ /* if the server is not UP, let's go on with next rules
+ * just in case another one is suited.
+ */
+ }
+ }
+ }
+
+ req->analysers &= ~an_bit;
+ req->analyse_exp = TICK_ETERNITY;
+ return 1;
+}
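+
+/* Sketch of a matching configuration (server names are illustrative):
+ *
+ *     backend be_app
+ *         use-server srv1 if { req_ssl_sni -i app1.example.com }
+ *         use-server srv2 unless { req_ssl_sni -m found }
+ *         server srv1 192.0.2.1:443
+ *         server srv2 192.0.2.2:443
+ *
+ * Rules are evaluated in declaration order; the first one whose condition
+ * matches pins the stream to its server, unless that server is stopped and
+ * neither "option persist" nor a force-persist rule applies.
+ */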
+
+/* This stream analyser works on a request. It applies all sticking rules to
+ * it. The data must already be present in the buffer, otherwise they won't
+ * match. It always returns 1.
+ */
+static int process_sticking_rules(struct stream *s, struct channel *req, int an_bit)
+{
+ struct proxy *px = s->be;
+ struct session *sess = s->sess;
+ struct sticking_rule *rule;
+
+ DPRINTF(stderr,"[%u] %s: stream=%p b=%p, exp(r,w)=%u,%u bf=%08x bh=%d analysers=%02x\n",
+ now_ms, __FUNCTION__,
+ s,
+ req,
+ req->rex, req->wex,
+ req->flags,
+ req->buf->i,
+ req->analysers);
+
+ list_for_each_entry(rule, &px->sticking_rules, list) {
+ int ret = 1;
+ int i;
+
+ /* Only the first stick store-request of each table is applied
+ * and other ones are ignored. The purpose is to allow complex
+ * configurations which look for multiple entries by decreasing
+ * order of precision and to stop at the first which matches.
+ * An example could be a store of the IP address from an HTTP
+ * header first, then from the source if not found.
+ */
+ for (i = 0; i < s->store_count; i++) {
+ if (rule->table.t == s->store[i].table)
+ break;
+ }
+
+ if (i != s->store_count)
+ continue;
+
+ if (rule->cond) {
+ ret = acl_exec_cond(rule->cond, px, sess, s, SMP_OPT_DIR_REQ|SMP_OPT_FINAL);
+ ret = acl_pass(ret);
+ if (rule->cond->pol == ACL_COND_UNLESS)
+ ret = !ret;
+ }
+
+ if (ret) {
+ struct stktable_key *key;
+
+ key = stktable_fetch_key(rule->table.t, px, sess, s, SMP_OPT_DIR_REQ|SMP_OPT_FINAL, rule->expr, NULL);
+ if (!key)
+ continue;
+
+ if (rule->flags & STK_IS_MATCH) {
+ struct stksess *ts;
+
+ if ((ts = stktable_lookup_key(rule->table.t, key)) != NULL) {
+ if (!(s->flags & SF_ASSIGNED)) {
+ struct eb32_node *node;
+ void *ptr;
+
+ /* srv found in table */
+ ptr = stktable_data_ptr(rule->table.t, ts, STKTABLE_DT_SERVER_ID);
+ node = eb32_lookup(&px->conf.used_server_id, stktable_data_cast(ptr, server_id));
+ if (node) {
+ struct server *srv;
+
+ srv = container_of(node, struct server, conf.id);
+ if ((srv->state != SRV_ST_STOPPED) ||
+ (px->options & PR_O_PERSIST) ||
+ (s->flags & SF_FORCE_PRST)) {
+ s->flags |= SF_DIRECT | SF_ASSIGNED;
+ s->target = &srv->obj_type;
+ }
+ }
+ }
+ stktable_touch(rule->table.t, ts, 1);
+ }
+ }
+ if (rule->flags & STK_IS_STORE) {
+ if (s->store_count < (sizeof(s->store) / sizeof(s->store[0]))) {
+ struct stksess *ts;
+
+ ts = stksess_new(rule->table.t, key);
+ if (ts) {
+ s->store[s->store_count].table = rule->table.t;
+ s->store[s->store_count++].ts = ts;
+ }
+ }
+ }
+ }
+ }
+
+ req->analysers &= ~an_bit;
+ req->analyse_exp = TICK_ETERNITY;
+ return 1;
+}
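+
+/* Sketch of a matching configuration (table and header names illustrative):
+ *
+ *     backend be_app
+ *         stick-table type ip size 200k expire 30m
+ *         stick store-request req.hdr_ip(x-forwarded-for)
+ *         stick store-request src
+ *
+ * As the loop above explains, only the first store-request entry per table
+ * is kept: if the header fetch yields a key, the source-address rule is
+ * skipped; if it yields none, the source address is stored instead.
+ */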
+
+/* This stream analyser works on a response. It applies all store rules to
+ * it. The data must already be present in the buffer, otherwise they won't
+ * match. It always returns 1.
+ */
+static int process_store_rules(struct stream *s, struct channel *rep, int an_bit)
+{
+ struct proxy *px = s->be;
+ struct session *sess = s->sess;
+ struct sticking_rule *rule;
+ int i;
+ int nbreq = s->store_count;
+
+ DPRINTF(stderr,"[%u] %s: stream=%p b=%p, exp(r,w)=%u,%u bf=%08x bh=%d analysers=%02x\n",
+ now_ms, __FUNCTION__,
+ s,
+ rep,
+ rep->rex, rep->wex,
+ rep->flags,
+ rep->buf->i,
+ rep->analysers);
+
+ list_for_each_entry(rule, &px->storersp_rules, list) {
+ int ret = 1;
+
+ /* Only the first stick store-response of each table is applied
+ * and other ones are ignored. The purpose is to allow complex
+ * configurations which look for multiple entries by decreasing
+ * order of precision and to stop at the first which matches.
+ * An example could be a store of a set-cookie value, with a
+ * fallback to a parameter found in a 302 redirect.
+ *
+ * The store-response rules are not allowed to override the
+ * store-request rules for the same table, but they may coexist.
+ * Thus we can have up to one store-request entry and one store-
+ * response entry for the same table at any time.
+ */
+ for (i = nbreq; i < s->store_count; i++) {
+ if (rule->table.t == s->store[i].table)
+ break;
+ }
+
+ /* skip existing entries for this table */
+ if (i < s->store_count)
+ continue;
+
+ if (rule->cond) {
+ ret = acl_exec_cond(rule->cond, px, sess, s, SMP_OPT_DIR_RES|SMP_OPT_FINAL);
+ ret = acl_pass(ret);
+ if (rule->cond->pol == ACL_COND_UNLESS)
+ ret = !ret;
+ }
+
+ if (ret) {
+ struct stktable_key *key;
+
+ key = stktable_fetch_key(rule->table.t, px, sess, s, SMP_OPT_DIR_RES|SMP_OPT_FINAL, rule->expr, NULL);
+ if (!key)
+ continue;
+
+ if (s->store_count < (sizeof(s->store) / sizeof(s->store[0]))) {
+ struct stksess *ts;
+
+ ts = stksess_new(rule->table.t, key);
+ if (ts) {
+ s->store[s->store_count].table = rule->table.t;
+ s->store[s->store_count++].ts = ts;
+ }
+ }
+ }
+ }
+
+ /* process store request and store response */
+ for (i = 0; i < s->store_count; i++) {
+ struct stksess *ts;
+ void *ptr;
+
+ if (objt_server(s->target) && objt_server(s->target)->flags & SRV_F_NON_STICK) {
+ stksess_free(s->store[i].table, s->store[i].ts);
+ s->store[i].ts = NULL;
+ continue;
+ }
+
+ ts = stktable_lookup(s->store[i].table, s->store[i].ts);
+ if (ts) {
+ /* the entry already existed, we can free ours */
+ stktable_touch(s->store[i].table, ts, 1);
+ stksess_free(s->store[i].table, s->store[i].ts);
+ }
+ else
+ ts = stktable_store(s->store[i].table, s->store[i].ts, 1);
+
+ s->store[i].ts = NULL;
+ ptr = stktable_data_ptr(s->store[i].table, ts, STKTABLE_DT_SERVER_ID);
+ stktable_data_cast(ptr, server_id) = objt_server(s->target)->puid;
+ }
+ s->store_count = 0; /* everything is stored */
+
+ rep->analysers &= ~an_bit;
+ rep->analyse_exp = TICK_ETERNITY;
+ return 1;
+}
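+
+/* Sketch of a matching configuration (cookie name illustrative):
+ *
+ *     backend be_app
+ *         stick-table type string len 32 size 100k expire 30m
+ *         stick store-response res.cook(JSESSIONID)
+ *         stick match req.cook(JSESSIONID)
+ *
+ * The server's Set-Cookie value is stored together with the server id on
+ * the response path, and later requests presenting the same cookie are
+ * matched back to that server on the request path.
+ */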
+
+/* This macro is very specific to the function below. See the comments in
+ * process_stream() below to understand the logic and the tests.
+ */
+#define UPDATE_ANALYSERS(real, list, back, flag) { \
+ list = (((list) & ~(flag)) | ~(back)) & (real); \
+ back = real; \
+ if (!(list)) \
+ break; \
+ if (((list) ^ ((list) & ((list) - 1))) < (flag)) \
+ continue; \
+}
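+
+/* Worked example of the last test above: ((list) ^ ((list) & ((list) - 1)))
+ * isolates the lowest set bit of <list>. E.g. with list = 0x06:
+ *     list - 1          = 0x05
+ *     list & (list - 1) = 0x04   (lowest set bit cleared)
+ *     list ^ 0x04       = 0x02   (lowest set bit alone)
+ * If this bit is lower than <flag>, an analyser located before the current
+ * one was (re-)enabled during the call, so we must restart the loop.
+ */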
+
+/* Processes the client, server, request and response jobs of a stream task,
+ * then puts it back to the wait queue in a clean state, or cleans up its
+ * resources if it must be deleted. Returns in <next> the date the task wants
+ * to be woken up, or TICK_ETERNITY. In order not to call all functions for
+ * nothing too many times, the request and response buffers flags are monitored
+ * and each function is called only if at least another function has changed at
+ * least one flag it is interested in.
+ */
+struct task *process_stream(struct task *t)
+{
+ struct server *srv;
+ struct stream *s = t->context;
+ struct session *sess = s->sess;
+ unsigned int rqf_last, rpf_last;
+ unsigned int rq_prod_last, rq_cons_last;
+ unsigned int rp_cons_last, rp_prod_last;
+ unsigned int req_ana_back;
+ struct channel *req, *res;
+ struct stream_interface *si_f, *si_b;
+
+ req = &s->req;
+ res = &s->res;
+
+ si_f = &s->si[0];
+ si_b = &s->si[1];
+
+ //DPRINTF(stderr, "%s:%d: cs=%d ss=%d(%d) rqf=0x%08x rpf=0x%08x\n", __FUNCTION__, __LINE__,
+ // si_f->state, si_b->state, si_b->err_type, req->flags, res->flags);
+
+ /* this data may no longer be valid, clear it */
+ if (s->txn)
+ memset(&s->txn->auth, 0, sizeof(s->txn->auth));
+
+ /* This flag must explicitly be set every time */
+ req->flags &= ~(CF_READ_NOEXP|CF_WAKE_WRITE);
+ res->flags &= ~(CF_READ_NOEXP|CF_WAKE_WRITE);
+
+ /* Keep a copy of req/rep flags so that we can detect shutdowns */
+ rqf_last = req->flags & ~CF_MASK_ANALYSER;
+ rpf_last = res->flags & ~CF_MASK_ANALYSER;
+
+ /* we don't want the stream interface functions to recursively wake us up */
+ si_f->flags |= SI_FL_DONT_WAKE;
+ si_b->flags |= SI_FL_DONT_WAKE;
+
+ /* 1a: Check for low level timeouts if needed. We just set a flag on
+ * stream interfaces when their timeouts have expired.
+ */
+ if (unlikely(t->state & TASK_WOKEN_TIMER)) {
+ stream_int_check_timeouts(si_f);
+ stream_int_check_timeouts(si_b);
+
+ /* check channel timeouts, and close the corresponding stream interfaces
+ * for future reads or writes. Note: this will also concern upper layers
+ * but we do not touch any other flag. We must be careful and correctly
+ * detect state changes when calling them.
+ */
+
+ channel_check_timeouts(req);
+
+ if (unlikely((req->flags & (CF_SHUTW|CF_WRITE_TIMEOUT)) == CF_WRITE_TIMEOUT)) {
+ si_b->flags |= SI_FL_NOLINGER;
+ si_shutw(si_b);
+ }
+
+ if (unlikely((req->flags & (CF_SHUTR|CF_READ_TIMEOUT)) == CF_READ_TIMEOUT)) {
+ if (si_f->flags & SI_FL_NOHALF)
+ si_f->flags |= SI_FL_NOLINGER;
+ si_shutr(si_f);
+ }
+
+ channel_check_timeouts(res);
+
+ if (unlikely((res->flags & (CF_SHUTW|CF_WRITE_TIMEOUT)) == CF_WRITE_TIMEOUT)) {
+ si_f->flags |= SI_FL_NOLINGER;
+ si_shutw(si_f);
+ }
+
+ if (unlikely((res->flags & (CF_SHUTR|CF_READ_TIMEOUT)) == CF_READ_TIMEOUT)) {
+ if (si_b->flags & SI_FL_NOHALF)
+ si_b->flags |= SI_FL_NOLINGER;
+ si_shutr(si_b);
+ }
+
+ /* Once in a while we're woken up because the task expires. But
+ * this does not necessarily mean that a timeout has been reached.
+ * So let's not run a whole stream processing if only an expiration
+ * timeout needs to be refreshed.
+ */
+ if (!((req->flags | res->flags) &
+ (CF_SHUTR|CF_READ_ACTIVITY|CF_READ_TIMEOUT|CF_SHUTW|
+ CF_WRITE_ACTIVITY|CF_WRITE_TIMEOUT|CF_ANA_TIMEOUT)) &&
+ !((si_f->flags | si_b->flags) & (SI_FL_EXP|SI_FL_ERR)) &&
+ ((t->state & TASK_WOKEN_ANY) == TASK_WOKEN_TIMER))
+ goto update_exp_and_leave;
+ }
+
+ /* below we may emit error messages so we have to ensure that we have
+ * our buffers properly allocated.
+ */
+ if (!stream_alloc_work_buffer(s)) {
+ /* No buffer available, we've been subscribed to the list of
+ * buffer waiters, let's wait for our turn.
+ */
+ goto update_exp_and_leave;
+ }
+
+ /* 1b: check for low-level errors reported at the stream interface.
+ * First we check if it's a retryable error (in which case we don't
+ * want to tell the buffer). Otherwise we report the error one level
+ * upper by setting flags into the buffers. Note that the side towards
+ * the client cannot have connect (hence retryable) errors. Also, the
+ * connection setup code must be able to deal with any type of abort.
+ */
+ srv = objt_server(s->target);
+ if (unlikely(si_f->flags & SI_FL_ERR)) {
+ if (si_f->state == SI_ST_EST || si_f->state == SI_ST_DIS) {
+ si_shutr(si_f);
+ si_shutw(si_f);
+ stream_int_report_error(si_f);
+ if (!(req->analysers) && !(res->analysers)) {
+ s->be->be_counters.cli_aborts++;
+ sess->fe->fe_counters.cli_aborts++;
+ if (srv)
+ srv->counters.cli_aborts++;
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_CLICL;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_D;
+ }
+ }
+ }
+
+ if (unlikely(si_b->flags & SI_FL_ERR)) {
+ if (si_b->state == SI_ST_EST || si_b->state == SI_ST_DIS) {
+ si_shutr(si_b);
+ si_shutw(si_b);
+ stream_int_report_error(si_b);
+ s->be->be_counters.failed_resp++;
+ if (srv)
+ srv->counters.failed_resp++;
+ if (!(req->analysers) && !(res->analysers)) {
+ s->be->be_counters.srv_aborts++;
+ sess->fe->fe_counters.srv_aborts++;
+ if (srv)
+ srv->counters.srv_aborts++;
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= SF_ERR_SRVCL;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= SF_FINST_D;
+ }
+ }
+ /* note: maybe we should process connection errors here? */
+ }
+
+ if (si_b->state == SI_ST_CON) {
+ /* we were trying to establish a connection on the server side,
+ * maybe it succeeded, maybe it failed, maybe we timed out, ...
+ */
+ if (unlikely(!sess_update_st_con_tcp(s)))
+ sess_update_st_cer(s);
+ else if (si_b->state == SI_ST_EST)
+ sess_establish(s);
+
+ /* state is now one of SI_ST_CON (still in progress), SI_ST_EST
+ * (established), SI_ST_DIS (abort), SI_ST_CLO (last error),
+ * SI_ST_ASS/SI_ST_TAR/SI_ST_REQ for retryable errors.
+ */
+ }
+
+ rq_prod_last = si_f->state;
+ rq_cons_last = si_b->state;
+ rp_cons_last = si_f->state;
+ rp_prod_last = si_b->state;
+
+ resync_stream_interface:
+ /* Check for connection closure */
+
+ DPRINTF(stderr,
+ "[%u] %s:%d: task=%p s=%p, sfl=0x%08x, rq=%p, rp=%p, exp(r,w)=%u,%u rqf=%08x rpf=%08x rqh=%d rqt=%d rph=%d rpt=%d cs=%d ss=%d, cet=0x%x set=0x%x retr=%d\n",
+ now_ms, __FUNCTION__, __LINE__,
+ t,
+ s, s->flags,
+ req, res,
+ req->rex, res->wex,
+ req->flags, res->flags,
+ req->buf->i, req->buf->o, res->buf->i, res->buf->o, si_f->state, si_b->state,
+ si_f->err_type, si_b->err_type,
+ si_b->conn_retries);
+
+ /* nothing special to be done on client side */
+ if (unlikely(si_f->state == SI_ST_DIS))
+ si_f->state = SI_ST_CLO;
+
+ /* When a server-side connection is released, we have to count it and
+ * check for pending connections on this server.
+ */
+ if (unlikely(si_b->state == SI_ST_DIS)) {
+ si_b->state = SI_ST_CLO;
+ srv = objt_server(s->target);
+ if (srv) {
+ if (s->flags & SF_CURR_SESS) {
+ s->flags &= ~SF_CURR_SESS;
+ srv->cur_sess--;
+ }
+ sess_change_server(s, NULL);
+ if (may_dequeue_tasks(srv, s->be))
+ process_srv_queue(srv);
+ }
+ }
+
+ /*
+ * Note: of the transient states (REQ, CER, DIS), only REQ may remain
+ * at this point.
+ */
+
+ resync_request:
+ /* Analyse request */
+ if (((req->flags & ~rqf_last) & CF_MASK_ANALYSER) ||
+ ((req->flags ^ rqf_last) & CF_MASK_STATIC) ||
+ si_f->state != rq_prod_last ||
+ si_b->state != rq_cons_last ||
+ s->task->state & TASK_WOKEN_MSG) {
+ unsigned int flags = req->flags;
+
+ if (si_f->state >= SI_ST_EST) {
+ int max_loops = global.tune.maxpollevents;
+ unsigned int ana_list;
+ unsigned int ana_back;
+
+ /* it's up to the analysers to stop new connections,
+ * disable reading or closing. Note: if an analyser
+ * disables any of these bits, it is responsible for
+ * enabling them again when it disables itself, so
+ * that other analysers are called in similar conditions.
+ */
+ channel_auto_read(req);
+ channel_auto_connect(req);
+ channel_auto_close(req);
+
+ /* We will call all analysers for which a bit is set in
+ * req->analysers, following the bit order from LSB
+ * to MSB. The analysers must remove themselves from
+ * the list when not needed. Any analyser may return 0
+ * to break out of the loop, either because of missing
+ * data to take a decision, or because it decides to
+ * kill the stream. We loop at least once through each
+ * analyser, and we may loop again if other analysers
+ * are added in the middle.
+ *
+ * We build a list of analysers to run. We evaluate all
+ * of these analysers in the order of the lower bit to
+ * the higher bit. This ordering is very important.
+ * An analyser will often add/remove other analysers,
+ * including itself. Any changes to itself have no effect
+ * on the loop. If it removes any other analysers, we
+ * want those analysers not to be called anymore during
+ * this loop. If it adds an analyser that is located
+ * after itself, we want it to be scheduled for being
+ * processed during the loop. If it adds an analyser
+ * which is located before it, we want it to switch to
+ * it immediately, even if it has already been called
+ * once but removed since.
+ *
+ * In order to achieve this, we compare the analyser
+ * list after the call with a copy of it before the
+ * call. The work list is fed with analyser bits that
+ * appeared during the call. Then we compare previous
+ * work list with the new one, and check the bits that
+ * appeared. If the lowest of these bits is lower than
+ * the current bit, it means we have enabled a previous
+ * analyser and must immediately loop again.
+ */
+
+ ana_list = ana_back = req->analysers;
+ while (ana_list && max_loops--) {
+ /* Warning! ensure that analysers are always placed in ascending order! */
+
+ if (ana_list & AN_REQ_INSPECT_FE) {
+ if (!tcp_inspect_request(s, req, AN_REQ_INSPECT_FE))
+ break;
+ UPDATE_ANALYSERS(req->analysers, ana_list, ana_back, AN_REQ_INSPECT_FE);
+ }
+
+ if (ana_list & AN_REQ_WAIT_HTTP) {
+ if (!http_wait_for_request(s, req, AN_REQ_WAIT_HTTP))
+ break;
+ UPDATE_ANALYSERS(req->analysers, ana_list, ana_back, AN_REQ_WAIT_HTTP);
+ }
+
+ if (ana_list & AN_REQ_HTTP_BODY) {
+ if (!http_wait_for_request_body(s, req, AN_REQ_HTTP_BODY))
+ break;
+ UPDATE_ANALYSERS(req->analysers, ana_list, ana_back, AN_REQ_HTTP_BODY);
+ }
+
+ if (ana_list & AN_REQ_HTTP_PROCESS_FE) {
+ if (!http_process_req_common(s, req, AN_REQ_HTTP_PROCESS_FE, sess->fe))
+ break;
+ UPDATE_ANALYSERS(req->analysers, ana_list, ana_back, AN_REQ_HTTP_PROCESS_FE);
+ }
+
+ if (ana_list & AN_REQ_SWITCHING_RULES) {
+ if (!process_switching_rules(s, req, AN_REQ_SWITCHING_RULES))
+ break;
+ UPDATE_ANALYSERS(req->analysers, ana_list, ana_back, AN_REQ_SWITCHING_RULES);
+ }
+
+ if (ana_list & AN_REQ_INSPECT_BE) {
+ if (!tcp_inspect_request(s, req, AN_REQ_INSPECT_BE))
+ break;
+ UPDATE_ANALYSERS(req->analysers, ana_list, ana_back, AN_REQ_INSPECT_BE);
+ }
+
+ if (ana_list & AN_REQ_HTTP_PROCESS_BE) {
+ if (!http_process_req_common(s, req, AN_REQ_HTTP_PROCESS_BE, s->be))
+ break;
+ UPDATE_ANALYSERS(req->analysers, ana_list, ana_back, AN_REQ_HTTP_PROCESS_BE);
+ }
+
+ if (ana_list & AN_REQ_HTTP_TARPIT) {
+ if (!http_process_tarpit(s, req, AN_REQ_HTTP_TARPIT))
+ break;
+ UPDATE_ANALYSERS(req->analysers, ana_list, ana_back, AN_REQ_HTTP_TARPIT);
+ }
+
+ if (ana_list & AN_REQ_SRV_RULES) {
+ if (!process_server_rules(s, req, AN_REQ_SRV_RULES))
+ break;
+ UPDATE_ANALYSERS(req->analysers, ana_list, ana_back, AN_REQ_SRV_RULES);
+ }
+
+ if (ana_list & AN_REQ_HTTP_INNER) {
+ if (!http_process_request(s, req, AN_REQ_HTTP_INNER))
+ break;
+ UPDATE_ANALYSERS(req->analysers, ana_list, ana_back, AN_REQ_HTTP_INNER);
+ }
+
+ if (ana_list & AN_REQ_PRST_RDP_COOKIE) {
+ if (!tcp_persist_rdp_cookie(s, req, AN_REQ_PRST_RDP_COOKIE))
+ break;
+ UPDATE_ANALYSERS(req->analysers, ana_list, ana_back, AN_REQ_PRST_RDP_COOKIE);
+ }
+
+ if (ana_list & AN_REQ_STICKING_RULES) {
+ if (!process_sticking_rules(s, req, AN_REQ_STICKING_RULES))
+ break;
+ UPDATE_ANALYSERS(req->analysers, ana_list, ana_back, AN_REQ_STICKING_RULES);
+ }
+
+ if (ana_list & AN_REQ_HTTP_XFER_BODY) {
+ if (!http_request_forward_body(s, req, AN_REQ_HTTP_XFER_BODY))
+ break;
+ UPDATE_ANALYSERS(req->analysers, ana_list, ana_back, AN_REQ_HTTP_XFER_BODY);
+ }
+ break;
+ }
+ }
+
+ rq_prod_last = si_f->state;
+ rq_cons_last = si_b->state;
+ req->flags &= ~CF_WAKE_ONCE;
+ rqf_last = req->flags;
+
+ if ((req->flags ^ flags) & CF_MASK_STATIC)
+ goto resync_request;
+ }
+
+ /* we'll monitor the request analysers while parsing the response,
+ * because some response analysers may indirectly enable new request
+ * analysers (eg: HTTP keep-alive).
+ */
+ req_ana_back = req->analysers;
+
+ resync_response:
+ /* Analyse response */
+
+ if (((res->flags & ~rpf_last) & CF_MASK_ANALYSER) ||
+ (res->flags ^ rpf_last) & CF_MASK_STATIC ||
+ si_f->state != rp_cons_last ||
+ si_b->state != rp_prod_last ||
+ s->task->state & TASK_WOKEN_MSG) {
+ unsigned int flags = res->flags;
+
+ if ((res->flags & CF_MASK_ANALYSER) &&
+ (res->analysers & AN_REQ_ALL)) {
+ /* Due to HTTP pipelining, the HTTP request analyser might be waiting
+ * for some free space in the response buffer, so we might need to call
+ * it when something changes in the response buffer, but still we pass
+ * it the request buffer. Note that the SI state might very well still
+ * be zero due to us returning a flow of redirects!
+ */
+ res->analysers &= ~AN_REQ_ALL;
+ req->flags |= CF_WAKE_ONCE;
+ }
+
+ if (si_b->state >= SI_ST_EST) {
+ int max_loops = global.tune.maxpollevents;
+ unsigned int ana_list;
+ unsigned int ana_back;
+
+ /* it's up to the analysers to disable reading or
+ * closing. Note: if an analyser disables any of these
+ * bits, it is responsible for enabling them again when
+ * it disables itself, so that other analysers are called
+ * in similar conditions.
+ */
+ channel_auto_read(res);
+ channel_auto_close(res);
+
+ /* We will call all analysers for which a bit is set in
+ * res->analysers, following the bit order from LSB
+ * to MSB. The analysers must remove themselves from
+ * the list when not needed. Any analyser may return 0
+ * to break out of the loop, either because of missing
+ * data to take a decision, or because it decides to
+ * kill the stream. We loop at least once through each
+ * analyser, and we may loop again if other analysers
+ * are added in the middle.
+ */
+
+ ana_list = ana_back = res->analysers;
+ while (ana_list && max_loops--) {
+ /* Warning! ensure that analysers are always placed in ascending order! */
+
+ if (ana_list & AN_RES_INSPECT) {
+ if (!tcp_inspect_response(s, res, AN_RES_INSPECT))
+ break;
+ UPDATE_ANALYSERS(res->analysers, ana_list, ana_back, AN_RES_INSPECT);
+ }
+
+ if (ana_list & AN_RES_WAIT_HTTP) {
+ if (!http_wait_for_response(s, res, AN_RES_WAIT_HTTP))
+ break;
+ UPDATE_ANALYSERS(res->analysers, ana_list, ana_back, AN_RES_WAIT_HTTP);
+ }
+
+ if (ana_list & AN_RES_STORE_RULES) {
+ if (!process_store_rules(s, res, AN_RES_STORE_RULES))
+ break;
+ UPDATE_ANALYSERS(res->analysers, ana_list, ana_back, AN_RES_STORE_RULES);
+ }
+
+ if (ana_list & AN_RES_HTTP_PROCESS_BE) {
+ if (!http_process_res_common(s, res, AN_RES_HTTP_PROCESS_BE, s->be))
+ break;
+ UPDATE_ANALYSERS(res->analysers, ana_list, ana_back, AN_RES_HTTP_PROCESS_BE);
+ }
+
+ if (ana_list & AN_RES_HTTP_XFER_BODY) {
+ if (!http_response_forward_body(s, res, AN_RES_HTTP_XFER_BODY))
+ break;
+ UPDATE_ANALYSERS(res->analysers, ana_list, ana_back, AN_RES_HTTP_XFER_BODY);
+ }
+ break;
+ }
+ }
+
+ rp_cons_last = si_f->state;
+ rp_prod_last = si_b->state;
+ rpf_last = res->flags;
+
+ if ((res->flags ^ flags) & CF_MASK_STATIC)
+ goto resync_response;
+ }
+
+ /* maybe someone has added some request analysers, so we must check and loop */
+ if (req->analysers & ~req_ana_back)
+ goto resync_request;
+
+ if ((req->flags & ~rqf_last) & CF_MASK_ANALYSER)
+ goto resync_request;
+
+ /* FIXME: here we should call protocol handlers which rely on
+ * both buffers.
+ */
+
+
+ /*
+ * Now we propagate unhandled errors to the stream. Normally
+ * we're just in a data phase here since it means we have not
+ * seen any analyser who could set an error status.
+ */
+ srv = objt_server(s->target);
+ if (unlikely(!(s->flags & SF_ERR_MASK))) {
+ if (req->flags & (CF_READ_ERROR|CF_READ_TIMEOUT|CF_WRITE_ERROR|CF_WRITE_TIMEOUT)) {
+ /* Report it if the client got an error or a read timeout expired */
+ req->analysers = 0;
+ if (req->flags & CF_READ_ERROR) {
+ s->be->be_counters.cli_aborts++;
+ sess->fe->fe_counters.cli_aborts++;
+ if (srv)
+ srv->counters.cli_aborts++;
+ s->flags |= SF_ERR_CLICL;
+ }
+ else if (req->flags & CF_READ_TIMEOUT) {
+ s->be->be_counters.cli_aborts++;
+ sess->fe->fe_counters.cli_aborts++;
+ if (srv)
+ srv->counters.cli_aborts++;
+ s->flags |= SF_ERR_CLITO;
+ }
+ else if (req->flags & CF_WRITE_ERROR) {
+ s->be->be_counters.srv_aborts++;
+ sess->fe->fe_counters.srv_aborts++;
+ if (srv)
+ srv->counters.srv_aborts++;
+ s->flags |= SF_ERR_SRVCL;
+ }
+ else {
+ s->be->be_counters.srv_aborts++;
+ sess->fe->fe_counters.srv_aborts++;
+ if (srv)
+ srv->counters.srv_aborts++;
+ s->flags |= SF_ERR_SRVTO;
+ }
+ sess_set_term_flags(s);
+ }
+ else if (res->flags & (CF_READ_ERROR|CF_READ_TIMEOUT|CF_WRITE_ERROR|CF_WRITE_TIMEOUT)) {
+ /* Report it if the server got an error or a read timeout expired */
+ res->analysers = 0;
+ if (res->flags & CF_READ_ERROR) {
+ s->be->be_counters.srv_aborts++;
+ sess->fe->fe_counters.srv_aborts++;
+ if (srv)
+ srv->counters.srv_aborts++;
+ s->flags |= SF_ERR_SRVCL;
+ }
+ else if (res->flags & CF_READ_TIMEOUT) {
+ s->be->be_counters.srv_aborts++;
+ sess->fe->fe_counters.srv_aborts++;
+ if (srv)
+ srv->counters.srv_aborts++;
+ s->flags |= SF_ERR_SRVTO;
+ }
+ else if (res->flags & CF_WRITE_ERROR) {
+ s->be->be_counters.cli_aborts++;
+ sess->fe->fe_counters.cli_aborts++;
+ if (srv)
+ srv->counters.cli_aborts++;
+ s->flags |= SF_ERR_CLICL;
+ }
+ else {
+ s->be->be_counters.cli_aborts++;
+ sess->fe->fe_counters.cli_aborts++;
+ if (srv)
+ srv->counters.cli_aborts++;
+ s->flags |= SF_ERR_CLITO;
+ }
+ sess_set_term_flags(s);
+ }
+ }
+
+ /*
+ * Here we take care of forwarding unhandled data. This also includes
+ * connection establishments and shutdown requests.
+ */
+
+
+ /* If no one is interested in analysing data, it's time to forward
+ * everything. We configure the buffer to forward indefinitely.
+ * Note that we're checking CF_SHUTR_NOW as an indication of a possible
+ * recent call to channel_abort().
+ */
+ if (unlikely(!req->analysers &&
+ !(req->flags & (CF_SHUTW|CF_SHUTR_NOW)) &&
+ (si_f->state >= SI_ST_EST) &&
+ (req->to_forward != CHN_INFINITE_FORWARD))) {
+ /* This buffer is freewheeling, there's no analyser
+ * attached to it. If any data are left in it, we'll permit them to
+ * move.
+ */
+ channel_auto_read(req);
+ channel_auto_connect(req);
+ channel_auto_close(req);
+ buffer_flush(req->buf);
+
+ /* We'll let data flow from the producer (if still connected)
+ * to the consumer (which might possibly not be connected yet).
+ */
+ if (!(req->flags & (CF_SHUTR|CF_SHUTW_NOW)))
+ channel_forward(req, CHN_INFINITE_FORWARD);
+
+ /* Just in order to support fetching HTTP contents after start
+ * of forwarding when the HTTP forwarding analyser is not used,
+ * we simply reset msg->sov so that HTTP rewinding points to the
+ * headers.
+ */
+ if (s->txn)
+ s->txn->req.sov = s->txn->req.eoh + s->txn->req.eol - req->buf->o;
+ }
+
+ /* check if it is wise to enable kernel splicing to forward request data */
+ if (!(req->flags & (CF_KERN_SPLICING|CF_SHUTR)) &&
+ req->to_forward &&
+ (global.tune.options & GTUNE_USE_SPLICE) &&
+ (objt_conn(si_f->end) && __objt_conn(si_f->end)->xprt && __objt_conn(si_f->end)->xprt->rcv_pipe) &&
+ (objt_conn(si_b->end) && __objt_conn(si_b->end)->xprt && __objt_conn(si_b->end)->xprt->snd_pipe) &&
+ (pipes_used < global.maxpipes) &&
+ (((sess->fe->options2|s->be->options2) & PR_O2_SPLIC_REQ) ||
+ (((sess->fe->options2|s->be->options2) & PR_O2_SPLIC_AUT) &&
+ (req->flags & CF_STREAMER_FAST)))) {
+ req->flags |= CF_KERN_SPLICING;
+ }
+
+ /* reflect what the L7 analysers have seen last */
+ rqf_last = req->flags;
+
+ /*
+ * Now forward all shutdown requests between both sides of the buffer
+ */
+
+ /* first, let's check if the request buffer needs to shutdown(write), which may
+ * happen either because the input is closed or because we want to force a close
+ * once the server has begun to respond. If a half-closed timeout is set, we adjust
+ * the other side's timeout as well.
+ */
+ if (unlikely((req->flags & (CF_SHUTW|CF_SHUTW_NOW|CF_AUTO_CLOSE|CF_SHUTR)) ==
+ (CF_AUTO_CLOSE|CF_SHUTR))) {
+ channel_shutw_now(req);
+ }
+
+ /* shutdown(write) pending */
+ if (unlikely((req->flags & (CF_SHUTW|CF_SHUTW_NOW)) == CF_SHUTW_NOW &&
+ channel_is_empty(req))) {
+ if (req->flags & CF_READ_ERROR)
+ si_b->flags |= SI_FL_NOLINGER;
+ si_shutw(si_b);
+ if (tick_isset(s->be->timeout.serverfin)) {
+ res->rto = s->be->timeout.serverfin;
+ res->rex = tick_add(now_ms, res->rto);
+ }
+ }
+
+ /* shutdown(write) done on server side, we must stop the client too */
+ if (unlikely((req->flags & (CF_SHUTW|CF_SHUTR|CF_SHUTR_NOW)) == CF_SHUTW &&
+ !req->analysers))
+ channel_shutr_now(req);
+
+ /* shutdown(read) pending */
+ if (unlikely((req->flags & (CF_SHUTR|CF_SHUTR_NOW)) == CF_SHUTR_NOW)) {
+ if (si_f->flags & SI_FL_NOHALF)
+ si_f->flags |= SI_FL_NOLINGER;
+ si_shutr(si_f);
+ }
+
+ /* it's possible that an upper layer has requested a connection setup or abort.
+ * There are 2 situations where we decide to establish a new connection:
+ * - there are data scheduled for emission in the buffer
+ * - the CF_AUTO_CONNECT flag is set (active connection)
+ */
+ if (si_b->state == SI_ST_INI) {
+ if (!(req->flags & CF_SHUTW)) {
+ if ((req->flags & CF_AUTO_CONNECT) || !channel_is_empty(req)) {
+ /* If we have an appctx, there is no connect method, so we
+ * immediately switch to the connected state, otherwise we
+ * perform a connection request.
+ */
+ si_b->state = SI_ST_REQ; /* new connection requested */
+ si_b->conn_retries = s->be->conn_retries;
+ }
+ }
+ else {
+ si_b->state = SI_ST_CLO; /* shutw+ini = abort */
+ channel_shutw_now(req); /* fix buffer flags upon abort */
+ channel_shutr_now(res);
+ }
+ }
+
+
+ /* we may have a pending connection request, or a connection waiting
+ * for completion.
+ */
+ if (si_b->state >= SI_ST_REQ && si_b->state < SI_ST_CON) {
+
+ /* prune the request variables and swap to the response variables. */
+ if (s->vars_reqres.scope != SCOPE_RES) {
+ vars_prune(&s->vars_reqres, s);
+ vars_init(&s->vars_reqres, SCOPE_RES);
+ }
+
+ do {
+ /* nb: step 1 might switch from QUE to ASS, but we first want
+ * to give a chance to step 2 to perform a redirect if needed.
+ */
+ if (si_b->state != SI_ST_REQ)
+ sess_update_stream_int(s);
+ if (si_b->state == SI_ST_REQ)
+ sess_prepare_conn_req(s);
+
+ /* applets directly go to the ESTABLISHED state. Similarly,
+ * servers experience the same fate when their connection
+ * is reused.
+ */
+ if (unlikely(si_b->state == SI_ST_EST))
+ sess_establish(s);
+
+ /* Now we can add the server name to a header (if requested) */
+ /* check for HTTP mode and proxy server_id_hdr_name != NULL */
+ if ((si_b->state >= SI_ST_CON) && (si_b->state < SI_ST_CLO) &&
+ (s->be->server_id_hdr_name != NULL) &&
+ (s->be->mode == PR_MODE_HTTP) &&
+ objt_server(s->target)) {
+ http_send_name_header(s->txn, s->be, objt_server(s->target)->id);
+ }
+
+ srv = objt_server(s->target);
+ if (si_b->state == SI_ST_ASS && srv && srv->rdr_len && (s->flags & SF_REDIRECTABLE))
+ http_perform_server_redirect(s, si_b);
+ } while (si_b->state == SI_ST_ASS);
+ }
+
+ /* Benchmarks have shown that it's optimal to do a full resync now */
+ if (si_f->state == SI_ST_DIS || si_b->state == SI_ST_DIS)
+ goto resync_stream_interface;
+
+ /* otherwise we want to check if we need to resync the req buffer or not */
+ if ((req->flags ^ rqf_last) & CF_MASK_STATIC)
+ goto resync_request;
+
+ /* perform output updates to the response buffer */
+
+ /* If no one is interested in analysing data, it's time to forward
+ * everything. We configure the buffer to forward indefinitely.
+ * Note that we're checking CF_SHUTR_NOW as an indication of a possible
+ * recent call to channel_abort().
+ */
+ if (unlikely(!res->analysers &&
+ !(res->flags & (CF_SHUTW|CF_SHUTR_NOW)) &&
+ (si_b->state >= SI_ST_EST) &&
+ (res->to_forward != CHN_INFINITE_FORWARD))) {
+ /* This buffer is freewheeling, there's no analyser
+ * attached to it. If any data are left in, we'll permit them to
+ * move.
+ */
+ channel_auto_read(res);
+ channel_auto_close(res);
+ buffer_flush(res->buf);
+
+ /* We'll let data flow between the producer (if still connected)
+ * to the consumer.
+ */
+ if (!(res->flags & (CF_SHUTR|CF_SHUTW_NOW)))
+ channel_forward(res, CHN_INFINITE_FORWARD);
+
+ /* Just in order to support fetching HTTP contents after start
+ * of forwarding when the HTTP forwarding analyser is not used,
+ * we simply reset msg->sov so that HTTP rewinding points to the
+ * headers.
+ */
+ if (s->txn)
+ s->txn->rsp.sov = s->txn->rsp.eoh + s->txn->rsp.eol - res->buf->o;
+
+ /* if we have no analyser anymore in any direction and have a
+ * tunnel timeout set, use it now. Note that we must respect
+ * the half-closed timeouts as well.
+ */
+ if (!req->analysers && s->be->timeout.tunnel) {
+ req->rto = req->wto = res->rto = res->wto =
+ s->be->timeout.tunnel;
+
+ if ((req->flags & CF_SHUTR) && tick_isset(sess->fe->timeout.clientfin))
+ res->wto = sess->fe->timeout.clientfin;
+ if ((req->flags & CF_SHUTW) && tick_isset(s->be->timeout.serverfin))
+ res->rto = s->be->timeout.serverfin;
+ if ((res->flags & CF_SHUTR) && tick_isset(s->be->timeout.serverfin))
+ req->wto = s->be->timeout.serverfin;
+ if ((res->flags & CF_SHUTW) && tick_isset(sess->fe->timeout.clientfin))
+ req->rto = sess->fe->timeout.clientfin;
+
+ req->rex = tick_add(now_ms, req->rto);
+ req->wex = tick_add(now_ms, req->wto);
+ res->rex = tick_add(now_ms, res->rto);
+ res->wex = tick_add(now_ms, res->wto);
+ }
+ }
+
+ /* check if it is wise to enable kernel splicing to forward response data */
+ if (!(res->flags & (CF_KERN_SPLICING|CF_SHUTR)) &&
+ res->to_forward &&
+ (global.tune.options & GTUNE_USE_SPLICE) &&
+ (objt_conn(si_f->end) && __objt_conn(si_f->end)->xprt && __objt_conn(si_f->end)->xprt->snd_pipe) &&
+ (objt_conn(si_b->end) && __objt_conn(si_b->end)->xprt && __objt_conn(si_b->end)->xprt->rcv_pipe) &&
+ (pipes_used < global.maxpipes) &&
+ (((sess->fe->options2|s->be->options2) & PR_O2_SPLIC_RTR) ||
+ (((sess->fe->options2|s->be->options2) & PR_O2_SPLIC_AUT) &&
+ (res->flags & CF_STREAMER_FAST)))) {
+ res->flags |= CF_KERN_SPLICING;
+ }
+
+ /* reflect what the L7 analysers have seen last */
+ rpf_last = res->flags;
+
+ /*
+ * Now forward all shutdown requests between both sides of the buffer
+ */
+
+ /*
+ * FIXME: this is probably where we should produce error responses.
+ */
+
+ /* first, let's check if the response buffer needs to shutdown(write) */
+ if (unlikely((res->flags & (CF_SHUTW|CF_SHUTW_NOW|CF_AUTO_CLOSE|CF_SHUTR)) ==
+ (CF_AUTO_CLOSE|CF_SHUTR))) {
+ channel_shutw_now(res);
+ }
+
+ /* shutdown(write) pending */
+ if (unlikely((res->flags & (CF_SHUTW|CF_SHUTW_NOW)) == CF_SHUTW_NOW &&
+ channel_is_empty(res))) {
+ si_shutw(si_f);
+ if (tick_isset(sess->fe->timeout.clientfin)) {
+ req->rto = sess->fe->timeout.clientfin;
+ req->rex = tick_add(now_ms, req->rto);
+ }
+ }
+
+ /* shutdown(write) done on the client side, we must stop the server too */
+ if (unlikely((res->flags & (CF_SHUTW|CF_SHUTR|CF_SHUTR_NOW)) == CF_SHUTW) &&
+ !res->analysers)
+ channel_shutr_now(res);
+
+ /* shutdown(read) pending */
+ if (unlikely((res->flags & (CF_SHUTR|CF_SHUTR_NOW)) == CF_SHUTR_NOW)) {
+ if (si_b->flags & SI_FL_NOHALF)
+ si_b->flags |= SI_FL_NOLINGER;
+ si_shutr(si_b);
+ }
+
+ if (si_f->state == SI_ST_DIS || si_b->state == SI_ST_DIS)
+ goto resync_stream_interface;
+
+ if (req->flags != rqf_last)
+ goto resync_request;
+
+ if ((res->flags ^ rpf_last) & CF_MASK_STATIC)
+ goto resync_response;
+
+ /* we're interested in getting wakeups again */
+ si_f->flags &= ~SI_FL_DONT_WAKE;
+ si_b->flags &= ~SI_FL_DONT_WAKE;
+
+ /* This is needed only when debugging is enabled, to indicate
+ * client-side or server-side close. Please note that in the unlikely
+ * event where both sides would close at once, the sequence is reported
+ * on the server side first.
+ */
+ if (unlikely((global.mode & MODE_DEBUG) &&
+ (!(global.mode & MODE_QUIET) ||
+ (global.mode & MODE_VERBOSE)))) {
+ if (si_b->state == SI_ST_CLO &&
+ si_b->prev_state == SI_ST_EST) {
+ chunk_printf(&trash, "%08x:%s.srvcls[%04x:%04x]\n",
+ s->uniq_id, s->be->id,
+ objt_conn(si_f->end) ? (unsigned short)objt_conn(si_f->end)->t.sock.fd : -1,
+ objt_conn(si_b->end) ? (unsigned short)objt_conn(si_b->end)->t.sock.fd : -1);
+ shut_your_big_mouth_gcc(write(1, trash.str, trash.len));
+ }
+
+ if (si_f->state == SI_ST_CLO &&
+ si_f->prev_state == SI_ST_EST) {
+ chunk_printf(&trash, "%08x:%s.clicls[%04x:%04x]\n",
+ s->uniq_id, s->be->id,
+ objt_conn(si_f->end) ? (unsigned short)objt_conn(si_f->end)->t.sock.fd : -1,
+ objt_conn(si_b->end) ? (unsigned short)objt_conn(si_b->end)->t.sock.fd : -1);
+ shut_your_big_mouth_gcc(write(1, trash.str, trash.len));
+ }
+ }
+
+ if (likely((si_f->state != SI_ST_CLO) ||
+ (si_b->state > SI_ST_INI && si_b->state < SI_ST_CLO))) {
+
+ if ((sess->fe->options & PR_O_CONTSTATS) && (s->flags & SF_BE_ASSIGNED))
+ stream_process_counters(s);
+
+ if (si_f->state == SI_ST_EST)
+ si_update(si_f);
+
+ if (si_b->state == SI_ST_EST)
+ si_update(si_b);
+
+ req->flags &= ~(CF_READ_NULL|CF_READ_PARTIAL|CF_WRITE_NULL|CF_WRITE_PARTIAL|CF_READ_ATTACHED);
+ res->flags &= ~(CF_READ_NULL|CF_READ_PARTIAL|CF_WRITE_NULL|CF_WRITE_PARTIAL|CF_READ_ATTACHED);
+ si_f->prev_state = si_f->state;
+ si_b->prev_state = si_b->state;
+ si_f->flags &= ~(SI_FL_ERR|SI_FL_EXP);
+ si_b->flags &= ~(SI_FL_ERR|SI_FL_EXP);
+
+ /* Trick: if a request is waiting for the server to respond,
+ * and if we know the server can timeout, we don't want the timeout
+ * to expire on the client side first, but we're still interested
+ * in passing data from the client to the server (eg: POST). Thus,
+ * we can cancel the client's request timeout if the server's
+ * request timeout is set and the server has not yet sent a response.
+ */
+
+ if ((res->flags & (CF_AUTO_CLOSE|CF_SHUTR)) == 0 &&
+ (tick_isset(req->wex) || tick_isset(res->rex))) {
+ req->flags |= CF_READ_NOEXP;
+ req->rex = TICK_ETERNITY;
+ }
+
+ update_exp_and_leave:
+ t->expire = tick_first(tick_first(req->rex, req->wex),
+ tick_first(res->rex, res->wex));
+ if (req->analysers)
+ t->expire = tick_first(t->expire, req->analyse_exp);
+
+ if (si_f->exp)
+ t->expire = tick_first(t->expire, si_f->exp);
+
+ if (si_b->exp)
+ t->expire = tick_first(t->expire, si_b->exp);
+
+#ifdef DEBUG_FULL
+ fprintf(stderr,
+ "[%u] queuing with exp=%u req->rex=%u req->wex=%u req->ana_exp=%u"
+ " rep->rex=%u rep->wex=%u, si[0].exp=%u, si[1].exp=%u, cs=%d, ss=%d\n",
+ now_ms, t->expire, req->rex, req->wex, req->analyse_exp,
+ res->rex, res->wex, si_f->exp, si_b->exp, si_f->state, si_b->state);
+#endif
+
+#ifdef DEBUG_DEV
+ /* this may only happen when no timeout is set or in case of an FSM bug */
+ if (!tick_isset(t->expire))
+ ABORT_NOW();
+#endif
+ stream_release_buffers(s);
+ return t; /* nothing more to do */
+ }
+
+ sess->fe->feconn--;
+ if (s->flags & SF_BE_ASSIGNED)
+ s->be->beconn--;
+ jobs--;
+ if (sess->listener) {
+ if (!(sess->listener->options & LI_O_UNLIMITED))
+ actconn--;
+ sess->listener->nbconn--;
+ if (sess->listener->state == LI_FULL)
+ resume_listener(sess->listener);
+
+ /* Dequeues all of the listeners waiting for a resource */
+ if (!LIST_ISEMPTY(&global_listener_queue))
+ dequeue_all_listeners(&global_listener_queue);
+
+ if (!LIST_ISEMPTY(&sess->fe->listener_queue) &&
+ (!sess->fe->fe_sps_lim || freq_ctr_remain(&sess->fe->fe_sess_per_sec, sess->fe->fe_sps_lim, 0) > 0))
+ dequeue_all_listeners(&sess->fe->listener_queue);
+ }
+
+ if (unlikely((global.mode & MODE_DEBUG) &&
+ (!(global.mode & MODE_QUIET) || (global.mode & MODE_VERBOSE)))) {
+ chunk_printf(&trash, "%08x:%s.closed[%04x:%04x]\n",
+ s->uniq_id, s->be->id,
+ objt_conn(si_f->end) ? (unsigned short)objt_conn(si_f->end)->t.sock.fd : -1,
+ objt_conn(si_b->end) ? (unsigned short)objt_conn(si_b->end)->t.sock.fd : -1);
+ shut_your_big_mouth_gcc(write(1, trash.str, trash.len));
+ }
+
+ s->logs.t_close = tv_ms_elapsed(&s->logs.tv_accept, &now);
+ stream_process_counters(s);
+
+ if (s->txn && s->txn->status) {
+ int n;
+
+ n = s->txn->status / 100;
+ if (n < 1 || n > 5)
+ n = 0;
+
+ if (sess->fe->mode == PR_MODE_HTTP) {
+ sess->fe->fe_counters.p.http.rsp[n]++;
+ if (s->comp_algo && (s->flags & SF_COMP_READY))
+ sess->fe->fe_counters.p.http.comp_rsp++;
+ }
+ if ((s->flags & SF_BE_ASSIGNED) &&
+ (s->be->mode == PR_MODE_HTTP)) {
+ s->be->be_counters.p.http.rsp[n]++;
+ s->be->be_counters.p.http.cum_req++;
+ if (s->comp_algo && (s->flags & SF_COMP_READY))
+ s->be->be_counters.p.http.comp_rsp++;
+ }
+ }
+
+ /* let's do a final log if we need it */
+ if (!LIST_ISEMPTY(&sess->fe->logformat) && s->logs.logwait &&
+ !(s->flags & SF_MONITOR) &&
+ (!(sess->fe->options & PR_O_NULLNOLOG) || req->total)) {
+ s->do_log(s);
+ }
+
+ /* update time stats for this stream */
+ stream_update_time_stats(s);
+
+ /* the task MUST not be in the run queue anymore */
+ stream_free(s);
+ task_delete(t);
+ task_free(t);
+ return NULL;
+}
+
+/* Update the stream's backend and server time stats */
+void stream_update_time_stats(struct stream *s)
+{
+ int t_request;
+ int t_queue;
+ int t_connect;
+ int t_data;
+ int t_close;
+ struct server *srv;
+
+ t_request = 0;
+ t_queue = s->logs.t_queue;
+ t_connect = s->logs.t_connect;
+ t_close = s->logs.t_close;
+ t_data = s->logs.t_data;
+
+ if (s->be->mode != PR_MODE_HTTP)
+ t_data = t_connect;
+
+ if (t_connect < 0 || t_data < 0)
+ return;
+
+ if (tv_isge(&s->logs.tv_request, &s->logs.tv_accept))
+ t_request = tv_ms_elapsed(&s->logs.tv_accept, &s->logs.tv_request);
+
+ t_data -= t_connect;
+ t_connect -= t_queue;
+ t_queue -= t_request;
+
+ srv = objt_server(s->target);
+ if (srv) {
+ swrate_add(&srv->counters.q_time, TIME_STATS_SAMPLES, t_queue);
+ swrate_add(&srv->counters.c_time, TIME_STATS_SAMPLES, t_connect);
+ swrate_add(&srv->counters.d_time, TIME_STATS_SAMPLES, t_data);
+ swrate_add(&srv->counters.t_time, TIME_STATS_SAMPLES, t_close);
+ }
+ swrate_add(&s->be->be_counters.q_time, TIME_STATS_SAMPLES, t_queue);
+ swrate_add(&s->be->be_counters.c_time, TIME_STATS_SAMPLES, t_connect);
+ swrate_add(&s->be->be_counters.d_time, TIME_STATS_SAMPLES, t_data);
+ swrate_add(&s->be->be_counters.t_time, TIME_STATS_SAMPLES, t_close);
+}
+
+/*
+ * This function adjusts sess->srv_conn and maintains the previous and new
+ * server's served stream counts. Setting newsrv to NULL is enough to release
+ * current connection slot. This function also notifies any LB algo which might
+ * expect to be informed about any change in the number of active streams on a
+ * server.
+ */
+void sess_change_server(struct stream *sess, struct server *newsrv)
+{
+ if (sess->srv_conn == newsrv)
+ return;
+
+ if (sess->srv_conn) {
+ sess->srv_conn->served--;
+ if (sess->srv_conn->proxy->lbprm.server_drop_conn)
+ sess->srv_conn->proxy->lbprm.server_drop_conn(sess->srv_conn);
+ stream_del_srv_conn(sess);
+ }
+
+ if (newsrv) {
+ newsrv->served++;
+ if (newsrv->proxy->lbprm.server_take_conn)
+ newsrv->proxy->lbprm.server_take_conn(newsrv);
+ stream_add_srv_conn(sess, newsrv);
+ }
+}
+
+/* Handle server-side errors for default protocols. It is called whenever a
+ * connection setup is aborted or a request is aborted in queue. It sets the
+ * stream termination flags so that the caller does not have to worry about
+ * them. It's installed as ->srv_error for the server-side stream_interface.
+ */
+void default_srv_error(struct stream *s, struct stream_interface *si)
+{
+ int err_type = si->err_type;
+ int err = 0, fin = 0;
+
+ if (err_type & SI_ET_QUEUE_ABRT) {
+ err = SF_ERR_CLICL;
+ fin = SF_FINST_Q;
+ }
+ else if (err_type & SI_ET_CONN_ABRT) {
+ err = SF_ERR_CLICL;
+ fin = SF_FINST_C;
+ }
+ else if (err_type & SI_ET_QUEUE_TO) {
+ err = SF_ERR_SRVTO;
+ fin = SF_FINST_Q;
+ }
+ else if (err_type & SI_ET_QUEUE_ERR) {
+ err = SF_ERR_SRVCL;
+ fin = SF_FINST_Q;
+ }
+ else if (err_type & SI_ET_CONN_TO) {
+ err = SF_ERR_SRVTO;
+ fin = SF_FINST_C;
+ }
+ else if (err_type & SI_ET_CONN_ERR) {
+ err = SF_ERR_SRVCL;
+ fin = SF_FINST_C;
+ }
+ else if (err_type & SI_ET_CONN_RES) {
+ err = SF_ERR_RESOURCE;
+ fin = SF_FINST_C;
+ }
+ else /* SI_ET_CONN_OTHER and others */ {
+ err = SF_ERR_INTERNAL;
+ fin = SF_FINST_C;
+ }
+
+ if (!(s->flags & SF_ERR_MASK))
+ s->flags |= err;
+ if (!(s->flags & SF_FINST_MASK))
+ s->flags |= fin;
+}
+
+/* kill a stream and set the termination flags to <why> (one of SF_ERR_*) */
+void stream_shutdown(struct stream *stream, int why)
+{
+ if (stream->req.flags & (CF_SHUTW|CF_SHUTW_NOW))
+ return;
+
+ channel_shutw_now(&stream->req);
+ channel_shutr_now(&stream->res);
+ stream->task->nice = 1024;
+ if (!(stream->flags & SF_ERR_MASK))
+ stream->flags |= why;
+ task_wakeup(stream->task, TASK_WOKEN_OTHER);
+}
+
+/************************************************************************/
+/* All supported ACL keywords must be declared here. */
+/************************************************************************/
+
+/* Returns a pointer to a stkctr depending on the fetch keyword name.
+ * It is designed to be called as sc[0-9]_*, sc_* or src_* exclusively.
+ * sc[0-9]_* will return a pointer to the respective field in the
+ * stream <l4>. sc_* requires a UINT argument specifying the stick
+ * counter number. src_* will fill a locally allocated structure with
+ * the table and entry corresponding to what is specified with src_*.
+ * NULL may be returned if the designated stkctr is not tracked. For
+ * the sc_* and sc[0-9]_* forms, an optional table argument may be
+ * passed. When present, the currently tracked key is then looked up
+ * in the specified table instead of the current table. The purpose is
+ * to be able to convey multiple values per key (eg: have gpc0 from
+ * multiple tables). <strm> is allowed to be NULL, in which case only
+ * the session will be consulted.
+ */
+struct stkctr *
+smp_fetch_sc_stkctr(struct session *sess, struct stream *strm, const struct arg *args, const char *kw)
+{
+ static struct stkctr stkctr;
+ struct stkctr *stkptr;
+ struct stksess *stksess;
+ unsigned int num = kw[2] - '0';
+ int arg = 0;
+
+ if (num == '_' - '0') {
+ /* sc_* variant, args[0] = ctr# (mandatory) */
+ num = args[arg++].data.sint;
+ if (num >= MAX_SESS_STKCTR)
+ return NULL;
+ }
+ else if (num > 9) { /* src_* variant, args[0] = table */
+ struct stktable_key *key;
+ struct connection *conn = objt_conn(sess->origin);
+ struct sample smp;
+
+ if (!conn)
+ return NULL;
+
+ /* Fetch the source address into a sample. */
+ smp.px = NULL;
+ smp.sess = sess;
+ smp.strm = strm;
+ if (!smp_fetch_src(NULL, &smp, NULL, NULL))
+ return NULL;
+
+ /* Convert it into a table key. */
+ key = smp_to_stkey(&smp, &args->data.prx->table);
+ if (!key)
+ return NULL;
+
+ stkctr.table = &args->data.prx->table;
+ stkctr_set_entry(&stkctr, stktable_lookup_key(stkctr.table, key));
+ return &stkctr;
+ }
+
+ /* Here, <num> contains the counter number from 0 to 9 for
+ * the sc[0-9]_ form, or even higher using sc_(num) if needed.
+ * args[arg] is the first optional argument. We first look up the
+ * ctr from the stream, then from the session if it was not there.
+ */
+
+ if (strm)
+ stkptr = &strm->stkctr[num];
+ if (!strm || !stkctr_entry(stkptr)) {
+ stkptr = &sess->stkctr[num];
+ if (!stkctr_entry(stkptr))
+ return NULL;
+ }
+
+ stksess = stkctr_entry(stkptr);
+ if (!stksess)
+ return NULL;
+
+ if (unlikely(args[arg].type == ARGT_TAB)) {
+ /* an alternate table was specified, let's look up the same key there */
+ stkctr.table = &args[arg].data.prx->table;
+ stkctr_set_entry(&stkctr, stktable_lookup(stkctr.table, stksess));
+ return &stkctr;
+ }
+ return stkptr;
+}
+
+/* same as smp_fetch_sc_stkctr() but dedicated to src_* and can create
+ * the entry if it doesn't exist yet. This is needed for a few fetch
+ * functions which need to create an entry, such as src_inc_gpc* and
+ * src_clr_gpc*.
+ */
+struct stkctr *
+smp_create_src_stkctr(struct session *sess, struct stream *strm, const struct arg *args, const char *kw)
+{
+ static struct stkctr stkctr;
+ struct stktable_key *key;
+ struct connection *conn = objt_conn(sess->origin);
+ struct sample smp;
+
+ if (strncmp(kw, "src_", 4) != 0)
+ return NULL;
+
+ if (!conn)
+ return NULL;
+
+ /* Fetch the source address into a sample. */
+ smp.px = NULL;
+ smp.sess = sess;
+ smp.strm = strm;
+ if (!smp_fetch_src(NULL, &smp, NULL, NULL))
+ return NULL;
+
+ /* Convert it into a table key. */
+ key = smp_to_stkey(&smp, &args->data.prx->table);
+ if (!key)
+ return NULL;
+
+ stkctr.table = &args->data.prx->table;
+ stkctr_set_entry(&stkctr, stktable_update_key(stkctr.table, key));
+ return &stkctr;
+}
+
+/* set <smp> to a boolean indicating whether the requested stream counter
+ * is currently being tracked.
+ * Supports being called as "sc[0-9]_tracked" only.
+ */
+static int
+smp_fetch_sc_tracked(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_BOOL;
+ smp->data.u.sint = !!smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
+ return 1;
+}
+
+/* set <smp> to the General Purpose Flag 0 value from the stream's tracked
+ * frontend counters or from the src.
+ * Supports being called as "sc[0-9]_get_gpt0" or "src_get_gpt0" only. Value
+ * zero is returned if the key is new.
+ */
+static int
+smp_fetch_sc_get_gpt0(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct stkctr *stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
+
+ if (!stkctr)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ if (stkctr_entry(stkctr) != NULL) {
+ void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_GPT0);
+ if (!ptr)
+ return 0; /* parameter not stored */
+ smp->data.u.sint = stktable_data_cast(ptr, gpt0);
+ }
+ return 1;
+}
+
+/* set <smp> to the General Purpose Counter 0 value from the stream's tracked
+ * frontend counters or from the src.
+ * Supports being called as "sc[0-9]_get_gpc0" or "src_get_gpc0" only. Value
+ * zero is returned if the key is new.
+ */
+static int
+smp_fetch_sc_get_gpc0(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct stkctr *stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
+
+ if (!stkctr)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ if (stkctr_entry(stkctr) != NULL) {
+ void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_GPC0);
+ if (!ptr)
+ return 0; /* parameter not stored */
+ smp->data.u.sint = stktable_data_cast(ptr, gpc0);
+ }
+ return 1;
+}
+
+/* set <smp> to the General Purpose Counter 0's event rate from the stream's
+ * tracked frontend counters or from the src.
+ * Supports being called as "sc[0-9]_gpc0_rate" or "src_gpc0_rate" only.
+ * Value zero is returned if the key is new.
+ */
+static int
+smp_fetch_sc_gpc0_rate(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct stkctr *stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
+
+ if (!stkctr)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+ if (stkctr_entry(stkctr) != NULL) {
+ void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_GPC0_RATE);
+ if (!ptr)
+ return 0; /* parameter not stored */
+ smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, gpc0_rate),
+ stkctr->table->data_arg[STKTABLE_DT_GPC0_RATE].u);
+ }
+ return 1;
+}
+
+/* Increment the General Purpose Counter 0 value from the stream's tracked
+ * frontend counters and return it into temp integer.
+ * Supports being called as "sc[0-9]_inc_gpc0" or "src_inc_gpc0" only.
+ */
+static int
+smp_fetch_sc_inc_gpc0(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct stkctr *stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
+
+ if (!stkctr)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ if (stkctr_entry(stkctr) == NULL)
+ stkctr = smp_create_src_stkctr(smp->sess, smp->strm, args, kw);
+
+ if (stkctr && stkctr_entry(stkctr) != NULL) {
+ void *ptr1,*ptr2;
+
+ /* First, update gpc0_rate if it's tracked. Second, update its
+ * gpc0 if tracked. Returns gpc0's value if tracked, otherwise
+ * the rate's current counter.
+ */
+ ptr1 = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_GPC0_RATE);
+ if (ptr1) {
+ update_freq_ctr_period(&stktable_data_cast(ptr1, gpc0_rate),
+ stkctr->table->data_arg[STKTABLE_DT_GPC0_RATE].u, 1);
+ smp->data.u.sint = (&stktable_data_cast(ptr1, gpc0_rate))->curr_ctr;
+ }
+
+ ptr2 = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_GPC0);
+ if (ptr2)
+ smp->data.u.sint = ++stktable_data_cast(ptr2, gpc0);
+
+ /* If data was modified, we need to touch to re-schedule sync */
+ if (ptr1 || ptr2)
+ stktable_touch(stkctr->table, stkctr_entry(stkctr), 1);
+ }
+ return 1;
+}
+
+/* Clear the General Purpose Counter 0 value from the stream's tracked
+ * frontend counters and return its previous value into temp integer.
+ * Supports being called as "sc[0-9]_clr_gpc0" or "src_clr_gpc0" only.
+ */
+static int
+smp_fetch_sc_clr_gpc0(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct stkctr *stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
+
+ if (!stkctr)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+
+ if (stkctr_entry(stkctr) == NULL)
+ stkctr = smp_create_src_stkctr(smp->sess, smp->strm, args, kw);
+
+ if (stkctr && stkctr_entry(stkctr) != NULL) {
+ void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_GPC0);
+ if (!ptr)
+ return 0; /* parameter not stored */
+ smp->data.u.sint = stktable_data_cast(ptr, gpc0);
+ stktable_data_cast(ptr, gpc0) = 0;
+ /* If data was modified, we need to touch to re-schedule sync */
+ stktable_touch(stkctr->table, stkctr_entry(stkctr), 1);
+ }
+ return 1;
+}
+
+/* set <smp> to the cumulated number of connections from the stream's tracked
+ * frontend counters. Supports being called as "sc[0-9]_conn_cnt" or
+ * "src_conn_cnt" only.
+ */
+static int
+smp_fetch_sc_conn_cnt(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct stkctr *stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
+
+ if (!stkctr)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+ if (stkctr_entry(stkctr) != NULL) {
+ void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_CONN_CNT);
+ if (!ptr)
+ return 0; /* parameter not stored */
+ smp->data.u.sint = stktable_data_cast(ptr, conn_cnt);
+ }
+ return 1;
+}
+
+/* set <smp> to the connection rate from the stream's tracked frontend
+ * counters. Supports being called as "sc[0-9]_conn_rate" or "src_conn_rate"
+ * only.
+ */
+static int
+smp_fetch_sc_conn_rate(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct stkctr *stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
+
+ if (!stkctr)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+ if (stkctr_entry(stkctr) != NULL) {
+ void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_CONN_RATE);
+ if (!ptr)
+ return 0; /* parameter not stored */
+ smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, conn_rate),
+ stkctr->table->data_arg[STKTABLE_DT_CONN_RATE].u);
+ }
+ return 1;
+}
+
+/* set temp integer to the number of connections from the stream's source address
+ * in the table pointed to by expr, after updating it.
+ * Accepts exactly 1 argument of type table.
+ */
+static int
+smp_fetch_src_updt_conn_cnt(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct connection *conn = objt_conn(smp->sess->origin);
+ struct stksess *ts;
+ struct stktable_key *key;
+ void *ptr;
+ struct proxy *px;
+
+ if (!conn)
+ return 0;
+
+ /* Fetch the source address into a sample. */
+ if (!smp_fetch_src(NULL, smp, NULL, NULL))
+ return 0;
+
+ /* Convert it into a table key. */
+ key = smp_to_stkey(smp, &args->data.prx->table);
+ if (!key)
+ return 0;
+
+ px = args->data.prx;
+
+ if ((ts = stktable_update_key(&px->table, key)) == NULL)
+ /* entry does not exist and could not be created */
+ return 0;
+
+ ptr = stktable_data_ptr(&px->table, ts, STKTABLE_DT_CONN_CNT);
+ if (!ptr)
+ return 0; /* parameter not stored in this table */
+
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = ++stktable_data_cast(ptr, conn_cnt);
+ /* Touch was previously performed by stktable_update_key */
+ smp->flags = SMP_F_VOL_TEST;
+ return 1;
+}
+
+/* set <smp> to the number of concurrent connections from the stream's tracked
+ * frontend counters. Supports being called as "sc[0-9]_conn_cur" or
+ * "src_conn_cur" only.
+ */
+static int
+smp_fetch_sc_conn_cur(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct stkctr *stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
+
+ if (!stkctr)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+ if (stkctr_entry(stkctr) != NULL) {
+ void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_CONN_CUR);
+ if (!ptr)
+ return 0; /* parameter not stored */
+ smp->data.u.sint = stktable_data_cast(ptr, conn_cur);
+ }
+ return 1;
+}
+
+/* set <smp> to the cumulated number of streams from the stream's tracked
+ * frontend counters. Supports being called as "sc[0-9]_sess_cnt" or
+ * "src_sess_cnt" only.
+ */
+static int
+smp_fetch_sc_sess_cnt(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct stkctr *stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
+
+ if (!stkctr)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+ if (stkctr_entry(stkctr) != NULL) {
+ void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_SESS_CNT);
+ if (!ptr)
+ return 0; /* parameter not stored */
+ smp->data.u.sint = stktable_data_cast(ptr, sess_cnt);
+ }
+ return 1;
+}
+
+/* set <smp> to the stream rate from the stream's tracked frontend counters.
+ * Supports being called as "sc[0-9]_sess_rate" or "src_sess_rate" only.
+ */
+static int
+smp_fetch_sc_sess_rate(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct stkctr *stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
+
+ if (!stkctr)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+ if (stkctr_entry(stkctr) != NULL) {
+ void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_SESS_RATE);
+ if (!ptr)
+ return 0; /* parameter not stored */
+ smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, sess_rate),
+ stkctr->table->data_arg[STKTABLE_DT_SESS_RATE].u);
+ }
+ return 1;
+}
+
+/* set <smp> to the cumulated number of HTTP requests from the stream's tracked
+ * frontend counters. Supports being called as "sc[0-9]_http_req_cnt" or
+ * "src_http_req_cnt" only.
+ */
+static int
+smp_fetch_sc_http_req_cnt(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct stkctr *stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
+
+ if (!stkctr)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+ if (stkctr_entry(stkctr) != NULL) {
+ void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_HTTP_REQ_CNT);
+ if (!ptr)
+ return 0; /* parameter not stored */
+ smp->data.u.sint = stktable_data_cast(ptr, http_req_cnt);
+ }
+ return 1;
+}
+
+/* set <smp> to the HTTP request rate from the stream's tracked frontend
+ * counters. Supports being called as "sc[0-9]_http_req_rate" or
+ * "src_http_req_rate" only.
+ */
+static int
+smp_fetch_sc_http_req_rate(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct stkctr *stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
+
+ if (!stkctr)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+ if (stkctr_entry(stkctr) != NULL) {
+ void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_HTTP_REQ_RATE);
+ if (!ptr)
+ return 0; /* parameter not stored */
+ smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, http_req_rate),
+ stkctr->table->data_arg[STKTABLE_DT_HTTP_REQ_RATE].u);
+ }
+ return 1;
+}
+
+/* set <smp> to the cumulated number of HTTP requests errors from the stream's
+ * tracked frontend counters. Supports being called as "sc[0-9]_http_err_cnt" or
+ * "src_http_err_cnt" only.
+ */
+static int
+smp_fetch_sc_http_err_cnt(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct stkctr *stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
+
+ if (!stkctr)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+ if (stkctr_entry(stkctr) != NULL) {
+ void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_HTTP_ERR_CNT);
+ if (!ptr)
+ return 0; /* parameter not stored */
+ smp->data.u.sint = stktable_data_cast(ptr, http_err_cnt);
+ }
+ return 1;
+}
+
+/* set <smp> to the HTTP request error rate from the stream's tracked frontend
+ * counters. Supports being called as "sc[0-9]_http_err_rate" or
+ * "src_http_err_rate" only.
+ */
+static int
+smp_fetch_sc_http_err_rate(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct stkctr *stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
+
+ if (!stkctr)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+ if (stkctr_entry(stkctr) != NULL) {
+ void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_HTTP_ERR_RATE);
+ if (!ptr)
+ return 0; /* parameter not stored */
+ smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, http_err_rate),
+ stkctr->table->data_arg[STKTABLE_DT_HTTP_ERR_RATE].u);
+ }
+ return 1;
+}
+
+/* set <smp> to the number of kbytes received from clients, as found in the
+ * stream's tracked frontend counters. Supports being called as
+ * "sc[0-9]_kbytes_in" or "src_kbytes_in" only.
+ */
+static int
+smp_fetch_sc_kbytes_in(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct stkctr *stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
+
+ if (!stkctr)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+ if (stkctr_entry(stkctr) != NULL) {
+ void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_BYTES_IN_CNT);
+ if (!ptr)
+ return 0; /* parameter not stored */
+ smp->data.u.sint = stktable_data_cast(ptr, bytes_in_cnt) >> 10;
+ }
+ return 1;
+}
+
+/* set <smp> to the data rate received from clients in bytes/s, as found
+ * in the stream's tracked frontend counters. Supports being called as
+ * "sc[0-9]_bytes_in_rate" or "src_bytes_in_rate" only.
+ */
+static int
+smp_fetch_sc_bytes_in_rate(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct stkctr *stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
+
+ if (!stkctr)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+ if (stkctr_entry(stkctr) != NULL) {
+ void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_BYTES_IN_RATE);
+ if (!ptr)
+ return 0; /* parameter not stored */
+ smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, bytes_in_rate),
+ stkctr->table->data_arg[STKTABLE_DT_BYTES_IN_RATE].u);
+ }
+ return 1;
+}
+
+/* set <smp> to the number of kbytes sent to clients, as found in the
+ * stream's tracked frontend counters. Supports being called as
+ * "sc[0-9]_kbytes_out" or "src_kbytes_out" only.
+ */
+static int
+smp_fetch_sc_kbytes_out(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct stkctr *stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
+
+ if (!stkctr)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+ if (stkctr_entry(stkctr) != NULL) {
+ void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_BYTES_OUT_CNT);
+ if (!ptr)
+ return 0; /* parameter not stored */
+ smp->data.u.sint = stktable_data_cast(ptr, bytes_out_cnt) >> 10;
+ }
+ return 1;
+}
+
+/* set <smp> to the data rate sent to clients in bytes/s, as found in the
+ * stream's tracked frontend counters. Supports being called as
+ * "sc[0-9]_bytes_out_rate" or "src_bytes_out_rate" only.
+ */
+static int
+smp_fetch_sc_bytes_out_rate(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct stkctr *stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
+
+ if (!stkctr)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = 0;
+ if (stkctr_entry(stkctr) != NULL) {
+ void *ptr = stktable_data_ptr(stkctr->table, stkctr_entry(stkctr), STKTABLE_DT_BYTES_OUT_RATE);
+ if (!ptr)
+ return 0; /* parameter not stored */
+ smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, bytes_out_rate),
+ stkctr->table->data_arg[STKTABLE_DT_BYTES_OUT_RATE].u);
+ }
+ return 1;
+}
+
+/* set <smp> to the number of active trackers on the SC entry in the stream's
+ * tracked frontend counters. Supports being called as "sc[0-9]_trackers" only.
+ */
+static int
+smp_fetch_sc_trackers(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct stkctr *stkctr = smp_fetch_sc_stkctr(smp->sess, smp->strm, args, kw);
+
+ if (!stkctr)
+ return 0;
+
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = stkctr_entry(stkctr)->ref_cnt;
+ return 1;
+}
+
+/* set temp integer to the number of used entries in the table pointed to by expr.
+ * Accepts exactly 1 argument of type table.
+ */
+static int
+smp_fetch_table_cnt(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = args->data.prx->table.current;
+ return 1;
+}
+
+/* set temp integer to the number of free entries in the table pointed to by expr.
+ * Accepts exactly 1 argument of type table.
+ */
+static int
+smp_fetch_table_avl(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ struct proxy *px;
+
+ px = args->data.prx;
+ smp->flags = SMP_F_VOL_TEST;
+ smp->data.type = SMP_T_SINT;
+ smp->data.u.sint = px->table.size - px->table.current;
+ return 1;
+}
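+
+/* Illustrative ACL usage of the two fetches above (a sketch; the table name
+ * "st_src" is hypothetical):
+ *
+ *     tcp-request connection reject if { table_avl(st_src) lt 100 }
+ */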
+
+/* Parse a "use-service" action rule. Returns ACT_RET_PRS_OK on success,
+ * ACT_RET_PRS_ERR on error.
+ */
+static enum act_parse_ret stream_parse_use_service(const char **args, int *cur_arg,
+ struct proxy *px, struct act_rule *rule,
+ char **err)
+{
+ struct action_kw *kw;
+
+ /* Check if the service name exists. */
+ if (*(args[*cur_arg]) == 0) {
+ memprintf(err, "'%s' expects a service name.", args[0]);
+ return ACT_RET_PRS_ERR;
+ }
+
+ /* look up the keyword corresponding to a service. */
+ kw = action_lookup(&service_keywords, args[*cur_arg]);
+ if (!kw) {
+ memprintf(err, "'%s' unknown service name.", args[1]);
+ return ACT_RET_PRS_ERR;
+ }
+ (*cur_arg)++;
+
+ /* executes specific rule parser. */
+ rule->kw = kw;
+ if (kw->parse((const char **)args, cur_arg, px, rule, err) == ACT_RET_PRS_ERR)
+ return ACT_RET_PRS_ERR;
+
+ /* Register processing function. */
+ rule->action_ptr = process_use_service;
+ rule->action = ACT_CUSTOM;
+
+ return ACT_RET_PRS_OK;
+}
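+
+/* Sketch of the resulting configuration syntax; the service name below is
+ * hypothetical, actual names are those registered through
+ * service_keywords_register() (e.g. by the Lua engine):
+ *
+ *     frontend fe
+ *         http-request use-service lua.hello-world if { path /hello }
+ */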
+
+void service_keywords_register(struct action_kw_list *kw_list)
+{
+ LIST_ADDQ(&service_keywords, &kw_list->list);
+}
+
+/* main configuration keyword registration. */
+static struct action_kw_list stream_tcp_keywords = { ILH, {
+ { "use-service", stream_parse_use_service },
+ { /* END */ }
+}};
+
+static struct action_kw_list stream_http_keywords = { ILH, {
+ { "use-service", stream_parse_use_service },
+ { /* END */ }
+}};
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Please take care of keeping this list alphabetically sorted.
+ */
+static struct acl_kw_list acl_kws = {ILH, {
+ { /* END */ },
+}};
+
+/* Note: must not be declared <const> as its list will be overwritten.
+ * Please take care of keeping this list alphabetically sorted.
+ */
+static struct sample_fetch_kw_list smp_fetch_keywords = {ILH, {
+ { "sc_bytes_in_rate", smp_fetch_sc_bytes_in_rate, ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc_bytes_out_rate", smp_fetch_sc_bytes_out_rate, ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc_clr_gpc0", smp_fetch_sc_clr_gpc0, ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc_conn_cnt", smp_fetch_sc_conn_cnt, ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc_conn_cur", smp_fetch_sc_conn_cur, ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc_conn_rate", smp_fetch_sc_conn_rate, ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc_get_gpt0", smp_fetch_sc_get_gpt0, ARG2(1,SINT,TAB), NULL, SMP_T_BOOL, SMP_USE_INTRN, },
+ { "sc_get_gpc0", smp_fetch_sc_get_gpc0, ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc_gpc0_rate", smp_fetch_sc_gpc0_rate, ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc_http_err_cnt", smp_fetch_sc_http_err_cnt, ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc_http_err_rate", smp_fetch_sc_http_err_rate, ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc_http_req_cnt", smp_fetch_sc_http_req_cnt, ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc_http_req_rate", smp_fetch_sc_http_req_rate, ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc_inc_gpc0", smp_fetch_sc_inc_gpc0, ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc_kbytes_in", smp_fetch_sc_kbytes_in, ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "sc_kbytes_out", smp_fetch_sc_kbytes_out, ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "sc_sess_cnt", smp_fetch_sc_sess_cnt, ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc_sess_rate", smp_fetch_sc_sess_rate, ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc_tracked", smp_fetch_sc_tracked, ARG2(1,SINT,TAB), NULL, SMP_T_BOOL, SMP_USE_INTRN, },
+ { "sc_trackers", smp_fetch_sc_trackers, ARG2(1,SINT,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc0_bytes_in_rate", smp_fetch_sc_bytes_in_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc0_bytes_out_rate", smp_fetch_sc_bytes_out_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc0_clr_gpc0", smp_fetch_sc_clr_gpc0, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc0_conn_cnt", smp_fetch_sc_conn_cnt, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc0_conn_cur", smp_fetch_sc_conn_cur, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc0_conn_rate", smp_fetch_sc_conn_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc0_get_gpt0", smp_fetch_sc_get_gpt0, ARG1(0,TAB), NULL, SMP_T_BOOL, SMP_USE_INTRN, },
+ { "sc0_get_gpc0", smp_fetch_sc_get_gpc0, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc0_gpc0_rate", smp_fetch_sc_gpc0_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc0_http_err_cnt", smp_fetch_sc_http_err_cnt, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc0_http_err_rate", smp_fetch_sc_http_err_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc0_http_req_cnt", smp_fetch_sc_http_req_cnt, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc0_http_req_rate", smp_fetch_sc_http_req_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc0_inc_gpc0", smp_fetch_sc_inc_gpc0, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc0_kbytes_in", smp_fetch_sc_kbytes_in, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "sc0_kbytes_out", smp_fetch_sc_kbytes_out, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "sc0_sess_cnt", smp_fetch_sc_sess_cnt, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc0_sess_rate", smp_fetch_sc_sess_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc0_tracked", smp_fetch_sc_tracked, ARG1(0,TAB), NULL, SMP_T_BOOL, SMP_USE_INTRN, },
+ { "sc0_trackers", smp_fetch_sc_trackers, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc1_bytes_in_rate", smp_fetch_sc_bytes_in_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc1_bytes_out_rate", smp_fetch_sc_bytes_out_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc1_clr_gpc0", smp_fetch_sc_clr_gpc0, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc1_conn_cnt", smp_fetch_sc_conn_cnt, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc1_conn_cur", smp_fetch_sc_conn_cur, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc1_conn_rate", smp_fetch_sc_conn_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc1_get_gpt0", smp_fetch_sc_get_gpt0, ARG1(0,TAB), NULL, SMP_T_BOOL, SMP_USE_INTRN, },
+ { "sc1_get_gpc0", smp_fetch_sc_get_gpc0, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc1_gpc0_rate", smp_fetch_sc_gpc0_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc1_http_err_cnt", smp_fetch_sc_http_err_cnt, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc1_http_err_rate", smp_fetch_sc_http_err_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc1_http_req_cnt", smp_fetch_sc_http_req_cnt, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc1_http_req_rate", smp_fetch_sc_http_req_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc1_inc_gpc0", smp_fetch_sc_inc_gpc0, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc1_kbytes_in", smp_fetch_sc_kbytes_in, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "sc1_kbytes_out", smp_fetch_sc_kbytes_out, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "sc1_sess_cnt", smp_fetch_sc_sess_cnt, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc1_sess_rate", smp_fetch_sc_sess_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc1_tracked", smp_fetch_sc_tracked, ARG1(0,TAB), NULL, SMP_T_BOOL, SMP_USE_INTRN, },
+ { "sc1_trackers", smp_fetch_sc_trackers, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc2_bytes_in_rate", smp_fetch_sc_bytes_in_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc2_bytes_out_rate", smp_fetch_sc_bytes_out_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc2_clr_gpc0", smp_fetch_sc_clr_gpc0, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc2_conn_cnt", smp_fetch_sc_conn_cnt, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc2_conn_cur", smp_fetch_sc_conn_cur, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc2_conn_rate", smp_fetch_sc_conn_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc2_get_gpt0", smp_fetch_sc_get_gpt0, ARG1(0,TAB), NULL, SMP_T_BOOL, SMP_USE_INTRN, },
+ { "sc2_get_gpc0", smp_fetch_sc_get_gpc0, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc2_gpc0_rate", smp_fetch_sc_gpc0_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc2_http_err_cnt", smp_fetch_sc_http_err_cnt, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc2_http_err_rate", smp_fetch_sc_http_err_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc2_http_req_cnt", smp_fetch_sc_http_req_cnt, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc2_http_req_rate", smp_fetch_sc_http_req_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc2_inc_gpc0", smp_fetch_sc_inc_gpc0, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc2_kbytes_in", smp_fetch_sc_kbytes_in, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "sc2_kbytes_out", smp_fetch_sc_kbytes_out, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "sc2_sess_cnt", smp_fetch_sc_sess_cnt, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc2_sess_rate", smp_fetch_sc_sess_rate, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "sc2_tracked", smp_fetch_sc_tracked, ARG1(0,TAB), NULL, SMP_T_BOOL, SMP_USE_INTRN, },
+ { "sc2_trackers", smp_fetch_sc_trackers, ARG1(0,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "src_bytes_in_rate", smp_fetch_sc_bytes_in_rate, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "src_bytes_out_rate", smp_fetch_sc_bytes_out_rate, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "src_clr_gpc0", smp_fetch_sc_clr_gpc0, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "src_conn_cnt", smp_fetch_sc_conn_cnt, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "src_conn_cur", smp_fetch_sc_conn_cur, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "src_conn_rate", smp_fetch_sc_conn_rate, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "src_get_gpt0", smp_fetch_sc_get_gpt0, ARG1(1,TAB), NULL, SMP_T_BOOL, SMP_USE_L4CLI, },
+ { "src_get_gpc0", smp_fetch_sc_get_gpc0, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "src_gpc0_rate", smp_fetch_sc_gpc0_rate, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "src_http_err_cnt", smp_fetch_sc_http_err_cnt, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "src_http_err_rate", smp_fetch_sc_http_err_rate, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "src_http_req_cnt", smp_fetch_sc_http_req_cnt, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "src_http_req_rate", smp_fetch_sc_http_req_rate, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "src_inc_gpc0", smp_fetch_sc_inc_gpc0, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "src_kbytes_in", smp_fetch_sc_kbytes_in, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "src_kbytes_out", smp_fetch_sc_kbytes_out, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "src_sess_cnt", smp_fetch_sc_sess_cnt, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "src_sess_rate", smp_fetch_sc_sess_rate, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "src_updt_conn_cnt", smp_fetch_src_updt_conn_cnt, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_L4CLI, },
+ { "table_avl", smp_fetch_table_avl, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { "table_cnt", smp_fetch_table_cnt, ARG1(1,TAB), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+ { /* END */ },
+}};
+
+__attribute__((constructor))
+static void __stream_init(void)
+{
+ sample_register_fetches(&smp_fetch_keywords);
+ acl_register_keywords(&acl_kws);
+ tcp_req_cont_keywords_register(&stream_tcp_keywords);
+ http_req_keywords_register(&stream_http_keywords);
+}
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Functions managing stream_interface structures
+ *
+ * Copyright 2000-2012 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <errno.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#include <sys/socket.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+
+#include <common/buffer.h>
+#include <common/compat.h>
+#include <common/config.h>
+#include <common/debug.h>
+#include <common/standard.h>
+#include <common/ticks.h>
+#include <common/time.h>
+
+#include <proto/applet.h>
+#include <proto/channel.h>
+#include <proto/connection.h>
+#include <proto/pipe.h>
+#include <proto/stream.h>
+#include <proto/stream_interface.h>
+#include <proto/task.h>
+
+#include <types/pipe.h>
+
+/* socket functions used when running a stream interface as a task */
+static void stream_int_shutr(struct stream_interface *si);
+static void stream_int_shutw(struct stream_interface *si);
+static void stream_int_chk_rcv(struct stream_interface *si);
+static void stream_int_chk_snd(struct stream_interface *si);
+static void stream_int_shutr_conn(struct stream_interface *si);
+static void stream_int_shutw_conn(struct stream_interface *si);
+static void stream_int_chk_rcv_conn(struct stream_interface *si);
+static void stream_int_chk_snd_conn(struct stream_interface *si);
+static void stream_int_shutr_applet(struct stream_interface *si);
+static void stream_int_shutw_applet(struct stream_interface *si);
+static void stream_int_chk_rcv_applet(struct stream_interface *si);
+static void stream_int_chk_snd_applet(struct stream_interface *si);
+static void si_conn_recv_cb(struct connection *conn);
+static void si_conn_send_cb(struct connection *conn);
+static int si_conn_wake_cb(struct connection *conn);
+static int si_idle_conn_wake_cb(struct connection *conn);
+static void si_idle_conn_null_cb(struct connection *conn);
+
+/* stream-interface operations for embedded tasks */
+struct si_ops si_embedded_ops = {
+ .chk_rcv = stream_int_chk_rcv,
+ .chk_snd = stream_int_chk_snd,
+ .shutr = stream_int_shutr,
+ .shutw = stream_int_shutw,
+};
+
+/* stream-interface operations for connections */
+struct si_ops si_conn_ops = {
+ .update = stream_int_update_conn,
+ .chk_rcv = stream_int_chk_rcv_conn,
+ .chk_snd = stream_int_chk_snd_conn,
+ .shutr = stream_int_shutr_conn,
+ .shutw = stream_int_shutw_conn,
+};
+
+/* stream-interface operations for applets */
+struct si_ops si_applet_ops = {
+ .update = stream_int_update_applet,
+ .chk_rcv = stream_int_chk_rcv_applet,
+ .chk_snd = stream_int_chk_snd_applet,
+ .shutr = stream_int_shutr_applet,
+ .shutw = stream_int_shutw_applet,
+};
+
+struct data_cb si_conn_cb = {
+ .recv = si_conn_recv_cb,
+ .send = si_conn_send_cb,
+ .wake = si_conn_wake_cb,
+};
+
+struct data_cb si_idle_conn_cb = {
+ .recv = si_idle_conn_null_cb,
+ .send = si_idle_conn_null_cb,
+ .wake = si_idle_conn_wake_cb,
+};
+
+/*
+ * This function only has to be called once after a wakeup event in case of
+ * suspected timeout. It controls the stream interface timeouts and sets
+ * si->flags accordingly. It does NOT close anything, as this timeout may
+ * be used for any purpose. It returns 1 if the timeout fired, otherwise
+ * zero.
+ */
+int stream_int_check_timeouts(struct stream_interface *si)
+{
+ if (tick_is_expired(si->exp, now_ms)) {
+ si->flags |= SI_FL_EXP;
+ return 1;
+ }
+ return 0;
+}
+
+/* to be called only when in SI_ST_DIS with SI_FL_ERR */
+void stream_int_report_error(struct stream_interface *si)
+{
+ if (!si->err_type)
+ si->err_type = SI_ET_DATA_ERR;
+
+ si_oc(si)->flags |= CF_WRITE_ERROR;
+ si_ic(si)->flags |= CF_READ_ERROR;
+}
+
+/*
+ * Returns a message to the client ; the connection is shut down for read,
+ * and the request is cleared so that no server connection can be initiated.
+ * The buffer is marked for read shutdown on the other side to protect the
+ * message, and the buffer write is enabled. The message is contained in a
+ * "chunk". If it is null, then an empty message is used. The reply buffer does
+ * not need to be empty before this, and its contents will not be overwritten.
+ * The primary goal of this function is to return error messages to a client.
+ */
+void stream_int_retnclose(struct stream_interface *si, const struct chunk *msg)
+{
+ struct channel *ic = si_ic(si);
+ struct channel *oc = si_oc(si);
+
+ channel_auto_read(ic);
+ channel_abort(ic);
+ channel_auto_close(ic);
+ channel_erase(ic);
+ channel_truncate(oc);
+
+ if (likely(msg && msg->len))
+ bo_inject(oc, msg->str, msg->len);
+
+ oc->wex = tick_add_ifset(now_ms, oc->wto);
+ channel_auto_read(oc);
+ channel_auto_close(oc);
+ channel_shutr_now(oc);
+}
+
+/*
+ * This function performs a shutdown-read on a detached stream interface in a
+ * connected or init state (it does nothing for other states). It either shuts
+ * the read side or marks itself as closed. The buffer flags are updated to
+ * reflect the new state. If the stream interface has SI_FL_NOHALF, we also
+ * forward the close to the write side. The owner task is woken up if it exists.
+ */
+static void stream_int_shutr(struct stream_interface *si)
+{
+ struct channel *ic = si_ic(si);
+
+ ic->flags &= ~CF_SHUTR_NOW;
+ if (ic->flags & CF_SHUTR)
+ return;
+ ic->flags |= CF_SHUTR;
+ ic->rex = TICK_ETERNITY;
+ si->flags &= ~SI_FL_WAIT_ROOM;
+
+ if (si->state != SI_ST_EST && si->state != SI_ST_CON)
+ return;
+
+ if (si_oc(si)->flags & CF_SHUTW) {
+ si->state = SI_ST_DIS;
+ si->exp = TICK_ETERNITY;
+ }
+ else if (si->flags & SI_FL_NOHALF) {
+ /* we want to immediately forward this close to the write side */
+ return stream_int_shutw(si);
+ }
+
+ /* note that if the task exists, it must unregister itself once it runs */
+ if (!(si->flags & SI_FL_DONT_WAKE))
+ task_wakeup(si_task(si), TASK_WOKEN_IO);
+}
+
+/*
+ * This function performs a shutdown-write on a detached stream interface in a
+ * connected or init state (it does nothing for other states). It either shuts
+ * the write side or marks itself as closed. The buffer flags are updated to
+ * reflect the new state. It does also close everything if the SI was marked as
+ * being in error state. The owner task is woken up if it exists.
+ */
+static void stream_int_shutw(struct stream_interface *si)
+{
+ struct channel *ic = si_ic(si);
+ struct channel *oc = si_oc(si);
+
+ oc->flags &= ~CF_SHUTW_NOW;
+ if (oc->flags & CF_SHUTW)
+ return;
+ oc->flags |= CF_SHUTW;
+ oc->wex = TICK_ETERNITY;
+ si->flags &= ~SI_FL_WAIT_DATA;
+
+ switch (si->state) {
+ case SI_ST_EST:
+ /* we have to shut before closing, otherwise some short messages
+ * may never leave the system, especially when there are remaining
+ * unread data in the socket input buffer, or when nolinger is set.
+ * However, if SI_FL_NOLINGER is explicitly set, we know there is
+ * no risk so we close both sides immediately.
+ */
+ if (!(si->flags & (SI_FL_ERR | SI_FL_NOLINGER)) &&
+ !(ic->flags & (CF_SHUTR|CF_DONT_READ)))
+ return;
+
+ /* fall through */
+ case SI_ST_CON:
+ case SI_ST_CER:
+ case SI_ST_QUE:
+ case SI_ST_TAR:
+ /* Note that none of these states may happen with applets */
+ si->state = SI_ST_DIS;
+ default:
+ si->flags &= ~(SI_FL_WAIT_ROOM | SI_FL_NOLINGER);
+ ic->flags &= ~CF_SHUTR_NOW;
+ ic->flags |= CF_SHUTR;
+ ic->rex = TICK_ETERNITY;
+ si->exp = TICK_ETERNITY;
+ }
+
+ /* note that if the task exists, it must unregister itself once it runs */
+ if (!(si->flags & SI_FL_DONT_WAKE))
+ task_wakeup(si_task(si), TASK_WOKEN_IO);
+}
+
+/* default chk_rcv function for scheduled tasks */
+static void stream_int_chk_rcv(struct stream_interface *si)
+{
+ struct channel *ic = si_ic(si);
+
+ DPRINTF(stderr, "%s: si=%p, si->state=%d ic->flags=%08x oc->flags=%08x\n",
+ __FUNCTION__,
+ si, si->state, ic->flags, si_oc(si)->flags);
+
+ if (unlikely(si->state != SI_ST_EST || (ic->flags & (CF_SHUTR|CF_DONT_READ))))
+ return;
+
+ if (!channel_may_recv(ic) || ic->pipe) {
+ /* stop reading */
+ si->flags |= SI_FL_WAIT_ROOM;
+ }
+ else {
+ /* (re)start reading */
+ si->flags &= ~SI_FL_WAIT_ROOM;
+ if (!(si->flags & SI_FL_DONT_WAKE))
+ task_wakeup(si_task(si), TASK_WOKEN_IO);
+ }
+}
+
+/* default chk_snd function for scheduled tasks */
+static void stream_int_chk_snd(struct stream_interface *si)
+{
+ struct channel *oc = si_oc(si);
+
+ DPRINTF(stderr, "%s: si=%p, si->state=%d ic->flags=%08x oc->flags=%08x\n",
+ __FUNCTION__,
+ si, si->state, si_ic(si)->flags, oc->flags);
+
+ if (unlikely(si->state != SI_ST_EST || (oc->flags & CF_SHUTW)))
+ return;
+
+ if (!(si->flags & SI_FL_WAIT_DATA) || /* not waiting for data */
+ channel_is_empty(oc)) /* called with nothing to send ! */
+ return;
+
+ /* Otherwise there are remaining data to be sent in the buffer,
+ * so we tell the handler.
+ */
+ si->flags &= ~SI_FL_WAIT_DATA;
+ if (!tick_isset(oc->wex))
+ oc->wex = tick_add_ifset(now_ms, oc->wto);
+
+ if (!(si->flags & SI_FL_DONT_WAKE))
+ task_wakeup(si_task(si), TASK_WOKEN_IO);
+}
+
+/* Register an applet to handle a stream_interface as a new appctx. The SI will
+ * wake it up everytime it is solicited. The appctx must be deleted by the task
+ * handler using si_release_endpoint(), possibly from within the function itself.
+ * It also pre-initializes the applet's context and returns it (or NULL in case
+ * it could not be allocated).
+ */
+struct appctx *stream_int_register_handler(struct stream_interface *si, struct applet *app)
+{
+ struct appctx *appctx;
+
+ DPRINTF(stderr, "registering handler %p for si %p (was %p)\n", app, si, si_task(si));
+
+ appctx = si_alloc_appctx(si, app);
+ if (!appctx)
+ return NULL;
+
+ si_applet_cant_get(si);
+ appctx_wakeup(appctx);
+ return si_appctx(si);
+}
+
+/* This callback is used to send a valid PROXY protocol line to a socket being
+ * established. It returns 0 if it fails in a fatal way or needs to poll to go
+ * further, otherwise it returns non-zero and removes itself from the connection's
+ * flags (the bit is provided in <flag> by the caller). It is designed to be
+ * called by the connection handler and relies on it to commit polling changes.
+ * Note that it can emit a PROXY line by relying on the other end's address
+ * when the connection is attached to a stream interface, or by resolving the
+ * local address otherwise (also called a LOCAL line).
+ */
+int conn_si_send_proxy(struct connection *conn, unsigned int flag)
+{
+ /* we might have been called just after an asynchronous shutw */
+ if (conn->flags & CO_FL_SOCK_WR_SH)
+ goto out_error;
+
+ if (!conn_ctrl_ready(conn))
+ goto out_error;
+
+ /* If we have a PROXY line to send, we'll use this to validate the
+ * connection, in which case the connection is validated only once
+ * we've sent the whole proxy line. Otherwise we use connect().
+ */
+ while (conn->send_proxy_ofs) {
+ int ret;
+
+ /* The target server expects a PROXY line to be sent first.
+ * If the send_proxy_ofs is negative, it corresponds to the
+ * offset to start sending from the end of the proxy string
+ * (which is recomputed every time since it's constant). If
+ * it is positive, it means we have to send from the start.
+ * We can only send a "normal" PROXY line when the connection
+ * is attached to a stream interface. Otherwise we can only
+ * send a LOCAL line (eg: for use with health checks).
+ */
+ if (conn->data == &si_conn_cb) {
+ struct stream_interface *si = conn->owner;
+ struct connection *remote = objt_conn(si_opposite(si)->end);
+ ret = make_proxy_line(trash.str, trash.size, objt_server(conn->target), remote);
+ }
+ else {
+ /* The target server expects a LOCAL line to be sent first. Retrieving
+ * local or remote addresses may fail until the connection is established.
+ */
+ conn_get_from_addr(conn);
+ if (!(conn->flags & CO_FL_ADDR_FROM_SET))
+ goto out_wait;
+
+ conn_get_to_addr(conn);
+ if (!(conn->flags & CO_FL_ADDR_TO_SET))
+ goto out_wait;
+
+ ret = make_proxy_line(trash.str, trash.size, objt_server(conn->target), conn);
+ }
+
+ if (!ret)
+ goto out_error;
+
+ if (conn->send_proxy_ofs > 0)
+ conn->send_proxy_ofs = -ret; /* first call */
+
+ /* we have to send trash from (ret+sp for -sp bytes). If the
+ * data layer has a pending write, we'll also set MSG_MORE.
+ */
+ ret = conn_sock_send(conn, trash.str + ret + conn->send_proxy_ofs, -conn->send_proxy_ofs,
+ (conn->flags & CO_FL_DATA_WR_ENA) ? MSG_MORE : 0);
+
+ if (ret < 0)
+ goto out_error;
+
+ conn->send_proxy_ofs += ret; /* becomes zero once complete */
+ if (conn->send_proxy_ofs != 0)
+ goto out_wait;
+
+ /* OK we've sent the whole line, we're connected */
+ break;
+ }
+
+ /* The connection is ready now, simply return and let the connection
+ * handler notify upper layers if needed.
+ */
+ if (conn->flags & CO_FL_WAIT_L4_CONN)
+ conn->flags &= ~CO_FL_WAIT_L4_CONN;
+ conn->flags &= ~flag;
+ return 1;
+
+ out_error:
+ /* Write error on the file descriptor */
+ conn->flags |= CO_FL_ERROR;
+ return 0;
+
+ out_wait:
+ __conn_sock_stop_recv(conn);
+ return 0;
+}
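+
+/* For reference, a version 1 PROXY line as described by the PROXY protocol
+ * specification looks like:
+ *
+ *     PROXY TCP4 192.168.0.1 192.168.0.11 56324 443\r\n
+ *
+ * i.e. the client's address and port followed by the frontend's.
+ */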
+
+
+/* Tiny I/O callback called on recv/send I/O events on idle connections.
+ * It simply drains the connection's incoming data so that a pending close is
+ * detected and CO_FL_SOCK_RD_SH gets set, letting si_idle_conn_wake_cb()
+ * be notified and kill the connection.
+ */
+static void si_idle_conn_null_cb(struct connection *conn)
+{
+ conn_sock_drain(conn);
+}
+
+/* Callback to be used by connection I/O handlers when some activity is detected
+ * on an idle server connection. Its main purpose is to kill the connection once
+ * a close was detected on it. It returns 0 if it did nothing serious, or -1 if
+ * it killed the connection.
+ */
+static int si_idle_conn_wake_cb(struct connection *conn)
+{
+ struct stream_interface *si = conn->owner;
+
+ if (!conn_ctrl_ready(conn))
+ return 0;
+
+ if (conn->flags & (CO_FL_ERROR | CO_FL_SOCK_RD_SH)) {
+ /* warning, we can't do anything on <conn> after this call ! */
+ si_release_endpoint(si);
+ return -1;
+ }
+ return 0;
+}
+
+/* This function is the equivalent to stream_int_update() except that it's
+ * designed to be called from outside the stream handlers, typically the lower
+ * layers (applets, connections) after I/O completion. After updating the stream
+ * interface and timeouts, it will try to forward what can be forwarded, then to
+ * wake the associated task up if an important event requires special handling.
+ * It should not be called from within the stream itself, stream_int_update()
+ * is designed for this.
+ */
+void stream_int_notify(struct stream_interface *si)
+{
+ struct channel *ic = si_ic(si);
+ struct channel *oc = si_oc(si);
+
+ /* process consumer side */
+ if (channel_is_empty(oc)) {
+ if (((oc->flags & (CF_SHUTW|CF_SHUTW_NOW)) == CF_SHUTW_NOW) &&
+ (si->state == SI_ST_EST))
+ si_shutw(si);
+ oc->wex = TICK_ETERNITY;
+ }
+
+ /* indicate that we may be waiting for data from the output channel */
+ if ((oc->flags & (CF_SHUTW|CF_SHUTW_NOW)) == 0 && channel_may_recv(oc))
+ si->flags |= SI_FL_WAIT_DATA;
+
+ /* update OC timeouts and wake the other side up if it's waiting for room */
+ if (oc->flags & CF_WRITE_ACTIVITY) {
+ if ((oc->flags & (CF_SHUTW|CF_WRITE_PARTIAL)) == CF_WRITE_PARTIAL &&
+ !channel_is_empty(oc))
+ if (tick_isset(oc->wex))
+ oc->wex = tick_add_ifset(now_ms, oc->wto);
+
+ if (!(si->flags & SI_FL_INDEP_STR))
+ if (tick_isset(ic->rex))
+ ic->rex = tick_add_ifset(now_ms, ic->rto);
+
+ if (likely((oc->flags & (CF_SHUTW|CF_WRITE_PARTIAL|CF_DONT_READ)) == CF_WRITE_PARTIAL &&
+ channel_may_recv(oc) &&
+ (si_opposite(si)->flags & SI_FL_WAIT_ROOM)))
+ si_chk_rcv(si_opposite(si));
+ }
+
+ /* Notify the other side when we've injected data into the IC that
+ * needs to be forwarded. We can do fast-forwarding as soon as there
+ * are output data, but we avoid doing this if some of the data are
+ * not yet scheduled for being forwarded, because it is very likely
+ * that it will be done again immediately afterwards once the following
+ * data are parsed (eg: HTTP chunking). We only clear SI_FL_WAIT_ROOM once
+ * we've emptied *some* of the output buffer, and not just when there
+ * is available room, because applets are often forced to stop before
+ * the buffer is full. We must not stop based on input data alone because
+ * an HTTP parser might need more data to complete the parsing.
+ */
+ if (!channel_is_empty(ic) &&
+ (si_opposite(si)->flags & SI_FL_WAIT_DATA) &&
+ (ic->buf->i == 0 || ic->pipe)) {
+ int new_len, last_len;
+
+ last_len = ic->buf->o;
+ if (ic->pipe)
+ last_len += ic->pipe->data;
+
+ si_chk_snd(si_opposite(si));
+
+ new_len = ic->buf->o;
+ if (ic->pipe)
+ new_len += ic->pipe->data;
+
+ /* check if the consumer has freed some space either in the
+ * buffer or in the pipe.
+ */
+ if (channel_may_recv(ic) && new_len < last_len)
+ si->flags &= ~SI_FL_WAIT_ROOM;
+ }
+
+ if (si->flags & SI_FL_WAIT_ROOM) {
+ ic->rex = TICK_ETERNITY;
+ }
+ else if ((ic->flags & (CF_SHUTR|CF_READ_PARTIAL|CF_DONT_READ)) == CF_READ_PARTIAL &&
+ channel_may_recv(ic)) {
+ /* we must re-enable reading if si_chk_snd() has freed some space */
+ if (!(ic->flags & CF_READ_NOEXP) && tick_isset(ic->rex))
+ ic->rex = tick_add_ifset(now_ms, ic->rto);
+ }
+
+ /* wake the task up only when needed */
+ if (/* changes on the production side */
+ (ic->flags & (CF_READ_NULL|CF_READ_ERROR)) ||
+ si->state != SI_ST_EST ||
+ (si->flags & SI_FL_ERR) ||
+ ((ic->flags & CF_READ_PARTIAL) &&
+ (!ic->to_forward || si_opposite(si)->state != SI_ST_EST)) ||
+
+ /* changes on the consumption side */
+ (oc->flags & (CF_WRITE_NULL|CF_WRITE_ERROR)) ||
+ ((oc->flags & CF_WRITE_ACTIVITY) &&
+ ((oc->flags & CF_SHUTW) ||
+ ((oc->flags & CF_WAKE_WRITE) &&
+ (si_opposite(si)->state != SI_ST_EST ||
+ (channel_is_empty(oc) && !oc->to_forward)))))) {
+ task_wakeup(si_task(si), TASK_WOKEN_IO);
+ }
+ if (ic->flags & CF_READ_ACTIVITY)
+ ic->flags &= ~CF_READ_DONTWAIT;
+
+ stream_release_buffers(si_strm(si));
+}
+
+
+/* Callback to be used by connection I/O handlers upon completion. It propagates
+ * connection flags to the stream interface, updates the stream (which may or
+ * may not take this opportunity to try to forward data), then updates the
+ * connection's polling based on the channels and stream interface's final
+ * states. The function always returns 0.
+ */
+static int si_conn_wake_cb(struct connection *conn)
+{
+ struct stream_interface *si = conn->owner;
+ struct channel *ic = si_ic(si);
+ struct channel *oc = si_oc(si);
+
+ /* First step, report to the stream-int what was detected at the
+ * connection layer : errors and connection establishment.
+ */
+ if (conn->flags & CO_FL_ERROR)
+ si->flags |= SI_FL_ERR;
+
+ if (unlikely(!(conn->flags & (CO_FL_WAIT_L4_CONN | CO_FL_WAIT_L6_CONN | CO_FL_CONNECTED)))) {
+ si->exp = TICK_ETERNITY;
+ oc->flags |= CF_WRITE_NULL;
+ }
+
+ /* Second step : update the stream-int and channels, try to forward any
+ * pending data, then possibly wake the stream up based on the new
+ * stream-int status.
+ */
+ stream_int_notify(si);
+
+ /* Third step : update the connection's polling status based on what
+ * was done above (eg: maybe some buffers got emptied).
+ */
+ if (channel_is_empty(oc))
+ __conn_data_stop_send(conn);
+
+ if (si->flags & SI_FL_WAIT_ROOM) {
+ __conn_data_stop_recv(conn);
+ }
+ else if ((ic->flags & (CF_SHUTR|CF_READ_PARTIAL|CF_DONT_READ)) == CF_READ_PARTIAL &&
+ channel_may_recv(ic)) {
+ __conn_data_want_recv(conn);
+ }
+ return 0;
+}
+
+/*
+ * This function is called to send buffer data to a stream socket.
+ * It calls the transport layer's snd_buf function. It relies on the
+ * caller to commit polling changes. The caller should check conn->flags
+ * for errors.
+ */
+static void si_conn_send(struct connection *conn)
+{
+ struct stream_interface *si = conn->owner;
+ struct channel *oc = si_oc(si);
+ int ret;
+
+ if (oc->pipe && conn->xprt->snd_pipe) {
+ ret = conn->xprt->snd_pipe(conn, oc->pipe);
+ if (ret > 0)
+ oc->flags |= CF_WRITE_PARTIAL | CF_WROTE_DATA;
+
+ if (!oc->pipe->data) {
+ put_pipe(oc->pipe);
+ oc->pipe = NULL;
+ }
+
+ if (conn->flags & CO_FL_ERROR)
+ return;
+ }
+
+ /* At this point, the pipe is empty, but we may still have data pending
+ * in the normal buffer.
+ */
+ if (!oc->buf->o)
+ return;
+
+ /* when we're here, we already know that there is no spliced
+ * data left, and that there are sendable buffered data.
+ */
+ if (!(conn->flags & (CO_FL_ERROR | CO_FL_SOCK_WR_SH | CO_FL_DATA_WR_SH | CO_FL_WAIT_DATA | CO_FL_HANDSHAKE))) {
+ /* check if we want to inform the kernel that we're interested in
+ * sending more data after this call. We want this if :
+ * - we're about to close after this last send and want to merge
+ * the ongoing FIN with the last segment.
+ * - we know we can't send everything at once and must get back
+ * here because of unaligned data
+ * - there is still a finite amount of data to forward
+ * The test is arranged so that the most common case does only 2
+ * tests.
+ */
+ unsigned int send_flag = 0;
+
+ if ((!(oc->flags & (CF_NEVER_WAIT|CF_SEND_DONTWAIT)) &&
+ ((oc->to_forward && oc->to_forward != CHN_INFINITE_FORWARD) ||
+ (oc->flags & CF_EXPECT_MORE))) ||
+ ((oc->flags & (CF_SHUTW|CF_SHUTW_NOW)) == CF_SHUTW_NOW))
+ send_flag |= CO_SFL_MSG_MORE;
+
+ if (oc->flags & CF_STREAMER)
+ send_flag |= CO_SFL_STREAMER;
+
+ ret = conn->xprt->snd_buf(conn, oc->buf, send_flag);
+ if (ret > 0) {
+ oc->flags |= CF_WRITE_PARTIAL | CF_WROTE_DATA;
+
+ if (!oc->buf->o) {
+ /* Always clear both flags once everything has been sent, they're one-shot */
+ oc->flags &= ~(CF_EXPECT_MORE | CF_SEND_DONTWAIT);
+ }
+
+ /* if some data remain in the buffer, it's only because the
+ * system buffers are full, we will try next time.
+ */
+ }
+ }
+}
+
+/* This function is designed to be called from within the stream handler to
+ * update the channels' expiration timers and the stream interface's flags
+ * based on the channels' flags. It needs to be called only once after the
+ * channels' flags have settled down, and before they are cleared, though it
+ * doesn't harm to call it as often as desired (it just slightly hurts
+ * performance). It must not be called from outside of the stream handler,
+ * as what it does will be used to compute the stream task's expiration.
+ */
+void stream_int_update(struct stream_interface *si)
+{
+ struct channel *ic = si_ic(si);
+ struct channel *oc = si_oc(si);
+
+ if (!(ic->flags & CF_SHUTR)) {
+ /* Read not closed, update FD status and timeout for reads */
+ if ((ic->flags & CF_DONT_READ) || !channel_may_recv(ic)) {
+ /* stop reading */
+ if (!(si->flags & SI_FL_WAIT_ROOM)) {
+ if (!(ic->flags & CF_DONT_READ)) /* full */
+ si->flags |= SI_FL_WAIT_ROOM;
+ ic->rex = TICK_ETERNITY;
+ }
+ }
+ else {
+ /* (re)start reading and update timeout. Note: we don't recompute the timeout
+ * every time we get here, otherwise it might never expire. We only
+ * update it if it was not yet set. The stream socket handler will already
+ * have updated it if there has been a completed I/O.
+ */
+ si->flags &= ~SI_FL_WAIT_ROOM;
+ if (!(ic->flags & (CF_READ_NOEXP|CF_DONT_READ)) && !tick_isset(ic->rex))
+ ic->rex = tick_add_ifset(now_ms, ic->rto);
+ }
+ }
+
+ if (!(oc->flags & CF_SHUTW)) {
+ /* Write not closed, update FD status and timeout for writes */
+ if (channel_is_empty(oc)) {
+ /* stop writing */
+ if (!(si->flags & SI_FL_WAIT_DATA)) {
+ if ((oc->flags & CF_SHUTW_NOW) == 0)
+ si->flags |= SI_FL_WAIT_DATA;
+ oc->wex = TICK_ETERNITY;
+ }
+ }
+ else {
+ /* (re)start writing and update timeout. Note: we don't recompute the timeout
+ * every time we get here, otherwise it might never expire. We only
+ * update it if it was not yet set. The stream socket handler will already
+ * have updated it if there has been a completed I/O.
+ */
+ si->flags &= ~SI_FL_WAIT_DATA;
+ if (!tick_isset(oc->wex)) {
+ oc->wex = tick_add_ifset(now_ms, oc->wto);
+ if (tick_isset(ic->rex) && !(si->flags & SI_FL_INDEP_STR)) {
+ /* Note: depending on the protocol, we don't know if we're waiting
+ * for incoming data or not. So in order to prevent the socket from
+ * expiring read timeouts during writes, we refresh the read timeout,
+ * except if it was already infinite or if we have explicitly setup
+ * independent streams.
+ */
+ ic->rex = tick_add_ifset(now_ms, ic->rto);
+ }
+ }
+ }
+ }
+}
+
+/* Updates the polling status of a connection outside of the connection handler
+ * based on the channel's flags and the stream interface's flags. It needs to be
+ * called once after the channels' flags have settled down and the stream has
+ * been updated. It is not designed to be called from within the connection
+ * handler itself.
+ */
+void stream_int_update_conn(struct stream_interface *si)
+{
+ struct channel *ic = si_ic(si);
+ struct channel *oc = si_oc(si);
+ struct connection *conn = __objt_conn(si->end);
+
+ if (!(ic->flags & CF_SHUTR)) {
+ /* Read not closed */
+ if ((ic->flags & CF_DONT_READ) || !channel_may_recv(ic))
+ __conn_data_stop_recv(conn);
+ else
+ __conn_data_want_recv(conn);
+ }
+
+ if (!(oc->flags & CF_SHUTW)) {
+ /* Write not closed */
+ if (channel_is_empty(oc))
+ __conn_data_stop_send(conn);
+ else
+ __conn_data_want_send(conn);
+ }
+
+ conn_cond_update_data_polling(conn);
+}
+
+/*
+ * This function performs a shutdown-read on a stream interface attached to
+ * a connection in a connected or init state (it does nothing for other
+ * states). It either shuts the read side or marks itself as closed. The buffer
+ * flags are updated to reflect the new state. If the stream interface has
+ * SI_FL_NOHALF, we also forward the close to the write side. If a control
+ * layer is defined, then it is supposed to be a socket layer and file
+ * descriptors are then shutdown or closed accordingly. The function
+ * automatically disables polling if needed.
+ */
+static void stream_int_shutr_conn(struct stream_interface *si)
+{
+ struct connection *conn = __objt_conn(si->end);
+ struct channel *ic = si_ic(si);
+
+ ic->flags &= ~CF_SHUTR_NOW;
+ if (ic->flags & CF_SHUTR)
+ return;
+ ic->flags |= CF_SHUTR;
+ ic->rex = TICK_ETERNITY;
+ si->flags &= ~SI_FL_WAIT_ROOM;
+
+ if (si->state != SI_ST_EST && si->state != SI_ST_CON)
+ return;
+
+ if (si_oc(si)->flags & CF_SHUTW) {
+ conn_full_close(conn);
+ si->state = SI_ST_DIS;
+ si->exp = TICK_ETERNITY;
+ }
+ else if (si->flags & SI_FL_NOHALF) {
+ /* we want to immediately forward this close to the write side */
+ return stream_int_shutw_conn(si);
+ }
+ else if (conn->ctrl) {
+ /* we want the caller to disable polling on this FD */
+ conn_data_stop_recv(conn);
+ }
+}
+
+/*
+ * This function performs a shutdown-write on a stream interface attached to
+ * a connection in a connected or init state (it does nothing for other
+ * states). It either shuts the write side or marks itself as closed. The
+ * buffer flags are updated to reflect the new state. It does also close
+ * everything if the SI was marked as being in error state. If there is a
+ * data-layer shutdown, it is called.
+ */
+static void stream_int_shutw_conn(struct stream_interface *si)
+{
+ struct connection *conn = __objt_conn(si->end);
+ struct channel *ic = si_ic(si);
+ struct channel *oc = si_oc(si);
+
+ oc->flags &= ~CF_SHUTW_NOW;
+ if (oc->flags & CF_SHUTW)
+ return;
+ oc->flags |= CF_SHUTW;
+ oc->wex = TICK_ETERNITY;
+ si->flags &= ~SI_FL_WAIT_DATA;
+
+ switch (si->state) {
+ case SI_ST_EST:
+ /* we have to shut before closing, otherwise some short messages
+ * may never leave the system, especially when there are remaining
+ * unread data in the socket input buffer, or when nolinger is set.
+ * However, if SI_FL_NOLINGER is explicitly set, we know there is
+ * no risk so we close both sides immediately.
+ */
+ if (si->flags & SI_FL_ERR) {
+ /* quick close, the socket is already shut anyway */
+ }
+ else if (si->flags & SI_FL_NOLINGER) {
+ /* unclean data-layer shutdown */
+ conn_data_shutw_hard(conn);
+ }
+ else {
+ /* clean data-layer shutdown */
+ conn_data_shutw(conn);
+
+ /* If the stream interface is configured to disable half-open
+ * connections, we'll skip the shutdown(), but only if the
+ * read side is already closed. Otherwise we can't support
+ * closed write with pending read (eg: abortonclose while
+ * waiting for the server).
+ */
+ if (!(si->flags & SI_FL_NOHALF) || !(ic->flags & (CF_SHUTR|CF_DONT_READ))) {
+ /* We shutdown transport layer */
+ conn_sock_shutw(conn);
+
+ if (!(ic->flags & (CF_SHUTR|CF_DONT_READ))) {
+ /* OK just a shutw, but we want the caller
+ * to disable polling on this FD if it exists.
+ */
+ conn_cond_update_polling(conn);
+ return;
+ }
+ }
+ }
+
+ /* fall through */
+ case SI_ST_CON:
+ /* we may have to close a pending connection, and mark the
+ * response buffer as shutr
+ */
+ conn_full_close(conn);
+ /* fall through */
+ case SI_ST_CER:
+ case SI_ST_QUE:
+ case SI_ST_TAR:
+ si->state = SI_ST_DIS;
+ /* fall through */
+ default:
+ si->flags &= ~(SI_FL_WAIT_ROOM | SI_FL_NOLINGER);
+ ic->flags &= ~CF_SHUTR_NOW;
+ ic->flags |= CF_SHUTR;
+ ic->rex = TICK_ETERNITY;
+ si->exp = TICK_ETERNITY;
+ }
+}
+
+/* This function is used for inter-stream-interface calls. It is called by the
+ * consumer to inform the producer side that it may be interested in checking
+ * for free space in the buffer. Note that it intentionally does not update
+ * timeouts, so that we can still check them later at wake-up. This function is
+ * dedicated to connection-based stream interfaces.
+ */
+static void stream_int_chk_rcv_conn(struct stream_interface *si)
+{
+ struct channel *ic = si_ic(si);
+ struct connection *conn = __objt_conn(si->end);
+
+ if (unlikely(si->state > SI_ST_EST || (ic->flags & CF_SHUTR)))
+ return;
+
+ conn_refresh_polling_flags(conn);
+
+ if ((ic->flags & CF_DONT_READ) || !channel_may_recv(ic)) {
+ /* stop reading */
+ if (!(ic->flags & CF_DONT_READ)) /* full */
+ si->flags |= SI_FL_WAIT_ROOM;
+ __conn_data_stop_recv(conn);
+ }
+ else {
+ /* (re)start reading */
+ si->flags &= ~SI_FL_WAIT_ROOM;
+ __conn_data_want_recv(conn);
+ }
+ conn_cond_update_data_polling(conn);
+}
+
+
+/* This function is used for inter-stream-interface calls. It is called by the
+ * producer to inform the consumer side that it may be interested in checking
+ * for data in the buffer. Note that it intentionally does not update timeouts,
+ * so that we can still check them later at wake-up.
+ */
+static void stream_int_chk_snd_conn(struct stream_interface *si)
+{
+ struct channel *oc = si_oc(si);
+ struct connection *conn = __objt_conn(si->end);
+
+ if (unlikely(si->state > SI_ST_EST || (oc->flags & CF_SHUTW)))
+ return;
+
+ if (unlikely(channel_is_empty(oc))) /* called with nothing to send ! */
+ return;
+
+ if (!oc->pipe && /* spliced data wants to be forwarded ASAP */
+ !(si->flags & SI_FL_WAIT_DATA)) /* not waiting for data */
+ return;
+
+ if (conn->flags & (CO_FL_DATA_WR_ENA|CO_FL_CURR_WR_ENA)) {
+ /* already subscribed to write notifications, will be called
+ * anyway, so let's avoid calling it especially if the reader
+ * is not ready.
+ */
+ return;
+ }
+
+ /* Before calling the data-level operations, we have to prepare
+ * the polling flags to ensure we properly detect changes.
+ */
+ conn_refresh_polling_flags(conn);
+ __conn_data_want_send(conn);
+
+ if (!(conn->flags & (CO_FL_HANDSHAKE|CO_FL_WAIT_L4_CONN|CO_FL_WAIT_L6_CONN))) {
+ si_conn_send(conn);
+ if (conn->flags & CO_FL_ERROR) {
+ /* Write error on the file descriptor */
+ __conn_data_stop_both(conn);
+ si->flags |= SI_FL_ERR;
+ goto out_wakeup;
+ }
+ }
+
+ /* OK, so now we know that some data might have been sent, and that we may
+ * have to poll first. We have to do that too if the buffer is not empty.
+ */
+ if (channel_is_empty(oc)) {
+ /* the connection is established but we can't write. Either the
+ * buffer is empty, or we just refrain from sending because the
+ * ->o limit was reached. Maybe we just wrote the last
+ * chunk and need to close.
+ */
+ __conn_data_stop_send(conn);
+ if (((oc->flags & (CF_SHUTW|CF_AUTO_CLOSE|CF_SHUTW_NOW)) ==
+ (CF_AUTO_CLOSE|CF_SHUTW_NOW)) &&
+ (si->state == SI_ST_EST)) {
+ si_shutw(si);
+ goto out_wakeup;
+ }
+
+ if ((oc->flags & (CF_SHUTW|CF_SHUTW_NOW)) == 0)
+ si->flags |= SI_FL_WAIT_DATA;
+ oc->wex = TICK_ETERNITY;
+ }
+ else {
+ /* Otherwise there are remaining data to be sent in the buffer,
+ * which means we have to poll before doing so.
+ */
+ __conn_data_want_send(conn);
+ si->flags &= ~SI_FL_WAIT_DATA;
+ if (!tick_isset(oc->wex))
+ oc->wex = tick_add_ifset(now_ms, oc->wto);
+ }
+
+ if (likely(oc->flags & CF_WRITE_ACTIVITY)) {
+ struct channel *ic = si_ic(si);
+
+ /* update timeout if we have written something */
+ if ((oc->flags & (CF_SHUTW|CF_WRITE_PARTIAL)) == CF_WRITE_PARTIAL &&
+ !channel_is_empty(oc))
+ oc->wex = tick_add_ifset(now_ms, oc->wto);
+
+ if (tick_isset(ic->rex) && !(si->flags & SI_FL_INDEP_STR)) {
+ /* Note: to prevent the client from expiring read timeouts
+ * during writes, we refresh it. We only do this if the
+ * interface is not configured for "independent streams",
+ * because for some applications it's better not to do this,
+ * for instance when continuously exchanging small amounts
+ * of data which can fill the socket buffers long before a
+ * write timeout is detected.
+ */
+ ic->rex = tick_add_ifset(now_ms, ic->rto);
+ }
+ }
+
+ /* in case of special condition (error, shutdown, end of write...), we
+ * have to notify the task.
+ */
+ if (likely((oc->flags & (CF_WRITE_NULL|CF_WRITE_ERROR|CF_SHUTW)) ||
+ ((oc->flags & CF_WAKE_WRITE) &&
+ ((channel_is_empty(oc) && !oc->to_forward) ||
+ si->state != SI_ST_EST)))) {
+ out_wakeup:
+ if (!(si->flags & SI_FL_DONT_WAKE))
+ task_wakeup(si_task(si), TASK_WOKEN_IO);
+ }
+
+ /* commit possible polling changes */
+ conn_cond_update_polling(conn);
+}
+
+/*
+ * This is the callback which is called by the connection layer to receive data
+ * into the buffer from the connection. It iterates over the transport layer's
+ * rcv_buf function.
+ */
+static void si_conn_recv_cb(struct connection *conn)
+{
+ struct stream_interface *si = conn->owner;
+ struct channel *ic = si_ic(si);
+ int ret, max, cur_read;
+ int read_poll = MAX_READ_POLL_LOOPS;
+
+ /* stop immediately on errors. Note that we DON'T want to stop on
+ * POLL_ERR, as the poller might report a write error while there
+ * are still data available in the recv buffer. This typically
+ * happens when we send too large a request to a backend server
+ * which rejects it before reading it all.
+ */
+ if (conn->flags & CO_FL_ERROR)
+ return;
+
+ /* stop here if we reached the end of data */
+ if (conn_data_read0_pending(conn))
+ goto out_shutdown_r;
+
+ /* maybe we were called immediately after an asynchronous shutr */
+ if (ic->flags & CF_SHUTR)
+ return;
+
+ cur_read = 0;
+
+ if ((ic->flags & (CF_STREAMER | CF_STREAMER_FAST)) && !ic->buf->o &&
+ global.tune.idle_timer &&
+ (unsigned short)(now_ms - ic->last_read) >= global.tune.idle_timer) {
+ /* The buffer was empty and nothing was transferred for more
+ * than one second. This was caused by a pause and not by
+ * congestion. Reset any streaming mode to reduce latency.
+ */
+ ic->xfer_small = 0;
+ ic->xfer_large = 0;
+ ic->flags &= ~(CF_STREAMER | CF_STREAMER_FAST);
+ }
+
+ /* First, let's see if we may splice data across the channel without
+ * using a buffer.
+ */
+ if (conn->xprt->rcv_pipe &&
+ (ic->pipe || ic->to_forward >= MIN_SPLICE_FORWARD) &&
+ ic->flags & CF_KERN_SPLICING) {
+ if (buffer_not_empty(ic->buf)) {
+ /* We're embarrassed, there are already data pending in
+ * the buffer and we don't want to have them at two
+ * locations at a time. Let's indicate we need some
+ * place and ask the consumer to hurry.
+ */
+ goto abort_splice;
+ }
+
+ if (unlikely(ic->pipe == NULL)) {
+ if (pipes_used >= global.maxpipes || !(ic->pipe = get_pipe())) {
+ ic->flags &= ~CF_KERN_SPLICING;
+ goto abort_splice;
+ }
+ }
+
+ ret = conn->xprt->rcv_pipe(conn, ic->pipe, ic->to_forward);
+ if (ret < 0) {
+ /* splice not supported on this end, let's disable it */
+ ic->flags &= ~CF_KERN_SPLICING;
+ goto abort_splice;
+ }
+
+ if (ret > 0) {
+ if (ic->to_forward != CHN_INFINITE_FORWARD)
+ ic->to_forward -= ret;
+ ic->total += ret;
+ cur_read += ret;
+ ic->flags |= CF_READ_PARTIAL;
+ }
+
+ if (conn_data_read0_pending(conn))
+ goto out_shutdown_r;
+
+ if (conn->flags & CO_FL_ERROR)
+ return;
+
+ if (conn->flags & CO_FL_WAIT_ROOM) {
+ /* the pipe is full or we have read enough data that it
+ * could soon be full. Let's stop before needing to poll.
+ */
+ si->flags |= SI_FL_WAIT_ROOM;
+ __conn_data_stop_recv(conn);
+ }
+
+ /* splice not possible (anymore), let's go on with the standard copy */
+ }
+
+ abort_splice:
+ if (ic->pipe && unlikely(!ic->pipe->data)) {
+ put_pipe(ic->pipe);
+ ic->pipe = NULL;
+ }
+
+ /* now we'll need a buffer */
+ if (!stream_alloc_recv_buffer(ic)) {
+ si->flags |= SI_FL_WAIT_ROOM;
+ goto end_recv;
+ }
+
+ /* Important note : if we're called with POLL_IN|POLL_HUP, it means the read polling
+ * was enabled, which implies that the recv buffer was not full. So we have a guarantee
+ * that if such an event is not handled above in splice, it will be handled here by
+ * recv().
+ */
+ while (!(conn->flags & (CO_FL_ERROR | CO_FL_SOCK_RD_SH | CO_FL_DATA_RD_SH | CO_FL_WAIT_ROOM | CO_FL_HANDSHAKE))) {
+ max = channel_recv_max(ic);
+
+ if (!max) {
+ si->flags |= SI_FL_WAIT_ROOM;
+ break;
+ }
+
+ ret = conn->xprt->rcv_buf(conn, ic->buf, max);
+ if (ret <= 0)
+ break;
+
+ cur_read += ret;
+
+ /* if we're allowed to directly forward data, we must update ->o */
+ if (ic->to_forward && !(ic->flags & (CF_SHUTW|CF_SHUTW_NOW))) {
+ unsigned long fwd = ret;
+ if (ic->to_forward != CHN_INFINITE_FORWARD) {
+ if (fwd > ic->to_forward)
+ fwd = ic->to_forward;
+ ic->to_forward -= fwd;
+ }
+ b_adv(ic->buf, fwd);
+ }
+
+ ic->flags |= CF_READ_PARTIAL;
+ ic->total += ret;
+
+ if (!channel_may_recv(ic)) {
+ si->flags |= SI_FL_WAIT_ROOM;
+ break;
+ }
+
+ if ((ic->flags & CF_READ_DONTWAIT) || --read_poll <= 0) {
+ si->flags |= SI_FL_WAIT_ROOM;
+ __conn_data_stop_recv(conn);
+ break;
+ }
+
+ /* if too many bytes were missing from last read, it means that
+ * it's pointless trying to read again because the system does
+ * not have them in buffers.
+ */
+ if (ret < max) {
+ /* if a streamer has read few data, it may be because we
+ * have exhausted system buffers. It's not worth trying
+ * again.
+ */
+ if (ic->flags & CF_STREAMER)
+ break;
+
+ /* if we read a large block smaller than what we requested,
+ * it's almost certain we'll never get anything more.
+ */
+ if (ret >= global.tune.recv_enough)
+ break;
+ }
+ } /* while !flags */
+
+ if (cur_read) {
+ if ((ic->flags & (CF_STREAMER | CF_STREAMER_FAST)) &&
+ (cur_read <= ic->buf->size / 2)) {
+ ic->xfer_large = 0;
+ ic->xfer_small++;
+ if (ic->xfer_small >= 3) {
+ /* we have read less than half of the buffer in
+ * one pass, and this happened at least 3 times.
+ * This is definitely not a streamer.
+ */
+ ic->flags &= ~(CF_STREAMER | CF_STREAMER_FAST);
+ }
+ else if (ic->xfer_small >= 2) {
+ /* if the buffer has been at least half full twice,
+ * we receive faster than we send, so at least it
+ * is not a "fast streamer".
+ */
+ ic->flags &= ~CF_STREAMER_FAST;
+ }
+ }
+ else if (!(ic->flags & CF_STREAMER_FAST) &&
+ (cur_read >= ic->buf->size - global.tune.maxrewrite)) {
+ /* we read a full buffer at once */
+ ic->xfer_small = 0;
+ ic->xfer_large++;
+ if (ic->xfer_large >= 3) {
+ /* we call this buffer a fast streamer if it manages
+ * to be filled in one call 3 consecutive times.
+ */
+ ic->flags |= (CF_STREAMER | CF_STREAMER_FAST);
+ }
+ }
+ else {
+ ic->xfer_small = 0;
+ ic->xfer_large = 0;
+ }
+ ic->last_read = now_ms;
+ }
+
+ end_recv:
+ if (conn->flags & CO_FL_ERROR)
+ return;
+
+ if (conn_data_read0_pending(conn))
+ /* connection closed */
+ goto out_shutdown_r;
+
+ return;
+
+ out_shutdown_r:
+ /* we received a shutdown */
+ ic->flags |= CF_READ_NULL;
+ if (ic->flags & CF_AUTO_CLOSE)
+ channel_shutw_now(ic);
+ stream_sock_read0(si);
+ conn_data_read0(conn);
+ return;
+}
+
+/*
+ * This is the callback which is called by the connection layer to send data
+ * from the buffer to the connection. It iterates over the transport layer's
+ * snd_buf function.
+ */
+static void si_conn_send_cb(struct connection *conn)
+{
+ struct stream_interface *si = conn->owner;
+
+ if (conn->flags & CO_FL_ERROR)
+ return;
+
+ if (conn->flags & CO_FL_HANDSHAKE)
+ /* a handshake was requested */
+ return;
+
+ /* we might have been called just after an asynchronous shutw */
+ if (si_oc(si)->flags & CF_SHUTW)
+ return;
+
+ /* OK there are data waiting to be sent */
+ si_conn_send(conn);
+
+ /* OK all done */
+ return;
+}
+
+/*
+ * This function propagates a null read received on a socket-based connection.
+ * It updates the stream interface. If the stream interface has SI_FL_NOHALF,
+ * the close is also forwarded to the write side as an abort.
+ */
+void stream_sock_read0(struct stream_interface *si)
+{
+ struct connection *conn = __objt_conn(si->end);
+ struct channel *ic = si_ic(si);
+ struct channel *oc = si_oc(si);
+
+ ic->flags &= ~CF_SHUTR_NOW;
+ if (ic->flags & CF_SHUTR)
+ return;
+ ic->flags |= CF_SHUTR;
+ ic->rex = TICK_ETERNITY;
+ si->flags &= ~SI_FL_WAIT_ROOM;
+
+ if (si->state != SI_ST_EST && si->state != SI_ST_CON)
+ return;
+
+ if (oc->flags & CF_SHUTW)
+ goto do_close;
+
+ if (si->flags & SI_FL_NOHALF) {
+ /* we want to immediately forward this close to the write side */
+ /* force flag on ssl to keep stream in cache */
+ conn_data_shutw_hard(conn);
+ goto do_close;
+ }
+
+ /* otherwise that's just a normal read shutdown */
+ __conn_data_stop_recv(conn);
+ return;
+
+ do_close:
+ /* OK we completely close the socket here just as if we went through si_shut[rw]() */
+ conn_full_close(conn);
+
+ ic->flags &= ~CF_SHUTR_NOW;
+ ic->flags |= CF_SHUTR;
+ ic->rex = TICK_ETERNITY;
+
+ oc->flags &= ~CF_SHUTW_NOW;
+ oc->flags |= CF_SHUTW;
+ oc->wex = TICK_ETERNITY;
+
+ si->flags &= ~(SI_FL_WAIT_DATA | SI_FL_WAIT_ROOM);
+
+ si->state = SI_ST_DIS;
+ si->exp = TICK_ETERNITY;
+ return;
+}
+
+/* Callback to be used by applet handlers upon completion. It updates the stream
+ * (which may or may not take this opportunity to try to forward data), then
+ * may disable the applet based on the channels and stream interface's final
+ * states.
+ */
+void si_applet_wake_cb(struct stream_interface *si)
+{
+ struct channel *ic = si_ic(si);
+
+ /* If the applet wants to write and the channel is closed, it's a
+ * broken pipe and it must be reported.
+ */
+ if ((si->flags & SI_FL_WANT_PUT) && (ic->flags & CF_SHUTR))
+ si->flags |= SI_FL_ERR;
+
+ /* update the stream-int, channels, and possibly wake the stream up */
+ stream_int_notify(si);
+
+ /* Get away from the active list if we can't work anymore.
+ * We also do that if the main task has already scheduled, because it
+ * saves a useless wakeup/pause/wakeup cycle causing one useless call
+ * per session on average.
+ */
+ if (task_in_rq(si_task(si)) ||
+ (((si->flags & (SI_FL_WANT_PUT|SI_FL_WAIT_ROOM)) != SI_FL_WANT_PUT) &&
+ ((si->flags & (SI_FL_WANT_GET|SI_FL_WAIT_DATA)) != SI_FL_WANT_GET)))
+ appctx_pause(si_appctx(si));
+}
+
+
+/* Updates the activity status of an applet outside of the applet handler based
+ * on the channel's flags and the stream interface's flags. It needs to be
+ * called once after the channels' flags have settled down and the stream has
+ * been updated. It is not designed to be called from within the applet handler
+ * itself.
+ */
+void stream_int_update_applet(struct stream_interface *si)
+{
+ if (((si->flags & (SI_FL_WANT_PUT|SI_FL_WAIT_ROOM)) == SI_FL_WANT_PUT) ||
+ ((si->flags & (SI_FL_WANT_GET|SI_FL_WAIT_DATA)) == SI_FL_WANT_GET))
+ appctx_wakeup(si_appctx(si));
+ else
+ appctx_pause(si_appctx(si));
+}
+
+/*
+ * This function performs a shutdown-read on a stream interface attached to an
+ * applet in a connected or init state (it does nothing for other states). It
+ * either shuts the read side or marks itself as closed. The buffer flags are
+ * updated to reflect the new state. If the stream interface has SI_FL_NOHALF,
+ * we also forward the close to the write side. The owner task is woken up if
+ * it exists.
+ */
+static void stream_int_shutr_applet(struct stream_interface *si)
+{
+ struct channel *ic = si_ic(si);
+
+ ic->flags &= ~CF_SHUTR_NOW;
+ if (ic->flags & CF_SHUTR)
+ return;
+ ic->flags |= CF_SHUTR;
+ ic->rex = TICK_ETERNITY;
+ si->flags &= ~SI_FL_WAIT_ROOM;
+
+ /* Note: on shutr, we don't call the applet */
+
+ if (si->state != SI_ST_EST && si->state != SI_ST_CON)
+ return;
+
+ if (si_oc(si)->flags & CF_SHUTW) {
+ si_applet_release(si);
+ si->state = SI_ST_DIS;
+ si->exp = TICK_ETERNITY;
+ }
+ else if (si->flags & SI_FL_NOHALF) {
+ /* we want to immediately forward this close to the write side */
+ return stream_int_shutw_applet(si);
+ }
+}
+
+/*
+ * This function performs a shutdown-write on a stream interface attached to an
+ * applet in a connected or init state (it does nothing for other states). It
+ * either shuts the write side or marks itself as closed. The buffer flags are
+ * updated to reflect the new state. It does also close everything if the SI
+ * was marked as being in error state. The owner task is woken up if it exists.
+ */
+static void stream_int_shutw_applet(struct stream_interface *si)
+{
+ struct channel *ic = si_ic(si);
+ struct channel *oc = si_oc(si);
+
+ oc->flags &= ~CF_SHUTW_NOW;
+ if (oc->flags & CF_SHUTW)
+ return;
+ oc->flags |= CF_SHUTW;
+ oc->wex = TICK_ETERNITY;
+ si->flags &= ~SI_FL_WAIT_DATA;
+
+ /* on shutw we always wake the applet up */
+ appctx_wakeup(si_appctx(si));
+
+ switch (si->state) {
+ case SI_ST_EST:
+ /* we have to shut before closing, otherwise some short messages
+ * may never leave the system, especially when there are remaining
+ * unread data in the socket input buffer, or when nolinger is set.
+ * However, if SI_FL_NOLINGER is explicitly set, we know there is
+ * no risk so we close both sides immediately.
+ */
+ if (!(si->flags & (SI_FL_ERR | SI_FL_NOLINGER)) &&
+ !(ic->flags & (CF_SHUTR|CF_DONT_READ)))
+ return;
+
+ /* fall through */
+ case SI_ST_CON:
+ case SI_ST_CER:
+ case SI_ST_QUE:
+ case SI_ST_TAR:
+ /* Note that none of these states may happen with applets */
+ si_applet_release(si);
+ si->state = SI_ST_DIS;
+ default:
+ si->flags &= ~(SI_FL_WAIT_ROOM | SI_FL_NOLINGER);
+ ic->flags &= ~CF_SHUTR_NOW;
+ ic->flags |= CF_SHUTR;
+ ic->rex = TICK_ETERNITY;
+ si->exp = TICK_ETERNITY;
+ }
+}
+
+/* chk_rcv function for applets */
+static void stream_int_chk_rcv_applet(struct stream_interface *si)
+{
+ struct channel *ic = si_ic(si);
+
+ DPRINTF(stderr, "%s: si=%p, si->state=%d ic->flags=%08x oc->flags=%08x\n",
+ __FUNCTION__,
+ si, si->state, ic->flags, si_oc(si)->flags);
+
+ if (unlikely(si->state != SI_ST_EST || (ic->flags & (CF_SHUTR|CF_DONT_READ))))
+ return;
+ /* here we only wake the applet up if it was waiting for some room */
+ if (!(si->flags & SI_FL_WAIT_ROOM))
+ return;
+
+ if (channel_may_recv(ic) && !ic->pipe) {
+ /* (re)start reading */
+ appctx_wakeup(si_appctx(si));
+ }
+}
+
+/* chk_snd function for applets */
+static void stream_int_chk_snd_applet(struct stream_interface *si)
+{
+ struct channel *oc = si_oc(si);
+
+ DPRINTF(stderr, "%s: si=%p, si->state=%d ic->flags=%08x oc->flags=%08x\n",
+ __FUNCTION__,
+ si, si->state, si_ic(si)->flags, oc->flags);
+
+ if (unlikely(si->state != SI_ST_EST || (oc->flags & CF_SHUTW)))
+ return;
+
+ /* we only wake the applet up if it was waiting for some data */
+
+ if (!(si->flags & SI_FL_WAIT_DATA))
+ return;
+
+ if (!tick_isset(oc->wex))
+ oc->wex = tick_add_ifset(now_ms, oc->wto);
+
+ if (!channel_is_empty(oc)) {
+ /* (re)start sending */
+ appctx_wakeup(si_appctx(si));
+ }
+}
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Task management functions.
+ *
+ * Copyright 2000-2009 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <string.h>
+
+#include <common/config.h>
+#include <common/memory.h>
+#include <common/mini-clist.h>
+#include <common/standard.h>
+#include <common/time.h>
+#include <eb32tree.h>
+
+#include <proto/proxy.h>
+#include <proto/stream.h>
+#include <proto/task.h>
+
+struct pool_head *pool2_task;
+
+unsigned int nb_tasks = 0;
+unsigned int run_queue = 0;
+unsigned int run_queue_cur = 0; /* copy of the run queue size */
+unsigned int nb_tasks_cur = 0; /* copy of the tasks count */
+unsigned int niced_tasks = 0; /* number of niced tasks in the run queue */
+struct eb32_node *last_timer = NULL; /* optimization: last queued timer */
+struct eb32_node *rq_next = NULL; /* optimization: next task except if delete/insert */
+
+static struct eb_root timers; /* sorted timers tree */
+static struct eb_root rqueue; /* tree constituting the run queue */
+static unsigned int rqueue_ticks; /* insertion count */
+
+/* Puts the task <t> in the run queue at a position depending on t->nice. <t>
+ * is returned. The nice value assigns boosts in 32ths of the run queue size.
+ * A nice value of -1024 sets the task to -run_queue*32, while a nice value of
+ * 1024 sets the task to run_queue*32. The state flags are cleared, so the
+ * caller will have to set its flags after this call.
+ * The task must not already be in the run queue. If unsure, use the safer
+ * task_wakeup() function.
+ */
+struct task *__task_wakeup(struct task *t)
+{
+ run_queue++;
+ t->rq.key = ++rqueue_ticks;
+
+ if (likely(t->nice)) {
+ int offset;
+
+ niced_tasks++;
+ if (likely(t->nice > 0))
+ offset = (unsigned)((run_queue * (unsigned int)t->nice) / 32U);
+ else
+ offset = -(unsigned)((run_queue * (unsigned int)-t->nice) / 32U);
+ t->rq.key += offset;
+ }
+
+ /* clear state flags at the same time */
+ t->state &= ~TASK_WOKEN_ANY;
+
+ eb32_insert(&rqueue, &t->rq);
+ rq_next = NULL;
+ return t;
+}
+
+/*
+ * __task_queue()
+ *
+ * Inserts a task into the wait queue at the position given by its expiration
+ * date. It does not matter if the task was already in the wait queue or not,
+ * as it will be unlinked. The task must not have an infinite expiration timer.
+ * Last, tasks must not be queued further than the end of the tree, which is
+ * between <now_ms> and <now_ms> + 2^31 ms (now+24days in 32bit).
+ *
+ * This function should not be used directly, it is meant to be called by the
+ * inline version of task_queue() which performs a few cheap preliminary tests
+ * before deciding to call __task_queue().
+ */
+void __task_queue(struct task *task)
+{
+ if (likely(task_in_wq(task)))
+ __task_unlink_wq(task);
+
+ /* the task is not in the queue now */
+ task->wq.key = task->expire;
+#ifdef DEBUG_CHECK_INVALID_EXPIRATION_DATES
+ if (tick_is_lt(task->wq.key, now_ms))
+ /* we're queuing too far away or in the past (most likely) */
+ return;
+#endif
+
+ if (likely(last_timer &&
+ last_timer->node.bit < 0 &&
+ last_timer->key == task->wq.key &&
+ last_timer->node.node_p)) {
+ /* Most often, last queued timer has the same expiration date, so
+ * if it's not queued at the root, let's queue a dup directly there.
+ * Note that we can only use dups at the dup tree's root (most
+ * negative bit).
+ */
+ eb_insert_dup(&last_timer->node, &task->wq.node);
+ if (task->wq.node.bit < last_timer->node.bit)
+ last_timer = &task->wq;
+ return;
+ }
+ eb32_insert(&timers, &task->wq);
+
+ /* Make sure we don't assign the last_timer to a node-less entry */
+ if (task->wq.node.node_p && (!last_timer || (task->wq.node.bit < last_timer->node.bit)))
+ last_timer = &task->wq;
+ return;
+}
+
+/*
+ * Extracts all expired timers from the timer queue and wakes up all
+ * associated tasks. Returns the date of the next event (or eternity).
+ */
+int wake_expired_tasks()
+{
+ struct task *task;
+ struct eb32_node *eb;
+
+ eb = eb32_lookup_ge(&timers, now_ms - TIMER_LOOK_BACK);
+ while (1) {
+ if (unlikely(!eb)) {
+ /* we might have reached the end of the tree, typically because
+ * <now_ms> is in the first half and we're first scanning the last
+ * half. Let's loop back to the beginning of the tree now.
+ */
+ eb = eb32_first(&timers);
+ if (likely(!eb))
+ break;
+ }
+
+ if (likely(tick_is_lt(now_ms, eb->key))) {
+ /* timer not expired yet, revisit it later */
+ return eb->key;
+ }
+
+ /* timer looks expired, detach it from the queue */
+ task = eb32_entry(eb, struct task, wq);
+ eb = eb32_next(eb);
+ __task_unlink_wq(task);
+
+ /* It is possible that this task was left at an earlier place in the
+ * tree because a recent call to task_queue() has not moved it. This
+ * happens when the new expiration date is later than the old one.
+ * Since it is very unlikely that we reach a timeout anyway, it's a
+ * lot cheaper to proceed like this because we almost never update
+ * the tree. We may also find disabled expiration dates there. Since
+ * we have detached the task from the tree, we simply call task_queue
+ * to take care of this. Note that we might occasionally requeue it at
+ * the same place, before <eb>, so we have to check if this happens,
+ * and adjust <eb>, otherwise we may skip it which is not what we want.
+ * We may also not requeue the task (and not point eb at it) if its
+ * expiration time is not set.
+ */
+ if (!tick_is_expired(task->expire, now_ms)) {
+ if (!tick_isset(task->expire))
+ continue;
+ __task_queue(task);
+ if (!eb || eb->key > task->wq.key)
+ eb = &task->wq;
+ continue;
+ }
+ task_wakeup(task, TASK_WOKEN_TIMER);
+ }
+
+ /* No task is expired */
+ return TICK_ETERNITY;
+}
+
+/* The run queue is chronologically sorted in a tree. An insertion counter is
+ * used to assign a position to each task. This counter may be combined with
+ * other variables (e.g. the nice value) to set the final position in the
+ * tree. The counter may wrap without a problem, of course. We then limit the
+ * number of tasks processed at once to 1/4 of the number of tasks in the
+ * queue, and to 200 max in any case, so that general latency remains low and
+ * so that task positions have a chance to be considered.
+ */
+void process_runnable_tasks()
+{
+ struct task *t;
+ unsigned int max_processed;
+
+ run_queue_cur = run_queue; /* keep a copy for reporting */
+ nb_tasks_cur = nb_tasks;
+ max_processed = run_queue;
+
+ if (!run_queue)
+ return;
+
+ if (max_processed > 200)
+ max_processed = 200;
+
+ if (likely(niced_tasks))
+ max_processed = (max_processed + 3) / 4;
+
+ while (max_processed--) {
+ /* Note: this loop is one of the fastest code paths in
+ * the whole program. It should not be re-arranged
+ * without a good reason.
+ */
+ if (unlikely(!rq_next)) {
+ rq_next = eb32_lookup_ge(&rqueue, rqueue_ticks - TIMER_LOOK_BACK);
+ if (!rq_next) {
+ /* we might have reached the end of the tree, typically because
+ * <rqueue_ticks> is in the first half and we're first scanning
+ * the last half. Let's loop back to the beginning of the tree now.
+ */
+ rq_next = eb32_first(&rqueue);
+ if (!rq_next)
+ break;
+ }
+ }
+
+ /* detach the task from the queue after updating the pointer to
+ * the next entry.
+ */
+ t = eb32_entry(rq_next, struct task, rq);
+ rq_next = eb32_next(rq_next);
+ __task_unlink_rq(t);
+
+ t->state |= TASK_RUNNING;
+ /* This is an optimisation to help the processor's branch
+ * predictor take this most common call.
+ */
+ t->calls++;
+ if (likely(t->process == process_stream))
+ t = process_stream(t);
+ else
+ t = t->process(t);
+
+ if (likely(t != NULL)) {
+ t->state &= ~TASK_RUNNING;
+ if (t->expire)
+ task_queue(t);
+ }
+ }
+}
+
+/* Performs minimal initializations; returns 0 in case of error, 1 if OK. */
+int init_task()
+{
+ memset(&timers, 0, sizeof(timers));
+ memset(&rqueue, 0, sizeof(rqueue));
+ pool2_task = create_pool("task", sizeof(struct task), MEM_F_SHARED);
+ return pool2_task != NULL;
+}
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Time calculation functions.
+ *
+ * Copyright 2000-2011 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <sys/time.h>
+
+#include <common/config.h>
+#include <common/standard.h>
+#include <common/time.h>
+
+unsigned int curr_sec_ms; /* millisecond of current second (0..999) */
+unsigned int ms_left_scaled; /* milliseconds left for current second (0..2^32-1) */
+unsigned int now_ms; /* internal date in milliseconds (may wrap) */
+unsigned int samp_time; /* total elapsed time over current sample */
+unsigned int idle_time; /* total idle time over current sample */
+unsigned int idle_pct; /* idle to total ratio over last sample (percent) */
+struct timeval now; /* internal date is a monotonic function of real clock */
+struct timeval date; /* the real current date */
+struct timeval start_date; /* the process's start date */
+struct timeval before_poll; /* system date before calling poll() */
+struct timeval after_poll; /* system date after leaving poll() */
+
+/*
+ * adds <ms> ms to <from>, sets the result in <tv> and returns a pointer to <tv>
+ */
+REGPRM3 struct timeval *_tv_ms_add(struct timeval *tv, const struct timeval *from, int ms)
+{
+ tv->tv_usec = from->tv_usec + (ms % 1000) * 1000;
+ tv->tv_sec = from->tv_sec + (ms / 1000);
+ while (tv->tv_usec >= 1000000) {
+ tv->tv_usec -= 1000000;
+ tv->tv_sec++;
+ }
+ return tv;
+}
+
+/*
+ * compares <tv1> and <tv2> modulo 1ms: returns 0 if equal, -1 if tv1 < tv2, 1 if tv1 > tv2
+ * Must not be used when either argument is eternity. Use tv_ms_cmp2() for that.
+ */
+REGPRM2 int _tv_ms_cmp(const struct timeval *tv1, const struct timeval *tv2)
+{
+ return __tv_ms_cmp(tv1, tv2);
+}
+
+/*
+ * compares <tv1> and <tv2> modulo 1 ms: returns 0 if equal, -1 if tv1 < tv2, 1 if tv1 > tv2,
+ * assuming that TV_ETERNITY is greater than everything.
+ */
+REGPRM2 int _tv_ms_cmp2(const struct timeval *tv1, const struct timeval *tv2)
+{
+ return __tv_ms_cmp2(tv1, tv2);
+}
+
+/*
+ * compares <tv1> and <tv2> modulo 1 ms: returns 1 if tv1 <= tv2, 0 if tv1 > tv2,
+ * assuming that TV_ETERNITY is greater than everything. Returns 0 if tv1 is
+ * TV_ETERNITY, and always assumes that tv2 != TV_ETERNITY. Designed to replace
+ * occurrences of (tv_ms_cmp2(tv,now) <= 0).
+ */
+REGPRM2 int _tv_ms_le2(const struct timeval *tv1, const struct timeval *tv2)
+{
+ return __tv_ms_le2(tv1, tv2);
+}
+
+/*
+ * returns the remaining time between tv1=now and event=tv2
+ * if tv2 has already passed, 0 is returned.
+ * Must not be used when either argument is eternity.
+ */
+REGPRM2 unsigned long _tv_ms_remain(const struct timeval *tv1, const struct timeval *tv2)
+{
+ return __tv_ms_remain(tv1, tv2);
+}
+
+/*
+ * returns the remaining time between tv1=now and event=tv2
+ * if tv2 has already passed, 0 is returned.
+ * Returns TIME_ETERNITY if tv2 is eternity.
+ */
+REGPRM2 unsigned long _tv_ms_remain2(const struct timeval *tv1, const struct timeval *tv2)
+{
+ if (tv_iseternity(tv2))
+ return TIME_ETERNITY;
+
+ return __tv_ms_remain(tv1, tv2);
+}
+
+/*
+ * Returns the time in ms elapsed between tv1 and tv2, assuming that tv1<=tv2.
+ * Must not be used when either argument is eternity.
+ */
+REGPRM2 unsigned long _tv_ms_elapsed(const struct timeval *tv1, const struct timeval *tv2)
+{
+ return __tv_ms_elapsed(tv1, tv2);
+}
+
+/*
+ * adds <inc> to <from>, sets the result in <tv> and returns a pointer to <tv>
+ */
+REGPRM3 struct timeval *_tv_add(struct timeval *tv, const struct timeval *from, const struct timeval *inc)
+{
+ return __tv_add(tv, from, inc);
+}
+
+/*
+ * If <inc> is set, then add it to <from> and set the result to <tv>, then
+ * return 1, otherwise return 0. It is meant to be used in if conditions.
+ */
+REGPRM3 int _tv_add_ifset(struct timeval *tv, const struct timeval *from, const struct timeval *inc)
+{
+ return __tv_add_ifset(tv, from, inc);
+}
+
+/*
+ * Computes the remaining time between tv1=now and event=tv2. If tv2 has already passed,
+ * 0 is returned. The result is stored into tv.
+ */
+REGPRM3 struct timeval *_tv_remain(const struct timeval *tv1, const struct timeval *tv2, struct timeval *tv)
+{
+ return __tv_remain(tv1, tv2, tv);
+}
+
+/*
+ * Computes the remaining time between tv1=now and event=tv2. If tv2 has already passed,
+ * 0 is returned. The result is stored into tv. Returns ETERNITY if tv2 is
+ * eternity.
+ */
+REGPRM3 struct timeval *_tv_remain2(const struct timeval *tv1, const struct timeval *tv2, struct timeval *tv)
+{
+ return __tv_remain2(tv1, tv2, tv);
+}
+
+/* tv_isle: compares <tv1> and <tv2> : returns 1 if tv1 <= tv2, otherwise 0 */
+REGPRM2 int _tv_isle(const struct timeval *tv1, const struct timeval *tv2)
+{
+ return __tv_isle(tv1, tv2);
+}
+
+/* tv_isgt: compares <tv1> and <tv2> : returns 1 if tv1 > tv2, otherwise 0 */
+REGPRM2 int _tv_isgt(const struct timeval *tv1, const struct timeval *tv2)
+{
+ return __tv_isgt(tv1, tv2);
+}
+
+/* tv_update_date: sets <date> to system time, and sets <now> to something as
+ * close as possible to real time, following a monotonic function. The main
+ * principle consists in detecting backwards and forwards time jumps and adjust
+ * an offset to correct them. This function should be called once after each
+ * poll, and never farther apart than MAX_DELAY_MS*2. The poll's timeout should
+ * be passed in <max_wait>, and the return value in <interrupted> (a non-zero
+ * value means that we have not expired the timeout). Calling it with (-1,*)
+ * sets both <date> and <now> to current date, and calling it with (0,1) simply
+ * updates the values.
+ */
+REGPRM2 void tv_update_date(int max_wait, int interrupted)
+{
+ static struct timeval tv_offset; /* warning: signed offset! */
+ struct timeval adjusted, deadline;
+
+ gettimeofday(&date, NULL);
+ if (unlikely(max_wait < 0)) {
+ tv_zero(&tv_offset);
+ adjusted = date;
+ after_poll = date;
+ samp_time = idle_time = 0;
+ idle_pct = 100;
+ goto to_ms;
+ }
+ __tv_add(&adjusted, &date, &tv_offset);
+ if (unlikely(__tv_islt(&adjusted, &now))) {
+ goto fixup; /* jump in the past */
+ }
+
+ /* OK we did not jump backwards, let's see if we have jumped too far
+ * forwards. The poll value was in <max_wait>, we accept that plus
+ * MAX_DELAY_MS to cover additional time.
+ */
+ _tv_ms_add(&deadline, &now, max_wait + MAX_DELAY_MS);
+ if (likely(__tv_islt(&adjusted, &deadline)))
+ goto to_ms; /* OK time is within expected range */
+ fixup:
+ /* Large jump. If the poll was interrupted, we consider that the date
+ * has not changed (immediate wake-up), otherwise we add the poll
+ * time-out to the previous date. The new offset is recomputed.
+ */
+ _tv_ms_add(&adjusted, &now, interrupted ? 0 : max_wait);
+
+ tv_offset.tv_sec = adjusted.tv_sec - date.tv_sec;
+ tv_offset.tv_usec = adjusted.tv_usec - date.tv_usec;
+ if (tv_offset.tv_usec < 0) {
+ tv_offset.tv_usec += 1000000;
+ tv_offset.tv_sec--;
+ }
+ to_ms:
+ now = adjusted;
+ curr_sec_ms = now.tv_usec / 1000; /* ms of current second */
+
+ /* For frequency counters, we'll need to know the ratio of the previous
+ * value to add to current value depending on the current millisecond.
+ * The principle is that during the first millisecond, we use 999/1000
+ * of the past value and that during the last millisecond we use 0/1000
+ * of the past value. In summary, we only use the past value during the
+ * first 999 ms of a second, and the last ms is used to complete the
+ * current measure. The value is scaled to (2^32-1) so that a simple
+ * multiply followed by a shift gives us the final value.
+ */
+ ms_left_scaled = (999U - curr_sec_ms) * 4294967U;
+ now_ms = now.tv_sec * 1000 + curr_sec_ms;
+ return;
+}
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+/*
+ * Function call tracing for gcc >= 2.95
+ *
+ * Copyright 2012 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * gcc is able to call a specific function when entering and leaving any
+ * function when compiled with -finstrument-functions. This code must not
+ * be built with this argument. The performance impact is huge, so this
+ * feature should only be used when debugging.
+ *
+ * The entry and exits of all functions will be dumped into a file designated
+ * by the HAPROXY_TRACE environment variable, or by default "trace.out". If the
+ * trace file name is empty or "/dev/null", then traces are disabled. If
+ * opening the trace file fails, then stderr is used. If HAPROXY_TRACE_FAST is
+ * used, then the time is taken from the global <now> variable. Last, if
+ * HAPROXY_TRACE_TSC is used, then the machine's TSC is used instead of the
+ * real time (almost twice as fast).
+ *
+ * The output format is :
+ *
+ * <sec.usec> <level> <caller_ptr> <dir> <callee_ptr>
+ * or :
+ * <tsc> <level> <caller_ptr> <dir> <callee_ptr>
+ *
+ * where <dir> is '>' when entering a function and '<' when leaving.
+ *
+ * The article below is a nice explanation of how this works :
+ * http://balau82.wordpress.com/2010/10/06/trace-and-profile-function-calls-with-gcc/
+ */
+
+#include <sys/time.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <common/compiler.h>
+#include <common/time.h>
+
+static FILE *log;
+static int level;
+static int disabled;
+static int fast_time;
+static int use_tsc;
+static struct timeval trace_now;
+static struct timeval *now_ptr;
+static char line[128]; /* more than enough for a message (9+1+6+1+3+1+18+1+1+18+1+1) */
+
+static int open_trace()
+{
+ const char *output = getenv("HAPROXY_TRACE");
+
+ if (!output)
+ output = "trace.out";
+
+ if (!*output || strcmp(output, "/dev/null") == 0) {
+ disabled = 1;
+ return 0;
+ }
+
+ log = fopen(output, "w");
+ if (!log)
+ log = stderr;
+
+ now_ptr = &now;
+ if (getenv("HAPROXY_TRACE_FAST") != NULL) {
+ fast_time = 1;
+ now_ptr = &trace_now;
+ }
+ if (getenv("HAPROXY_TRACE_TSC") != NULL) {
+ fast_time = 1;
+ use_tsc = 1;
+ }
+ return 1;
+}
+
+/* This function first divides the number by 100M then iteratively multiplies it
+ * by 100 (using adds and shifts). The trick is that dividing by 100M is equivalent
+ * to multiplying by 1/100M, which approximates to 1441151881/2^57. All local
+ * variables fit in registers on x86. This version outputs two digits per round.
+ * <min_pairs> indicates the minimum number of pairs of digits that have to be
+ * emitted, which might be left-padded with zeroes.
+ * It returns the pointer to the ending '\0'.
+ */
+static char *ultoad2(unsigned int x, char *out, int min_pairs)
+{
+ unsigned int q;
+ char *p = out;
+ int pos = 4;
+ unsigned long long y;
+
+ static const unsigned short bcd[100] = {
+ 0x3030, 0x3130, 0x3230, 0x3330, 0x3430, 0x3530, 0x3630, 0x3730, 0x3830, 0x3930,
+ 0x3031, 0x3131, 0x3231, 0x3331, 0x3431, 0x3531, 0x3631, 0x3731, 0x3831, 0x3931,
+ 0x3032, 0x3132, 0x3232, 0x3332, 0x3432, 0x3532, 0x3632, 0x3732, 0x3832, 0x3932,
+ 0x3033, 0x3133, 0x3233, 0x3333, 0x3433, 0x3533, 0x3633, 0x3733, 0x3833, 0x3933,
+ 0x3034, 0x3134, 0x3234, 0x3334, 0x3434, 0x3534, 0x3634, 0x3734, 0x3834, 0x3934,
+ 0x3035, 0x3135, 0x3235, 0x3335, 0x3435, 0x3535, 0x3635, 0x3735, 0x3835, 0x3935,
+ 0x3036, 0x3136, 0x3236, 0x3336, 0x3436, 0x3536, 0x3636, 0x3736, 0x3836, 0x3936,
+ 0x3037, 0x3137, 0x3237, 0x3337, 0x3437, 0x3537, 0x3637, 0x3737, 0x3837, 0x3937,
+ 0x3038, 0x3138, 0x3238, 0x3338, 0x3438, 0x3538, 0x3638, 0x3738, 0x3838, 0x3938,
+ 0x3039, 0x3139, 0x3239, 0x3339, 0x3439, 0x3539, 0x3639, 0x3739, 0x3839, 0x3939 };
+
+ y = x * 1441151881ULL; /* y>>57 will be the integer part of x/100M */
+ while (1) {
+ q = y >> 57;
+ /* Q is composed of the first digit in the lower byte and the second
+ * digit in the higher byte.
+ */
+ if (p != out || q > 9 || pos < min_pairs) {
+#if defined(__i386__) || defined(__x86_64__)
+ /* unaligned accesses are fast on x86 */
+ *(unsigned short *)p = bcd[q];
+ p += 2;
+#else
+ *(p++) = bcd[q];
+ *(p++) = bcd[q] >> 8;
+#endif
+ }
+ else if (q || !pos) {
+ /* only at most one digit */
+ *(p++) = bcd[q] >> 8;
+ }
+ if (--pos < 0)
+ break;
+
+ y &= 0x1FFFFFFFFFFFFFFULL; // remainder
+
+ if (sizeof(long) >= sizeof(long long)) {
+ /* shifting is preferred on 64-bit archs, while mult is faster on 32-bit.
+ * We multiply by 100 by doing *5, *5 and *4, all of which are trivial.
+ */
+ y += (y << 2);
+ y += (y << 2);
+ y <<= 2;
+ }
+ else
+ y *= 100;
+ }
+
+ *p = '\0';
+ return p;
+}
+
+/* Send <h> as hex into <out>. Returns the pointer to the ending '\0'. */
+static char *emit_hex(unsigned long h, char *out)
+{
+ static unsigned char hextab[16] = "0123456789abcdef";
+ int shift = sizeof(h) * 8 - 4;
+ unsigned long idx; /* must hold h >> shift untruncated for 64-bit longs */
+
+ do {
+ idx = (h >> shift);
+ if (idx || !shift)
+ *out++ = hextab[idx & 15];
+ shift -= 4;
+ } while (shift >= 0);
+ *out = '\0';
+ return out;
+}
+
+#if defined(__i386__) || defined(__x86_64__)
+static inline unsigned long long rdtsc()
+{
+ unsigned int a, d;
+ asm volatile("rdtsc" : "=a" (a), "=d" (d));
+ return a + ((unsigned long long)d << 32);
+}
+#else
+static inline unsigned long long rdtsc()
+{
+ struct timeval tv;
+ gettimeofday(&tv, NULL);
+ return tv.tv_sec * 1000000 + tv.tv_usec;
+}
+#endif
+
+static void make_line(void *from, void *to, int level, char dir)
+{
+ char *p = line;
+
+ if (unlikely(!log) && !open_trace())
+ return;
+
+ if (unlikely(!fast_time))
+ gettimeofday(now_ptr, NULL);
+
+#ifdef USE_SLOW_FPRINTF
+ if (!use_tsc)
+ fprintf(log, "%u.%06u %d %p %c %p\n",
+ (unsigned int)now_ptr->tv_sec,
+ (unsigned int)now_ptr->tv_usec,
+ level, from, dir, to);
+ else
+ fprintf(log, "%llx %d %p %c %p\n",
+ rdtsc(), level, from, dir, to);
+ return;
+#endif
+
+ if (unlikely(!use_tsc)) {
+ /* "%u.%06u", tv_sec, tv_usec */
+ p = ultoad2(now_ptr->tv_sec, p, 0);
+ *p++ = '.';
+ p = ultoad2(now_ptr->tv_usec, p, 3);
+ } else {
+ /* "%08x%08x", high, low */
+ unsigned long long t = rdtsc();
+ if (sizeof(long) < sizeof(long long))
+ p = emit_hex((unsigned long)(t >> 32U), p);
+ p = emit_hex((unsigned long)(t), p);
+ }
+
+ /* " %u", level */
+ *p++ = ' ';
+ p = ultoad2(level, p, 0);
+
+ /* " %p", from */
+ *p++ = ' '; *p++ = '0'; *p++ = 'x';
+ p = emit_hex((unsigned long)from, p);
+
+ /* " %c", dir */
+ *p++ = ' '; *p++ = dir;
+
+ /* " %p", to */
+ *p++ = ' '; *p++ = '0'; *p++ = 'x';
+ p = emit_hex((unsigned long)to, p);
+
+ *p++ = '\n';
+
+ fwrite(line, p - line, 1, log);
+}
+
+/* These are the functions GCC calls */
+void __cyg_profile_func_enter(void *to, void *from)
+{
+ if (!disabled)
+ return make_line(from, to, ++level, '>');
+}
+
+void __cyg_profile_func_exit(void *to, void *from)
+{
+ if (!disabled)
+ return make_line(from, to, level--, '<');
+}
--- /dev/null
+/*
+ * URI-based user authentication using the HTTP basic method.
+ *
+ * Copyright 2006-2007 Willy Tarreau <w@1wt.eu>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <stdlib.h>
+#include <string.h>
+
+#include <common/base64.h>
+#include <common/config.h>
+#include <common/uri_auth.h>
+
+#include <proto/log.h>
+
+/*
+ * Initializes a basic uri_auth structure header and returns a pointer to it.
+ * Uses the pointer provided if not NULL and not initialized.
+ */
+struct uri_auth *stats_check_init_uri_auth(struct uri_auth **root)
+{
+ struct uri_auth *u;
+
+ if (!root || !*root) {
+ if ((u = (struct uri_auth *)calloc(1, sizeof (*u))) == NULL)
+ goto out_u;
+
+ LIST_INIT(&u->http_req_rules);
+ LIST_INIT(&u->admin_rules);
+ } else
+ u = *root;
+
+ if (!u->uri_prefix) {
+ u->uri_len = strlen(STATS_DEFAULT_URI);
+ if ((u->uri_prefix = strdup(STATS_DEFAULT_URI)) == NULL)
+ goto out_uri;
+ }
+
+ if (root && !*root)
+ *root = u;
+
+ return u;
+
+ out_uri:
+ if (!root || !*root)
+ free(u);
+ out_u:
+ return NULL;
+}
+
+/*
+ * Returns a default uri_auth with <uri> set as the uri_prefix.
+ * Uses the pointer provided if not NULL and not initialized.
+ */
+struct uri_auth *stats_set_uri(struct uri_auth **root, char *uri)
+{
+ struct uri_auth *u;
+ char *uri_copy;
+ int uri_len;
+
+ uri_len = strlen(uri);
+ if ((uri_copy = strdup(uri)) == NULL)
+ goto out_uri;
+
+ if ((u = stats_check_init_uri_auth(root)) == NULL)
+ goto out_u;
+
+ free(u->uri_prefix);
+ u->uri_prefix = uri_copy;
+ u->uri_len = uri_len;
+ return u;
+
+ out_u:
+ free(uri_copy);
+ out_uri:
+ return NULL;
+}
+
+/*
+ * Returns a default uri_auth with <realm> set as the realm.
+ * Uses the pointer provided if not NULL and not initialized.
+ */
+struct uri_auth *stats_set_realm(struct uri_auth **root, char *realm)
+{
+ struct uri_auth *u;
+ char *realm_copy;
+
+ if ((realm_copy = strdup(realm)) == NULL)
+ goto out_realm;
+
+ if ((u = stats_check_init_uri_auth(root)) == NULL)
+ goto out_u;
+
+ free(u->auth_realm);
+ u->auth_realm = realm_copy;
+ return u;
+
+ out_u:
+ free(realm_copy);
+ out_realm:
+ return NULL;
+}
+
+/*
+ * Returns a default uri_auth with ST_SHNODE flag enabled and
+ * <node> set as the name if it is not empty.
+ * Uses the pointer provided if not NULL and not initialized.
+ */
+struct uri_auth *stats_set_node(struct uri_auth **root, char *name)
+{
+ struct uri_auth *u;
+ char *node_copy = NULL;
+
+ if (name && *name) {
+ node_copy = strdup(name);
+ if (node_copy == NULL)
+ goto out_realm;
+ }
+
+ if ((u = stats_check_init_uri_auth(root)) == NULL)
+ goto out_u;
+
+ if (!stats_set_flag(root, ST_SHNODE))
+ goto out_u;
+
+ if (node_copy) {
+ free(u->node);
+ u->node = node_copy;
+ }
+
+ return u;
+
+ out_u:
+ free(node_copy);
+ out_realm:
+ return NULL;
+}
+
+/*
+ * Returns a default uri_auth with ST_SHDESC flag enabled and
+ * <description> set as the desc if it is not empty.
+ * Uses the pointer provided if not NULL and not initialized.
+ */
+struct uri_auth *stats_set_desc(struct uri_auth **root, char *desc)
+{
+ struct uri_auth *u;
+ char *desc_copy = NULL;
+
+ if (desc && *desc) {
+ desc_copy = strdup(desc);
+ if (desc_copy == NULL)
+ goto out_realm;
+ }
+
+ if ((u = stats_check_init_uri_auth(root)) == NULL)
+ goto out_u;
+
+ if (!stats_set_flag(root, ST_SHDESC))
+ goto out_u;
+
+ if (desc_copy) {
+ free(u->desc);
+ u->desc = desc_copy;
+ }
+
+ return u;
+
+ out_u:
+ free(desc_copy);
+ out_realm:
+ return NULL;
+}
+
+/*
+ * Returns a default uri_auth with the <refresh> refresh interval.
+ * Uses the pointer provided if not NULL and not initialized.
+ */
+struct uri_auth *stats_set_refresh(struct uri_auth **root, int interval)
+{
+ struct uri_auth *u;
+
+ if ((u = stats_check_init_uri_auth(root)) != NULL)
+ u->refresh = interval;
+ return u;
+}
+
+/*
+ * Returns a default uri_auth with the <flag> set.
+ * Uses the pointer provided if not NULL and not initialized.
+ */
+struct uri_auth *stats_set_flag(struct uri_auth **root, int flag)
+{
+ struct uri_auth *u;
+
+ if ((u = stats_check_init_uri_auth(root)) != NULL)
+ u->flags |= flag;
+ return u;
+}
+
+/*
+ * Returns a default uri_auth with a <user:passwd> entry added to the list of
+ * authorized users. If a matching entry is found, no update will be performed.
+ * Uses the pointer provided if not NULL and not initialized.
+ */
+struct uri_auth *stats_add_auth(struct uri_auth **root, char *user)
+{
+ struct uri_auth *u;
+ struct auth_users *newuser;
+ char *pass;
+
+ pass = strchr(user, ':');
+ if (pass)
+ *pass++ = '\0';
+ else
+ pass = "";
+
+ if ((u = stats_check_init_uri_auth(root)) == NULL)
+ return NULL;
+
+ if (!u->userlist)
+ u->userlist = (struct userlist *)calloc(1, sizeof(struct userlist));
+
+ if (!u->userlist)
+ return NULL;
+
+ if (!u->userlist->name)
+ u->userlist->name = strdup(".internal-stats-userlist");
+
+ if (!u->userlist->name)
+ return NULL;
+
+ for (newuser = u->userlist->users; newuser; newuser = newuser->next)
+ if (!strcmp(newuser->user, user)) {
+ Warning("uri auth: ignoring duplicated user '%s'.\n",
+ user);
+ return u;
+ }
+
+ newuser = (struct auth_users *)calloc(1, sizeof(struct auth_users));
+ if (!newuser)
+ return NULL;
+
+ newuser->user = strdup(user);
+ if (!newuser->user) {
+ free(newuser);
+ return NULL;
+ }
+
+ newuser->pass = strdup(pass);
+ if (!newuser->pass) {
+ free(newuser->user);
+ free(newuser);
+ return NULL;
+ }
+
+ newuser->flags |= AU_O_INSECURE;
+ newuser->next = u->userlist->users;
+ u->userlist->users = newuser;
+
+ return u;
+}
+
+/*
+ * Returns a default uri_auth with a <scope> entry added to the list of
+ * allowed scopes. If a matching entry is found, no update will be performed.
+ * Uses the pointer provided if not NULL and not initialized.
+ */
+struct uri_auth *stats_add_scope(struct uri_auth **root, char *scope)
+{
+ struct uri_auth *u;
+ char *new_name;
+ struct stat_scope *old_scope, **scope_list;
+
+ if ((u = stats_check_init_uri_auth(root)) == NULL)
+ goto out;
+
+ scope_list = &u->scope;
+ while ((old_scope = *scope_list)) {
+ if (!strcmp(old_scope->px_id, scope))
+ break;
+ scope_list = &old_scope->next;
+ }
+
+ if (!old_scope) {
+ if ((new_name = strdup(scope)) == NULL)
+ goto out_u;
+
+ if ((old_scope = (struct stat_scope *)calloc(1, sizeof(*old_scope))) == NULL)
+ goto out_name;
+
+ old_scope->px_id = new_name;
+ old_scope->px_len = strlen(new_name);
+ *scope_list = old_scope;
+ }
+ return u;
+
+ out_name:
+ free(new_name);
+ out_u:
+ free(u);
+ out:
+ return NULL;
+}
+
+/*
+ * Local variables:
+ * c-indent-level: 8
+ * c-basic-offset: 8
+ * End:
+ */
--- /dev/null
+#include <ctype.h>
+
+#include <common/cfgparse.h>
+#include <common/mini-clist.h>
+
+#include <types/vars.h>
+
+#include <proto/arg.h>
+#include <proto/proto_http.h>
+#include <proto/proto_tcp.h>
+#include <proto/sample.h>
+#include <proto/stream.h>
+#include <proto/vars.h>
+
+/* This contains a pool of struct var */
+static struct pool_head *var_pool = NULL;
+
+/* This array contains the names of all the HAProxy variables. It makes it
+ * possible to identify two variable names using a single pointer, and avoids
+ * calling strdup() for each variable name used at runtime.
+ */
+static char **var_names = NULL;
+static int var_names_nb = 0;
+
+/* This array of int contains the system limits per context. */
+static unsigned int var_global_limit = 0;
+static unsigned int var_global_size = 0;
+static unsigned int var_sess_limit = 0;
+static unsigned int var_txn_limit = 0;
+static unsigned int var_reqres_limit = 0;
+
+/* This function adds or removes memory size from the accounting. The inner
+ * pointers may be null when setting the outer ones only.
+ */
+static void var_accounting_diff(struct vars *vars, struct vars *per_sess, struct vars *per_strm, struct vars *per_chn, int size)
+{
+ switch (vars->scope) {
+ case SCOPE_REQ:
+ case SCOPE_RES:
+ per_chn->size += size;
+ /* fall through */
+ case SCOPE_TXN:
+ per_strm->size += size;
+ /* fall through */
+ case SCOPE_SESS:
+ per_sess->size += size;
+ var_global_size += size;
+ }
+}
+
+/* This function returns 1 if the <size> is available in the var
+ * pool <vars>, otherwise returns 0. If the space is available,
+ * the size is reserved. The inner pointers may be null when setting
+ * the outer ones only.
+ */
+static int var_accounting_add(struct vars *vars, struct vars *per_sess, struct vars *per_strm, struct vars *per_chn, int size)
+{
+ switch (vars->scope) {
+ case SCOPE_REQ:
+ case SCOPE_RES:
+ if (var_reqres_limit && per_chn->size + size > var_reqres_limit)
+ return 0;
+ /* fall through */
+ case SCOPE_TXN:
+ if (var_txn_limit && per_strm->size + size > var_txn_limit)
+ return 0;
+ /* fall through */
+ case SCOPE_SESS:
+ if (var_sess_limit && per_sess->size + size > var_sess_limit)
+ return 0;
+ if (var_global_limit && var_global_size + size > var_global_limit)
+ return 0;
+ }
+ var_accounting_diff(vars, per_sess, per_strm, per_chn, size);
+ return 1;
+}
+
+/* This function frees all the memory used by all the variables
+ * in the list.
+ */
+void vars_prune(struct vars *vars, struct stream *strm)
+{
+ struct var *var, *tmp;
+ unsigned int size = 0;
+
+ list_for_each_entry_safe(var, tmp, &vars->head, l) {
+ if (var->data.type == SMP_T_STR ||
+ var->data.type == SMP_T_BIN) {
+ free(var->data.u.str.str);
+ size += var->data.u.str.len;
+ }
+ else if (var->data.type == SMP_T_METH) {
+ free(var->data.u.meth.str.str);
+ size += var->data.u.meth.str.len;
+ }
+ LIST_DEL(&var->l);
+ pool_free2(var_pool, var);
+ size += sizeof(struct var);
+ }
+ var_accounting_diff(vars, &strm->sess->vars, &strm->vars_txn, &strm->vars_reqres, -size);
+}
+
+/* This function frees all the memory used by all the session variables in the
+ * list starting at <vars>.
+ */
+void vars_prune_per_sess(struct vars *vars)
+{
+ struct var *var, *tmp;
+ unsigned int size = 0;
+
+ list_for_each_entry_safe(var, tmp, &vars->head, l) {
+ if (var->data.type == SMP_T_STR ||
+ var->data.type == SMP_T_BIN) {
+ free(var->data.u.str.str);
+ size += var->data.u.str.len;
+ }
+ else if (var->data.type == SMP_T_METH) {
+ free(var->data.u.meth.str.str);
+ size += var->data.u.meth.str.len;
+ }
+ LIST_DEL(&var->l);
+ pool_free2(var_pool, var);
+ size += sizeof(struct var);
+ }
+ vars->size -= size;
+ var_global_size -= size;
+}
+
+/* This function initializes a list of variables. */
+void vars_init(struct vars *vars, enum vars_scope scope)
+{
+ LIST_INIT(&vars->head);
+ vars->scope = scope;
+ vars->size = 0;
+}
+
+/* This function declares a new variable name. It returns a pointer
+ * to the string identifying the name, and ensures that the same name
+ * is registered only once.
+ *
+ * This function also checks that the variable name is acceptable.
+ *
+ * The function returns NULL if an error occurs, and <err> is filled.
+ * In this case, HAProxy must be stopped because the structs are
+ * left inconsistent. Otherwise, it returns the pointer to the global
+ * name.
+ */
+static char *register_name(const char *name, int len, enum vars_scope *scope, char **err)
+{
+ int i;
+ const char *tmp;
+
+ /* Check length. */
+ if (len == 0) {
+ memprintf(err, "Empty variable name cannot be accepted");
+ return NULL;
+ }
+
+ /* Check scope. */
+ if (len > 5 && strncmp(name, "sess.", 5) == 0) {
+ name += 5;
+ len -= 5;
+ *scope = SCOPE_SESS;
+ }
+ else if (len > 4 && strncmp(name, "txn.", 4) == 0) {
+ name += 4;
+ len -= 4;
+ *scope = SCOPE_TXN;
+ }
+ else if (len > 4 && strncmp(name, "req.", 4) == 0) {
+ name += 4;
+ len -= 4;
+ *scope = SCOPE_REQ;
+ }
+ else if (len > 4 && strncmp(name, "res.", 4) == 0) {
+ name += 4;
+ len -= 4;
+ *scope = SCOPE_RES;
+ }
+ else {
+ memprintf(err, "invalid variable name '%s'. A variable name must start with its scope. "
+ "The scope can be 'sess', 'txn', 'req' or 'res'", name);
+ return NULL;
+ }
+
+ /* Look for existing variable name. */
+ for (i = 0; i < var_names_nb; i++)
+ if (strncmp(var_names[i], name, len) == 0)
+ return var_names[i];
+
+ /* Store variable name. */
+ var_names_nb++;
+ var_names = realloc(var_names, var_names_nb * sizeof(*var_names));
+ if (!var_names) {
+ memprintf(err, "out of memory error");
+ return NULL;
+ }
+ var_names[var_names_nb - 1] = malloc(len + 1);
+ if (!var_names[var_names_nb - 1]) {
+ memprintf(err, "out of memory error");
+ return NULL;
+ }
+ memcpy(var_names[var_names_nb - 1], name, len);
+ var_names[var_names_nb - 1][len] = '\0';
+
+ /* Check variable name syntax. */
+ tmp = var_names[var_names_nb - 1];
+ while (*tmp) {
+ if (!isalnum((int)(unsigned char)*tmp) && *tmp != '_') {
+ memprintf(err, "invalid syntax at char '%s'", tmp);
+ return NULL;
+ }
+ tmp++;
+ }
+
+ /* Return the result. */
+ return var_names[var_names_nb - 1];
+}
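+
+/* Example (illustrative): registering the name "txn.my_var" sets the
+ * scope to SCOPE_TXN and returns the global name "my_var", while a name
+ * such as "foo" is rejected because it lacks a scope prefix.
+ */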
+
+/* This function returns an existing variable or returns NULL. */
+static inline struct var *var_get(struct vars *vars, const char *name)
+{
+ struct var *var;
+
+ list_for_each_entry(var, &vars->head, l)
+ if (var->name == name)
+ return var;
+ return NULL;
+}
+
+/* Returns 0 on failure, otherwise returns 1. */
+static int smp_fetch_var(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+ const struct var_desc *var_desc = &args[0].data.var;
+ struct var *var;
+ struct vars *vars;
+
+ /* Check the availability of the variable. */
+ switch (var_desc->scope) {
+ case SCOPE_SESS: vars = &smp->strm->sess->vars; break;
+ case SCOPE_TXN: vars = &smp->strm->vars_txn; break;
+ case SCOPE_REQ:
+ case SCOPE_RES:
+ default: vars = &smp->strm->vars_reqres; break;
+ }
+ if (vars->scope != var_desc->scope)
+ return 0;
+ var = var_get(vars, var_desc->name);
+
+ /* check for the variable availability */
+ if (!var)
+ return 0;
+
+ /* Copy sample. */
+ smp->data = var->data;
+ smp->flags |= SMP_F_CONST;
+ return 1;
+}
+
+/* This function searches in <vars> for a variable with the same
+ * pointer value as <name>. If the variable doesn't exist, it is
+ * created. The function stores a copy of <smp> in the variable.
+ * It returns 0 on failure, otherwise it returns 1.
+ */
+static int sample_store(struct vars *vars, const char *name, struct stream *strm, struct sample *smp)
+{
+ struct var *var;
+
+ /* Look for existing variable name. */
+ var = var_get(vars, name);
+
+ if (var) {
+ /* free its used memory. */
+ if (var->data.type == SMP_T_STR ||
+ var->data.type == SMP_T_BIN) {
+ free(var->data.u.str.str);
+ var_accounting_diff(vars, &strm->sess->vars, &strm->vars_txn, &strm->vars_reqres, -var->data.u.str.len);
+ }
+ else if (var->data.type == SMP_T_METH) {
+ free(var->data.u.meth.str.str);
+ var_accounting_diff(vars, &strm->sess->vars, &strm->vars_txn, &strm->vars_reqres, -var->data.u.meth.str.len);
+ }
+ } else {
+
+ /* Check that memory is available. */
+ if (!var_accounting_add(vars, &strm->sess->vars, &strm->vars_txn, &strm->vars_reqres, sizeof(struct var)))
+ return 0;
+
+ /* Create new entry. */
+ var = pool_alloc2(var_pool);
+ if (!var)
+ return 0;
+ LIST_ADDQ(&vars->head, &var->l);
+ var->name = name;
+ }
+
+ /* Set type. */
+ var->data.type = smp->data.type;
+
+ /* Copy data. If the data needs memory, the function can fail. */
+ switch (var->data.type) {
+ case SMP_T_BOOL:
+ case SMP_T_SINT:
+ var->data.u.sint = smp->data.u.sint;
+ break;
+ case SMP_T_IPV4:
+ var->data.u.ipv4 = smp->data.u.ipv4;
+ break;
+ case SMP_T_IPV6:
+ var->data.u.ipv6 = smp->data.u.ipv6;
+ break;
+ case SMP_T_STR:
+ case SMP_T_BIN:
+ if (!var_accounting_add(vars, &strm->sess->vars, &strm->vars_txn, &strm->vars_reqres, smp->data.u.str.len)) {
+ var->data.type = SMP_T_BOOL; /* This type doesn't use additional memory. */
+ return 0;
+ }
+ var->data.u.str.str = malloc(smp->data.u.str.len);
+ if (!var->data.u.str.str) {
+ var_accounting_diff(vars, &strm->sess->vars, &strm->vars_txn, &strm->vars_reqres, -smp->data.u.str.len);
+ var->data.type = SMP_T_BOOL; /* This type doesn't use additional memory. */
+ return 0;
+ }
+ var->data.u.str.len = smp->data.u.str.len;
+ memcpy(var->data.u.str.str, smp->data.u.str.str, var->data.u.str.len);
+ break;
+ case SMP_T_METH:
+ if (!var_accounting_add(vars, &strm->sess->vars, &strm->vars_txn, &strm->vars_reqres, smp->data.u.meth.str.len)) {
+ var->data.type = SMP_T_BOOL; /* This type doesn't use additional memory. */
+ return 0;
+ }
+ var->data.u.meth.str.str = malloc(smp->data.u.meth.str.len);
+ if (!var->data.u.meth.str.str) {
+ var_accounting_diff(vars, &strm->sess->vars, &strm->vars_txn, &strm->vars_reqres, -smp->data.u.meth.str.len);
+ var->data.type = SMP_T_BOOL; /* This type doesn't use additional memory. */
+ return 0;
+ }
+ var->data.u.meth.meth = smp->data.u.meth.meth;
+ var->data.u.meth.str.len = smp->data.u.meth.str.len;
+ var->data.u.meth.str.size = smp->data.u.meth.str.len;
+ memcpy(var->data.u.meth.str.str, smp->data.u.meth.str.str, var->data.u.meth.str.len);
+ break;
+ }
+ return 1;
+}
+
+/* Returns 0 on failure, otherwise returns 1. */
+static inline int sample_store_stream(const char *name, enum vars_scope scope,
+ struct stream *strm, struct sample *smp)
+{
+ struct vars *vars;
+
+ switch (scope) {
+ case SCOPE_SESS: vars = &strm->sess->vars; break;
+ case SCOPE_TXN: vars = &strm->vars_txn; break;
+ case SCOPE_REQ:
+ case SCOPE_RES:
+ default: vars = &strm->vars_reqres; break;
+ }
+ if (vars->scope != scope)
+ return 0;
+ return sample_store(vars, name, strm, smp);
+}
+
+/* Returns 0 on failure, otherwise returns 1. */
+static int smp_conv_store(const struct arg *args, struct sample *smp, void *private)
+{
+ return sample_store_stream(args[0].data.var.name, args[1].data.var.scope, smp->strm, smp);
+}
+
+/* This function checks an argument entry and fills it with a variable
+ * type. The argument must be a string. If the variable lookup fails,
+ * the function returns 0 and fills <err>, otherwise it returns 1.
+ */
+int vars_check_arg(struct arg *arg, char **err)
+{
+ char *name;
+ enum vars_scope scope;
+
+ /* Check arg type. */
+ if (arg->type != ARGT_STR) {
+ memprintf(err, "unexpected argument type");
+ return 0;
+ }
+
+ /* Register new variable name. */
+ name = register_name(arg->data.str.str, arg->data.str.len, &scope, err);
+ if (!name)
+ return 0;
+
+ /* Use the global variable name pointer. */
+ arg->type = ARGT_VAR;
+ arg->data.var.name = name;
+ arg->data.var.scope = scope;
+ return 1;
+}
+
+/* This function stores a sample in a variable.
+ * On error, it fails silently.
+ */
+void vars_set_by_name(const char *name, size_t len, struct stream *strm, struct sample *smp)
+{
+ enum vars_scope scope;
+
+ /* Resolve name and scope. */
+ name = register_name(name, len, &scope, NULL);
+ if (!name)
+ return;
+
+ sample_store_stream(name, scope, strm, smp);
+}
+
+/* This function fills a sample with the
+ * variable's content. Returns 1 if the sample
+ * is filled, otherwise it returns 0.
+ */
+int vars_get_by_name(const char *name, size_t len, struct stream *strm, struct sample *smp)
+{
+ struct vars *vars;
+ struct var *var;
+ enum vars_scope scope;
+
+ /* Resolve name and scope. */
+ name = register_name(name, len, &scope, NULL);
+ if (!name)
+ return 0;
+
+ /* Select the "vars" pool according to the scope. */
+ switch (scope) {
+ case SCOPE_SESS: vars = &strm->sess->vars; break;
+ case SCOPE_TXN: vars = &strm->vars_txn; break;
+ case SCOPE_REQ:
+ case SCOPE_RES:
+ default: vars = &strm->vars_reqres; break;
+ }
+
+ /* Check if the scope is available at this point of processing. */
+ if (vars->scope != scope)
+ return 0;
+
+ /* Get the variable entry. */
+ var = var_get(vars, name);
+ if (!var)
+ return 0;
+
+ /* Copy sample. */
+ smp->data = var->data;
+ smp->flags = SMP_F_CONST;
+ return 1;
+}
+
+/* This function fills a sample with the
+ * content of the variable described by <var_desc>. Returns 1
+ * if the sample is filled, otherwise it returns 0.
+ */
+int vars_get_by_desc(const struct var_desc *var_desc, struct stream *strm, struct sample *smp)
+{
+ struct vars *vars;
+ struct var *var;
+
+ /* Select the "vars" pool according to the scope. */
+ switch (var_desc->scope) {
+ case SCOPE_SESS: vars = &strm->sess->vars; break;
+ case SCOPE_TXN: vars = &strm->vars_txn; break;
+ case SCOPE_REQ:
+ case SCOPE_RES:
+ default: vars = &strm->vars_reqres; break;
+ }
+
+ /* Check if the scope is available at this point of processing. */
+ if (vars->scope != var_desc->scope)
+ return 0;
+
+ /* Get the variable entry. */
+ var = var_get(vars, var_desc->name);
+ if (!var)
+ return 0;
+
+ /* Copy sample. */
+ smp->data = var->data;
+ smp->flags = SMP_F_CONST;
+ return 1;
+}
+
+/* Always returns ACT_RET_CONT even if an error occurs. */
+static enum act_return action_store(struct act_rule *rule, struct proxy *px,
+ struct session *sess, struct stream *s, int flags)
+{
+ struct sample smp;
+ int dir;
+
+ switch (rule->from) {
+ case ACT_F_TCP_REQ_CNT: dir = SMP_OPT_DIR_REQ; break;
+ case ACT_F_TCP_RES_CNT: dir = SMP_OPT_DIR_RES; break;
+ case ACT_F_HTTP_REQ: dir = SMP_OPT_DIR_REQ; break;
+ case ACT_F_HTTP_RES: dir = SMP_OPT_DIR_RES; break;
+ default:
+ send_log(px, LOG_ERR, "Vars: internal error while executing the store action.");
+ if (!(global.mode & MODE_QUIET) || (global.mode & MODE_VERBOSE))
+ Alert("Vars: internal error while executing the store action.\n");
+ return ACT_RET_CONT;
+ }
+
+ /* Process the expression. */
+ memset(&smp, 0, sizeof(smp));
+ if (!sample_process(px, s->sess, s, dir|SMP_OPT_FINAL,
+ rule->arg.vars.expr, &smp))
+ return ACT_RET_CONT;
+
+ /* Store the sample, and ignore errors. */
+ sample_store_stream(rule->arg.vars.name, rule->arg.vars.scope, s, &smp);
+ return ACT_RET_CONT;
+}
+
+/* These two functions check the variable name and replace the
+ * configuration string name with the global string name. It is
+ * the same string, but the global pointer is easier to compare.
+ *
+ * The first function checks a sample-fetch and the second
+ * checks a converter.
+ */
+static int smp_check_var(struct arg *args, char **err)
+{
+ return vars_check_arg(&args[0], err);
+}
+
+static int conv_check_var(struct arg *args, struct sample_conv *conv,
+ const char *file, int line, char **err_msg)
+{
+ return vars_check_arg(&args[0], err_msg);
+}
+
+/* This function is a common parser for using variables. It understands
+ * the format:
+ *
+ * set-var(<variable-name>) <expression>
+ *
+ * It returns ACT_RET_PRS_ERR on failure, and <err> is filled with an error
+ * message. Otherwise, it returns ACT_RET_PRS_OK and the variable <expr>
+ * is filled with the pointer to the expression to execute.
+ */
+static enum act_parse_ret parse_store(const char **args, int *arg, struct proxy *px,
+ struct act_rule *rule, char **err)
+{
+ const char *var_name = args[*arg-1];
+ int var_len;
+ const char *kw_name;
+ int flags;
+
+ var_name += strlen("set-var");
+ if (*var_name != '(') {
+ memprintf(err, "invalid variable '%s'. Expects 'set-var(<var-name>)'", args[*arg-1]);
+ return ACT_RET_PRS_ERR;
+ }
+ var_name++; /* jump the '(' */
+ var_len = strlen(var_name);
+ var_len--; /* remove the ')' */
+ if (var_name[var_len] != ')') {
+ memprintf(err, "invalid variable '%s'. Expects 'set-var(<var-name>)'", args[*arg-1]);
+ return ACT_RET_PRS_ERR;
+ }
+
+ rule->arg.vars.name = register_name(var_name, var_len, &rule->arg.vars.scope, err);
+ if (!rule->arg.vars.name)
+ return ACT_RET_PRS_ERR;
+
+ kw_name = args[*arg-1];
+
+ rule->arg.vars.expr = sample_parse_expr((char **)args, arg, px->conf.args.file,
+ px->conf.args.line, err, &px->conf.args);
+ if (!rule->arg.vars.expr)
+ return ACT_RET_PRS_ERR;
+
+ switch (rule->from) {
+ case ACT_F_TCP_REQ_CNT: flags = SMP_VAL_FE_REQ_CNT; break;
+ case ACT_F_TCP_RES_CNT: flags = SMP_VAL_BE_RES_CNT; break;
+ case ACT_F_HTTP_REQ: flags = SMP_VAL_FE_HRQ_HDR; break;
+ case ACT_F_HTTP_RES: flags = SMP_VAL_BE_HRS_HDR; break;
+ default:
+ memprintf(err,
+ "internal error, unexpected rule->from=%d, please report this bug!",
+ rule->from);
+ return ACT_RET_PRS_ERR;
+ }
+ if (!(rule->arg.vars.expr->fetch->val & flags)) {
+ memprintf(err,
+ "fetch method '%s' extracts information from '%s', none of which is available here",
+ kw_name, sample_src_names(rule->arg.vars.expr->fetch->use));
+ free(rule->arg.vars.expr);
+ return ACT_RET_PRS_ERR;
+ }
+
+ rule->action = ACT_CUSTOM;
+ rule->action_ptr = action_store;
+ return ACT_RET_PRS_OK;
+}
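+
+/* Example configuration usage (variable names are illustrative):
+ *
+ *   http-request set-var(txn.client_dst) dst
+ *   tcp-request content set-var(sess.src_ip) src
+ *
+ * The expression following set-var(<name>) is parsed with
+ * sample_parse_expr() and evaluated by action_store() at run time.
+ */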
+
+static int vars_max_size(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err, unsigned int *limit)
+{
+ char *error;
+
+ *limit = strtol(args[1], &error, 10);
+ if (*error != 0) {
+ memprintf(err, "%s: '%s' is an invalid size", args[0], args[1]);
+ return -1;
+ }
+ return 0;
+}
+
+static int vars_max_size_global(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ return vars_max_size(args, section_type, curpx, defpx, file, line, err, &var_global_limit);
+}
+
+static int vars_max_size_sess(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ return vars_max_size(args, section_type, curpx, defpx, file, line, err, &var_sess_limit);
+}
+
+static int vars_max_size_txn(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ return vars_max_size(args, section_type, curpx, defpx, file, line, err, &var_txn_limit);
+}
+
+static int vars_max_size_reqres(char **args, int section_type, struct proxy *curpx,
+ struct proxy *defpx, const char *file, int line,
+ char **err)
+{
+ return vars_max_size(args, section_type, curpx, defpx, file, line, err, &var_reqres_limit);
+}
+
+static struct sample_fetch_kw_list sample_fetch_keywords = {ILH, {
+
+ { "var", smp_fetch_var, ARG1(1,STR), smp_check_var, SMP_T_STR, SMP_USE_HTTP_ANY },
+ { /* END */ },
+}};
+
+static struct sample_conv_kw_list sample_conv_kws = {ILH, {
+ { "set-var", smp_conv_store, ARG1(1,STR), conv_check_var, SMP_T_ANY, SMP_T_ANY },
+ { /* END */ },
+}};
+
+static struct action_kw_list tcp_req_kws = { { }, {
+ { "set-var", parse_store, 1 },
+ { /* END */ }
+}};
+
+static struct action_kw_list tcp_res_kws = { { }, {
+ { "set-var", parse_store, 1 },
+ { /* END */ }
+}};
+
+static struct action_kw_list http_req_kws = { { }, {
+ { "set-var", parse_store, 1 },
+ { /* END */ }
+}};
+
+static struct action_kw_list http_res_kws = { { }, {
+ { "set-var", parse_store, 1 },
+ { /* END */ }
+}};
+
+static struct cfg_kw_list cfg_kws = {{ },{
+ { CFG_GLOBAL, "tune.vars.global-max-size", vars_max_size_global },
+ { CFG_GLOBAL, "tune.vars.sess-max-size", vars_max_size_sess },
+ { CFG_GLOBAL, "tune.vars.txn-max-size", vars_max_size_txn },
+ { CFG_GLOBAL, "tune.vars.reqres-max-size", vars_max_size_reqres },
+ { /* END */ }
+}};
+
+__attribute__((constructor))
+static void __http_protocol_init(void)
+{
+ var_pool = create_pool("vars", sizeof(struct var), MEM_F_SHARED);
+
+ sample_register_fetches(&sample_fetch_keywords);
+ sample_register_convs(&sample_conv_kws);
+ tcp_req_cont_keywords_register(&tcp_req_kws);
+ tcp_res_cont_keywords_register(&tcp_res_kws);
+ http_req_keywords_register(&http_req_kws);
+ http_res_keywords_register(&http_res_kws);
+ cfg_register_keywords(&cfg_kws);
+}
--- /dev/null
+/*
+xxHash - Fast Hash algorithm
+Copyright (C) 2012-2014, Yann Collet.
+BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+* Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+* Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+You can contact the author at :
+- xxHash source repository : http://code.google.com/p/xxhash/
+- public discussion board : https://groups.google.com/forum/#!forum/lz4c
+*/
+
+
+//**************************************
+// Tuning parameters
+//**************************************
+// Unaligned memory access is automatically enabled for "common" CPUs, such as x86.
+// For other CPUs, the compiler will be more cautious and insert extra code to ensure aligned access is respected.
+// If you know your target CPU supports unaligned memory access, you may want to force this option manually to improve performance.
+// You can also enable this parameter if you know your input data will always be aligned (on boundaries of 4, for U32).
+#if defined(__ARM_FEATURE_UNALIGNED) || defined(__i386) || defined(_M_IX86) || defined(__x86_64__) || defined(_M_X64)
+# define XXH_USE_UNALIGNED_ACCESS 1
+#endif
+
+// XXH_ACCEPT_NULL_INPUT_POINTER :
+// If the input pointer is a null pointer, xxHash default behavior is to trigger a memory access error, since it is a bad pointer.
+// When this option is enabled, xxHash output for null input pointers will be the same as a null-length input.
+// This option has a very small performance cost (only measurable on small inputs).
+// By default, this option is disabled. To enable it, uncomment below define :
+// #define XXH_ACCEPT_NULL_INPUT_POINTER 1
+
+// XXH_FORCE_NATIVE_FORMAT :
+// By default, the xxHash library provides endian-independent hash values, based on little-endian convention.
+// Results are therefore identical for little-endian and big-endian CPUs.
+// This comes at a performance cost for big-endian CPUs, since some swapping is required to emulate the little-endian format.
+// Should endian-independence be of no importance for your application, you may set the #define below to 1.
+// It will improve speed for big-endian CPUs.
+// This option has no impact on little-endian CPUs.
+#define XXH_FORCE_NATIVE_FORMAT 0
+
+//**************************************
+// Compiler Specific Options
+//**************************************
+// Disable some Visual warning messages
+#ifdef _MSC_VER // Visual Studio
+# pragma warning(disable : 4127) // disable: C4127: conditional expression is constant
+#endif
+
+#ifdef _MSC_VER // Visual Studio
+# define FORCE_INLINE static __forceinline
+#else
+# ifdef __GNUC__
+# define FORCE_INLINE static inline __attribute__((always_inline))
+# else
+# define FORCE_INLINE static inline
+# endif
+#endif
+
+//**************************************
+// Includes & Memory related functions
+//**************************************
+#include <import/xxhash.h>
+// Modify the local functions below should you wish to use some other memory routines
+// for malloc(), free()
+#include <stdlib.h>
+static void* XXH_malloc(size_t s) { return malloc(s); }
+static void XXH_free (void* p) { free(p); }
+// for memcpy()
+#include <string.h>
+static void* XXH_memcpy(void* dest, const void* src, size_t size)
+{
+ return memcpy(dest,src,size);
+}
+
+
+//**************************************
+// Basic Types
+//**************************************
+#if defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L // C99
+# include <stdint.h>
+typedef uint8_t BYTE;
+typedef uint16_t U16;
+typedef uint32_t U32;
+typedef int32_t S32;
+typedef uint64_t U64;
+#else
+typedef unsigned char BYTE;
+typedef unsigned short U16;
+typedef unsigned int U32;
+typedef signed int S32;
+typedef unsigned long long U64;
+#endif
+
+#if defined(__GNUC__) && !defined(XXH_USE_UNALIGNED_ACCESS)
+# define _PACKED __attribute__ ((packed))
+#else
+# define _PACKED
+#endif
+
+#if !defined(XXH_USE_UNALIGNED_ACCESS) && !defined(__GNUC__)
+# ifdef __IBMC__
+# pragma pack(1)
+# else
+# pragma pack(push, 1)
+# endif
+#endif
+
+typedef struct _U32_S
+{
+ U32 v;
+} _PACKED U32_S;
+typedef struct _U64_S
+{
+ U64 v;
+} _PACKED U64_S;
+
+#if !defined(XXH_USE_UNALIGNED_ACCESS) && !defined(__GNUC__)
+# pragma pack(pop)
+#endif
+
+#define A32(x) (((U32_S *)(x))->v)
+#define A64(x) (((U64_S *)(x))->v)
+
+
+//***************************************
+// Compiler-specific Functions and Macros
+//***************************************
+#define GCC_VERSION (__GNUC__ * 100 + __GNUC_MINOR__)
+
+// Note : although _rotl exists for minGW (GCC under windows), performance seems poor
+#if defined(_MSC_VER)
+# define XXH_rotl32(x,r) _rotl(x,r)
+# define XXH_rotl64(x,r) _rotl64(x,r)
+#else
+# define XXH_rotl32(x,r) ((x << r) | (x >> (32 - r)))
+# define XXH_rotl64(x,r) ((x << r) | (x >> (64 - r)))
+#endif
+
+#if defined(_MSC_VER) // Visual Studio
+# define XXH_swap32 _byteswap_ulong
+# define XXH_swap64 _byteswap_uint64
+#elif GCC_VERSION >= 403
+# define XXH_swap32 __builtin_bswap32
+# define XXH_swap64 __builtin_bswap64
+#else
+static inline U32 XXH_swap32 (U32 x)
+{
+ return ((x << 24) & 0xff000000 ) |
+ ((x << 8) & 0x00ff0000 ) |
+ ((x >> 8) & 0x0000ff00 ) |
+ ((x >> 24) & 0x000000ff );
+}
+static inline U64 XXH_swap64 (U64 x)
+{
+ return ((x << 56) & 0xff00000000000000ULL) |
+ ((x << 40) & 0x00ff000000000000ULL) |
+ ((x << 24) & 0x0000ff0000000000ULL) |
+ ((x << 8) & 0x000000ff00000000ULL) |
+ ((x >> 8) & 0x00000000ff000000ULL) |
+ ((x >> 24) & 0x0000000000ff0000ULL) |
+ ((x >> 40) & 0x000000000000ff00ULL) |
+ ((x >> 56) & 0x00000000000000ffULL);
+}
+#endif
+
+
+//**************************************
+// Constants
+//**************************************
+#define PRIME32_1 2654435761U
+#define PRIME32_2 2246822519U
+#define PRIME32_3 3266489917U
+#define PRIME32_4 668265263U
+#define PRIME32_5 374761393U
+
+#define PRIME64_1 11400714785074694791ULL
+#define PRIME64_2 14029467366897019727ULL
+#define PRIME64_3 1609587929392839161ULL
+#define PRIME64_4 9650029242287828579ULL
+#define PRIME64_5 2870177450012600261ULL
+
+//**************************************
+// Architecture Macros
+//**************************************
+typedef enum { XXH_bigEndian=0, XXH_littleEndian=1 } XXH_endianess;
+#ifndef XXH_CPU_LITTLE_ENDIAN // It is possible to define XXH_CPU_LITTLE_ENDIAN externally, for example using a compiler switch
+static const int one = 1;
+# define XXH_CPU_LITTLE_ENDIAN (*(char*)(&one))
+#endif
+
+
+//**************************************
+// Macros
+//**************************************
+#define XXH_STATIC_ASSERT(c) { enum { XXH_static_assert = 1/(!!(c)) }; } // use only *after* variable declarations
+
+
+//****************************
+// Memory reads
+//****************************
+typedef enum { XXH_aligned, XXH_unaligned } XXH_alignment;
+
+FORCE_INLINE U32 XXH_readLE32_align(const void* ptr, XXH_endianess endian, XXH_alignment align)
+{
+ if (align==XXH_unaligned)
+ return endian==XXH_littleEndian ? A32(ptr) : XXH_swap32(A32(ptr));
+ else
+ return endian==XXH_littleEndian ? *(U32*)ptr : XXH_swap32(*(U32*)ptr);
+}
+
+FORCE_INLINE U32 XXH_readLE32(const void* ptr, XXH_endianess endian)
+{
+ return XXH_readLE32_align(ptr, endian, XXH_unaligned);
+}
+
+FORCE_INLINE U64 XXH_readLE64_align(const void* ptr, XXH_endianess endian, XXH_alignment align)
+{
+ if (align==XXH_unaligned)
+ return endian==XXH_littleEndian ? A64(ptr) : XXH_swap64(A64(ptr));
+ else
+ return endian==XXH_littleEndian ? *(U64*)ptr : XXH_swap64(*(U64*)ptr);
+}
+
+FORCE_INLINE U64 XXH_readLE64(const void* ptr, XXH_endianess endian)
+{
+ return XXH_readLE64_align(ptr, endian, XXH_unaligned);
+}
+
+
+//****************************
+// Simple Hash Functions
+//****************************
+FORCE_INLINE U32 XXH32_endian_align(const void* input, size_t len, U32 seed, XXH_endianess endian, XXH_alignment align)
+{
+ const BYTE* p = (const BYTE*)input;
+ const BYTE* bEnd = p + len;
+ U32 h32;
+#define XXH_get32bits(p) XXH_readLE32_align(p, endian, align)
+
+#ifdef XXH_ACCEPT_NULL_INPUT_POINTER
+ if (p==NULL)
+ {
+ len=0;
+ bEnd=p=(const BYTE*)(size_t)16;
+ }
+#endif
+
+ if (len>=16)
+ {
+ const BYTE* const limit = bEnd - 16;
+ U32 v1 = seed + PRIME32_1 + PRIME32_2;
+ U32 v2 = seed + PRIME32_2;
+ U32 v3 = seed + 0;
+ U32 v4 = seed - PRIME32_1;
+
+ do
+ {
+ v1 += XXH_get32bits(p) * PRIME32_2;
+ v1 = XXH_rotl32(v1, 13);
+ v1 *= PRIME32_1;
+ p+=4;
+ v2 += XXH_get32bits(p) * PRIME32_2;
+ v2 = XXH_rotl32(v2, 13);
+ v2 *= PRIME32_1;
+ p+=4;
+ v3 += XXH_get32bits(p) * PRIME32_2;
+ v3 = XXH_rotl32(v3, 13);
+ v3 *= PRIME32_1;
+ p+=4;
+ v4 += XXH_get32bits(p) * PRIME32_2;
+ v4 = XXH_rotl32(v4, 13);
+ v4 *= PRIME32_1;
+ p+=4;
+ }
+ while (p<=limit);
+
+ h32 = XXH_rotl32(v1, 1) + XXH_rotl32(v2, 7) + XXH_rotl32(v3, 12) + XXH_rotl32(v4, 18);
+ }
+ else
+ {
+ h32 = seed + PRIME32_5;
+ }
+
+ h32 += (U32) len;
+
+ while (p+4<=bEnd)
+ {
+ h32 += XXH_get32bits(p) * PRIME32_3;
+ h32 = XXH_rotl32(h32, 17) * PRIME32_4 ;
+ p+=4;
+ }
+
+ while (p<bEnd)
+ {
+ h32 += (*p) * PRIME32_5;
+ h32 = XXH_rotl32(h32, 11) * PRIME32_1 ;
+ p++;
+ }
+
+ h32 ^= h32 >> 15;
+ h32 *= PRIME32_2;
+ h32 ^= h32 >> 13;
+ h32 *= PRIME32_3;
+ h32 ^= h32 >> 16;
+
+ return h32;
+}
+
+
+unsigned int XXH32 (const void* input, size_t len, unsigned seed)
+{
+#if 0
+ // Simple version, good for code maintenance, but unfortunately slow for small inputs
+ XXH32_state_t state;
+ XXH32_reset(&state, seed);
+ XXH32_update(&state, input, len);
+ return XXH32_digest(&state);
+#else
+ XXH_endianess endian_detected = (XXH_endianess)XXH_CPU_LITTLE_ENDIAN;
+
+# if !defined(XXH_USE_UNALIGNED_ACCESS)
+ if ((((size_t)input) & 3) == 0) // Input is aligned, let's leverage the speed advantage
+ {
+ if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
+ return XXH32_endian_align(input, len, seed, XXH_littleEndian, XXH_aligned);
+ else
+ return XXH32_endian_align(input, len, seed, XXH_bigEndian, XXH_aligned);
+ }
+# endif
+
+ if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
+ return XXH32_endian_align(input, len, seed, XXH_littleEndian, XXH_unaligned);
+ else
+ return XXH32_endian_align(input, len, seed, XXH_bigEndian, XXH_unaligned);
+#endif
+}
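+
+/* Example (illustrative): XXH32(buf, buflen, 0) hashes <buf> in a single
+ * call; the streaming API (XXH32_reset/XXH32_update/XXH32_digest)
+ * produces the same value incrementally.
+ */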
+
+FORCE_INLINE U64 XXH64_endian_align(const void* input, size_t len, U64 seed, XXH_endianess endian, XXH_alignment align)
+{
+ const BYTE* p = (const BYTE*)input;
+ const BYTE* bEnd = p + len;
+ U64 h64;
+#define XXH_get64bits(p) XXH_readLE64_align(p, endian, align)
+
+#ifdef XXH_ACCEPT_NULL_INPUT_POINTER
+ if (p==NULL)
+ {
+ len=0;
+ bEnd=p=(const BYTE*)(size_t)32;
+ }
+#endif
+
+ if (len>=32)
+ {
+ const BYTE* const limit = bEnd - 32;
+ U64 v1 = seed + PRIME64_1 + PRIME64_2;
+ U64 v2 = seed + PRIME64_2;
+ U64 v3 = seed + 0;
+ U64 v4 = seed - PRIME64_1;
+
+ do
+ {
+ v1 += XXH_get64bits(p) * PRIME64_2;
+ p+=8;
+ v1 = XXH_rotl64(v1, 31);
+ v1 *= PRIME64_1;
+ v2 += XXH_get64bits(p) * PRIME64_2;
+ p+=8;
+ v2 = XXH_rotl64(v2, 31);
+ v2 *= PRIME64_1;
+ v3 += XXH_get64bits(p) * PRIME64_2;
+ p+=8;
+ v3 = XXH_rotl64(v3, 31);
+ v3 *= PRIME64_1;
+ v4 += XXH_get64bits(p) * PRIME64_2;
+ p+=8;
+ v4 = XXH_rotl64(v4, 31);
+ v4 *= PRIME64_1;
+ }
+ while (p<=limit);
+
+ h64 = XXH_rotl64(v1, 1) + XXH_rotl64(v2, 7) + XXH_rotl64(v3, 12) + XXH_rotl64(v4, 18);
+
+ v1 *= PRIME64_2;
+ v1 = XXH_rotl64(v1, 31);
+ v1 *= PRIME64_1;
+ h64 ^= v1;
+ h64 = h64 * PRIME64_1 + PRIME64_4;
+
+ v2 *= PRIME64_2;
+ v2 = XXH_rotl64(v2, 31);
+ v2 *= PRIME64_1;
+ h64 ^= v2;
+ h64 = h64 * PRIME64_1 + PRIME64_4;
+
+ v3 *= PRIME64_2;
+ v3 = XXH_rotl64(v3, 31);
+ v3 *= PRIME64_1;
+ h64 ^= v3;
+ h64 = h64 * PRIME64_1 + PRIME64_4;
+
+ v4 *= PRIME64_2;
+ v4 = XXH_rotl64(v4, 31);
+ v4 *= PRIME64_1;
+ h64 ^= v4;
+ h64 = h64 * PRIME64_1 + PRIME64_4;
+ }
+ else
+ {
+ h64 = seed + PRIME64_5;
+ }
+
+ h64 += (U64) len;
+
+ while (p+8<=bEnd)
+ {
+ U64 k1 = XXH_get64bits(p);
+ k1 *= PRIME64_2;
+ k1 = XXH_rotl64(k1,31);
+ k1 *= PRIME64_1;
+ h64 ^= k1;
+ h64 = XXH_rotl64(h64,27) * PRIME64_1 + PRIME64_4;
+ p+=8;
+ }
+
+ if (p+4<=bEnd)
+ {
+ h64 ^= (U64)(XXH_get32bits(p)) * PRIME64_1;
+ h64 = XXH_rotl64(h64, 23) * PRIME64_2 + PRIME64_3;
+ p+=4;
+ }
+
+ while (p<bEnd)
+ {
+ h64 ^= (*p) * PRIME64_5;
+ h64 = XXH_rotl64(h64, 11) * PRIME64_1;
+ p++;
+ }
+
+ h64 ^= h64 >> 33;
+ h64 *= PRIME64_2;
+ h64 ^= h64 >> 29;
+ h64 *= PRIME64_3;
+ h64 ^= h64 >> 32;
+
+ return h64;
+}
+
+
+unsigned long long XXH64 (const void* input, size_t len, unsigned long long seed)
+{
+#if 0
+ // Simple version, good for code maintenance, but unfortunately slow for small inputs
+ XXH64_state_t state;
+ XXH64_reset(&state, seed);
+ XXH64_update(&state, input, len);
+ return XXH64_digest(&state);
+#else
+ XXH_endianess endian_detected = (XXH_endianess)XXH_CPU_LITTLE_ENDIAN;
+
+# if !defined(XXH_USE_UNALIGNED_ACCESS)
+ if ((((size_t)input) & 7)==0) // Input is aligned, let's leverage the speed advantage
+ {
+ if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
+ return XXH64_endian_align(input, len, seed, XXH_littleEndian, XXH_aligned);
+ else
+ return XXH64_endian_align(input, len, seed, XXH_bigEndian, XXH_aligned);
+ }
+# endif
+
+ if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
+ return XXH64_endian_align(input, len, seed, XXH_littleEndian, XXH_unaligned);
+ else
+ return XXH64_endian_align(input, len, seed, XXH_bigEndian, XXH_unaligned);
+#endif
+}
+
+/****************************************************
+ * Advanced Hash Functions
+****************************************************/
+
+/*** Allocation ***/
+typedef struct
+{
+ U64 total_len;
+ U32 seed;
+ U32 v1;
+ U32 v2;
+ U32 v3;
+ U32 v4;
+ U32 mem32[4]; /* defined as U32 for alignment */
+ U32 memsize;
+} XXH_istate32_t;
+
+typedef struct
+{
+ U64 total_len;
+ U64 seed;
+ U64 v1;
+ U64 v2;
+ U64 v3;
+ U64 v4;
+ U64 mem64[4]; /* defined as U64 for alignment */
+ U32 memsize;
+} XXH_istate64_t;
+
+
+XXH32_state_t* XXH32_createState(void)
+{
+ XXH_STATIC_ASSERT(sizeof(XXH32_state_t) >= sizeof(XXH_istate32_t)); // A compilation error here means XXH32_state_t is not large enough
+ return (XXH32_state_t*)XXH_malloc(sizeof(XXH32_state_t));
+}
+XXH_errorcode XXH32_freeState(XXH32_state_t* statePtr)
+{
+ XXH_free(statePtr);
+ return XXH_OK;
+}
+
+XXH64_state_t* XXH64_createState(void)
+{
+ XXH_STATIC_ASSERT(sizeof(XXH64_state_t) >= sizeof(XXH_istate64_t)); // A compilation error here means XXH64_state_t is not large enough
+ return (XXH64_state_t*)XXH_malloc(sizeof(XXH64_state_t));
+}
+XXH_errorcode XXH64_freeState(XXH64_state_t* statePtr)
+{
+ XXH_free(statePtr);
+ return XXH_OK;
+}
+
+
+/*** Hash feed ***/
+
+XXH_errorcode XXH32_reset(XXH32_state_t* state_in, U32 seed)
+{
+ XXH_istate32_t* state = (XXH_istate32_t*) state_in;
+ state->seed = seed;
+ state->v1 = seed + PRIME32_1 + PRIME32_2;
+ state->v2 = seed + PRIME32_2;
+ state->v3 = seed + 0;
+ state->v4 = seed - PRIME32_1;
+ state->total_len = 0;
+ state->memsize = 0;
+ return XXH_OK;
+}
+
+XXH_errorcode XXH64_reset(XXH64_state_t* state_in, unsigned long long seed)
+{
+ XXH_istate64_t* state = (XXH_istate64_t*) state_in;
+ state->seed = seed;
+ state->v1 = seed + PRIME64_1 + PRIME64_2;
+ state->v2 = seed + PRIME64_2;
+ state->v3 = seed + 0;
+ state->v4 = seed - PRIME64_1;
+ state->total_len = 0;
+ state->memsize = 0;
+ return XXH_OK;
+}
+
+
+FORCE_INLINE XXH_errorcode XXH32_update_endian (XXH32_state_t* state_in, const void* input, size_t len, XXH_endianess endian)
+{
+ XXH_istate32_t* state = (XXH_istate32_t *) state_in;
+ const BYTE* p = (const BYTE*)input;
+ const BYTE* const bEnd = p + len;
+
+#ifdef XXH_ACCEPT_NULL_INPUT_POINTER
+ if (input==NULL) return XXH_ERROR;
+#endif
+
+ state->total_len += len;
+
+ if (state->memsize + len < 16) // fill in tmp buffer
+ {
+ XXH_memcpy((BYTE*)(state->mem32) + state->memsize, input, len);
+ state->memsize += (U32)len;
+ return XXH_OK;
+ }
+
+ if (state->memsize) // some data left from previous update
+ {
+ XXH_memcpy((BYTE*)(state->mem32) + state->memsize, input, 16-state->memsize);
+ {
+ const U32* p32 = state->mem32;
+ state->v1 += XXH_readLE32(p32, endian) * PRIME32_2;
+ state->v1 = XXH_rotl32(state->v1, 13);
+ state->v1 *= PRIME32_1;
+ p32++;
+ state->v2 += XXH_readLE32(p32, endian) * PRIME32_2;
+ state->v2 = XXH_rotl32(state->v2, 13);
+ state->v2 *= PRIME32_1;
+ p32++;
+ state->v3 += XXH_readLE32(p32, endian) * PRIME32_2;
+ state->v3 = XXH_rotl32(state->v3, 13);
+ state->v3 *= PRIME32_1;
+ p32++;
+ state->v4 += XXH_readLE32(p32, endian) * PRIME32_2;
+ state->v4 = XXH_rotl32(state->v4, 13);
+ state->v4 *= PRIME32_1;
+ p32++;
+ }
+ p += 16-state->memsize;
+ state->memsize = 0;
+ }
+
+ if (p <= bEnd-16)
+ {
+ const BYTE* const limit = bEnd - 16;
+ U32 v1 = state->v1;
+ U32 v2 = state->v2;
+ U32 v3 = state->v3;
+ U32 v4 = state->v4;
+
+ do
+ {
+ v1 += XXH_readLE32(p, endian) * PRIME32_2;
+ v1 = XXH_rotl32(v1, 13);
+ v1 *= PRIME32_1;
+ p+=4;
+ v2 += XXH_readLE32(p, endian) * PRIME32_2;
+ v2 = XXH_rotl32(v2, 13);
+ v2 *= PRIME32_1;
+ p+=4;
+ v3 += XXH_readLE32(p, endian) * PRIME32_2;
+ v3 = XXH_rotl32(v3, 13);
+ v3 *= PRIME32_1;
+ p+=4;
+ v4 += XXH_readLE32(p, endian) * PRIME32_2;
+ v4 = XXH_rotl32(v4, 13);
+ v4 *= PRIME32_1;
+ p+=4;
+ }
+ while (p<=limit);
+
+ state->v1 = v1;
+ state->v2 = v2;
+ state->v3 = v3;
+ state->v4 = v4;
+ }
+
+ if (p < bEnd)
+ {
+ XXH_memcpy(state->mem32, p, bEnd-p);
+ state->memsize = (int)(bEnd-p);
+ }
+
+ return XXH_OK;
+}
+
+XXH_errorcode XXH32_update (XXH32_state_t* state_in, const void* input, size_t len)
+{
+ XXH_endianess endian_detected = (XXH_endianess)XXH_CPU_LITTLE_ENDIAN;
+
+ if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
+ return XXH32_update_endian(state_in, input, len, XXH_littleEndian);
+ else
+ return XXH32_update_endian(state_in, input, len, XXH_bigEndian);
+}
+
+
+
+FORCE_INLINE U32 XXH32_digest_endian (const XXH32_state_t* state_in, XXH_endianess endian)
+{
+ XXH_istate32_t* state = (XXH_istate32_t*) state_in;
+ const BYTE * p = (const BYTE*)state->mem32;
+ BYTE* bEnd = (BYTE*)(state->mem32) + state->memsize;
+ U32 h32;
+
+ if (state->total_len >= 16)
+ {
+ h32 = XXH_rotl32(state->v1, 1) + XXH_rotl32(state->v2, 7) + XXH_rotl32(state->v3, 12) + XXH_rotl32(state->v4, 18);
+ }
+ else
+ {
+ h32 = state->seed + PRIME32_5;
+ }
+
+ h32 += (U32) state->total_len;
+
+ while (p+4<=bEnd)
+ {
+ h32 += XXH_readLE32(p, endian) * PRIME32_3;
+ h32 = XXH_rotl32(h32, 17) * PRIME32_4;
+ p+=4;
+ }
+
+ while (p<bEnd)
+ {
+ h32 += (*p) * PRIME32_5;
+ h32 = XXH_rotl32(h32, 11) * PRIME32_1;
+ p++;
+ }
+
+ h32 ^= h32 >> 15;
+ h32 *= PRIME32_2;
+ h32 ^= h32 >> 13;
+ h32 *= PRIME32_3;
+ h32 ^= h32 >> 16;
+
+ return h32;
+}
+
+
+U32 XXH32_digest (const XXH32_state_t* state_in)
+{
+ XXH_endianess endian_detected = (XXH_endianess)XXH_CPU_LITTLE_ENDIAN;
+
+ if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
+ return XXH32_digest_endian(state_in, XXH_littleEndian);
+ else
+ return XXH32_digest_endian(state_in, XXH_bigEndian);
+}
+
+
+FORCE_INLINE XXH_errorcode XXH64_update_endian (XXH64_state_t* state_in, const void* input, size_t len, XXH_endianess endian)
+{
+ XXH_istate64_t * state = (XXH_istate64_t *) state_in;
+ const BYTE* p = (const BYTE*)input;
+ const BYTE* const bEnd = p + len;
+
+#ifdef XXH_ACCEPT_NULL_INPUT_POINTER
+ if (input==NULL) return XXH_ERROR;
+#endif
+
+ state->total_len += len;
+
+ if (state->memsize + len < 32) // fill in tmp buffer
+ {
+ XXH_memcpy(((BYTE*)state->mem64) + state->memsize, input, len);
+ state->memsize += (U32)len;
+ return XXH_OK;
+ }
+
+ if (state->memsize) // some data left from previous update
+ {
+ XXH_memcpy(((BYTE*)state->mem64) + state->memsize, input, 32-state->memsize);
+ {
+ const U64* p64 = state->mem64;
+ state->v1 += XXH_readLE64(p64, endian) * PRIME64_2;
+ state->v1 = XXH_rotl64(state->v1, 31);
+ state->v1 *= PRIME64_1;
+ p64++;
+ state->v2 += XXH_readLE64(p64, endian) * PRIME64_2;
+ state->v2 = XXH_rotl64(state->v2, 31);
+ state->v2 *= PRIME64_1;
+ p64++;
+ state->v3 += XXH_readLE64(p64, endian) * PRIME64_2;
+ state->v3 = XXH_rotl64(state->v3, 31);
+ state->v3 *= PRIME64_1;
+ p64++;
+ state->v4 += XXH_readLE64(p64, endian) * PRIME64_2;
+ state->v4 = XXH_rotl64(state->v4, 31);
+ state->v4 *= PRIME64_1;
+ p64++;
+ }
+ p += 32-state->memsize;
+ state->memsize = 0;
+ }
+
+ if (p+32 <= bEnd)
+ {
+ const BYTE* const limit = bEnd - 32;
+ U64 v1 = state->v1;
+ U64 v2 = state->v2;
+ U64 v3 = state->v3;
+ U64 v4 = state->v4;
+
+ do
+ {
+ v1 += XXH_readLE64(p, endian) * PRIME64_2;
+ v1 = XXH_rotl64(v1, 31);
+ v1 *= PRIME64_1;
+ p+=8;
+ v2 += XXH_readLE64(p, endian) * PRIME64_2;
+ v2 = XXH_rotl64(v2, 31);
+ v2 *= PRIME64_1;
+ p+=8;
+ v3 += XXH_readLE64(p, endian) * PRIME64_2;
+ v3 = XXH_rotl64(v3, 31);
+ v3 *= PRIME64_1;
+ p+=8;
+ v4 += XXH_readLE64(p, endian) * PRIME64_2;
+ v4 = XXH_rotl64(v4, 31);
+ v4 *= PRIME64_1;
+ p+=8;
+ }
+ while (p<=limit);
+
+ state->v1 = v1;
+ state->v2 = v2;
+ state->v3 = v3;
+ state->v4 = v4;
+ }
+
+ if (p < bEnd)
+ {
+ XXH_memcpy(state->mem64, p, bEnd-p);
+ state->memsize = (int)(bEnd-p);
+ }
+
+ return XXH_OK;
+}
+
+XXH_errorcode XXH64_update (XXH64_state_t* state_in, const void* input, size_t len)
+{
+ XXH_endianess endian_detected = (XXH_endianess)XXH_CPU_LITTLE_ENDIAN;
+
+ if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
+ return XXH64_update_endian(state_in, input, len, XXH_littleEndian);
+ else
+ return XXH64_update_endian(state_in, input, len, XXH_bigEndian);
+}
+
+
+
+FORCE_INLINE U64 XXH64_digest_endian (const XXH64_state_t* state_in, XXH_endianess endian)
+{
+ XXH_istate64_t * state = (XXH_istate64_t *) state_in;
+ const BYTE * p = (const BYTE*)state->mem64;
+ BYTE* bEnd = (BYTE*)state->mem64 + state->memsize;
+ U64 h64;
+
+ if (state->total_len >= 32)
+ {
+ U64 v1 = state->v1;
+ U64 v2 = state->v2;
+ U64 v3 = state->v3;
+ U64 v4 = state->v4;
+
+ h64 = XXH_rotl64(v1, 1) + XXH_rotl64(v2, 7) + XXH_rotl64(v3, 12) + XXH_rotl64(v4, 18);
+
+ v1 *= PRIME64_2;
+ v1 = XXH_rotl64(v1, 31);
+ v1 *= PRIME64_1;
+ h64 ^= v1;
+ h64 = h64*PRIME64_1 + PRIME64_4;
+
+ v2 *= PRIME64_2;
+ v2 = XXH_rotl64(v2, 31);
+ v2 *= PRIME64_1;
+ h64 ^= v2;
+ h64 = h64*PRIME64_1 + PRIME64_4;
+
+ v3 *= PRIME64_2;
+ v3 = XXH_rotl64(v3, 31);
+ v3 *= PRIME64_1;
+ h64 ^= v3;
+ h64 = h64*PRIME64_1 + PRIME64_4;
+
+ v4 *= PRIME64_2;
+ v4 = XXH_rotl64(v4, 31);
+ v4 *= PRIME64_1;
+ h64 ^= v4;
+ h64 = h64*PRIME64_1 + PRIME64_4;
+ }
+ else
+ {
+ h64 = state->seed + PRIME64_5;
+ }
+
+ h64 += (U64) state->total_len;
+
+ while (p+8<=bEnd)
+ {
+ U64 k1 = XXH_readLE64(p, endian);
+ k1 *= PRIME64_2;
+ k1 = XXH_rotl64(k1,31);
+ k1 *= PRIME64_1;
+ h64 ^= k1;
+ h64 = XXH_rotl64(h64,27) * PRIME64_1 + PRIME64_4;
+ p+=8;
+ }
+
+ if (p+4<=bEnd)
+ {
+ h64 ^= (U64)(XXH_readLE32(p, endian)) * PRIME64_1;
+ h64 = XXH_rotl64(h64, 23) * PRIME64_2 + PRIME64_3;
+ p+=4;
+ }
+
+ while (p<bEnd)
+ {
+ h64 ^= (*p) * PRIME64_5;
+ h64 = XXH_rotl64(h64, 11) * PRIME64_1;
+ p++;
+ }
+
+ h64 ^= h64 >> 33;
+ h64 *= PRIME64_2;
+ h64 ^= h64 >> 29;
+ h64 *= PRIME64_3;
+ h64 ^= h64 >> 32;
+
+ return h64;
+}
+
+
+unsigned long long XXH64_digest (const XXH64_state_t* state_in)
+{
+ XXH_endianess endian_detected = (XXH_endianess)XXH_CPU_LITTLE_ENDIAN;
+
+ if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
+ return XXH64_digest_endian(state_in, XXH_littleEndian);
+ else
+ return XXH64_digest_endian(state_in, XXH_bigEndian);
+}
+
+
--- /dev/null
+From: Krzysztof Oledzki <ole@ans.pl>
+Date: Sun, 20 Apr 2008 22:19:09 +0200 (CEST)
+Subject: Re: [PATCH] Flush buffers also where there are exactly 0 bytes left
+
+I'm also attaching a debug patch that helps to trigger this bug.
+
+Without the fix:
+# echo -ne "GET /haproxy?stats;csv;norefresh HTTP/1.0\r\n\r\n"|nc 127.0.0.1 801|wc -c
+16384
+
+With the fix:
+# echo -ne "GET /haproxy?stats;csv;norefresh HTTP/1.0\r\n\r\n"|nc 127.0.0.1 801|wc -c
+33089
+
+Best regards,
+
+diff --git a/src/dumpstats.c b/src/dumpstats.c
+index ddadddd..28bbfce 100644
+--- a/src/dumpstats.c
++++ b/src/dumpstats.c
+@@ -593,6 +593,7 @@ int stats_dump_proxy(struct session *s, struct proxy *px, struct uri_auth *uri)
+
+ msg.len = 0;
+ msg.str = trash;
++ int i;
+
+ switch (s->data_ctx.stats.px_st) {
+ case DATA_ST_PX_INIT:
+@@ -667,6 +668,13 @@ int stats_dump_proxy(struct session *s, struct proxy *px, struct uri_auth *uri)
+ /* print the frontend */
+ if ((px->cap & PR_CAP_FE) &&
+ (!(s->data_ctx.stats.flags & STAT_BOUND) || (s->data_ctx.stats.type & (1 << STATS_TYPE_FE)))) {
++
++ if (1) {
++ for (i=0; i<16096; i++)
++ chunk_printf(&msg, trashlen, "*");
++
++ chunk_printf(&msg, trashlen, "\n");
++#if 0
+ if (!(s->data_ctx.stats.flags & STAT_FMT_CSV)) {
+ chunk_printf(&msg, trashlen,
+ /* name, queue */
+@@ -694,6 +702,7 @@ int stats_dump_proxy(struct session *s, struct proxy *px, struct uri_auth *uri)
+ px->failed_req,
+ px->state == PR_STRUN ? "OPEN" :
+ px->state == PR_STIDLE ? "FULL" : "STOP");
++#endif
+ } else {
+ chunk_printf(&msg, trashlen,
+ /* pxid, name, queue cur, queue max, */
+
+
--- /dev/null
+/*
+ * experimental weighted round robin scheduler - (c) 2007 willy tarreau.
+ *
+ * This filling algorithm is excellent at spreading the servers, as it also
+ * takes care of keeping the most uniform distance between occurrences of each
+ * server, by maximizing this distance. It reduces the number of variables
+ * and expensive operations.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include "eb32tree.h"
+
+struct srv {
+ struct eb32_node node;
+ struct eb_root *tree; // we want to know where the server is
+ int num;
+ int w; /* weight */
+ int next, last;
+ int rem;
+} *srv;
+
+/* those trees represent a sliding window of 3 time frames */
+struct eb_root tree_0 = EB_ROOT;
+struct eb_root tree_1 = EB_ROOT;
+struct eb_root tree_2 = EB_ROOT;
+
+struct eb_root *init_tree; /* receives positions 0..sw-1 */
+struct eb_root *next_tree; /* receives positions >= 2sw */
+
+int nsrv; /* # of servers */
+int nsw, sw; /* sum of weights */
+int p; /* current position, between sw..2sw-1 */
+
+/* queue a server in the weights tree */
+void queue_by_weight(struct eb_root *root, struct srv *s) {
+ s->node.key = 255 - s->w;
+ eb32_insert(root, &s->node);
+ s->tree = root;
+}
+
+/* queue a server in the weight tree <root>, except if its weight is 0 */
+void queue_by_weight_0(struct eb_root *root, struct srv *s) {
+ if (s->w) {
+ s->node.key = 255 - s->w;
+ eb32_insert(root, &s->node);
+ s->tree = root;
+ } else {
+ s->tree = NULL;
+ }
+}
+
+static inline void dequeue_srv(struct srv *s) {
+ eb32_delete(&s->node);
+}
+
+/* queues a server into the correct tree depending on ->next */
+void put_srv(struct srv *s) {
+ if (s->w <= 0 ||
+ s->next >= 2*sw || /* delay everything which does not fit into the window */
+ s->next >= sw+nsw) { /* and everything which does not fit into the theoretical new window */
+ /* put into next tree */
+ s->next -= sw; // readjust next in case we could finally take this back to current.
+ queue_by_weight_0(next_tree, s);
+ } else {
+ // The overflow problem is caused by the scale we want to apply to user weight
+ // to turn it into effective weight. Since this is only used to provide a smooth
+ // slowstart on very low weights (1), it is a pure waste. Thus, we just have to
+ // apply a small scaling factor and warn the user that slowstart is not very smooth
+ // on low weights.
+ // The max key is about ((scale*maxw)*(scale*maxw)*nbsrv)/ratio (where the ratio is
+ // the arbitrary divide we perform in the examples above). Assuming that ratio==scale,
+ // this translates to maxkey=scale*maxw^2*nbsrv, so
+ // max_nbsrv=2^32/255^2/scale ~= 66051/scale
+ // Using a scale of 16 is enough to support 4000 servers without overflow, providing
+ // 6% steps during slowstart.
+
+ s->node.key = 256 * s->next + (16*255 + s->rem - s->w) / 16;
+
+ /* check for overflows */
+ if ((int)s->node.key < 0)
+ printf(" OV: srv=%p w=%d rem=%d next=%d key=%d", s, s->w, s->rem, s->next, s->node.key);
+ eb32_insert(&tree_0, &s->node);
+ s->tree = &tree_0;
+ }
+}
+
+/* prepares a server when extracting it from the init tree */
+static inline void get_srv_init(struct srv *s) {
+ s->next = s->rem = 0;
+}
+
+/* prepares a server when extracting it from the next tree */
+static inline void get_srv_next(struct srv *s) {
+ s->next += sw;
+}
+
+/* prepares a server when extracting it from the next tree */
+static inline void get_srv_down(struct srv *s) {
+ s->next = p;
+}
+
+/* prepares a server when extracting it from its tree */
+void get_srv(struct srv *s) {
+ if (s->tree == init_tree) {
+ get_srv_init(s);
+ }
+ else if (s->tree == next_tree) {
+ get_srv_next(s);
+ }
+ else if (s->tree == NULL) {
+ get_srv_down(s);
+ }
+}
+
+
+/* return next server from the current tree, or a server from the init tree
+ * if appropriate. If both trees are empty, return NULL.
+ */
+struct srv *get_next_server() {
+ struct eb32_node *node;
+ struct srv *s;
+
+ node = eb32_first(&tree_0);
+ s = eb32_entry(node, struct srv, node);
+
+ if (!node || s->next > p) {
+ /* either we have no server left, or we have a hole */
+ struct eb32_node *node2;
+ node2 = eb32_first(init_tree);
+ if (node2) {
+ node = node2;
+ s = eb32_entry(node, struct srv, node);
+ get_srv_init(s);
+ if (s->w == 0)
+ node = NULL;
+ s->node.key = 0; // do not display random values
+ }
+ }
+ if (node)
+ return s;
+ else
+ return NULL;
+}
+
+void update_position(struct srv *s) {
+ //if (s->tree == init_tree) {
+ if (!s->next) {
+ // first time ever for this server
+ s->last = p;
+ s->next = p + nsw / s->w;
+ s->rem += nsw % s->w;
+
+ if (s->rem >= s->w) {
+ s->rem -= s->w;
+ s->next++;
+ }
+ } else {
+ s->last = s->next; // or p ?
+ //s->next += sw / s->w;
+ //s->rem += sw % s->w;
+ s->next += nsw / s->w;
+ s->rem += nsw % s->w;
+
+ if (s->rem >= s->w) {
+ s->rem -= s->w;
+ s->next++;
+ }
+ }
+}
+
+
+/* switches trees init_tree and next_tree. init_tree should be empty when
+ * this happens, and next_tree filled with servers sorted by weights.
+ */
+void switch_trees() {
+ struct eb_root *swap;
+ swap = init_tree;
+ init_tree = next_tree;
+ next_tree = swap;
+ sw = nsw;
+ p = sw;
+}
+
+int main(int argc, char **argv) {
+ int conns;
+ int i;
+
+ struct srv *s;
+
+ argc--; argv++;
+ nsrv = argc;
+
+ if (!nsrv)
+ exit(1);
+
+ srv = (struct srv *)calloc(nsrv, sizeof(struct srv));
+
+ sw = 0;
+ for (i = 0; i < nsrv; i++) {
+ s = &srv[i];
+ s->num = i;
+ s->w = atol(argv[i]);
+ sw += s->w;
+ }
+
+ nsw = sw;
+
+ init_tree = &tree_1;
+ next_tree = &tree_2;
+
+ /* and insert all the servers in the PREV tree */
+ /* note that it is required to insert them according to
+ * the reverse order of their weights.
+ */
+ printf("---------------:");
+ for (i = 0; i < nsrv; i++) {
+ s = &srv[i];
+ queue_by_weight_0(init_tree, s);
+ printf("%2d", s->w);
+ }
+ printf("\n");
+
+ p = sw; // time base of current tree
+ conns = 0;
+ while (1) {
+ struct eb32_node *node;
+
+ printf("%08d|%06d: ", conns, p);
+
+ /* if we have an empty tree, let's first try to collect weights
+ * which might have changed.
+ */
+ if (!sw) {
+ if (nsw) {
+ sw = nsw;
+ p = sw;
+ /* do not switch trees, otherwise new servers (from init)
+ * would end up in next.
+ */
+ //switch_trees();
+ //printf("bla\n");
+ }
+ else
+ goto next_iteration;
+ }
+
+ s = get_next_server();
+ if (!s) {
+ printf("----------- switch (empty) -- sw=%d -> %d ---------\n", sw, nsw);
+ switch_trees();
+ s = get_next_server();
+ printf("%08d|%06d: ", conns, p);
+
+ if (!s)
+ goto next_iteration;
+ }
+ else if (s->next >= 2*sw) {
+ printf("ARGGGGG! s[%d].next=%d, max=%d\n", s->num, s->next, 2*sw-1);
+ }
+
+ /* now we have THE server we want to put at this position */
+ for (i = 0; i < s->num; i++) {
+ if (srv[i].w > 0)
+ printf(". ");
+ else
+ printf("_ ");
+ }
+ printf("# ");
+ for (i = s->num + 1; i < nsrv; i++) {
+ if (srv[i].w > 0)
+ printf(". ");
+ else
+ printf("_ ");
+ }
+ printf(" : ");
+
+ printf("s=%02d v=%04d w=%03d n=%03d r=%03d ",
+ s->num, s->node.key, s->w, s->next, s->rem);
+
+ update_position(s);
+ printf(" | next=%03d, rem=%03d ", s->next, s->rem);
+
+ if (s->next >= sw * 2) {
+ dequeue_srv(s);
+ //queue_by_weight(next_tree, s);
+ put_srv(s);
+ printf(" => next (w=%d, n=%d) ", s->w, s->next);
+ }
+ else {
+ printf(" => curr ");
+
+ //s->node.key = s->next;
+ /* we want to ensure that in case of conflicts, servers with
+ * the highest weights will get served first. Also, we still
+ * have the remainder to see where the entry expected to be
+ * inserted.
+ */
+ //s->node.key = 256 * s->next + 255 - s->w;
+ //s->node.key = sw * s->next + sw / s->w;
+ //s->node.key = sw * s->next + s->rem; /// seems best (check with filltab15) !
+
+ //s->node.key = (2 * sw * s->next) + s->rem + sw / s->w;
+
+ /* FIXME: must be optimized */
+ dequeue_srv(s);
+ put_srv(s);
+ //eb32i_insert(&tree_0, &s->node);
+ //s->tree = &tree_0;
+ }
+
+ next_iteration:
+ p++;
+ conns++;
+ if (/*conns == 30*/ /**/random()%100 == 0/**/) {
+ int w = /*20*//**/random()%4096/**/;
+ int num = /*1*//**/random()%nsrv/**/;
+ struct srv *s = &srv[num];
+
+ nsw = nsw - s->w + w;
+ //sw=nsw;
+
+ if (s->tree == init_tree) {
+ printf(" -- chgwght1(%d): %d->%d, n=%d --", s->num, s->w, w, s->next);
+ printf("(init)");
+ s->w = w;
+ dequeue_srv(s);
+ queue_by_weight_0(s->tree, s);
+ }
+ else if (s->tree == NULL) {
+ printf(" -- chgwght2(%d): %d->%d, n=%d --", s->num, s->w, w, s->next);
+ printf("(down)");
+ s->w = w;
+ dequeue_srv(s);
+ //queue_by_weight_0(init_tree, s);
+ get_srv(s);
+ s->next = p + (nsw + sw - p) / s->w;
+ put_srv(s);
+ }
+ else {
+ int oldnext;
+
+ /* the server is either active or in the next queue */
+ get_srv(s);
+ printf(" -- chgwght3(%d): %d->%d, n=%d, sw=%d, nsw=%d --", s->num, s->w, w, s->next, sw, nsw);
+
+ oldnext = s->next;
+ s->w = w;
+
+ /* we must measure how far we are from the end of the current window
+ * and try to fit there as many entries as there theoretically should be.
+ */
+
+ //s->w = s->w * (2*sw - p) / sw;
+ if (s->w > 0) {
+ int step = (nsw /*+ sw - p*/) / s->w;
+ s->next = s->last + step;
+ s->rem = 0;
+ if (s->next > oldnext) {
+ s->next = oldnext;
+ printf(" aaaaaaa ");
+ }
+
+ if (s->next < p + 2) {
+ s->next = p + step;
+ printf(" bbbbbb ");
+ }
+ } else {
+ printf(" push -- ");
+ /* push it into the next tree */
+ s->w = 0;
+ s->next = p + sw;
+ }
+
+
+ dequeue_srv(s);
+ printf(" n=%d", s->next);
+ put_srv(s);
+ }
+ }
+
+ printf("\n");
+
+ if (0 && conns % 50000 == 0) {
+ printf("-------- %-5d : changing all weights ----\n", conns);
+
+ for (i = 0; i < nsrv; i++) {
+ int w = i + 1;
+ s = &srv[i];
+ nsw = nsw - s->w + w;
+ s->w = w;
+ dequeue_srv(s);
+ queue_by_weight_0(next_tree, s); // or init_tree ?
+ }
+ }
+
+ }
+}
+
--- /dev/null
+Test: ./test_hashes | sort -k 3 -r
+
+Note: haproxy_server_hash should be avoided as it's just a 32 bit XOR.
+
+Athlon @ 1533 MHz, gcc-3.4 -march=i686 :
+ haproxy_server_hash : 18477000 run/sec
+ SuperFastHash : 6983511 run/sec
+ hash_djbx33 : 4164334 run/sec
+ bernstein : 3371838 run/sec
+ kr_hash : 3257684 run/sec
+ sax_hash : 3027567 run/sec
+ fnv_hash : 2818374 run/sec
+ haproxy_uri_hash : 2108346 run/sec
+ oat_hash : 2106181 run/sec
+ hashword : 1936973 run/sec
+ hashpjw : 1803475 run/sec
+ fnv_32a_str : 1499198 run/sec
+
+Pentium-M @1700 MHz, gcc-3.4 -march=i686 :
+ haproxy_server_hash : 15471737 run/sec
+ SuperFastHash : 8155706 run/sec
+ hash_djbx33 : 4520191 run/sec
+ bernstein : 3956142 run/sec
+ kr_hash : 3725125 run/sec
+ fnv_hash : 3155413 run/sec
+ sax_hash : 2688323 run/sec
+ oat_hash : 2452789 run/sec
+ haproxy_uri_hash : 2010853 run/sec
+ hashword : 1831441 run/sec
+ hashpjw : 1737000 run/sec
+ fnv_32a_str : 1643737 run/sec
+
+Athlon @ 1533 MHz, gcc-4.1 -march=i686 :
+ haproxy_server_hash : 13592089 run/sec
+ SuperFastHash2 : 8687957 run/sec
+ SuperFastHash : 7361242 run/sec
+ hash_djbx33 : 5741546 run/sec
+ bernstein : 3368909 run/sec
+ sax_hash : 3339880 run/sec
+ kr_hash : 3277230 run/sec
+ fnv_hash : 2832402 run/sec
+ hashword : 2500317 run/sec
+ haproxy_uri_hash : 2433241 run/sec
+ oat_hash : 2403118 run/sec
+ hashpjw : 1881229 run/sec
+ fnv_32a_str : 1815709 run/sec
+
+Pentium-M @1700 MHz, gcc-4.1 -march=i686 :
+ haproxy_server_hash : 14128788 run/sec
+ SuperFastHash2 : 8157119 run/sec
+ SuperFastHash : 7481027 run/sec
+ hash_djbx33 : 5660711 run/sec
+ bernstein : 3961493 run/sec
+ fnv_hash : 3590727 run/sec
+ kr_hash : 3389393 run/sec
+ sax_hash : 2667227 run/sec
+ oat_hash : 2348211 run/sec
+ hashword : 2278856 run/sec
+ haproxy_uri_hash : 2098022 run/sec
+ hashpjw : 1846583 run/sec
+ fnv_32a_str : 1661219 run/sec
+
+Pentium-M @600 MHz, gcc-4.1 -march=i686 :
+ haproxy_server_hash : 5318468 run/sec
+ SuperFastHash2 : 3126165 run/sec
+ SuperFastHash : 2729981 run/sec
+ hash_djbx33 : 2042181 run/sec
+ bernstein : 1422927 run/sec
+ fnv_hash : 1287736 run/sec
+ kr_hash : 1217924 run/sec
+ sax_hash : 949694 run/sec
+ oat_hash : 837279 run/sec
+ hashword : 812868 run/sec
+ haproxy_uri_hash : 747611 run/sec
+ hashpjw : 659890 run/sec
+ fnv_32a_str : 590895 run/sec
+
+athlon @ 1.5 GHz, gcc-2.95 -march=i686 :
+ haproxy_server_hash : 13592864 run/sec
+ SuperFastHash : 6931251 run/sec
+ bernstein : 4105179 run/sec
+ hash_djbx33 : 3920059 run/sec
+ kr_hash : 2985794 run/sec
+ fnv_hash : 2815457 run/sec
+ sax_hash : 2791358 run/sec
+ haproxy_uri_hash : 2786663 run/sec
+ oat_hash : 2237859 run/sec
+ hashword : 1985740 run/sec
+ hashpjw : 1757733 run/sec
+ fnv_32a_str : 1697299 run/sec
+
+Pentium-M @ 600 MHz, gcc-2.95 -march=i686 :
+ SuperFastHash : 2934387 run/sec
+ haproxy_server_hash : 2864668 run/sec
+ hash_djbx33 : 1498043 run/sec
+ bernstein : 1414993 run/sec
+ kr_hash : 1297907 run/sec
+ fnv_hash : 1260343 run/sec
+ sax_hash : 924764 run/sec
+ oat_hash : 854545 run/sec
+ haproxy_uri_hash : 790040 run/sec
+ hashword : 693501 run/sec
+ hashpjw : 647346 run/sec
+ fnv_32a_str : 579691 run/sec
+
+Pentium-M @ 1700 MHz, gcc-2.95 -march=i686 :
+ SuperFastHash : 8006127 run/sec
+ haproxy_server_hash : 7834162 run/sec
+ hash_djbx33 : 4186025 run/sec
+ bernstein : 3941492 run/sec
+ kr_hash : 3630713 run/sec
+ fnv_hash : 3507488 run/sec
+ sax_hash : 2528128 run/sec
+ oat_hash : 2395188 run/sec
+ haproxy_uri_hash : 2158924 run/sec
+ hashword : 1910992 run/sec
+ hashpjw : 1819894 run/sec
+ fnv_32a_str : 1629844 run/sec
+
+UltraSparc @ 400 MHz, gcc-3.4.3 :
+ haproxy_server_hash : 5573220 run/sec
+ SuperFastHash : 1372714 run/sec
+ bernstein : 1361733 run/sec
+ hash_djbx33 : 1090373 run/sec
+ sax_hash : 872499 run/sec
+ oat_hash : 730354 run/sec
+ kr_hash : 645431 run/sec
+ haproxy_uri_hash : 541157 run/sec
+ fnv_32a_str : 442608 run/sec
+ hashpjw : 434858 run/sec
+ fnv_hash : 401945 run/sec
+ hashword : 340594 run/sec
+
+UltraSparc @ 400 MHz, gcc-3.4.3 -mcpu=v9 :
+ haproxy_server_hash : 5671183 run/sec
+ bernstein : 1437122 run/sec
+ hash_djbx33 : 1376294 run/sec
+ SuperFastHash : 1306634 run/sec
+ sax_hash : 873650 run/sec
+ kr_hash : 801439 run/sec
+ oat_hash : 729920 run/sec
+ haproxy_uri_hash : 545341 run/sec
+ hashpjw : 472190 run/sec
+ fnv_32a_str : 443668 run/sec
+ hashword : 357295 run/sec
+ fnv_hash : 208823 run/sec
+
+
+Alpha EV6 @ 466 MHz, gcc-3.3 :
+ haproxy_server_hash : 2495928 run/sec
+ SuperFastHash : 2037208 run/sec
+ hash_djbx33 : 1625092 run/sec
+ kr_hash : 1532206 run/sec
+ bernstein : 1256746 run/sec
+ haproxy_uri_hash : 999106 run/sec
+ oat_hash : 841943 run/sec
+ sax_hash : 737447 run/sec
+ hashpjw : 676170 run/sec
+ fnv_hash : 644054 run/sec
+ fnv_32a_str : 638526 run/sec
+ hashword : 421777 run/sec
+
+VIA EPIA @ 533 MHz, gcc-2.95 -march=i586 :
+ haproxy_server_hash : 1391374 run/sec
+ SuperFastHash : 912397 run/sec
+ hash_djbx33 : 589868 run/sec
+ kr_hash : 453706 run/sec
+ bernstein : 437318 run/sec
+ sax_hash : 334456 run/sec
+ hashpjw : 316670 run/sec
+ hashword : 315476 run/sec
+ haproxy_uri_hash : 311112 run/sec
+ oat_hash : 259127 run/sec
+ fnv_32a_str : 229485 run/sec
+ fnv_hash : 151620 run/sec
+
+VIA EPIA @ 533 MHz, gcc-3.4 -march=i586 :
+ haproxy_server_hash : 1660407 run/sec
+ SuperFastHash : 791981 run/sec
+ hash_djbx33 : 680498 run/sec
+ kr_hash : 384076 run/sec
+ bernstein : 377247 run/sec
+ sax_hash : 355183 run/sec
+ hashpjw : 298879 run/sec
+ haproxy_uri_hash : 296748 run/sec
+ oat_hash : 283932 run/sec
+ hashword : 269429 run/sec
+ fnv_32a_str : 204776 run/sec
+ fnv_hash : 155301 run/sec
+
+Pentium @ 133 MHz, gcc-3.4 -march=i586 :
+ haproxy_server_hash : 930788 run/sec
+ SuperFastHash : 344988 run/sec
+ hash_djbx33 : 278996 run/sec
+ bernstein : 211545 run/sec
+ sax_hash : 185225 run/sec
+ kr_hash : 156603 run/sec
+ oat_hash : 135163 run/sec
+ hashword : 128518 run/sec
+ fnv_hash : 107024 run/sec
+ haproxy_uri_hash : 105523 run/sec
+ fnv_32a_str : 99913 run/sec
+ hashpjw : 97860 run/sec
+
+VAX VLC4000 @30 MHz, gcc-2.95 :
+ haproxy_server_hash : 13208 run/sec
+ hash_djbx33 : 12963 run/sec
+ fnv_hash : 12150 run/sec
+ SuperFastHash : 12037 run/sec
+ bernstein : 11765 run/sec
+ kr_hash : 11111 run/sec
+ sax_hash : 7273 run/sec
+ hashword : 7143 run/sec
+ oat_hash : 6931 run/sec
+ hashpjw : 6667 run/sec
+ haproxy_uri_hash : 5714 run/sec
+ fnv_32a_str : 4800 run/sec
+
--- /dev/null
+These are the results of tests conducted to determine the efficacy of hashing
+algorithms and avalanche application in haproxy. All results below were
+generated using version 1.5. See the document on hashing under internal docs
+for a detailed description of the tests, the methodology, and the
+interpretation of the results.
+
+The following setup was used:
+
+(a) hash-type consistent/map-based
+(b) avalanche on/off
+(c) balance hdr(host)
+(d) 3 criteria for inputs
+ - ~ 10K requests, including duplicates
+ - ~ 46K unique requests, obtained from 1 MM requests
+ - ~ 250K requests, including duplicates
+(e) 17 servers in backend, all servers were assigned the same weight
+
+The results can be interpreted across 3 dimensions, corresponding to input
+criteria (a), (b), and (d) above.
+
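+The setup above corresponds to a backend along these lines (a sketch only,
+not the exact configuration used; server names and addresses are
+hypothetical, and the hash function/modifier were varied per test):

```
backend bk_hash_test
    balance hdr(host)
    # varied per run: consistent vs map-based, sdbm/djb2/wt6, avalanche on/off
    hash-type consistent sdbm avalanche
    server srv1 10.0.0.1:80 weight 10
    server srv2 10.0.0.2:80 weight 10
    # ... 17 servers in total, all with the same weight
```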
+== 10 K requests ==
+
+=== Consistent with avalanche ===
+
+Servers SDBM DJB2 WT6
+-----------------------------
+1 576 637 592
+2 552 608 600
+3 539 559 551
+4 578 586 493
+5 534 555 549
+6 614 607 576
+7 519 556 554
+8 591 565 607
+9 529 604 575
+10 642 550 678
+11 537 591 506
+12 568 571 567
+13 589 606 572
+14 648 568 711
+15 645 557 603
+16 583 627 591
+17 699 596 618
+-----------------------------
+Std Dev 48.95 26.29 51.75
+
+=== Consistent without avalanche ===
+
+Servers SDBM DJB2 WT6
+-----------------------------
+1 612 627 579
+2 631 607 563
+3 585 605 604
+4 594 502 518
+5 583 526 602
+6 589 594 555
+7 591 602 511
+8 518 540 623
+9 550 519 523
+10 600 637 647
+11 568 536 550
+12 552 605 645
+13 547 556 564
+14 615 674 635
+15 642 624 618
+16 575 585 609
+17 591 604 597
+-----------------------------
+Std Dev 30.71 45.97 42.52
+
+=== Map based without avalanche ===
+
+Servers SDBM DJB2 WT6
+-----------------------------
+1 602 560 598
+2 576 583 583
+3 579 624 593
+4 608 587 551
+5 579 549 588
+6 582 560 590
+7 553 616 562
+8 568 600 551
+9 594 607 620
+10 574 611 635
+11 578 607 603
+12 563 581 547
+13 604 531 572
+14 621 606 618
+15 600 561 602
+16 555 570 585
+17 607 590 545
+-----------------------------
+Std Dev 19.24 25.56 26.29
+
+=== Map based with avalanche ===
+
+Servers SDBM DJB2 WT6
+-----------------------------
+1 548 641 597
+2 612 563 655
+3 596 536 595
+4 609 574 537
+5 586 610 570
+6 600 568 562
+7 589 573 578
+8 584 549 573
+9 561 636 603
+10 607 553 603
+11 554 602 616
+12 560 577 568
+13 597 534 570
+14 597 647 570
+15 563 581 647
+16 575 647 565
+17 605 552 534
+-----------------------------
+Std Dev 20.23 37.47 32.16
+
+== Uniques in 1 MM ==
+
+=== Consistent with avalanche ===
+
+Servers SDBM DJB2 WT6
+-----------------------------
+1 2891 2963 2947
+2 2802 2849 2771
+3 2824 2854 2904
+4 2704 2740 2763
+5 2664 2699 2646
+6 2902 2876 2935
+7 2829 2745 2730
+8 2648 2768 2800
+9 2710 2741 2689
+10 3070 3111 3106
+11 2733 2638 2589
+12 2828 2866 2885
+13 2876 2961 2870
+14 3090 2997 3044
+15 2871 2879 2827
+16 2881 2727 2921
+17 2936 2845 2832
+-----------------------------
+Std Dev 121.66 118.59 131.61
+
+=== Consistent without avalanche ===
+
+Servers SDBM DJB2 WT6
+-----------------------------
+1 2879 2930 2863
+2 2835 2856 2853
+3 2875 2741 2899
+4 2720 2718 2761
+5 2703 2754 2689
+6 2848 2901 2925
+7 2829 2756 2838
+8 2761 2779 2805
+9 2719 2671 2746
+10 3015 3176 3079
+11 2620 2661 2656
+12 2879 2773 2713
+13 2829 2844 2925
+14 3064 2951 3041
+15 2898 2928 2877
+16 2880 2867 2791
+17 2905 2953 2798
+-----------------------------
+Std Dev 107.65 125.2 111.34
+
+=== Map based without avalanche ===
+
+Servers SDBM DJB2 WT6
+-----------------------------
+1 2863 2837 2923
+2 2966 2829 2847
+3 2865 2803 2808
+4 2682 2816 2787
+5 2847 2782 2815
+6 2910 2862 2862
+7 2821 2784 2793
+8 2837 2834 2796
+9 2857 2891 2859
+10 2829 2906 2873
+11 2742 2851 2841
+12 2790 2837 2870
+13 2765 2902 2794
+14 2870 2732 2900
+15 2898 2891 2759
+16 2877 2860 2863
+17 2840 2842 2869
+-----------------------------
+Std Dev 64.65 45.16 43.38
+
+=== Map based with avalanche ===
+
+Servers SDBM DJB2 WT6
+-----------------------------
+1 2816 2859 2895
+2 2841 2739 2789
+3 2846 2903 2888
+4 2817 2878 2812
+5 2750 2794 2852
+6 2816 2917 2847
+7 2792 2782 2786
+8 2800 2814 2868
+9 2854 2883 2842
+10 2770 2854 2855
+11 2851 2854 2837
+12 2910 2846 2776
+13 2904 2792 2882
+14 2984 2767 2854
+15 2766 2863 2823
+16 2902 2797 2907
+17 2840 2917 2746
+-----------------------------
+Std Dev 58.39 52.16 43.72
+
+== 250K requests ==
+
+=== Consistent with avalanche ===
+
+Servers SDBM DJB2 WT6
+-----------------------------
+1 14182 12996 20924
+2 14881 18376 8901
+3 13537 17935 13639
+4 11031 12582 19758
+5 15429 10084 12112
+6 18712 12574 14052
+7 14271 11257 14538
+8 12048 18582 16653
+9 10570 10283 13949
+10 11683 13081 23530
+11 9288 14828 10818
+12 10775 13607 19844
+13 10036 19138 15413
+14 31903 15222 11824
+15 21276 11963 10405
+16 17233 23116 11316
+17 11437 12668 10616
+-----------------------------
+Std Dev 5355.95 3512.39 4096.65
+
+=== Consistent without avalanche ===
+
+Servers SDBM DJB2 WT6
+-----------------------------
+1 12411 17833 11831
+2 14213 11165 14833
+3 11431 10241 11671
+4 14080 13913 20224
+5 10886 12101 14272
+6 15168 12470 14641
+7 18802 12211 10164
+8 18678 11852 12421
+9 17468 10865 17655
+10 19801 28493 13221
+11 10885 20201 13507
+12 20419 11660 14078
+13 12591 18616 13906
+14 12798 18200 24152
+15 13338 10532 14111
+16 11715 10478 14759
+17 13608 17461 12846
+-----------------------------
+Std Dev 3113.33 4749.97 3256.04
+
+=== Map based without avalanche ===
+
+Servers SDBM DJB2 WT6
+-----------------------------
+1 14660 12483 11472
+2 11118 11552 12146
+3 15407 19952 11032
+4 15444 12218 14572
+5 22091 11434 13738
+6 18273 17587 21337
+7 10527 16784 15118
+8 13013 12010 17195
+9 15754 9886 14611
+10 13758 11613 14844
+11 19564 16453 17403
+12 9692 17246 14469
+13 13905 11885 20024
+14 19401 15350 10611
+15 11889 25485 11172
+16 13846 13928 12109
+17 9950 12426 16439
+-----------------------------
+Std Dev 3481.45 3847.74 3031.93
+
+=== Map based with avalanche ===
+
+Servers SDBM DJB2 WT6
+-----------------------------
+1 15546 11454 12871
+2 15388 11464 17587
+3 11767 15527 14785
+4 15843 13214 11420
+5 11129 12192 15083
+6 15647 17875 11051
+7 18723 13629 23006
+8 10938 11295 11223
+9 12653 17202 23347
+10 10108 12867 14178
+11 12116 11190 20523
+12 14982 12341 11881
+13 13221 13929 11828
+14 17642 19621 15320
+15 12410 26171 11721
+16 25075 14764 13544
+17 15104 13557 8924
+-----------------------------
+Std Dev 3521.83 3742.21 4101.2
--- /dev/null
+---------- epoll without limits --------
+% time seconds usecs/call calls errors syscall
+------ ----------- ----------- --------- --------- ----------------
+ 47.19 2.671077 56 48093 22397 recv
+ 47.15 2.668840 106 25060 4858 send
+ 2.19 0.124020 10 12150 epoll_ctl
+ 1.96 0.110904 286 388 epoll_wait
+ 0.56 0.031565 47 670 close
+ 0.19 0.010481 28 380 350 connect
+ 0.15 0.008650 25 350 socket
+ 0.14 0.008204 26 320 shutdown
+ 0.14 0.007655 22 355 35 accept
+ 0.12 0.006871 10 670 setsockopt
+ 0.11 0.006194 9 670 fcntl64
+ 0.07 0.004148 12 355 brk
+ 0.04 0.002055 5 389 gettimeofday
+------ ----------- ----------- --------- --------- ----------------
+100.00 5.660664 89850 27640 total
+
+
+---------- sepoll without limit --------
+% time seconds usecs/call calls errors syscall
+------ ----------- ----------- --------- --------- ----------------
+ 49.43 2.770682 97 28486 3861 send
+ 46.48 2.605336 53 49317 23434 recv
+ 2.00 0.111916 206 542 epoll_wait
+ 0.65 0.036325 12 3030 epoll_ctl
+ 0.45 0.025282 38 670 close
+ 0.24 0.013247 34 388 358 connect
+ 0.17 0.009544 27 350 socket
+ 0.16 0.008734 27 320 shutdown
+ 0.11 0.006432 18 357 37 accept
+ 0.10 0.005699 9 670 setsockopt
+ 0.08 0.004724 7 670 fcntl64
+ 0.08 0.004568 6 767 gettimeofday
+ 0.06 0.003127 9 356 brk
+------ ----------- ----------- --------- --------- ----------------
+100.00 5.605616 85923 27690 total
+
+
+---------- sepoll with send limit only --------
+% time seconds usecs/call calls errors syscall
+------ ----------- ----------- --------- --------- ----------------
+ 49.21 2.779349 109 25417 418 send
+ 46.94 2.651058 54 49150 23368 recv
+ 1.77 0.099863 264 378 epoll_wait
+ 0.57 0.032141 14 2351 epoll_ctl
+ 0.46 0.025822 39 670 close
+ 0.25 0.014300 37 387 357 connect
+ 0.19 0.010530 30 350 socket
+ 0.15 0.008656 27 320 shutdown
+ 0.14 0.008008 23 354 34 accept
+ 0.11 0.006051 9 670 setsockopt
+ 0.10 0.005461 8 670 fcntl64
+ 0.07 0.003842 6 604 gettimeofday
+ 0.06 0.003120 9 358 brk
+------ ----------- ----------- --------- --------- ----------------
+100.00 5.648201 81679 24177 total
+
+
+---------- sepoll with send + recv limits --------
+Process 3173 attached - interrupt to quit
+Process 3173 detached
+% time seconds usecs/call calls errors syscall
+------ ----------- ----------- --------- --------- ----------------
+ 49.09 2.802918 105 26771 596 send
+ 47.72 2.724651 89 30761 728 recv
+ 1.12 0.063952 55 1169 epoll_wait
+ 0.47 0.026810 40 676 close
+ 0.44 0.025358 11 2329 epoll_ctl
+ 0.21 0.012255 30 403 367 connect
+ 0.20 0.011135 35 320 shutdown
+ 0.18 0.010313 29 356 socket
+ 0.15 0.008614 6 1351 gettimeofday
+ 0.13 0.007678 21 360 40 accept
+ 0.13 0.007218 11 676 setsockopt
+ 0.10 0.005559 8 676 fcntl64
+ 0.05 0.002882 9 327 brk
+------ ----------- ----------- --------- --------- ----------------
+100.00 5.709343 66175 1731 total
+
+---------- epoll with send+recv limits -----------
+Process 3271 attached - interrupt to quit
+Process 3271 detached
+% time seconds usecs/call calls errors syscall
+------ ----------- ----------- --------- --------- ----------------
+ 46.96 2.742476 124 22193 send
+ 46.55 2.718027 98 27730 recv
+ 2.58 0.150701 11 13331 epoll_ctl
+ 2.30 0.134350 135 998 epoll_wait
+ 0.52 0.030520 45 673 close
+ 0.23 0.013422 42 320 shutdown
+ 0.19 0.011282 29 386 353 connect
+ 0.19 0.011063 31 353 socket
+ 0.12 0.007039 20 359 39 accept
+ 0.11 0.006629 10 673 fcntl64
+ 0.10 0.005920 9 673 setsockopt
+ 0.09 0.005157 5 999 gettimeofday
+ 0.05 0.002885 9 335 brk
+------ ----------- ----------- --------- --------- ----------------
+100.00 5.839471 69023 392 total
+
+
+Conclusion
+----------
+epoll = 89850 syscalls
+sepoll = 85923 syscalls
+epoll+limits = 69023 syscalls
+sepoll+limits = 66175 syscalls
+
+=> limits reduce the number of syscalls by 23%
+=> sepoll reduces the number of syscalls by 4%
+=> sepoll reduces the number of epoll_ctl by 83%
+=> limits reduce the number of epoll_ctl by 24%
+=> limits increase the number of epoll_wait by 115%
+
--- /dev/null
+/*
+ * Integer hashing tests. These functions work with 32-bit integers, so are
+ * perfectly suited for IPv4 addresses. A few tests show that they may also
+ * be chained for larger keys (eg: IPv6), this way :
+ * f(x[0-3]) = f(f(f(f(x[0])^x[1])^x[2])^x[3])
+ *
+ * See also Bob Jenkins' site for more info on hashing, and check perfect
+ * hashing for constants (eg: header names).
+ */
+
+#include <stdio.h>
+#include <stdint.h>
+#include <string.h>
+#include <arpa/inet.h>
+#include <math.h>
+
+#define NSERV 8
+#define MAXLINE 1000
+
+
+int counts_id[NSERV][NSERV];
+uint32_t hash_id( uint32_t a)
+{
+ return a;
+}
+
+/* Full-avalanche integer hashing function from Thomas Wang, suitable for use
+ * with a modulo. See below, worth a read !
+ * http://www.concentric.net/~Ttwang/tech/inthash.htm
+ *
+ * See also tests performed by Bob Jenkins (says it's faster than his) :
+ * http://burtleburtle.net/bob/hash/integer.html
+ *
+ * This function is small and fast. It does not seem as smooth as bj6 though.
+ * About 0x40 bytes, 6 shifts.
+ */
+int counts_tw1[NSERV][NSERV];
+uint32_t hash_tw1(uint32_t a)
+{
+ a += ~(a<<15);
+ a ^= (a>>10);
+ a += (a<<3);
+ a ^= (a>>6);
+ a += ~(a<<11);
+ a ^= (a>>16);
+ return a;
+}
+
+/* Thomas Wang's mix function. The multiply is optimized away by the compiler
+ * on most platforms.
+ * It is about equivalent to the one above.
+ */
+int counts_tw2[NSERV][NSERV];
+uint32_t hash_tw2(uint32_t a)
+{
+ a = ~a + (a << 15);
+ a = a ^ (a >> 12);
+ a = a + (a << 2);
+ a = a ^ (a >> 4);
+ a = a * 2057;
+ a = a ^ (a >> 16);
+ return a;
+}
+
+/* Thomas Wang's multiplicative hash function. About 0x30 bytes, and it is
+ * extremely fast on recent processors with a fast multiply. However, it
+ * must not be used on low bits only, as multiples of 0x00100010 only return
+ * even values !
+ */
+int counts_tw3[NSERV][NSERV];
+uint32_t hash_tw3(uint32_t a)
+{
+ a = (a ^ 61) ^ (a >> 16);
+ a = a + (a << 3);
+ a = a ^ (a >> 4);
+ a = a * 0x27d4eb2d;
+ a = a ^ (a >> 15);
+ return a;
+}
+
+
+/* Full-avalanche integer hashing function from Bob Jenkins, suitable for use
+ * with a modulo. It has a very smooth distribution.
+ * http://burtleburtle.net/bob/hash/integer.html
+ * About 0x50 bytes, 6 shifts.
+ */
+int counts_bj6[NSERV][NSERV];
+int counts_bj6x[NSERV][NSERV];
+uint32_t hash_bj6(uint32_t a)
+{
+ a = (a+0x7ed55d16) + (a<<12);
+ a = (a^0xc761c23c) ^ (a>>19);
+ a = (a+0x165667b1) + (a<<5);
+ a = (a+0xd3a2646c) ^ (a<<9);
+ a = (a+0xfd7046c5) + (a<<3);
+ a = (a^0xb55a4f09) ^ (a>>16);
+ return a;
+}
+
+/* Similar function with one more shift and no magic number. It is slightly
+ * slower but provides the overall smoothest distribution.
+ * About 0x40 bytes, 7 shifts.
+ */
+int counts_bj7[NSERV][NSERV];
+int counts_bj7x[NSERV][NSERV];
+uint32_t hash_bj7(uint32_t a)
+{
+ a -= (a<<6);
+ a ^= (a>>17);
+ a -= (a<<9);
+ a ^= (a<<4);
+ a -= (a<<3);
+ a ^= (a<<10);
+ a ^= (a>>15);
+ return a;
+}
+
+
+void count_hash_results(unsigned long hash, int counts[NSERV][NSERV]) {
+ int srv, nsrv;
+
+ for (nsrv = 0; nsrv < NSERV; nsrv++) {
+ srv = hash % (nsrv + 1);
+ counts[nsrv][srv]++;
+ }
+}
+
+void dump_hash_results(char *name, int counts[NSERV][NSERV]) {
+ int srv, nsrv;
+ double err, total_err, max_err;
+
+ printf("%s:\n", name);
+ for (nsrv = 0; nsrv < NSERV; nsrv++) {
+ total_err = 0.0;
+ max_err = 0.0;
+ printf("%02d srv: ", nsrv+1);
+ for (srv = 0; srv <= nsrv; srv++) {
+ err = 100.0*(counts[nsrv][srv] - (double)counts[0][0]/(nsrv+1)) / (double)counts[0][0];
+ //printf("%6d ", counts[nsrv][srv]);
+ printf("% 3.1f%%%c ", err,
+ counts[nsrv][srv]?' ':'*'); /* display '*' when a server is never selected */
+ err = fabs(err);
+ total_err += err;
+ if (err > max_err)
+ max_err = err;
+ }
+ total_err /= (double)(nsrv+1);
+ for (srv = nsrv+1; srv < NSERV; srv++)
+ printf(" ");
+ printf(" avg_err=%3.1f, max_err=%3.1f\n", total_err, max_err);
+ }
+ printf("\n");
+}
+
+int main() {
+ int nr;
+ unsigned int address = 0;
+ unsigned int mask = ~0;
+
+ memset(counts_id, 0, sizeof(counts_id));
+ memset(counts_tw1, 0, sizeof(counts_tw1));
+ memset(counts_tw2, 0, sizeof(counts_tw2));
+ memset(counts_tw3, 0, sizeof(counts_tw3));
+ memset(counts_bj6, 0, sizeof(counts_bj6));
+ memset(counts_bj6x, 0, sizeof(counts_bj6x));
+ memset(counts_bj7, 0, sizeof(counts_bj7));
+ memset(counts_bj7x, 0, sizeof(counts_bj7x));
+
+ address = 0x10000000;
+ mask = 0xffffff00; // user mask to apply to addresses
+ for (nr = 0; nr < 0x10; nr++) {
+ //address += ~nr; // semi-random addresses.
+ //address += 1;
+ address += 0x00000100;
+ //address += 0x11111111;
+ //address += 7;
+ //address += 8;
+ //address += 256;
+ //address += 65536;
+ //address += 131072;
+ //address += 0x00100010; // this increment kills tw3 !
+ count_hash_results(hash_id (address & mask), counts_id); // 0.69s / 100M
+ count_hash_results(hash_tw1(address & mask), counts_tw1); // 1.04s / 100M
+ count_hash_results(hash_tw2(address & mask), counts_tw2); // 1.13s / 100M
+ count_hash_results(hash_tw3(address & mask), counts_tw3); // 1.01s / 100M
+ count_hash_results(hash_bj6(address & mask), counts_bj6); // 1.07s / 100M
+ count_hash_results(hash_bj7(address & mask), counts_bj7); // 1.20s / 100M
+ /* adding the original address after the hash reduces the error
+ * rate in the presence of very small data sets (eg: 16 source
+ * addresses for 8 servers). In this case, bj7 is very good.
+ */
+ count_hash_results(hash_bj6(address & mask)+(address&mask), counts_bj6x); // 1.07s / 100M
+ count_hash_results(hash_bj7(address & mask)+(address&mask), counts_bj7x); // 1.20s / 100M
+ }
+
+ dump_hash_results("hash_id", counts_id);
+ dump_hash_results("hash_tw1", counts_tw1);
+ dump_hash_results("hash_tw2", counts_tw2);
+ dump_hash_results("hash_tw3", counts_tw3);
+ dump_hash_results("hash_bj6", counts_bj6);
+ dump_hash_results("hash_bj6x", counts_bj6x);
+ dump_hash_results("hash_bj7", counts_bj7);
+ dump_hash_results("hash_bj7x", counts_bj7x);
+ return 0;
+}
--- /dev/null
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <netinet/in.h>
+#include <arpa/inet.h>
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <fcntl.h>
+
+int main(int argc, char **argv) {
+ char *addr;
+ int port;
+ int sock;
+ struct sockaddr_in saddr;
+ const struct linger nolinger = { .l_onoff = 1, .l_linger = 0 };
+
+ if (argc < 4) {
+ fprintf(stderr,
+ "usage : %s <addr> <port> <string>\n"
+ " This will connect to TCP port <addr>:<port> and send string <string>\n"
+ " then immediately reset.\n",
+ argv[0]);
+ exit(1);
+ }
+
+ addr = argv[1];
+ port = atoi(argv[2]);
+
+ sock = socket(AF_INET, SOCK_STREAM, 0);
+ if (sock < 0) {
+ perror("socket");
+ exit(1);
+ }
+ memset(&saddr, 0, sizeof(saddr));
+ saddr.sin_addr.s_addr = inet_addr(addr);
+ saddr.sin_port = htons(port);
+ saddr.sin_family = AF_INET;
+
+ if (connect(sock, (struct sockaddr *)&saddr, sizeof(saddr)) < 0) {
+ perror("connect");
+ exit(1);
+ }
+
+ send(sock, argv[3], strlen(argv[3]), MSG_DONTWAIT | MSG_NOSIGNAL);
+ setsockopt(sock, SOL_SOCKET, SO_LINGER, (struct linger *) &nolinger, sizeof(struct linger));
+ close(sock);
+ exit(0);
+}
--- /dev/null
+willy@pcw:haproxy-1.1.17-pre3$ cat /proc/net/sockstat
+sockets: used 117
+TCP: inuse 82 orphan 15 tw 561836 alloc 88 mem 75
+UDP: inuse 13
+RAW: inuse 0
+FRAG: inuse 0 memory 0
+
--- /dev/null
+# This config file aims to trigger all error detection cases in the ACL
+# expression parser related to the fetch arguments.
+
+# silence some warnings
+defaults
+ mode http
+ timeout client 1s
+ timeout server 1s
+ timeout connect 1s
+
+frontend 1
+ bind :10000
+
+ # missing fetch method in ACL expression '(arg)'.
+ block if { (arg) }
+
+ # unknown fetch method 'blah' in ACL expression 'blah(arg)'.
+ block if { blah(arg) }
+
+ # missing closing ')' after arguments to fetch keyword 'req.hdr' in ACL expression 'req.hdr('.
+ block if { req.hdr( }
+
+ # cannot be triggered : "returns type of fetch method '%s' is unknown"
+
+ # fetch method 'always_true' : no argument supported, but got 'arg' in ACL expression 'always_true(arg)'.
+ block if { always_true(arg) }
+
+ # fetch method 'req.hdr' : failed to parse 'a' as type 'signed integer' at position 2 in ACL expression 'req.hdr(a,a)'.
+ block if { req.hdr(a,a) }
+
+ # in argument to 'payload_lv', payload length must be > 0.
+ block if { payload_lv(0,0) }
+
+ # ACL keyword 'payload_lv' : expected type 'unsigned integer' at position 1, but got nothing.
+ block if { payload_lv }
+
--- /dev/null
+global
+ maxconn 200
+ #debug
+ #daemon
+
+defaults
+ mode http
+ contimeout 50s
+ clitimeout 50s
+ srvtimeout 50s
+
+frontend b11 :11000,:11001
+
+frontend b12 :12000-12009,:12020-12029
+
+#frontend b13 ::13000,::13001
+
+frontend b14 :::14000,:::14001
+
+frontend b15 *:15000,*:15001
+
+frontend b16 0.0.0.0:16000,0.0.0.0:16001
+
+listen l21
+ bind :21000,:21001
+
+listen l22
+ bind :22000-22009,:22020-22029
+
+#listen l23
+# bind ::23000,::23001
+
+listen l24
+ bind :::24000,:::24001
+
+listen l25
+ bind *:25000,*:25001
+
+listen l26
+ bind 0.0.0.0:26000,0.0.0.0:26001
+
+listen l35 :35000
+ server s1 :80
+ #server s2 ::80
+ server s3 :::80
+ server s4 *:80
+ server s5 0.0.0.0:80
+ server s5 0::0:80
+
+listen l36 :36000
+ server s1 1.1.1.1:80
+ server s2 1::1:80
+ server s3 ::1.1.1.1:80
+ server s4 localhost:80
+# server s5 localhost6:80
+
+listen l37 :37000
+ server s1 1.1.1.1
+ server s2 1::1:
+ server s3 ::1.1.1.1:
+ server s4 localhost
+# server s5 localhost6
+
+listen l38 :38000
+ server s1 1.1.1.1:+1
+ server s2 1::1:+1
+ server s3 ::1.1.1.1:+1
+ server s4 localhost:+1
+# server s5 localhost6:+1
+
+listen l39 :39000
+ server s1 1.1.1.1 check addr 2.2.2.2
+ server s2 1::1: check addr 2::2:
+ server s3 ::1.1.1.1: check addr ::2.2.2.2:
+ server s4 ::1.1.1.1: check addr localhost
+# server s5 ::1.1.1.1: check addr localhost6
+
+listen l40 :40000
+ server s1 1.1.1.1 source 0.0.0.0
+ server s2 1.1.1.1 source :1-10
+ server s3 1.1.1.1 source :::1-10
+ server s3 1.1.1.1 source 0::0:1-10
+ server s3 1.1.1.1 source ::0.0.0.0:1-10
+
--- /dev/null
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include "proto/arg.h"
+
+int main(int argc, char **argv)
+{
+ int nbargs, err_arg, mask;
+ struct arg *argp;
+ char *err_msg = NULL;
+ const char *err_ptr = NULL;
+
+ if (argc < 2) {
+ printf("Usage: %s arg_list [arg_mask]\n"
+ " mask defaults to 0x86543290\n"
+ " eg: %s 10k,+20,Host,1.2.3.4,24,::5.6.7.8,120s\n", *argv, *argv);
+ return 1;
+ }
+
+ mask = ARG7(0,SIZE,SINT,STR,IPV4,MSK4,IPV6,TIME);
+ if (argc >= 3)
+ mask = atoll(argv[2]);
+
+ printf("Using mask=0x%08x\n", mask);
+ nbargs = make_arg_list(argv[1], strlen(argv[1]), mask,
+ &argp, &err_msg, &err_ptr, &err_arg);
+
+ printf("nbargs=%d\n", nbargs);
+ if (nbargs < 0) {
+ printf("err_msg=%s\n", err_msg); free(err_msg);
+ printf("err_ptr=%s (str+%d)\n", err_ptr, (int)(err_ptr - argv[1]));
+ printf("err_arg=%d\n", err_arg);
+ return 1;
+ }
+
+ if (nbargs > 0) {
+ int arg;
+
+ for (arg = 0; arg < nbargs; arg++)
+ printf("arg %d: type=%d, int=0x%08x\n",
+ arg, argp[arg].type, *(int*)&argp[arg].data.uint);
+ }
+ return 0;
+}
--- /dev/null
+# This is a test configuration.
+# It is used to check that the backlog queue works as expected.
+
+global
+ maxconn 200
+ stats timeout 3s
+
+frontend backlog_def
+ mode http
+ timeout client 15s
+ maxconn 100
+ bind :8000
+ option httpclose
+
+frontend backlog_max
+ mode http
+ timeout client 15s
+ maxconn 100
+ backlog 100000
+ bind :8001
+ option httpclose
+
--- /dev/null
+# This is a test configuration.
+# It is used to involve the various http-check expect features. It queries
+# a local web server for an object which is called the same as the keyword.
+
+global
+ maxconn 500
+ stats socket /tmp/sock1 mode 600 level admin
+ stats timeout 3000
+ stats maxconn 2000
+
+defaults
+ mode http
+ retries 1
+ option redispatch
+ timeout connect 1000
+ timeout client 5000
+ timeout server 5000
+ maxconn 400
+ option http-server-close
+
+listen stats
+ bind :8080
+ stats uri /
+
+backend chk-exp-status-nolb
+ # note: 404 should not produce an error here, just a soft-stop
+ balance roundrobin
+ option httpchk GET /status
+ http-check disable-on-404
+ http-check expect status 200
+ server s1 127.0.0.1:80 check inter 1000
+
+backend chk-nexp-status-nolb
+ balance roundrobin
+ option httpchk GET /status
+ http-check disable-on-404
+ http-check expect ! status 200
+ server s1 127.0.0.1:80 check inter 1000
+
+backend chk-exp-status
+ balance roundrobin
+ option httpchk GET /status
+ http-check expect status 200
+ server s1 127.0.0.1:80 check inter 1000
+
+backend chk-nexp-status
+ balance roundrobin
+ option httpchk GET /status
+ http-check expect ! status 200
+ server s1 127.0.0.1:80 check inter 1000
+
+backend chk-exp-rstatus
+ balance roundrobin
+ option httpchk GET /rstatus
+ http-check expect rstatus ^2[0-9][0-9]
+ server s1 127.0.0.1:80 check inter 1000
+
+backend chk-nexp-rstatus
+ balance roundrobin
+ option httpchk GET /rstatus
+ http-check expect ! rstatus ^2[0-9][0-9]
+ server s1 127.0.0.1:80 check inter 1000
+
+backend chk-exp-string
+ balance roundrobin
+ option httpchk GET /string
+ http-check expect string this\ is\ ok
+ server s1 127.0.0.1:80 check inter 1000
+
+backend chk-nexp-string
+ balance roundrobin
+ option httpchk GET /string
+ http-check expect ! string this\ is\ ok
+ server s1 127.0.0.1:80 check inter 1000
+
+backend chk-exp-rstring
+ balance roundrobin
+ option httpchk GET /rstring
+ http-check expect rstring this\ is\ ok
+ server s1 127.0.0.1:80 check inter 1000
+
+backend chk-nexp-rstring
+ balance roundrobin
+ option httpchk GET /rstring
+ http-check expect ! rstring this\ is\ ok
+ server s1 127.0.0.1:80 check inter 1000
+
--- /dev/null
+# This is a test configuration.
+# It is used to check the various connection modes
+
+global
+ maxconn 100
+
+defaults
+ mode http
+ timeout client 10000
+ timeout server 10000
+ timeout connect 10000
+ balance roundrobin
+
+listen httpclose
+ option httpclose
+ bind :8001
+ server srv 127.0.0.1:8080
+ reqadd X-request:\ mode=httpclose
+ rspadd X-response:\ mode=httpclose
+
+listen server-close
+ option http-server-close
+ bind :8002
+ server srv 127.0.0.1:8080
+ reqadd X-request:\ mode=server-close
+ rspadd X-response:\ mode=server-close
+
+listen httpclose_server-close
+ option httpclose
+ option http-server-close
+ bind :8003
+ server srv 127.0.0.1:8080
+ reqadd X-request:\ mode=httpclose+server-close
+ rspadd X-response:\ mode=httpclose+server-close
+
+listen forceclose
+ option forceclose
+ bind :8004
+ server srv 127.0.0.1:8080
+ reqadd X-request:\ mode=forceclose
+ rspadd X-response:\ mode=forceclose
+
--- /dev/null
+# Test configuration. It listens on port 8000, forwards to
+# local ports 8001/8002 as two distinct servers, and relies
+# on a server running on local port 8080 to handle the request.
+
+# Example of request that must be handled (taken from RFC2965 and mangled
+# a bit) :
+# POST /acme/process HTTP/1.1
+# Cookie: $Version="1";
+# Customer="WILE_E_COYOTE"; $Path="/acme";
+# SID= s2 ; $Path="/";
+# Part_Number="Rocket_Launcher_0001"; $Path="/acme";
+# Shipping="FedEx"; $Path="/acme"
+#
+#
+#
+
+global
+ maxconn 500
+ stats socket /tmp/sock1 mode 777 level admin
+ stats timeout 1d
+
+defaults
+ mode http
+ option http-server-close
+ timeout client 30s
+ timeout server 30s
+ timeout connect 5s
+
+listen test
+ log 127.0.0.1 local0
+ option httplog
+ bind :8000
+ cookie SID insert indirect
+ server s1 127.0.0.1:8001 cookie s1
+ server s2 127.0.0.1:8002 cookie s2
+ capture cookie toto= len 10
+
+listen s1
+ bind 127.0.0.1:8001
+ server srv 127.0.0.1:8080
+ reqadd x-haproxy-used:\ s1
+
+listen s2
+ bind 127.0.0.1:8002
+ server srv 127.0.0.1:8080
+ reqadd x-haproxy-used:\ s2
+
--- /dev/null
+# Test configuration. It listens on port 8000, forwards to
+# local ports 8001/8002 as two distinct servers, and relies
+# on a server running on local port 8080 to handle the request.
+
+global
+ maxconn 500
+ stats socket /tmp/sock1 mode 777 level admin
+ stats timeout 1d
+
+defaults
+ mode http
+ option http-server-close
+ timeout client 30s
+ timeout server 30s
+ timeout connect 5s
+
+listen test
+ log 127.0.0.1 local0
+ option httplog
+ bind :8000
+ cookie SID insert
+ server s1 127.0.0.1:8001 cookie s1
+ server s2 127.0.0.1:8002 cookie s2
+ capture cookie toto= len 10
+
+listen s1
+ bind 127.0.0.1:8001
+ server srv 127.0.0.1:8080
+ reqadd x-haproxy-used:\ s1
+
+listen s2
+ bind 127.0.0.1:8002
+ server srv 127.0.0.1:8080
+ reqadd x-haproxy-used:\ s2
+
--- /dev/null
+# Test configuration. It listens on port 8000, forwards to
+# local ports 8001/8002 as two distinct servers, and relies
+# on a server running on local port 8080 to handle the request.
+
+global
+ maxconn 500
+ stats socket /tmp/sock1 mode 777 level admin
+ stats timeout 1d
+
+defaults
+ mode http
+ option http-server-close
+ timeout client 30s
+ timeout server 30s
+ timeout connect 5s
+
+listen test
+ log 127.0.0.1 local0
+ option httplog
+ bind :8000
+ cookie SID
+ server s1 127.0.0.1:8001 cookie s1
+ server s2 127.0.0.1:8002 cookie s2
+ capture cookie toto= len 10
+
+listen s1
+ bind 127.0.0.1:8001
+ server srv 127.0.0.1:8080
+ reqadd x-haproxy-used:\ s1
+
+listen s2
+ bind 127.0.0.1:8002
+ server srv 127.0.0.1:8080
+ reqadd x-haproxy-used:\ s2
+
--- /dev/null
+# Test configuration. It listens on port 8000, forwards to
+# local ports 8001/8002 as two distinct servers, and relies
+# on a server running on local port 8080 to handle the request.
+
+global
+ maxconn 500
+ stats socket /tmp/sock1 mode 777 level admin
+ stats timeout 1d
+
+defaults
+ mode http
+ option http-server-close
+ timeout client 30s
+ timeout server 30s
+ timeout connect 5s
+
+listen test
+ log 127.0.0.1 local0
+ option httplog
+ bind :8000
+ cookie SID prefix
+ server s1 127.0.0.1:8001 cookie s1
+ server s2 127.0.0.1:8002 cookie s2
+ capture cookie toto= len 10
+
+listen s1
+ bind 127.0.0.1:8001
+ server srv 127.0.0.1:8080
+ reqadd x-haproxy-used:\ s1
+
+listen s2
+ bind 127.0.0.1:8002
+ server srv 127.0.0.1:8080
+ reqadd x-haproxy-used:\ s2
+
--- /dev/null
+# Test configuration. It listens on port 8000, forwards to
+# local ports 8001/8002 as two distinct servers, and relies
+# on a server running on local port 8080 to handle the request.
+
+global
+ maxconn 500
+ stats socket /tmp/sock1 mode 777 level admin
+ stats timeout 1d
+
+defaults
+ mode http
+ option http-server-close
+ timeout client 30s
+ timeout server 30s
+ timeout connect 5s
+
+listen test
+ log 127.0.0.1 local0
+ option httplog
+ bind :8000
+ cookie SID rewrite
+ server s1 127.0.0.1:8001 cookie s1
+ server s2 127.0.0.1:8002 cookie s2
+ capture cookie toto= len 10
+
+listen s1
+ bind 127.0.0.1:8001
+ server srv 127.0.0.1:8080
+ reqadd x-haproxy-used:\ s1
+
+listen s2
+ bind 127.0.0.1:8002
+ server srv 127.0.0.1:8080
+ reqadd x-haproxy-used:\ s2
+
--- /dev/null
+# This is a test configuration.
+# It makes use of a farm built from 4 active servers and 4 backup servers,
+# all listening to different IP addresses on port 80. Health-checks are
+# TCP only on port 81 so that iptables rules permit easy selection of which
+# servers are enabled or disabled. It checks for the file /alive, and disables
+# the server if the response is 404.
+#
+# Create statistics counters this way :
+#
+# iptables -N http
+# iptables -A OUTPUT -p tcp --syn --dport 80 -j http
+# for i in $(seq 1 8); do iptables -A http -d 127.0.0.$i; done
+# iptables -A http -d 127.0.0.0/24
+#
+# Consult the statistics using iptables this way:
+#
+# iptables --line-numbers -nxvL http
+# iptables -Z http
+#
+# Block individual servers like this :
+# iptables -I INPUT -p tcp --dport 81 -d 127.0.0.1 -j DROP
+#
+# Enable each server like this :
+# touch $SRV_ROOT/alive
+#
+# Disable each server like this :
+# rm -f $SRV_ROOT/alive
+#
+
+global
+ maxconn 1000
+ stats socket /tmp/sock1 mode 600
+ stats timeout 3000
+ stats maxconn 2000
+
+listen sample1
+ mode http
+ retries 1
+ redispatch
+ contimeout 1000
+ clitimeout 5000
+ srvtimeout 5000
+ maxconn 40000
+ bind :8080
+ cookie SRV insert indirect nocache
+ #balance source
+ balance roundrobin
+ option allbackups
+ server act1 127.0.0.1:80 cookie a1 weight 10 check port 81 inter 1000 fall 4
+ server act2 127.0.0.2:80 cookie a2 weight 20 check port 81 inter 1000 fall 4
+ server act3 127.0.0.3:80 cookie a3 weight 30 check port 81 inter 1000 fall 4
+ server act4 127.0.0.4:80 cookie a4 weight 40 check port 81 inter 1000 fall 4
+ server bck1 127.0.0.5:80 cookie b1 weight 10 check port 81 inter 1000 fall 4 backup
+ server bck2 127.0.0.6:80 cookie b2 weight 20 check port 81 inter 1000 fall 4 backup
+ server bck3 127.0.0.7:80 cookie b3 weight 30 check port 81 inter 1000 fall 4 backup
+ server bck4 127.0.0.8:80 cookie b4 weight 40 check port 81 inter 1000 fall 4 backup
+ option httpclose
+ stats uri /stats
+ stats refresh 5
+ option httpchk GET /alive
+ http-check disable-on-404
--- /dev/null
+# This is a test configuration.
+# It exercises some critical state machine transitions. Start it in debug
+# mode with syslogd listening on a UDP socket, facility local0.
+
+########### test#0001: process_cli(), read or write error
+# setup :
+# server = (printf "HTTP/1.0 200 OK\r\n\r\n";cat /dev/zero) | nc -lp4000
+# client = (printf "GET / HTTP/1.0\r\n\r\n";cat) | nc 127.1 8001 >/dev/null
+# action : kill client during transfer (Ctrl-C)
+# result : both sides must close, and logs must report "CD" flags with
+# valid timers and counters. (note: high CPU usage expected)
+# example: 0/0/3/0/5293 200 76420076 - - CD--
+
+########### test#0002: process_cli(), read closed on client side first
+# setup :
+# server = (printf "HTTP/1.0 200 OK\r\n\r\n";cat /dev/zero) | nc -lp4000
+# client = (printf "GET / HTTP/1.0\r\n\r\n";cat) | nc 127.1 8001
+# action : end client output during transfer (Ctrl-D)
+# result : client exits, log is emitted immediately, server must be terminated
+# by hand. Logs indicate "----" with correct timers.
+# example: 0/0/3/0/5293 200 76420076 - - ----
+# note : test#0003 is triggered immediately after this test
+
+########### test#0003: process_cli(), read closed on server side first
+# setup :
+# server = (printf "HTTP/1.0 200 OK\r\n\r\n";cat) | nc -lp4000
+# client = (printf "GET / HTTP/1.0\r\n\r\n";cat) | nc 127.1 8001
+# action : end server output during transfer (Ctrl-D)
+# result : server exits, log is emitted immediately, client must be terminated
+# by hand. Logs indicate "----" with correct timers.
+# example: 0/0/3/0/5293 200 76420076 - - ----
+# note : test#0002 is triggered immediately after this test
+
+########### test#0004: process_cli(), read timeout on client side
+# setup :
+# server = (printf "HTTP/1.0 200 OK\r\n\r\n";cat) | nc -lp4000
+# client = (printf "GET / HTTP/1.0\r\n\r\n";cat) | nc 127.1 8001
+# action : wait at least 7 seconds and check the logs
+# result : log is emitted immediately, client and server must be terminated
+# by hand. Logs indicate "cD--" with correct timers.
+# example: 0/0/1/0/7006 200 19 - - cD--
+# note : test#0003 is triggered immediately after this test
+
+########### test#0005: ability to restart read after a request buffer full
+# setup :
+# server = (printf "HTTP/1.0 200 OK\r\n\r\n"; cat) | nc -lp4000
+# client = (printf "GET / HTTP/1.0\r\n\r\n";cat 8k.txt;cat) | nc 127.1 8001
+# action : enter data on the client, press enter, then Ctrl-D
+# result : data transferred to the server, client exits, log is emitted
+# immediately, server must be terminated by hand. Logs indicate
+# "----" with correct timers.
+# example: 0/0/0/0/3772 200 19 - - ----
+
+########### test#0006: ability to restart read after a request buffer partially full
+# setup :
+# server = (printf "HTTP/1.0 200 OK\r\n\r\n"; cat) | nc -lp4000
+# client = (printf "GET / HTTP/1.0\r\n\r\n";cat 7k.txt;cat) | nc 127.1 8001
+# action : enter data on the client, press enter, then Ctrl-D
+# result : data transferred to the server, client exits, log is emitted
+# immediately, server must be terminated by hand. Logs indicate
+# "----" with correct timers.
+# example: 0/0/0/0/3772 200 19 - - ----
+
+########### test#0007: ability to restart read after a response buffer full
+# setup :
+# server = (printf "HTTP/1.0 200 OK\r\n\r\n";cat 8k.txt; cat) | nc -lp4000
+# client = (printf "GET / HTTP/1.0\r\n\r\n";cat) | nc 127.1 8001
+# action : enter data on the server, press enter, then Ctrl-D
+# result : data transferred to the client, server exits, log is emitted
+# immediately, client must be terminated by hand. Logs indicate
+# "----" with correct timers.
+# example: 0/0/0/0/3087 200 8242 - - ----
+
+########### test#0008: ability to restart read after a response buffer partially full
+# setup :
+# server = (printf "HTTP/1.0 200 OK\r\n\r\n";cat 7k.txt; cat) | nc -lp4000
+# client = (printf "GET / HTTP/1.0\r\n\r\n";cat) | nc 127.1 8001
+# action : enter data on the server, press enter, then Ctrl-D
+# result : data transferred to the client, server exits, log is emitted
+# immediately, client must be terminated by hand. Logs indicate
+# "----" with correct timers.
+# example: 0/0/0/0/5412 200 7213 - - ----
+
+########### test#0009: process_cli(), read timeout on empty request
+# setup :
+# server = (printf "HTTP/1.0 200 OK\r\n\r\n";cat) | nc -lp4000
+# client = nc 127.1 8001
+# action : wait at least 5 seconds and check the logs
+# result : client returns 408, log is emitted immediately, server must be
+# terminated by hand. Logs indicate "cR--" with correct timers
+# and "<BADREQ>"
+# example: -1/-1/-1/-1/5000 408 212 - - cR--
+
+########### test#0010: process_cli(), read timeout on client headers
+# setup :
+# server = (printf "HTTP/1.0 200 OK\r\n\r\n";cat) | nc -lp4000
+# client = (printf "GET / HTTP/1.0\r\nTest: test\r\n";cat) | nc 127.1 8001
+# action : wait at least 5 seconds and check the logs
+# result : client returns 408, log is emitted immediately, both must be
+# terminated by hand. Logs indicate "cR--" with correct timers
+# and "<BADREQ>"
+# example: -1/-1/-1/-1/5004 408 212 - - cR--
+
+########### test11: process_cli(), read abort on empty request
+# setup :
+# server = (printf "HTTP/1.0 200 OK\r\n\r\n";cat) | nc -lp4000
+# client = echo -n | nc 127.1 8001
+# action : just check the logs after the client immediately returns
+# result : client returns 400, log is emitted immediately, server must be
+# terminated by hand. Logs indicate "CR--" with correct timers
+# and "<BADREQ>"
+# example: -1/-1/-1/-1/0 400 187 - - CR--
+
+########### test12: process_cli(), read abort on client headers
+# setup :
+# server = (printf "HTTP/1.0 200 OK\r\n\r\n";cat) | nc -lp4000
+# client = (printf "GET / HTTP/1.0\r\nTest: test\r\n") | nc 127.1 8001
+# action : just check the logs after the client immediately returns
+# result : client returns 400, log is emitted immediately, server must be
+# terminated by hand. Logs indicate "CR--" with correct timers
+# and "<BADREQ>"
+# example: -1/-1/-1/-1/0 400 187 - - CR--
+
+########### test13: process_srv(), read timeout on empty response
+# setup :
+# server = nc -lp4000
+# client = (printf "GET / HTTP/1.0\r\n\r\n"; cat) | nc 127.1 8001
+# action : wait 9 seconds and check response
+# result : client exits with 504, log is emitted immediately, client must be
+# terminated by hand. Logs indicate "sH--" with correct timers.
+# example: 0/0/0/-1/8002 504 194 - - sH--
+
+########### test14: process_srv(), closed client during response timeout
+# setup :
+# server = nc6 --half-close -lp4000
+# client = (printf "GET / HTTP/1.0\r\n\r\n"; sleep 1) | nc 127.1 8001
+# action : wait 9 seconds and check response
+# result : client exits with 504, log is emitted immediately, server exits
+# immediately. Logs indicate "sH--" with correct timers, which
+# is 8s regardless of the "sleep 1".
+# example: 0/0/0/-1/8002 504 194 - - sH--
+
+########### test15: process_srv(), client close not causing server close
+# setup :
+# server = nc6 -lp4000
+# client = (printf "GET / HTTP/1.0\r\n\r\n"; sleep 1) | nc 127.1 8001
+# action : wait 9 seconds and check response
+# result : client exits with 504, log is emitted immediately, server exits
+# immediately. Logs indicate "sH--" with correct timers, which
+# is 8s regardless of the "sleep 1".
+# example: 0/0/0/-1/8002 504 194 - - sH--
+
+########### test16: process_srv(), read timeout on server headers
+# setup :
+# server = (printf "HTTP/1.0 200 OK\r\nTest: test\r\n";cat) | nc -lp4000
+# client = (printf "GET / HTTP/1.0\r\n\r\n"; cat) | nc 127.1 8001
+# action : wait 8 seconds and check response
+# result : client exits with 504, log is emitted immediately, both must be
+# terminated by hand. Logs indicate "sH--" with correct timers.
+# example: 0/0/0/-1/8004 504 223 - - sH--
+
+########### test17: process_srv(), connection time-out
+# setup :
+# config = retries 1
+# server = none
+# client = (printf "GET / HTTP/1.0\r\n\r\n";cat) | nc 127.1 8002
+# action : wait at least 12 seconds and check the logs
+# result : client returns 503 and must be terminated by hand. Log is emitted
+# immediately. Logs indicate "sC--" with correct timers.
+# example: 0/0/-1/-1/12001 503 212 - - sC--
+
+########### test18: process_srv(), client close during connection time-out
+# setup :
+# config = retries 1
+# server = none
+# client = (printf "GET / HTTP/1.0\r\n\r\n";sleep 1) | nc 127.1 8002
+# action : wait at least 12 seconds and check the logs
+# result : client returns 503 and automatically closes. Log is emitted
+# immediately. Logs indicate "sC--" with correct timers.
+# example: 0/0/-1/-1/12001 503 212 - - sC--
+
+########### test19: process_srv(), immediate server close after empty response
+# setup :
+# server = echo -n | nc -lp4000
+# client = (printf "GET / HTTP/1.0\r\n\r\n") | nc 127.1 8001
+# action : just check logs after immediate return.
+# result : client and server exit with 502, log is emitted immediately. Logs
+# indicate "SH--" with correct timers.
+# example: 0/0/0/-1/0 502 204 - - SH--
+
+########### test20: process_srv(), immediate server close after incomplete headers
+# setup :
+# server = (printf "HTTP/1.0 200 OK\r\nTest: test\r\n") | nc -lp4000
+# client = (printf "GET / HTTP/1.0\r\n\r\n") | nc 127.1 8001
+# action : just check logs after immediate return.
+# result : client and server exit with 502, log is emitted immediately. Logs
+# indicate "SH--" with correct timers.
+# example: 0/0/0/-1/0 502 233 - - SH--
+
+########### test21: process_srv(), immediate server close after complete headers
+# setup :
+# server = (printf "HTTP/1.0 200 OK\r\nTest: test\r\n\r\n") | nc -lp4000
+# client = (printf "GET / HTTP/1.0\r\n\r\n") | nc 127.1 8001
+# action : just check logs after immediate return.
+# result : client and server exit with 200, log is emitted immediately. Logs
+# indicate "----" with correct timers.
+# example: 0/0/0/0/0 200 31 - - ----
+
+########### test22: process_srv(), timeout on request body
+# setup :
+# server = (printf "HTTP/1.0 200 OK\r\nTest: test\r\n\r\n") | nc -lp4000
+# client = (printf "POST / HTTP/1.0\r\nContent-length: 20\r\n\r\n";cat) | nc 127.1 8001
+# action : wait 7s for the request body to timeout.
+# result : The server receives the request and responds immediately with 200.
+# Log is emitted after the timeout occurs. Logs indicate "cD--" with correct timers.
+# example: 1/0/0/0/7004 200 31 - - cD--
+
+########### test23: process_srv(), client close on request body
+# setup :
+# server = (printf "HTTP/1.0 200 OK\r\nTest: test\r\n\r\n") | nc -lp4000
+# client = (printf "POST / HTTP/1.0\r\nContent-length: 20\r\n\r\n";cat) | nc 127.1 8001
+# action : wait 2s then press Ctrl-C on the client
+# result : The server immediately aborts and the logs are emitted immediately with a 400.
+# Logs indicate "CD--" with correct timers.
+# example: 1/0/0/0/1696 400 31 - - CD--
+
+########### test24: process_srv(), server close on request body
+# setup :
+# server = (printf "HTTP/1.0 200 OK\r\nTest: test\r\n\r\n") | nc -lp4000
+# client = (printf "POST / HTTP/1.0\r\nContent-length: 20\r\n\r\n";cat) | nc 127.1 8001
+# action : wait 2s then press Ctrl-C on the server and press enter a few times on the client
+# result : The logs are emitted immediately with a 200 (server's incomplete response).
+# Logs indicate "SD--" with correct timers. Client must be terminated by hand.
+# example: 1/0/0/0/2186 200 31 - - SD--
+
+########### test25: process_srv(), client timeout on request body when url_param is used
+# setup :
+# server = none
+# client = (printf "POST / HTTP/1.0\r\nContent-length: 20\r\n\r\n";cat) | nc 127.1 8003
+# action : wait 5s for the request body to timeout.
+# result : The client receives a 408 and closes. The log is emitted immediately.
+# Logs indicate "cD--" with correct timers.
+# example: 0/-1/-1/-1/5003 408 212 - - cD--
+
+########### test26: process_srv(), client abort on request body when url_param is used
+# setup :
+# server = none
+# client = (printf "POST / HTTP/1.0\r\nContent-length: 20\r\n\r\n";cat) | nc 127.1 8003
+# action : wait 2s then press Ctrl-C on the client
+# result : The logs are emitted immediately with a 400.
+# Logs indicate "CD--" with correct timers.
+# example: 594/-1/-1/-1/594 400 187 - - CD--
+
+
+
+global
+ maxconn 100
+ log 127.0.0.1 local0
+ debug
+
+# connect to port 8000 to consult statistics
+listen stats
+ timeout client 5s
+ mode http
+ bind :8000
+ balance
+ stats uri /stat
+
+# connect port 8001 to localhost:4000
+listen frt8001
+ log global
+ bind :8001
+ mode http
+ option httplog
+ maxconn 100
+
+ timeout http-request 5s
+ timeout connect 6s
+ timeout client 7s
+ timeout server 8s
+ timeout queue 9s
+
+ balance roundrobin
+ server srv4000 127.0.0.1:4000
+
+# connect port 8002 to nowhere
+listen frt8002
+ log global
+ bind :8002
+ mode http
+ option httplog
+ maxconn 100
+
+ timeout http-request 5s
+ timeout connect 6s
+ timeout client 7s
+ timeout server 8s
+ timeout queue 9s
+ retries 1
+
+ balance url_param foo check_post
+ server srv4000 192.168.255.255:4000
+
+# connect port 8003 to localhost:4000 with url_param
+listen frt8003
+ log global
+ bind :8003
+ mode http
+ option httplog
+ maxconn 100
+
+ timeout http-request 5s
+ timeout connect 6s
+ timeout client 7s
+ timeout server 8s
+ timeout queue 9s
+
+ balance url_param foo check_post
+ server srv4000 127.0.0.1:4000
+
+
+# listen frt8002
+# log global
+# bind :8002
+# mode http
+# option httplog
+# maxconn 100
+#
+# timeout http-request 5s
+# timeout connect 6s
+# timeout client 7s
+# timeout server 8s
+# timeout queue 9s
+#
+# #tcp-request inspect-delay 4s
+# acl white_list src 127.0.0.2
+#
+# tcp-request content accept if white_list
+# tcp-request content reject if !REQ_CONTENT
+#
+# balance url_param foo check_post
+# #balance url_param foo #check_post
+# server srv4000 127.0.0.1:4000
+#
+# # control activity this way
+# stats uri /stat
+#
--- /dev/null
+# This is a test configuration.
+# It makes use of a farm built from 4 active servers and 4 backup servers,
+# all listening to different IP addresses on port 80. Health-checks are
+# TCP only on port 81 so that iptables rules permit easy selection of which
+# servers are enabled or disabled.
+#
+# Create statistics counters this way :
+#
+# iptables -N http
+# iptables -A OUTPUT -p tcp --syn --dport 80 -j http
+# for i in $(seq 1 8); do iptables -A http -d 127.0.0.$i; done
+# iptables -A http -d 127.0.0.0/24
+#
+# Consult the statistics using iptables this way:
+#
+# iptables --line-numbers -nxvL http
+# iptables -Z http
+#
+#
+# Block individual servers like this :
+# iptables -I INPUT -p tcp --dport 81 -d 127.0.0.1 -j DROP
+#
+
+global
+ maxconn 1000
+ stats socket /tmp/sock1 mode 600
+ stats timeout 3000
+ stats maxconn 2000
+
+listen sample1
+ mode tcp
+ retries 1
+ redispatch
+ contimeout 1000
+ clitimeout 120000
+ srvtimeout 120000
+ maxconn 40000
+ bind :8080
+ balance leastconn
+ option allbackups
+ server act1 127.0.0.1:80 weight 10 maxconn 200 check inter 1000 fall 1
+ server act2 127.0.0.2:80 weight 20 maxconn 200 check inter 1000 fall 1
+ server act3 127.0.0.3:80 weight 30 maxconn 200 check inter 1000 fall 1
+ server act4 127.0.0.4:80 weight 40 maxconn 200 check inter 1000 fall 1
+ server bck1 127.0.0.5:80 weight 10 check inter 1000 fall 1 backup
+ server bck2 127.0.0.6:80 weight 20 check inter 1000 fall 1 backup
+ server bck3 127.0.0.7:80 weight 30 check inter 1000 fall 1 backup
+ server bck4 127.0.0.8:80 weight 40 check inter 1000 fall 1 backup
+ option httpclose
+
+listen sample2
+ mode http
+ contimeout 1000
+ clitimeout 50000
+ srvtimeout 50000
+ maxconn 40000
+ bind :8081
+ balance leastconn
+ option httpclose
+ stats uri /stats
+ stats refresh 5
--- /dev/null
+# This is a test configuration.
+# It makes use of a farm built from 4 active servers and 4 backup servers,
+# all listening to different IP addresses on port 80. Health-checks are
+# TCP only on port 81 so that iptables rules permit easy selection of which
+# servers are enabled or disabled.
+#
+# Create statistics counters this way :
+#
+# iptables -N http
+# iptables -A OUTPUT -p tcp --syn --dport 80 -j http
+# for i in $(seq 1 8); do iptables -A http -d 127.0.0.$i; done
+# iptables -A http -d 127.0.0.0/24
+#
+# Consult the statistics using iptables this way:
+#
+# iptables --line-numbers -nxvL http
+# iptables -Z http
+#
+#
+# Block individual servers like this :
+# iptables -I INPUT -p tcp --dport 81 -d 127.0.0.1 -j DROP
+#
+
+global
+ maxconn 1000
+ stats socket /tmp/sock1 mode 600
+ stats timeout 3000
+ stats maxconn 2000
+
+listen sample1
+ mode http
+ retries 1
+ redispatch
+ contimeout 1000
+ clitimeout 5000
+ srvtimeout 5000
+ maxconn 40000
+ bind :8080
+ balance roundrobin
+ option allbackups
+ server act1 127.0.0.1:80 weight 10 check port 81 inter 1000 fall 1
+ server act2 127.0.0.2:80 weight 20 check port 81 inter 1000 fall 1
+ server act3 127.0.0.3:80 weight 30 check port 81 inter 1000 fall 1
+ server act4 127.0.0.4:80 weight 40 check port 81 inter 1000 fall 1
+ server bck1 127.0.0.5:80 weight 10 check port 81 inter 1000 fall 1 backup
+ server bck2 127.0.0.6:80 weight 20 check port 81 inter 1000 fall 1 backup
+ server bck3 127.0.0.7:80 weight 30 check port 81 inter 1000 fall 1 backup
+ server bck4 127.0.0.8:80 weight 40 check port 81 inter 1000 fall 1 backup
+ option httpclose
+ stats uri /stats
+ stats refresh 5
--- /dev/null
+# Test Rewriting Host header
+global
+ maxconn 100
+
+defaults
+ mode http
+ timeout client 10000
+ timeout server 10000
+ timeout connect 10000
+ balance roundrobin
+
+listen send-name-silo-id
+ bind :8001
+
+ # Set the test conditions: Add a new header
+ http-send-name-header X-Silo-Id
+ server srv-silo1 127.0.0.1:8080
+
+ # Add headers containing the correct values for test verification
+ reqadd X-test-server-name-header:\ X-Silo-Id
+ reqadd X-test-server-name-value:\ srv-silo1
+
+listen send-name-host
+ bind :8002
+
+ # Set the test conditions: Replace an existing header
+ http-send-name-header host
+ server srv-host 127.0.0.1:8080
+
+ # Add headers containing the correct values for test verification
+ reqadd X-test-server-name-header:\ Host
+ reqadd X-test-server-name-value:\ srv-host
+
--- /dev/null
+# This is a test configuration. It listens on port 8025, waits for an incoming
+# connection, and applies the following rules :
+# - if the address is in the white list, then accept it and forward the
+# connection to the server (local port 25)
+# - if the address is in the black list, then immediately drop it
+# - otherwise, wait up to 35 seconds. If the client talks during this time,
+# drop the connection.
+# - then accept the connection if it passes all the tests.
+#
+# Note that the rules are evaluated at every new chunk of data read, and at
+# delay expiration. Rules which apply to incomplete data don't match as long
+# as the timer has not expired.
+
+listen block-fake-mailers
+ log 127.0.0.1:514 local0
+ option tcplog
+
+ mode tcp
+ bind :8025
+ timeout client 60s
+ timeout server 60s
+ timeout queue 60s
+ timeout connect 5s
+
+ tcp-request inspect-delay 35s
+
+ acl white_list src 127.0.0.2
+ acl black_fast src 127.0.0.3 # those ones are immediately rejected
+ acl black_slow src 127.0.0.4 # those ones are rejected after a delay
+
+ tcp-request content accept if white_list
+ tcp-request content reject if black_fast
+ tcp-request content reject if black_slow WAIT_END
+ tcp-request content reject if REQ_CONTENT
+ # note that it is possible to wait for the end of the analysis period
+ # before rejecting undesired contents
+ # tcp-request content reject if REQ_CONTENT WAIT_END
+
+	# on Linux with the transparent proxy patch, it's useful to reuse the client's IP
+ # source 0.0.0.0 usesrc clientip
+
+ balance roundrobin
+ server mail 127.0.0.1:25
+
--- /dev/null
+# This is a test configuration. It listens on port 8443, waits for an incoming
+# connection, and applies the following rules :
+# - if the address is in the white list, then accept it and forward the
+# connection to the server (local port 443)
+# - if the address is in the black list, then immediately drop it
+# - otherwise, wait up to 3 seconds for valid SSL data to come in. If those
+# data are identified as SSL, the connection is immediately accepted, and
+# if they are definitely identified as non-SSL, the connection is rejected,
+# which will happen upon timeout if they still don't match SSL.
+
+listen block-non-ssl
+ log 127.0.0.1:514 local0
+ option tcplog
+
+ mode tcp
+ bind :8443
+ timeout client 6s
+ timeout server 6s
+ timeout connect 6s
+
+ tcp-request inspect-delay 4s
+
+ acl white_list src 127.0.0.2
+ acl black_list src 127.0.0.3
+
+ # note: SSLv2 is not used anymore, SSLv3.1 is TLSv1.
+ acl obsolete_ssl req_ssl_ver lt 3
+ acl correct_ssl req_ssl_ver 3.0-3.1
+ acl invalid_ssl req_ssl_ver gt 3.1
+
+ tcp-request content accept if white_list
+ tcp-request content reject if black_list
+ tcp-request content reject if !correct_ssl
+
+ balance roundrobin
+ server srv1 127.0.0.1:443
+
--- /dev/null
+# This is a test configuration.
+# It presents 4 instances using fixed and relative port assignments from
+# ports 8001 to 8004. TCP only is used, and the destination address is not
+# relevant (use netstat -an).
+
+global
+ maxconn 100
+
+defaults
+ mode tcp
+ clitimeout 15000
+ srvtimeout 15000
+ contimeout 15000
+ balance roundrobin
+
+listen fixed
+ bind :8001
+ server s1 1.1.1.1:8001
+
+listen same
+ bind :8002
+ server s2 1.1.1.2
+
+listen plus1000
+ bind :8003
+ server s3 1.1.1.3:+1000
+
+listen minus1000
+ bind :8004
+ server s4 1.1.1.4:-1000
+
--- /dev/null
+# This is the smallest possible configuration. It does not
+# bind to any port, and is enough to check the polling
+# system in use without disturbing any running process.
+#
+# To be used this way: haproxy -V -f test-pollers.cfg
+
+global
+ #nosepoll
+ #noepoll
+ #nopoll
+
+# fake backend to pass the config checks
+backend dummy
+ balance
+
--- /dev/null
+# This is a test configuration.
+# It is used to check the redirect keyword.
+
+global
+ maxconn 400
+ stats timeout 3s
+
+listen sample1
+ mode http
+ retries 1
+ option redispatch
+ timeout client 1m
+ timeout connect 5s
+ timeout server 1m
+ maxconn 400
+ bind :8000
+
+ acl url_test1 url_reg test1
+ acl url_test2 url_reg test2
+ acl url_test3 url_reg test3
+ acl url_test4 url_reg test4
+
+ acl seen hdr_sub(cookie) SEEN=1
+
+ redirect location /abs/test code 301 if url_test1
+ redirect prefix /pfx/test code 302 if url_test2
+ redirect prefix /pfx/test code 303 drop-query if url_test3
+
+ redirect prefix / code 302 set-cookie SEEN=1 if url_test4 !seen
+ redirect location / code 302 clear-cookie SEEN= if url_test4 seen
+
+ ### unconditional redirection
+ #redirect location https://example.com/ if TRUE
+
+ ### parser must detect invalid syntaxes below
+ #redirect
+ #redirect blah
+ #redirect location
+ #redirect location /abs/test
+ #redirect location /abs/test code
+ #redirect location /abs/test code 300
+ #redirect location /abs/test code 301
+ #redirect location /abs/test code 304
+
+ balance roundrobin
+ server act1 127.0.0.1:80 weight 10
+ option httpclose
+ stats uri /stats
+ stats refresh 5000ms
--- /dev/null
+# This config file aims to trigger all error detection cases in the sample
+# fetch expression parser related to the fetch arguments.
+
+# silence some warnings
+defaults
+ mode http
+ timeout client 1s
+ timeout server 1s
+ timeout connect 1s
+
+frontend 1
+ bind :10000
+
+ # missing fetch method
+ http-request add-header name %[(arg)]
+
+ # unknown fetch method 'blah'
+ http-request add-header name %[blah(arg)]
+
+ # missing closing ')' after arguments to fetch keyword 'req.hdr'
+ http-request add-header name %[req.hdr(]
+
+	# cannot be triggered : "returns type of fetch method '%s' is unknown"
+
+ # fetch method 'always_true' : no argument supported, but got 'arg'
+ http-request add-header name %[always_true(arg)]
+
+ # fetch method 'req.hdr' : failed to parse 'a' as 'signed integer' at position 2
+ http-request add-header name %[req.hdr(a,a)]
+
+ # invalid args in fetch method 'payload_lv' : payload length must be > 0
+ http-request add-header name %[payload_lv(0,0)]
+
+ # fetch method 'payload_lv' : expected type 'unsigned integer' at position 1, but got nothing
+ http-request add-header name %[payload_lv]
+
--- /dev/null
+# This config file aims to trigger all error detection cases in the sample
+# fetch expression parser related to the sample converters.
+
+# silence some warnings
+defaults
+ mode http
+ timeout client 1s
+ timeout server 1s
+ timeout connect 1s
+
+frontend 1
+ bind :10000
+
+ # report "missing comma after fetch keyword %s"
+ http-request add-header name %[hdr(arg))]
+
+ # report "missing comma after conv keyword %s"
+ http-request add-header name %[hdr(arg),ipmask(2))]
+
+ # report "unknown conv method '%s'"
+ http-request add-header name %[hdr(arg),blah]
+
+ # report "syntax error: missing ')' after conv keyword '%s'"
+ http-request add-header name %[hdr(arg),ipmask(2]
+
+ # no way to report "returns type of conv method '%s' is unknown"
+
+ # "conv method '%s' cannot be applied"
+ http-request add-header name %[wait_end,ipmask(2)]
+
+ # "conv method '%s' does not support any args"
+ http-request add-header name %[hdr(arg),upper()]
+
+ # "invalid arg %d in conv method '%s' : %s"
+ http-request add-header name %[hdr(arg),ipmask(a)]
+
+ # "invalid args in conv method '%s' : %s"
+ http-request add-header name %[hdr(arg),map()]
+
+ # "missing args for conv method '%s'"
+ http-request add-header name %[hdr(arg),ipmask]
+
--- /dev/null
+# This is a test configuration.
+# It requires a mysql server running on local port 3306.
+
+global
+ maxconn 500
+
+defaults
+ contimeout 1000
+ clitimeout 5000
+ srvtimeout 5000
+ retries 1
+ option redispatch
+
+listen stats
+ bind :8080
+ mode http
+ stats enable
+ stats uri /stats
+
+listen mysql_1
+ bind :3307
+ mode tcp
+ balance roundrobin
+ option mysql-check user haproxy
+ server srv1 127.0.0.1:3306 check port 3306 inter 1000 fall 1
+# server srv2 127.0.0.2:3306 check port 3306 inter 1000 fall 1
+# server srv3 127.0.0.3:3306 check port 3306 inter 1000 fall 1
+# server srv4 127.0.0.4:3306 check port 3306 inter 1000 fall 1
+
--- /dev/null
+global
+ stats socket /tmp/sock1 mode 666 level admin
+ stats timeout 2d
+ #log 127.0.0.1:1000 local0 # good
+ #log 127.0.0.1 local0 # good
+ #log 127.0.0.1:1001-1002 local0
+ #log 127.0.0.1:-1003 local0
+ #log 127.0.0.1:+1004 local0
+
+defaults
+ timeout client 5s
+ timeout server 5s
+ timeout connect 5s
+ #log 127.0.0.1:1000 local0 # good
+ #log 127.0.0.1 local0 # good
+ #log 127.0.0.1:1001-1002 local0
+ #log 127.0.0.1:-1003 local0
+ #log 127.0.0.1:+1004 local0
+
+listen p
+ mode http
+ bind :8001
+ bind *:8002
+ bind :::8003
+ bind 127.0.0.1:8004
+ #bind ::127.0.0.1:8005
+ bind :::8006
+ bind 127.0.0.1:8007-8009
+ #bind 127.0.0.1:8010-
+ #bind 127.0.0.1:-8011
+ #bind 127.0.0.1:+8012
+
+ stats uri /stat
+ #dispatch 192.168.0.176:8005 # good
+ #dispatch 192.168.0.176
+ #dispatch 192.168.0.176:8001-8002
+ #dispatch 192.168.0.176:-8003
+ #dispatch 192.168.0.176:+8004
+
+ server s1 192.168.0.176:80 check addr 192.168.0.176:8000 source 192.168.0.1:10000-59999 check
+
+ #server s1 192.168.0.176:80 addr 192.168.0.176:-8000 source 192.168.0.1:10000-59999 check
+ #server s1 192.168.0.176:80 addr 192.168.0.176:+8000 source 192.168.0.1:10000-59999 check
+ #server s1 192.168.0.176:80 addr 192.168.0.176:8000-8001 source 192.168.0.1:10000-59999 check
+
+ #source 192.168.0.1:8000 # good
+ #source 192.168.0.1:-8000
+ #source 192.168.0.1:+8000
+ #source 192.168.0.1:8000-8001
+
+ #source 192.168.0.1:8000-8001
+ #source 192.168.0.1 usesrc 192.168.0.1:8000-8001
+
+peers truc
+ #peer machin1 127.0.0.1 # good
+ #peer machin2 127.0.0.2:1000-2000
+ #peer machin2 127.0.0.3:-2000
+ #peer machin2 127.0.0.4:+2000
+ #peer machin2 127.0.0.5:2000
+
--- /dev/null
+# This is a test configuration.
+# It is used to check that time units are correctly parsed.
+
+global
+ maxconn 1000
+ stats timeout 3s
+
+listen sample1
+ mode http
+ retries 1
+ redispatch
+ contimeout 5s
+ clitimeout 15m
+ srvtimeout 15m
+ maxconn 40000
+ bind :8080
+ balance roundrobin
+ option allbackups
+ server act1 127.0.0.1:80 weight 10 check port 81 inter 500ms fall 1
+ server act2 127.0.0.2:80 weight 20 check port 81 inter 500ms fall 1
+ server act3 127.0.0.3:80 weight 30 check port 81 inter 500ms fall 1
+ option httpclose
+ stats uri /stats
+ stats refresh 5000ms
--- /dev/null
+# This is a test configuration.
+# It is used to check that time units are correctly parsed.
+
+global
+ maxconn 1000
+ stats timeout 3s
+
+listen sample1
+ mode http
+ retries 1
+ redispatch
+ timeout client 15m
+ timeout http-request 6s
+ timeout tarpit 20s
+ timeout queue 60s
+ timeout connect 5s
+ timeout server 15m
+ maxconn 40000
+ bind :8000
+ balance roundrobin
+ option allbackups
+ server act1 127.0.0.1:80 weight 10 check port 81 inter 500ms fall 1
+ server act2 127.0.0.2:80 weight 20 check port 81 inter 500ms fall 1
+ server act3 127.0.0.3:80 weight 30 check port 81 inter 500ms fall 1
+ option httpclose
+ stats uri /stats
+ stats refresh 5000ms
--- /dev/null
+# This is a test configuration.
+# It exercises the "url_param" balance algorithm. It looks for
+# a URL parameter named "foo".
+
+global
+ maxconn 100
+ log 127.0.0.1 local0
+
+listen vip1
+ log global
+ option httplog
+ bind :8000
+ mode http
+ maxconn 100
+ clitimeout 5000
+ contimeout 5000
+ srvtimeout 5000
+ balance url_param foo
+ server srv1 127.0.0.1:80
+ server srv2 127.0.0.1:80
+
+ # control activity this way
+ stats uri /stat
+
+listen vip2
+ log global
+ option httplog
+ bind :8001
+ mode http
+ maxconn 100
+ clitimeout 5000
+ contimeout 5000
+ srvtimeout 5000
+ balance url_param foo check_post
+ server srv1 127.0.0.1:80
+ server srv2 127.0.0.1:80
+
+ # control activity this way
+ stats uri /stat
+
--- /dev/null
+# This is a test configuration.
+# It checks instances, servers and acl names.
+
+listen valid_listen1
+ bind :8000
+ clitimeout 5000
+ contimeout 5000
+ srvtimeout 5000
+ balance roundrobin
+ server srv1 127.0.0.1:80
+
+frontend www.valid-frontend.net:80
+ bind :8001
+ clitimeout 5000
+ acl host_www.valid-frontend.net:80 hdr(host) www.valid-frontend.net
+
+backend Valid_BK-1
+ contimeout 5000
+ srvtimeout 5000
+ balance roundrobin
+ server bk1_srv-1:80 127.0.0.1:80
+
+frontend www.test-frontend.net:8002/invalid
+ bind :8002
+ clitimeout 5000
+
+frontend ft1_acl
+ bind :8003
+ clitimeout 5000
+ acl invalid!name url /
+
+backend bk2_srv
+ contimeout 5000
+ srvtimeout 5000
+ balance roundrobin
+ server bk2/srv-1 127.0.0.1:80
+
--- /dev/null
+#include <unistd.h>
+
+/* writes a minimal HTTP response in small chunks to exercise header parsing */
+int main(void) {
+	write(1, "HTTP", 4);
+	write(1, "/1.0", 4);
+	write(1, " 200", 4);
+	write(1, " OK\r\n", 5);
+	write(1, "TOTO: 1\r\n", 9);
+	write(1, "Hdr2: 2\r\n", 9);
+	write(1, "Hdr3:", 5);
+	write(1, " 2\r\n", 4);
+	write(1, "\r\n\r\n", 4);
+	write(1, "DATA\r\n", 6);
+	return 0;
+}
+
--- /dev/null
+/*
+  This file only shows how many operations a hash is able to handle.
+  It doesn't show the distribution nor the collisions.
+
+ gcc -Wall -O3 -o test_hashes test_hashes.c
+ ./test_hashes |sort -k 3 -r
+ */
+#include <sys/time.h>
+#include <time.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <string.h>
+#include <stdio.h>
+#include <stdint.h>	/* uint32_t/uint16_t used by SuperFastHash below */
+
+
+static struct timeval timeval_current(void)
+{
+ struct timeval tv;
+ gettimeofday(&tv, NULL);
+ return tv;
+}
+
+static double timeval_elapsed(struct timeval *tv)
+{
+ struct timeval tv2 = timeval_current();
+ return (tv2.tv_sec - tv->tv_sec) +
+ (tv2.tv_usec - tv->tv_usec)*1.0e-6;
+}
+
+#define HAPROXY_BACKENDS 4
+
+unsigned long haproxy_uri_hash(char *uri, int uri_len){
+
+ unsigned long hash = 0;
+ int c;
+
+ while (uri_len--) {
+ c = *uri++;
+ if (c == '?')
+ break;
+ hash = c + (hash << 6) + (hash << 16) - hash;
+ }
+
+ return hash%HAPROXY_BACKENDS; /* I assume 4 active backends */
+} /* end haproxy_hash() */
+
+/*
+ * http://eternallyconfuzzled.com/tuts/algorithms/jsw_tut_hashing.aspx
+ */
+unsigned sax_hash ( void *key, int len )
+{
+ unsigned char *p = key;
+ unsigned h = 0;
+ int i;
+
+ for ( i = 0; i < len; i++ )
+ h ^= ( h << 5 ) + ( h >> 2 ) + p[i];
+
+ return h;
+}
+
+#include <arpa/inet.h>
+/* len 4 for ipv4 and 16 for ipv6 */
+unsigned int haproxy_server_hash(const char *addr, int len){
+ unsigned int h, l;
+ l = h = 0;
+
+ while ((l + sizeof (int)) <= len) {
+ h ^= ntohl(*(unsigned int *)(&addr[l]));
+ l += sizeof (int);
+ }
+ return h %= HAPROXY_BACKENDS;
+}/* end haproxy_server_hash() */
+
+
+int hashpjw(const void *key) {
+
+ const char *ptr;
+ unsigned int val;
+ /*********************************************************************
+ * *
+ * Hash the key by performing a number of bit operations on it. *
+ * *
+ *********************************************************************/
+
+ val = 0;
+ ptr = key;
+
+ while (*ptr != '\0') {
+
+ int tmp;
+
+ val = (val << 4) + (*ptr);
+
+ if((tmp = (val & 0xf0000000))) {
+ val = val ^ (tmp >> 24);
+ val = val ^ tmp;
+ }
+ ptr++;
+ }/* end while */
+
+ return val;
+}/* end hashpjw */
+
+static unsigned long
+hash_djbx33(
+ register unsigned char *key,
+ register size_t len)
+{
+ register unsigned long hash = 5381;
+
+ /* the hash unrolled eight times */
+ for (; len >= 8; len -= 8) {
+ hash = ((hash << 5) + hash) + *key++;
+ hash = ((hash << 5) + hash) + *key++;
+ hash = ((hash << 5) + hash) + *key++;
+ hash = ((hash << 5) + hash) + *key++;
+ hash = ((hash << 5) + hash) + *key++;
+ hash = ((hash << 5) + hash) + *key++;
+ hash = ((hash << 5) + hash) + *key++;
+ hash = ((hash << 5) + hash) + *key++;
+ }
+ switch (len) {
+ case 7: hash = ((hash << 5) + hash) + *key++; /* fallthrough... */
+ case 6: hash = ((hash << 5) + hash) + *key++; /* fallthrough... */
+ case 5: hash = ((hash << 5) + hash) + *key++; /* fallthrough... */
+ case 4: hash = ((hash << 5) + hash) + *key++; /* fallthrough... */
+ case 3: hash = ((hash << 5) + hash) + *key++; /* fallthrough... */
+ case 2: hash = ((hash << 5) + hash) + *key++; /* fallthrough... */
+ case 1: hash = ((hash << 5) + hash) + *key++; break;
+ default: /* case 0: */ break;
+ }
+ return hash;
+}
+
+typedef unsigned long int ub4; /* unsigned 4-byte quantities */
+typedef unsigned char ub1; /* unsigned 1-byte quantities */
+
+ub4 bernstein(ub1 *key, ub4 len, ub4 level){
+ ub4 hash = level;
+ ub4 i;
+ for (i=0; i<len; ++i) hash = 33*hash + key[i];
+ return hash;
+}
+
+/*
+ * http://www.azillionmonkeys.com/qed/hash.html
+ */
+#undef get16bits
+#if (defined(__GNUC__) && defined(__i386__)) || defined(__WATCOMC__) \
+ || defined(_MSC_VER) || defined (__BORLANDC__) || defined (__TURBOC__)
+#define get16bits(d) (*((const uint16_t *) (d)))
+#endif
+
+#if !defined (get16bits)
+#define get16bits(d) ((((uint32_t)(((const uint8_t *)(d))[1])) << 8)\
+ +(uint32_t)(((const uint8_t *)(d))[0]) )
+#endif
+
+uint32_t SuperFastHash (const char * data, int len) {
+uint32_t hash = len, tmp;
+int rem;
+
+ if (len <= 0 || data == NULL) return 0;
+
+ rem = len & 3;
+ len >>= 2;
+
+ /* Main loop */
+ for (;len > 0; len--) {
+ hash += get16bits (data);
+ tmp = (get16bits (data+2) << 11) ^ hash;
+ hash = (hash << 16) ^ tmp;
+ data += 2*sizeof (uint16_t);
+ hash += hash >> 11;
+ }
+
+ /* Handle end cases */
+ switch (rem) {
+ case 3: hash += get16bits (data);
+ hash ^= hash << 16;
+ hash ^= data[sizeof (uint16_t)] << 18;
+ hash += hash >> 11;
+ break;
+ case 2: hash += get16bits (data);
+ hash ^= hash << 11;
+ hash += hash >> 17;
+ break;
+ case 1: hash += *data;
+ hash ^= hash << 10;
+ hash += hash >> 1;
+ }
+
+ /* Force "avalanching" of final 127 bits */
+ hash ^= hash << 3;
+ hash += hash >> 5;
+ hash ^= hash << 4;
+ hash += hash >> 17;
+ hash ^= hash << 25;
+ hash += hash >> 6;
+
+ return hash;
+}
+
+/*
+ * This variant is about 15% faster.
+ */
+uint32_t SuperFastHash2 (const char * data, int len) {
+uint32_t hash = len, tmp;
+int rem;
+
+ if (len <= 0 || data == NULL) return 0;
+
+ rem = len & 3;
+ len >>= 2;
+
+ /* Main loop */
+ for (;len > 0; len--) {
+ register uint32_t next;
+ next = get16bits(data+2);
+ hash += get16bits(data);
+ tmp = (next << 11) ^ hash;
+ hash = (hash << 16) ^ tmp;
+ data += 2*sizeof (uint16_t);
+ hash += hash >> 11;
+ }
+
+ /* Handle end cases */
+ switch (rem) {
+ case 3: hash += get16bits (data);
+ hash ^= hash << 16;
+ hash ^= data[sizeof (uint16_t)] << 18;
+ hash += hash >> 11;
+ break;
+ case 2: hash += get16bits (data);
+ hash ^= hash << 11;
+ hash += hash >> 17;
+ break;
+ case 1: hash += *data;
+ hash ^= hash << 10;
+ hash += hash >> 1;
+ }
+
+ /* Force "avalanching" of final 127 bits */
+ hash ^= hash << 3;
+ hash += hash >> 5;
+ hash ^= hash << 4;
+ hash += hash >> 17;
+ hash ^= hash << 25;
+ hash += hash >> 6;
+
+ return hash;
+}
+
+/*
+ * 32 bit FNV-0 hash type
+ */
+typedef unsigned long Fnv32_t;
+
+/*
+ * fnv_32a_str - perform a 32 bit Fowler/Noll/Vo FNV-1a hash on a string
+ *
+ * input:
+ * str - string to hash
+ * hval - previous hash value or 0 if first call
+ *
+ * returns:
+ * 32 bit hash as a static hash type
+ *
+ * NOTE: To use the recommended 32 bit FNV-1a hash, use FNV1_32A_INIT as the
+ * hval arg on the first call to either fnv_32a_buf() or fnv_32a_str().
+ */
+Fnv32_t
+fnv_32a_str(char *str, Fnv32_t hval)
+{
+ unsigned char *s = (unsigned char *)str; /* unsigned string */
+
+ /*
+ * FNV-1a hash each octet in the buffer
+ */
+ while (*s) {
+
+ /* xor the bottom with the current octet */
+ hval ^= (Fnv32_t)*s++;
+
+/* #define NO_FNV_GCC_OPTIMIZATION */
+ /* multiply by the 32 bit FNV magic prime mod 2^32 */
+#if defined(NO_FNV_GCC_OPTIMIZATION)
+ /*
+ * 32 bit magic FNV-1a prime
+ */
+#define FNV_32_PRIME ((Fnv32_t)0x01000193)
+ hval *= FNV_32_PRIME;
+#else
+ hval += (hval<<1) + (hval<<4) + (hval<<7) + (hval<<8) + (hval<<24);
+#endif
+ }
+
+ /* return our new hash value */
+ return hval;
+}
+
+/*
+ * from lookup3.c, by Bob Jenkins, May 2006, Public Domain.
+ */
+
+#define rot(x,k) (((x)<<(k)) | ((x)>>(32-(k))))
+
+/*
+-------------------------------------------------------------------------------
+mix -- mix 3 32-bit values reversibly.
+
+This is reversible, so any information in (a,b,c) before mix() is
+still in (a,b,c) after mix().
+
+If four pairs of (a,b,c) inputs are run through mix(), or through
+mix() in reverse, there are at least 32 bits of the output that
+are sometimes the same for one pair and different for another pair.
+This was tested for:
+* pairs that differed by one bit, by two bits, in any combination
+ of top bits of (a,b,c), or in any combination of bottom bits of
+ (a,b,c).
+* "differ" is defined as +, -, ^, or ~^. For + and -, I transformed
+ the output delta to a Gray code (a^(a>>1)) so a string of 1's (as
+ is commonly produced by subtraction) look like a single 1-bit
+ difference.
+* the base values were pseudorandom, all zero but one bit set, or
+ all zero plus a counter that starts at zero.
+
+Some k values for my "a-=c; a^=rot(c,k); c+=b;" arrangement that
+satisfy this are
+ 4 6 8 16 19 4
+ 9 15 3 18 27 15
+ 14 9 3 7 17 3
+Well, "9 15 3 18 27 15" didn't quite get 32 bits diffing
+for "differ" defined as + with a one-bit base and a two-bit delta. I
+used http://burtleburtle.net/bob/hash/avalanche.html to choose
+the operations, constants, and arrangements of the variables.
+
+This does not achieve avalanche. There are input bits of (a,b,c)
+that fail to affect some output bits of (a,b,c), especially of a. The
+most thoroughly mixed value is c, but it doesn't really even achieve
+avalanche in c.
+
+This allows some parallelism. Read-after-writes are good at doubling
+the number of bits affected, so the goal of mixing pulls in the opposite
+direction as the goal of parallelism. I did what I could. Rotates
+seem to cost as much as shifts on every machine I could lay my hands
+on, and rotates are much kinder to the top and bottom bits, so I used
+rotates.
+-------------------------------------------------------------------------------
+*/
+#define mix(a,b,c) \
+{ \
+ a -= c; a ^= rot(c, 4); c += b; \
+ b -= a; b ^= rot(a, 6); a += c; \
+ c -= b; c ^= rot(b, 8); b += a; \
+ a -= c; a ^= rot(c,16); c += b; \
+ b -= a; b ^= rot(a,19); a += c; \
+ c -= b; c ^= rot(b, 4); b += a; \
+}
+
+/*
+-------------------------------------------------------------------------------
+final -- final mixing of 3 32-bit values (a,b,c) into c
+
+Pairs of (a,b,c) values differing in only a few bits will usually
+produce values of c that look totally different. This was tested for
+* pairs that differed by one bit, by two bits, in any combination
+ of top bits of (a,b,c), or in any combination of bottom bits of
+ (a,b,c).
+* "differ" is defined as +, -, ^, or ~^. For + and -, I transformed
+ the output delta to a Gray code (a^(a>>1)) so a string of 1's (as
+ is commonly produced by subtraction) look like a single 1-bit
+ difference.
+* the base values were pseudorandom, all zero but one bit set, or
+ all zero plus a counter that starts at zero.
+
+These constants passed:
+ 14 11 25 16 4 14 24
+ 12 14 25 16 4 14 24
+and these came close:
+ 4 8 15 26 3 22 24
+ 10 8 15 26 3 22 24
+ 11 8 15 26 3 22 24
+-------------------------------------------------------------------------------
+*/
+#define final(a,b,c) \
+{ \
+ c ^= b; c -= rot(b,14); \
+ a ^= c; a -= rot(c,11); \
+ b ^= a; b -= rot(a,25); \
+ c ^= b; c -= rot(b,16); \
+ a ^= c; a -= rot(c,4); \
+ b ^= a; b -= rot(a,14); \
+ c ^= b; c -= rot(b,24); \
+}
+
+/*
+--------------------------------------------------------------------
+ This works on all machines. To be useful, it requires
+ -- that the key be an array of uint32_t's, and
+ -- that the length be the number of uint32_t's in the key
+
+ The function hashword() is identical to hashlittle() on little-endian
+ machines, and identical to hashbig() on big-endian machines,
+ except that the length has to be measured in uint32_ts rather than in
+ bytes. hashlittle() is more complicated than hashword() only because
+ hashlittle() has to dance around fitting the key bytes into registers.
+--------------------------------------------------------------------
+*/
+uint32_t hashword(
+const uint32_t *k, /* the key, an array of uint32_t values */
+size_t length, /* the length of the key, in uint32_ts */
+uint32_t initval) /* the previous hash, or an arbitrary value */
+{
+ uint32_t a,b,c;
+
+ /* Set up the internal state */
+ a = b = c = 0xdeadbeef + (((uint32_t)length)<<2) + initval;
+
+ /*------------------------------------------------- handle most of the key */
+ while (length > 3)
+ {
+ a += k[0];
+ b += k[1];
+ c += k[2];
+ mix(a,b,c);
+ length -= 3;
+ k += 3;
+ }
+
+ /*------------------------------------------- handle the last 3 uint32_t's */
+ switch(length) /* all the case statements fall through */
+ {
+ case 3 : c+=k[2];
+ case 2 : b+=k[1];
+ case 1 : a+=k[0];
+ final(a,b,c);
+ case 0: /* case 0: nothing left to add */
+ break;
+ }
+ /*------------------------------------------------------ report the result */
+ return c;
+}
+
+/* from the K&R book, page 139 */
+#define HASHSIZE 101
+
+unsigned kr_hash(char *s){
+ unsigned hashval;
+
+ for(hashval = 0; *s != '\0';s++)
+ hashval = *s + 31 * hashval;
+
+ return hashval % HASHSIZE;
+
+} /* end kr_hash() */
+
+unsigned fnv_hash ( void *key, int len )
+{
+ unsigned char *p = key;
+ unsigned h = 2166136261;
+ int i;
+
+ for ( i = 0; i < len; i++ )
+ h = ( h * 16777619 ) ^ p[i];
+
+ return h;
+}
+
+unsigned oat_hash ( void *key, int len )
+{
+ unsigned char *p = key;
+ unsigned h = 0;
+ int i;
+
+ for ( i = 0; i < len; i++ ) {
+ h += p[i];
+ h += ( h << 10 );
+ h ^= ( h >> 6 );
+ }
+
+ h += ( h << 3 );
+ h ^= ( h >> 11 );
+ h += ( h << 15 );
+
+ return h;
+}
+
+unsigned wt_hash ( void *key, int len )
+{
+ unsigned char *p = key;
+ unsigned h = 0x783c965aUL;
+ unsigned step = 16;
+
+ for (; len > 0; len--) {
+ h ^= *p * 9;
+ p++;
+ h = (h << step) | (h >> (32-step));
+ step ^= h;
+ step &= 0x1F;
+ }
+
+ return h;
+}
+
+
+#define run_test(fct, args) { \
+ unsigned long loop, count; \
+ volatile unsigned long result; \
+ double delta; \
+ struct timeval tv; \
+ fprintf(stderr, "Starting %s\n", #fct); \
+ tv = timeval_current(); \
+ count = 0; \
+ do { \
+ delta = timeval_elapsed(&tv); \
+ for (loop = 0; loop < 1000; loop++) { \
+ result = fct args; \
+ count++; \
+ } \
+ } while (delta < 1.0); \
+ fprintf(stdout, "%-20s : %10.0f run/sec\n", #fct, count/delta); \
+ fflush(stdout); \
+}
+
+int main(){
+
+ char **start;
+ int len;
+
+ char *urls[] = {
+ "http://www.microsoft.com/shared/core/1/webservice/navigation.asmx/DisplayDownlevelNavHtml",
+ NULL
+ };
+
+ start = urls;
+ len = strlen(*urls);
+
+ run_test(wt_hash, (*urls, len));
+ run_test(SuperFastHash2, (*urls, len));
+ run_test(SuperFastHash, (*urls, len));
+ run_test(haproxy_uri_hash, (*urls, len));
+ run_test(haproxy_server_hash, (*urls, len));
+ run_test(hashpjw, (*urls));
+ run_test(hash_djbx33, ((unsigned char *)*urls, len));
+ run_test(bernstein, ((unsigned char *)*urls, len, 4));
+ run_test(fnv_32a_str, (*urls, 0));
+ run_test(hashword, ((const uint32_t *)*urls,strlen(*urls),0));
+ run_test(kr_hash, (*urls));
+ run_test(sax_hash, (*urls, len));
+ run_test(fnv_hash, (*urls, len));
+ run_test(oat_hash, (*urls, len));
+
+ return 0;
+
+}/* end main() */
--- /dev/null
+/*
+ * Contribution from Aleksandar Lazic <al-haproxy@none.at>
+ *
+ * Build with :
+ * gcc -O2 -o test_pools test_pools.c
+ * or with dlmalloc too :
+ * gcc -O2 -o test_pools -D USE_DLMALLOC test_pools.c -DUSE_DL_PREFIX dlmalloc.c
+ */
+
+#include <sys/time.h>
+#include <time.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <string.h>
+#include <stdio.h>
+
+static struct timeval timeval_current(void)
+{
+ struct timeval tv;
+ gettimeofday(&tv, NULL);
+ return tv;
+}
+
+static double timeval_elapsed(struct timeval *tv)
+{
+ struct timeval tv2 = timeval_current();
+ return (tv2.tv_sec - tv->tv_sec) +
+ (tv2.tv_usec - tv->tv_usec)*1.0e-6;
+}
+
+#define torture_assert(test, expr, str) if (!(expr)) { \
+ printf("failure: %s [\n%s: Expression %s failed: %s\n]\n", \
+ test, __location__, #expr, str); \
+ return false; \
+}
+
+#define torture_assert_str_equal(test, arg1, arg2, desc) \
+ if (strcmp(arg1, arg2)) { \
+ printf("failure: %s [\n%s: Expected %s, got %s: %s\n]\n", \
+ test, __location__, arg1, arg2, desc); \
+ return false; \
+ }
+
+/* added pools from haproxy */
+#include <stdlib.h>
+
+/*
+ * Returns a pointer to an area of <__len> bytes taken from the pool <__pool>,
+ * or dynamically allocated. In the first case, <__pool> is updated to point
+ * to the next element in the list.
+ */
+#define pool_alloc_from(__pool, __len) \
+({ \
+ void *__p; \
+ if ((__p = (__pool)) == NULL) \
+ __p = malloc(((__len) >= sizeof (void *)) ? \
+ (__len) : sizeof(void *)); \
+ else { \
+ __pool = *(void **)(__pool); \
+ } \
+ __p; \
+})
+
+/*
+ * Puts a memory area back to the corresponding pool.
+ * Items are chained directly through a pointer that
+ * is written in the beginning of the memory area, so
+ * there's no need for any carrier cell. This implies
+ * that each memory area is at least as big as one
+ * pointer.
+ */
+#define pool_free_to(__pool, __ptr) \
+({ \
+ *(void **)(__ptr) = (void *)(__pool); \
+ __pool = (void *)(__ptr); \
+})
+
+/*
+ * Returns a pointer to type <type> taken from the
+ * pool <pool_type> or dynamically allocated. In the
+ * first case, <pool_type> is updated to point to the
+ * next element in the list.
+ */
+#define pool_alloc(type) \
+({ \
+ void *__p; \
+ if ((__p = pool_##type) == NULL) \
+ __p = malloc(sizeof_##type); \
+ else { \
+ pool_##type = *(void **)pool_##type; \
+ } \
+ __p; \
+})
+
+/*
+ * Puts a memory area back to the corresponding pool.
+ * Items are chained directly through a pointer that
+ * is written in the beginning of the memory area, so
+ * there's no need for any carrier cell. This implies
+ * that each memory area is at least as big as one
+ * pointer.
+ */
+#define pool_free(type, ptr) \
+({ \
+ *(void **)ptr = (void *)pool_##type; \
+ pool_##type = (void *)ptr; \
+})
+
+/*
+ * This function destroys a pool by freeing it completely.
+ * This should be called only under extreme circumstances.
+ */
+static inline void pool_destroy(void **pool)
+{
+ void *temp, *next;
+ next = pool;
+ while (next) {
+ temp = next;
+ next = *(void **)temp;
+ free(temp);
+ }
+}
+
+#define sizeof_talloc 1000
+
+/*
+ measure the speed of the haproxy pools versus malloc
+*/
+static bool test_speed1(void)
+{
+ void **pool_talloc = NULL;
+ void *ctx = pool_alloc(talloc);
+ unsigned count;
+ const int loop = 1000;
+ int i;
+ struct timeval tv;
+
+ printf("test: speed [\nhaproxy-pool VS MALLOC SPEED 2\n]\n");
+
+ tv = timeval_current();
+ count = 0;
+ do {
+ void *p1, *p2, *p3;
+ for (i=0;i<loop;i++) {
+ p1 = pool_alloc_from(pool_talloc, 10 + loop % 100);
+ p2 = pool_alloc_from(pool_talloc, strlen("foo bar") + 1);
+ strcpy(p2, "foo bar");
+ p3 = pool_alloc_from(pool_talloc, 300);
+ pool_free_to(pool_talloc,p1);
+ pool_free_to(pool_talloc,p3);
+ pool_free_to(pool_talloc,p2);
+ }
+ count += 3 * loop;
+ } while (timeval_elapsed(&tv) < 5.0);
+
+ fprintf(stderr, "haproxy : %10.0f ops/sec\n", count/timeval_elapsed(&tv));
+
+ pool_destroy(pool_talloc);
+
+ tv = timeval_current();
+ count = 0;
+ do {
+ void *p1, *p2, *p3;
+ for (i=0;i<loop;i++) {
+ p1 = malloc(10 + loop % 100);
+ p2 = malloc(strlen("foo bar") + 1);
+ strcpy(p2, "foo bar");
+ p3 = malloc(300);
+ free(p1);
+ free(p2);
+ free(p3);
+ }
+ count += 3 * loop;
+ } while (timeval_elapsed(&tv) < 5.0);
+ fprintf(stderr, "malloc : %10.0f ops/sec\n", count/timeval_elapsed(&tv));
+
+#ifdef USE_DLMALLOC
+ tv = timeval_current();
+ count = 0;
+ do {
+ void *p1, *p2, *p3;
+ for (i=0;i<loop;i++) {
+ p1 = dlmalloc(10 + loop % 100);
+ p2 = dlmalloc(strlen("foo bar") + 1);
+ strcpy(p2, "foo bar");
+ p3 = dlmalloc(300);
+ dlfree(p1);
+ dlfree(p2);
+ dlfree(p3);
+ }
+ count += 3 * loop;
+ } while (timeval_elapsed(&tv) < 5.0);
+ fprintf(stderr, "dlmalloc: %10.0f ops/sec\n", count/timeval_elapsed(&tv));
+#endif
+
+ printf("success: speed1\n");
+
+ return true;
+}
+
+int main(void)
+{
+ bool ret = test_speed1();
+ if (!ret)
+ return -1;
+ return 0;
+}
--- /dev/null
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <ctype.h>
+#include <sys/time.h>
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <netinet/tcp.h>
+#include <netinet/in.h>
+#include <arpa/inet.h>
+#include <netdb.h>
+#include <fcntl.h>
+#include <errno.h>
+#include <signal.h>
+#include <stdarg.h>
+#include <sys/resource.h>
+#include <time.h>
+#include <regex.h>
+#include <syslog.h>
+
+
+int main(void) {
+	printf("sizeof sockaddr=%zu\n", sizeof(struct sockaddr));
+	printf("sizeof sockaddr_in=%zu\n", sizeof(struct sockaddr_in));
+	printf("sizeof sockaddr_in6=%zu\n", sizeof(struct sockaddr_in6));
+	return 0;
+}
--- /dev/null
+#include <stdio.h>
+#include <string.h>
+#include <arpa/inet.h>
+
+#define NSERV 10
+#define MAXLINE 1000
+
+char line[MAXLINE];
+
+int counts_gd1[NSERV][NSERV];
+static unsigned long hash_gd1(char *uri)
+{
+ unsigned long hash = 0;
+ int c;
+
+ while ((c = *uri++))
+ hash = c + (hash << 6) + (hash << 16) - hash;
+
+ return hash;
+}
+
+int counts_gd2[NSERV][NSERV];
+static unsigned long hash_gd2(char *uri)
+{
+ unsigned long hash = 0;
+ int c;
+
+ while ((c = *uri++)) {
+ if (c == '?' || c == '\n')
+ break;
+ hash = c + (hash << 6) + (hash << 16) - hash;
+ }
+
+ return hash;
+}
+
+
+int counts_gd3[NSERV][NSERV];
+static unsigned long hash_gd3(char *uri)
+{
+ unsigned long hash = 0;
+ int c;
+
+ while ((c = *uri++)) {
+ if (c == '?' || c == '\n')
+ break;
+ hash = c - (hash << 3) + (hash << 15) - hash;
+ }
+
+ return hash;
+}
+
+
+int counts_gd4[NSERV][NSERV];
+static unsigned long hash_gd4(char *uri)
+{
+ unsigned long hash = 0;
+ int c;
+
+ while ((c = *uri++)) {
+ if (c == '?' || c == '\n')
+ break;
+ hash = hash + (hash << 6) - (hash << 15) - c;
+ }
+
+ return hash;
+}
+
+
+int counts_gd5[NSERV][NSERV];
+static unsigned long hash_gd5(char *uri)
+{
+ unsigned long hash = 0;
+ int c;
+
+ while ((c = *uri++)) {
+ if (c == '?' || c == '\n')
+ break;
+ hash = hash + (hash << 2) - (hash << 19) - c;
+ }
+
+ return hash;
+}
+
+
+int counts_gd6[NSERV][NSERV];
+static unsigned long hash_gd6(char *uri)
+{
+ unsigned long hash = 0;
+ int c;
+
+ while ((c = *uri++)) {
+ if (c == '?' || c == '\n')
+ break;
+ hash = hash + (hash << 2) - (hash << 22) - c;
+ }
+
+ return hash;
+}
+
+
+int counts_wt1[NSERV][NSERV];
+static unsigned long hash_wt1(int hsize, char *string) {
+ int bits;
+ unsigned long data, val;
+
+ bits = val = data = 0;
+ while (*string) {
+ if (*string == '?' || *string == '\n')
+ break;
+ data |= ((unsigned long)(unsigned char)*string) << bits;
+ bits += 8;
+ while (bits >= hsize) {
+ val ^= data - (val >> hsize);
+ bits -= hsize;
+ data >>= hsize;
+ }
+ string++;
+ }
+ val ^= data;
+ while (val > ((1 << hsize) - 1)) {
+ val = (val & ((1 << hsize) - 1)) ^ (val >> hsize);
+ }
+ return val;
+}
+
+/*
+ * efficient hash : no duplicates on the first 65536 2-byte values,
+ * and less than 0.1% duplicates on the first 1.6M 3-byte values.
+ */
+int counts_wt2[NSERV][NSERV];
+typedef unsigned int u_int32_t;
+
+static inline u_int32_t shl32(u_int32_t i, int count) {
+ if (count == 32)
+ return 0;
+ return i << count;
+}
+
+static inline u_int32_t shr32(u_int32_t i, int count) {
+ if (count == 32)
+ return 0;
+ return i >> count;
+}
+
+static unsigned int rev32(unsigned int c) {
+ c = ((c & 0x0000FFFF) << 16)| ((c & 0xFFFF0000) >> 16);
+ c = ((c & 0x00FF00FF) << 8) | ((c & 0xFF00FF00) >> 8);
+ c = ((c & 0x0F0F0F0F) << 4) | ((c & 0xF0F0F0F0) >> 4);
+ c = ((c & 0x33333333) << 2) | ((c & 0xCCCCCCCC) >> 2);
+ c = ((c & 0x55555555) << 1) | ((c & 0xAAAAAAAA) >> 1);
+ return c;
+}
+
+int hash_wt2(const char *src, int len) {
+ unsigned int i = 0x3C964BA5; /* as many ones as zeroes */
+ unsigned int j;
+ unsigned int ih, il;
+ int bit;
+
+ while (len--) {
+ j = (unsigned char)*src++;
+ if (j == '?' || j == '\n')
+ break;
+ bit = rev32(j - i);
+ bit = bit - (bit >> 3) + (bit >> 16) - j;
+
+ bit &= 0x1f;
+ ih = shr32(i, bit);
+ il = i & (shl32(1, bit) - 1);
+ i = shl32(il, 32-bit) - ih - ~j;
+ }
+ return i;
+}
+
+
+
+/*
+ * http://www.azillionmonkeys.com/qed/hash.html
+ */
+#undef get16bits
+#if (defined(__GNUC__) && defined(__i386__)) || defined(__WATCOMC__) \
+ || defined(_MSC_VER) || defined (__BORLANDC__) || defined (__TURBOC__)
+#define get16bits(d) (*((const uint16_t *) (d)))
+#endif
+
+#if !defined (get16bits)
+#define get16bits(d) ((((uint32_t)(((const uint8_t *)(d))[1])) << 8)\
+ +(uint32_t)(((const uint8_t *)(d))[0]) )
+#endif
+
+/*
+ * This function has a hole of 11 unused bits in bytes 2 and 3 of each block of
+ * 32 bits.
+ */
+int counts_SuperFastHash[NSERV][NSERV];
+
+uint32_t SuperFastHash (const char * data, int len) {
+uint32_t hash = len, tmp;
+int rem;
+
+ if (len <= 0 || data == NULL) return 0;
+
+ rem = len & 3;
+ len >>= 2;
+
+ /* Main loop */
+ for (;len > 0; len--) {
+ hash += get16bits (data);
+ tmp = (get16bits (data+2) << 11) ^ hash;
+ hash = (hash << 16) ^ tmp;
+ data += 2*sizeof (uint16_t);
+ hash += hash >> 11;
+ }
+
+ /* Handle end cases */
+ switch (rem) {
+ case 3: hash += get16bits (data);
+ hash ^= hash << 16;
+ hash ^= data[sizeof (uint16_t)] << 18;
+ hash += hash >> 11;
+ break;
+ case 2: hash += get16bits (data);
+ hash ^= hash << 11;
+ hash += hash >> 17;
+ break;
+ case 1: hash += *data;
+ hash ^= hash << 10;
+ hash += hash >> 1;
+ }
+
+ /* Force "avalanching" of final 127 bits */
+ hash ^= hash << 3;
+ hash += hash >> 5;
+ hash ^= hash << 4;
+ hash += hash >> 17;
+ hash ^= hash << 25;
+ hash += hash >> 6;
+
+ return hash;
+}
+
+/*
+ * This variant uses all bits from the input block, and is about 15% faster.
+ */
+int counts_SuperFastHash2[NSERV][NSERV];
+uint32_t SuperFastHash2 (const char * data, int len) {
+uint32_t hash = len, tmp;
+int rem;
+
+ if (len <= 0 || data == NULL) return 0;
+
+ rem = len & 3;
+ len >>= 2;
+
+ /* Main loop */
+ for (;len > 0; len--) {
+ register uint32_t next;
+ next = get16bits(data+2);
+ hash += get16bits(data);
+ tmp = ((next << 11) | (next >> 21)) ^ hash;
+ hash = (hash << 16) ^ tmp;
+ data += 2*sizeof (uint16_t);
+ hash += hash >> 11;
+ }
+
+ /* Handle end cases */
+ switch (rem) {
+ case 3: hash += get16bits (data);
+ hash ^= hash << 16;
+ hash ^= data[sizeof (uint16_t)] << 18;
+ hash += hash >> 11;
+ break;
+ case 2: hash += get16bits (data);
+ hash ^= hash << 11;
+ hash += hash >> 17;
+ break;
+ case 1: hash += *data;
+ hash ^= hash << 10;
+ hash += hash >> 1;
+ }
+
+ /* Force "avalanching" of final 127 bits */
+ hash ^= hash << 3;
+ hash += hash >> 5;
+ hash ^= hash << 4;
+ hash += hash >> 17;
+ hash ^= hash << 25;
+ hash += hash >> 6;
+
+ return hash;
+}
+
+/* len 4 for ipv4 and 16 for ipv6 */
+int counts_srv[NSERV][NSERV];
+unsigned int haproxy_server_hash(const char *addr, int len){
+ unsigned int h, l;
+ l = h = 0;
+
+ while ((l + sizeof (int)) <= len) {
+ h ^= ntohl(*(unsigned int *)(&addr[l]));
+ l += sizeof (int);
+ }
+ return h;
+}/* end haproxy_server_hash() */
+
+
+
+void count_hash_results(unsigned long hash, int counts[NSERV][NSERV]) {
+ int srv, nsrv;
+
+ for (nsrv = 0; nsrv < NSERV; nsrv++) {
+ srv = hash % (nsrv + 1);
+ counts[nsrv][srv]++;
+ }
+}
+
+void dump_hash_results(char *name, int counts[NSERV][NSERV]) {
+ int srv, nsrv;
+
+ printf("%s:\n", name);
+ for (nsrv = 0; nsrv < NSERV; nsrv++) {
+ printf("%02d srv: ", nsrv+1);
+ for (srv = 0; srv <= nsrv; srv++) {
+ //printf("%6d ", counts[nsrv][srv]);
+ //printf("%3.1f ", (100.0*counts[nsrv][srv]) / (double)counts[0][0]);
+ printf("%3.1f ", 100.0*(counts[nsrv][srv] - (double)counts[0][0]/(nsrv+1)) / (double)counts[0][0]);
+ }
+ printf("\n");
+ }
+ printf("\n");
+}
+
+int main() {
+ memset(counts_gd1, 0, sizeof(counts_gd1));
+ memset(counts_gd2, 0, sizeof(counts_gd2));
+ memset(counts_gd3, 0, sizeof(counts_gd3));
+ memset(counts_gd4, 0, sizeof(counts_gd4));
+ memset(counts_gd5, 0, sizeof(counts_gd5));
+ memset(counts_gd6, 0, sizeof(counts_gd6));
+ memset(counts_wt1, 0, sizeof(counts_wt1));
+ memset(counts_wt2, 0, sizeof(counts_wt2));
+ memset(counts_srv, 0, sizeof(counts_srv));
+ memset(counts_SuperFastHash, 0, sizeof(counts_SuperFastHash));
+ memset(counts_SuperFastHash2, 0, sizeof(counts_SuperFastHash2));
+
+ while (fgets(line, MAXLINE, stdin) != NULL) {
+ count_hash_results(hash_gd1(line), counts_gd1);
+ count_hash_results(hash_gd2(line), counts_gd2);
+ count_hash_results(hash_gd3(line), counts_gd3);
+ count_hash_results(hash_gd4(line), counts_gd4);
+ count_hash_results(hash_gd5(line), counts_gd5);
+ count_hash_results(hash_gd6(line), counts_gd6);
+ count_hash_results(hash_wt1(31, line), counts_wt1);
+ count_hash_results(hash_wt2(line, strlen(line)), counts_wt2);
+ count_hash_results(haproxy_server_hash(line, strlen(line)), counts_srv);
+ count_hash_results(SuperFastHash(line, strlen(line)), counts_SuperFastHash);
+ count_hash_results(SuperFastHash2(line, strlen(line)), counts_SuperFastHash2);
+ }
+
+ dump_hash_results("hash_gd1", counts_gd1);
+ dump_hash_results("hash_gd2", counts_gd2);
+ dump_hash_results("hash_gd3", counts_gd3);
+ dump_hash_results("hash_gd4", counts_gd4);
+ dump_hash_results("hash_gd5", counts_gd5);
+ dump_hash_results("hash_gd6", counts_gd6);
+ dump_hash_results("hash_wt1", counts_wt1);
+ dump_hash_results("hash_wt2", counts_wt2);
+ dump_hash_results("haproxy_server_hash", counts_srv);
+ dump_hash_results("SuperFastHash", counts_SuperFastHash);
+ dump_hash_results("SuperFastHash2", counts_SuperFastHash2);
+
+ return 0;
+}